Groq

Semiconductor Manufacturing

Mountain View, California 167,668 followers

Groq is fast, low-cost inference. The Groq LPU delivers inference at the speed and cost developers need.

About us

Groq is the AI inference platform delivering low-cost, high performance without compromise. Its custom LPU and cloud infrastructure run today’s most powerful open AI models instantly and reliably. Over 2 million developers use Groq to build fast and scale with confidence.

Industry
Semiconductor Manufacturing
Company size
201-500 employees
Headquarters
Mountain View, California
Type
Privately Held
Founded
2016
Specialties
ai, ml, artificial intelligence, machine learning, engineering, hiring, compute, innovation, semiconductor, llm, large language model, gen ai, systems solution, generative ai, inference, LPU, and Language Processing Unit

Locations

Employees at Groq

Updates

  • Groq

    January 2023. The world lost its mind over ChatGPT. But Alex Cui and Edward Tian saw something everyone missed: if machines can write like us, how do we know what’s real?

    Over winter break, Alex and Edward built a tiny demo to find out. Within 48 hours, it blew up. Millions piled in; the traffic was so heavy it took down Streamlit. Classrooms, newsrooms, and publishers all wanted to know what was real again. Today, their tool is used by over 10 million people to help keep writing human. This is the story of how Alex Cui and Edward Tian built GPTZero with no funding and no plan, just curiosity.

    Alex knew a thing or two about machine learning. He studied computer science at Caltech, worked at Facebook and Uber, and joined a self-driving truck startup during his PhD. But he was always preoccupied with the impact tech had on society. In 2017, he built a misinformation detector that got national attention on NPR, Fox News, and Capitol Hill. So when ChatGPT dropped, he and his high school friend Edward didn’t see a toy like everyone else. They saw a turning point: a flood of machine-written words no one would be able to trace.

    Over winter break, Edward decided to build something that could. He started coding. No roadmap. No budget. Just an idea. He built a simple model that looked for the little quirks in writing that give AI away. He called it GPTZero. He threw it on a basic server, shared it with Alex, sent the link around, and went to bed.

    By morning, it had completely blown up. Reporters were covering it. Teachers were testing it. Students were frantically checking their essays. After traffic brought the site down, they knew the world needed it. Alex and Edward kept it running by hand: paying for servers out of pocket, fixing bugs between classes, adding features long after midnight. By early 2023, they turned it into a company and raised a seed round.
    ✅ Turned a viral demo into a real company
    ✅ Reached 3,000+ schools and millions of users
    ✅ Set out to build the world’s most trusted AI detector

    But with that growth came a new challenge: keeping up. Running GPTZero was still expensive, and the servers were hanging on by a thread. At one point, everything ran on a single server handling over a million visits a day. The models worked, but they couldn’t keep up in real time.

    Then Alex and Edward tried Groq. They set up a system using models like Llama 3.1 8B to handle feedback, fact-checking, and credibility checks in parallel. Suddenly, GPTZero became faster and cheaper to run.

    Today, GPTZero spots writing from GPT-5 with about 98 percent accuracy. It can flag AI “humanizers” and even tell when a human draft has been lightly polished by AI. More than 10 million people and 3,000 institutions now use it to keep writing real and honest.

    Alex and Edward didn’t build GPTZero to stop AI. They built it to keep humans honest with it. When the world gets faster, truth needs to move just as fast.
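The parallel-check setup described above can be sketched roughly like this: a minimal fan-out pattern in Python, with hypothetical stub functions standing in for GPTZero's actual Groq API calls (the function names and return values are illustrative assumptions, not from GPTZero's codebase).

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for the feedback, fact-checking, and
# credibility checks the post describes. A real system would call a
# fast model such as Llama 3.1 8B on Groq inside each function;
# these stubs only illustrate the parallel fan-out.
def feedback_check(text: str) -> dict:
    return {"check": "feedback", "notes": f"{len(text.split())} words reviewed"}

def fact_check(text: str) -> dict:
    return {"check": "facts", "claims_flagged": 0}

def credibility_check(text: str) -> dict:
    return {"check": "credibility", "score": 0.98}

def run_checks(text: str) -> list[dict]:
    """Run every check concurrently, so total latency is roughly the
    slowest single check rather than the sum of all three."""
    checks = [feedback_check, fact_check, credibility_check]
    with ThreadPoolExecutor(max_workers=len(checks)) as pool:
        futures = [pool.submit(check, text) for check in checks]
        return [f.result() for f in futures]

results = run_checks("Is this paragraph human-written?")
```

The design point is that each check is independent, so issuing them concurrently against a low-latency inference backend turns three sequential round trips into one.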

  • Groq reposted this

    Benjamin Klieger

    Building @ Groq | Researching @ Stanford | Prev. 2x Startup Founder

    I am hiring a full-time software engineer for my team at Groq to focus on building fast AI coding agents and supporting our internal use of AI tools!

    Groq is home to world-class engineers with a unique diversity across the stack: hardware, compiler, distributed systems, software, application layer, and more. You would work with these engineers to support their use of AI tools (Codex, Claude Code, Cursor, etc.) and build new AI development tools powered by Groq.

    Compound (our research agent) is successful not only because it is fast, but because that speed lets it do more work, and thus produce higher-quality results than alternatives. We believe a similar story can be true for coding. Help us prove that.

    Comment with a GitHub link to a coding tool you built, forked, or made significant contributions to, and I will send you an application link and personally fast-track your application.

  • Groq reposted this

    Aarush Sah

    Head of Evals @ Groq

    🚀 OPENBENCH 0.5.0 IS HERE

    This is our biggest release yet and a huge step forward for open evaluation infrastructure. We’ve added:

    - 350+ new evals across BigBench, BBH, Global-MMLU, GLUE, BLiMP, AGIEval, and more
    - ARC-AGI 1 & 2 (in partnership with the ARC Prize Foundation), testing true fluid intelligence in models
    - A plugin system for external benchmarks: register your own without forking
    - Provider routing with fallbacks and ordering for multi-provider setups
    - Coding harnesses that you can mix and match
    - Tool-calling evals via LiveMCPBench

    We’ve also introduced Exercism, a comprehensive coding benchmark that evaluates AI code agents on real-world programming exercises across multiple languages and frameworks. It tests not just code generation, but the full coding workflow: reading specifications, navigating file systems, implementing solutions, and passing test suites.

    This release brings openbench closer to being a unified evaluation platform, one that makes it easier for anyone to measure capability, reliability, and reasoning in modern AI systems.

    A huge thank you to the ARC Prize Foundation, Gregory Kamradt, the OpenRouter team, and everyone contributing datasets, plugins, and ideas.

    openbench 0.5.0 -> measure what matters, with clarity and speed.

    #AI #Evaluation #openbench #Groq #benchmarks #evals


Funding

Groq: 8 total rounds
Last round: Series E, US$750.0M
See more info on Crunchbase