Nvidia just made its biggest acquisition ever, paying $20 billion for AI chip startup Groq’s technology. Here’s the twist: it’s technically a “non-exclusive licensing deal,” not an acquisition. Industry analysts aren’t buying it. Bernstein’s Stacy Rasgon says the structure is designed to keep “the fiction of competition alive”: legal engineering to dodge antitrust regulators while Nvidia eliminates a competitor whose chips were beating its GPUs on inference speed by 10x.
The “Licensing” Deal That Looks Like an Acquisition
On December 24, Nvidia announced a non-exclusive licensing agreement for Groq’s Language Processing Unit (LPU) inference technology. Peel back the corporate language, though, and you’ll find a classic acqui-hire. Groq CEO Jonathan Ross and President Sunny Madra are joining Nvidia. So are 90% of Groq’s employees, who will be paid out in cash and Nvidia stock. Groq continues as a “standalone company” under new leadership, with 10% of its original staff.
Can a company with 10% of its workforce meaningfully compete? That’s the fiction Rasgon called out. The deal follows the Microsoft-Inflection and Amazon-Adept playbook: structure the transaction to avoid traditional M&A antitrust review while absorbing the talent and technology. Regulators in the UK and US are catching on, but acqui-hires still fly under the radar compared to standard acquisitions.
Why Groq Was a Real Threat to Nvidia’s AI Dominance
Nvidia didn’t spend $20 billion, three times its previous record acquisition, on a talent play. It spent it because Groq was an existential threat to its inference dominance. The startup’s LPU chips delivered 300 to 1,660 tokens per second running Llama 3 70B, while Nvidia’s H100 GPUs managed 60-100 tokens per second. Groq’s deterministic, assembly-line architecture broke the “memory wall” that limits GPU inference performance, achieving response times of 0.2-0.3 seconds versus Nvidia’s 0.5-1.0 seconds.
Developers noticed. Over one million of them joined GroqCloud, drawn by speeds 7x faster and costs 89% lower than alternatives. When Groq went viral, it fielded 3,000 requests for API access and one million prompts in 48 hours. For real-time chatbots, transcription, and interactive AI agents, Groq wasn’t just competitive; it was demonstrably superior.
Perfect Timing for a Land Grab
The timing of this deal is no coincidence. In 2026, AI inference spending surpassed training spending for the first time: $20.6 billion versus $16.8 billion, according to market analysts. Nvidia dominates training with 70-95% market share, but Groq proved that specialized inference chips could outperform general-purpose GPUs by an order of magnitude. With inference workloads climbing toward two-thirds of all AI compute, Nvidia couldn’t afford to let a 10x-faster alternative thrive independently.
Jonathan Ross, Groq’s founder, has a track record of building world-changing AI chips. He created Google’s Tensor Processing Unit (TPU) as a 20% project in his twenties; it now powers over half of Google’s infrastructure. He left to build the LPU specifically to challenge incumbents. Now he’s joining Nvidia, the incumbent he was outperforming.
Antitrust Red Flags Nvidia Is Ignoring
Nvidia is already under intense regulatory scrutiny. The US Department of Justice has issued subpoenas investigating whether Nvidia penalizes customers who don’t exclusively use its GPUs. Additionally, France raided Nvidia’s offices in 2023 on suspicion of price fixing and unfair contracts. China ruled that Nvidia violated anti-monopoly laws in its Mellanox acquisition. Meanwhile, the UK, EU, and South Korea have all launched inquiries.
DOJ Antitrust Division Chief Gail Slater has stated that enforcement must focus on “preventing exclusionary conduct over the resources needed to build competitive AI systems.” Eliminating a proven 10x-faster alternative should be a textbook case, and the acqui-hire structure, designed to stay below reporting thresholds and avoid traditional M&A review, looks like exactly the kind of conduct regulators are targeting.
What Developers Stand to Lose
One million developers now face uncertainty. Will Nvidia maintain GroqCloud’s API, or sunset it like so many acquired services? Groq offered 89% cost savings and near-instant response times—will those advantages survive, or will Nvidia price GroqCloud to push users toward its own ecosystem? The LPU’s innovative architecture could improve Nvidia’s chips, but it could just as easily be shelved to protect GPU sales.
This deal eliminates choice. Developers building AI applications already face limited alternatives to Nvidia’s CUDA ecosystem for training. Now inference options are narrowing too. With Groq absorbed, what’s left? AMD, Cerebras, and SambaNova are all significantly smaller players. If you want cutting-edge AI performance, you’re increasingly locked into one vendor.
Is Competition Already Over?
Groq proved that purpose-built inference chips could beat GPUs decisively: 10x faster, one-tenth the energy, at a fraction of the cost. Yet even that overwhelming technical advantage couldn’t sustain independence against Nvidia’s market power and $60 billion cash pile. If a 10x performance edge isn’t enough to compete, what would be?
Regulators have a narrow window to act. If this “licensing” deal stands unchallenged, it sets a precedent: dominant players can neutralize superior competitors through clever legal structures that bypass antitrust review. For developers, the question is simple. Do we want one company controlling both training and inference, or should there be actual competition in AI infrastructure?
The fiction of competition is easy to spot. Seeing regulators call it out would be the real surprise.