Industry Analysis

Nvidia’s $20B Groq Deal: Licensing vs Acquisition

Nvidia announced a $20 billion deal for Groq’s assets on December 24, 2025—its largest deal ever, nearly 3x bigger than the 2019 Mellanox purchase. But here’s the twist: CNBC reports it as an “acquisition,” while Nvidia calls it a “non-exclusive licensing agreement.” Semantics matter when $20 billion is at stake. This isn’t just corporate wordplay—it’s potentially a strategy to eliminate a genuine competitive threat while sidestepping antitrust review. Groq, founded by ex-Google TPU inventor Jonathan Ross, built Language Processing Units (LPUs) that delivered up to 18x faster AI inference than GPU-based cloud providers.

Deal Structure: When “Licensing” Walks Like an Acquisition

Call it what you want, but when you buy all assets and hire the CEO, it walks like an acquisition. CNBC’s exclusive states Nvidia is “buying Groq’s assets for $20 billion.” Nvidia’s CEO Jensen Huang told employees otherwise: “While we are adding talented employees to our ranks and licensing Groq’s IP, we are not acquiring Groq as a company.” Groq officially described it as a “non-exclusive licensing agreement” without disclosing the price.

So what did Nvidia actually get? All of Groq’s assets, including the LPU intellectual property. Jonathan Ross (CEO and founder) plus the entire senior leadership team joining Nvidia. The LPU inference technology that posed a competitive threat. What Groq supposedly keeps: independent company status, with CFO Simon Edwards stepping up as CEO, and the Groq cloud business (though how long that lasts without the tech team is questionable).

This structure mirrors Microsoft’s Inflection deal from March 2024. Microsoft paid $650 million to “license” Inflection’s AI technology, hired CEO Mustafa Suleyman and most of the staff, while Inflection “continued independently” (until it didn’t). The FTC launched an investigation asking whether this qualifies as an “informal acquisition” designed to circumvent HSR merger filing requirements. Licensing agreements may avoid mandatory antitrust review if no formal control transfer occurs—even when $20 billion changes hands.

Why does Nvidia care? Their 2019 Mellanox acquisition ($6.9B) faced a year-long regulatory review and is now under Chinese investigation for allegedly violating approval conditions. That ordeal taught Nvidia a lesson: large acquisitions attract regulatory scrutiny. Their attempted $40 billion ARM acquisition (2020-2022) failed after opposition from U.S., UK, EU, and Chinese regulators. By labeling this as “licensing,” Nvidia potentially sidesteps months of antitrust review while functionally acquiring Groq’s competitive threat.

Groq’s LPU Technology Was the Real Deal

Groq wasn’t just another overhyped AI chip startup. Their Language Processing Units delivered measurable, embarrassing-for-Nvidia performance advantages in AI inference. Benchmarks showed Llama 3 running at 877 tokens per second (8B model) on Groq’s LPU versus competitors limping along at ~80 tokens/sec. That’s 11x faster, not marketing spin. For the Llama 3 70B model, Groq served 284 tokens/sec. Mixtral 8x7B hit 480 tokens/sec.
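Those benchmark claims reduce to simple throughput ratios. A quick sanity check of the 11x figure, using the numbers as quoted above (with ~80 tokens/sec as the GPU-cloud baseline for the 8B class):

```python
# Throughput figures as quoted in the article (tokens/sec on Groq's LPU),
# with ~80 tok/s taken as the GPU-cloud baseline for the 8B model class.
lpu_throughput = {
    "Llama 3 8B": 877,
    "Llama 3 70B": 284,
    "Mixtral 8x7B": 480,
}
gpu_baseline_8b = 80

speedup_8b = lpu_throughput["Llama 3 8B"] / gpu_baseline_8b
print(f"Llama 3 8B: {speedup_8b:.0f}x faster")  # 877 / 80 ≈ 11x
```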

The architectural advantage wasn’t incremental—it was foundational. LPUs use hundreds of megabytes of on-chip SRAM as primary weight storage (not cache), delivering 80 TB/s memory bandwidth. GPUs rely on external HBM memory, creating bottlenecks. Groq’s deterministic execution model eliminates the scheduling overhead that makes GPU performance unpredictable. The result: up to 18x faster inference than top cloud providers, according to Anyscale’s LLMPerf Leaderboard.
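A crude roofline estimate shows why on-chip bandwidth matters so much: in autoregressive decode, each generated token streams the full weight set through memory, so per-stream throughput is bounded by bandwidth divided by model size. The 3.35 TB/s HBM figure below is an assumed H100-class number, not from the article; the point is the ratio, not the absolute tokens/sec:

```python
# Roofline-style upper bound for single-stream decode throughput:
# every token reads all weights once, so tok/s <= bandwidth / model_bytes.
# Bandwidth and precision figures are illustrative assumptions.

def decode_tokens_per_sec(bandwidth_tb_s: float, params_b: float,
                          bytes_per_param: int = 2) -> float:
    """Upper bound on decode tokens/sec for a memory-bound model (fp16 by default)."""
    model_bytes = params_b * 1e9 * bytes_per_param
    return bandwidth_tb_s * 1e12 / model_bytes

lpu_bound = decode_tokens_per_sec(80.0, 8)   # 80 TB/s on-chip SRAM, Llama 3 8B
gpu_bound = decode_tokens_per_sec(3.35, 8)   # ~3.35 TB/s HBM (assumed H100-class)
print(f"LPU bound: {lpu_bound:,.0f} tok/s, "
      f"GPU bound: {gpu_bound:,.0f} tok/s, "
      f"ratio: {lpu_bound / gpu_bound:.1f}x")
```

Real systems fall well short of these bounds (compute, interconnect, and batching all intervene), but the bandwidth gap is the first-order explanation for the measured gap.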

This matters because the AI chip market is bifurcating. Nvidia dominates training with 90%+ market share—CUDA lock-in makes switching prohibitively expensive for companies building foundation models. But the inference market was fragmenting. Custom ASICs and specialized chips are projected to capture 45% of inference market share by 2030, up from 37% in 2024. Groq’s LPU led this charge with 2 million developers using their cloud service, up from 356,000 last year. That’s 5.6x growth, and Nvidia noticed.

The $20B Question: Technology or Threat Elimination?

Groq raised $750 million at a $6.9 billion valuation just three months ago in September 2025. Nvidia is paying $20 billion. That’s a 2.9x premium over Groq’s recent funding round led by BlackRock, Samsung, and Cisco. Either Groq’s technology is worth $13+ billion more than top-tier investors thought 90 days ago, or Nvidia is paying a strategic premium to eliminate competition.

The math favors threat elimination. Nvidia’s quarterly data center revenue hit $51.2 billion in Q3 2025 with 73.6% gross margins. Groq’s LPU technology threatened to capture inference market share, potentially shaving billions from Nvidia’s revenue as inference workloads grow faster than training. Pay $20 billion now to protect $200+ billion in annual revenue, or let Groq grow into a legitimate competitor? The strategic calculus is clear.
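That calculus can be made concrete with a toy break-even sketch. Using the figures above, ask how many “point-years” of data center revenue share (one percentage point lost for one year) Nvidia would need to avoid losing for the $20 billion to pay for itself. The share-loss framing is an illustrative assumption, not anything from Nvidia:

```python
# Toy break-even sketch for the "pay $20B to protect revenue" argument.
# Revenue and margin figures come from the article; the share-loss
# framing is an illustrative assumption, not Nvidia guidance.
annualized_dc_revenue = 51.2e9 * 4   # Q3 2025 data center revenue, annualized
gross_margin = 0.736
deal_cost = 20e9

# Gross profit at stake for each 1% of that revenue lost for one year:
profit_per_point_year = annualized_dc_revenue * 0.01 * gross_margin

# Point-years of avoided share loss needed for the deal to break even:
breakeven_point_years = deal_cost / profit_per_point_year

print(f"Gross profit per 1% of revenue per year: ${profit_per_point_year / 1e9:.2f}B")
print(f"Break-even: ~{breakeven_point_years:.0f} point-years of avoided share loss")
```

On those assumptions, avoiding roughly a dozen point-years of share erosion covers the purchase price, which is modest against a fast-growing inference market.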

Investors made out like bandits if the full $20 billion went to equity (unlikely, but illustrative). Total funding raised: $1.75 billion over 6 rounds. Exit value: $20 billion. That’s an 11.4x return. Series E investors who bought in at $6.9B valuation three months ago just made 2.9x in one quarter. BlackRock, Samsung, Neuberger Berman, and Cisco bet correctly on Nvidia’s fear of competition.
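The round-trip math in the two paragraphs above, checked directly (figures as cited; the aggregate multiple assumes, unrealistically, that the full $20 billion went to equity holders):

```python
# Back-of-envelope deal math using the figures cited above.
deal_value = 20e9          # reported Nvidia payment
last_valuation = 6.9e9     # September 2025 round valuation
total_raised = 1.75e9      # total funding across 6 rounds

premium = deal_value / last_valuation            # premium over the last round
aggregate_multiple = deal_value / total_raised   # only if all $20B went to equity

print(f"Premium over September round: {premium:.1f}x")
print(f"Aggregate return multiple: {aggregate_multiple:.1f}x")
```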

Market Consolidation: Developer Choice Shrinks

Nvidia controls 90-95% of the AI accelerator market, depending on which analyst you ask. AMD struggles to crack 10% despite the MI355X being 4x faster than its predecessor. Intel’s Gaudi 3 claims 1.5x H100 performance, but Bank of America projects Intel will capture less than 1% market share in 2025. The software ecosystem gap—specifically CUDA’s decade of developer lock-in—makes hardware advantages nearly irrelevant.

Custom ASICs from hyperscalers (Google TPU, AWS Trainium, Microsoft Maia) are growing, but they’re not for sale. You must use Google Cloud to access TPUs and AWS to use Trainium. These aren’t competitive alternatives for developers—they’re hyperscaler-exclusive infrastructure.

Groq was different. Their cloud API was publicly accessible. Any developer could sign up and use LPU inference without vendor lock-in to a specific cloud provider. With Groq absorbed into Nvidia, that independent option evaporates. The market consolidates: Nvidia GPUs for training and inference, or hyperscaler-exclusive custom chips you can’t buy. Developer choice didn’t just shrink—it collapsed to two bad options.

Meanwhile, Jonathan Ross completes a remarkable circle. He invented Google’s Tensor Processing Unit as a “20% project” in 2013, deploying 100,000+ units by 2017. Left Google in 2016 to found Groq with the explicit mission to challenge Nvidia and “eliminate artificial scarcity in AI compute.” Nine years later, he’s joining Nvidia with his team. Whether that means LPU technology gets integrated into Nvidia’s future products or quietly shelved to protect GPU margins remains to be seen.

Key Takeaways

  • Semantic gymnastics: Nvidia’s $20B “licensing agreement” for Groq walks, talks, and quacks like an acquisition—buying all assets, hiring CEO + team, while Groq “continues independently” mirrors the Microsoft-Inflection structure now under FTC investigation
  • Real competitive threat: Groq’s LPU technology delivered up to 18x faster AI inference (877 tokens/sec vs. ~80 for cloud GPUs on Llama 3 8B) and 5.6x YoY developer growth, posing a genuine competitive threat to Nvidia’s inference dominance
  • Strategic premium: The 2.9x premium over Groq’s $6.9B September 2025 valuation ($20B vs. $6.9B three months later) suggests strategic value (eliminating competition) exceeds technology value
  • Market consolidation accelerates: Nvidia’s 90%+ chip share grows stronger as the only independent inference competitor disappears, leaving developers with Nvidia GPUs or hyperscaler-exclusive custom ASICs
  • Antitrust bypass potential: Semantics may help Nvidia bypass HSR filing requirements and antitrust review, but the FTC’s Microsoft-Inflection investigation shows regulators are watching “informal acquisition” patterns
  • Jonathan Ross’s journey: Google TPU inventor → Groq founder competing with Nvidia → joining Nvidia signals either genuine technology integration or the end of independent LPU development