Anthropic just secured 3.5 gigawatts of compute power from Google and Broadcom—enough to match Oregon’s entire data center infrastructure. The deal, worth $21 billion in 2026 and $42 billion in 2027, comes as Anthropic’s revenue exploded from $9 billion to over $30 billion in just four months. The company now has 1,000+ enterprise customers each spending at least $1 million annually, a figure that doubled in under two months.
Anthropic needs the compute to compete with OpenAI’s $600 billion Stargate initiative. Google wants to challenge NVIDIA’s GPU dominance with its Tensor Processing Units (TPUs), which train AI models 2.7x faster than comparable NVIDIA hardware on benchmarks like ResNet-50. Broadcom is making billions as the middleman, facilitating custom silicon deals and pulling in $46 billion in AI revenue this year alone—a 106% jump from 2025.
According to Krishna Rao, Anthropic’s CFO, the partnership represents a “disciplined approach to scaling infrastructure” that will support Claude’s frontier models while meeting escalating customer demand. Broadcom CEO Hock Tan said the company is “off to a very good start in 2026” with Anthropic, delivering 1 gigawatt immediately and scaling to 3.5 gigawatts starting in 2027.
To put 3.5 gigawatts in perspective: Northern Virginia, the world’s largest data center market, has 3.4 gigawatts of total capacity. Anthropic’s infrastructure alone will consume roughly 4.6% of all US data center power. This isn’t a rounding error—it’s a regional grid’s worth of electricity dedicated to training and running Claude.
Anthropic’s business is growing even faster than its infrastructure. Revenue jumped 233% in four months, from $9 billion in December 2025 to over $30 billion in April 2026. Eight of the Fortune 10 now use Claude, and the number of enterprise customers paying $1 million or more annually has doubled to over 1,000 in less than two months.
The infrastructure expansion should translate to better API availability—fewer “capacity exceeded” errors—and potentially faster inference with distributed compute. For enterprises evaluating Claude, the 3.5-gigawatt commitment signals Anthropic isn’t going anywhere. The company is projecting positive cash flow by 2027, suggesting sustainable unit economics rather than VC-subsidized pricing.
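Until that capacity actually comes online, API consumers still need to handle overload responses gracefully. Below is a minimal retry-with-jittered-exponential-backoff sketch; `CapacityError` is a stand-in for whatever "capacity exceeded" exception your client library raises, and `flaky` is a made-up endpoint for illustration, not a real API call:

```python
import random
import time

class CapacityError(Exception):
    """Stand-in for a provider's 'capacity exceeded' / 429-style error."""

def with_backoff(fn, max_retries=5, base_delay=1.0, max_delay=30.0):
    """Call fn(), retrying on CapacityError with jittered exponential backoff."""
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except CapacityError:
            if attempt == max_retries:
                raise  # out of retries; surface the error to the caller
            # Full jitter: sleep somewhere in [0, min(max_delay, base * 2^attempt)]
            delay = random.uniform(0, min(max_delay, base_delay * 2 ** attempt))
            time.sleep(delay)

# Usage with a fake flaky endpoint that succeeds on the third call:
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise CapacityError("overloaded")
    return "ok"

print(with_backoff(flaky, base_delay=0.01))  # prints "ok" after two retries
```

The jitter matters: if every client retries on the same fixed schedule, a capacity-constrained endpoint gets hammered in synchronized waves.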
But there’s a catch: Claude is already substantially more expensive than OpenAI. Claude Opus 4.6 costs $5 per million input tokens and $25 per million output tokens, compared to GPT-5.4’s $2.50 and $15: double the input price and 67% more on output. With $63 billion in infrastructure costs over the next two years, pricing pressure is real. Whether Anthropic passes those costs on to customers or absorbs them through TPU efficiency gains remains to be seen.
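At the listed rates, the gap is easy to quantify per request. A quick sketch (the per-million-token prices come from the figures above; the 4,000-in / 1,000-out workload mix is an assumption for illustration, and the model names follow the article):

```python
# Per-request cost comparison at the quoted per-million-token rates.
# Rates are from the article; the workload mix is a hypothetical example.
PRICES = {
    "claude-opus-4.6": {"input": 5.00, "output": 25.00},  # $ per 1M tokens
    "gpt-5.4": {"input": 2.50, "output": 15.00},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request at the listed rates."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Hypothetical request: 4,000 input tokens, 1,000 output tokens.
claude = request_cost("claude-opus-4.6", 4_000, 1_000)
gpt = request_cost("gpt-5.4", 4_000, 1_000)
print(f"Claude: ${claude:.4f}  GPT: ${gpt:.4f}  ratio: {claude / gpt:.2f}x")
# → Claude: $0.0450  GPT: $0.0250  ratio: 1.80x
```

At this input-heavy mix Claude works out to 1.8x the cost per request; the exact multiple shifts with the input/output ratio of your workload.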
Here’s what nobody wants to talk about: 3.5 gigawatts is an enormous amount of power. US data centers already consume 4% of the nation’s electricity, a figure projected to hit 7-12% by 2028 as AI workloads scale. Goldman Sachs warned in February that data center electricity demand will boost core inflation by 0.1 percentage points in both 2026 and 2027.
A typical 100-megawatt data center uses 300,000 gallons of water daily for cooling. Scale that to 3.5 gigawatts and you’re looking at roughly 10.5 million gallons per day. Google and Meta have both seen carbon emissions surge despite earlier reduction efforts, and data centers increase surface temperatures in surrounding areas by an average of 3.6 degrees Fahrenheit.
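The estimate above is a straight linear scale-up. A sketch of the arithmetic (the 300,000 gallons/day per 100 MW baseline is the figure cited above; real cooling-water intensity varies widely with climate and cooling design):

```python
# Linear scale-up of the cited cooling-water baseline:
# ~300,000 gallons/day per 100 MW of data center capacity.
GALLONS_PER_DAY_PER_MW = 300_000 / 100  # 3,000 gal/day per MW

def daily_water_gallons(capacity_mw: float) -> float:
    """Estimated daily cooling water use, assuming linear scaling."""
    return capacity_mw * GALLONS_PER_DAY_PER_MW

print(f"{daily_water_gallons(3_500):,.0f} gallons/day")  # 3.5 GW
# → 10,500,000 gallons/day
```

Linear scaling is the simplifying assumption here; large campuses using air-side economization or closed-loop cooling can come in well under this per-megawatt baseline.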
Anthropic’s $63 billion commitment over two years pales in comparison to OpenAI’s $600 billion Stargate initiative targeting 2030. But the strategies differ: Anthropic partners with cloud providers for capital efficiency while OpenAI aims to own the infrastructure layer through an exclusive Microsoft Azure deal worth $250 billion.
This deal is also the biggest validation yet for Google’s TPU strategy against NVIDIA’s GPU monopoly. TPUs offer better performance per watt and lower costs for large-scale AI training, though NVIDIA’s CUDA ecosystem still dominates developer mindshare. NVIDIA’s Blackwell chip launch could shift the balance, but for now, Anthropic’s bet on Google silicon is a major endorsement.
The message is clear: frontier AI requires gigawatt-scale infrastructure and billion-dollar deals. The days of scrappy AI startups are over. Only companies with massive capital—or partnerships with those who have it—can compete at this level.

