Amazon invested $5 billion in Anthropic this week, but buried in the announcement is the real story: Anthropic committed to spend $100 billion on AWS over the next decade. This isn’t venture capital; it’s vendor financing disguised as investment. Amazon effectively advances $5 billion and gets $100 billion back, plus equity in a company valued at $380 billion. And here’s what nobody is saying: if industry cloud waste rates hold, roughly $30 billion of that commitment will evaporate on idle GPUs and overprovisioned instances. The deal exposes how structurally wasteful AI development economics are.
The “Circular Financing” Pattern
Amazon calls this an investment. Let’s do the math. Amazon invests $5 billion immediately (with up to $25 billion total tied to milestones). Anthropic commits to spend $100 billion on AWS infrastructure over 10 years. Amazon gets guaranteed revenue of $100 billion plus equity in Anthropic. That’s not investing—that’s financing with an equity kicker.
TechCrunch called it a “circular AI deal,” and they’re right. Cloud providers are financing AI companies that immediately commit to spending on their infrastructure. Microsoft did it with OpenAI. Google is doing it with others. The pattern is clear: Cloud vendors can’t lose. Even if the AI company fails, the infrastructure commitment guarantees revenue.
For both parties, it’s rational. Anthropic gets $5 billion in cash now and secures 5 gigawatts of compute capacity on Amazon’s Trainium chips. Amazon locks in $100 billion of revenue over a decade and gets equity upside if Claude succeeds. But make no mistake: Anthropic is now structurally locked into AWS. One organization’s documented switching cost from AWS to a competitor was $8.5 million; at Anthropic’s scale, a migration would run into the hundreds of millions, if it’s feasible at all.
AI’s $30 Billion Waste Problem
Here’s the uncomfortable truth: industry data shows roughly 27% of cloud spending is wasted globally, more than $100 billion annually. The primary culprits are idle compute resources (35% of the waste) and overprovisioned instances (25%). Apply a round 30% waste rate to Anthropic’s $100 billion commitment and you get $30 billion wasted over the next decade. For AI workloads, that’s likely conservative.
AI workloads make the waste problem worse, not better. Production AI systems run at under 50% GPU utilization even under active load, with training workloads often hitting just 30-50%. A GPU running at 20% utilization is wasting 80% of its cost. The root cause is architectural mismatch: AI workloads alternate between CPU preprocessing, GPU compute, and CPU postprocessing. But GPUs get allocated for the entire lifecycle, sitting idle during non-GPU phases.
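That utilization math is worth making concrete. A back-of-envelope sketch in Python, where the hourly rate is a made-up placeholder, not an actual AWS price:

```python
# Back-of-envelope cost of low GPU utilization. The hourly rate is a
# hypothetical placeholder, not a real AWS price.
def effective_cost_per_useful_hour(hourly_rate, utilization):
    """What one hour of actual GPU work costs when the rest of the hour is idle."""
    return hourly_rate / utilization

rate = 40.0  # assumed $/hour for a large GPU instance
for util in (1.0, 0.5, 0.2):
    cost = effective_cost_per_useful_hour(rate, util)
    print(f"{util:.0%} utilization -> ${cost:.2f} per useful GPU-hour")
```

At 20% utilization, every useful GPU-hour costs five times the sticker price, which is the same fact as “wasting 80% of its cost,” seen from the other side.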
Training large language models compounds the problem. Claude training happens episodically—massive compute demand for days or weeks during a training run, then idle capacity until the next version. Organizations must provision for peak demand but pay for average utilization. That gap is where $30 billion disappears.
Even with optimization, the waste is structural. FinOps teams report achieving 30-40% cost efficiency improvements through automation and continuous monitoring. But that brings waste from 30% down to maybe 20%. At Anthropic’s scale, that’s still $20 billion evaporating.
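The projection is simple enough to write down. A sketch of the article’s arithmetic, using the 30% baseline and the ~20% post-optimization rate cited above:

```python
# The waste math: commitment times waste rate, before and after
# the FinOps-style improvements described above.
COMMITMENT_B = 100.0  # Anthropic's AWS commitment, in billions

def projected_waste(commitment_b, waste_rate):
    """Billions wasted over the decade at a given waste rate."""
    return commitment_b * waste_rate

print(f"at 30% waste: ${projected_waste(COMMITMENT_B, 0.30):.0f}B")
print(f"at 20% waste: ${projected_waste(COMMITMENT_B, 0.20):.0f}B")
```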
The Vendor Lock-In Calculation
Anthropic’s lock-in is strategic. They negotiated from strength—securing favorable terms, guaranteed capacity, and custom Trainium chip access. They know exactly what they’re trading. But most organizations don’t.
Vendor lock-in happens gradually, then suddenly. Startups drift into AWS, adopt proprietary services like DynamoDB or Lambda, and then discover that switching would cost millions once a competitor offers better pricing. The lock-in was never a deliberate choice; by the time you realize it’s happening, it already has.
Multi-cloud sounds like the solution. 89% of enterprises use multi-cloud strategies, with 42% citing vendor lock-in prevention as the primary reason. But it’s not free. Multi-cloud architectures cost 15-25% more in salary premiums due to skill fragmentation—you need separate teams for AWS, Azure, and GCP. The operational complexity adds overhead. You trade lock-in risk for higher operational costs.
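The salary premium scales with headcount. A toy calculation with the 15-25% range cited above, where the team size and base salary are hypothetical:

```python
# Multi-cloud salary premium at the 15-25% range cited above.
# Headcount and base salary are hypothetical.
engineers = 10
base_salary = 180_000  # assumed average engineer salary

single_cloud_cost = engineers * base_salary
for premium in (0.15, 0.25):
    extra = single_cloud_cost * premium
    print(f"{premium:.0%} premium -> ${extra:,.0f} extra per year")
```

For a ten-person platform team, that is several hundred thousand dollars a year spent purely on keeping the lock-in option open.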
The lesson isn’t “avoid lock-in.” It’s “make it intentional.” Anthropic locked in deliberately because the terms favor them. Your startup probably isn’t negotiating from that position.
What Developers Should Learn
FinOps adoption exploded: 98% of organizations now manage AI costs, up from just 31% in 2024. It’s the fastest-growing discipline in cloud engineering. But FinOps doesn’t fix the economics—it just makes the waste visible and reduces it incrementally.
Developers used to optimize for performance and uptime. In 2026, cost per request is equally critical. Cloud cost literacy is now a core engineering skill. That means understanding total cost of ownership before choosing a database, planning for switching costs before committing to a platform, and designing for portability even if you never use it.
The circular AI deals will keep coming. Cloud providers have figured out how to de-risk AI investments by turning capital into guaranteed infrastructure revenue. Anthropic’s $100 billion commitment sets a precedent. Other AI companies will face pressure to accept similar terms. The cloud oligopoly—AWS, Azure, GCP—consolidates further.
Anthropic knows what they’re doing. They’re locked in, but they negotiated from strength. The risk is smaller organizations following the pattern without understanding the trade-offs. Lock-in isn’t evil. Structural waste isn’t surprising. But both should be intentional.
The real question is whether AI economics are sustainable. $100 billion committed, $30 billion wasted, all to train models that might be obsolete in two years. That’s not a cloud problem. That’s an industry problem.