Enterprises will waste $44.5 billion on cloud infrastructure in 2025. That’s not a typo, and it’s not because developers are careless. It’s because the system is fundamentally broken.
According to Harness’s FinOps in Focus 2025 report, 21% of enterprise cloud spend—$44.5 billion—goes straight into the void. Underutilized resources. Idle instances. Orphaned containers. The waste is staggering, and it’s getting worse.
Here’s the real kicker: 49% of organizations estimate they waste over 25% of their cloud budget. Nearly a third believe the waste exceeds 50%. And when you zoom into containers, the numbers are even more damning—over 80% of container spend is wasted on idle resources.
This isn’t a developer problem. It’s a systemic failure.
The Visibility Black Hole
The reason for this catastrophic waste isn’t developer incompetence—it’s that developers are flying blind.
Only 43% of developers have real-time data on idle cloud resources. Just 39% can see unused or orphaned resources. A mere 33% have visibility into over-provisioned or under-provisioned workloads. The result? 55% of purchasing commitments are based on pure guesswork.
Think about that for a second. More than half of cloud infrastructure decisions—decisions worth millions of dollars—are made without data. It’s like asking someone to optimize a black box.
And it’s not just the data that’s missing. Cloud pricing is so byzantine that 50% of organizations struggle to understand it. When 54% of waste stems from lack of visibility, you can’t blame developers for the mess. They literally can’t see the problem.
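The frustrating part is that the raw signals exist; they're just never surfaced to the people making decisions. As a minimal sketch of what that visibility could look like—assuming AWS with boto3, and noting that the 5% CPU threshold and 14-day lookback are arbitrary illustrative choices, not a standard—something like this is all it takes to flag likely-idle instances:

```python
# Minimal sketch: surface EC2 instances that look idle, using boto3.
# Assumes AWS credentials are configured; the 5% CPU threshold and
# 14-day lookback are arbitrary illustrative choices.
from datetime import datetime, timedelta, timezone

import boto3

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")

def average_cpu(instance_id: str, days: int = 14) -> float:
    """Average CPUUtilization for an instance over the lookback window."""
    end = datetime.now(timezone.utc)
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        StartTime=end - timedelta(days=days),
        EndTime=end,
        Period=3600,  # hourly datapoints
        Statistics=["Average"],
    )
    points = stats["Datapoints"]
    return sum(p["Average"] for p in points) / len(points) if points else 0.0

for reservation in ec2.describe_instances()["Reservations"]:
    for instance in reservation["Instances"]:
        if instance["State"]["Name"] != "running":
            continue
        cpu = average_cpu(instance["InstanceId"])
        if cpu < 5.0:  # likely idle
            print(f"{instance['InstanceId']}: avg CPU {cpu:.1f}%, candidate for shutdown")
```

A script this small doesn't solve cost attribution, but it makes the point: the gap is plumbing and priorities, not missing technology.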
The FinOps Disconnect
Here’s where it gets frustrating. 59% of organizations now have FinOps teams—up from 51% the previous year. The FinOps Foundation’s 2025 State of FinOps Report surveyed organizations responsible for $69 billion in cloud spend. FinOps is no longer niche. It’s mainstream.
But it’s not working.
52% of engineering leaders cite the disconnect between FinOps teams and developers as the root cause of wasted spend. FinOps teams have the cost data. Developers have the engineering context. Neither can see the full picture, so both fail.
And here’s the twist: developers aren’t avoiding responsibility. 62% want more control over cloud costs. They’re not lazy or indifferent—they’re locked out. When 71% of developers don’t use spot orchestration, 61% don’t rightsize instances, 58% don’t use reserved instances, and 48% don’t shut down idle resources, it’s not because they don’t care. It’s because the tools, workflows, and culture aren’t there to support them.
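Take the simplest item on that list, shutting down idle resources. A minimal sketch of an off-hours shutdown job might look like the following—assuming a hypothetical tagging convention (env=dev) and that it runs on a schedule such as cron or EventBridge:

```python
# Minimal sketch: stop non-production instances outside working hours.
# Assumes a hypothetical env=dev tagging convention; in practice this
# would run on a schedule (cron, EventBridge) rather than by hand.
import boto3

ec2 = boto3.client("ec2")

response = ec2.describe_instances(
    Filters=[
        {"Name": "tag:env", "Values": ["dev"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)

dev_ids = [
    instance["InstanceId"]
    for reservation in response["Reservations"]
    for instance in reservation["Instances"]
]

if dev_ids:
    ec2.stop_instances(InstanceIds=dev_ids)
    print(f"Stopped {len(dev_ids)} dev instances for the night")
```

That this kind of automation is twenty lines of code, yet 48% of developers don't shut down idle resources, tells you the blocker is organizational, not technical.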
Kubernetes: A Case Study in Broken Incentives
Want to see this dysfunction in action? Look at Kubernetes.
The average Kubernetes cluster runs at 13-25% CPU utilization and 18-35% memory utilization. Overprovisioning is typically 2-5x actual resource needs, leading to annual waste of $50,000 to $500,000 per cluster.
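To make that math concrete, here is a back-of-the-envelope estimate using illustrative numbers inside the ranges above—the cluster size and hourly rate are assumptions, not figures from the report:

```python
# Back-of-the-envelope waste estimate for an over-provisioned cluster.
# All inputs are illustrative assumptions within the cited ranges.
requested_cpu_cores = 400        # sum of pod CPU requests across the cluster
avg_utilization = 0.20           # 20% average CPU utilization
cost_per_core_hour = 0.04        # assumed blended on-demand rate (USD)

used_cores = requested_cpu_cores * avg_utilization
wasted_cores = requested_cpu_cores - used_cores
annual_waste = wasted_cores * cost_per_core_hour * 24 * 365

print(f"Idle capacity: {wasted_cores:.0f} of {requested_cpu_cores} cores")
print(f"Annual waste: ${annual_waste:,.0f}")  # ~$112,128 with these inputs
```

With these assumed inputs, a single mid-sized cluster burns roughly $112,000 a year on capacity nobody uses—squarely inside the report's range.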
Why? Because developers set huge resource buffers. They’d rather waste cloud spend than face the wrath of the OOM Killer. It’s a rational decision—an out-of-memory crash is visible, immediate, and career-damaging. Wasted cloud spend is invisible, delayed, and someone else’s problem.
These small “just-in-case” decisions compound across hundreds of pods, creating massive, system-wide waste. And because developers don’t have real-time cost feedback, they have no idea they’re doing it.
As one report put it: “When 40% of resources are untagged, it’s rarely just a tooling issue—it’s a culture issue.”
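Like idle detection, tag hygiene is trivially checkable once someone decides to check. A minimal sketch of a tag audit—where the required keys ("owner", "cost-center") are a hypothetical policy, not a standard—could look like this:

```python
# Minimal sketch: flag EC2 instances missing cost-attribution tags.
# The required keys ("owner", "cost-center") are a hypothetical policy.
import boto3

REQUIRED_TAGS = {"owner", "cost-center"}

ec2 = boto3.client("ec2")

for reservation in ec2.describe_instances()["Reservations"]:
    for instance in reservation["Instances"]:
        tags = {t["Key"] for t in instance.get("Tags", [])}
        missing = REQUIRED_TAGS - tags
        if missing:
            print(f"{instance['InstanceId']} missing tags: {', '.join(sorted(missing))}")
```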
What Actually Needs to Happen
The solution isn’t to blame developers. It’s to fix the system.
First, tools need to meet developers where they are. Cost optimization can’t be an afterthought bolted onto monthly finance reviews—it needs to be embedded in CI/CD pipelines, integrated into IDEs, and surfaced in real-time dashboards. Platforms like Vantage, CloudZero, Harness CCM, and Sedai are starting to do this, but adoption is still too low.
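What "embedded in CI/CD" can mean in practice: a pipeline step that fails a build when the estimated monthly cost delta crosses a budget. The sketch below assumes an upstream cost-estimation step has already written a cost-estimate.json file (tools like Infracost produce output in this spirit); the file name, JSON field, and threshold are all assumptions, not any specific tool's schema:

```python
# Minimal sketch of a CI cost gate: fail the build if the estimated
# monthly cost delta exceeds a budget. The cost-estimate.json file and
# its monthly_cost_delta field are assumed conventions for an upstream
# cost-estimation step, not a real tool's schema.
import json
import sys

THRESHOLD_USD = 500.0  # arbitrary per-change monthly budget

with open("cost-estimate.json") as f:
    estimate = json.load(f)

delta = float(estimate["monthly_cost_delta"])  # assumed field name

if delta > THRESHOLD_USD:
    print(f"Cost gate FAILED: +${delta:,.2f}/month exceeds ${THRESHOLD_USD:,.2f} budget")
    sys.exit(1)

print(f"Cost gate passed: +${delta:,.2f}/month")
```

The point isn't the specific tool. It's that the feedback arrives before merge, not a month later in a finance review.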
Second, organizations need to stop treating cost as someone else’s problem. The FinOps Foundation’s 2025 framework emphasizes automation, shared accountability, and near real-time cost intelligence. That means central FinOps teams handle broad optimization (negotiating discounts, selecting cheapest regions), while distributed engineering teams own the cost of their services—with the tools and data to actually manage it.
Third, make cost a first-class metric. Right now, developers optimize for performance and reliability because those metrics are visible and tracked. Cost needs the same treatment. If you can see latency in your observability platform, you should be able to see cost impact too.
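One way to do that is to export cost through the same metrics pipeline as everything else. Here is a sketch using the Prometheus Python client; the flat per-request cost figure is a made-up placeholder standing in for a real attribution model:

```python
# Minimal sketch: expose cost alongside latency in the same metrics
# pipeline, using the Prometheus Python client. The flat per-request
# cost figure is a made-up placeholder, not a real attribution model.
from prometheus_client import Counter, Histogram, start_http_server

REQUEST_LATENCY = Histogram("request_latency_seconds", "Request latency", ["service"])
REQUEST_COST = Counter("request_cost_usd_total", "Estimated spend", ["service"])

EST_COST_PER_REQUEST = 0.00004  # placeholder: blended infra cost per request

def handle_request(service: str, latency_s: float) -> None:
    """Record latency and estimated cost for one request."""
    REQUEST_LATENCY.labels(service=service).observe(latency_s)
    REQUEST_COST.labels(service=service).inc(EST_COST_PER_REQUEST)

if __name__ == "__main__":
    start_http_server(8000)  # metrics scraped from :8000/metrics
    handle_request("checkout", 0.12)
```

Once cost lives next to latency on the same dashboard, it gets the same reflexive attention.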
The good news? Organizations that get this right can reduce waste by 20-30% while freeing up capital for innovation. And 86% of developers believe AI can help optimize costs in the next year, suggesting a real appetite for better solutions.
Stop Blaming Developers
The $44.5 billion cloud waste crisis isn’t a failure of individual developers—it’s a failure of platforms, tools, and culture.
Developers want to optimize costs. They just can’t do it with guesswork, zero visibility, and tools that actively fight against them. FinOps teams are growing, but they’re disconnected from the engineering workflows where decisions actually get made.
Until organizations fix the system—by embedding cost intelligence into developer tools, creating real-time feedback loops, and making cost a shared, visible metric—the waste will continue. The solution isn’t more accountability. It’s better infrastructure.