Serverless computing promises to slash infrastructure costs with its “pay only for what you use” model. But many development teams are discovering the opposite: their cloud bills are higher than traditional server setups, sometimes by a factor of 10. The culprit? Hidden costs that vendors conveniently gloss over in their marketing.
The serverless pricing model isn’t as simple as “pay per request.” Sure, AWS Lambda charges $0.20 per million requests plus compute time. But that’s just the beginning. Add API Gateway at $3.50 per million requests, CloudWatch logs at $0.50 per GB, data transfer at $0.09 per GB, and suddenly your “cheap” serverless API costs more than running a dedicated server.
The Math Doesn’t Lie
Let’s run the numbers for a moderate-traffic API: 10 requests per second, 200ms average execution time. Over a month, that’s 26 million requests.
Lambda compute and request costs (assuming 512 MB of function memory): about $50. Not bad. But API Gateway adds $91, CloudWatch logs (at 10KB per request) add $130, and data transfer adds another $5-10. Total monthly bill: roughly $280. Compare that to a t3.medium EC2 instance at about $30 per month that handles the same load comfortably. The serverless approach costs nearly 10x more.
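The arithmetic above fits in a few lines. This is a sketch, not a billing tool: the prices are the list prices quoted in this article (check current pricing before relying on them), and 512 MB of function memory is an assumption that makes the compute line land near $50.

```python
# Monthly cost sketch for the scenario above: 10 req/s at 200 ms average.
# Prices are the US list prices quoted in the text; 512 MB memory is assumed.
REQS_PER_SEC = 10
requests = REQS_PER_SEC * 60 * 60 * 24 * 30       # ~25.9M requests/month

MEMORY_GB = 0.5                                    # 512 MB (assumption)
DURATION_S = 0.2                                   # 200 ms average

lambda_cost = (requests / 1e6 * 0.20               # $0.20 per 1M requests
               + requests * DURATION_S * MEMORY_GB * 0.0000166667)  # $/GB-s
api_gateway = requests / 1e6 * 3.50                # $3.50 per 1M requests
cloudwatch = requests * 10_000 / 1e9 * 0.50        # 10 KB/request at $0.50/GB
data_transfer = 7.5                                # midpoint of the $5-10 range

total = lambda_cost + api_gateway + cloudwatch + data_transfer
ec2_baseline = 30.0                                # t3.medium, on-demand

print(f"Serverless: ${total:,.0f}/mo vs EC2: ${ec2_baseline:,.0f}/mo "
      f"({total / ec2_baseline:.1f}x)")
```

Swap in your own request volume, duration, and memory size; the ratio is what matters, and it moves fast as log volume per request grows.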
This isn’t a hypothetical scenario. It’s the reality that drove 37signals to abandon AWS entirely. They were paying $3.2 million per year for cloud infrastructure. After investing $600K in on-premise hardware, they’re now spending around $500K annually—a $7 million savings over five years. DHH didn’t mince words: “The cloud is wildly expensive for established businesses with steady traffic.”
Even Amazon’s Teams Are Bailing
The irony deepened when Amazon Prime Video’s engineering team published a case study about migrating FROM Lambda TO ECS containers. Their result? A 90% cost reduction for their video monitoring service. Read that again: Amazon’s own engineers found Lambda too expensive and moved to a more traditional architecture.
When the company selling you serverless can’t justify using it internally, that tells you something important about where the economics actually land.
The Cold Start Tax Nobody Mentions
Then there’s the cold start problem. Lambda functions that haven’t run recently can take anywhere from a few hundred milliseconds to several seconds to initialize, depending on runtime and package size. Your users won’t tolerate that, so you enable provisioned concurrency to keep functions warm. At $0.015 per GB-hour, keeping just 10 one-GB instances warm costs about $108 per month—more than that EC2 instance would cost for your entire workload.
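As a quick sanity check on that figure (same $0.015 per GB-hour price from the text, with a 720-hour month assumed):

```python
# Cost of keeping 10 one-GB instances warm via provisioned concurrency,
# at the $0.015 per GB-hour price quoted above. 720 h = 30-day month.
PRICE_PER_GB_HOUR = 0.015
instances, gb_each, hours_per_month = 10, 1.0, 720

warm_cost = PRICE_PER_GB_HOUR * instances * gb_each * hours_per_month
print(f"Idle warm capacity: ${warm_cost:.0f}/month")
```

Note that this line item accrues whether or not a single request arrives—which is exactly the point the next paragraph makes.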
This defeats the entire “pay only when it runs” promise. You’re paying for idle capacity anyway, just in a more expensive, less flexible way.
When Serverless Actually Works
To be fair, serverless isn’t always a bad choice. It excels in specific scenarios: extremely spiky traffic patterns with 100x variance, very low-traffic applications under 100K requests per month, event-driven architectures processing S3 uploads or queue messages, and rapid prototyping where speed to market matters more than cost optimization.
The break-even point typically falls somewhere between 100K and 1M requests per month, depending on request duration and auxiliary service usage. Below that range, serverless is usually cheaper. Above 10 million requests monthly, traditional servers almost always win on cost. In between, you have to do the actual math for your workload.
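One way to do that math is to model per-request cost and walk up the volume axis until the serverless line crosses a fixed server price. This toy model reuses the prices from earlier in the article against a hypothetical $30/month instance; with a baseline that large the crossover lands above the typical range cited here, while a cheaper instance or free-tier credits pull it down, which is why the range is so wide. Treat it as a template, not an answer.

```python
# Hypothetical break-even finder: the monthly request volume at which a
# Lambda + API Gateway + CloudWatch stack matches a fixed-price server.
# Assumes 200 ms at 512 MB and 10 KB of logs per request, as earlier.
def serverless_monthly_cost(requests: float) -> float:
    per_million = requests / 1e6
    return (per_million * 0.20                        # Lambda requests
            + requests * 0.2 * 0.5 * 0.0000166667     # Lambda compute
            + per_million * 3.50                      # API Gateway
            + requests * 10_000 / 1e9 * 0.50)         # CloudWatch logs

SERVER_COST = 30.0  # fixed monthly cost of the baseline instance (assumed)

# Walk up in 100K-request steps until serverless overtakes the server.
volume = 0
while serverless_monthly_cost(volume) < SERVER_COST:
    volume += 100_000

print(f"Break-even near {volume:,} requests/month under these assumptions")
```

Drop `SERVER_COST` to $5 (a small VPS) and the crossover falls under 1M requests; raise the log volume per request and it falls further. The sensitivity to those auxiliary costs is the real lesson.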
The Hidden Cost: Engineering Time
The biggest hidden cost isn’t in the AWS pricing calculator—it’s the engineering time spent optimizing serverless deployments. Teams dedicate entire sprints to reducing CloudWatch verbosity, batching requests to minimize API Gateway calls, and architecting around cold start penalties. Cost optimization becomes a job in itself.
Meanwhile, running a few EC2 instances or a Kubernetes cluster requires minimal ongoing cost optimization. Set it up once, scale when needed, done. The operational simplicity that serverless promises evaporates when you’re trying to make the economics work.
Do the Math First
Serverless isn’t cheaper by default—it’s a trade-off between operational convenience and cost efficiency. For certain workloads, that trade-off makes sense. For others, especially APIs with steady traffic, the math is clear: traditional servers or containers cost less and perform better.
Before making the leap to serverless, calculate your costs honestly. Include API Gateway, load balancers, monitoring, logging, and data transfer. Model your actual traffic patterns, not best-case scenarios. And ask yourself: is the operational convenience worth 5-10x higher costs?
For many teams, the answer is no. That’s why you’re seeing a quiet migration back to containers and dedicated servers. The serverless hype cycle is over. What remains is a useful tool for specific use cases—not the universal solution it was marketed to be.