Cloudflare just proposed ditching JavaScript’s Web Streams API for an alternative that runs 2× to 120× faster. The company published benchmarks showing their approach—built on async iterables instead of the current spec—delivers massive performance gains across Node.js, Deno, Bun, and browsers. This isn’t incremental optimization. It’s a fundamental redesign that challenges a web standard adopted by every major JavaScript runtime.
The proposal, currently trending on Hacker News, arrives after years of Cloudflare building production systems on Web Streams. Their conclusion: the API has fundamental performance problems that no amount of optimization can fix.
Why Web Streams Are Slow
The Web Streams API, standardized between 2014 and 2016, has three core performance problems that compound through real-world applications.
Promise Overhead Kills Throughput
The specification requires creating a promise for every single chunk read from a stream. In transform pipelines—where data passes through multiple processing stages—this overhead multiplies catastrophically. Vercel’s Malte Ubl documented the impact: Node.js’s traditional pipeline() with passthrough transforms hits 7,900 MB/s. The equivalent Web Streams code? 630 MB/s. That’s a 12.5× slowdown from promise overhead alone.
Cloudflare’s benchmarks show the effect scales with pipeline complexity. A single transform might be 3× slower. Chain three transforms together, and you’re looking at 80-90× degradation. In server-side rendering workloads—think React streaming or Next.js—garbage collection from short-lived promise objects can consume over 50% of CPU time.
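The pipeline shape behind those numbers looks like this (a minimal sketch, not Cloudflare’s benchmark code): each `pipeThrough()` stage is a `TransformStream`, and every chunk crossing every stage costs promise allocations under the hood.

```javascript
// Three chained passthrough transforms - the shape that degrades 80-90x.
// Each chunk crosses three stage boundaries, each allocating promises.
const passthrough = () =>
  new TransformStream({
    transform(chunk, controller) { controller.enqueue(chunk); },
  });

const source = new ReadableStream({
  start(controller) {
    for (let i = 0; i < 4; i++) controller.enqueue(new Uint8Array([i]));
    controller.close();
  },
});

const chained = source
  .pipeThrough(passthrough())
  .pipeThrough(passthrough())
  .pipeThrough(passthrough());
```

The transforms do no work at all; any slowdown measured on a chain like this is pure API overhead.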
Lock Complexity Creates Operational Nightmares
Web Streams require manual reader acquisition and lock management. To read from a stream, you must call getReader(), manage the reader’s lifecycle, and remember to call releaseLock() when finished. Forget that final step? The stream locks permanently.
```javascript
// Current Web Streams - ceremony and brittleness
const reader = stream.getReader();
try {
  while (true) {
    const { value, done } = await reader.read();
    if (done) break;
    process(value);
  }
} finally {
  reader.releaseLock(); // Forget this? Stream locked forever
}
```
Cloudflare’s assessment: “The complexity here is pure API overhead, not fundamental necessity.” This isn’t theoretical. Unconsumed response bodies in production applications permanently lock streams, exhausting connection pools and requiring service restarts.
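The failure mode is trivial to reproduce. A minimal sketch: acquire a reader, never release it, and the stream is locked for good.

```javascript
// Acquire a reader and "forget" to release it.
const stream = new ReadableStream({
  start(controller) {
    controller.enqueue('data');
    controller.close();
  },
});
const reader = stream.getReader();

console.log(stream.locked); // true - the stream is now locked
// A second consumer cannot attach: stream.getReader() throws a TypeError.
// Without reader.releaseLock(), the lock persists for the stream's lifetime.
```

This is exactly the shape of the unconsumed-response-body bug: an HTTP body is a ReadableStream, and a handler that grabs a reader on an error path and returns early leaves it locked.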
Backpressure Doesn’t Work
Backpressure—the mechanism preventing fast producers from overwhelming slow consumers—is advisory in Web Streams. The API provides a desiredSize signal, but nothing enforces it. Producers can ignore the signal entirely, leading to unbounded memory growth.
The tee() operation, which splits a stream for multiple consumers, explicitly breaks backpressure guarantees. When consumers read at different speeds, tee() creates unbounded internal buffers. The result: memory leaks and out-of-memory crashes in production, exactly what backpressure should prevent.
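The advisory nature of desiredSize is easy to demonstrate (a minimal sketch): ask the stream to hold at most one chunk, enqueue a thousand anyway, and nothing pushes back.

```javascript
// A producer that ignores desiredSize entirely - nothing stops it.
let observedDesiredSize;
const stream = new ReadableStream(
  {
    start(controller) {
      for (let i = 0; i < 1000; i++) {
        // desiredSize went negative long ago, but enqueue() still succeeds:
        // backpressure is advisory, and the internal queue grows unbounded.
        controller.enqueue(new Uint8Array(16));
      }
      observedDesiredSize = controller.desiredSize;
      controller.close();
    },
  },
  { highWaterMark: 1 } // "please hold at most 1 chunk" - routinely ignored
);

console.log(observedDesiredSize); // -999: 999 chunks past the high-water mark
```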
The Alternative Approach
Cloudflare’s solution centers on a deceptively simple idea: use language primitives that didn’t exist when Web Streams was designed.
Async Iterables Replace Custom APIs
ES2018 introduced async iteration—the for await...of syntax—two years after Web Streams standardization. Cloudflare’s alternative treats streams as AsyncIterable<Uint8Array[]>, consumed like any async sequence:
```javascript
// Alternative API - clean and simple
const output = Stream.pull(source, compress, encrypt);
for await (const chunks of output) {
  for (const chunk of chunks) {
    process(chunk);
  }
}
// No locks. No readers. Just iteration.
```
No reader acquisition. No lock management. No releaseLock() to forget. JavaScript engines already optimize this pattern. The API complexity collapses into a language primitive.
Pull-Through Semantics Fix Backpressure
Instead of pushing data eagerly through transform pipelines, the alternative pulls data on-demand. When a consumer stops reading, processing stops automatically. No advisory signals. No ignored warnings. Just natural backpressure from control flow.
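Async generators give this behavior for free. In the sketch below (assumed shapes for illustration, not the proposal’s actual API), an infinite source produces exactly as many items as the consumer pulls:

```javascript
// An infinite source - but nothing is produced until the consumer asks.
const stats = { produced: 0 };

async function* source() {
  while (true) {
    stats.produced += 1;
    yield stats.produced;
  }
}

// A pull-through transform: it only pulls when pulled from.
async function* double(input) {
  for await (const value of input) yield value * 2;
}

const results = [];
for await (const value of double(source())) {
  results.push(value);
  if (results.length === 3) break; // stop pulling - production stops too
}

console.log(results, stats.produced); // [2, 4, 6] 3
```

The consumer took three values, so the source produced exactly three. No signal was sent; the backpressure fell out of ordinary control flow.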
For scenarios requiring explicit buffering policies, the API offers four strategies: strict (reject writes when full), block (wait for space), drop-oldest, and drop-newest. Each makes consequences explicit. No more guessing whether backpressure signals are being honored.
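The policies are easiest to see on a bounded queue. This is a hypothetical sketch whose names mirror the descriptions above, not the proposal’s actual API; “block” is omitted because it would await free space rather than return synchronously.

```javascript
// Hypothetical sketch of three of the four policies on a bounded queue.
function boundedPush(queue, item, capacity, policy) {
  if (queue.length < capacity) {
    queue.push(item);
    return true;
  }
  switch (policy) {
    case 'strict':      throw new Error('queue full');   // reject the write
    case 'drop-oldest': queue.shift(); queue.push(item); return true;
    case 'drop-newest': return false;                    // discard the write
  }
}

const q = [1, 2];
boundedPush(q, 3, 2, 'drop-oldest'); // q is now [2, 3]
```

Each policy makes the overflow consequence visible at the call site, which is the point: nothing is silently buffered.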
Batched Chunks Amortize Overhead
Where Web Streams yield individual chunks (creating a promise per chunk), the alternative yields arrays of chunks. This batching amortizes async overhead across multiple items, dramatically reducing promise allocation costs. It’s the difference between 1,000 promises for 1,000 individual chunks versus 100 promises for 100 batches of 10 chunks.
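The idea can be sketched as a small adapter (a hypothetical helper, not the proposal’s API): group individual chunks into arrays so one await covers many items.

```javascript
// Hypothetical batching adapter: one awaited yield per batch of chunks.
async function* batched(source, batchSize = 10) {
  let batch = [];
  for await (const chunk of source) {
    batch.push(chunk);
    if (batch.length >= batchSize) {
      yield batch;
      batch = [];
    }
  }
  if (batch.length > 0) yield batch; // flush the remainder
}

async function* chunks(n) {
  for (let i = 0; i < n; i++) yield new Uint8Array(100);
}

let batches = 0;
for await (const batch of batched(chunks(1000), 10)) batches += 1;

console.log(batches); // 100 awaited batches instead of 1,000 awaited chunks
```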
Performance Results
Cloudflare tested their reference implementation—unoptimized, just design-level improvements—across runtimes. The gains are dramatic:
| Scenario (Node.js v24) | Alternative | Web Streams | Improvement |
|---|---|---|---|
| 3× transform chain | 275 GB/s | 3 GB/s | 90× faster |
| Async iteration | 530 GB/s | 35 GB/s | 15× faster |
| Tiny chunks (100B) | 4 GB/s | 450 MB/s | 8× faster |
Browser results show similar patterns. In Chrome, async iteration jumps from 10,000 operations per second with Web Streams to 1.1 million ops/s with the alternative—a more than 100× improvement. Push operations run 5-8× faster. Transform chains maintain 4-5× advantages.
The key insight: these gains come purely from API design, with zero optimization effort. Promise elimination, no lock tracking, and JavaScript engine optimizations for language primitives combine for order-of-magnitude improvements.
Who This Matters For
Edge Computing Takes the Biggest Hit
Cloudflare Workers, Vercel Edge Functions, and similar platforms run short-lived, high-throughput requests where milliseconds matter. Current Web Streams turn promise overhead and GC pressure into real cost at scale. The alternative API could reshape edge computing economics.
Server-Side Rendering Suffers Quietly
React streaming and Next.js workloads can spend 50%+ of CPU time on garbage collection when using Web Streams for transform pipelines. Developers attribute slowness to “SSR complexity” without realizing the streaming layer itself is the bottleneck.
File Processing Services Hit Memory Limits
Video transcoding, image transformation, and log processing services chain multiple stream transforms. Each stage in current Web Streams accumulates buffers and promises. The alternative’s pull-through semantics process only what’s needed, when it’s needed.
The Adoption Reality
Here’s the uncomfortable truth: Web Streams is a WHATWG standard, fully implemented across all modern browsers, Node.js, Deno, and Bun. Changing it requires browser vendor consensus, years of standardization work, and ecosystem migration.
Cloudflare frames this as starting a conversation, not demanding replacement: “Whether this exact API is the right answer is less important than whether it sparks productive discussion about what we actually need.”
The reference implementation on GitHub demonstrates feasibility. Async iterables can consume existing Web Streams, providing a migration path. The performance gains are verifiable and reproducible.
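That migration path already half-exists in the platform: ReadableStream exposes Symbol.asyncIterator in Node.js and Deno (and in some browsers), so async-iterable code can consume a standard Web Stream directly. A minimal sketch:

```javascript
// A standard ReadableStream consumed with for await - no reader, no lock.
const stream = new ReadableStream({
  start(controller) {
    controller.enqueue('hello');
    controller.enqueue('world');
    controller.close();
  },
});

const received = [];
for await (const chunk of stream) received.push(chunk);

console.log(received); // ['hello', 'world']
```

Iteration releases the underlying lock when the loop ends or throws, so existing streams can feed new-style pipelines without the ceremony.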
But standards have inertia. Libraries, documentation, and training materials all target Web Streams. Even if the alternative proves superior, adoption faces a multi-year standardization timeline.
What Developers Should Know
Use Web Streams today. It’s the standard, it works, and it’s supported everywhere. But watch this space.
If you’re building edge computing platforms, hitting GC issues with stream transforms, or processing high-throughput data pipelines, the alternative’s approach offers a clear path forward. The reference implementation is mature enough for experimentation.
The bigger lesson: standards can improve even post-adoption. ES2018’s async iterables provide better primitives for streaming than what existed in 2014-2016. Cloudflare’s proposal shows how revisiting designs with modern language features can deliver order-of-magnitude improvements.
Whether JavaScript gets a better streams API remains uncertain. That JavaScript deserves one is now harder to dispute.