OpenUI rewrote their Rust WASM parser in TypeScript this week and achieved 2.2x-4.6x faster performance depending on the fixture. This isn’t measurement error or luck—it’s a systematic indictment of engineering dogma over profiling. The Rust community treats WASM as the default answer for web performance, but OpenUI proved that boundary overhead matters more than raw computation speed. Zero-cost abstractions stop being zero-cost the moment you cross a language boundary.
Boundary Overhead, Not Computation Speed
The performance bottleneck wasn’t parsing—it was data marshaling across the WASM boundary. Every parse required copying strings from JavaScript into WASM memory, parsing in Rust (fast), serializing results to JSON, copying the JSON string back to JavaScript, and deserializing with JSON.parse(). That sequential boundary-crossing pattern consumed most of the latency, not the actual parsing computation.
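The round trip can be sketched in plain TypeScript. This is a simulation of the pattern described above, not OpenUI’s actual bindings: the byte buffer stands in for WASM linear memory, and JSON.parse stands in for the Rust parser.

```typescript
// Simulated boundary round trip: every numbered step is a copy or a
// (de)serialization, and none of it is parsing work.
const encoder = new TextEncoder();
const decoder = new TextDecoder();

function parseViaSimulatedBoundary(source: string): unknown {
  // 1. Copy the JavaScript string into "WASM memory" (a byte buffer).
  const inputBytes = encoder.encode(source);
  // 2. Parse inside "Rust" (stand-in: JSON.parse on the decoded bytes).
  const parsed = JSON.parse(decoder.decode(inputBytes));
  // 3. Serialize the result back to a JSON string on the "Rust" side.
  const resultJson = JSON.stringify(parsed);
  // 4. Copy the JSON bytes back across the boundary to JavaScript.
  const resultBytes = encoder.encode(resultJson);
  // 5. Deserialize on the JavaScript side with JSON.parse().
  return JSON.parse(decoder.decode(resultBytes));
}
```

Steps 1, 3, 4, and 5 exist only to move data across the boundary; a pure-TypeScript parser skips all four.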
OpenUI tried avoiding JSON serialization by using serde-wasm-bindgen to return JsValue directly. Performance got 30% worse: the contact-form fixture went from 61.4µs to 79.4µs. Why? Converting Rust structs to JavaScript objects requires recursively materializing Rust data into real JavaScript arrays and objects, and each field crossing is its own trip across the runtime boundary. One large serialization beats many small crossings.
This is the core issue. WASM isn’t slow—the architecture was wrong for the workload. Streaming parsers call the function dozens of times per request, multiplying boundary overhead. “Rust is fast” doesn’t mean “WASM is fast for my use case.”
V8’s JSON.parse Beats Rust Across Boundaries
V8’s JSON.parse() is a native C++ implementation with SIMD optimizations, not “slow JavaScript.” It processes JSON strings in a single optimized pass, outperforming field-by-field object construction across FFI boundaries. Fewer, larger, more optimized operations beat many small ones.
Moreover, V8 v13.8 (Chrome 138) made JSON.stringify 2x faster with SIMD optimizations and fast-JSON-iterable flags. For large payloads over 10KB, JSON.parse() is 1.7x faster than object literals. The TypeScript rewrite eliminated boundary crossings entirely—parsing happened purely in the JavaScript runtime where V8’s JIT could optimize it.
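A minimal microbenchmark sketch of that comparison. The payload size, iteration count, and the use of new Function to stand in for an inline object literal are illustrative choices, not OpenUI’s methodology.

```typescript
// Build a payload comfortably over 10KB.
const payload = JSON.stringify(
  Array.from({ length: 2000 }, (_, i) => ({ id: i, name: `item-${i}` }))
);

// Time a parsing strategy over a fixed number of iterations.
function timeIt(label: string, fn: () => unknown): number {
  const start = performance.now();
  for (let i = 0; i < 50; i++) fn();
  const ms = performance.now() - start;
  console.log(`${label}: ${ms.toFixed(1)}ms`);
  return ms;
}

timeIt("JSON.parse", () => JSON.parse(payload));
// Evaluating the same text as a JavaScript object literal forces a full
// JS parse instead of V8's single-pass native JSON scanner.
timeIt("object literal", () => new Function(`return ${payload}`)());
```

The gap widens with payload size, since JSON is a strictly simpler grammar than JavaScript and V8 can scan it in one pass.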
Developers underestimate native JavaScript performance because they assume compiled code is always faster. But V8 benefits from decades of JavaScript-engine optimization work. For workloads like JSON parsing, string manipulation, and object creation, fighting the platform is more expensive than using it.
Compute-Bound vs Interop-Heavy: The Dividing Line
WASM excels at compute-bound workloads with minimal interop: image processing, cryptography, physics simulations. In contrast, it fails for interop-heavy workloads: parsing structured text into JavaScript objects, frequent DOM manipulation, many small function calls. The rule: measure boundary crossings, not just computation speed.
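One way to apply that rule is to instrument the boundary and count crossings per request before committing to WASM. A hypothetical helper, not OpenUI’s tooling:

```typescript
// Wrap any boundary function so each call is counted as one crossing.
function instrumentBoundary<A extends unknown[], R>(fn: (...args: A) => R) {
  let crossings = 0;
  return {
    call: (...args: A): R => {
      crossings++;
      return fn(...args);
    },
    crossings: () => crossings,
  };
}

// A streaming parser hits the boundary once per chunk, so even a small
// document can generate many crossings.
const boundary = instrumentBoundary((chunk: string) => chunk.length);
for (const chunk of ["<form>", "<input>", "</form>"]) boundary.call(chunk);
console.log(boundary.crossings()); // 3 crossings for a single tiny document
```

If the crossing count scales with input chunks rather than staying at one or two per request, the workload is interop-heavy and WASM is likely the wrong fit.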
Expert consensus from the 146-comment Hacker News discussion is clear. WASM wins when you have large input producing scalar output, bulk operations, or porting C/C++ libraries. WASM loses for streaming or chunked data, frequent small calls, and DOM-heavy apps. As OpenUI noted: “The parsing computation is fast enough that V8’s JIT eliminates any Rust advantage, and boundary overhead dominates.”
Don’t choose WASM by default. Choose it when boundary crossings are rare and computation is expensive; most web apps fail the first criterion.
Engineering Dogma Over Measurement
The broader problem isn’t technical—it’s cultural. “Rewrite it in Rust” has become cargo cult engineering. Developers choose WASM because “Rust is fast” without profiling, without measuring boundary costs, without considering architectural fit. OpenUI’s team used heavy instrumentation to test assumptions and found the real bottleneck: an O(N²) algorithm plus boundary overhead.
Furthermore, rewrites often appear to succeed because developers fix algorithmic issues during the rewrite, not because the new language is faster. OpenUI’s O(N²) to O(N) fix was independent of the language choice. One Hacker News commenter put it bluntly: “Pick Python and move fast, kids. It doesn’t matter how fast your software is if nobody uses it.”
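An illustrative before/after of that kind of algorithmic fix, in TypeScript (not OpenUI’s actual code): computing the line number for each token offset by rescanning the source every time is O(N²), while a single forward pass over sorted offsets is O(N).

```typescript
// O(N²): rescan the source from the start for every offset.
function lineNumbersQuadratic(source: string, offsets: number[]): number[] {
  return offsets.map((offset) => {
    let line = 1;
    for (let i = 0; i < offset; i++) {
      if (source[i] === "\n") line++;
    }
    return line;
  });
}

// O(N): one pass, assuming offsets are sorted ascending.
function lineNumbersLinear(source: string, offsets: number[]): number[] {
  const result: number[] = [];
  let line = 1;
  let i = 0;
  for (const offset of offsets) {
    for (; i < offset; i++) {
      if (source[i] === "\n") line++;
    }
    result.push(line);
  }
  return result;
}
```

Both return identical results; only the scan count changes, and the fix works the same in Rust or TypeScript.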
Rust’s “zero-cost abstractions” promise holds true within Rust: iterators compile down to the same machine code as hand-written loops, and closures add no overhead. But the promise breaks at FFI boundaries. The compiler can’t inline across them, each call behaves like an indirect function pointer, and data must be serialized and deserialized. Zero-cost becomes high-cost when languages don’t share a heap or object model.
Rust’s documentation echoes Stroustrup’s definition of zero-cost abstractions: “What you don’t use, you don’t pay for. What you do use, you couldn’t optimize better manually.” That’s accurate—within Rust. But FFI calls prevent inlining, introduce marshaling costs, and break optimization opportunities. Mozilla’s acknowledgment that WebAssembly remains a “second-class citizen” on the web is telling: memory-model mismatches aren’t solved by better tooling.
Key Takeaways
Boundary overhead often exceeds computational gains for interop-heavy workloads. OpenUI’s 2.2x-4.6x speedup came from eliminating those crossings, not from superior parsing algorithms.
V8’s JSON.parse() is highly optimized—don’t underestimate native platform tools. Decades of JavaScript-engine optimization mean “compiled is always faster” doesn’t hold once you account for boundary costs.
WASM excels at compute-bound tasks with minimal interop, not streaming parsers that cross boundaries frequently. Measure your specific use case before choosing WASM over TypeScript or JavaScript.
Zero-cost abstractions don’t cross language boundaries. Rust’s promises apply within Rust, not across FFI calls where marshaling and serialization dominate.
Profile first, optimize later. Boring tech that fits the platform often beats exotic tech fighting it.

