Racket v9.0 dropped on November 22, 2025, bringing native parallel threads to a language that has been around since the mid-1990s. This isn’t just another dot release with bug fixes and minor improvements. It’s a fundamental architectural breakthrough that challenges the persistent narrative that functional languages can’t compete on performance when multicore hardware enters the picture.
For three decades, Racket—and its PLT Scheme predecessor before the 2010 rename—offered green threads and clever workarounds like futures and places, but never true OS-level parallel execution. The language was born in an era when “computers with multiprocessors were not commonly available,” and that limitation got baked deep into the original implementation. Racket v9.0 finally fixes what couldn’t be patched.
What Actually Changed
The headline feature is straightforward: Racket now has shared-memory parallel threads that leverage multicore hardware. The API is elegantly minimal—just add a #:pool argument to the standard thread function:
(thread thunk #:pool 'own #:keep 'result)
That’s it. Passing #:pool 'own runs the thread in its own dedicated parallel pool, and #:keep 'result preserves the return value for later retrieval via thread-wait. No breaking changes to existing code, no massive API overhaul. The thread function still creates a coroutine thread by default, so your codebase keeps working while you add parallelism where it actually matters.
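As a minimal sketch of the new API, based on the call shown above (and assuming, per the release notes, that thread-wait hands back a kept result):

```racket
#lang racket

;; Spawn a parallel thread in its own pool. #:keep 'result tells
;; Racket to retain the thunk's return value.
(define t
  (thread (lambda () (expt 2 100))
          #:pool 'own
          #:keep 'result))

;; Because the result was kept, thread-wait blocks until the
;; thread finishes and then returns that value.
(thread-wait t)
```

Everything else about the thread is unchanged: omit #:pool and you get the same coroutine thread Racket has always created.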
This contrasts sharply with Racket’s previous parallelism options. Futures provided fine-grained parallelism but blocked on so many operations they were nearly useless for real-world code. Places offered message-passing concurrency with limited data sharing, forcing you to redesign your entire approach. Parallel threads just… work. They handle parameters, mutable hash tables, and other Racket constructs that futures choke on.
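To make the contrast concrete, here is a hedged sketch of a parallel thread using exactly the constructs the release highlights as future-killers, parameters and mutable hash tables (the verbosity parameter and the squares table are hypothetical names, not from the release):

```racket
#lang racket

;; A parameter and a mutable hash table: constructs that would
;; block a future but, per the v9.0 notes, run fine in a
;; parallel thread.
(define verbosity (make-parameter 'quiet))  ; hypothetical parameter

(define t
  (thread (lambda ()
            (parameterize ([verbosity 'loud])
              (define h (make-hash))  ; unshared mutable hash table
              (for ([i (in-range 1000)])
                (hash-set! h i (* i i)))
              (hash-ref h 999)))
          #:pool 'own
          #:keep 'result))

(thread-wait t)  ; 998001, assuming thread-wait returns the kept result
```

Wrap the same body in (future ...) and the parameterize and hash mutation would force it back onto the runtime thread; that is the practical difference.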
The Performance Reality Check
Benchmarks show 3.7 to 4.0× speedup on four cores for CPU-bound tasks like Fibonacci computations. Hash table operations on unshared data structures hit 4.5× speedup on eight cores. Those are real, usable gains for functional code doing serious computation.
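A rough sketch of the kind of CPU-bound workload behind those numbers, fanning naive Fibonacci calls across parallel threads (the fib definition and thread counts are illustrative, not the benchmark code itself):

```racket
#lang racket

;; Deliberately CPU-bound: naive doubly recursive Fibonacci.
(define (fib n)
  (if (< n 2)
      n
      (+ (fib (- n 1)) (fib (- n 2)))))

;; One parallel thread per computation; on a four-core machine
;; these four calls can run simultaneously instead of in sequence.
(define workers
  (for/list ([n (in-list '(32 32 32 32))])
    (thread (lambda () (fib n)) #:pool 'own #:keep 'result)))

;; Collect the four results, again assuming thread-wait returns
;; the kept value.
(map thread-wait workers)
```

With work this embarrassingly parallel, the 3.7 to 4.0× figure on four cores is about what you'd hope for: near-linear scaling minus coordination overhead.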
However, I/O operations only get 1.3 to 1.6× speedup because file I/O locks are still too coarse-grained, according to the technical deep dive on parallel threads. Directory traversal tasks barely parallelize. And if you’re not using parallelism at all, you’ll eat a 6 to 8% performance cost on fine-grained mutable operations just from the infrastructure supporting it.
This isn’t magic. It’s solid engineering with known limitations. The Racket team isn’t overselling it, which makes the achievement more credible. Computation parallelizes well. I/O needs more work. That’s the reality.
The Eight-Year Foundation Rebuild
None of this would be possible without the massive technical undertaking that started in 2017. Racket spent eight years rebuilding its entire runtime on Chez Scheme because the original bytecode implementation had the assumption of single-threaded execution baked deep into its architecture. You can’t just bolt true parallelism onto a system designed when single-core CPUs were the norm.
The transition required memory fences for weak memory-consistency platforms, a parallelized garbage collector, and fundamental changes to how Racket threads work internally. By August 2025, Racket was distributing only CS (Chez Scheme) builds, marking the completion of a foundation that makes parallel threads architecturally sound rather than a hack layered on top of incompatible infrastructure.
Moreover, the eight-year transition succeeded without breaking the package ecosystem. Developers didn’t wake up to mass breakage. The community didn’t fragment into “old Racket” and “new Racket” camps. The team pulled off a fundamental rewrite while maintaining stability. That’s the kind of engineering discipline that doesn’t get enough credit.
Functional Programming Isn’t Alone in This Fight
Python introduced an experimental free-threaded build in version 3.13 through PEP 703, removing the Global Interpreter Lock that’s hobbled parallel execution for decades. OCaml shipped multicore support in version 5.0 after years of work. Node.js didn’t ship stable worker threads until version 12. This is an industry-wide reckoning for languages designed before multicore hardware became standard.
Racket is a small, academically positioned language solving the exact same architectural challenges as Python, one of the most widely used languages in the world. That’s a David and Goliath parallel worth noting. The fact that a language with limited commercial adoption and a tiny team can make fundamental architectural leaps shows that size isn’t the only factor in language evolution. Sometimes focused commitment beats massive resources.
The Community Split
The Hacker News discussion on Racket v9.0 hit 288 points with 95 comments, and the reactions split predictably. Technical folks acknowledged the achievement—parallel threads are legitimately hard to implement correctly. However, skeptics questioned whether it matters when Racket’s adoption outside academia remains limited.
“The big news here is that Racket now can run threads in parallel,” one developer noted. “While there were ways to get parallelism before (like places), this is much more lightweight and familiar.” That’s the optimistic take: Racket just became viable for performance-critical production work in ways it never was before.
The pessimistic take: “Too little, too late.” If your ecosystem is small, your job market is tiny, and developers already chose Python or Clojure for parallel functional programming, does a technical achievement matter if adoption doesn’t follow? Racket’s academic positioning limits its commercial appeal, and parallel threads don’t automatically fix that.
Both perspectives have merit. Technical excellence doesn’t guarantee market success. Nevertheless, languages that stagnate die, and languages that evolve at least have a shot.
What This Means for Functional Programming
Racket v9.0 proves that functional languages can achieve true parallelism without sacrificing the elegance that makes them valuable for certain problem domains. The #:pool argument is minimally invasive. The backward compatibility is clean. The performance gains are real for CPU-bound workloads.
This matters beyond Racket. It’s evidence that mature languages can make fundamental architectural changes when necessary. It’s a counterexample to the claim that functional programming inherently sacrifices performance. Furthermore, it’s a case study in how to execute a major rewrite without destroying an existing ecosystem.
Racket v9.0 is available now for download. Whether it translates to broader adoption remains an open question. But for developers working in functional paradigms who need multicore performance, the landscape just shifted. Thirty years is a long time to wait for true parallelism. The question now is whether developers will care enough to use it.