Python 3.15’s JIT compiler hit its performance goals ahead of schedule in March 2026, achieving 11-12% speedups on macOS and 5-6% on Linux after years of disappointment. The turnaround is remarkable. The Faster CPython team lost Microsoft funding in 2025, the 3.13 and 3.14 JIT versions were often slower than the interpreter, and the Python Software Foundation paused grant funding due to budget shortfalls. Despite all this, the community-led team delivered meaningful performance gains over a year early. Now developers can enable JIT compilation and see real performance improvements without switching to PyPy or rewriting code.
How to Enable Python 3.15 JIT
Enabling the JIT compiler takes one environment variable: set PYTHON_JIT=1 before launching your Python 3.15 script.
export PYTHON_JIT=1
python3.15 my_script.py
To verify the JIT is running, check the sys._jit module:
import sys
print(sys._jit.is_enabled()) # Returns True if JIT is active
The JIT is experimental in Python 3.15 alpha and disabled by default. Once enabled, it compiles hot code paths to machine code at runtime, eliminating the repeated interpretation overhead of Python's bytecode execution.
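Because sys._jit is a private module that only exists on recent CPython builds, it's worth probing it defensively so the same script runs everywhere. A minimal sketch (the is_available/is_enabled/is_active names exist in recent CPython builds, but since the module is private they may change):

```python
import sys

# sys._jit is private and only present on recent CPython builds;
# probe it with getattr so older interpreters don't raise AttributeError.
jit = getattr(sys, "_jit", None)

if jit is None:
    print("No JIT support in this build")
else:
    # is_available(): interpreter was built with JIT support
    # is_enabled():   the JIT is turned on (e.g. via PYTHON_JIT=1)
    print("available:", jit.is_available())
    print("enabled:  ", jit.is_enabled())
```

This pattern lets a benchmark script report its JIT status alongside its timings, so results from different machines are comparable.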
Benchmarking JIT Performance
The only way to know whether the JIT helps your code is to measure it: run your script with and without PYTHON_JIT enabled and compare execution times.
import time

def cpu_bound_loop(n):
    total = 0
    for i in range(n):
        total += i * i
    return total

start = time.perf_counter()  # perf_counter has higher resolution than time.time
result = cpu_bound_loop(10_000_000)
end = time.perf_counter()
print(f"Time: {end - start:.3f}s")
Test this code with PYTHON_JIT=0 (disabled) and PYTHON_JIT=1 (enabled); CPU-bound loops like this often see speedups of 20% or more. For more comprehensive testing, use the pyperformance benchmark suite, which reports the geometric mean across diverse workloads. If the performance difference isn't obvious, increase the iteration count to amplify it.
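A single wall-clock measurement is noisy. The standard library's timeit module repeats the measurement and lets you keep the minimum, which is the least noisy estimate of the loop's true cost. A sketch of that approach, run once under each PYTHON_JIT setting:

```python
import timeit

def cpu_bound_loop(n):
    total = 0
    for i in range(n):
        total += i * i
    return total

# Repeat the measurement and keep the minimum: background noise only ever
# inflates timings, so the smallest value is the most trustworthy.
# Run this once with PYTHON_JIT=0 and once with PYTHON_JIT=1, then compare.
times = timeit.repeat(lambda: cpu_bound_loop(1_000_000), number=5, repeat=5)
print(f"best of 5: {min(times):.3f}s")
```

Comparing minima rather than averages avoids the comparison being skewed by a single slow run on a busy machine.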
When JIT Helps (and When It Doesn’t)
JIT compilation benefits CPU-bound code: tight loops, numerical computation, and small functions called frequently see the biggest gains. Long-running applications also amortize the JIT's compilation overhead, so the improvement compounds over time.
The JIT provides minimal benefit for I/O-bound tasks: database queries, network requests, and file operations spend most of their time waiting, not computing. Short scripts that run once may see no improvement, or even slowdowns, because the compilation time exceeds the execution savings.
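The distinction is easy to see in a minimal sketch: in an I/O-bound function the elapsed time is dominated by waiting, which no amount of faster code generation can reclaim (time.sleep stands in for a database or network call here):

```python
import time

def cpu_bound(n):
    # Pure computation: all of this time is interpreter/JIT territory.
    return sum(i * i for i in range(n))

def io_bound(delay):
    # Simulated I/O: the process just waits, so compiling the surrounding
    # bytecode to machine code changes almost nothing.
    time.sleep(delay)
    return "response"

start = time.perf_counter()
io_bound(0.05)
waited = time.perf_counter() - start
print(f"io_bound spent {waited * 1000:.0f} ms waiting")
```

If a profile of your service shows most time inside calls like the sleep above, the JIT's speedup applies only to the thin slice of compute around them.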
Real-world example: a web service endpoint handling JSON parsing and database queries went from 120ms per request to 110ms with the JIT enabled, an 8% improvement. The geometric mean across the official benchmarks is 5-6% faster on x86_64 Linux and 11-12% faster on macOS AArch64, but individual workloads range from a 15% slowdown to over 100% speedup.
The takeaway: benchmark your code. If it’s CPU-bound, JIT likely helps. If it’s I/O-bound, it won’t.
What Changed: Why 3.15 JIT Succeeds Where 3.13/3.14 Failed
The Python 3.13 and 3.14 JIT compilers were often slower than the interpreter. The 3.15 redesign fixed this with three major improvements.
First, the team overhauled the JIT tracing frontend with a dual dispatch mechanism. Instead of maintaining separate tracing versions of every instruction, the system uses a single tracing instruction with two dispatch tables, which increased JIT code coverage by roughly 50% while minimizing interpreter bloat.
Second, developers eliminated branches in JIT code associated with reference count operations. As noted in the technical analysis, a single branch is expensive when multiplied across thousands of Python instructions, so removing these branches delivered measurable performance gains.
Third, the JIT now supports significantly more bytecode operations and includes basic register allocation. Furthermore, the compiler avoids stack operations and operates on registers instead, producing more efficient traces by reducing memory reads and writes.
CPython’s JIT uses a copy-and-patch approach. Bytecode is compiled using pre-built templates that are stitched together and patched at runtime with the correct values. This simpler approach trades maximum performance for compatibility, ensuring all Python code runs without modification.
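The core idea can be illustrated with a toy sketch (this is not CPython's actual machinery, just the patching concept): a machine-code template is assembled ahead of time with a "hole" for a constant, and at runtime the template is merely copied and the hole overwritten, with no code generation at all:

```python
import struct

# Toy illustration of copy-and-patch (not CPython's real implementation):
# a template assembled ahead of time -- x86-64 "mov rax, <imm64>; ret" --
# with an 8-byte hole that gets filled in at runtime.
TEMPLATE = b"\x48\xb8" + b"\x00" * 8 + b"\xc3"
HOLE_OFFSET, HOLE_SIZE = 2, 8

def copy_and_patch(template: bytes, value: int) -> bytes:
    # Copy the template, then overwrite the hole with the runtime constant.
    # No assembling or optimizing happens here, only byte patching.
    patched = bytearray(template)
    patched[HOLE_OFFSET:HOLE_OFFSET + HOLE_SIZE] = struct.pack("<Q", value)
    return bytes(patched)

stub = copy_and_patch(TEMPLATE, 42)
```

Because the expensive compilation of templates happens when CPython itself is built, runtime "compilation" reduces to memcpy-and-patch, which is why the approach is cheap enough to run on every hot trace.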
The Turnaround Story: Funding Loss to Ahead-of-Schedule Success
The Faster CPython team’s success is impressive because of what they overcame. Microsoft funded the team from 2021 to 2024, but in 2025, Microsoft canceled funding and laid off most of the team, including Mark Shannon (who had contributed to CPython since 2010), Eric Snow, and Irit Katriel. Additionally, the Python Software Foundation paused its grants program in August 2025 due to funding shortfalls.
The JIT in Python 3.13 and 3.14 delivered disappointing results, often performing worse than the interpreter. Losing funding while the product was underperforming would have killed most projects.
Instead, the community took over stewardship. Despite the setbacks, the team hit its performance goals over a year early for macOS AArch64 and months early for x86_64 Linux. This is genuine resilience in open-source development: when corporate sponsors withdraw, passionate contributors can still deliver.
PyPy vs. CPython JIT: Pick Your Tradeoff
PyPy has had a mature JIT compiler for years and is often 5-10x faster than CPython for CPU-bound workloads. So why use CPython’s JIT?
Compatibility. PyPy has issues with C extensions and lacks some recent Python features. In contrast, CPython’s JIT runs all Python code without modification. The trade-off is clear: CPython JIT delivers modest gains (5-12%) but works with your entire codebase. Meanwhile, PyPy delivers maximum performance but requires compatibility workarounds.
For teams that need C extension support or the latest Python features, CPython’s JIT is the practical choice. However, for teams that can tolerate PyPy’s limitations and need maximum performance, PyPy remains the better option.
Future Outlook and Adoption Timeline
Python 3.15 is currently in alpha, with a stable release expected in October 2026. The JIT will remain experimental in 3.15, with continued improvements planned for 3.16 and beyond. The GitHub issue tracker shows active development on extended bytecode coverage, register allocation, and potential tiered compilation (interpreter → basic JIT → optimized JIT).
Early adopters can test the JIT now in the 3.15 alpha; mainstream adoption will likely begin in 2027 as teams upgrade to the stable release. Whether 5-12% performance improvements are compelling enough for widespread adoption remains to be seen, but for teams running CPU-bound Python workloads, free performance without code changes is valuable.
Key Takeaways
- Python 3.15’s JIT compiler achieved 11-12% speedups on macOS and 5-6% on Linux, delivering on performance goals over a year early despite the Faster CPython team losing Microsoft funding in 2025
- Enable JIT by setting the PYTHON_JIT environment variable and verify with sys._jit.is_enabled() – benchmark your code with and without JIT to measure real-world performance impact
- JIT benefits CPU-bound code (tight loops, numerical computation, long-running apps) but provides minimal improvement for I/O-bound tasks (database, network, file operations) or short scripts
- The 3.15 JIT succeeds where 3.13/3.14 failed through three technical improvements: dual dispatch mechanism (50% more code coverage), reference count elimination, and extended bytecode support with register allocation
- CPython JIT delivers modest gains (5-12%) with full compatibility, while PyPy offers 5-10x speedups with C extension limitations – choose based on your compatibility vs. performance needs
- Python 3.15 stable releases in October 2026, with mainstream JIT adoption expected in 2027 and continued improvements planned for 3.16+

