On April 9, 2026, SiFive raised $400 million in a Series G funding round at a $3.65 billion valuation, with Nvidia joining as a new strategic investor. Led by Atreides Management and backed by Apollo Global Management, Point72 Turion, and T. Rowe Price, this oversubscribed round marks SiFive’s final private raise before an anticipated IPO. The significance isn’t just the money—it’s Nvidia’s strategic bet that RISC-V, the open-source CPU architecture, is ready for AI data centers.
RISC-V CPUs can now plug into Nvidia’s GPU infrastructure through NVLink Fusion, and Nvidia is porting CUDA to the RISC-V RVA23 profile. This puts RISC-V on equal footing with x86 and ARM in AI workloads. For developers, that’s the headline: the open-source instruction set architecture just became a viable alternative to Intel, AMD, and ARM in the one market that matters most—AI infrastructure.
Why Nvidia is Betting on RISC-V
Nvidia sells GPUs, but every GPU needs a host CPU, so Nvidia depends on a healthy CPU ecosystem. x86 is controlled by Intel and AMD (both direct competitors), and even Nvidia’s own Grace CPU is built on ARM cores, tying it to ARM’s licensing model and the friction that comes with it. RISC-V offers Nvidia a neutral, open partner with no vendor lock-in.
The numbers back Nvidia’s bet. RISC-V hit 25% global market share in 2026, up from 2.5% in 2021. Meta acquired Rivos, Qualcomm bought Ventana Micro for $2.4 billion, and Google is porting its software stack to RISC-V. This is strategic infrastructure planning by hyperscalers breaking the x86/ARM duopoly.
In January 2026, SiFive integrated Nvidia’s NVLink Fusion, enabling coherent connectivity between RISC-V CPUs and Nvidia GPUs. Combined with Nvidia’s CUDA port to RISC-V, the last technical barrier to enterprise adoption is gone. System builders can now choose x86, ARM, or RISC-V CPUs for Nvidia’s AI infrastructure—and RISC-V is the only option with zero licensing fees.
Open Source Hardware’s Linux Moment
RISC-V’s trajectory mirrors Linux disrupting proprietary Unix: no licensing fees, no vendor lock-in, freedom to customize. Companies building AI data centers can add domain-specific instructions for AI workloads without waiting for Intel, AMD, or ARM’s roadmap.
The technical advantages matter. RISC-V’s modular design reduces power consumption—critical for massive AI clusters. SiFive’s data center CPUs target 200W TDP versus 250W for comparable x86 chips. For hyperscalers spending billions on power infrastructure, that efficiency gap compounds.
Real-world adoption is accelerating. Meta, Qualcomm, and Google are deploying RISC-V internally, and the automotive industry standardized on RISC-V through the Quintauris joint venture (Bosch, BMW, Infineon, NXP, Qualcomm) in early 2026. Industry analysts predict RISC-V-native data centers will emerge by 2027.
IPO Plans Signal Ecosystem Maturity
CEO Patrick Little confirmed to Reuters that the April 2026 Series G is SiFive’s final private round before an IPO, though no exchange or timeline has been announced. At $3.65 billion, SiFive is the most valuable pure-play RISC-V company. SiFive reported record growth in 2025, with over 500 semiconductor designs and 10+ billion RISC-V cores shipped, targeting a $100+ billion addressable market in data center CPUs.
An IPO validates the entire RISC-V ecosystem. Public markets don’t bet on science projects—if SiFive goes public, it proves open-source CPU architectures can compete with Intel, AMD, and ARM in data centers.
What Developers Should Watch
RISC-V isn’t vaporware. The toolchain is ready: GCC and LLVM have full support, major operating systems run on RISC-V, and Nvidia’s CUDA port means GPU-accelerated workloads will work. Developers should add RISC-V targets to CI/CD pipelines for future-proofing, watch for RISC-V instance offerings from AWS, Google Cloud, and Azure by 2028-2029, and monitor AI framework support as TensorFlow and PyTorch add RISC-V optimizations.
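In practice, adding a RISC-V target to a pipeline starts with knowing which architecture a job is running on. Below is a minimal, illustrative sketch (the ISA_FAMILIES table and isa_family helper are hypothetical names, not from any library) that normalizes the machine string so the same build script can branch per architecture, for example running RISC-V jobs under QEMU while native runners remain scarce:

```python
import platform

# Map the kernel's machine string to a coarse ISA family so CI jobs can
# branch on it (e.g., skip x86-only SIMD tests on a riscv64 runner).
ISA_FAMILIES = {
    "x86_64": "x86",
    "amd64": "x86",
    "aarch64": "arm",
    "arm64": "arm",
    "riscv64": "riscv",
}

def isa_family(machine=None):
    """Return 'x86', 'arm', 'riscv', or 'unknown' for a machine string.

    Defaults to the current host as reported by platform.machine().
    """
    m = (machine or platform.machine()).lower()
    return ISA_FAMILIES.get(m, "unknown")

if __name__ == "__main__":
    print(f"Running on ISA family: {isa_family()}")
```

On a riscv64 host, or under qemu-riscv64 user-mode emulation, platform.machine() reports "riscv64", so the same script works unchanged across x86, ARM, and RISC-V runners.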
The performance trade-off is real. Early RISC-V chips lag x86 and ARM in single-threaded workloads but excel in power efficiency and massively parallel AI tasks. The sweet spot is AI inference and training, not general-purpose computing—yet.
ARM isn’t sitting still. Industry projections suggest that by 2029, ARM-based custom processors will power 90% of AI servers, leaving just 10% for x86 and RISC-V combined. RISC-V is credible, not dominant.
The Takeaway
Nvidia’s $400 million bet isn’t about RISC-V replacing x86 or ARM tomorrow—it’s about breaking the duopoly and ensuring Nvidia has a neutral, open CPU partner. RISC-V hit 25% market share, secured a $3.65 billion valuation, and cleared the IPO bar. For developers, RISC-V support is coming to your tools, frameworks, and cloud providers. Whether it becomes dominant or a strong third pillar alongside x86 and ARM, ignoring RISC-V in 2026 means ignoring where AI infrastructure is headed. Open-source hardware is following the same trajectory as open-source software—slow adoption, then sudden ubiquity.