Frore Systems raised $143 million in Series D funding yesterday, hitting unicorn status with a $1.64 billion valuation. The 8-year-old startup, founded by ex-Qualcomm engineers, doesn’t make chips—it makes liquid cooling systems that address AI infrastructure’s biggest bottleneck: thermal limits. NVIDIA’s latest Blackwell GPUs generate up to 1,000 watts per chip, three times more heat than GPUs from seven years ago. AI racks now demand 120+ kilowatts per rack. Traditional air cooling maxes out at 30-40 kilowatts. When chips overheat past 83-87°C, they thermal throttle—automatically cutting performance by 30-50% to prevent damage. Your $100,000 GPU becomes a $60,000 GPU because you cheaped out on cooling.
AI Infrastructure Is Literally Overheating
The thermal crisis isn’t theoretical; it’s happening in production data centers right now. AI rack densities have exploded from 15 kW to 120-132 kW in under a decade, driven by chips whose heat output climbs steeply with each generation. Physics sets hard limits: beyond 30-40 kW per rack, air simply cannot carry heat away fast enough, no matter how many fans you add.
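A back-of-envelope airflow calculation makes that limit concrete. The sketch below, in plain Python, estimates the volume of air a rack must move to carry away a given heat load; the air properties and the 15°C inlet-to-outlet temperature rise are illustrative assumptions, not measurements from any particular facility.

```python
# Back-of-envelope: airflow required to remove a rack's heat load with air alone.
# Air properties and the 15 K temperature rise are illustrative assumptions.

AIR_DENSITY = 1.2         # kg/m^3, roughly sea level at ~20 C
AIR_SPECIFIC_HEAT = 1005  # J/(kg*K)
DELTA_T = 15.0            # K, assumed server inlet-to-outlet temperature rise

M3S_TO_CFM = 2118.88      # 1 m^3/s expressed in cubic feet per minute

def required_airflow_m3s(heat_load_w: float) -> float:
    """Volumetric airflow (m^3/s) such that Q = rho * V_dot * cp * dT."""
    return heat_load_w / (AIR_DENSITY * AIR_SPECIFIC_HEAT * DELTA_T)

for load_kw in (30, 40, 120):
    flow = required_airflow_m3s(load_kw * 1000)
    print(f"{load_kw:>4} kW rack -> {flow:5.2f} m^3/s (~{flow * M3S_TO_CFM:,.0f} CFM)")
```

At 120 kW the rack would need roughly 6.6 m³/s of air, on the order of 14,000 CFM through a single rack, which is why adding more fans stops being a workable answer.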
The invisible cost hits harder than most developers realize. GPUs thermal throttle when they hit temperature thresholds, reducing clock speeds to prevent hardware damage. The result is 30-50% performance degradation that users often don’t detect until they benchmark. Meanwhile, data centers burn 30-40% of total electricity on cooling instead of computation. That’s not an efficiency gap better operations can close; it’s a physics problem masquerading as an engineering challenge.
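Developers who want to check whether their own GPUs are quietly throttling can read temperature, clocks, and throttle reasons through NVIDIA’s NVML bindings. Here’s a minimal sketch using the pynvml package (assuming a reasonably recent pynvml and an installed NVIDIA driver; exact constant names can vary slightly across versions):

```python
# Minimal thermal-throttle check via NVIDIA's NVML bindings (pip install pynvml).
# Assumes an NVIDIA driver is present; throttle-reason flags are standard NVML bitmasks.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU

temp = pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)
sm_clock = pynvml.nvmlDeviceGetClockInfo(handle, pynvml.NVML_CLOCK_SM)
sm_max = pynvml.nvmlDeviceGetMaxClockInfo(handle, pynvml.NVML_CLOCK_SM)
reasons = pynvml.nvmlDeviceGetCurrentClocksThrottleReasons(handle)

thermal_mask = (pynvml.nvmlClocksThrottleReasonSwThermalSlowdown
                | pynvml.nvmlClocksThrottleReasonHwThermalSlowdown)

print(f"temp: {temp} C, SM clock: {sm_clock}/{sm_max} MHz")
if reasons & thermal_mask:
    print("thermal throttling active -- clocks reduced to shed heat")

pynvml.nvmlShutdown()
```

If the thermal bits are set while SM clocks sit below their maximum, the GPU is shedding heat at the cost of the performance you paid for.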
Frore’s Liquid Cooling Outperforms Competitors by 2x
Frore’s LiquidJet technology doesn’t use traditional flat cold plates. Instead, the system employs flexible, three-dimensional liquid channels customized for specific chip geometries—wrapping around processors to maximize surface contact. This architecture enables 600 W/cm² heat flux removal, double the 300 W/cm² capacity of competitors like CoolIT and Accelsius.
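To see what those figures mean at the die level, consider a rough heat-flux check. The sketch below assumes an illustrative 8 cm² die drawing 1,000 W, with hotspots running at three times the average flux; both the die area and the hotspot factor are assumptions for illustration, not published chip specs.

```python
# Rough heat-flux check: can a cold plate keep up with a 1,000 W chip?
# Die area and hotspot factor are illustrative assumptions, not chip specs.

CHIP_POWER_W = 1000.0
DIE_AREA_CM2 = 8.0       # assumed die area
HOTSPOT_FACTOR = 3.0     # assumed: hotspots run ~3x the die-average flux

avg_flux = CHIP_POWER_W / DIE_AREA_CM2    # W/cm^2, averaged over the die
hotspot_flux = avg_flux * HOTSPOT_FACTOR  # W/cm^2 at the hottest region

for name, capacity in (("conventional cold plate", 300.0), ("LiquidJet (claimed)", 600.0)):
    verdict = "handles hotspot" if capacity >= hotspot_flux else "hotspot exceeds capacity"
    print(f"{name}: {capacity:.0f} W/cm^2 vs hotspot {hotspot_flux:.0f} W/cm^2 -> {verdict}")
```

Under those assumptions the hotspot flux lands around 375 W/cm², above a 300 W/cm² plate’s capacity but well inside 600 W/cm². That headroom, not the average, is where the 2x claim matters.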
The performance difference translates directly to infrastructure economics. A liquid-cooled rack can fit 30-40 GPU servers compared to 10-15 air-cooled servers in the same 42U footprint—roughly 3x compute density. The system eliminates fans, reducing mechanical failure points and dust accumulation. More importantly, it prevents thermal throttling, ensuring GPUs run at advertised clock speeds instead of degraded performance.
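The “roughly 3x” figure is the midpoint of those server ranges; a quick sanity check using only the article’s numbers:

```python
# Sanity check on the density claim, using the article's servers-per-rack ranges.
AIR_SERVERS = (10, 15)     # air-cooled servers per 42U rack
LIQUID_SERVERS = (30, 40)  # liquid-cooled servers per 42U rack

worst = LIQUID_SERVERS[0] / AIR_SERVERS[1]  # least favorable pairing
best = LIQUID_SERVERS[1] / AIR_SERVERS[0]   # most favorable pairing
mid = (sum(LIQUID_SERVERS) / 2) / (sum(AIR_SERVERS) / 2)

print(f"density gain: {worst:.1f}x to {best:.1f}x, ~{mid:.1f}x at the midpoints")
# -> density gain: 2.0x to 4.0x, ~2.8x at the midpoints
```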
Manufacturing currently runs exclusively in Taiwan, though Frore plans to expand geographically. Customers include large cloud providers, enterprises, and government agencies, but the company hasn’t disclosed names. The secrecy makes sense: cooling infrastructure is becoming a competitive advantage, not a commodity.
How NVIDIA’s CEO Pushed The Pivot
Frore Systems started eight years ago with a different mission: cooling smartphones and tablets using solid-state AirJet technology. Founders Sesh Madhavapeddy and Surya Ganti, both former Qualcomm engineers, saw consumer electronics as the primary market. Then NVIDIA CEO Jensen Huang intervened about two years ago, urging them to pivot to data center liquid cooling for AI chips.
Huang’s push wasn’t altruism—it was pragmatism. NVIDIA’s roadmap includes GPUs generating 1,000W+ thermal output. You can’t sell hardware that customers can’t cool. The conversation forced Frore to recognize a larger opportunity: hardware vendors need cooling partners. Chip manufacturers can’t vertically integrate thermal solutions the way they integrate memory or interconnects. Cooling is infrastructure, and infrastructure creates billion-dollar markets.
Institutional Money Bets on Thermal Infrastructure
The $143 million Series D was led by MVP Ventures, with Fidelity Management & Research, Qualcomm Ventures, and Mayfield Fund participating. Total funding now stands at $340 million. Investors aren’t buying into a niche hardware play—they’re betting that thermal management becomes as foundational as compute, storage, and networking in AI infrastructure.
Andre de Baubigny, Managing Partner at MVP Ventures, framed the thesis clearly: “Thermal architecture is the most stressed component in the data center. Frore’s platform unlocks higher compute density across hyperscale and edge environments.” CEO Madhavapeddy took it further: “Thermal management represents the single greatest limitation in AI chip performance.”
When institutional investors like Fidelity and strategic investors like Qualcomm Ventures back thermal infrastructure at unicorn valuations, the signal is loud: cooling is no longer a commodity. Data centers that solve thermal limits first can deploy more AI compute faster. That’s competitive advantage, not operational overhead.
Liquid Cooling Goes Mainstream
The industry tipping point has already happened. In 2026, 22% of newly built data centers use liquid cooling, up from roughly 5% in 2023. Direct-to-chip cold plate cooling, the broader category LiquidJet competes in, commands 65% of the liquid cooling market. This isn’t experimental technology anymore. It’s becoming the default for AI infrastructure.
Germany announced plans to double its AI data center footprint by 2030, with liquid cooling as a design requirement. The directive recognizes what physics already proved: air cooling cannot support AI workload thermal demands. Retrofit projects remain expensive and inefficient—operators who didn’t design for liquid cooling from day one face stranded capacity or costly infrastructure overhauls.
Developers building or procuring AI infrastructure need to factor liquid cooling into architecture decisions now. The window for “wait and see” has closed. Frore’s unicorn valuation confirms what data center operators already know: thermal infrastructure is foundational, not optional.
Key Takeaways
- AI chips generate extreme heat that air cooling cannot handle—NVIDIA Blackwell GPUs hit 1,000W, while air cooling maxes out at 30-40 kW per rack versus AI’s 120+ kW demands
- Thermal throttling costs 30-50% performance when GPUs overheat past 83-87°C, turning $100K hardware into $60K effective compute capacity
- Frore’s LiquidJet removes heat at 600 W/cm²—double competitors’ 300 W/cm²—enabling 3x rack density (30-40 GPU servers vs. 10-15 air-cooled)
- Liquid cooling adoption hit 22% of new data centers in 2026 (up from 5% in 2023), signaling the technology moved from experimental to mainstream
- Cooling is now competitive infrastructure—data centers that solve thermal limits first can deploy more AI compute faster than competitors stuck with air cooling

