
Marvell’s $5.5B Bet on Photonics for AI Infrastructure

Marvell just bet $5.5 billion that the future of AI runs on light, not copper. On December 2, 2025, the chip giant announced it would acquire Celestial AI, a photonics startup building optical interconnect technology that moves data between GPUs and memory using light instead of electrical signals. The deal—$3.25 billion upfront plus a potential $2.25 billion earnout—signals that photonics is transitioning from experimental technology to essential AI infrastructure. Here’s why this matters: AI workloads currently spend 60% of their time waiting for data, and copper interconnects have hit their physical limits. If you’re building AI systems today, photonics will determine within the next two to three years whether your models train in hours or days.

The Memory Bandwidth Bottleneck Is Killing AI Performance

The conventional wisdom says AI needs faster GPUs. The reality? Compute power isn’t the problem—data movement is. AI workloads spend 60% of their execution time waiting for memory, not crunching numbers. While data processing speed has increased 60,000x over the past two decades, memory-to-processor transfer speeds have only improved 30x. Meanwhile, AI models have grown 400x larger every two years since 1998. The math doesn’t work.
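
To make the imbalance concrete, here is a quick back-of-envelope calculation. The growth multipliers come from the figures above; the normalization and the per-generation comparison are our own framing.

```python
# Back-of-envelope look at the "AI Memory Wall" using the growth ratios
# cited above. Starting values are normalized to 1.0 two decades ago;
# only the multipliers (60,000x compute, 30x memory) come from the text.

compute_growth = 60_000   # data processing speedup over ~20 years
memory_growth = 30        # memory-to-processor transfer speedup, same period

imbalance = compute_growth / memory_growth
print(f"Compute has pulled ahead of memory bandwidth by {imbalance:,.0f}x")
# -> Compute has pulled ahead of memory bandwidth by 2,000x

# Models reportedly grow ~400x every two years, while memory bandwidth
# compounds far more slowly over the same window:
memory_gain_per_2y = memory_growth ** (2 / 20)   # ~1.4x per two years
print(f"Memory bandwidth: ~{memory_gain_per_2y:.1f}x per two years "
      "vs. ~400x model growth")
```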

This creates what the industry calls the “AI Memory Wall.” GPUs sit idle waiting for data to arrive from memory. Training times are limited by how fast you can feed the model, not how fast you can process it. The energy cost of moving data now exceeds the energy cost of computation. Traditional copper interconnects work well up to five meters at current network speeds, but beyond 1.6 terabits per second the cables become prohibitively thick and their usable reach too short to be practical. The skin effect increasingly limits cable reach as transmission speeds rise. Once you scale beyond 100,000 processors—routine for modern AI training clusters—copper simply cannot connect them. You have to use optics.
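
The reach problem can be sketched with a rough model (ours, not from the article): skin-effect attenuation grows roughly with the square root of frequency, so at a fixed loss budget, usable copper reach shrinks roughly as one over the square root of the data rate. Anchoring on the five-meter, 1.6 Tb/s figure above:

```python
# Rough sketch of why copper reach collapses as data rates climb.
# Assumption (ours): skin-effect loss per meter grows ~sqrt(frequency),
# so for a fixed loss budget, reach scales ~1/sqrt(data rate). Only the
# 5 m at 1.6 Tb/s anchor comes from the article; the rest is extrapolation.

import math

REFERENCE_RATE_TBPS = 1.6   # today's practical copper limit
REFERENCE_REACH_M = 5.0     # usable reach at that rate

def copper_reach_m(rate_tbps: float) -> float:
    """Estimated copper reach under a sqrt(f) skin-effect loss model."""
    return REFERENCE_REACH_M * math.sqrt(REFERENCE_RATE_TBPS / rate_tbps)

for rate in (1.6, 3.2, 6.4, 12.8):
    print(f"{rate:5.1f} Tb/s -> ~{copper_reach_m(rate):.1f} m of copper")
# 3.2 Tb/s -> ~3.5 m; 6.4 Tb/s -> ~2.5 m; 12.8 Tb/s -> ~1.8 m.
# Far too short to cable a 100,000-GPU cluster, which is where optics win.
```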

Photonics Delivers 10x Bandwidth with Half the Power

Celestial AI’s Photonic Fabric technology uses light instead of copper to move data between chips. The company integrates lasers, modulators, and photodetectors directly into or alongside processors, keeping data encoded as light in optical fiber until it reaches the GPU. The result is a 16-terabit-per-second chiplet—10x the capacity of state-of-the-art 1.6T ports used in today’s scale-out applications. Energy efficiency improves dramatically too: Photonic Fabric consumes just 6.2 picojoules per bit compared to 62.5 pJ/bit for electrical interconnects like NVLink or NVSwitch. That’s a 10x improvement.
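
Those pJ/bit figures translate directly into watts, since link power is energy per bit times bit rate. A quick sanity check on the numbers above:

```python
# Interconnect power draw = energy per bit x bit rate. The pJ/bit figures
# and the 16 Tb/s chiplet capacity come from the article.

PJ = 1e-12    # picojoule in joules
TBPS = 1e12   # terabit per second in bits per second

def link_power_watts(pj_per_bit: float, rate_tbps: float) -> float:
    return pj_per_bit * PJ * rate_tbps * TBPS

electrical = link_power_watts(62.5, 16)  # NVLink/NVSwitch-class electrical
photonic = link_power_watts(6.2, 16)     # Celestial AI Photonic Fabric

print(f"Electrical 16 Tb/s link: {electrical:.0f} W")  # -> 1000 W
print(f"Photonic 16 Tb/s link:   {photonic:.0f} W")    # -> 99 W
print(f"Ratio: {electrical / photonic:.1f}x")          # -> 10.1x
```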

The technical innovation lies in co-packaging. Celestial AI stacks optical components vertically with high-power GPUs in a 3D package, making the photonic connection directly into the processor rather than from the die edge. This approach frees up valuable die edge space—what engineers call “beachfront property”—allowing manufacturers to pack in more High Bandwidth Memory. Roundtrip memory access via photonic fabric takes roughly 120 nanoseconds, barely more than local DRAM access. For developers, this means training larger models without hitting the memory wall, and lower cloud costs as hyperscalers pass through 30-50% energy savings.
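
One way to interpret that 120-nanosecond figure is through the bandwidth-delay product: how much data must be in flight to keep a 16 Tb/s link saturated across the roundtrip. Both inputs come from the numbers above; the interpretation is a standard queuing argument:

```python
# Bandwidth-delay product for a photonic memory link: outstanding data
# needed to hide a 120 ns roundtrip at 16 Tb/s (figures from the article).

RATE_BPS = 16e12        # 16 Tb/s chiplet
ROUNDTRIP_S = 120e-9    # ~120 ns roundtrip memory access

in_flight_bytes = RATE_BPS * ROUNDTRIP_S / 8
print(f"Data in flight to saturate the link: {in_flight_bytes / 1024:.0f} KiB")
# -> ~234 KiB of outstanding requests, a level of memory-level parallelism
# GPUs already sustain against local HBM, which is why the extra latency
# is tolerable in practice.
```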

The Timeline: Photonics Arrives in 2028, Standard by 2030

This isn’t vaporware or a research project five years out. Marvell projects Celestial AI will reach a $500 million annualized revenue run rate in Q4 fiscal 2028 and double to $1 billion by Q4 2029. The earnout structure reflects confidence: Celestial AI shareholders get the full $2.25 billion bonus if cumulative revenue exceeds $2 billion by the end of fiscal 2029. Those numbers don’t work unless hyperscalers are deploying photonics at scale.

For developers, the impact timeline looks like this. The acquisition closes in Q1 2026, with hyperscalers like AWS, Azure, and Google beginning pilot deployments shortly after. By H2 2028, cloud providers will start offering photonics-enhanced AI instances. You’ll see faster training and inference times transparently—no code changes required. By 2029, photonics becomes standard equipment in AI data centers, making training of 1-trillion-parameter models economically viable. Industry projections suggest that by the mid-2030s, all interconnects will be optical and co-packaged. Memory bandwidth will shift from primary bottleneck to competitive advantage.

Hyperscalers Are Already Committed

Major cloud providers aren’t waiting. Oracle runs a 131,000-GPU fabric with optical links at all three network levels—the largest-scale proof that photonics works in production. Microsoft created its “Optics for the Cloud” research alliance and already deployed silicon photonics at the rack level for 100-gigabit interconnectivity. Google is standardizing Optical Circuit Switching as part of its networking roadmap. The optical components market totals $17 billion today, with AI-driven data centers accounting for over 60% of demand. Market analysts project the interchip optical interconnect market alone will reach $32.73 billion by 2030, up from $18.01 billion in 2025.
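
Those projections imply a steady compounding growth rate, which is easy to check:

```python
# Implied growth rate of the interchip optical interconnect market, using
# the projections cited above ($18.01B in 2025 -> $32.73B in 2030).

start, end, years = 18.01, 32.73, 5
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # -> 12.7% per year
```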

The shift toward photonics isn’t speculative—it’s financial. Hyperscalers care about three things: performance, energy efficiency, and total cost of ownership. Photonics delivers on all three. Ten times the bandwidth means faster training. Thirty to fifty percent power reduction cuts operational expenses. And while initial capital costs are higher, the TCO math favors optical interconnects at AI scale. For developers, this means multi-cloud strategies become more important as providers adopt different photonic solutions. Design systems assuming 10x bandwidth improvements arrive within three to five years.
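
The TCO argument can be sketched with toy numbers. Only the 30-50% power savings range comes from the figures above; every dollar amount below is a hypothetical assumption chosen to show the shape of the math:

```python
# Toy five-year TCO comparison for copper vs. photonic interconnects.
# The 30-50% power savings comes from the article; ALL dollar figures
# and the capex premium are hypothetical illustrations.

def tco(capex: float, annual_power_cost: float, years: int = 5) -> float:
    """Capex plus cumulative power opex over the deployment lifetime."""
    return capex + annual_power_cost * years

copper_capex = 10_000_000             # hypothetical
photonic_capex = 15_000_000           # assume a 50% optical capex premium
copper_power = 4_000_000              # hypothetical annual power bill
photonic_power = copper_power * 0.6   # midpoint of the 30-50% savings

print(f"Copper 5-year TCO:   ${tco(copper_capex, copper_power):,.0f}")
print(f"Photonic 5-year TCO: ${tco(photonic_capex, photonic_power):,.0f}")
# -> $30,000,000 vs. $27,000,000. At AI scale, the power savings swamp
# the capex premium; longer lifetimes tilt the math further toward optics.
```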

The Photonics Race: Marvell Isn’t Alone

Marvell paid $5.5 billion to secure its position before the photonics market matures and valuations soar. But it’s not the only player. Ayar Labs has raised $374.7 million at a valuation exceeding $1 billion, backed by strategic investors including AMD Ventures, Intel Capital, and NVIDIA. Its technology delivers 6.4 terabits per second at 4 picojoules per bit, and the company is partnering directly with hyperscalers. Lightmatter claims its Passage interconnect provides 10x more I/O bandwidth than existing chip-to-chip solutions. Luminous Computing is developing silicon photonics processors purpose-built for AI, though details remain scarce.

The competitive landscape validates photonics as critical infrastructure rather than niche technology. Multiple technical approaches—chiplets, optical interposers, integrated photonic processors—are converging on the same problem from different angles. The market is large enough to support several winners. Developers shouldn’t bet on a single photonic vendor. Instead, focus on outcomes: bandwidth, latency, energy efficiency. Cloud providers will integrate whichever solutions deliver the best price-performance, and the best strategy is platform-agnostic architecture that adapts to whatever photonics tech your provider chooses.
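
In practice, platform-agnostic means tuning against measured link characteristics rather than any vendor’s spec sheet. A minimal sketch of that idea; the names, thresholds, and policy below are illustrative and belong to no provider’s API:

```python
# Vendor-agnostic tuning sketch: measure the interconnect you actually
# got, then size transfers from the measurement. Everything here is an
# illustrative assumption, not any cloud provider's API.

import time
from dataclasses import dataclass

@dataclass
class LinkProfile:
    bandwidth_gbps: float
    roundtrip_us: float

def probe_link(send, payload: bytes, rounds: int = 8) -> LinkProfile:
    """Time echo roundtrips through a caller-supplied blocking send()."""
    start = time.perf_counter()
    for _ in range(rounds):
        send(payload)
    elapsed = time.perf_counter() - start
    rtt_us = elapsed / rounds * 1e6
    gbps = len(payload) * 8 * rounds / elapsed / 1e9
    return LinkProfile(bandwidth_gbps=gbps, roundtrip_us=rtt_us)

def pick_transfer_size(link: LinkProfile) -> int:
    """Derive transfer size from the bandwidth-delay product, not specs."""
    bytes_per_us = link.bandwidth_gbps * 1e9 / 8 / 1e6
    in_flight = int(bytes_per_us * link.roundtrip_us)
    return max(1 << 20, in_flight)  # keep at least 1 MiB in flight
```

The point is the policy, not the numbers: if your provider swaps in a 10x-faster photonic fabric, code like this measures it and adapts, with no vendor lock-in.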

What Developers Should Do Now

Photonics is arriving in two to three years, not a decade. Watch for announcements from AWS, Azure, and Google about photonics-enhanced AI offerings in late 2027 and 2028. Design systems with the assumption that memory bandwidth will increase 10x by 2029. Plan infrastructure budgets accounting for lower energy costs but potentially higher initial capital expenses. Most importantly, stay platform-agnostic. The hyperscalers are making different bets on photonic technologies, and locking into a single provider early carries risk.

Marvell’s $5.5 billion acquisition is a market signal, not a science experiment. When a chip company spends that much to solve the memory bandwidth problem, it’s telling you copper is dead for AI scale and photonics is the only path forward. The race is on, and the winners will be those who recognize memory bandwidth—not compute—as the defining constraint of the AI era.

For more details, see Next Platform’s analysis and University of Michigan’s research on lightwave-connected chips.
