On March 2, 2026, NVIDIA announced $4 billion in strategic photonics investments—$2 billion each to Lumentum and Coherent—with multi-year purchasing commitments and capacity rights for optical networking components. The investment marks a critical inflection point in AI infrastructure: data movement between GPUs has become the bottleneck limiting AI scaling, not GPU compute power itself. As AI clusters grow to 100,000+ GPUs requiring 10 Tbps bandwidth per link, copper interconnects have hit physical limits at 400 Gbps. Silicon photonics delivers 1.6 Tbps today, scaling to 12.8 Tbps by 2028, while cutting power consumption 70%.
This isn’t just a technology upgrade. NVIDIA is locking up photonics supply chains the same way they controlled HBM memory—securing capacity while competitors scramble. AMD has no announced photonics partnerships. Intel is years behind. Google’s TPUs rely on custom optical interconnects unavailable to the broader market. Meanwhile, NVIDIA just guaranteed multi-year access to the components everyone needs but can’t get.
Why Silicon Photonics Became the AI Bottleneck
Modern AI training clusters like those used by Meta and Google require 32,000+ GPUs exchanging terabytes per second. Copper interconnects max out at 400 Gbps. At 224 Gbps per-lane signaling rates, copper cable reach drops below one meter—far too short for data-center-scale topologies. Moreover, copper interconnects consume roughly 30% of total data center power. The physics don’t work anymore.
Silicon photonics solves both problems. NVIDIA’s Gen 1 optical engines shipping now deliver 1.6 Tbps per connector. Gen 2 Co-Packaged Optics (CPO) arriving in 2027 hit 6.4 Tbps. Gen 3 in 2028 reaches 12.8 Tbps—32x copper’s 400 Gbps limit. Power efficiency improves more than 3x: CPO modules consume 9W versus 30W for traditional pluggable optical transceivers. Consequently, data center interconnects that ate 30% of power budgets drop to single digits.
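The per-module arithmetic behind those power claims is straightforward; a minimal check using the figures above:

```python
# CPO module vs traditional pluggable transceiver power draw (figures cited above).
cpo_watts = 9
pluggable_watts = 30
reduction = 1 - cpo_watts / pluggable_watts   # fraction of interconnect power saved
print(f"interconnect power reduction: {reduction:.0%}")  # 70%
```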
Training performance tells the real story. Optical networking eliminates data movement bottlenecks, increasing GPU utilization by over 20%. NVIDIA projects GPT-6 training time drops from 18 months with copper to 12 months with optical. That’s six months of GPU-hours saved—a massive cost reduction even with photonics premiums baked in.
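As a rough sanity check on the scale of those savings (cluster size and hours-per-month below are illustrative assumptions, not NVIDIA figures):

```python
# GPU-hours saved by cutting a training run from 18 to 12 months.
# Cluster size is an assumption borrowed from the 32,000+ GPU figure above.
gpus = 32_000
hours_per_month = 730          # average hours in a calendar month
months_saved = 18 - 12
saved_gpu_hours = gpus * hours_per_month * months_saved
print(f"GPU-hours saved: {saved_gpu_hours:,}")  # 140,160,000
```

Even at commodity cloud GPU rates, nine-digit GPU-hour savings dwarf a per-module photonics premium.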
Furthermore, NVIDIA’s Quantum-X InfiniBand switches launched in early 2026 with 115.2 Tbps throughput across 144 ports at 800 Gbps each. The CPO-based systems shipping in 2027 push bandwidth to 409.6 Tbps with 512 ports. These aren’t incremental improvements. Optical networking is reshaping what’s physically possible at AI scale.
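Those switch numbers follow directly from port count times per-port rate:

```python
# Aggregate switch bandwidth in Tbps from port count and per-port rate in Gbps.
def aggregate_tbps(ports: int, gbps_per_port: int) -> float:
    return ports * gbps_per_port / 1_000

quantum_x = aggregate_tbps(144, 800)   # Quantum-X: 115.2 Tbps
cpo_2027 = aggregate_tbps(512, 800)    # 2027 CPO systems: 409.6 Tbps
print(quantum_x, cpo_2027)
```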
NVIDIA’s Supply Chain Lockup Strategy
The $4 billion investment isn’t altruism. NVIDIA secured multi-year capacity rights and purchasing commitments from both Lumentum and Coherent—effectively locking up photonics supply. Lumentum controls 50-60% of the global laser market for AI networking. Demand already exceeds supply by 30%. Stock markets understood immediately: Lumentum jumped 12%, Coherent 15% on March 2.
This mirrors NVIDIA’s HBM playbook. Control critical components. Guarantee yourself supply. Let competitors fight for scraps. AMD has announced zero photonics partnerships as of March 2026. Intel is developing silicon photonics internally but trails NVIDIA’s timeline by years. Similarly, Google’s TPUs use custom optical interconnects unavailable to cloud customers or third-party AI companies.
Timing matters. Meta and Google signed a multi-billion-dollar TPU deal in January 2026—a sign of hyperscalers actively diversifying away from NVIDIA GPUs amid supply constraints. However, NVIDIA’s photonics investment tightens the grip. Even if AMD builds competitive GPUs or Google opens TPU access, they lack the optical networking infrastructure NVIDIA just secured. Compute is only half the equation. Data movement is the other half, and NVIDIA owns it.
Additionally, Lumentum is building new U.S. fabrication facilities with 18-24 month lead times. Coherent scaled to 6-inch InP wafers in late 2025, producing 4x more laser chips per wafer than previous-generation 4-inch processes. These capacity expansions are locked to NVIDIA through multi-year commitments. Smaller AI companies face a “photonics shortage” similar to the HBM crisis—limited availability, higher costs, and delayed timelines.
How Silicon Photonics and Lasers Power AI Networking
Silicon photonics converts electrical signals to light pulses, transmits data over optical fibers at terabit speeds, then converts back to electrical signals. Lumentum manufactures the lasers—light sources that encode digital data. Coherent develops the silicon photonics integration—combining lasers, modulators, waveguides, and photodetectors on chip. Together they provide the complete optical networking stack AI clusters require.
Silicon itself can’t emit light efficiently due to its indirect bandgap. Photonics requires heterogeneous integration with III-V semiconductors such as indium phosphide and gallium arsenide, which can produce coherent light. This manufacturing complexity is why NVIDIA needed both companies. Lumentum’s 400mW continuous-wave lasers power the next generation of 1.6T and 3.2T Ethernet transceivers. Meanwhile, Coherent’s vertical integration—owning material, chip, and module production—provides scale advantages competitors can’t match.
NVIDIA CEO Jensen Huang stated: “AI has reinvented computing and is driving the largest computing infrastructure buildout in history. Together with Lumentum, NVIDIA is advancing the world’s most sophisticated silicon photonics to build the next generation of gigawatt-scale AI factories.” Lumentum CEO Michael Hurlston added: “This multiyear strategic agreement reflects our shared commitment to advancing the optics technologies that will power the next generation of AI infrastructure.”
What This Means for AI Infrastructure
Future AI infrastructure costs will include photonics premiums estimated at 15-30% of total system cost. Training speed will increasingly depend on optical networking bandwidth alongside GPU performance. Developers planning AI deployments in 2027 and beyond need to budget for these components and understand they’re not optional—copper can’t scale to the bandwidth AI requires.
Market growth reflects this shift. The global optical interconnect market for AI data centers was $9.94 billion in 2025, projected to hit $31.04 billion by 2033 at 15.3% compound annual growth. 1.6T Ethernet optical transceivers alone will reach a $15 billion market by 2030. Furthermore, data center bandwidth requirements are expected to grow 6x by 2030, while global data center power consumption hits 800 TWh by 2026—equivalent to Japan’s total electricity usage. Photonics offers the only scalable solution addressing both bandwidth and power constraints.
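Those market projections are internally consistent; a quick compound-growth check using the figures above:

```python
# Compound annual growth: future = present * (1 + rate) ** years.
base_2025 = 9.94                # $B, AI data center optical interconnect market, 2025
cagr = 0.153                    # 15.3% compound annual growth rate
years = 2033 - 2025
projected = base_2025 * (1 + cagr) ** years
print(f"projected 2033 market: ${projected:.1f}B")  # roughly $31B
```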
Availability matters as much as cost. NVIDIA’s capacity rights mean they receive components first. AMD, Intel, and smaller AI companies wait. This creates a competitive disadvantage that goes beyond GPU performance. Interestingly, a company with slower GPUs but superior optical networking can outperform one with faster GPUs bottlenecked by copper or limited optical availability. Infrastructure planning needs to account for component lead times stretching 18-24 months as new photonics fabs come online.
Cloud providers building AI infrastructure should evaluate optical networking bandwidth in vendor comparisons, not just GPU specifications. Training workload benchmarks need to measure end-to-end performance including data movement, not isolated compute metrics. The TCO calculation changes: faster optical interconnects reduce training time, lowering total GPU-hours despite hardware premiums. Indeed, a 20% utilization improvement from CPO pays for itself in six months on large-scale training jobs.
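A hedged sketch of that TCO argument—every dollar figure and the job profile below are hypothetical placeholders, not numbers from NVIDIA or the vendors; only the ~20% utilization gain comes from the article:

```python
# Toy TCO comparison: does a photonics premium pay for itself through
# higher GPU utilization on a long-running training job?
# All inputs below are hypothetical assumptions for illustration.
gpu_hour_cost = 2.50           # $/GPU-hour (assumed)
gpus = 10_000                  # job size (assumed)
baseline_months = 12           # duration on a copper-limited fabric (assumed)
utilization_gain = 0.20        # ~20% CPO utilization improvement (from article)

hours = baseline_months * 730
baseline_cost = gpus * hours * gpu_hour_cost
# 20% higher utilization finishes the same work in 1/1.2 of the time.
optical_cost = baseline_cost / (1 + utilization_gain)
savings = baseline_cost - optical_cost
print(f"training-cost savings: ${savings:,.0f}")
```

Under these assumptions the utilization gain alone recovers tens of millions of dollars per job, which is the sense in which CPO can "pay for itself" well inside the hardware's lifetime.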
Key Takeaways
- NVIDIA invested $4 billion ($2B each) in Lumentum and Coherent on March 2, 2026, securing multi-year photonics supply with capacity rights and purchasing commitments—locking up optical networking components competitors also need.
- Optical networking has become the AI bottleneck, not GPU compute. Copper maxes at 400 Gbps; silicon photonics delivers 1.6-12.8 Tbps while cutting interconnect power consumption 70%. GPT-6 training drops from 18 months to 12 months with optical.
- Supply chain consolidation mirrors NVIDIA’s HBM strategy. AMD has no photonics partnerships. Intel trails by years. Smaller AI companies face photonics shortages with limited availability and higher costs through 2027-2028.
- AI infrastructure costs will include 15-30% photonics premiums. Developers must budget for optical networking in 2027+ deployments and account for 18-24 month component lead times as new fabs scale production.
- Training performance increasingly depends on optical bandwidth alongside GPU specs. Cloud providers should benchmark end-to-end performance including data movement, not isolated compute metrics, as infrastructure TCO shifts toward interconnect efficiency.

