AWS CEO Matt Garman threw cold water on Elon Musk’s space data center ambitions at a tech conference in San Francisco on Tuesday, February 3, calling the idea “not economical.” The remarks came just one day after SpaceX acquired xAI in a $1.25 trillion merger to build orbital AI infrastructure. Asked about the viability of space-based compute, Garman didn’t mince words: “I don’t know if you’ve seen a rack of servers lately: They’re heavy.” The blunt assessment lands as Musk claims space will be “the lowest cost place for AI” within 2-3 years—a timeline much of the industry sees as wildly optimistic.
Musk’s Billion-Satellite Bet
SpaceX’s acquisition of xAI on February 2 wasn’t just another Musk headline—it came with FCC filings for up to one million satellites designed as orbital data centers. The plan targets 100 gigawatts of AI compute capacity added per year, with satellites operating at altitudes between 500 and 2,000 km and remaining in sunlight 99% of the time for near-constant solar power. At Davos last month, Musk called it “a no-brainer,” arguing that space solves AI’s biggest bottleneck: power and cooling constraints that terrestrial data centers can’t escape without “imposing hardship on communities.”
The pitch sounds compelling. Unlimited solar energy, free radiative cooling in vacuum, no land constraints, no environmental protests. However, Garman, who runs the world’s largest cloud provider, isn’t buying it.
Transportation Costs Remain Prohibitive
Garman’s critique centers on physics and economics. At Tuesday’s Cisco AI Summit, he stressed that launch costs remain “a major bottleneck” despite improvements. Getting payloads to orbit still costs thousands of dollars per kilogram, and data centers aren’t light—a fully loaded server rack can weigh over a ton. Even if SpaceX’s Starship achieves its ambitious $200/kg target, research suggests launch costs would need to fall below even that threshold before space compute makes economic sense.
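A back-of-envelope sketch makes the arithmetic concrete. The rack mass and the “today” price per kilogram below are illustrative assumptions, not figures from the article; only the $200/kg Starship target is quoted above.

```python
# Rough launch-economics sketch. RACK_MASS_KG and the "today" $/kg figure
# are illustrative assumptions; $200/kg is the Starship target cited above.
def launch_cost_usd(rack_mass_kg: float, price_per_kg: float) -> float:
    """Cost to put one server rack into orbit at a given $/kg."""
    return rack_mass_kg * price_per_kg

RACK_MASS_KG = 1_400  # assumption: a dense AI rack, well over a metric ton

today = launch_cost_usd(RACK_MASS_KG, 3_000)  # assumed current price point
target = launch_cost_usd(RACK_MASS_KG, 200)   # Starship's stated target

print(f"Per-rack launch, today:       ${today:,.0f}")   # $4,200,000
print(f"Per-rack launch, at $200/kg:  ${target:,.0f}")  # $280,000
```

Even the optimistic case adds a six-figure launch bill to every rack before a single chip powers on—and a radiation-limited hardware lifespan means paying it again on each replacement cycle.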
Moreover, there’s the maintenance problem. “Last I checked, humanity has yet to build a permanent structure in space,” Garman noted. Terrestrial data centers constantly replace failed drives, upgrade GPUs, and swap out components. In orbit? Hardware failures mean dead satellites until the next costly replacement mission. Radiation damage limits satellite lifespans to about five years, forcing continuous expensive replacements.
Technical Hurdles Musk’s Vision Doesn’t Address
Beyond launch costs, orbital data centers face fundamental physics constraints. Latency is the killer for most workloads—10-20 milliseconds minimum round-trip propagation makes real-time applications like trading, interactive AI, or streaming unviable. Additionally, radiation exposure degrades hardware quickly, and radiation-hardened processors cost $200,000 each versus $300 for Earth-based equivalents. Thermal management requires massive radiators that can account for over 40% of spacecraft mass, according to NASA studies.
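The latency floor falls straight out of the 500-2,000 km altitudes cited earlier. This sketch computes only the best case—a signal going straight up and back at the speed of light—before slant paths, inter-satellite hops, and ground routing push real round trips toward the 10-20 ms figure.

```python
# Best-case round-trip propagation delay to a satellite directly overhead.
# Real paths (slant angles, relay hops, terrestrial routing) only add delay.
C_KM_PER_S = 299_792.458  # speed of light in vacuum

def round_trip_ms(altitude_km: float) -> float:
    """Minimum round-trip time straight up and back, in milliseconds."""
    return 2 * altitude_km / C_KM_PER_S * 1_000

for alt in (500, 2_000):
    print(f"{alt:>5} km altitude: {round_trip_ms(alt):.1f} ms minimum round trip")
    # → 3.3 ms at 500 km, 13.3 ms at 2,000 km
```

Those are hard physical floors—no amount of engineering iteration brings them down, which is why latency-sensitive workloads are effectively off the table.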
These aren’t problems you solve by iterating faster or throwing money at them. Instead, they’re constraints that make space compute viable only for specific use cases: batch processing, model training, or archival storage where latency doesn’t matter.
The Bezos Irony
Here’s where it gets interesting: Jeff Bezos, who founded Amazon (Garman’s employer), is pursuing space data infrastructure through Blue Origin’s TeraWave constellation—5,400 satellites launching in 2027, designed to provide high-throughput networking for data centers. So while AWS dismisses space data centers as uneconomical, Bezos is betting on the network infrastructure to support them. The split highlights how opinions diverge on Musk’s timeline even within the Amazon ecosystem.
What Developers Should Bet On
The industry consensus is clear: space data centers won’t replace terrestrial cloud infrastructure anytime soon. AWS is spending $125 billion on ground-based data centers in 2026 alone, and Microsoft, Meta, and Google are collectively pouring $470 billion into traditional infrastructure. For developers building applications today, the practical advice is straightforward—plan for AWS, Azure, or GCP. Latency, reliability, and economics all favor terrestrial compute for the vast majority of workloads.
Musk has solved hard technical problems before, and maybe SpaceX cracks the economics with radical launch cost reductions. But betting on space compute being cheaper than ground-based within 2-3 years means betting against physics, current economics, and the judgment of the world’s largest cloud provider. Space data centers will likely carve out niches for batch processing and training over the next decade. But Garman’s assessment cuts through the hype: servers are heavy, space infrastructure is hard, and economics don’t add up yet. For now, the future of AI compute remains firmly grounded.