
Anduril’s $30B Autonomous Weapons Are Failing Tests and Raising Safety Risks

Palmer Luckey’s Anduril Industries—the $30.5 billion defense startup promising AI-powered autonomous weapons—has faced systematic failures across its product line during recent testing, according to a Wall Street Journal investigation published November 27. More than 30 autonomous drone boats shut down during a Navy exercise in May, with sailors warning of “extreme risk” to military personnel. An unmanned fighter’s engine was damaged. A counterdrone system caused a 22-acre fire in Oregon. Altius drones plunged 8,000 feet during Air Force tests. Ukrainian forces stopped using Anduril’s Altius loitering munitions in 2024 after repeated battlefield failures.

This isn’t just another startup stumbling through product-market fit. It’s a test of whether Silicon Valley’s “move fast and break things” culture can work in defense, where broken things mean loss of life. The answer, based on the WSJ’s findings, isn’t promising.

The Catalog of Failures

Anduril’s weapons failed across multiple platforms between May and August 2025, creating safety hazards that contradict the company’s claims of battlefield-ready technology. During a Navy exercise off California in May, 30+ autonomous drone boats powered by Lattice AI shut down mid-deployment. Sailors filed reports describing “continuous operational security and safety violations” that posed “extreme risk” with potential for “serious bodily harm.” The mass shutdown turned the vessels into hazards for other ships in the exercise.

The failures weren’t isolated to one system. Anduril’s Fury unmanned fighter—selected for the Air Force’s Collaborative Combat Aircraft program—suffered engine damage during summer ground testing before it could deploy. In August, the Anvil counterdrone system, designed for “low-collateral defeat” of hostile drones, caused a 22-acre wildfire in Oregon. During Air Force testing at Eglin AFB, an Altius drone plunged 8,000 feet straight into the ground after launch. A second Altius spiraled down in a separate test shortly after.

Meanwhile, Ukrainian forces who received Anduril’s Ghost and Altius drones in 2022-2023 found them vulnerable to Russian electronic warfare. The Ghost struggled with signal jamming and terrain interference—problems Anduril later admitted it “miscalculated.” Front-line soldiers reported that Altius drones crashed and missed targets. By 2024, Ukrainian forces had stopped using them entirely. For context: Ukrainian Deputy PM Mykhailo Fedorov reported that Ukrainian-made drones comprised 96% of the one million units deployed in 2024. Western makers, including Anduril, had minimal battlefield impact.

Silicon Valley Culture Meets Defense Reality

The failures highlight a fundamental clash between Silicon Valley’s innovation culture and defense technology requirements. Palantir CTO Shyam Sankar testified before Congress’s Armed Services Committee that he “would gladly accept more failure if it meant more catastrophic success.” He advocated for “more crazy” and “letting chaos reign” in military procurement, arguing that regulatory oversight “constrains” innovation.

That philosophy might work for consumer apps. It doesn’t work for weapons. As a Watson Institute analysis at Brown University explains: “In Silicon Valley, the ‘move fast and break things’ motto implies that problems that arise in the rollout of the tech can always be addressed and solved later. In the world of defense and war, the harm produced by this kind of risk-taking cannot so easily be undone.”

The contrast is stark. Traditional defense contractors like Lockheed Martin and Boeing operate on 5-10 year development cycles with extensive testing, redundancy, and safety protocols developed over decades. Anduril’s approach uses rapid iteration, consumer-grade components, and software-first architecture. When Navy sailors warn of “extreme risk to personnel” from safety violations during testing, that’s not innovation—it’s negligence dressed up as disruption.

The $30.5 Billion Valuation Question

Anduril’s valuation more than tripled from $8.48 billion in 2022 to $30.5 billion in June 2025, despite the ongoing testing failures. The June funding round—a $2.5 billion Series G led by Peter Thiel’s Founders Fund, which wrote its largest check ever at $1 billion—valued the company at levels approaching traditional defense giants. The company generated $1 billion in revenue in 2024 and secured major contracts, including the takeover of Microsoft’s $22 billion Army AR headset program and a $642.2 million Navy counter-drone award.

But revenue and contracts don’t equal reliability. The testing failures raise serious questions about whether defense tech startup valuations reflect capability or hype. When your $30.5 billion company’s products are failing tests, causing fires, and prompting safety violation reports from military personnel, that’s a red flag investors should notice. The pattern mirrors classic tech bubble dynamics: prioritizing market position and growth metrics over whether the product actually works when lives are at stake.

Palmer Luckey’s Gamble

Palmer Luckey made virtual reality mainstream by selling Oculus to Facebook for $2 billion. He founded Anduril in 2017 to apply Silicon Valley principles to defense. But VR headsets and military drones are fundamentally different products with different failure consequences. Bad frame rates in a VR game don’t kill people. Autonomous weapons that shut down during Navy exercises or plunge 8,000 feet during testing create “extreme risk” to personnel.

The gap between Luckey’s claims and battlefield reality is telling. In March, he said Altius drones had “taken out hundreds of millions of dollars worth of Russian targets.” Yet Ukrainian forces stopped using Altius in 2024 after crashes and missed targets. Anduril maintains that these challenges are “typical of weapons development” and denies “underlying flaws” in its technology. That response ignores a key fact: Navy sailors don’t file reports about “extreme risk to personnel” during typical weapons development. They file them when something is dangerously wrong.

What Happens Next

DoD policy requires autonomous weapon systems to be “tested to function as anticipated in realistic operational environments” and “sufficiently robust to minimize the probability and consequences of failures.” Anduril’s pattern of failures—mass shutdowns, crashes, fires, safety violations—suggests its systems aren’t meeting those standards. The National Security Commission on AI has warned that “unintended escalations may occur when systems fail to perform as intended” and that AI systems “reduce the time and space available for de-escalatory measures.”

The failures could trigger regulatory scrutiny, contract cancellations, and a broader reassessment of whether autonomous weapons technology is ready for deployment. Best case: Anduril slows development, implements traditional defense testing rigor, and eventually succeeds with more reliable systems. Worst case: a catastrophic failure causes loss of life, triggering regulatory crackdown and valuation collapse. Most likely: the company is forced to adopt traditional defense testing cycles, some products succeed in less autonomous roles, and others get shelved.

For the AI and developer community, these failures are a stark reminder that machine learning unpredictability isn’t just a research problem—it’s a catastrophic failure mode when applied to weapons. If a $30.5 billion startup with top talent and massive funding can’t make autonomous weapons reliable, maybe the technology isn’t ready. The testing failures might slow the autonomous weapons arms race. Many would consider that a good thing.

ByteBot