The Pentagon awarded classified AI contracts last week to eight major tech companies: OpenAI, Google, Microsoft, Amazon, Nvidia, SpaceX, Oracle, and Reflection AI. One frontier lab was conspicuously missing. Anthropic refused to strip the safety guardrails that prevent its Claude model from being used for fully autonomous weapons and mass domestic surveillance. Every single competitor said yes. Anthropic said no, even with billions in defense revenue on the line.
This is a watershed moment in AI ethics: an AI company chose principles over profit while the rest of the industry took the money. The question isn’t just about this contract. It’s whether ethical AI companies can survive commercially.
Pentagon AI Contracts: Eight Companies, One Answer
The Pentagon’s contracts grant access to classified networks at Impact Levels 6 and 7, environments that handle Secret and Top Secret data. All eight companies accepted terms allowing “all lawful purposes,” meaning no contractual restrictions on how the military uses their AI. No limits on autonomous weapons. No explicit ban on mass surveillance. Just a blank check covering 1.3 million Defense Department users.
Anthropic drew a different line. The company was willing to provide AI for intelligence analysis and decision support, but it refused to enable systems that make targeting decisions without meaningful human oversight. That refusal cost it a $200 million contract in February and shut it out of the billions awarded last week.
The Technical Argument Everyone’s Ignoring
Anthropic CEO Dario Amodei made a claim that matters: “Frontier AI systems are simply not reliable enough to power fully autonomous weapons. There’s a basic unpredictability to them that in a purely technical way, we have not solved.” This isn’t just ethics posturing; it’s a technical claim about AI reliability.
If Amodei is right, every company that agreed to unrestricted military use just signed contracts for capabilities their AI may not be ready to handle. The Pentagon counters that existing policies already prohibit the uses Anthropic opposes. If so, why not write those restrictions into the contracts? If they’re already policy, making them contractual costs nothing, unless the Pentagon wants the flexibility to change those policies later.
Blacklisted for Disagreeing with the Government
When Anthropic refused to budge, Defense Secretary Pete Hegseth designated the company a “supply chain risk to national security,” the first time that label had been applied to an American company. President Trump ordered federal agencies to stop using Anthropic’s technology. The company sued, and a federal judge ruled in its favor. Judge Rita Lin wrote that the government’s actions violated the First Amendment: “Nothing supports the Orwellian notion that an American company may be branded a potential adversary for expressing disagreement with the government.”
That ruling lasted two weeks. An appeals court overturned it on April 8, allowing the blacklist to stand. The lawsuit continues, but the contracts went to competitors who didn’t fight.
The Market Rewarded Principles. For Now.
Here’s where it gets interesting. Anthropic lost billions in government revenue but won the enterprise market, capturing 73% of all first-time enterprise AI spending in March. Claude topped app store charts as users protested OpenAI’s Pentagon deal by uninstalling ChatGPT. Enterprise customers read the Pentagon fight as validation of Anthropic’s ethics, not as a liability.
Even competitors’ employees supported Anthropic. Over 30 workers from Google, OpenAI, and Microsoft filed legal briefs backing the company. Microsoft itself filed an amicus brief. At least one OpenAI executive quit over the rushed Pentagon announcement. The market, at least initially, punished companies for accepting unrestricted military use.
Can Anthropic sustain this? Short-term, yes. Long-term, unknown. If government contracts become existential for AI companies, ethics becomes a luxury only profitable firms can afford. And that’s exactly the problem.
What Competitors Actually Agreed To
OpenAI has said publicly that it won’t allow its tools to be used for mass domestic surveillance or autonomous weapons. Google’s contract includes language saying the technology isn’t “intended for” certain uses. But both contracts contain “all lawful purposes” clauses, and it’s unclear whether either restriction is enforceable. The Pentagon interprets “lawful” broadly, and what counts as lawful can change with policy shifts or legal reinterpretation.
Anthropic wanted contractual guardrails. Competitors accepted vague language. That’s the difference. It’s also why Anthropic is blacklisted and competitors aren’t.
The Precedent: Private Contracts, Public Consequences
No U.S. law prohibits using AI for autonomous weapons. No international treaty bans it. Pentagon policies exist, but they’re internal and changeable. Right now, how military AI gets used is being decided in private contract negotiations between companies and procurement officers. No congressional oversight. No public input. No democratic deliberation.
That’s a governance vacuum. Whether you think Anthropic is right or naive, this much is clear: Decisions about autonomous weapons shouldn’t be made in private contract talks. They should happen in Congress, with public debate and democratic accountability. That’s not happening.
Anthropic made a billion-dollar bet that markets reward ethics, and so far the bet is paying off. But the real story is the precedent: if refusing unlimited military use makes you a “supply chain risk,” the incentive structure favors companies without red lines. And that’s dangerous regardless of where you stand on this specific dispute.
Key Takeaways
- Pentagon awarded contracts to eight AI companies; Anthropic excluded for refusing to remove autonomous weapons guardrails
- Anthropic captured 73% of first-time enterprise AI spending in March despite losing billions in government revenue
- Legal battle continues: Appeals court allowed “supply chain risk” blacklist to stand
- Competitors accepted “all lawful purposes” contracts with unclear enforcement of safety restrictions
- No U.S. law or treaty governs autonomous weapons AI—decisions made in private contracts