The Pentagon designated Anthropic as a “supply chain risk” on March 5, marking the first time this national security label has been applied to a US-owned AI company. The unprecedented move came after Anthropic refused to accept contract terms permitting “all lawful use” of its Claude AI. Instead, the company insisted on two boundaries: no mass domestic surveillance and no autonomous weapons. Within a day of that refusal, OpenAI announced a Pentagon deal on the same terms Anthropic rejected.
This precedent-setting conflict tests whether private AI companies can set ethical boundaries with the government, or whether refusing its demands invites punitive action.
First US Company Designated National Security Risk
Supply chain risk designations were created to block foreign adversaries from infiltrating US systems. Chinese telecom giants like Huawei and ZTE have worn the label, as have more than 190 entities linked to Russia. Anthropic is something different – an American AI company founded by former OpenAI safety researchers – and applying the designation to such a company is without precedent.
The Pentagon’s March 5 notification triggered a 180-day countdown for military commanders to remove Claude from all systems by early September. The designation uses legal authority designed to prevent Chinese backdoors and Russian espionage – now weaponized against a US company over a contract disagreement.
This precedent carries danger. If the Pentagon can apply national security labels to companies that refuse broad AI access, what’s the limiting principle? The message to Silicon Valley is clear: comply with demands, or face consequences that could crater your business.
AI Ethics vs “All Lawful Use”
Anthropic drew two specific lines. First: Claude cannot be used for mass domestic surveillance of US citizens. Second: no autonomous weapons systems that select and engage targets without human oversight. These aren’t radical positions – they’re basic guardrails that most AI safety researchers would endorse.
The Pentagon countered with “all lawful use” language. This sounds reasonable until you realize “lawful” means whatever Congress and courts say it is. Mass surveillance? Lawful with a FISA court order. Autonomous targeting? Lawful in declared war zones under certain rules of engagement. The word “lawful” provides exactly zero ethical constraints.
Anthropic CEO Dario Amodei stated bluntly in his February 26 announcement: “We cannot in good conscience allow the Department of Defense to use our models in all lawful use cases without limitation.” At the same time, the company said it would greenlight Claude analyzing satellite imagery for human-reviewed targeting decisions. Autonomous kill chains and dragnet surveillance? Hard pass.
OpenAI Fills the Void Amid Backlash
On February 27 – the same day Trump ordered federal agencies to halt Anthropic contracts – OpenAI announced its own Pentagon deal. The timing raised eyebrows across the industry.
OpenAI claims similar boundaries: no mass surveillance, no autonomous weapons, no intelligence agency use. Nevertheless, they accepted the Pentagon’s contract framework that Anthropic rejected. Either OpenAI found magic compromise language, or someone’s safety guardrails are more cosmetic than concrete.
The backlash was swift. ChatGPT uninstalls spiked 295% day-over-day. Claude shot to number one in the App Store. Furthermore, OpenAI’s own robotics lead, Caitlin Kalinowski, resigned on March 7 over the deal. Even OpenAI CEO Sam Altman admitted the move “looked opportunistic and sloppy.”
Then Dario Amodei’s March 4 internal memo leaked, calling OpenAI’s approach “safety theater” and arguing “they cared about placating employees, and we actually cared about preventing abuses.” (Amodei apologized two days later, but the accusation resonated.) OpenAI’s deal creates competitive pressure for every other AI lab: accept broad government terms and keep contracts, or stand on principle and watch rivals capture the business.
Tech Giants and Workers Unite Against Pentagon
Despite cutthroat competition in AI, the industry rallied behind Anthropic. More than 30 employees from Google and OpenAI – including Google Chief Scientist Jeff Dean – filed an amicus brief on March 10 in their personal capacities, risking employer relationships to support a competitor’s lawsuit.
Their argument cuts deep: “Current frontier AI models are not yet reliable or transparent enough to be trusted with lethal targeting decisions.” They called the designation “an improper and arbitrary use of power” that threatens American AI competitiveness. Microsoft then filed a corporate brief on March 12, urging the court to block the designation – a remarkable step for a company with massive defense contracts.
Anthropic had filed suit on March 9, arguing the designation is legally unsound. The case will answer whether AI companies can refuse government demands on ethical grounds. Three outcomes loom: Anthropic wins and establishes company autonomy; the Pentagon prevails and chills AI safety advocacy; or a settlement creates a framework neither side loves but both can accept.
What’s Actually at Stake
Strip away the legal jargon. The fight boils down to this: Can a private company tell the US government “no, you can’t use our AI for that,” or does refusal equal disloyalty?
Anthropic argues its boundaries prevent misuse of powerful technology. The Pentagon argues operational flexibility requires broad access to any lawful application. Both positions have merit. However, using a supply chain risk designation – a tool meant for foreign threats – to punish a domestic company for drawing ethical boundaries crosses a line. That’s not oversight. That’s coercion.
The tech workers’ brief nails it: “Safety guardrails are a necessity, not an optional luxury.” If the government can force AI companies to accept “all lawful use” by threatening business survival, who actually controls AI governance? Not the companies building the technology. Not the researchers studying its risks. Rather, the entity with procurement power and national security labels.
This case will set the template for AI-government relations for years. The question isn’t whether Anthropic’s specific boundaries are correct – reasonable people can debate autonomous weapons policy. The question is whether companies can have boundaries at all, or whether government access trumps private ethics. The precedent we set here matters far beyond one AI lab and one contract dispute.

