Anthropic became the first American AI company designated a “supply chain risk” by the Pentagon on March 5, 2026, a label historically reserved for foreign adversaries such as China’s Huawei. Four days later, the company sued the Department of Defense, alleging First Amendment retaliation after it refused to let the military use Claude for mass domestic surveillance and fully autonomous weapons. Yet on March 4, the same week it moved to ban the technology as a security threat, the Pentagon used Claude to coordinate airstrikes on Iran.
The Two Red Lines the Pentagon Couldn’t Accept
Anthropic drew two non-negotiable boundaries. First, no mass domestic surveillance. The company refuses to let Claude analyze massive datasets to create detailed dossiers on Americans’ political views, associations, sex lives, and browsing histories. Second, no fully autonomous weapons. Claude won’t make final targeting decisions without human oversight. CEO Dario Amodei defended the stance in a CBS News interview: “We are patriotic Americans. The red lines we have drawn, we drew because we believe that crossing those red lines is contrary to American values.”
The Pentagon demanded “all lawful uses” without restrictions. Defense officials claim they are not interested in surveillance or autonomous weapons, yet they refuse to rule either out contractually. Negotiations collapsed, and on February 27 Defense Secretary Pete Hegseth initiated the supply-chain risk designation process that was finalized on March 5.
Pentagon Banned Claude, Then Immediately Used It in Combat
Here’s where it gets absurd. On March 4, 2026, the Pentagon used Claude to coordinate airstrikes against Iran during the largest American military campaign since the 2003 Iraq invasion, according to The Washington Post. This happened after the February 27 ban order and while the supply-chain risk designation process was underway. Pentagon CTO Emil Michael acknowledged the contradiction: “You can’t just rip out a system that’s deeply embedded overnight.” Defense officials called Anthropic “indispensable” in private briefings, even as they publicly stigmatized the company as a national security threat.
If Claude truly poses a supply-chain risk, why use it to conduct live military operations? The answer exposes the designation as punitive retaliation, not a genuine security concern. The Pentagon is punishing Anthropic for refusing unlimited access, not because the technology threatens national security.
First American Company Treated Like Huawei
The supply-chain risk designation has historically targeted foreign adversaries: Huawei and ZTE from China, Kaspersky from Russia. Notably, no American company has ever received this label, as reported by TechCrunch. Anthropic isn’t a foreign entity—it’s based in San Francisco, founded by former OpenAI safety leaders Dario and Daniela Amodei. The designation blacklists Anthropic from government contracts, prevents federal agencies and military contractors from using Claude, and stigmatizes the company as a security threat.
The financial damage is severe. Anthropic lost more than $150 million in annual DOD contracts. A partner switched from Claude to a competitor for an FDA deployment, eliminating a $100 million pipeline. Negotiations with financial institutions worth a combined $180 million have stalled. The company’s CFO estimates total 2026 revenue harm at “hundreds of millions to billions.” More than 100 enterprise customers have contacted Anthropic expressing “deep fear, confusion and doubt” about association with a Pentagon-banned company.
Tech Industry Rallies Behind Anthropic
Competitors are supporting Anthropic’s legal fight. More than 300 Google employees and 60 OpenAI employees signed an open letter urging their companies to adopt Anthropic’s red lines, as reported by Fortune. The letter, titled “We Will Not Be Divided,” demands leadership “stand together to continue to refuse the Department of War’s current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight.”
Thirty scientists from OpenAI and Google DeepMind, including Google chief scientist Jeff Dean, filed amicus briefs supporting Anthropic. An OpenAI spokesperson confirmed the company shares Anthropic’s red lines on surveillance and autonomous weapons. The unity is remarkable: direct competitors are backing Anthropic because the outcome affects the entire AI industry.
What’s at Stake
Anthropic filed two lawsuits on March 9, in the U.S. District Court for the Northern District of California and the D.C. Circuit Court of Appeals, CNN reports. The company alleges the designation constitutes unlawful First Amendment retaliation for protected speech advocating AI safeguards. Legal experts predict the designation won’t survive a court challenge. The Foundation for Individual Rights and Expression argues the government is violating Anthropic’s constitutional rights, and Lawfare notes that no precedent exists for designating a domestic company this way over a contract dispute.
If Anthropic wins, it sets a precedent that AI companies can refuse government demands on ethical grounds. If the Pentagon wins, the chilling effect could drive top researchers to companies outside U.S. jurisdiction. Many developers refuse to work on mass surveillance or autonomous weapons systems, and losing that talent would undermine American AI leadership more than any contract dispute.
The irony remains: the Pentagon calls Anthropic too dangerous to use, then uses Claude to bomb Iran. You can’t claim a company threatens national security while depending on its technology for your largest military operation in two decades. That’s not policy. That’s punishment.

