Anthropic sued the Pentagon today, challenging the Department of Defense’s unprecedented designation of the company as a “supply chain risk,” the first time the label has been applied to a U.S. company. The lawsuit, filed March 9 in federal court in California, alleges First Amendment violations and government overreach. At the core of the dispute: Anthropic refused to grant the Pentagon “unfettered access” to its Claude AI for autonomous weapons and mass domestic surveillance.
This isn’t about red tape or paperwork. It’s about whether AI companies have the right to refuse to build weapons that kill without human oversight.
The Autonomous Weapons Red Line
The Pentagon wanted Claude for “any lawful use,” including fully autonomous weapon systems that select and engage targets without human involvement. Anthropic refused, demanding contractual guarantees that Claude wouldn’t be used for fully autonomous weapons or mass surveillance of Americans. When negotiations broke down, the Pentagon designated Anthropic a supply chain risk on March 5.
The dispute escalated around a nuclear strike scenario. According to Futurism, the Pentagon wanted Claude to support response decisions during an incoming nuclear strike: exactly the kind of high-stakes, life-or-death scenario where an AI prone to hallucination has no business operating autonomously.
Anthropic drew a hard line. The Pentagon rejected it and blacklisted the company. Now a lawsuit will define whether AI companies can set ethical boundaries or must bend to government demands.
First Amendment Test Case for the AI Era
Anthropic’s lawsuit claims the designation violates the company’s First Amendment rights, arguing the government is retaliating against Anthropic for its public advocacy of AI safeguards. The supply chain risk label has historically been reserved for foreign adversaries like Huawei and ZTE; it had never been applied to a U.S. company until now.
The lawsuit names more than a dozen federal agencies as defendants, including the DOD, Treasury, State Department, and General Services Administration. Anthropic alleges the actions are “unprecedented and unlawful” and that the company was denied adequate due process.
Legal experts are skeptical that the Pentagon’s designation will survive judicial review. Lawfare’s headline puts it bluntly: “Pentagon’s Anthropic Designation Won’t Survive First Contact with Legal System.”
If Anthropic wins, AI companies gain First Amendment protection to advocate for restrictions without fear of government retaliation. If the Pentagon wins, companies must choose between their ethics and their government contracts. There is no middle ground.
OpenAI and Google Seize the Opening
The supply chain risk designation requires all DOD contractors to certify that they don’t use Claude in Pentagon work, and defense tech companies are dropping Claude immediately. CNBC reports that more than ten defense contractors “have backed off of their use of Claude for defense use cases and are in active processes to replace the service with another one.”
Palantir, which relies on government contracts for 60% of its U.S. revenue, is “heavily embedded” with Anthropic’s technology and faces short-term operational disruptions, according to Piper Sandler analysts.
The timing is telling: OpenAI signed its Pentagon deal on February 27, the same day Trump ordered federal agencies to cease using Anthropic. OpenAI’s contract includes “red lines” (no domestic mass surveillance, no autonomous weapons without human control), but they’re flexible enough to satisfy the Pentagon. Sam Altman later admitted the deal “looked opportunistic and sloppy,” but at least OpenAI kept some guardrails.
Google, meanwhile, updated its ethical guidelines in 2026 and dropped its pledge not to use AI for weapons development. The companies willing to compromise are capturing the contracts Anthropic refuses.
The Tech Community Remains Divided
Anthropic saw over 1 million people sign up for Claude daily during the past week, a show of grassroots support for the company’s ethics stance. Major tech groups representing Google, Apple, Microsoft, and Nvidia sent letters backing Anthropic and urging the Pentagon to reverse the designation.
Nevertheless, critics say Anthropic is being naive. The pragmatist camp argues national security needs AI, and Anthropic’s absolutism just hands market share to less principled competitors. TechCrunch asks: “Will the Pentagon’s Anthropic controversy scare startups away from defense work?”
There’s no clean answer. Fully autonomous lethal decisions are too high-stakes for AI systems that make mistakes, but individual companies can’t solve this alone: AI in warfare needs government regulation, not corporate ethics policies.
What Happens Next
The lawsuit will take one to two years to resolve, but the precedent will shape AI-government relationships for decades. In the near term, a preliminary injunction could temporarily block the designation while the case proceeds.
This is a watershed moment. The outcome will determine whether AI companies have the autonomy to set ethical guardrails or must comply with government demands to keep contracts. Developers should watch this closely: the boundaries defined here will shape the industry for years.
Anthropic chose principles over Pentagon money. Whether that’s brave or reckless depends on who wins in court.

