On March 20, 2026, Anthropic submitted court filings that revealed a damning contradiction. Internal emails show that one day after the Pentagon formally blacklisted Anthropic as an “unacceptable national security risk,” a Pentagon Under Secretary emailed CEO Dario Amodei saying the two sides were “very close” and “nearly aligned” on the exact issues the government now cites as proof Anthropic poses a threat. The email, dated March 4, directly contradicts the government’s public position.
The Contradiction Is Impossible to Reconcile
How can a company be “nearly aligned” on AI safety and simultaneously pose an “unacceptable national security risk”? The logic doesn’t hold. On March 3, the Pentagon finalized its supply-chain risk designation against Anthropic. On March 4—just one day later—Under Secretary Michael told Amodei they were “very close” on autonomous weapons and mass surveillance, the two issues the Pentagon now points to as evidence of a national security threat.
Either Anthropic is nearly aligned with the Pentagon’s requirements, or it is an unacceptable threat; both cannot be true at once. The email exposes the designation for what it really is: political retaliation after President Trump publicly declared the relationship “kaput.”
What Anthropic Won’t Allow
At the heart of the dispute are two red lines Anthropic refuses to cross. The company won’t let the Pentagon use Claude for mass domestic surveillance of Americans, and it won’t allow Claude to power fully autonomous weapons, meaning AI systems that can kill without human involvement.
The Pentagon wants an “all lawful use” clause with zero restrictions. Anthropic’s position is that current AI models aren’t reliable enough for autonomous killing, and that mass surveillance of citizens crosses ethical lines. The government’s response? Blacklist the company as a threat to national security.
But if the Pentagon’s own Under Secretary privately admitted they were “nearly aligned” on these exact issues, how credible is the threat designation?
The Pentagon Still Uses Claude While Calling It a Threat
The designation triggered immediate fallout. Defense contractors like Lockheed Martin told employees to stop using Claude immediately. Federal agencies directed staff to switch to other AI models. OpenAI conveniently announced a Pentagon deal the same day Anthropic was banned.
But here’s the irony: despite the blacklist, the Pentagon is still using Claude for military operations in Iran. Palantir’s CEO confirmed Claude remains in the company’s tools. Microsoft says Anthropic’s products are still available to customers. The designation only applies to direct Pentagon contracts, not all government work.
So Claude is simultaneously too dangerous for national security and actively deployed in military operations. This further exposes the designation as political theater.
An Unprecedented Use of Government Power
Anthropic is the only American company ever publicly designated a supply chain risk. The designation has traditionally been reserved for foreign adversaries like China and Russia. There’s no precedent for weaponizing it against a U.S. company with no foreign ties.
The timeline shows political pressure, not security concerns. February 26: Anthropic rejects the Pentagon’s “final offer.” February 27: Trump declares the relationship “kaput” and orders agencies to stop using Anthropic. March 3: The blacklist is finalized. March 4: The Pentagon privately says “nearly aligned.” March 9: Anthropic files two federal lawsuits.
Court Hearing Tomorrow Could Set Precedent
Tomorrow, March 24, Anthropic’s case goes before Judge Rita Lin in a San Francisco federal court. The company argues the supply-chain risk designation violates the First Amendment as retaliation for Anthropic’s public stance on AI ethics. The March 4 email is its smoking-gun evidence.
The government counters that Anthropic’s refusal to grant unrestricted access is a “business decision,” not protected speech, and the designation is a “straightforward national security call.” The March 4 email undermines that claim.
The stakes are high. If the Pentagon wins, the government can force AI companies to remove ethical guardrails by threatening them with blacklists. If Anthropic wins, companies can resist government pressure on ethics grounds. Senator Elizabeth Warren has already questioned the DOD about the blacklist, calling it “retaliation.”
For developers and tech professionals, this case sets precedent for how government can leverage “national security” to override corporate AI ethics. Anthropic’s position isn’t radical—refusing mass surveillance of Americans and autonomous killing machines is a baseline safety standard. The Pentagon’s March 4 email suggests they know this. The blacklist suggests they don’t care.