Trump Bans Anthropic from Pentagon: AI Safety Showdown

On February 27, President Trump ordered federal agencies to stop using Anthropic’s AI after the company refused to let the Pentagon use its Claude models for fully autonomous weapons or mass domestic surveillance without written guarantees. Defense Secretary Pete Hegseth branded Anthropic a “supply-chain risk to National Security,” a designation usually reserved for foreign adversaries like Huawei. But court filings revealed this week that a Pentagon official told Anthropic CEO Dario Amodei the two sides were “very close” to agreement one day after the formal designation, contradicting the administration’s public portrayal of an uncooperative company. A federal hearing on March 24 will decide whether the designation stands.

Pentagon Said “Very Close” One Day After the Designation: Why the Contradiction?

On March 4, 2026, one day after formally designating Anthropic a “supply-chain risk,” a Pentagon Under Secretary emailed Dario Amodei saying the two sides were “very close” on the autonomous weapons and mass surveillance issues. TechCrunch reported this week that the email surfaced in court filings just days before Tuesday’s federal hearing before Judge Rita Lin in San Francisco.

If the Pentagon believed a deal was imminent, why did Trump and Hegseth publicly attack Anthropic as unpatriotic? The contradiction suggests political motives beyond legitimate national security concerns. Legal experts say it may doom the Pentagon’s case: an agency cannot claim it is “very close” to agreement with a company while simultaneously branding that company a supply-chain risk. The March 4 email is the smoking gun that undermines the administration’s narrative.

What Anthropic Wanted: Written Assurances on Weapons and Surveillance

Anthropic asked for written assurances that its AI wouldn’t be used for two specific purposes: fully autonomous weapons that select and engage targets without human oversight, and mass domestic surveillance that aggregates Americans’ location data, browsing history, and communications without warrants. The Pentagon said it had “no plans” for either use but refused to put that in writing, demanding Anthropic accept “any lawful purpose” language instead.

Dario Amodei explained that “frontier AI systems are simply not reliable enough to power fully autonomous weapons.” On surveillance, he warned that AI enables assembling “scattered data into a comprehensive picture of any person’s life automatically and at massive scale” using data purchased from brokers without warrants. Anthropic isn’t objecting to military AI in general—just to specific high-risk uses where current technology isn’t ready or civil liberties are at stake.


Pentagon Refused to Put It in Writing—That’s a Red Flag

Here’s the core issue: if the Pentagon truly has no plans for autonomous weapons or mass surveillance of Americans, putting that in writing costs nothing. Verbal promises are worthless when AI systems remain in service for decades and administrations change every four years. The Pentagon’s refusal to provide written assurances is suspicious; it suggests the department wants to keep its options open.

Anthropic’s technical argument is sound: current AI systems aren’t reliable enough to make fully autonomous lethal decisions. And the legal loophole that lets agencies buy Americans’ data from brokers without warrants, combined with AI-powered analysis, is a real threat to civil liberties. If the Pentagon’s claims are true, why not memorialize them in contract terms? The refusal raises the question: what are they not telling us?

OpenAI Got the Same Deal—Political Theater, Not Substance

Hours after Trump banned Anthropic, OpenAI CEO Sam Altman announced a Pentagon deal with the same restrictions Anthropic had requested: prohibitions on fully autonomous weapons and mass domestic surveillance. The difference? OpenAI framed the restrictions as “citing applicable laws” rather than demanding “specific contract prohibitions.” Altman later admitted the announcement was “opportunistic and sloppy.”

MIT Technology Review reported that OpenAI’s “compromise” contains functionally identical restrictions but with different framing. Anthropic CEO Dario Amodei called OpenAI’s messaging “straight up lies,” arguing the substance was the same but the PR was better. This reveals the dispute was about political optics, not actual ethical differences.

If OpenAI got the same restrictions by framing them differently, then Anthropic’s “red lines” weren’t unreasonable; they were just poorly marketed. Developers should recognize this: the issue isn’t whether companies can impose ethics constraints, but whether they’re politically savvy about it. The Pentagon gave OpenAI what it refused to give Anthropic, exposing the ban as retaliation for not playing the PR game.


“Supply Chain Risk” Was Never Meant for This

The “supply chain risk” designation under 10 U.S.C. § 3252 is designed to protect sensitive military systems from foreign adversaries suspected of espionage—think Huawei, Kaspersky. Applying this authority to a domestic company in a contract dispute is unprecedented legal overreach. Legal experts at Lawfare predict the designation “won’t survive first contact with the legal system.”

The statute requires the Secretary to prove “less intrusive measures are not reasonably available.” The Pentagon’s March 4 email admitting the two sides were “very close” to resolution shows that a less intrusive measure was available: continued negotiation. Nearly 150 retired federal and state judges filed an amicus brief supporting Anthropic, alongside Google, Amazon, Apple, and Microsoft. Even 36 AI researchers from OpenAI and Google, including Google Chief Scientist Jeff Dean, filed a brief supporting Anthropic.

This broad tech industry support shows the principle matters beyond Anthropic. If the Pentagon can designate any company that demands contract terms as a “national security risk,” no tech company is safe. Negotiate tough? You’re labeled a threat. The March 24 hearing will determine whether this precedent stands or gets struck down.

Why Anthropic Is Right (and What Happens Next)

Anthropic is right to demand written assurances. Tech workers understand this intuitively: 796 Google workers and 98 OpenAI employees signed solidarity letters supporting Anthropic, even though they work at competing companies. The developer community recognizes that the principle matters: can you maintain ethical boundaries when your code is used by powerful institutions, or must you capitulate to “any lawful purpose” demands?

The Pentagon’s refusal isn’t about national security; it’s about control. Verbal assurances are meaningless: AI systems have decades-long lifespans, political administrations change, and only written contracts outlast both. If the Pentagon’s claims are made in good faith, writing them down should be trivial. Its refusal suggests it wants the flexibility to repurpose AI for exactly the uses Anthropic objects to.

The March 24 federal hearing will set precedent for the entire AI industry. If Judge Rita Lin overturns the designation, AI companies keep the right to impose ethical constraints on government use of their technology. If the designation stands, companies must choose between government contracts and ethical principles. Either way, developers should support companies that draw red lines; doing so protects the industry’s ability to maintain ethics against political pressure.

Key Takeaways

  • Pentagon’s March 4 email undermines its case: Claiming “very close” to agreement one day after designating Anthropic a “supply chain risk” exposes political motivations and contradicts the national security narrative.
  • Anthropic’s technical argument is sound: Current AI isn’t reliable enough for fully autonomous weapons, and mass surveillance using AI to aggregate warrantless data poses real civil liberties threats.
  • OpenAI got identical restrictions with better PR: The Pentagon gave OpenAI the same deal it refused Anthropic, proving the dispute is about political framing, not substance. “Citing laws” vs. “contract prohibitions” is a distinction without a difference.
  • Legal experts predict designation will fail: Nearly 150 retired judges and major tech companies support Anthropic. Supply chain risk authority was designed for foreign adversaries, not domestic contract disputes.
  • March 24 hearing sets industry precedent: The federal court will decide whether AI companies can maintain ethical red lines or must accept government “any lawful purpose” demands. This affects every developer working on AI.
