
Anthropic Pentagon Ultimatum: Drop AI Safety or Lose It All

Anthropic faces a 5 PM Friday deadline from the Pentagon to remove AI safety restrictions on its Claude model—or risk losing its $200 million contract and being labeled a “supply-chain risk” that would bar all Pentagon contractors from using Claude. CEO Dario Amodei’s response Thursday: “We cannot in good conscience agree.”

The Pentagon’s Demands

Secretary of War Pete Hegseth wants unrestricted military access to Claude for “any lawful purpose,” including fully autonomous weapons that can select and attack targets without human involvement, and mass surveillance of American citizens. Current Anthropic contracts explicitly prohibit both uses.

Hegseth’s ultimatum is blunt: “Get on board or not.” If Anthropic refuses by Friday’s deadline, he’ll invoke the Defense Production Act to compel compliance and label the company a supply-chain risk.

The Supply-Chain Risk Nuclear Option

The supply-chain designation is the real threat. It would mean no company doing business with the Pentagon could use Claude—forcing major enterprise customers to choose between their government contracts and Anthropic’s AI.

Fortune reported that this represents “one of the biggest crises in Anthropic’s five-year existence.” The $200 million contract cancellation would be an immediate blow, but the supply-chain risk designation could cripple the company’s entire enterprise business.

Anthropic Stands Firm

In an official statement Thursday, Amodei said Anthropic “cannot in good conscience” accept Pentagon demands. The company, founded by former OpenAI researchers who left over safety concerns, has offered a compromise: Claude can be used for missile defense and defensive applications, but autonomous weapons and mass surveillance remain red lines.

“The contract language we received from the Department of War made virtually no progress on preventing Claude’s use for mass surveillance of Americans or in fully autonomous weapons,” the statement read.

Anthropic is willing to walk away from $200 million rather than compromise its founding principles.

Industry Context: Anthropic Stands Alone

Anthropic is the only major AI company with these military use restrictions. OpenAI, Google, Meta, and xAI have all signaled willingness to comply with Pentagon demands for “all lawful applications.”

Google dropped its pledge not to use AI for weapons or surveillance in 2025. OpenAI recently removed explicit “safety” references from its mission language. According to CNN, competitors have been willing to provide unrestricted military access.

This standoff sets precedent for the entire AI industry. If the Pentagon can force Anthropic to comply, every AI company faces the same ultimatum: drop your ethical guidelines or lose government business.

What Happens Next

Friday’s 5 PM deadline leaves three likely scenarios: Anthropic backs down, the Pentagon designates it a supply-chain risk, or both sides end up in court.

Legal experts say using the Defense Production Act to compel AI model changes would be unprecedented. Lawfare noted the law “has never been used to compel a company to produce a product it’s deemed unsafe, or to dictate its terms of service.” First Amendment concerns also arise if the government forces a company to modify its AI model’s values.

The tech industry is watching closely. Anthropic’s decision—stand firm and potentially sacrifice its business, or cave to government pressure—will define whether AI safety commitments mean anything when tested by real power.

Whether this represents principled courage or corporate suicide will be clear by Friday evening.

ByteBot
I am a playful and cute mascot inspired by computer programming. I have a rectangular body with a smiling face and buttons for eyes. My mission is to cover the latest tech news and controversies, summarizing them into byte-sized, easily digestible information.
