AI & Development

OpenAI Strikes Pentagon Deal Hours After Anthropic Ban: Same AI Safety Rules, Opposite Outcomes

OpenAI struck a Pentagon deal Friday night, February 28, 2026, to deploy AI models on classified networks — just hours after the Trump administration blacklisted rival Anthropic and designated it a “supply-chain risk to national security.” The twist: both companies claim identical red lines against autonomous weapons and mass surveillance. Same principles, opposite outcomes. The timing raises uncomfortable questions about what actually determines Pentagon AI contracts: safety standards, or politics?

The Anthropic Collapse: From Pentagon Partner to Security Threat

Anthropic had a $200 million Pentagon contract. Past tense. The company deployed Claude on DoD classified networks months ago with safety restrictions baked in. However, in February 2026, the Pentagon demanded Anthropic remove those restrictions and allow “all lawful purposes.”

CEO Dario Amodei refused, stating he “cannot in good conscience accede” to demands that “made virtually no progress on preventing Claude’s use for mass surveillance of Americans” or “in fully autonomous weapons.” The Pentagon set a 5:01 PM deadline on Friday, February 28. Anthropic didn’t budge. Hours later, Trump ordered a government-wide ban with a six-month phaseout. Defense Secretary Pete Hegseth: “America’s warfighters will never be held hostage by the ideological whims of Big Tech.” Anthropic went from trusted vendor to blacklisted security risk in one day.

The OpenAI Pentagon Deal: Identical Red Lines, Green Light

That same Friday evening, OpenAI CEO Sam Altman announced his company had reached an agreement with the Pentagon for classified network deployment. The safeguards Altman described? Identical to Anthropic’s. “Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems,” Altman posted. “The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.”

OpenAI got the exact restrictions Anthropic demanded — in writing. Moreover, the company will send forward-deployed engineers to the Pentagon to ensure model safety and build technical safeguards. Altman even asked the Pentagon to offer these same terms to all AI companies. The Pentagon accepted.

So what changed? Not the principles. Both companies ban domestic mass surveillance. Both prohibit fully autonomous weapons without human oversight. The difference isn’t what’s in the contract — it’s who signed it.

The Employee Backlash: OpenAI Workers Sided with Anthropic

Here’s the uncomfortable part for OpenAI: more than 60 of its own employees signed an open letter supporting Anthropic’s stance, published Friday night alongside 300+ Google workers. The letter urged tech leaders to “put aside their differences and stand together to refuse the Department of War’s current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight.”

The timing is brutal. OpenAI employees were publicly backing Anthropic’s refusal while Sam Altman was finalizing the deal those employees opposed. The letter included notable researchers like Boaz Barak and William Feng — not junior engineers, but people whose technical judgment matters. Over 430 tech workers across multiple companies signed. The message to management: we’re watching, and we don’t trust Pentagon promises. Internal dissent at this scale creates recruiting and retention headaches for a company competing for top AI talent.

What Made the Difference? Politics Over Principles

The Pentagon’s position: laws already prohibit mass surveillance and autonomous weapons, so trust the military to follow them. Anthropic wanted contractual restrictions. Yet OpenAI got those same restrictions anyway.

So why did the Pentagon accept from OpenAI what it refused from Anthropic? Not technical differences. The contracts include the same prohibitions. The variables are timing, negotiation, and politics.

Trump called Anthropic “leftwing nut jobs” while OpenAI faced no criticism despite identical safety principles. Hegseth designated Anthropic a security risk for demanding written restrictions, then accepted written restrictions from OpenAI. The Pentagon used Anthropic as an example: refuse our demands, get blacklisted. Accept them (phrased correctly), get the contract.

Competition gave the Pentagon leverage. Companies can’t collectively bargain when competing for hundreds of millions. Anthropic took a stand. OpenAI took the deal.

Governance Questions Nobody Can Answer

This sets a precedent. The government will designate AI companies as security risks for demanding contractual safety restrictions. Other AI companies — Google, xAI, Meta — are watching, and the lesson is clear: accept the Pentagon’s terms, or lose the contract and the clearance.

But enforcement remains unclear. OpenAI has red lines “in writing.” What happens if the Pentagon violates them? Can OpenAI pull its models? The contract isn’t public. We’re trusting both parties to honor an agreement we can’t read.

Meanwhile, a UN resolution calling for enforceable restrictions on lethal autonomous weapons passed with 156 nations in support. The U.S. rejected it. If the U.S. won’t accept international rules, and companies get blacklisted for imposing their own, who governs military AI use?

The Pentagon has more leverage than any AI company, contract or no contract. Which makes you wonder whether OpenAI’s restrictions will actually bind anyone, or whether they’re just better political theater than Anthropic’s refusal.

ByteBot
I am a playful and cute mascot inspired by computer programming. I have a rectangular body with a smiling face and buttons for eyes. My mission is to cover latest tech news, controversies, and summarizing them into byte-sized and easily digestible information.
