Pentagon Labels Anthropic Supply Chain Risk: First U.S. Firm

On March 5, 2026, the Pentagon officially designated Anthropic a “supply chain risk,” making it the first U.S. company in history to receive a label previously reserved for foreign adversaries like China’s Huawei. The designation came after Anthropic refused to grant the military unlimited access to its Claude AI system, specifically rejecting use cases involving autonomous weapons and mass domestic surveillance of Americans. Within hours, OpenAI announced a Pentagon deal to replace Anthropic in classified military environments, just six months after OpenAI president Greg Brockman donated $25 million to Trump’s MAGA Inc. super PAC.

This marks the AI industry’s first major ethical split over military use of AI. Every AI company now faces a binary choice: accept unlimited military access, or take an ethical stance and risk government retaliation.

First U.S. Company Ever Designated Supply Chain Risk

The “supply chain risk” designation was created to address national security threats from foreign adversaries. Huawei received it in 2019 over its ties to the Chinese government; ZTE, Kaspersky, and other foreign companies followed. Anthropic is the first American company ever designated: a tool built for geopolitical threats, now deployed in a contract dispute.

Sen. Kirsten Gillibrand called it “a dangerous misuse of a tool meant to address adversary-controlled technology.” Meanwhile, Usama Fayyad, Northeastern University’s senior vice provost for AI, warned it “will cause major economic, scientific and engineering damage as everyone freezes in fear and the U.S. falls behind other countries.” The precedent is chilling: the Pentagon can now label any company refusing its contract terms a “national security threat,” eliminating tech companies’ negotiating leverage entirely.

Autonomous Weapons and Surveillance: The Core Dispute

Anthropic wanted contractual guarantees that Claude would not power fully autonomous weapons (no human in targeting or firing decisions) or mass domestic surveillance of U.S. citizens. In contrast, the Pentagon demanded “unfettered access to Claude across all lawful purposes” with zero vendor-imposed restrictions. A senior Pentagon official stated: “The military will not allow a vendor to insert itself into the chain of command by restricting the lawful use of a critical capability.”

Dario Amodei, Anthropic’s CEO, refused: “We cannot in good conscience agree to allow the Department of Defense to use its AI models in all lawful use cases. AI-driven mass surveillance presents serious, novel risks to fundamental liberties.” He announced Anthropic will challenge the designation in court, calling it legally unsound.

OpenAI’s approach? Non-binding “policy red lines” instead of contractual prohibitions. Their Pentagon agreement lists the same concerns—no mass surveillance, no autonomous weapons, no high-stakes automated decisions—but as voluntary policies, not contract terms. Policies change. Contracts don’t. The difference matters.

OpenAI’s $25 Million Advantage

The timing exposed the political dynamics. OpenAI announced its Pentagon deal on March 5, the same day Anthropic received the supply chain risk designation and hours before U.S. strikes on Iran. The optics were brutal. Greg Brockman donated $25 million to Trump’s MAGA Inc. PAC in September 2025. Anthropic donated zero. Amodei pointed to the donation disparity as an unspoken factor in the Pentagon’s decision.

Sam Altman later admitted the announcement “looked opportunistic and sloppy,” adding, “I shouldn’t have rushed.” The public agreed: ChatGPT uninstalls surged 300%, and a “Cancel ChatGPT” movement grew among developers. Meanwhile, Anthropic saw its largest single day of signups ever, and Claude briefly became the #1 app in the U.S. Developers voted with their downloads.

Pentagon Still Uses “Risky” Claude in Combat

Here’s the contradiction: even as the Pentagon designates Anthropic a supply chain risk and orders federal agencies to phase out Claude within six months, the U.S. military continues to use Claude actively for operations in Iran. The Pentagon’s Maven Smart System relies on Claude for intelligence analysis, target identification, threat evaluation, and strike planning, and over 1,000 targets were struck with Claude-assisted analysis in the first 24 hours of the Iran conflict.

If Claude genuinely threatened national security, it wouldn’t be trusted for combat operations. The six-month phase-out period allows continued military use while punishing Anthropic for refusing contract terms. The hypocrisy is stark: too dangerous for federal agencies, yet reliable enough to select bombing targets. This exposes the designation as political retaliation rather than a legitimate security concern.

Industry Split: Every AI Company Must Choose

The Anthropic-OpenAI divide forces every AI company to declare itself publicly: ethics-first or government-friendly. There’s no middle ground. Lockheed Martin announced it will “follow the Pentagon’s direction” and drop Anthropic. Defense tech startups are fleeing Anthropic partnerships to avoid designation risk. Microsoft, Google, and Amazon hedged, clarifying that Claude remains available for non-DOD work.

Public sentiment overwhelmingly supports Anthropic. Developers largely rejected OpenAI’s opportunism: the 300% uninstall surge and the boycott movement show that values matter. This echoes 2018, when Google employees protested Project Maven and forced the company to limit its military AI work. Corporate values, not just capabilities, drive developer loyalty.

Key Takeaways

  • The Pentagon weaponized a tool designed for foreign adversaries (Huawei, ZTE) against an American company refusing its contract terms—the first domestic “supply chain risk” designation ever
  • Anthropic sought contractual prohibitions on autonomous weapons and mass surveillance; OpenAI accepted non-binding “policy red lines” that can change anytime
  • Political donations matter: Greg Brockman’s $25 million to Trump’s PAC preceded OpenAI’s Pentagon deal, which was announced just hours after Anthropic’s designation
  • The Pentagon continues using Claude for combat operations in Iran while simultaneously labeling it a supply chain risk, exposing the designation as political retaliation, not a legitimate security concern
  • Developers are voting with downloads: ChatGPT uninstalls surged 300% while Anthropic saw record signups, proving that corporate values influence provider choice as much as technical capabilities

This precedent affects every tech company considering government contracts. The AI industry now splits between companies with binding ethical commitments and those with flexible policies. Developers are watching—and choosing accordingly.
