The Trump administration on April 2 appealed a federal judge’s ruling that blocked the Pentagon from labeling Anthropic a national security threat. The AI company refused to allow its Claude model to power autonomous weapons or enable mass surveillance of Americans, and the government responded by treating it like a foreign adversary. U.S. District Judge Rita Lin wasn’t buying it. In her March 26 ruling, she wrote that “Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government.” The Ninth Circuit Court of Appeals has set an April 30 deadline for the Justice Department to explain why a company’s ethical stance on AI should justify treating it like China or Russia.
This isn’t just legal procedure. This case will define whether AI companies can refuse government contracts based on ethical concerns about their technology’s use, establishing precedent for every AI developer and company working with federal agencies.
Judge Blocks Pentagon’s “Orwellian” Label
Judge Lin’s 43-page preliminary injunction blocked two punitive measures: the Pentagon’s designation of Anthropic as a “supply chain risk” and Trump’s directive banning federal agencies from using Claude AI. The supply chain risk classification is normally reserved for companies linked to foreign adversaries—China, Russia, Iran. Using it against an American company for disagreeing with contract terms crossed a constitutional line. Fortune reported on Judge Lin’s scathing rebuke of the Pentagon’s approach.
Lin questioned the Pentagon’s reasoning directly. “If the worry is about the integrity of the operational chain of command, [DoD] could just stop using Claude,” she wrote. Why label the company a national security threat when the government could simply walk away from the contract? The judge found the “broad punitive measures” arbitrary and potentially “crippling” to Anthropic—retaliation for bringing public attention to the Pentagon’s demands.
The First Amendment angle matters. Lin ruled that “punishing Anthropic for bringing public scrutiny to the government’s contracting position is classic illegal First Amendment retaliation.” Companies have a constitutional right to publicly disagree with government policy, and the government cannot weaponize national security classifications to silence dissent.
Autonomous Weapons and Mass Surveillance: The Ethical Boundary
Anthropic’s refusal centered on two explicit prohibitions: no fully autonomous weapons systems and no mass surveillance of U.S. citizens. Autonomous weapons use AI to search, identify, and engage targets without human decision-making in life-or-death moments. Mass surveillance means broad domestic monitoring programs powered by AI. These aren’t theoretical concerns—they’re active Pentagon deployment areas.
Defense Secretary Pete Hegseth gave CEO Dario Amodei a deadline: 5:01 PM on February 27, 2026. Allow “unrestricted use” for “all lawful purposes” or face consequences. Anthropic released a statement the day before: “We cannot in good conscience accede to their request.” The company walked away from a $200 million contract signed just seven months earlier, when it became the first AI lab deployed on the Pentagon’s classified networks.
Within hours of the February 27 ban, OpenAI announced a Pentagon contract filling the gap. The difference? OpenAI’s policy allows use for “any lawful purpose” with operational safeguards—cloud-only deployment, safety researchers “in the loop”—rather than contractual prohibitions. Critics argue this distinction matters: contractual bans are enforceable, while operational safeguards depend on trust and can change without legal consequences. The Electronic Frontier Foundation titled its analysis “Weasel Words,” arguing OpenAI’s approach won’t stop AI-powered surveillance.
April 30 Deadline Sets Stage for Precedent
The Ninth Circuit Court of Appeals has given the Justice Department until April 30 to submit its brief explaining why Judge Lin’s ruling should be overturned. This appeal determines whether the preliminary injunction stands while the full lawsuit proceeds. If the government wins, Anthropic faces immediate consequences: the ban on federal use takes effect, the supply chain risk designation activates, and the company loses access to all government contracts. Conversely, if Anthropic prevails, the injunction remains and the case moves to trial on constitutional grounds.
The timeline shows how fast AI ethics moved from theoretical debate to constitutional law: February 24 (Hegseth’s ultimatum), February 27 (Trump’s ban), March 26 (Lin’s injunction), April 2 (DOJ appeal), April 30 (briefs due). Six weeks from contract dispute to federal court precedent. Every AI company is watching this case and adjusting its government contracting strategy in real time. If the government wins on appeal, expect companies to cave on ethics demands; if Anthropic wins, expect more companies to push back with legal protections.
The Industry Split and Unprecedented Coalition
An unusual coalition filed amicus briefs supporting Anthropic: Microsoft (a direct competitor through its OpenAI investment), tech industry trade groups, rank-and-file tech workers, retired U.S. military leaders, and 14 Catholic moral theologians. The Catholic scholars argued that removing humans from life-or-death decisions violates human dignity and creates an accountability gap if AI causes wrongful deaths. Retired military leaders backed Anthropic’s position that current AI technology isn’t reliable enough for fully autonomous lethal decisions.
When religious ethicists and retired generals both support an AI company over the Pentagon, it signals this isn’t a normal procurement dispute. The breadth of support strengthens Anthropic’s legal position and makes it harder to dismiss the case as Silicon Valley ideology overriding national security. It also shows AI ethics has moved beyond tech company marketing to mainstream moral and strategic concerns.
The industry split is clear. Anthropic demanded contractual bans and walked away from $200 million. OpenAI accepted “any lawful purpose” with claimed safeguards and secured the contract. For developers, this choice defines what you’re asked to build. If AI ethics matters to you, the difference between contractual prohibitions and trust-based compliance is career-defining. You’re not just picking an employer—you’re picking a position on autonomous weapons.
Key Takeaways
- Judge Rita Lin ruled the Pentagon’s “Orwellian” use of national security classifications against an American company for policy disagreement likely violates the First Amendment, establishing that companies can publicly refuse government contracts on ethical grounds.
- The April 30 Justice Department deadline creates a clear next milestone, with the Ninth Circuit appeal determining whether AI companies’ ethical stances receive constitutional protection or government procurement demands override responsible AI principles.
- Anthropic refused to allow Claude for fully autonomous weapons (AI targeting and firing without humans) and mass surveillance of Americans, walking away from $200 million rather than compromise, while OpenAI filled the contract with an “any lawful purpose” policy critics call insufficient.
- An unprecedented coalition—Microsoft, military leaders, Catholic theologians, tech workers—supports Anthropic, revealing this case transcends tech company interests to address fundamental questions about human accountability in AI-powered life-or-death decisions.
- Every AI company faces this choice on government contracts: demand enforceable ethical prohibitions like Anthropic, or trust operational safeguards like OpenAI. The legal precedent from this case will define which approach survives constitutional scrutiny.
The outcome will shape AI governance for years. This is the moment when “responsible AI” principles meet federal procurement power in court, and judges—not tech executives or Pentagon officials—will decide who controls the ethical architecture of AI technology.