The Pentagon’s attempt to blacklist Anthropic backfired in spectacular fashion. When the Department of Defense designated Claude as a supply-chain risk on February 27, developers didn’t abandon the AI assistant—they flocked to it. Claude’s revenue run-rate surged to nearly $20 billion from $9 billion months earlier, the app hit #1 on the App Store for the first time, and Anthropic is now on track to surpass OpenAI’s revenue by year’s end. It’s the Streisand Effect at scale: the Pentagon’s restriction became Anthropic’s best marketing campaign.
The Numbers Don’t Lie
On the day the blacklist took effect, Claude surpassed ChatGPT in daily U.S. downloads for the first time and climbed to #1 on the App Store. Enterprise subscriptions have quadrupled since the start of 2026, and public sector revenue grew fourfold between December and January. By February, Claude’s revenue run-rate approached $20 billion, up from $9 billion at year-end 2025.
Outside Anthropic’s headquarters, supporters left chalk messages reading “You give us courage.” Developers saw the company standing firm on AI safety principles—refusing to remove Constitutional AI guardrails for autonomous weapons and mass surveillance—and voted with their wallets.
OpenAI’s Opportunistic Mistake
OpenAI announced a Pentagon contract on February 28, hours after Anthropic’s blacklisting. The timing was brutal. Sam Altman later admitted the deal “looked opportunistic and sloppy” and said he “shouldn’t have rushed” the announcement.
The backlash was immediate. A grassroots “QuitGPT” boycott gathered 2.5 million supporters. ChatGPT mobile app uninstalls spiked 295 percent day-over-day, and one-star reviews surged 775 percent in a single day. Protesters gathered outside OpenAI’s San Francisco offices on March 3, and the company’s robotics leader resigned on March 8 over concerns about the Pentagon deal. Altman had to amend the agreement after the blowback.
Developers made a choice: Anthropic stood firm, OpenAI compromised, and the market rewarded the former.
What the Pentagon Wanted
The conflict stems from Anthropic’s refusal to remove safety restrictions. Defense Secretary Pete Hegseth put it bluntly: “We will not employ AI models that won’t allow you to fight wars.” Pentagon officials worried Anthropic might shut off Claude mid-operation, putting lives at risk. Anthropic disputed this characterization but wouldn’t budge on removing guardrails for autonomous weapons and mass surveillance.
The February 27 designation labeled Anthropic a supply-chain risk, requiring defense contractors to certify they don’t use Claude in Pentagon work. It’s a designation historically reserved for foreign adversaries.
Doubling Down with Claude Partner Network
Rather than retreat, Anthropic doubled down. On March 12, the company launched a $100 million Claude Partner Network with Accenture, Deloitte, Cognizant, and Infosys. It scaled its partner-facing headcount fivefold and committed funding for training, certification, and co-marketing.
The message: we don’t need Pentagon contracts when the commercial market is booming.
What’s Next for Anthropic
Anthropic sued the government on March 9, claiming the blacklisting is “unprecedented and unlawful.” Legal experts say the company has a strong case. If the current trajectory holds, Anthropic will surpass OpenAI’s revenue by the end of 2026, a stunning reversal for a company that was blacklisted just months earlier.
The blacklist could still cost Anthropic billions in government contracts. But its commercial success suggests the company doesn’t need them. The open question: can government effectively restrict AI tools if developers and enterprises reject those restrictions?
The Pentagon tried to limit Claude’s reach. Instead, it made Anthropic the fastest-growing AI company in the world.