On March 2, 2026, OpenAI CEO Sam Altman announced a deal to deploy the models behind ChatGPT on the Pentagon’s classified networks, hours after rival Anthropic refused the same contract and was banned from US government use. The backlash was immediate: at least 1.5 million users cancelled subscriptions or pledged to quit, ChatGPT app uninstalls spiked 295%, and Anthropic’s Claude surged to #1 on Apple’s App Store for the first time ever, dethroning ChatGPT. Even Altman admitted the deal “looked opportunistic and sloppy.”
This isn’t just Twitter outrage; it’s a market correction. The numbers show that ethics matter in AI tool selection, and OpenAI learned the hard way that you can’t serve both commercial users who value privacy and Pentagon contracts that compromise it.
The Numbers Prove This Is Real, Not Twitter Noise
QuitGPT.org, a grassroots boycott campaign, coordinated 1.5-2.5 million users to cancel subscriptions, delete accounts, or pledge to quit ChatGPT. Many participants were $20/month Plus and $200/month Pro subscribers, so this was direct revenue impact, not virtue signaling. App analytics confirmed ChatGPT uninstalls jumped 295%.
Meanwhile, Anthropic’s Claude climbed from outside the top 100 to #1 on Apple’s App Store in just days (March 1-3), dethroning ChatGPT for the first time. Claude also hit #1 in 15 other countries including the UK, Canada, France, and Singapore. Daily sign-ups broke all-time records every day that week, free users increased 60% since January, paid subscribers more than doubled in 2026, and daily active users surged 180% to 11.3 million.
Social media boycotts usually fizzle. This one had teeth—measurable app uninstalls, subscription cancellations, and competitor gains. Anthropic won millions of users without spending a dollar on ads. They just stood on principle.
OpenAI Admits Fault, Scrambles to Revise Contract Terms
In a March 3 “Ask Me Anything” session on X, Sam Altman conceded the Pentagon deal was mishandled: “We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy… We shouldn’t have rushed [the Pentagon deal] and [we’re] making some additions.”
OpenAI scrambled to revise contract terms, explicitly barring Defense Intelligence Components (NSA, National Geospatial-Intelligence Agency, Defense Intelligence Agency) from using ChatGPT for domestic surveillance. The revised safeguard states: “OpenAI’s AI systems shall not be intentionally used for domestic surveillance of U.S. persons and nationals.”
When a CEO publicly admits to screwing up, it signals genuine damage control. However, critics immediately spotted the loophole: “intentionally.” What about “accidentally” or “incidentally” using data for surveillance? What about other DoD agencies not covered by the intelligence ban? Vague safeguards don’t rebuild trust, especially when the full contract text remains unpublished.
Internal Dissent: Robotics Leader Resigned Over “Governance Failure”
Caitlin Kalinowski, OpenAI’s robotics team leader, who joined in November 2024 from Meta’s AR/VR division, resigned between March 7 and 9, citing “surveillance of Americans without judicial oversight and lethal autonomy without human authorization” as lines crossed by the Pentagon deal.
Her resignation statement on X and LinkedIn was carefully worded but damning: “AI has an important role in national security. But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are [lines that deserve more deliberation]… My issue was that the announcement was rushed without the guardrails defined. It’s a governance concern first and foremost.”
She emphasized this was “about principle, not people” and expressed “deep respect for Sam and the team.” Translation: OpenAI’s decision-making process failed, not its people. When a senior leader resigns over governance rather than personality conflicts or compensation, it signals deeper institutional problems.
External criticism is noise. Internal resignations are signal. Furthermore, Kalinowski’s concerns (surveillance without warrants, autonomous weapons without human control) are precisely what users feared. Many OpenAI employees also signed an open letter supporting Anthropic’s ethical stance, though this received less media attention.
Anthropic’s Quiet Win: Ethics as Competitive Advantage
Anthropic refused the Pentagon contract (roughly $200 million) when negotiations over safeguards failed, was designated a “supply-chain risk” and banned by the Trump administration from all federal agencies, and never publicly criticized OpenAI. The result: Claude hit #1 on the App Store, daily sign-ups broke records, paid subscribers more than doubled, and Anthropic positioned itself as the “ethical alternative” without saying a word.
The timeline exposes the hypocrisy. Anthropic negotiated for explicit protections against mass surveillance and autonomous weapons. Negotiations failed. Anthropic walked away from $200 million. Hours later, Trump banned Anthropic from federal agencies. Then OpenAI announced the Pentagon deal. Consequently, users fled to Claude.
Dario Amodei, Anthropic’s CEO, made no public statements attacking OpenAI; he simply stood firm on principle. Actions spoke louder. Claude’s App Store trajectory tells the story: outside the top 100 in late January, top 20 in February, #6 on March 1, #4 on March 2, #1 on March 3, and still #1 on March 15.
This proves ethics can be a competitive advantage, not just a PR liability. Anthropic gained millions of users by refusing to compromise. In contrast, OpenAI hemorrhaged users by taking the deal. For developers choosing AI tools, there’s now a clear “ethical alternative” that’s also technically competitive.
The Real Cost: You Can’t Play Both Sides
OpenAI tried to position itself as the “safe, trustworthy AI for everyone” while simultaneously enabling Pentagon surveillance and military applications. The 1.5 million user exodus proves this strategy backfired. Commercial users who value privacy won’t stick around when you compromise for government contracts.
OpenAI’s founding mission (2015) emphasized developing AI “for the benefit of humanity” and avoiding weaponization. The current reality: Pentagon contract for classified networks, revised terms that still allow non-intelligence DoD agencies to potentially use ChatGPT for surveillance, and user trust obliterated. Moreover, #CancelChatGPT and #QuitGPT trended across Reddit and X, with screenshots of cancelled $20/month and $200/month subscriptions flooding social media.
The core lesson: pick a lane. You can be the Pentagon’s AI tool or the people’s AI tool, but not both. Anthropic chose ethics over $200 million and won market share. OpenAI chose Pentagon money over user trust and lost at least 1.5 million users plus the #1 App Store ranking. The market spoke, and OpenAI didn’t like what it heard.