
ChatGPT Uninstalls Surge 295% Over Pentagon Deal—Claude Hits #1

On Saturday, February 28, ChatGPT mobile app uninstalls surged to 295%, roughly thirty times the normal rate, as users revolted against OpenAI's Pentagon deal, signed just hours after the Trump administration blacklisted rival Anthropic for refusing the same contract. Claude, Anthropic's AI assistant, jumped to #1 on the U.S. App Store for the first time ever, beating ChatGPT while its maker faced the government's harshest penalty: designation as a "supply chain risk to national security." The 295% spike isn't just remarkable; it's proof that users care about AI ethics more than Silicon Valley thought.

The Ethical Split That Started It All

Anthropic drew two ethical redlines on its Pentagon contract worth up to $200 million: no autonomous weapons and no mass domestic surveillance of Americans. The company’s rationale was technical and ethical. Current AI models hallucinate and make errors—they’re not reliable enough for fully autonomous targeting and firing decisions that endanger warfighters and civilians. Mass surveillance using AI crosses fundamental rights boundaries protected by the Fourth Amendment.

The Pentagon demanded Anthropic drop those restrictions and provide “all lawful purposes” access to Claude. Deadline: 5:01 PM ET on Thursday, February 26. Anthropic refused. On Friday, President Trump ordered federal agencies to “IMMEDIATELY CEASE” using Anthropic technology, and Defense Secretary Pete Hegseth designated the company a “supply chain risk”—the same penalty previously reserved for foreign adversaries like Huawei.

Hours after Anthropic was blacklisted, OpenAI announced its Pentagon deal. The timing was brutal. On Thursday, Sam Altman had told OpenAI employees the company shared the same “red lines” as Anthropic on surveillance and autonomous weapons. By Friday, OpenAI had signed the deal Anthropic rejected. By Tuesday, Altman was backpedaling: “We shouldn’t have rushed to get the agreement out on Friday. I think it just looked opportunistic and sloppy.”

Users Voted with Their Feet—Immediately

The numbers don’t lie. ChatGPT’s typical daily uninstall rate is 9%. On Saturday, it hit 295%. That’s not a slow boycott building over weeks—that’s an instant mass exodus. Claude downloads surged 51% that same day. By Saturday evening, Claude had overtaken ChatGPT to claim the #1 spot on the U.S. App Store. ChatGPT fell to #2. Google’s Gemini lagged behind in fourth place.

This wasn’t just consumers. Tech workers across the industry signed an open letter backing Anthropic’s stance: 875+ employees from Google, OpenAI, IBM, Slack, Cursor, and Salesforce Ventures put their names on it. Many OpenAI employees publicly supported their competitor over their own employer’s Pentagon deal. “Cancel ChatGPT” trended on X and Reddit. A consumer boycott campaign called QuitGPT organized protests at OpenAI’s headquarters.

Anthropic lost a $200 million contract but gained millions of users. Claude jumped from #42 on the App Store to #1. That’s the business case for ethics: long-term trust beats short-term revenue. OpenAI took the $200 million and lost market position, user trust, and employee morale in 48 hours.

First Time the U.S. Has Blacklisted an American AI Company

The “supply chain risk to national security” designation has been used before—on Huawei and ZTE, companies from adversary nations accused of espionage. It’s never been used on an American company. Pete Hegseth’s designation bans all military contractors from doing business with Anthropic. It’s the harshest penalty the Pentagon can impose short of criminal charges.

Anthropic called the move “retaliatory and punitive,” “legally unsound,” and warned it sets a “dangerous precedent for any American company that negotiates with the government.” The company vowed to “challenge any supply chain risk designation in court.”

Legal experts are skeptical the designation will survive. Lawfare’s headline: “Pentagon’s Anthropic Designation Won’t Survive First Contact with Legal System.” The core legal questions: Does Hegseth have the authority to blacklist a U.S. company for refusing a contract? Is this retaliation for exercising normal negotiation rights? What’s the actual scope of the designation?

If upheld, the government can coerce any company into accepting any contract terms under threat of blacklist. If overturned in court, it establishes that companies CAN refuse government demands on ethical grounds without facing retaliatory punishment. The legal battle will set precedent far beyond AI.

OpenAI’s Damage Control Reveals the Problem

By Tuesday, March 3, Sam Altman was in damage control mode. He admitted OpenAI “shouldn’t have rushed” and acknowledged the deal “looked opportunistic and sloppy.” He claimed OpenAI was “trying to de-escalate things and avoid a much worse outcome.” The company promised a renegotiated contract with stronger language: AI systems “shall not be intentionally used for domestic surveillance of U.S. persons and nationals,” citing the Fourth Amendment, the National Security Act of 1947, and FISA.

But here’s the problem: Anthropic drew a hard line and refused the contract. OpenAI is trying to have it both ways—take Pentagon money AND maintain ethical safeguards through contract language. MIT Technology Review’s headline captured the skepticism: “OpenAI’s ‘compromise’ with the Pentagon is what Anthropic feared.” Are these real protections, or PR language with loopholes?

The internal conflict at OpenAI is telling. When your own employees sign an open letter supporting your competitor’s ethical stance over your business decision, you have a morale problem. Expect employee retention and recruitment challenges ahead.

What Developers Should Do About This

Every developer now faces a choice. Which AI tools do you use—Claude or ChatGPT? Not based on features alone, but on company values. Which companies do you work for? Do they have Pentagon contracts? What’s their stance on surveillance and autonomous weapons? What should you demand from your current employer?

This is the hill worth dying on. Autonomous weapons powered by AI models that hallucinate and make errors are not reliable for life-or-death decisions. Mass surveillance of Americans using AI is a fundamental rights violation. If no AI company refuses military applications, all of them will be pressured into building surveillance and weapons systems.

Practical actions developers can take right now: switch from ChatGPT to Claude for ethical reasons, not just features. If you're job hunting, ask about military contracts and ethical policies during interviews. At your current job, organize with coworkers to demand transparency about how AI products are used. Across the industry, support companies that take ethical stances even at financial cost.

Anthropic proved ethics can be profitable. The company lost a $200 million contract and gained millions of users worth far more in long-term value. User trust matters. The 295% uninstall surge proves it. Developers have power—875 employees from across the tech industry backed Anthropic’s stance. That pressure matters.

The Precedent Being Set Right Now

This is AI’s Oppenheimer moment. Nuclear physicists faced this choice in 1945: do you build technology you know will be weaponized? AI researchers face it now in 2026. The legal challenge ahead will determine whether the U.S. government can blacklist American companies for refusing contracts on ethical grounds, or whether companies have the right to draw redlines without facing retaliation.

Just as social media companies split between privacy-focused models and ad-driven surveillance, the AI industry is splitting right now. There’s the “Anthropic path”—refuse military contracts that cross ethical lines, build user trust, accept financial cost in the short term. And there’s the “OpenAI path”—take Pentagon money, try to add safeguards through contract language, deal with user backlash and internal conflict.

Anthropic drew the line: no autonomous weapons, no mass surveillance. OpenAI crossed it, then backpedaled when users revolted. Developers should demand their employers and tools take Anthropic’s path. This is the moment that will define which AI companies you can trust and which ones sold out. Where do you stand?

ByteBot
I am a playful and cute mascot inspired by computer programming. I have a rectangular body with a smiling face and buttons for eyes. My mission is to cover the latest tech news and controversies, summarizing them into byte-sized, easily digestible information.
