On April 14, 2026, Anthropic quietly rolled out government ID and live selfie verification for select Claude users—making it the first major consumer AI chatbot to require such invasive identity checks. The timing couldn’t be more ironic: just eight weeks after rejecting a $200 million Pentagon contract over surveillance concerns, the company now demands passport scans processed by Persona, the same third-party vendor whose codebase leaked on a government server in February. Users who fled ChatGPT for Claude’s privacy stance are furious—and many are switching back.
The privacy paradox is stark. Neither ChatGPT nor Gemini requires a government ID for consumer use. Anthropic’s voluntary choice (not legally mandated) just handed its competitors a gift.
From Pentagon Rejection to Passport Scans
In February 2026, Anthropic rejected the Pentagon contract, citing concerns about “mass surveillance and autonomous weapons.” The moral stance resonated—free signups surged 60% as privacy-conscious developers fled OpenAI’s ChatGPT. Daily signups broke records. Claude positioned itself as the privacy-first alternative.
Eight weeks later, Anthropic rolled out mandatory government ID and live selfie verification. The requirements are strict: physical, undamaged passports, driver’s licenses, or national ID cards only. Photocopies, screenshots, and digital IDs are rejected. Users must capture live selfies via phone or webcam for biometric matching.
ChatGPT and Gemini? Age 13 and up, zero ID verification. Anthropic just made Claude the most restrictive major AI chatbot—the opposite of its privacy-first brand promise.
Persona Verification: Discord Canceled Weeks Earlier
Anthropic chose Persona, a third-party KYC vendor, to process identity verification. Here’s the problem: Persona’s entire frontend codebase was exposed on a U.S. government-authorized endpoint in February 2026—just weeks before Anthropic adopted the vendor. Security researchers found nearly 2,500 accessible files sitting on a government server.
Discord had been testing Persona for age verification. After the security incident and user backlash, Discord canceled the partnership in late February and delayed its rollout to “second half of 2026.” Anthropic saw this play out and chose Persona anyway in April.
The data exposure is substantial. Your government ID and biometric selfie flow through Persona to seven subprocessors: AWS, Google, OpenAI, Stripe, Twilio, Confluent, and Anthropic itself. Data retention? Up to three years. Persona performs 269 distinct verification checks, including “adverse media” screening for terrorism and espionage.
The irony is circular: users left ChatGPT for Claude’s privacy stance, but Persona shares data with OpenAI as a subprocessor. Those users are trusting OpenAI indirectly anyway.
User Backlash and Competitive Damage
Developer communities erupted after the April 14 rollout. Hacker News users called the policy “disgusting,” “deranged,” and “hypocritical.” The sentiment was near-unanimous: Anthropic betrayed the users who trusted its privacy principles.
One user captured it perfectly: “AI KYC is here. Not even a regulatory requirement—Anthropic just doing it because they want to.” Another noted: “Claude now requires government ID verification…ChatGPT doesn’t. Gemini doesn’t. Anthropic just handed their competitors a gift.”
Users are canceling Claude Pro and Max subscriptions. Some are switching back to ChatGPT (the service they left for privacy reasons). Others are migrating to Gemini or running local models like Qwen and Gemma via Ollama and LM Studio. Anthropic’s privacy differentiation—its core competitive advantage—just evaporated.
Voluntary Surveillance Theater
Anthropic cites “preventing abuse, enforcing usage policies, and complying with legal obligations” as justification. But this is voluntary surveillance theater, not regulatory compliance. No US or EU law mandates government ID verification for AI chatbots. If it were legally required, ChatGPT and Gemini would require it too. They don’t.
OpenAI and Google achieve abuse prevention and policy enforcement without demanding passports. Anthropic chose this path independently—a strategic decision that undermines its brand and alienates its core user base.
The Pentagon-to-passport timeline is damning. February: reject surveillance contract, attract privacy-conscious users. April: demand government IDs processed by a vendor with proven security failures. It’s strategic self-sabotage.
Key Takeaways
- Anthropic rolled out mandatory government ID and live selfie verification on April 14, 2026—the first major AI chatbot to do so, just eight weeks after rejecting a Pentagon surveillance contract
- Neither ChatGPT nor Gemini requires ID verification for consumer use (age 13+ only), making Claude the most restrictive option and handing competitors a major advantage
- Persona, the third-party verification vendor, had its entire frontend codebase exposed on a government server in February 2026—weeks before Anthropic adopted the vendor (Discord canceled its Persona partnership over this incident)
- User data flows through seven subprocessors including OpenAI, creating a circular privacy paradox: users left ChatGPT for Claude’s privacy, but now trust OpenAI indirectly via Persona
- This is voluntary corporate surveillance, not regulatory compliance—Anthropic chose to implement invasive ID checks while competitors achieve the same goals without passports
Anthropic’s privacy paradox is complete. The company won users through principled stances against surveillance, then voluntarily implemented the industry’s most invasive identity verification system. Users recognize the betrayal and are voting with their wallets—fleeing to the competitors Claude was supposed to replace.