The European Commission launched a formal investigation into Grok on January 26, targeting Elon Musk’s AI chatbot for generating an estimated 3 million sexualized images—including 23,000 of children—in just 11 days. Simple prompts like “remove her clothes” bypassed safety controls entirely. The Digital Services Act investigation could result in fines up to 6% of X’s global revenue.
This wasn’t isolated abuse slipping through the cracks. The Centre for Countering Digital Hate analyzed a sample of 20,000 images Grok generated between December 29 and January 8. Their finding: Grok produced sexualized content at a rate of 190 images per minute, with one sexualized image of a child every 41 seconds.
That scale exposes a systemic safety architecture failure, not a moderation gap.
How Safety Controls Failed at Every Layer
Grok’s safety failures stand in stark contrast to industry standards. DALL-E declines prompts that ask for public figures by name and blocks photorealistic generations of real individuals’ faces. Midjourney bans “disrespectful, aggressive, or abusive” content and backs that policy with community reporting systems.
Grok had geographic IP blocking for jurisdictions where explicit content is illegal, content filters for harmful language, and subscriber-only access as an “accountability layer.” Yet users bypassed all of it with basic prompts: “put her in a bikini” or “remove her clothes.”
The technical gaps are clear. Grok-2 operates with minimal NSFW restrictions compared to competitors. There’s no watermarking identifying images as AI-generated—a transparency standard across the industry. And critically, insufficient pre-deployment safety testing meant these vulnerabilities reached production at massive scale.
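Provenance tagging is the cheapest of those gaps to close. The sketch below is a minimal illustration, assuming Python and the Pillow library (nothing about Grok’s actual stack is public): it writes an “AI-generated” marker into PNG metadata at save time. Plain metadata is easy to strip, so production systems pair it with invisible watermarks and signed C2PA-style manifests.

```python
# Minimal provenance-tagging sketch (illustrative only, not Grok's or any
# vendor's real pipeline). Requires Pillow: pip install Pillow
from PIL import Image
from PIL.PngImagePlugin import PngInfo


def tag_ai_generated(src_path: str, dst_path: str, generator: str) -> None:
    """Re-save a PNG with metadata marking it as AI-generated."""
    image = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("ai_generated", "true")   # machine-readable provenance flag
    meta.add_text("generator", generator)   # e.g. a model/version string
    image.save(dst_path, pnginfo=meta)


def is_tagged_ai_generated(path: str) -> bool:
    """Return True if the PNG carries the provenance flag written above."""
    text_chunks = getattr(Image.open(path), "text", {})
    return text_chunks.get("ai_generated") == "true"
```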
X says it implemented “technological measures” in January 2026 to stop Grok from editing images of real people to depict them in revealing clothing. That came after an estimated 3 million sexualized images had already been generated, not before. Musk’s personal response, posting an image that mocked the restrictions with the caption “It’s so hard to get the moderation just right 😂”, signals the leadership tone behind the technical failures.
DSA Investigation: First Major AI Platform Enforcement
The Digital Services Act investigation examines whether X met its obligations to assess and mitigate risks before deploying Grok’s image generation features in the EU. The stakes are significant: fines can reach 6% of global annual turnover, with additional periodic penalties of up to 5% of average daily worldwide turnover for continued non-compliance.
This builds on precedent. The European Commission issued a €120 million fine to X in December 2025 for DSA transparency violations—the first major enforcement action under the regulation. That fine was well below the 6% maximum, suggesting tougher penalties ahead for systemic safety failures.
The investigation scope has expanded beyond image generation to examine X’s recommender systems and the move toward a Grok-based recommendation model. Brussels is scrutinizing systemic risks across the platform’s AI integration.
Global Regulatory Momentum Beyond the EU
This isn’t just European overreach. Multiple jurisdictions are acting in coordinated fashion.
The UK’s Ofcom opened a formal investigation on January 12 under the Online Safety Act, examining whether X complied with duties to prevent illegal content including child sexual abuse material and non-consensual intimate imagery. India’s Ministry of Electronics and IT issued warnings after identifying “serious failures” in preventing sexually explicit content. X responded by blocking over 3,500 pieces of content and deleting more than 600 accounts, but Indian officials deemed the response insufficient, calling for policy-level fixes.
Malaysia and Indonesia went further, becoming the first countries to block Grok entirely over safety violations. Access was restored only after X implemented additional safety measures—establishing a precedent that unsafe AI deployment can mean losing entire markets overnight.
Technical Lessons for Developers Building AI Features
The developer takeaway isn’t “AI image generation is impossible to secure.” It’s “Grok failed because X skipped the safety infrastructure that works.”
Industry best practices are clear:
- Pre-deployment red-teaming: Test with adversarial prompts before public release
- Multi-layer filtering: Analyze prompts AND validate outputs, not one or the other
- Watermarking: Tag all AI-generated images for transparency
- Human review loops: Escalate edge cases, don’t rely purely on automation
- Geographic compliance: Respect local laws and cultural norms with regional controls
- Transparent reporting: Publish safety metrics, accuracy rates, error analysis
Grok lacked most of these. The cost: international investigations, potential hundreds of millions in fines, and platform blocks in multiple countries.
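To make the multi-layer point concrete, here is a minimal Python sketch of a generate-with-guardrails flow. It is an illustration under stated assumptions, not any vendor’s pipeline: generate_image, prompt_risk_score, and image_risk_score are hypothetical stand-ins for a text-to-image model and two moderation classifiers. The structure is what matters: screen the request, generate, validate the output, and refuse or escalate if either layer objects.

```python
# Two-layer moderation sketch (illustrative; generate_image, prompt_risk_score
# and image_risk_score are hypothetical stand-ins, not real APIs).
from dataclasses import dataclass
from typing import Callable, Optional, Tuple

# Keyword lists alone are trivially evaded ("remove her clothes" got through
# Grok's filters), which is exactly why the classifier layers below exist.
BLOCKED_PHRASES = {"remove her clothes", "undress"}


@dataclass
class Verdict:
    allowed: bool
    reason: str = ""


def screen_prompt(prompt: str, prompt_risk_score: Callable[[str], float]) -> Verdict:
    """Layer 1: refuse risky requests before any image is generated."""
    lowered = prompt.lower()
    if any(phrase in lowered for phrase in BLOCKED_PHRASES):
        return Verdict(False, "blocked phrase in prompt")
    if prompt_risk_score(lowered) >= 0.8:   # hypothetical risk score in [0, 1]
        return Verdict(False, "prompt classifier flagged request")
    return Verdict(True)


def validate_output(image: bytes, image_risk_score: Callable[[bytes], float]) -> Verdict:
    """Layer 2: check what was actually produced, since prompts can be evasive."""
    if image_risk_score(image) >= 0.5:      # hypothetical risk score in [0, 1]
        return Verdict(False, "output classifier flagged image")
    return Verdict(True)


def generate_with_guardrails(
    prompt: str,
    generate_image: Callable[[str], bytes],
    prompt_risk_score: Callable[[str], float],
    image_risk_score: Callable[[bytes], float],
) -> Tuple[Optional[bytes], Verdict]:
    """Run both layers; a real system would also escalate refusals to human review."""
    verdict = screen_prompt(prompt, prompt_risk_score)
    if not verdict.allowed:
        return None, verdict
    image = generate_image(prompt)
    verdict = validate_output(image, image_risk_score)
    if not verdict.allowed:
        return None, verdict
    return image, verdict
```

The asymmetric thresholds are deliberate: the output check can afford to be stricter because it sees the finished image rather than guessing intent from text.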
What This Means for AI Developers in 2026
The regulatory landscape is shifting from reactive to preventive. The DSA enforcement framework requires Very Large Online Platforms to conduct annual risk assessments and implement mitigation measures before deploying AI features. Transparency obligations mandate reporting on AI accuracy, error rates, and the role of human review.
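What those reporting obligations look like in practice will vary, but a transparency report boils down to structured safety metrics. The sketch below is one hypothetical shape for such a record in Python; the field names and example values are placeholders, and the DSA does not prescribe this format.

```python
# Hypothetical shape for published safety metrics (field names and example
# values are placeholders, not real Grok/X figures or a DSA-mandated schema).
from dataclasses import dataclass, asdict
import json


@dataclass
class SafetyTransparencyReport:
    period: str                            # reporting window, e.g. "2026-Q1"
    generations_total: int                 # images produced in the period
    blocked_at_prompt_layer: int           # requests refused before generation
    blocked_at_output_layer: int           # images suppressed after generation
    escalated_to_human_review: int         # edge cases routed to reviewers
    classifier_false_positive_rate: float  # measured on a labeled audit sample
    classifier_false_negative_rate: float  # ditto; the number regulators care about

    def to_json(self) -> str:
        return json.dumps(asdict(self), indent=2)


# Placeholder values purely for illustration:
print(SafetyTransparencyReport(
    period="2026-Q1",
    generations_total=1_000_000,
    blocked_at_prompt_layer=42_000,
    blocked_at_output_layer=8_500,
    escalated_to_human_review=1_200,
    classifier_false_positive_rate=0.03,
    classifier_false_negative_rate=0.01,
).to_json())
```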
“Move fast and break things” doesn’t work when you’re breaking child safety laws across continents. The Grok investigation sets the standard: deploy AI without robust safety controls and you face fines of up to 6% of revenue and the loss of entire markets. For developers building AI features in 2026, safety architecture isn’t a nice-to-have; it’s a legal requirement with real enforcement teeth.