
Apple spent March banning AI coding apps for “security reasons.” It spent May engineering a path to let those same apps back in — on Apple’s terms, with Apple taking a cut. Three weeks before WWDC 2026, the picture is clear: this was never purely about malware.
What Apple Actually Banned (And How Selectively)
The enforcement started quietly. In March 2026, Apple began blocking updates for Replit and Vibecode, citing App Store Guideline 2.5.2 — a long-standing rule that bars apps from executing code that alters their own functionality after review. The rule predates AI code generation entirely; its language was written for a different era of software distribution.
By March 30, Apple escalated: it pulled “Anything,” an AI app-building tool with a nine-figure valuation, from the App Store entirely. The reason was the same — 2.5.2. But here’s where the logic frays. Vercel’s v0 and several other tools with comparable architectures remained untouched, and Apple never explained the distinction. Apple wasn’t enforcing a clear policy; it was setting precedent through action, leaving developers to navigate by what Apple blocked rather than what Apple said.
The Anything team’s response deserves a moment: they rebuilt their core product inside iMessage. Apple’s own messaging app became a vibe coding environment. The app returned to the App Store by April 3. It’s a creative move, but it also illustrates how absurd selective enforcement gets — the rules apparently don’t apply inside the blue bubble.
The $900 Million Reason This Is About Revenue
To understand Apple’s behavior, stop reading the press releases and look at the numbers. In 2025, Apple collected roughly $900 million in App Store commissions from generative AI apps. ChatGPT alone accounted for approximately 75% of that — around $675 million — through the standard 30% subscription cut Apple takes in year one, 15% thereafter.
For context: Amazon, Alphabet, Meta, and Microsoft collectively spent close to $400 billion on AI infrastructure in 2025. Apple spent about $12.7 billion and pocketed nearly $900 million from AI apps it didn’t build. The AI arms race is being taxed by a company that largely opted out of it.
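The subscription math behind these figures is simple enough to sketch. Here is a minimal back-of-envelope model using the article’s numbers; the function and variable names are illustrative, not any real App Store API:

```python
def apple_cut(monthly_price: float, months: int) -> float:
    """Apple's commission on one subscription: 30% of each payment
    in year one, 15% thereafter (standard App Store subscription terms)."""
    total = 0.0
    for m in range(months):
        rate = 0.30 if m < 12 else 0.15
        total += monthly_price * rate
    return total

# Back-of-envelope check against the article's 2025 figures:
ai_app_commissions = 900e6   # ~$900M in commissions from generative AI apps
chatgpt_share = 0.75         # ChatGPT's approximate share of that total
chatgpt_commissions = ai_app_commissions * chatgpt_share  # ~$675M
```

On a $10/month subscription, `apple_cut(10, 12)` comes to $36 in year one and `apple_cut(10, 24)` to $54 over two years — the step-down to 15% after month 12 is what the standard terms describe.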
With AI agents expanding beyond chatbots into tools that perform tasks, book services, and generate software autonomously, Apple faces a direct threat to this revenue stream. An agentic AI that operates outside the App Store — distributing subscriptions through the web, bypassing Apple’s payment rails entirely — is money Apple doesn’t collect. The ban on vibe coding apps isn’t separable from this economic reality.
The Reversal: Same Strategy, Different Phase
On May 13, The Information reported that Apple staffers are designing a system to allow AI agents on the App Store while maintaining privacy and security standards. This is being framed as a reversal. It isn’t. It’s the same strategy: control distribution, collect the fee.
Apple has already moved the commission lever once. Starting May 1, 2026, apps with “advanced AI capabilities” that adhere to Apple’s ethical AI guidelines qualify for a reduced 15% commission rate — cutting the standard first-year take in half. This is a carrot. The App Store ban was a stick. Both are designed to keep agentic AI flowing through Apple’s payment infrastructure.
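To see what the halved rate means in developer terms, a toy comparison helps; the $20/month price point is an assumption for illustration, not a figure from the article:

```python
def developer_net(gross: float, commission_rate: float) -> float:
    """Revenue the developer keeps after Apple's commission."""
    return gross * (1 - commission_rate)

year_one_gross = 20.0 * 12                       # a $20/month app, first year
standard = developer_net(year_one_gross, 0.30)   # $168 kept under the 30% rate
reduced = developer_net(year_one_gross, 0.15)    # $204 kept under the 15% rate
```

The $36-per-subscriber-per-year difference is the carrot; qualifying for it requires adhering to Apple’s ethical AI guidelines, which is the point.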
Whether WWDC 2026 on June 8 delivers a formal AI agent framework or merely teases one, the direction is set: Apple will allow AI agents in the App Store, under conditions Apple controls, at rates Apple sets.
What Developers Should Watch at WWDC 2026
If you’re building AI products for iOS, four things matter at the June 8 keynote:
- Updated App Review Guidelines: Will Apple formally revise the language of 2.5.2 to define what AI code execution is permissible — or will enforcement-by-example remain the de facto policy?
- Siri Extensions framework: iOS 27 is expected to open Siri to Claude, Gemini, and third-party agents. How this API works determines how AI agents can plug into the native iOS experience.
- AI agent entitlement system: A special capability declaration for apps with AI code generation would give developers a clear compliance path rather than the current guesswork.
- Commission structure for AI agents: Will the 15% rate for “ethical AI” apps expand to cover agentic capabilities? This is the economic signal that determines what gets built.
Security Is Real. Revenue Is Realer.
Apple’s malware concern is not fabricated. An AI coding app that clears the review process can subsequently generate software Apple has never inspected — by definition. That’s a legitimate risk vector. But it doesn’t explain why Vercel’s v0 was fine and Anything wasn’t. It doesn’t explain why enforcement escalated from “block updates” to “remove app” for a product with a nine-figure valuation and no documented harm. And it doesn’t explain why the same company that cited security to ban these apps in March is now building a framework to admit them in May.
What explains all of it: Apple is on track to earn over $1 billion in 2026 from AI apps it didn’t build. Agentic AI is the next wave. Apple intends to collect its cut of that wave too. Security is how Apple describes this. Revenue is what drives it. Developers who understand the financial logic will predict Apple’s behavior far more accurately than those still debating whether the ban was “fair.”
