Industry Analysis

Meta $3B Scam Ad Revenue: Platform Economics Beat Safety

A Reuters investigation published this week revealed Meta earned $3 billion from scam ads, illegal gambling, and pornography in 2024—19% of its $18 billion China advertising revenue. Internal documents show CEO Mark Zuckerberg intervened to disband Meta’s anti-fraud team just as it succeeded in cutting violations from 19% to 9% in six months. Within months, violations rebounded to 16% after Meta reinstated 4,000 suspended Chinese agencies for $240 million in revenue. The kicker: Meta’s internal assessment concluded scam revenue would “almost certainly exceed the cost of any regulatory settlement.”

This isn’t a bug—it’s how platform economics work when profit maximization meets user safety. Meta didn’t accidentally profit from scams; it calculated that $3 billion in scam revenue exceeds enforcement costs and regulatory fines combined.

The Numbers Don’t Lie

Meta showed users 15 billion “higher risk” scam ads daily in 2024. That’s approximately 11 scam ads per user, every single day. The company internally labeled China its “top scam exporting nation,” accounting for 25% of all global scam and banned-product ads. Yet China advertising revenue grew from $7.4 billion in 2022 to $18.4 billion in 2024—a 148% increase in two years.
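These headline figures can be sanity-checked with simple arithmetic. The inputs below are the article's own reported numbers, not Meta data; the implied-audience calculation is just back-solving the "11 ads per user" figure.

```python
# Sanity-check the reported figures with basic arithmetic.
# All inputs are the article's own numbers; nothing here is Meta data.

china_rev_2022 = 7.4e9   # China ad revenue, 2022 (USD)
china_rev_2024 = 18.4e9  # China ad revenue, 2024 (USD)

growth = (china_rev_2024 - china_rev_2022) / china_rev_2022
print(f"Two-year growth: {growth:.1%}")  # 148.6%, reported as ~148%

# 15 billion "higher risk" ads daily at ~11 per user implies
# an exposed audience of roughly 1.4 billion users.
daily_scam_ads = 15e9
ads_per_user = 11
implied_users = daily_scam_ads / ads_per_user
print(f"Implied exposed users: {implied_users / 1e9:.2f}B")
```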

Rob Leathern, who ran Meta’s business integrity operations until 2019, reviewed the violation rates revealed in Reuters’ reporting. His reaction: “The levels that you’re talking about are not defensible. I don’t know how anyone could think this is okay.” Leathern has since launched CollectiveMetrics.org, a nonprofit focused on bringing transparency to digital advertising.

The scale isn’t a rounding error. Internal documents show Meta earned approximately $7 billion annualized from “higher risk” ads across all regions—representing a deliberate economic choice, not enforcement failure.

When Enforcement Works, Shut It Down

In early 2024, Meta created a China-focused fraud unit that used stepped-up detection tools and tougher human review to slash problematic ads from 19% to 9% of China revenues in six months. The team worked. Violations dropped 53% in half a year.

Then CEO Mark Zuckerberg weighed in. A late 2024 internal document notes the China ads-enforcement team was “asked to pause” work “as a result of Integrity Strategy pivot and follow-up from Zuck.” Meta disbanded the team, lifted freezes on new Chinese ad agencies, and reinstated 4,000 suspended agencies—unlocking $240 million in annual revenue. Half of that revenue came from ads violating Meta’s own safety policies.

By mid-2025, banned advertisements climbed back to approximately 16% of Meta’s China revenue. The pattern is clear: enforcement succeeded, so it was eliminated. Meta spokesperson Andy Stone claims the team was “always meant to be temporary,” which raises the obvious question—why temporary when it demonstrably worked?


Meta’s “Trusted Experts” Helped Run Scam Ads

Reuters reporter Jeff Horwitz tested Meta's advertising enforcement by creating ads for a bogus investment promising 10% weekly returns, a roughly 14,000% annualized rate. Working through Meta's "Badged Partners," agencies publicly endorsed in its official Partner Directory as "trusted experts," Horwitz got the scam ads approved; they ran for four days and reached more than 20,000 users across the US, Europe, India, and Brazil.
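The 14,000% figure follows directly from compounding 10% per week over a year, a quick calculation worth seeing once:

```python
# Compound a 10% weekly return over 52 weeks to get the annualized rate.
weekly_return = 0.10
weeks_per_year = 52

annualized = (1 + weekly_return) ** weeks_per_year - 1
print(f"Annualized return: {annualized:.0%}")  # ~14,104%
```

No legitimate investment compounds like this, which is exactly what makes the promised rate a textbook scam signal.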

Agencies like Bluefocus (Vietnam), Green Orange (Hong Kong), and Uproas (Estonia) charge a $30 sign-up fee plus higher commissions to run prohibited ads. Bluefocus even offers a tutorial titled “How to Advertise Illegal Products on Facebook.” These weren’t rogue actors—they were Meta’s officially endorsed partners.

After Reuters exposed this, Meta deleted its entire Partner Directory and put the program “under review.” An external audit by Propellerfish Group, commissioned by Meta itself, concluded the company’s “own behavior and policies” were promoting systemic corruption in China’s advertising ecosystem. Reuters reported Meta largely ignored these findings and expanded operations anyway.

The Economics of Looking Away

Meta set a revenue "guardrail" capping enforcement's impact at $135 million, just 0.15% of total revenue; enforcement actions projected to cost more revenue than that threshold were deprioritized. Internal documents show growth teams could veto enforcement proposals over "revenue impact," and "High Value Accounts" were allowed to accumulate 500+ policy-violation strikes before shutdown.

The internal assessment that scam revenue would “almost certainly exceed the cost of any regulatory settlement” reveals the core calculation. Meta isn’t trying to eliminate scams—it’s managing them to a profitable equilibrium where enforcement costs plus potential fines remain below scam ad revenue. This is platform economics at work: when the formula is “scam revenue > enforcement cost + regulatory fines,” platforms choose revenue every time.

Three-quarters of harmful ad spending came from accounts with partner protections, showing the system was designed to protect high-spending advertisers regardless of content. For developers building on Meta’s platform or advertisers concerned about brand safety, this raises fundamental trust questions.

What Happens Next

Meta’s scam ad problem lands amid accelerating regulatory pressure. In November 2025, state attorneys general from dozens of US states and territories sent a letter warning Meta, OpenAI, Google, Anthropic, and other tech companies to fix “delusional outputs” or risk breaching state law. The letter cited mental health incidents, suicides, and other serious harms linked to AI platforms.

Senators Mark Kelly and John Curtis introduced the Algorithm Accountability Act, which would amend Section 230 to impose a duty of care on companies using recommendation algorithms. The bipartisan Deepfake Liability Act would make platform immunity conditional on addressing deepfakes, cyberstalking, and digital forgeries. Meanwhile, the EU's Digital Services Act already requires platforms to file illegal-content risk assessments and is enforced through fines and daily penalties.

The FBI seized $214 million in March 2025 from one Chinese stock scam that used Facebook and Instagram ads to lure victims. Users who clicked the ads were routed to WhatsApp groups run by scammers posing as US investment advisors, then steered to buy stock at vastly inflated prices. This demonstrates the real-world financial harm at scale.

Key Takeaways

  • Meta earned $3 billion from scam ads in 2024 (19% of China revenue), showing users 15 billion daily “higher risk” scam ads—this is systematic, not isolated
  • A dedicated anti-fraud team cut violations 53% in six months (19% to 9%), then CEO intervention disbanded the team and violations rebounded to 16%—enforcement worked until it threatened revenue
  • Reuters reporter successfully ran scam ads promising 14,000% returns through Meta’s “Badged Partners,” who were officially endorsed “trusted experts”—Meta deleted its Partner Directory after exposure
  • Internal assessment: scam revenue would “almost certainly exceed” regulatory fines, with $135M enforcement guardrail (0.15% of revenue) and 500+ strikes allowed for “High Value Accounts”—treating fines as cost of doing business
  • Regulatory reckoning accelerating: State AGs warning platforms, Section 230 reform bills introduced, EU DSA enforcing accountability—the era of scam revenue > enforcement cost may be ending

For developers and tech professionals, the question isn’t whether Meta knowingly profited from scams—internal documents prove it did. The question is whether platforms designed around these economic incentives can self-regulate, or if structural changes through legislation are inevitable.

ByteBot