
Meta Buried Evidence Instagram Harms Children – Court Filing

Court filings unsealed Friday reveal what 1,800 plaintiffs—including children, parents, school districts, and 33 state attorneys general—have been arguing for years: Meta knew Instagram and Facebook harm children’s mental health, and the company buried the evidence. Internal documents from the lawsuit filed in the Northern District of California show Meta shut down mental health research after finding “causal evidence” of harm. Meanwhile, the company maintained a “17x strike policy” allowing sex trafficking accounts 16 violations before suspension. This is the largest child safety litigation against a tech platform in history, and the January 26, 2026 hearing could reveal even more.

Meta Shut Down Research Proving Harm

In 2020, Meta scientists ran “Project Mercury” with Nielsen to test what happened when users deactivated Facebook and Instagram for one week. The results were unambiguous: people reported lower feelings of depression, anxiety, loneliness, and social comparison. A Meta researcher confirmed to Nick Clegg, then-head of global public policy, that “the Nielsen study does show causal impact on social comparison.”

Instead of publishing those findings or pursuing further research, Meta killed the project. Internally, the company claimed the study was “tainted by the existing media narrative.” Externally, Meta told Congress it had “no ability to quantify” whether its products harmed teenage girls—despite having the exact data proving they did.

The 17x Strike Policy: A Safety-by-Design Failure

Vaishnavi Jayakumar, Instagram’s former head of safety and well-being, testified that she was shocked when she joined Meta in 2020 and learned about the company’s strike policy for sex trafficking. Accounts engaged in “trafficking of humans for sex” could violate platform rules 16 times before facing suspension on the 17th strike. Jayakumar characterized this as “a very, very high strike threshold” compared to industry standards, which typically enforce 1-3 strikes for serious violations.

This isn’t a bug—it’s a feature. Strike policies are design parameters that encode business priorities. A 17x threshold prioritizes user retention. A 1x threshold prioritizes safety. Developers implementing moderation systems make this choice explicitly, and the evidence shows Meta chose growth.
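Meta’s real enforcement pipeline isn’t public, so treat the following as a minimal sketch rather than a description of its systems. The names (`StrikePolicy`, `record_violation`) are invented for illustration; the point is that the suspension threshold is a single parameter someone deliberately chose.

```python
from dataclasses import dataclass, field

# Hypothetical sketch -- not Meta's actual moderation code. It shows that
# the suspension threshold is one explicit design parameter.

@dataclass
class StrikePolicy:
    violation_type: str
    suspension_threshold: int  # strike count at which the account is suspended

@dataclass
class Account:
    account_id: str
    strikes: dict = field(default_factory=dict)  # violation_type -> count

def record_violation(account: Account, policy: StrikePolicy) -> str:
    """Record one confirmed violation and return the enforcement action."""
    count = account.strikes.get(policy.violation_type, 0) + 1
    account.strikes[policy.violation_type] = count
    return "suspend" if count >= policy.suspension_threshold else "warn"

# The alleged policy: suspension only on the 17th confirmed violation.
alleged_meta_policy = StrikePolicy("sex_trafficking", suspension_threshold=17)

# The industry norm described in testimony: one to three strikes for the
# most severe violations (here, zero tolerance after a single strike).
zero_tolerance_policy = StrikePolicy("sex_trafficking", suspension_threshold=1)

acct = Account("acct-123")
actions = [record_violation(acct, alleged_meta_policy) for _ in range(17)]
# The first 16 violations produce "warn"; "suspend" appears only on the 17th.
```

Flipping that one integer is the entire difference between the alleged 17-strike tolerance and the zero-tolerance posture most platforms claim to apply to their most severe violations.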

Zuckerberg’s Priorities: Metaverse Over Child Safety

In a 2021 text message, Mark Zuckerberg admitted child safety wasn’t his “top concern” because he had “a number of other areas I’m more focused on like building the metaverse.” He shot down or ignored Nick Clegg’s repeated requests to better fund child safety work. Meanwhile, court filings allege Meta “intentionally designed its youth safety features to be ineffective and rarely used” and blocked safety improvements that “might be harmful to growth.”

The company knew about the problem. Internal research from 2015 found 4 million Instagram users under age 13—about one-third of all 10-to-12-year-olds in the U.S. By 2018, Meta reported to Zuckerberg that 40% of 9-to-12-year-olds used Instagram daily. Between 2019 and 2023, Meta received over 1 million reports of underage users but “disabled only a fraction of those accounts.” A 2018 internal document states plainly: “we do very little to keep U13s off our platform.”

Why Developers Should Care

This isn’t just Meta’s problem. The lawsuit also names Google, TikTok, and Snapchat, alleging all four companies “relentlessly pursued a strategy of growth at all costs” and have “intentionally hidden the internally recognized risks” from users, parents, and teachers. School districts are suing because they’re paying for counseling services and suicide prevention programs to address the mental health crisis the platforms helped create.

If you’re building social features, recommendation algorithms, or content moderation systems, the Meta lawsuit exposes trade-offs you’ll face: strike policy thresholds, A/B test ethics when user harm is possible, age verification enforcement versus user acquisition, and safety team funding versus new feature development. At some point, you may need to push back when growth metrics override user safety—or decide whether to keep working on a platform you know causes harm.
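Project Mercury is a useful test case for the A/B-ethics item in that list: the experiment surfaced a harm signal, and nothing forced anyone to act on it. One way to change that is to pre-register harm guardrails so a breach halts the experiment automatically. The sketch below is hypothetical (names like `GuardrailMetric` and `should_halt` are invented, and no platform is claimed to run this); it simply expresses the rule as code instead of a post-hoc judgment call.

```python
from dataclasses import dataclass

# Hypothetical sketch of a harm-aware experiment guardrail. The threshold is
# agreed before the experiment runs, so a breach triggers review by default.

@dataclass
class GuardrailMetric:
    name: str                 # e.g. a self-reported social comparison score
    control_mean: float
    treatment_mean: float
    max_relative_harm: float  # degradation tolerated before halting, e.g. 0.02

def should_halt(metrics: list[GuardrailMetric]) -> list[str]:
    """Return the names of guardrail metrics that degraded past their limit."""
    breaches = []
    for m in metrics:
        if m.control_mean == 0:
            continue  # avoid division by zero; handle separately in practice
        relative_change = (m.treatment_mean - m.control_mean) / abs(m.control_mean)
        if relative_change > m.max_relative_harm:
            breaches.append(m.name)
    return breaches

# Example: a feed-ranking variant raises the social-comparison score by ~5%,
# past a 2% guardrail, so the experiment is stopped and escalated.
metrics = [GuardrailMetric("social_comparison", 3.10, 3.26, 0.02)]
if should_halt(metrics):
    print("Halt experiment and escalate:", should_halt(metrics))
```

The design choice that matters is that the threshold is fixed before the experiment runs, by someone accountable for safety rather than for growth.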

What Happens Next

The January 26, 2026 hearing in Northern California will determine whether Meta’s internal documents remain in evidence or get struck from the record. Meta is fighting to keep them sealed, which suggests there’s more damaging material the public hasn’t seen yet. The outcome could set precedent for platform liability and accelerate regulatory changes already gaining momentum.

Congress has proposed sunsetting Section 230 by 2026, which would end the blanket immunity platforms currently enjoy. The 2025 TAKE IT DOWN Act already created a carve-out requiring platforms to remove non-consensual intimate images within 48 hours. Multiple states are passing age verification requirements and safety audit mandates. The era of self-regulation is ending.
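For teams that process abuse reports, the 48-hour removal window is a concrete engineering requirement, not just legal text. A minimal sketch of deadline tracking, assuming UTC-timestamped reports (the function names are invented for illustration):

```python
from datetime import datetime, timedelta, timezone

# Hypothetical sketch of deadline tracking for a 48-hour removal window --
# one way a platform team might surface compliance deadlines.

REMOVAL_WINDOW = timedelta(hours=48)

def removal_deadline(report_received_at: datetime) -> datetime:
    """Deadline for acting on a reported non-consensual intimate image."""
    return report_received_at + REMOVAL_WINDOW

def is_overdue(report_received_at: datetime, now: datetime | None = None) -> bool:
    """True if the statutory window for this report has already elapsed."""
    now = now or datetime.now(timezone.utc)
    return now > removal_deadline(report_received_at)

# Example: a report filed 50 hours ago is already past the 48-hour window.
report_time = datetime.now(timezone.utc) - timedelta(hours=50)
print(is_overdue(report_time))  # True
```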

The evidence is overwhelming: platforms knew they harmed children and chose profit anyway. Developers who build safety-by-design now will fare better than those fighting regulators in court. Safety engineering roles are becoming more senior and better funded as the industry realizes compliance isn’t optional anymore. The Meta lawsuit isn’t an isolated incident—it’s the beginning of platform accountability.
