“AI slop” dominated Word of the Year selections for 2025. Merriam-Webster, Macquarie Dictionary, and the American Dialect Society all independently chose variations of this term to describe the wave of low-quality AI-generated content flooding online communities. By May 2026, the problem reached crisis levels. On May 6, a viral post titled “AI Slop is Killing Online Communities” exploded on Hacker News with 454 points and 437 comments. Open source projects are slamming their doors on AI contributions. Stack Overflow’s monthly questions dropped from 200,000 in 2014 to near zero by 2026. Communities face an existential choice: ban AI and lose openness, or allow AI and lose quality.
What Is AI Slop and Why Did It Win Word of the Year?
AI slop is low-quality, mass-produced content that sounds fluent but delivers no value. Researchers identify three defining characteristics: superficial competence (grammatically correct but conceptually shallow), asymmetric effort (seconds to generate, hours to evaluate), and mass producibility (floods platforms at near-zero cost).
When three major dictionaries independently select the same word, it signals a tipping point. Merriam-Webster defined it as “digital content of low quality that is produced usually in quantity by means of artificial intelligence.” Macquarie Dictionary’s public vote went to “low-quality content created by generative AI, often containing errors, and not requested by the user.” This wasn’t just tech jargon; it became a cultural phenomenon reflecting widespread frustration with the internet’s rapid degradation.
The Measurable Damage: Stack Overflow, GitHub, and YouTube
AI slop has caused measurable, catastrophic damage to the platforms developers rely on daily. Stack Overflow saw monthly questions collapse from 200,000 in 2014 to near zero by 2026 as developers moved to ChatGPT, even though it provides wrong answers over 50% of the time. When you can’t trust either source, speed beats accuracy.
Open source projects are drowning. Major projects enacted total bans: Zig implemented a “strict-no-llm-no-ai policy,” NetBSD banned AI code entirely, and tldraw’s creator began auto-closing all external pull requests because AI submissions were overwhelming volunteer maintainers. GitHub itself responded by adding a feature to disable pull requests entirely, a nuclear option that defeats the purpose of open source collaboration.
The damage extends beyond developer communities. A March 2026 New York Times investigation found that 40% of YouTube Kids recommendations were AI slop with realistic or Cocomelon-style visuals, and GetStream.io research found that platforms with unmoderated AI content see 30% retention drops within the first year. The communities that defined the internet for developers are either dying or closing their doors.
Why Moderation Can’t Save Us: The Asymmetry Problem
Moderation fails because of a fundamental asymmetry captured by Brandolini’s Law: “The amount of energy needed to refute bullshit is an order of magnitude bigger than that needed to produce it.” One person with AI tools can generate hundreds of submissions per day. Each submission takes seconds to create but minutes to hours for a volunteer moderator to evaluate properly.
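The asymmetry can be put in rough numbers. The figures below are illustrative assumptions (200 submissions a day, 30 seconds to generate each, 10 minutes to review each), not measured data, but they show how quickly the review burden dwarfs the generation cost:

```python
# Back-of-the-envelope model of Brandolini-style asymmetry.
# All inputs are illustrative assumptions, not measurements.

def review_burden(submissions_per_day: int,
                  gen_seconds_each: float,
                  review_minutes_each: float) -> dict:
    """Compare daily generation cost vs. moderation cost, in hours."""
    gen_hours = submissions_per_day * gen_seconds_each / 3600
    review_hours = submissions_per_day * review_minutes_each / 60
    return {
        "generation_hours": round(gen_hours, 2),
        "review_hours": round(review_hours, 2),
        "asymmetry_factor": round(review_hours / gen_hours, 1),
    }

# One person, 200 AI submissions/day, 30 s to generate, 10 min to review:
burden = review_burden(200, 30, 10)
print(burden)
```

Under these assumptions the submitter spends under two hours a day while moderators need more than 33, a 20x asymmetry before the second spammer shows up.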
Detection tools don’t work. Reported AI-detection accuracy ranges anywhere from 0% to 100% depending on the test, which is to say it is unreliable at best. Watermarks can be stripped, and the content is often perceptually indistinguishable from human work. Stack Overflow’s attempt to ban AI answers led to a volunteer moderator strike because the policy was unworkable: false accusations harmed legitimate contributors. When the company later allowed AI content, it triggered another strike. Only 1.2% of communities have policies addressing AI content, and those that do lack enforcement mechanisms.
There’s no technical solution coming. Before AI, the high cost of producing content acted as a natural “proof of work” filter for quality. Those economics are now broken: the tools that made content generation nearly free have made quality control impossible at scale.
Communities Choose Quality Over Openness
Faced with the choice between quality and openness, communities are choosing quality. This represents a fundamental shift from the “default open” ethos that built the modern web. Four major projects (Zig, NetBSD, GIMP, QEMU) enacted total bans. LLVM instituted a “human-in-the-loop” policy requiring disclosure and accountability. Ghostty auto-closes unattributed AI pull requests.
The trend is accelerating. In early 2026, projects began adding “No AI contributions” clauses to their codes of conduct, and GitHub’s decision to add a “disable PRs” feature acknowledges that the flood can’t be managed with traditional tools. tldraw’s choice to close all external PRs, not just AI ones, shows how AI spam forces nuclear options that hurt everyone.
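An auto-close policy like tldraw’s can be approximated with a small bot. The sketch below is a hedged illustration, not any project’s actual tooling: the trusted-contributor allowlist and the close message are invented for the example, and a real bot would act on these decisions through the GitHub API.

```python
# Sketch of a close-all-external-PRs bot, modeled on the stance the
# article describes. TRUSTED and the messages are illustrative
# assumptions, not any real project's configuration.

from dataclasses import dataclass

TRUSTED = {"alice", "bob"}  # hypothetical known-contributor allowlist

@dataclass
class PullRequest:
    number: int
    author: str

def triage(pr: PullRequest) -> tuple[str, str]:
    """Decide what the bot does with an incoming PR.

    Policy (illustrative): anything from outside the trusted set is
    closed automatically; only known contributors reach human review.
    """
    if pr.author in TRUSTED:
        return ("keep-open", f"#{pr.number}: queued for human review")
    return ("auto-close",
            f"#{pr.number}: external PRs are currently auto-closed")

action, note = triage(PullRequest(101, "mallory"))
print(action, "-", note)  # auto-close - #101: external PRs are currently auto-closed
```

The point of the sketch is the shape of the policy: it never inspects content at all, because content-based detection is exactly what doesn’t work at scale.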
When communities close their doors, they’re not being exclusionary; they’re fighting for survival. Open source was built on trust and collaboration, and AI broke both. Better to be exclusive and alive than open and dead.
What You Can Do: Defending Quality in the Age of AI Slop
There’s no perfect solution, but passivity guarantees community death. Here’s what developers can do:
Support communities that ban or strictly regulate AI contributions. They aren’t gatekeepers; they’re survivors protecting quality standards. Call out AI slop when you encounter it, and don’t normalize low quality through silence.
If you use AI tools, be transparent and stand behind the quality of what you submit. The LLVM approach works: disclose AI use and take responsibility for your contribution. Build “with AI, not by AI,” which means months of work and real-world testing, not one-off generations posted as contributions.
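Disclosure can be made mechanical, for example as a trailer line in the commit message that tooling can check. The trailer name “Assisted-by:” below is a hypothetical convention invented for this sketch, not LLVM’s or any project’s actual requirement:

```python
# Minimal check for an AI-use disclosure trailer in a commit message.
# "Assisted-by:" is a hypothetical trailer name, not an established
# convention of LLVM or any other project.

def has_disclosure(commit_message: str) -> bool:
    """True if any line of the message is an Assisted-by: trailer."""
    return any(
        line.strip().lower().startswith("assisted-by:")
        for line in commit_message.splitlines()
    )

msg = """Fix off-by-one in ring buffer wraparound

Assisted-by: LLM code completion (reviewed and tested by the author)
"""
print(has_disclosure(msg))  # True
```

A check like this could run in a commit-msg hook or CI job, turning “disclose AI use” from an honor-system request into something reviewers can see at a glance.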
Contribute to communities that value human creativity and quality over volume. The shift in consumer preference shows this matters: only 26% now prefer AI content, down from 60% in 2023, and the #supporthumanart hashtag has emerged as a prominent response. Paradoxically, AI slop may be accelerating appreciation for human creativity and authenticity.
Communities die when the members who care about quality check out. If developers who value quality withdraw, only AI slop remains. The future of online communities depends on people actively defending standards and calling out the race to the bottom.
Detection fails. Bans are hard to enforce. Moderation can’t scale. But doing nothing guarantees community death. Communities are right to slam their doors. Quality matters more than openness when openness means drowning in noise.