Redox OS adopted a strict no-LLM policy in February 2026, triggering a 339-comment Hacker News debate that exposed a fundamental rift in open source philosophy. They’re not alone. Gentoo and NetBSD banned AI-generated code in 2024, while Debian spent weeks debating the issue in February 2026 before ultimately deciding not to decide. This isn’t about hating AI; it’s an economic question about who pays for open source sustainability.
Review Becomes Refactor: The Maintainer Burden
LLM-generated code shifts the burden from contributors to maintainers. What previously served as “proof of effort”—understanding a codebase, formatting patches correctly, writing meaningful commits—can now be automated in seconds. The result? Maintainers report that reviewing AI-generated code often requires MORE effort than writing it manually.
The EFF’s February 2026 policy on LLM contributions states plainly: code reviews are “turning into code refactors for maintainers if contributors don’t understand the code they submitted.” When you submit code you don’t understand, you’re asking unpaid volunteers to become your debuggers, teachers, and code reviewers. As one Hacker News commenter put it: “Most cost of code is maintenance. AI cannot help with that. Verbose AI submissions are a huge liability.”
This is the core argument for bans. Open source maintenance is already undervalued and under-resourced. Large pull requests that used to indicate significant thought now appear in minutes with no guarantee the contributor understands what they’ve submitted. The traditional model—high contributor effort, low maintainer effort—has inverted.
The Ban Wave: From Gentoo to Redox OS
Gentoo struck first in April 2024, instituting a complete ban on AI-assisted contributions citing three concerns: copyright, quality, and ethics. Their council policy states that contributions “created with the assistance of Natural Language Processing artificial intelligence tools are forbidden.” NetBSD followed in May 2024, classifying LLM-generated code as “tainted”—code with unclear provenance that requires written approval from core maintainers.
Redox OS’s February 2026 policy combined Developer Certificate of Origin requirements with a strict no-LLM stance, sparking the most intense debate yet. Debian deliberated throughout February 2026, with Lucas Nussbaum proposing formal guidelines around disclosure and understanding requirements. After weeks of discussion, Debian concluded that maintaining the status quo was “the best possible, and least disruptive, outcome for now.”
The EFF took a middle ground: allowing LLM use with three requirements—mandatory disclosure, demonstrated understanding, and human-authored documentation. But the pattern is clear. Multiple major open source projects independently reached similar conclusions about AI contributions being unsustainable. This isn’t one project making a controversial decision. It’s a trend.
How “Vibe Coding” Destroys Community Knowledge
Stack Overflow traffic has plummeted 75% from its 2017 peak, and was down 60% year-over-year as of December 2024. Developers now ask chatbots instead of visiting documentation sites, participating in forums, or engaging with project communities. Academic research quantified a 25% reduction in Stack Overflow activity within six months of ChatGPT’s release, and question volume has dropped 76% since November 2022.
Hackaday calls this “vibe coding”—copy-pasting LLM output without understanding. It’s killing more than just Stack Overflow. When developers bypass documentation and community forums, “user interaction is pulled away from OSS projects.” Mentoring opportunities disappear. The knowledge commons that sustains open source erodes.
This is a tragedy of the commons. Individual contributors maximize personal productivity with AI, but collectively this destroys the community engagement open source depends on. When everyone uses chatbots, nobody contributes to Stack Overflow, documentation sites, or forums. The knowledge base that future AI and humans rely on collapses.
Why “LLMs Are Just Tools” Misses the Point
The most common objection: “LLMs are just tools, like compilers replacing assembly language.” But the analogy fails. Compilers don’t submit pull requests you don’t understand. They compile YOUR code. LLMs generate code you might not comprehend, then YOU submit it to unpaid maintainers for review.
Ted Ts’o argued in Debian’s debate that “gatekeeping contributors who use AI is self-defeating.” But Simon Richter countered with the real concern: “AI removes mentoring opportunities, creating a massive skill gap between ‘gets some results’ and ‘consistently and sustainably delivers results.’” The issue isn’t AI capability; it’s who pays the externalized cost of low-effort contributions.
Yes, bans are largely unenforceable. Gentoo relies on trust, noting they’ll question “weird mistakes” unlikely from humans. But cultural norms matter more than detection. These policies set expectations: If you contribute to open source, understand what you submit and respect that maintainer time is finite and valuable.
What This Means for You
If you use LLMs to contribute to open source, expect more projects to adopt policies—explicit or implicit. Understand every line you submit. Be prepared to debug, modify, and maintain your code. Disclose AI assistance even if not required. Most importantly, recognize that when you submit code you don’t understand, you’re asking finite volunteer resources to subsidize your productivity gains.
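In practice, disclosure can be as lightweight as a commit-message trailer. Here is a minimal sketch of what that might look like: the `Signed-off-by` trailer is the standard one that `git commit -s` appends for Developer Certificate of Origin sign-off, while `Assisted-by` is a hypothetical trailer name used here only for illustration—projects that require disclosure each define their own format, so check the contribution policy first.

```text
fs: fix off-by-one in directory iterator

Reworked the bounds check after reproducing the crash locally.
Drafted with LLM assistance; every line was reviewed, tested,
and is understood by the submitter.

# "Assisted-by" is a hypothetical trailer name; consult each
# project's policy for its required disclosure format.
Assisted-by: <tool name and version>
Signed-off-by: Jane Dev <jane@example.com>
```

A trailer like this costs the contributor one line but lets maintainers triage with full information—exactly the asymmetry these policies are trying to restore.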
The bans aren’t a solution to the underlying problem—undervalued, under-resourced open source maintenance. But they’re a rational defensive mechanism against unsustainable labor extraction. Projects saying “no” are setting boundaries that should have existed all along.
Key Takeaways
- Multiple major open source projects (Gentoo, NetBSD, Redox OS) have banned LLM-generated code, while Debian debated extensively without reaching consensus—this is a trend, not an outlier
- The core issue is maintainer burden: reviewing AI-generated code often requires more effort than writing it manually, turning unpaid volunteers into debuggers and teachers for code that contributors don’t understand
- Stack Overflow traffic has collapsed 75% from its 2017 peak as “vibe coding” destroys the community knowledge-sharing that sustains open source—a classic tragedy of the commons
- Common objections like “LLMs are just tools” miss the economic point: these tools externalize costs from contributors to finite volunteer maintainer capacity
- If you use LLMs to contribute, understand your code thoroughly, disclose AI assistance, and respect that maintainer time is a valuable, finite resource—not a free debugging service

