Debian developers spent February-March 2026 debating whether to accept AI-assisted code contributions, ultimately deciding “not to decide.” Former Debian Project Leader Lucas Nussbaum proposed a general resolution in mid-February with seven conditions for accepting LLM-assisted code. After weeks of intense mailing list debate, the community abandoned the vote entirely, opting to continue handling AI contributions case-by-case under existing policies. That makes Debian the most prominent open source project to publicly grapple with AI code governance since Amazon’s March 2026 outages proved verification isn’t optional.
Debian’s stalemate reveals a systemic problem affecting every open source project. You can’t write policy for “AI contributions” when nobody agrees what the term means, enforcement is impossible without detection tools, and copyright law hasn’t caught up to AI-generated code.
Three Camps, Zero Agreement on AI Contributions
The Debian community split into three irreconcilable camps. Charles Plessy led the copyright faction, calling commercial AI “copyright laundering machines that allow to suck the energy invested in Free Software and transfer it in proprietary works.” He’d vote against current commercial AI usage but wouldn’t oppose generative AI trained on consensual data.
Ansgar Burchardt represented the pragmatist camp, arguing AI is “just another tool” comparable to compilers or linters. Russ Allbery characterized Gentoo’s AI ban as “unenforceable” and preferred reactive rather than proactive policies. Sam Hartman suggested existing application managers could assess trustworthiness without new rules.
Sean Whitton proposed a nuanced middle ground: distinguish between LLM uses. Code review with AI assistance differs fundamentally from submitting AI-generated production code. A blanket policy on “AI contributions” conflates use cases that aren’t comparable.
The community couldn’t reconcile these positions. You can’t reach consensus when one camp sees copyright violations, another sees pragmatic tooling, and a third argues the question itself is poorly framed.
You Can’t Police What You Can’t Detect
Every AI code policy faces an insurmountable barrier: you cannot reliably detect AI-generated code. Gentoo banned AI contributions in April 2024, but Council member Michał Górny admitted the policy’s real purpose is “to make it clear what’s acceptable and what’s not, and politely ask our contributors to respect that.”
Enforcement is honor-based. If maintainers spot unusual mistakes unlikely from human error, they ask questions. That’s the best they can do. QEMU will decline contributions if AI use is “known or suspected”—which means someone confesses or leaves obvious tells. Detection tools like GPTZero don’t work reliably on code, especially post-edited code.
Gentoo’s February 2026 migration to Codeberg highlighted the tension. The project partly moved because GitHub Copilot prompts felt like “nagware” promoting the very tool they’d banned. You can’t enforce policy when the platform itself suggests AI usage.
Without detection tools, formal AI policies are performative. They express values—which matters philosophically—but can’t prevent AI code from entering projects. Debian’s case-by-case approach is arguably more honest than unenforceable bans.
Why Amazon Can Mandate and Debian Can’t
Amazon responded to AI code outages—including a six-hour shopping disruption on March 5, 2026—by mandating senior engineer approval for all AI-assisted changes. Corporate governance works top-down. Amazon controls workflow gates, CI/CD pipelines, and code review systems. Senior approval becomes a technical enforcement point, not just policy.
Debian’s consensus model requires volunteer agreement. You can’t force contributors to follow unenforceable rules. Corporate employers can fire people who violate policy. Open source projects can only reject patches—but first they’d need to detect AI usage, which brings us back to the enforcement problem.
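Amazon hasn’t published its tooling, but the kind of gate corporate control makes possible is easy to sketch. Below is a hypothetical pre-merge check in the spirit described above: it assumes contributors declare AI assistance in a commit trailer (still an honor-based step) and then blocks the merge unless a senior engineer has signed off. The trailer names, roster, and addresses are invented for illustration.

```python
# Hypothetical pre-merge gate in the spirit of Amazon's mandate. The
# trailer names ("AI-Assisted", "Approved-by") and the senior roster are
# invented for illustration; Amazon's actual tooling is not public.
SENIOR_ENGINEERS = {"alice@example.com", "bob@example.com"}

def parse_trailers(message: str) -> dict:
    """Collect 'Key: value' trailer-style lines from a commit message."""
    trailers: dict[str, list[str]] = {}
    for line in message.splitlines():
        key, sep, value = line.partition(": ")
        if sep and " " not in key:
            trailers.setdefault(key, []).append(value.strip())
    return trailers

def merge_allowed(message: str) -> bool:
    """Pass non-AI commits; require senior sign-off when AI use is declared."""
    trailers = parse_trailers(message)
    if trailers.get("AI-Assisted", ["no"])[0].lower() != "yes":
        return True  # no declared AI involvement: the honor system applies
    # Extract the email from each "Name <email>" approver entry.
    approvers = {a.rsplit("<", 1)[-1].rstrip(">")
                 for a in trailers.get("Approved-by", [])}
    return bool(approvers & SENIOR_ENGINEERS)
```

Note the asymmetry: the gate can only enforce what contributors declare. A corporation can discipline an employee who omits the trailer; a volunteer project can’t.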
This governance gap isn’t a bug; it’s fundamental to how open source operates. When faced with AI tooling, consensus models break down because values diverge. Amazon solves this with hierarchy. Debian can’t. Every community-driven project faces the same structural limitation.
Developer Certificate of Origin Hits AI Roadblock
Projects using the Developer Certificate of Origin (DCO), including the Linux kernel and QEMU, face a legal conflict with AI-generated code. The DCO requires contributors to attest “I created this work” or “I have rights to submit this.” But the copyright status of AI output is legally unclear.
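In practice, the DCO attestation travels as a Signed-off-by trailer that `git commit -s` appends to each commit, and projects check for its presence mechanically even though the attestation itself remains honor-based. A minimal sketch of such a check, with a deliberately simplified regex (real checkers validate far more):

```python
# Minimal sketch of the sign-off check projects commonly run in CI.
# `git commit -s` appends the "Signed-off-by: Name <email>" trailer that
# carries the DCO attestation; this simplified pattern only verifies that
# a well-formed trailer exists somewhere in the commit message.
import re

SIGNOFF_RE = re.compile(r"^Signed-off-by: .+ <[^<>@ ]+@[^<>@ ]+>$",
                        re.MULTILINE)

def has_dco_signoff(commit_message: str) -> bool:
    """True if at least one well-formed Signed-off-by trailer is present."""
    return bool(SIGNOFF_RE.search(commit_message))
```

The check confirms the trailer exists; it cannot confirm the attestation is true. That is exactly the gap AI-generated code exposes: a contributor can sign off on output whose authorship no court has yet defined.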
QEMU banned AI contributions specifically because “in many jurisdictions AI-generated code is not recognized as a copyright-protected work. Therefore, it is legally difficult to demonstrate the existence of a human author.” If someone claims your AI output infringes their code, you can’t prove “this was generated by an AI, carries no copyright, and is unrelated to your work.”
The EU AI Act adds regulatory pressure, requiring human oversight by August 2, 2026. Open source projects face compliance questions without legal clarity on AI authorship. The legal foundation of open source contribution—DCO, copyright licenses, attribution—hasn’t adapted to AI output.
Projects can’t vote their way past this. Courts need to clarify copyright status. Legislatures need to update IP law. Debian’s stalemate is rational given legal uncertainty.
Open Source AI Code Policies: The Current Landscape
Between 70 and 73 open source organizations had developed AI policies as of February 2026, per RedMonk analysis. The ecosystem is fragmenting into bans, permissive policies, and indecision.
Gentoo (April 2024) and QEMU (June 2025) banned AI contributions citing copyright, quality, and DCO incompatibility. NetBSD followed Gentoo’s lead. The Linux Foundation took the opposite approach: AI-generated code is allowed if the tool’s terms of service don’t conflict with open source licenses and contributors verify compliance.
Debian joins the undecided camp. No formal policy means questions get handled individually as they arise—which may be the only viable approach until detection tools exist and copyright law catches up.
All policies rely on honor-based compliance. Nobody has solved enforcement. Contributors using GitHub Copilot for autocomplete face rejection from Gentoo but acceptance from Linux Foundation projects. This inconsistency creates confusion and may drive developers toward permissive projects.
Key Takeaways
- Debian’s February-March 2026 debate ended without a general resolution vote, with the community deciding to continue handling AI contributions case-by-case rather than adopting formal policy
- Consensus failed because three camps couldn’t reconcile positions: copyright concerns (calling AI “copyright laundering”), pragmatism (AI as “just another tool”), and nuanced distinctions (code review vs production code generation)
- Enforcement remains impossible without reliable AI detection tools—all existing policies (Gentoo’s ban, Linux Foundation’s permissive approach) rely on honor-based compliance
- Corporate governance (Amazon’s mandatory senior approval) works through technical controls and hierarchy, while open source consensus models break down when values diverge over AI tooling
- Legal uncertainty blocks progress: Developer Certificate of Origin requires human authorship, but AI output’s copyright status is unclear in most jurisdictions, with EU AI Act August 2026 deadline adding compliance pressure
The open source community can’t agree on AI code policies because the question itself is broken. Definitions don’t exist, enforcement mechanisms aren’t available, and copyright law lags behind technology. Debian’s non-decision may be the most rational response until these foundational issues resolve.

