Dan Blanchard just sparked open source’s biggest ethics crisis in years. On March 5, the 12-year maintainer of chardet—a Python encoding library used by 130 million projects monthly—released version 7.0.0 as an MIT-licensed “clean room” implementation, shifting from LGPL 2.1. His secret weapon? Claude AI. Feed the API and test suite to an LLM, get “independent” code, relicense to your preference. Legal? Unclear. Legitimate? Hong Minhee’s viral essay (167 points, 160 comments on Hacker News) argues hell no: “Law sets a floor; clearing it does not mean the conduct is right.” If chardet’s relicensing stands, 40 years of GPL protections evaporate. Bruce Perens warns: “The entire economics of software development are dead, gone, over, kaput!”
The Maintainer Knowledge Problem Destroys Clean Room Defense
Blanchard claims “clean room” status by never looking at chardet’s source code during the AI reimplementation. But he maintained the project for 12 years. True clean room implementation—established in the NEC v. Intel case—requires the developer to have zero prior exposure to the original codebase. It’s legally and logically impossible for a long-time maintainer to claim this.
Here’s how real clean room works: Team A examines the original system and writes a specification. A lawyer reviews the spec. Team B—isolated, screened, with no connection to the original code—implements from the spec alone. Blanchard was Team A AND Team B. He extracted the API and test suite (Team A work), then prompted AI to implement (Team B work). The required separation doesn’t exist. Simon Willison’s analysis nails it: You can’t claim you don’t know something you’ve intimately known for 12 years. The AI didn’t provide clean room protection—it provided plausible deniability that doesn’t withstand scrutiny.
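The separation requirement above can be modeled as a trivial check. This is a hypothetical sketch for illustration only, not any real legal test; all names are invented:

```python
# Illustrative model of the clean-room separation described above.
# Clean room holds only if the implementers wrote none of the spec
# AND never saw the original source. All inputs are hypothetical.

def is_clean_room(spec_authors: set[str],
                  implementers: set[str],
                  saw_original_source: set[str]) -> bool:
    """True only if implementers are fully isolated from the original."""
    return (implementers.isdisjoint(spec_authors)
            and implementers.isdisjoint(saw_original_source))

# Classic clean room: separate, screened teams.
print(is_clean_room({"team_a"}, {"team_b"}, {"team_a"}))  # True

# The chardet scenario: one person plays both roles after
# 12 years of exposure to the original code.
print(is_clean_room({"blanchard"}, {"blanchard"}, {"blanchard"}))  # False
```

The second call fails on both conditions, which is the whole point: no amount of AI in the middle changes who the implementer is.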
The Vercel Hypocrisy: Everyone Wants Control
Vercel AI-reimplemented GNU Bash as MIT-licensed “just-bash,” championing open sharing. Then Cloudflare did the same to Vercel’s MIT-licensed Next.js, creating “vinext” in one week with $1,100 in AI tokens. Vercel reacted with outrage, attacking it as insecure and experimental. When YOUR code gets forked, suddenly “sharing is good” rhetoric disappears.
Cloudflare’s blog post details how one engineer rebuilt 94% of the Next.js API using AI. Vercel’s response wasn’t celebration—it was accusations of security risks and buried disclaimers that vinext was experimental. Hong Minhee nails the hypocrisy: all licenses embed social expectations. Copyleft makes them explicit (legal requirements); permissive licenses keep them implicit (social norms like “don’t compete with us”). When implicit norms break, permissive advocates react identically to “restrictive” copyleft defenders. Everyone wants control; the difference is honesty about it.
Bruce Perens’ Alarm: AI Copyleft Becomes Unenforceable
Bruce Perens, who wrote the Open Source Definition, declares the entire software licensing model is collapsing. “Copyleft code like the GPL heavily depends on copyrights and friction to enforce it,” he told The Register. “But because it’s fundamentally in the open, with or without tests, you can trivially rewrite it these days.” If AI can reimplement any GPL code as MIT in a week, copyleft protections that built Linux, GCC, and countless projects evaporate.
Perens proposes “Post-Open” licensing: shift from copyright to contractual revenue-sharing. Companies earning over $5 million annually pay 1% revenue to an admin organization (like ASCAP for music), distributed to contributors. Why? Because copyright-based licensing assumes human authorship and friction—AI destroys both assumptions. This isn’t academic hand-wringing. If Perens is right, the entire open source sustainability model collapses. Maintainers lose even copyleft protections requiring contribution. Corporate “take without giving” becomes trivial.
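The proposed revenue split is simple arithmetic. A minimal sketch, assuming the threshold and rate as reported ($5M floor, 1% of revenue); the function is purely illustrative, not part of any actual Post-Open specification:

```python
# Hedged sketch of the Post-Open revenue model as described:
# companies earning over $5M/year pay 1% of revenue to an admin
# organization (ASCAP-style), distributed to contributors.

POST_OPEN_THRESHOLD = 5_000_000   # annual revenue floor (USD), per the proposal
POST_OPEN_RATE = 0.01             # 1% of revenue

def post_open_fee(annual_revenue: float) -> float:
    """Fee owed under the proposed Post-Open model."""
    if annual_revenue <= POST_OPEN_THRESHOLD:
        return 0.0
    return annual_revenue * POST_OPEN_RATE

print(post_open_fee(2_000_000))   # 0.0 (below threshold, pays nothing)
print(post_open_fee(50_000_000))  # 500000.0
```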
The Legal Paradox: Derivative or Public Domain, No Middle Ground
AI-generated code faces a bizarre copyright paradox. If the AI “learned” from LGPL chardet to produce the new version, it’s arguably a derivative work and must remain LGPL. If it’s truly independent, U.S. courts and the Copyright Office have held that purely AI-generated material cannot be copyrighted, because it lacks human authorship (Thaler v. Perlmutter). Result: the new code is either an infringing derivative work or uncopyrightable public domain. There’s no legal path to MIT licensing that’s both lawful and copyrightable.
Feeding prompts with original LGPL artifacts—the API and test suite—arguably destroys “clean room” separation, making the output a derivative work bound by the original license. No court has ruled on this. Copyright law wasn’t built for AI-generated code. Developers and companies using AI tools face unresolved compliance risks. The “it’s legal” defense may not survive judicial scrutiny.
What This Means for You: Daily Ethical Decisions
Every developer using AI coding tools now faces Hong Minhee’s question daily: Legal vs legitimate? You CAN ask Claude to reimplement GPL libraries as MIT. You CAN use Copilot output without checking licenses. Should you? Werner Vogels (AWS CTO) coined “verification debt” for reviewing AI code—but verification extends beyond functionality to ethics. Who wrote the original? What license? What social contract are you breaking?
ByteIota covered the AI Verification Bottleneck: 96% of developers don’t fully trust AI code, yet only 48% always verify it. Time pressure, convenience, and workload override caution. The same dynamic applies to licensing ethics—developers skip due diligence even when they know they should check. The chardet controversy isn’t distant drama; it’s a mirror showing what happens when “just because you can” becomes normalized. The community must collectively decide: is AI license laundering acceptable? If not, how do we enforce norms when the law is ambiguous? Don’t hide behind “the AI did it.”
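Due diligence can start with something as small as checking a dependency’s declared license string before feeding its code to an AI tool. A minimal sketch; the marker list is illustrative and deliberately incomplete, and a real check would consult the actual license text or SPDX metadata:

```python
# Hypothetical pre-flight check: flag license identifiers whose terms
# may constrain AI-assisted reimplementation or relicensing.
# The marker list is an illustrative subset, not legal advice.

COPYLEFT_MARKERS = ("GPL", "LGPL", "AGPL", "MPL", "EPL")

def needs_license_review(license_str: str) -> bool:
    """True if the declared license suggests copyleft obligations."""
    s = license_str.upper()
    return any(marker in s for marker in COPYLEFT_MARKERS)

for lic in ("MIT", "LGPL-2.1-or-later", "Apache-2.0", "GPL-3.0-only"):
    print(lic, "->", "review" if needs_license_review(lic) else "ok")
```

A ten-line check like this won’t settle the derivative-work question, but it makes the “I didn’t know the license” defense impossible.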

