The MeshCore open-source project announced a team split today after discovering that core member Andy Kirby had secretly used Claude Code to rewrite most of the codebase, then filed for the MeshCore trademark on March 29 without telling anyone. The team called it "a slap in the face": he kept it secret that the code was "majority vibe coded," and when confronted about both the undisclosed AI code and the trademark grab, communication collapsed.
This is the first major open-source breakup explicitly caused by undisclosed AI-generated code combined with IP theft. The timing matters: it happened three weeks after Claude Code's source leak revealed its "undercover mode" that strips AI attribution from commits.
The Betrayal: AI Code Plus Trademark Theft
Andy Kirby extensively used Claude Code, Anthropic's AI coding agent, to rewrite major components of the MeshCore ecosystem: the standalone devices, the mobile app, the web flasher, and the web config tools. He never told the team. They discovered the secret through internal Discord discussions about AI trust, then found out he had filed the MeshCore trademark on March 29 without disclosure.
From today’s MeshCore blog post: “It’s been a slap in the face to the team that have worked so hard on this project, to have an insider team up with a robot and a lawyer.”
The core team launched meshcore.io as their new official site since Andy controls the original meshcore.co.uk domain and Discord server. They’re continuing development with 38,000 network nodes and 100,000 active users.
Claude Code’s “Undercover Mode” Enabled This
Three weeks ago, on March 31, Claude Code’s entire source leaked — 512,000 lines of TypeScript exposed via a missing .npmignore file. The leak revealed “undercover.ts,” roughly 90 lines of code that instructs the AI to hide its involvement.
The system prompt is explicit: “Your commit messages, PR titles, and PR bodies MUST NOT contain ANY Anthropic-internal information. Do not blow your cover.” Claude Code strips “Co-Authored-By” attribution from commits automatically. It forbids mentioning “Claude Code” in any external contribution. The tool was designed to hide AI authorship.
MeshCore used Claude Code. The leak’s timing isn’t coincidental — this tool is built to enable exactly what happened at MeshCore. When your AI coding assistant has an “undercover mode,” how many open-source projects have undisclosed AI contributions right now?
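To see why stripped trailers matter: the "Co-Authored-By" line in a commit message is the only machine-readable record of AI involvement, so once it's removed there is nothing left to audit. A minimal sketch with a throwaway repo (the file names and commit messages are hypothetical; the trailer convention itself is the real GitHub co-author format):

```shell
# Throwaway repo: one commit that discloses AI co-authorship, one that doesn't.
git init -q attribution-demo && cd attribution-demo
git config user.email dev@example.com
git config user.name Dev

echo "radio init" > main.c && git add main.c
# Disclosed: standard co-author trailer in the commit body
git commit -q -m "add radio init" -m "Co-Authored-By: Claude <noreply@anthropic.com>"

echo "tx power" >> main.c && git add main.c
# Undisclosed: same kind of AI-written change, trailer stripped -- no trace left
git commit -q -m "tune tx power"

# Auditing trailers only surfaces the disclosed commit
git log --format=%B | grep -c "Co-Authored-By: Claude"   # prints 1, not 2
```

A zero from that grep proves nothing: it only means no one left a trailer behind, which is exactly the state undercover mode produces.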
The Quality Problem: Vibe Coding’s Hidden Costs
Vibe coding — coined by OpenAI co-founder Andrej Karpathy in February 2025 — means describing what you want in natural language and letting AI generate the code. Karpathy said to “embrace the vibes, forget the code even exists.” That’s the opposite of code review.
CodeRabbit’s December 2025 analysis of 470 open-source pull requests found AI co-authored code contains 1.7× more major issues, 2.74× more security vulnerabilities, and 75% more misconfigurations compared to human-written code.
MeshCore’s codebase is “majority vibe coded” according to the team. That means 100,000 users are potentially running code with 2.74× higher security vulnerability rates. Was any of it properly reviewed? The team didn’t even know it was AI-generated until months later.
AI code needs MORE scrutiny, not less. Vibe coding culture promotes the opposite.
Legal Gray Area: Can You Trademark AI-Generated OSS?
Andy's trademark claim may not be legally valid. Trademark and copyright are distinct protections: a trademark covers the project's name, while copyright covers the code itself, but ownership of the underlying work colors any claim to the brand built on it. The U.S. Copyright Office maintains that AI-generated work lacks human authorship and doesn't qualify for copyright protection. In March 2026, the Supreme Court denied challenges, affirming that AI isn't a legal person capable of holding copyright.
If the code is "majority vibe coded," Andy may not legally own it. Filing a trademark on AI-rewritten collaborative open-source code is ethically questionable at minimum, and potentially legally invalid.
The Linux kernel established formal AI code disclosure policies in early 2026, requiring an "Assisted-by" tag for AI contributions. The human submitter takes full legal responsibility for bugs. OpenJDK has similar interim policies. The EU AI Act's Article 50 transparency requirements become enforceable on August 2, 2026.
This is the first test case of AI code ethics meeting real-world IP disputes. The legal system hasn’t caught up to vibe coding yet.
ByteIota’s Take: AI Disclosure is Mandatory
AI code disclosure isn’t optional in collaborative projects — it’s essential. This isn’t about being anti-AI. Claude Code, GitHub Copilot, and similar tools can accelerate development when used transparently.
But when you secretly replace your teammates’ work with AI output and then claim ownership via trademark, you’ve violated the fundamental trust that makes open source work. The MeshCore team got it right: “An insider team up with a robot and a lawyer” is not ethical open source.
If you use AI in collaborative open source, you must:
- Disclose it to your team BEFORE replacing their work
- Review AI code carefully (remember: 2.74× more security vulnerabilities)
- Tag commits appropriately (“Assisted-by: Claude Code”)
- Never claim sole ownership of AI-rewritten collaborative work
- Take full responsibility for code quality (human signoff required)
The Linux kernel has the right model. Disclosure via “Assisted-by” tags. Human responsibility for all bugs. Quality standards maintained. That’s how you use AI transparently in open source.
What Happens Next
The MeshCore team continues at meshcore.io with their 38,000 nodes and 100,000 users. New GitHub repository, new Discord, official documentation all relocated. They’re maintaining the project while Andy controls the old domain.
This won’t be the last AI code controversy. Vibe coding was Collins English Dictionary’s Word of the Year for 2025. As it becomes mainstream, more teams will face similar trust issues.
The question is whether open source will adopt transparency norms before more projects split. MeshCore is the cautionary tale. Don’t let your project become the next one.