Vibe Kanban, an open-source orchestration platform for AI coding agents, is trending #4 on GitHub today with +825 stars after solving a workflow bottleneck developers didn’t see coming. With 84% of developers now using AI coding tools, the productivity problem has shifted from writing code to managing the 2-5 minute wait times per agent task—downtime that typically devolves into distraction. Vibe Kanban transforms that idle time into productive orchestration, enabling simultaneous execution of Claude Code, Cursor, GitHub Copilot, and other agents via isolated git worktrees.
From Code Writer to AI Orchestrator
The developer workflow has fundamentally changed. Where developers once spent the majority of their time writing code, they now spend it planning tasks, reviewing AI-generated output, and managing execution across multiple agents. Stack Overflow’s 2025 Developer Survey confirms 84% adoption of AI coding tools, but adoption created a new bottleneck: agent wait time.
“We were feeling pretty useless working synchronously with coding agents,” explains Louis, BloopAI’s co-founder, in the project’s Hacker News discussion. “They take 2-5 minutes to finish a task and we would start doomscrolling.” That frustration led to Vibe Kanban’s core insight: developers don’t need faster agents—they need parallel orchestration.
The results speak for themselves. Luke Harries, Growth Lead at Eleven Labs, calls it the “biggest increase in productivity since Cursor.” Research on parallel AI workflows shows heavy AI developers save 6-7 hours weekly when properly orchestrating multiple agents.
Related: AI Coding Trust Crisis: 84% Adoption, Only 33% Confidence
How Git Worktrees Solve the Parallelization Problem
Vibe Kanban’s technical foundation is git worktrees—a Git feature that creates multiple working directories from a single repository. Each worktree operates on its own branch in an isolated directory while sharing the underlying .git repository data. This isolation prevents the file conflicts that would normally occur when multiple agents edit the same codebase simultaneously.
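For readers unfamiliar with the feature, here is what the underlying Git commands look like. The directory and branch names are illustrative; Vibe Kanban runs the equivalent steps automatically.

```bash
# Create two isolated working directories from the same repository.
# Each gets its own branch and checkout, but shares the .git object store.
git worktree add ../worktrees/auth -b feature-auth
git worktree add ../worktrees/viz  -b feature-viz

# List every worktree attached to this repository
git worktree list

# Remove a worktree once its branch has been merged
git worktree remove ../worktrees/auth
```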
The traditional sequential workflow looks like this: check out the feature-auth branch, wait roughly 3 minutes for Claude to implement, return to main, check out the feature-ui branch, wait another 3 minutes for Cursor. Total time: 6+ minutes, most of it idle.
In contrast, Vibe Kanban’s parallel approach creates isolated worktrees automatically—one agent refactors authentication in worktrees/auth/ while another builds data visualization in worktrees/viz/. Both run simultaneously. The developer reviews diffs in the Kanban UI when complete, typically reducing total time to 3-4 minutes.
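A rough sketch of what that parallel approach amounts to under the hood is below. The agent invocations are assumptions: Claude Code has a non-interactive print mode (the -p flag), while the second command is a stand-in for whichever agent you assign. Vibe Kanban automates these steps; the sketch only shows the manual equivalent.

```bash
# Run two agents concurrently, each confined to its own worktree.
# "claude -p" is Claude Code's non-interactive (print) mode; the second
# agent command is a placeholder -- substitute whichever CLI you use.
(cd worktrees/auth && claude -p "Refactor the authentication module") &
(cd worktrees/viz  && your-agent-cli "Build the data visualization view") &

# Wait for both background jobs, then review each worktree's diff
wait
git -C worktrees/auth diff main
git -C worktrees/viz  diff main
```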
Performance testing shows ~4 concurrent agents is the sweet spot before hardware constraints kick in. Beyond four, CPU and memory contention cause diminishing returns.
Orchestration Features: Kanban Meets AI Agents
The interface is straightforward: a Kanban board with TO DO, IN PROGRESS, REVIEW, and DONE columns. Developers create tasks, assign them to agents (Claude Code, Cursor, Copilot, Gemini, or Amp), and the platform handles worktree creation and isolation.
Each agent type brings different strengths. Claude Code scores 80.9% on SWE-bench, making it the top choice for complex refactoring. Cursor excels at multi-file autonomous coding through Composer mode. Copilot delivers 55% productivity improvements on routine development tasks. Consequently, Vibe Kanban lets developers match agents to task types strategically.
The code review workflow mirrors pull request reviews. When agents complete tasks, the built-in diff tool displays changes for inspection, editing, and approval. Approved changes can be merged automatically and submitted as GitHub PRs with one click. Moreover, the platform exposes a Model Context Protocol (MCP) server, enabling agents to autonomously create their own tickets based on work insights.
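For orientation, MCP servers are typically registered with the agent’s client. The sketch below uses Claude Code’s mcp add command; the Vibe Kanban launch command and flag are assumptions made for illustration, so check the project’s documentation for the actual invocation.

```bash
# Hypothetical registration of Vibe Kanban's MCP server with Claude Code.
# The launch command and --mcp flag are assumptions -- consult the docs.
claude mcp add vibe-kanban -- npx vibe-kanban --mcp
```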
The project’s development itself demonstrates AI orchestration: BloopAI reports that 80% of Vibe Kanban was built using Amp, another AI coding agent. The team is, in effect, dogfooding its own tool.
Privacy Concerns and Code Quality Skepticism
The Hacker News discussion (199 points, 155 comments) reveals both enthusiasm and legitimate concerns. The biggest controversy centered on privacy: initial versions included opt-out telemetry that collected email addresses and GitHub usernames. “This really strikes me as something that should be opt in,” noted commenter gpm, citing GDPR requirements for personally identifiable information. BloopAI responded quickly, merging PR #146 to switch to opt-in telemetry within an hour.
Code quality concerns run deeper. Multiple developers questioned whether parallelization genuinely improves outcomes or simply increases review burden. “Code quality degrades with more agents,” warned one commenter. The skepticism is grounded: current AI coding reliability isn’t sufficient for unsupervised parallel execution. Every agent output needs human review.
The honest assessment: Vibe Kanban accelerates code generation but doesn’t eliminate quality gates. You’re trading idle time for review time. Whether that’s a net positive depends on your review capacity.
Related: Accenture’s $1B Claude Bet Challenges GitHub Copilot Dominance
When Vibe Kanban Makes Sense
The tool shines when you’re already comfortable with AI coding agents, have 3+ parallel tasks that could execute simultaneously, and can handle increased code review volume. Teams shipping quarterly roadmaps in 3-4 weeks typically fit this profile.
Skip it if you’re still learning AI coding tools—master single-agent workflows first before adding orchestration complexity. Also avoid for simple projects (single-file changes don’t benefit from parallelization), limited review capacity (4 parallel agents = 4x review burden), or highly interdependent tasks where parallel execution creates merge conflict nightmares.
Best practices from early adopters: limit to 4 concurrent agents, create task templates for common workflows, cherry-pick the easier 50% of your backlog for agent execution (reserve complex work for manual development), and review diffs immediately after task completion rather than batching.
Installation is simple: npx vibe-kanban (requires Node.js 18+). The project is fully open source (Apache-2.0 license) with 7,893 GitHub stars and active development (145 releases, latest v0.0.142).
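The one-liner, exactly as the project documents it (Node.js 18 or newer required):

```bash
# Check your Node.js version (18+ required), then launch Vibe Kanban
node --version
npx vibe-kanban
```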
Key Takeaways
- Developer role evolution is real: 84% AI coding adoption shifted the bottleneck from writing to orchestrating.
- Git worktrees enable clean parallelization: Isolated working directories prevent agent conflicts without repository duplication.
- Productivity gains are significant but nuanced: 6-7 hours saved weekly for heavy AI developers, but review burden increases proportionally.
- Code quality concerns are valid: Current AI reliability requires human review gates—parallelization accelerates generation, not quality assurance.
- Strategic agent matching matters: Claude for refactoring (80.9% SWE-bench), Cursor for multi-file work, Copilot for routine tasks.