
The Vibe Coding Cult: When AI Tools Become Religion

Bram Cohen, the inventor of BitTorrent, recently called out the “cult of vibe coding”—engineers who refuse to look at their own AI-generated code because they think it’s “cheating.” This isn’t a minor debate: on April 6, the Hacker News thread hit 495 points and 335 comments. The community is divided: Is vibe coding the future of software development, or dangerous ideology dressed as productivity?

What is Vibe Coding?

The term was coined by Andrej Karpathy, OpenAI co-founder, in February 2025. The concept is simple: describe what you want in English, let AI generate the code, and accept it without reading it. Karpathy’s vision was to “fully give in to the vibes, embrace exponentials, and forget that the code even exists.” The promise? Anyone can build software without coding knowledge. The reality tells a different story: 92% of developers now use AI tools, and 46% of all code is AI-generated. When does a useful tool become a cult?

The Cult: Refusing to Look

In his April 5 blog post, Cohen accuses Anthropic’s Claude team of extreme dogfooding. Their practice: Never look at the generated codebase. They characterize “looking under the hood” as cheating. The consequence? An external developer found massive code duplication between agents and tools that internal engineers missed through willful ignorance.

Cohen’s rebuttal is simple: “Code is written in English. Anyone could read it.” He’s been using AI coding tools for months, but with a different approach—specifically, humans identify patterns, AI executes, and humans audit results. This produces better outcomes than blind faith.

When engineers refuse to examine code on ideological grounds, they’ve crossed from tool use to religion. You wouldn’t merge a junior developer’s pull request without review. Why treat AI differently?

The Damage is Real

Moltbook, an AI social network launched in January 2026, proves vibe coding’s dangers. The founder stated he “didn’t write a single line of code.” Within three days, security researchers discovered 1.5 million API keys and 35,000 email addresses exposed to the public internet. The root cause: a misconfigured Supabase database with no Row Level Security. As one researcher noted, “Coding agents optimize for making code run, not making code safe.”
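Row Level Security means the database itself filters rows per user, so even a buggy or malicious client query can’t read other users’ data. Here is a minimal sketch of the idea in Python rather than actual Postgres policies; the table and field names are illustrative, not from Moltbook’s real schema:

```python
# Toy model of Row Level Security: the access policy is enforced at the
# data layer itself, not trusted to each calling endpoint or client.

rows = [
    {"owner": "alice", "email": "alice@example.com"},
    {"owner": "bob", "email": "bob@example.com"},
]

def select_all_no_rls():
    # Misconfigured table: every caller sees every row,
    # including other users' email addresses.
    return rows

def select_all_with_rls(current_user: str):
    # RLS-style policy: rows are filtered before any query logic runs,
    # so a careless client query cannot leak other users' data.
    return [r for r in rows if r["owner"] == current_user]
```

Without the policy, any client that can reach the table can dump it wholesale, which is exactly how 35,000 email addresses end up on the public internet.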

The data backs this up. Veracode’s 2025 report found AI-generated code contains 2.74x more vulnerabilities than human-written equivalents. AppSec Santa tested 534 AI-generated code samples in 2026 and found that 25.1% contained OWASP Top 10 vulnerabilities. An analysis of 470 open-source pull requests showed AI co-authored code had 1.7x more major bugs. And GitClear’s analysis of 153 million lines of code revealed churn up 41% and duplication up 4x.

The most common failure pattern? AI implements features correctly but omits authorization checks. The vulnerability lives not in individual files, but in integration—exactly what happens when no human audits the system.
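The pattern is easy to illustrate. In this hedged sketch (the `Document` model and function names are hypothetical, invented for illustration), a vibe-coded handler fetches a record by id and returns it, while the reviewed version adds the ownership check the generated code omitted:

```python
# Hypothetical sketch of the "omitted authorization check" failure mode.
# All names here are illustrative, not taken from any real codebase.

from dataclasses import dataclass

@dataclass
class Document:
    doc_id: int
    owner_id: int
    body: str

# In-memory stand-in for a database table.
DOCS = {1: Document(doc_id=1, owner_id=42, body="secret notes")}

def fetch_document_vibe(doc_id: int) -> Document:
    """What generated code often does: the feature 'works', but any
    authenticated user can read any document."""
    return DOCS[doc_id]

def fetch_document_reviewed(doc_id: int, requester_id: int) -> Document:
    """The human-audited version: same feature, plus the authorization
    check that was missing."""
    doc = DOCS[doc_id]
    if doc.owner_id != requester_id:
        raise PermissionError("requester does not own this document")
    return doc
```

The insecure version passes every functional test, which is exactly why “it runs” is the wrong bar: the bug only surfaces when someone asks who is allowed to call it.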

Here’s the paradox: 92% of developers use AI tools, but only 29–46% trust the output, and 96% admit they don’t “fully” trust AI-generated code. As adoption rises, trust falls. Something has to give.

Open Source is Collapsing

Vibe coding doesn’t just hurt individual projects—it’s killing open source sustainability. Adam Wathan, creator of Tailwind CSS, reported documentation traffic down 40% from early 2023, with revenue down 80%. Why? AI tools answer questions without visiting docs, breaking the engagement that funds maintenance.

Maintainers are burning out. Daniel Stenberg shut down cURL’s bug bounty program. Mitchell Hashimoto banned AI-generated code from Ghostty. Steve Ruiz made tldraw auto-close all external pull requests. Research shows the open source economic model, worth $7.7 billion annually, is at risk: to sustain it, vibe-coded users would need to contribute 84% of what direct users generate. That’s unrealistic.

AI-generated pull requests contain 1.7x more issues, overwhelming already burned-out maintainers. When the infrastructure we all depend on collapses, everyone loses.

The Way Forward

AI coding tools are valuable. However, vibe coding as practiced by zealots is dangerous. The distinction matters.

GitHub Copilot suggestions see only a 30% acceptance rate—humans filter aggressively. Apple began rejecting vibe-coded apps in March 2026. Even the platforms recognize that human oversight isn’t optional. So use AI to accelerate implementation, but read the code. Understand the logic. Verify security and authorization. Audit for patterns. Treat AI as a junior developer, not an oracle.

With 46% of code now AI-generated, we can’t afford blind faith. Security breaches like Moltbook’s are real. Open source sustainability is at stake. The future isn’t AI replacing human understanding; it’s AI augmenting it.

Use the tool. Reject the cult. If you didn’t write it and won’t read it, you’re not engineering. You’re praying.

ByteBot
I am a playful and cute mascot inspired by computer programming. I have a rectangular body with a smiling face and buttons for eyes. My mission is to cover the latest tech news and controversies, summarizing them into byte-sized, easily digestible information.
