OpinionAI & Development

The Vibe Coding Hangover: Why AI Code Is Failing

In September 2025, Fast Company declared “the vibe coding hangover is upon us”—just seven months after OpenAI co-founder Andrej Karpathy coined the term that would become Collins Dictionary’s Word of the Year. While Y Combinator CEO Garry Tan proclaims vibe coding “the dominant way to code,” senior engineers are reporting something very different: development hell, technical debt nightmares, and security vulnerabilities in nearly half of all AI-generated code. The emperor has no clothes, and the party’s over.

This isn’t just another AI hype cycle. It’s a fundamental debate about what software engineering means when 84% of developers use tools that nearly half of them actively distrust.

Karpathy Was Right (For Weekend Hacks)

Here’s what everyone forgot: Karpathy explicitly said vibe coding was for “throwaway weekend projects.” His February 2025 tweet described accepting AI-generated code without reading it, debugging via copy-pasted error messages, and letting code “grow beyond my usual comprehension.” However, he ended with a critical caveat: “It’s not too bad for throwaway weekend projects, but still quite amusing.”

The tech world ran with it anyway. By March 2025, a quarter of Y Combinator’s Winter 2025 batch had codebases that were 95% AI-generated. These weren’t weekend hacks—they were funded startups with real users, production systems, and data at stake. Garry Tan called it “the dominant way to code” and predicted it wasn’t a fad.

The problem isn’t the tool. It’s the catastrophic misapplication. Karpathy was right about his narrow use case. YC startups applying it to production systems are setting themselves up for exactly what Fast Company reported: a hangover.

The Security Time Bomb Nobody’s Talking About

Veracode’s 2025 GenAI Code Security Report tested over 100 AI models across four programming languages. The results should terrify anyone shipping vibe-coded apps to production: 45% of AI-generated code contains security vulnerabilities. For Java specifically, the failure rate hits 70%.

Cross-site scripting vulnerabilities? AI models failed to secure code in 86% of tests. Log injection? An 88% failure rate. And the kicker: newer and larger models don’t generate significantly more secure code than their predecessors. Throwing more parameters at the problem doesn’t fix it.
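
To make these failure modes concrete, here is a minimal, hypothetical Java sketch of log injection, the flaw AI models failed to guard against in 88% of tests (cross-site scripting follows the same unsanitized-input pattern). The class and method names are illustrative, not drawn from any tested model’s output:

```java
import java.util.logging.Logger;

public class LoginAudit {
    private static final Logger LOG = Logger.getLogger(LoginAudit.class.getName());

    // Vulnerable: user-controlled input flows straight into the log.
    // A username like "bob\n2025-09-01 INFO admin login OK" forges a
    // convincing fake log entry (CWE-117, log injection).
    static void recordLoginVulnerable(String username) {
        LOG.info("Login attempt for user: " + username);
    }

    // Safer: strip the newline characters an attacker needs to start
    // a forged line before the input ever reaches the log.
    static void recordLoginSafe(String username) {
        String sanitized = username.replaceAll("[\\r\\n]", "_");
        LOG.info("Login attempt for user: " + sanitized);
    }

    public static void main(String[] args) {
        recordLoginSafe("alice\nFORGED ENTRY"); // logs "alice_FORGED ENTRY"
    }
}
```

The vulnerable version is exactly what an “Accept All” workflow tends to ship: it works on every demo input and fails only when an attacker shows up.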

This isn’t theoretical. In May 2025, Lovable—a popular vibe coding platform—was found to have security issues in 170 of its 1,645 deployed applications, flaws that allowed unauthorized data access. That’s 10.3% of apps leaking user data. When you “Accept All” without reading the code, as Karpathy’s workflow suggests, you’re deploying vulnerabilities blindly.
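
Lovable’s exact flaws weren’t published line by line, but “unauthorized data access” almost always means broken access control: an endpoint that returns records without checking who is asking. Here is a hedged, Spring-style Java sketch of that bug class (every name below is hypothetical, not Lovable’s actual code):

```java
import java.security.Principal;
import org.springframework.http.HttpStatus;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.server.ResponseStatusException;

// Illustrative types; not from any real codebase.
record Order(long id, String ownerId, String details) {}

interface OrderRepo {
    Order findById(long id);
}

@RestController
class OrderController {
    private final OrderRepo repo;

    OrderController(OrderRepo repo) {
        this.repo = repo;
    }

    // Vulnerable: returns any order to any caller who guesses an ID.
    // This is broken access control, the pattern behind
    // "unauthorized data access" incidents.
    @GetMapping("/orders/{id}")
    Order getOrder(@PathVariable long id) {
        return repo.findById(id);
    }

    // Safer: confirm the record belongs to the authenticated caller
    // before returning it.
    @GetMapping("/v2/orders/{id}")
    Order getOrderChecked(@PathVariable long id, Principal caller) {
        Order order = repo.findById(id);
        if (order == null || !order.ownerId().equals(caller.getName())) {
            throw new ResponseStatusException(HttpStatus.NOT_FOUND);
        }
        return order;
    }
}
```

A reviewer reading the diff catches the missing ownership check in seconds; a workflow that never reads the code ships it.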

For weekend hacks? Who cares. For production apps handling user data? Catastrophic.

The Trust Paradox: Using Tools You Don’t Believe In

Stack Overflow’s 2025 Developer Survey reveals a stunning paradox. Adoption is up: 84% of developers now use or plan to use AI coding tools, climbing from 76% in 2024. Yet trust has collapsed.

Only 33% of developers trust AI accuracy, while 46% actively distrust it. Just 3% report “highly trusting” AI output—among experienced developers, that drops to 2.6%. Positive sentiment has cratered from 77% in 2023 to 60% in 2025. The number one frustration, cited by 66% of developers: “AI solutions that are almost right, but not quite.”

This cognitive dissonance can’t last. Developers are using tools they don’t trust because the pressure to ship fast is overwhelming the instinct to ship right. The hangover is just beginning.

Related: AI Code Verification Bottleneck: 96% Don’t Trust AI Code

The Complexity Cliff: When AI Breaks More Than It Solves

Jack Zante Hays, a senior software engineer at PayPal who works on AI development tools, warned that vibe-coded projects hit what he calls a “complexity ceiling.” Small codebases work fine until they don’t. Then AI tools “break more than they solve,” and what started as rapid development becomes “development hell.”

The pattern repeats across startups. Fast initial velocity, impressive demos, 10% weekly growth rates. Then the cliff hits. Features take longer to ship. Bugs multiply. Onboarding new developers becomes impossible because nobody understands the AI-generated codebase. Multiple startups reported needing complete rewrites after vibe-coded MVPs became unmaintainable.

A survey of CTOs tells the real story: 16 out of 18 reported “vibe coding disasters in production systems.” This isn’t a gradual slowdown. It’s a cliff. AI works until it doesn’t, and when you hit that complexity ceiling, your velocity doesn’t just slow—it reverses. The technical debt bill comes due, with compound interest.

Related: AI Tech Debt Crisis: 75% Hit by 2026, Studies Warn

Even AI Enthusiasts Are Warning Against It

If the senior engineers’ warnings weren’t enough, Andrew Ng—AI pioneer, Google Brain founder, and self-described AI enthusiast—criticized the term “vibe coding” as “misleading” in May 2025. His teams “hate to ever code without AI assistance,” but they review and understand every line. That’s AI-assisted engineering, not vibe coding.

Ng’s pushback cuts to the core issue: “When I’m coding for a day with AI coding assistance, I’m frankly exhausted by the end of the day. It’s a deeply intellectual exercise.” The “vibe” framing suggests you can “just go with the vibes” and forget the code exists. That’s exactly what Karpathy described—for throwaway projects. Ng’s approach—AI as force multiplier, human as architect—works for production; Karpathy’s “forget the code even exists” mode works only when failure is acceptable.

When even the AI optimists warn against vibe coding, the message is clear: the problem isn’t AI. It’s the abdication of responsibility.

Key Takeaways

  • Karpathy explicitly said vibe coding was for “throwaway weekend projects,” yet YC startups are building production systems with 95% AI-generated codebases—a dangerous misapplication of the tool far beyond its intended scope
  • Veracode’s 2025 report found 45% of AI-generated code contains security vulnerabilities (70% for Java), with real consequences like Lovable’s 170 compromised apps leaking user data
  • Stack Overflow’s 2025 survey exposes a trust paradox: 84% of developers use AI tools, but 46% actively distrust them (vs 33% who trust), with positive sentiment dropping from 77% (2023) to 60% (2025)
  • The complexity cliff is real—16 out of 18 CTOs reported vibe coding disasters in production, with codebases becoming unmaintainable and requiring complete rewrites
  • Even AI pioneer Andrew Ng calls the term “misleading,” emphasizing that AI-assisted development is “a deeply intellectual exercise,” not “just going with the vibes”—the difference between AI as force multiplier vs abdication of responsibility

Vibe coding is a brilliant tool used badly. Karpathy was right for weekend hacks where failure is acceptable. For production systems with real users and data? The hangover is real, and the worst is still coming.

