Andrej Karpathy just admitted something remarkable: “I’ve never felt this much behind as a programmer.” This isn’t a junior developer struggling with React hooks. This is OpenAI’s co-founder who ran Tesla’s Autopilot AI. The man who literally built the neural networks inside coding assistants can’t keep up with coding assistants.
If that doesn’t signal a paradigm shift, nothing will.
Karpathy dropped this admission on X on December 26, 2025, then followed up with a detailed year-in-review blog post exploring what changed over the year. The Hacker News thread drew 255 comments. The developer community is divided.
Divided over a simple question: is this progress or a threat? The answer is both. And that’s exactly what makes it uncomfortable.
Programming is Being Refactored
Karpathy describes the shift bluntly: “The profession is being dramatically refactored as the bits contributed by the programmer are increasingly sparse and between.” Translation: you’re no longer writing code. You’re orchestrating AI systems.
The new programming primitives didn’t exist 18 months ago. Now developers juggle agents, subagents, prompts, context management, memory modes, permissions, tools, plugins, skills, hooks, MCP, LSP, slash commands, workflows, and IDE integrations. Each one evolves weekly. Karpathy calls it “alien technology with no manual.”
Three major developments drove 2025’s shift:
RLVR (Reinforcement Learning from Verifiable Rewards) trains AI against correctness checks: Does the code compile? Does the math work? The DeepSeek R1 paper from January 2025 showed models spontaneously developing reasoning strategies – aha moments, self-reflection, backtracking mid-solution. They figured it out on their own.
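The core idea of a verifiable reward can be sketched in a few lines. This is an illustrative toy, not the actual RLVR training loop from the DeepSeek R1 paper: the `verifiable_reward` function and its scoring scale are hypothetical, but they show the principle of grading model output against objective checks (does it compile? do the tests pass?) rather than human preference.

```python
def verifiable_reward(code: str, test_snippet: str) -> float:
    """Score a model's candidate code against objective checks.

    Returns 1.0 if the code parses and its tests pass, 0.5 if it
    parses but fails verification, 0.0 if it doesn't parse at all.
    (Scoring scale is illustrative, not from any real RLVR setup.)
    """
    # Check 1: does the candidate even parse as valid Python?
    try:
        compile(code, "<candidate>", "exec")
    except SyntaxError:
        return 0.0  # unverifiable output earns nothing

    # Check 2: do the objective assertions hold?
    namespace = {}
    try:
        exec(code, namespace)          # define the candidate's functions
        exec(test_snippet, namespace)  # run the verification assertions
    except Exception:
        return 0.5  # compiles, but fails the checks

    return 1.0

candidate = "def add(a, b):\n    return a + b"
tests = "assert add(2, 3) == 5"
print(verifiable_reward(candidate, tests))  # 1.0
```

The key property is that the reward signal is mechanical and unambiguous, which is why models can be trained against it at scale without human graders in the loop.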
Vibe Coding is Karpathy’s term for building software in English without seeing code. “AI crossed a capability threshold necessary to build impressive programs simply via English,” he wrote. Code becomes “free, ephemeral, malleable, discardable after single use.” He’s written custom tokenizers in Rust, demo apps, single-use scripts – all through conversation.
Local-First Agents like Claude Code run on your machine, not the cloud. They access your environment, data, and context directly. Anthropic’s Agent Skills framework lets agents discover and load instructions dynamically. The JetBrains integration means your IDE is now an AI orchestrator.
None of this is incremental. The shift is from deterministic (predictable code execution) to stochastic (unpredictable AI systems). From manual to manual-less. From ownership to delegation.
The Contradiction: 10X Potential, Still Behind
Here’s where it gets weird. Karpathy feels both empowered and overwhelmed.
“I could be 10X more powerful if I just properly string together what has become available over the last year,” he wrote. But in the same breath: “A failure to claim the boost feels decidedly like skill issue.”
The tools make him more capable. But they also multiply faster than he can master them. He’s simultaneously ahead (using vibe coding for projects that wouldn’t exist otherwise) and behind (struggling to keep up with weekly primitives).
One X user nailed the irony: “Andrej Karpathy literally built the neural networks running inside coding assistants. If he feels ‘dramatically behind’ as a programmer… that tells you everything about where we are.”
ByteIota recently covered vibe coding skeptically. But this isn’t hype from a startup founder. It’s an admission from someone who understands AI architecture at the deepest level. That credibility makes the concern real.
Summoning Ghosts, Not Training Animals
Karpathy offers a mental model: “We are summoning ghosts rather than evolving traditional intelligence.”
LLMs aren’t optimized for survival or human-like reasoning. They’re optimized for imitating text and solving puzzles. This produces “jagged intelligence” – genius-level performance on verifiable tasks, surprisingly dumb on simple ones. They lack manuals because they’re fundamentally unintelligible.
Developers now face what Karpathy calls building “an all-encompassing mental model for strengths and pitfalls of fundamentally stochastic, fallible, unintelligible and changing entities.”
You can’t treat AI like traditional tools. Predictable, documented, stable systems are gone. Welcome to the ghost economy.
The Learning Debt Problem
Not everyone is adapting well. The Hacker News thread revealed a troubling pattern: junior developers submitting code they can’t explain. “ChatGPT wrote that,” they say when asked about implementation details.
One commenter called this “learning debt” – progressing without foundational knowledge, eventually hitting problems LLMs can’t solve. Another noted the same issue existed with Stack Overflow copy-paste, but AI accelerates and obscures it.
The safety concerns are worse. An industrial automation engineer shared: “I have had to correct issues three times now in the ladder logic on a PLC that drives an automation cell that can definitely kill or hurt someone. When asked where the logic came from, they showed me the tutorials they feed to their LLM.”
Licensing issues loom too. Developers report LLMs regurgitating exact code with unit tests hitting specific edge cases – “very clearly the code of somebody” whose permission was never asked.
The optimists argue code review and CI/CD gates solve this. Maybe. But the pace of AI capability growth outstrips the pace of process adaptation.
Adapt or Fall Behind
Karpathy’s assessment: “Despite rapid advances, the industry has realized nowhere near 10% of their potential and the field remains wide open conceptually.”
This is not slowing down. The changes happened in roughly 18 months. The pace is accelerating, not stabilizing. Waiting for the dust to settle is a losing strategy.
What should developers do?
Learn the new abstractions. Master agents, prompts, orchestration. “String together what has become available,” as Karpathy puts it. Build mental models for unpredictable systems. Accept that traditional programming is one layer among many.
But also stay critical. Not all vibe is substance. Understand fundamentals to avoid learning debt. Review AI-generated code carefully, especially in safety-critical contexts. The profession is changing, but carelessness still kills.
The field is wide open. Opportunity exists for those who adapt. But Karpathy’s admission proves even the experts are struggling to keep pace. If the person who built AI feels overwhelmed by AI, the rest of us need to take the paradigm shift seriously.
Programming isn’t dead. It’s being refactored. And the new codebase has no documentation.