MIT Media Lab researchers published findings in June 2025 that should concern every developer using GitHub Copilot, Claude, or Cursor daily. Their four-month study using EEG brain scans found that ChatGPT users showed the weakest neural connectivity and “consistently underperformed at neural, linguistic, and behavioral levels” compared to participants using search engines or no tools at all.
This isn’t theoretical. A senior engineer with over a decade of experience recently admitted that when Copilot went down during an outage, they found themselves staring at an AWS error message, feeling “genuinely uncomfortable about debugging without LLM-based help.” That’s cognitive debt.
What the Study Found: Your Brain on AI
The MIT research team tracked 54 participants across four months, dividing them into three groups: ChatGPT users, traditional search engine users, and a “brain-only” control group using no external tools. The results were stark. Brain-only participants exhibited “the strongest, most distributed neural networks” during cognitive tasks. ChatGPT users displayed the weakest neural connectivity.
When researchers switched groups in session four, the LLM-dependent participants showed “reduced alpha and beta connectivity, indicating under-engagement.” Their brains had adapted to outsourcing cognition. Even more concerning, LLM users reported the lowest sense of ownership over their work and struggled to accurately quote what they’d written. Their brains never encoded the information in the first place.
Developers Are Already Showing Symptoms
The MIT findings align with what’s happening in real development teams. Stack Overflow’s 2025 survey found that 85% of developers now use AI tools regularly, and 65% reach for an AI assistant at least weekly.
GitClear analyzed 153 million lines of code and found that code churn (lines reverted or updated within two weeks of being written) was on track to double in 2024 compared to the pre-AI baseline of 2021. Their conclusion: “downward pressure on code quality” as developers lean more heavily on AI suggestions without fully understanding what they’re committing.
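To make the metric concrete, here is a rough sketch of how you might estimate churn on your own repositories. This is not GitClear’s line-level methodology; it is a crude file-level proxy, and the `approximate_churn` name and the 14-day window are illustrative assumptions. Still, it turns the statistic into something you can check rather than take on faith.

```python
import subprocess
from datetime import datetime, timedelta

def approximate_churn(repo_path=".", window_days=14):
    """Crude churn proxy: the fraction of file changes that touch a file
    again within `window_days` of a previous change to the same file."""
    log = subprocess.run(
        ["git", "-C", repo_path, "log", "--reverse", "--format=%H %ct", "--numstat"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()

    last_touched = {}          # path -> datetime of the previous change
    total = churned = 0
    commit_time = None

    for line in log:
        if "\t" in line:       # numstat line: "added<TAB>removed<TAB>path"
            if commit_time is None:
                continue
            path = line.split("\t")[2]
            total += 1
            prev = last_touched.get(path)
            if prev and commit_time - prev <= timedelta(days=window_days):
                churned += 1
            last_touched[path] = commit_time
        else:
            parts = line.split()
            if len(parts) == 2 and parts[1].isdigit():   # "<hash> <unix timestamp>"
                commit_time = datetime.fromtimestamp(int(parts[1]))

    return churned / total if total else 0.0

if __name__ == "__main__":
    print(f"rough two-week churn rate: {approximate_churn():.1%}")
```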
GitHub’s counter-research showed productivity gains, but critics note their tests measured style issues, not functional understanding.
Junior Developers Face an Existential Crisis
The job market tells a grimmer story. Junior developer positions have dropped significantly since 2022, with computer science graduate unemployment climbing to 6-7%. AI has made traditional junior developer work largely redundant.
AWS CEO Matt Garman called replacing junior developers with AI “one of the dumbest things” companies can do, arguing that you need junior developers to grow into senior engineers. Without them, “companies create a skills vacuum for the future.” But GitHub found that AI-assisted junior developers complete tasks 56% faster while producing significantly more code for review, saturating the capacity of senior and mid-level engineers to give meaningful feedback.
A junior developer in 2026 can prompt AI to generate a sorting algorithm but may not understand time complexity or when to choose quicksort over mergesort. That’s not learning—that’s cognitive outsourcing with compounding interest.
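The trade-off itself fits in a few lines. The sketch below is a plain textbook illustration in Python, not anything from the study or the tools above: mergesort is stable and guarantees O(n log n) time at the cost of O(n) extra memory, while quicksort averages O(n log n), is usually chosen for in-place sorting, and degrades to O(n²) with an unlucky pivot.

```python
def mergesort(xs):
    """Stable and guaranteed O(n log n), but allocates O(n) extra memory."""
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    left, right = mergesort(xs[:mid]), mergesort(xs[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:      # "<=" keeps equal keys in their original order
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

def quicksort(xs):
    """O(n log n) on average; a poor pivot degrades it to O(n^2). This simple
    version trades quicksort's usual in-place advantage for readability."""
    if len(xs) <= 1:
        return xs
    pivot = xs[len(xs) // 2]
    return (quicksort([x for x in xs if x < pivot])
            + [x for x in xs if x == pivot]
            + quicksort([x for x in xs if x > pivot]))

assert mergesort([3, 1, 4, 1, 5]) == quicksort([3, 1, 4, 1, 5]) == [1, 1, 3, 4, 5]
```

Being able to reproduce and reason about something this small without an assistant is the baseline the rest of this article is about protecting.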
How Cognitive Debt Accumulates
Cognitive science research has studied this pattern for decades. When you use external tools to reduce mental processing, immediate performance increases but memory formation decreases. ChatGPT and Copilot offload problem-solving, critical thinking, and system design—core developer skills.
When you accept a Copilot suggestion without thinking through the logic, your brain doesn’t encode the problem-solving process. Over repeated sessions, you lose the ability to solve similar problems independently.
The research offers one piece of good news: awareness matters. Participants who worked brain-only first and only then moved to the LLM in session four “performed better in both cases, with more strategy, structure, and quality.” Starting with your brain establishes a cognitive foundation that persists even when you add AI tools later.
What Developers Should Do About This
The MIT study’s “brain-to-LLM” pattern points toward better practices: attempt the task yourself first, even if it’s just sketching pseudocode, then bring in AI to review or optimize.
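As a concrete illustration of that ordering (the cache and the review prompt here are hypothetical, not part of the MIT protocol), the workflow might look like this in Python: plan in comments, write a first pass by hand, and only then hand the result to the assistant as a reviewer rather than an author.

```python
# Step 1: your own plan, written down before opening the assistant.
#   - keep a fixed-size mapping of key -> value
#   - on access, mark the key as most recently used
#   - on insert past capacity, evict the least recently used key

from collections import OrderedDict

class LRUCache:
    """Hand-written first pass; intentionally simple."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self._items = OrderedDict()

    def get(self, key):
        if key not in self._items:
            return None
        self._items.move_to_end(key)         # mark as most recently used
        return self._items[key]

    def put(self, key, value):
        self._items[key] = value
        self._items.move_to_end(key)
        if len(self._items) > self.capacity:
            self._items.popitem(last=False)  # evict the least recently used key

# Step 2: only now bring in the AI, and as a reviewer rather than an author.
REVIEW_PROMPT = (
    "Review this LRU cache I wrote for correctness and edge cases "
    "(zero capacity, repeated keys). Point out problems; do not rewrite it."
)
```

The point is the ordering, not the cache: the effort of the first pass is where, per the MIT findings, the encoding happens, and the assistant’s review is far easier to judge when you already hold a working mental model of the code.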
Practical steps include coding without AI one day per week to maintain a skill baseline, never merging code you can’t explain, and typing out AI suggestions manually instead of auto-accepting them to force cognitive engagement.
If your AI tools disappeared tomorrow, could you still do your job? If the answer is no, you’ve accumulated cognitive debt. That debt compounds over time and physically changes your brain’s problem-solving networks.
AI coding assistants are extraordinarily productive tools. The question isn’t whether to use them—we’re well past that decision point. The question is how to use them without eroding the cognitive capabilities that make us valuable developers. Mindless delegation creates dependency. Intentional collaboration preserves skills while gaining efficiency.
Your brain is keeping score whether you’re paying attention or not.