The tech industry spent billions on developer tools in 2025. Faster IDEs, smarter CI/CD pipelines, AI coding assistants that promise to 10x your output. The bet: better tools equal better productivity. However, new data from 24,534 developers across 194 countries reveals something the industry didn’t want to hear.
According to the JetBrains State of Developer Ecosystem 2025, 89% of developers say non-technical factors influence their productivity, compared to 84% who cite technical factors. The gap is small, but the implications are massive: you can’t code your way out of organizational dysfunction.
While companies pour budgets into tools, the actual productivity killers—unclear requirements, poor communication, broken collaboration—get ignored. The data shows what developers have been saying for years. We’ve been optimizing the wrong things.
The Data Upends Conventional Wisdom
The JetBrains survey breaks down what actually matters. Non-technical factors include job design, clear communication, peer and manager support, and actionable feedback. Technical factors cover the usual suspects: CI/CD speed, IDE quality, infrastructure reliability.
Here’s the shift: as developers gain experience, they move from technical focus to coordination responsibilities. Context switching increases with seniority. At higher levels, communication matters more than coding speed. The industry has spent decades treating this as a distraction from “real work.” The data says otherwise.
“Developers highlight both technical (51%) and non-technical (62%) factors as critical to their performance,” the report states. “Internal collaboration, communication, and clarity are now just as important as faster CI pipelines or better IDEs.”
This challenges the tool-buying reflex that dominates tech. When a team is slow, the default response is to buy something: a better IDE, a faster build system, an AI assistant. The survey suggests we should be asking different questions instead. Are requirements clear? Do teams communicate effectively? Does management provide actionable feedback?
Why AI Isn’t Delivering the Promised Gains
The DORA State of AI-assisted Software Development 2025 report adds context. AI coding assistants boost individual output: 21% more tasks completed, 98% more pull requests merged. Yet organizational delivery metrics stay flat. The disconnect reveals a fundamental problem.
AI acts as an amplifier. It magnifies existing team dynamics, good and bad. Teams with clear communication and strong collaboration see gains. Teams with dysfunction see AI make things worse. You're "accelerating into a bottleneck rather than through it," as the DORA report puts it.
This explains the AI productivity paradox. The Stack Overflow 2025 Developer Survey shows 84% adoption of AI tools, yet positive sentiment dropped from 70% in 2024 to 60% in 2025. Developers are using AI more but trusting it less. Why? Because individual gains don’t translate to organizational value when communication breaks down, requirements remain unclear, and coordination fails.
Teams optimize locally—faster code generation—while the constraint shifts to code review, integration, and deployment. Consequently, companies investing in AI without fixing these underlying issues are throwing money at symptoms, not causes.
The Measurement Crisis: 66% Don’t Trust the Metrics
JetBrains found that 66% of developers don't believe current metrics reflect their real contribution. The problem isn't just bad metrics: most companies also lack specialist roles for developer productivity, so responsibility falls on team leads and developers themselves.
“Teams focused on delivery are also expected to master measurement methodology, often without resources or proper training,” the survey notes. This creates a structural dysfunction: the people doing the work are also expected to figure out how to measure it, usually while being measured by someone else’s broken system.
DORA metrics and similar frameworks measure outputs, not underlying system dynamics. They're lagging indicators: they tell you what happened after problems already impacted delivery. They can also be misused for individual performance evaluation, and they overlook product quality, user satisfaction, and team well-being.
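To see why these are lagging indicators, here is a minimal sketch (with hypothetical deployment data, not from any real system) of how two of the DORA-style metrics are typically computed after the fact from delivery records:

```python
from datetime import datetime, timedelta

# Hypothetical deployment log: (commit_time, deploy_time, caused_failure)
deployments = [
    (datetime(2025, 3, 1, 9), datetime(2025, 3, 3, 14), False),
    (datetime(2025, 3, 4, 10), datetime(2025, 3, 5, 16), True),
    (datetime(2025, 3, 6, 11), datetime(2025, 3, 7, 9), False),
]

# Lead time for changes: commit -> production, averaged retrospectively
lead_times = [deploy - commit for commit, deploy, _ in deployments]
avg_lead_time = sum(lead_times, timedelta()) / len(lead_times)

# Change failure rate: share of deployments that caused a failure
failure_rate = sum(1 for *_, failed in deployments if failed) / len(deployments)

# Both numbers describe what already happened. The causes -- unclear
# requirements, review queues, coordination failures -- never appear
# in the data at all.
print(avg_lead_time, failure_rate)
```

The sketch makes the article's point concrete: everything upstream of the commit timestamp is invisible to these metrics, so a team can "improve" them while the real constraint goes unmeasured.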
The SPACE framework, published in ACM Queue, attempted to address this by combining operational metrics with human-centric dimensions. But adoption remains low. Companies stick with simpler, easier-to-game metrics that measure the wrong things.
The Investment Mismatch
The JetBrains survey reveals a critical disconnect. Technical managers prioritize addressing communication issues and reducing technical debt. But company investment “significantly lags behind identified needs.” Billions go to tools. Communication and clarity initiatives get pennies.
The consequences show up in failure rates. Collaboration research finds that 86% of employees cite lack of collaboration as a reason for workplace failures, 78% of organizations report significant barriers between technical departments, and 65% of project failures stem from poor communication between teams.
Technical debt compounds the problem. It reduces development speed by 30%, with developers spending 23% of their time fixing debt instead of building features. But here’s the twist: poor communication creates technical debt. Duplicated efforts, inconsistent code, and misunderstandings all stem from communication failures.
Organizations are solving the wrong problem. Faster CI/CD won’t fix unclear requirements. A better IDE won’t fix broken team dynamics. Therefore, companies need to reallocate investment from tools to organizational effectiveness.
What Needs to Change
For engineering leaders, the shift is structural. Stop defaulting to tool purchases. Invest in clear communication frameworks, better team structure, and reducing context switching. Hire or designate specialist roles for developer productivity instead of piling more responsibility on team leads. Measure non-technical factors alongside technical metrics.
For developers, the data validates what you already knew. Unclear requirements and poor communication hurt more than slow CI/CD. Push back on purely technical productivity metrics. Demand transparency in how you’re measured. Recognize that coordination and communication are part of the job at every level, not distractions from it.
For organizations, the “AI will save us” fantasy is dead. You can’t automate around organizational dysfunction. Organizations with poor communication will see AI amplify the problems, not solve them. Investment in communication, support, and clarity pays off more than investment in tools. Fix the foundation before adding more technology.
The JetBrains survey, with responses from nearly 25,000 developers worldwide, provides the data the industry needs but doesn’t want to accept. Tools are easier to buy than organizational problems are to fix. But the numbers are clear: 89% say non-technical factors matter. The question is whether companies will finally listen.