The 2025 DORA report from Google drops a bombshell that contradicts the AI hype machine. Ninety percent of developers now use AI tools, and 80% report productivity gains. Sounds great, right? Not so fast. Thirty percent don’t trust the code AI generates, and software delivery stability is crashing across the industry. The research, based on nearly 5,000 tech professionals worldwide, reveals why: AI isn’t fixing broken teams. It’s amplifying their dysfunction. High-performing teams are using AI to accelerate their existing strengths. Struggling teams are watching AI intensify their existing problems. The gap between software engineering winners and losers isn’t closing. It’s widening, and AI is the accelerant.
AI as Amplifier, Not Equalizer
The core finding from DORA 2025 destroys a popular narrative. AI doesn’t democratize software development. It magnifies what’s already there – your strengths, your weaknesses, your practices, your dysfunction.
Twenty percent of teams fall into the “Harmonious High-Achievers” category. These teams excel across product performance, delivery throughput, and team well-being. For them, AI is rocket fuel. They leverage it to accelerate without sacrificing stability.
Ten percent face “Foundational Challenges” – teams in survival mode with high burnout, friction, and system instability. For these teams, AI doesn’t help. It hurts. Delivery stability drops 7.2%. Throughput declines 1.5%. They’re not moving faster. They’re sinking deeper.
The promise was that AI would level the playing field. The data shows the opposite. AI is a performance multiplier. If you’re multiplying excellence, you get more excellence. If you’re multiplying dysfunction, you get more dysfunction.
The Trust Paradox in AI-Assisted Development
Here’s the uncomfortable math: 90% adoption, 80% productivity gains, but only 24% have “a great deal” or “a lot” of trust in AI-generated code. Thirty percent trust it “a little” or “not at all.”
Organizations are shipping code they don’t trust at unprecedented scale. The DORA researchers call this a “trust but verify” approach – healthy skepticism in action. But there’s another interpretation: teams are moving so fast they can’t afford to slow down and ask whether they should.
The adoption surge is undeniable. AI use among software professionals jumped 14% in just one year. But trust isn’t keeping pace with adoption. When 30% of an industry doesn’t trust the tools they’re using daily, that’s not healthy skepticism. That’s a warning sign.
Speed Without Stability Is Accelerated Chaos
The throughput news looks good at first glance. AI adoption now correlates with higher software delivery throughput, a reversal from 2024’s findings. Teams are releasing more software, faster.
The stability news tells a different story. AI adoption continues its negative relationship with software delivery stability. Only 8.5% of teams achieve 0-2% change failure rates. Over half experience failure rates of 8-16% or higher. Fifteen percent of teams need more than a week to recover from failed deployments.
DORA’s warning is blunt: “Speed without stability is just accelerated chaos.”
The problem isn’t AI. The problem is that teams adapted AI for speed but didn’t adapt their systems for stability at AI-accelerated velocities. Weak automated testing, immature version control practices, and slow feedback loops can’t handle the volume increase AI enables. The cracks that were survivable at lower speeds become catastrophic at AI speeds.
Forty percent of teams prove the trade-off isn’t inevitable. They achieve both high throughput and high stability. But they’re the minority. Most organizations are building faster to fail faster.
Seven Capabilities That Separate Winners from Losers
DORA introduces the AI Capabilities Model – seven foundational practices that amplify AI’s positive impact. These aren’t AI skills. They’re organizational capabilities. High performers build these before scaling AI adoption.
- Clear AI stance. Explicit policies about AI tool usage, not ambiguous guidelines that leave developers uncertain.
- Healthy data ecosystems. High-quality, accessible, unified internal data that enables AI to provide relevant, contextual assistance.
- AI-accessible internal data. Connecting AI to internal repositories, documentation, and decision logs transforms it from generic assistant to specialized tool.
- Strong version control practices. Critical when AI increases code volume and velocity. Frequent commits and rollback capabilities become essential.
- Working in small batches. Incremental change discipline prevents overwhelming downstream systems at AI speeds.
- User-centric focus. Product strategy clarity ensures user needs guide AI-assisted development, not just acceleration for its own sake.
- Quality internal platforms. Technical foundations that enable scale. The data shows a direct correlation between platform quality and AI value unlock.
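Of these, working in small batches is the easiest to check mechanically. As an illustration only — the thresholds and the `ChangeSet` shape are hypothetical, not from the DORA report — here is a sketch of a pre-merge guardrail that flags change sets too large to review or roll back safely, the kind of discipline that keeps AI-generated code volume from overwhelming downstream systems:

```python
from dataclasses import dataclass

# Hypothetical limits -- tune to your team's review and rollback capacity.
MAX_FILES = 20
MAX_LINES_CHANGED = 400

@dataclass
class ChangeSet:
    files_touched: int
    lines_added: int
    lines_deleted: int

def batch_check(change: ChangeSet) -> list[str]:
    """Return warnings for change sets that exceed the small-batch limits."""
    warnings = []
    if change.files_touched > MAX_FILES:
        warnings.append(f"touches {change.files_touched} files (limit {MAX_FILES})")
    total = change.lines_added + change.lines_deleted
    if total > MAX_LINES_CHANGED:
        warnings.append(f"changes {total} lines (limit {MAX_LINES_CHANGED})")
    return warnings

# An AI-assisted change can balloon quickly; this one trips both limits.
print(batch_check(ChangeSet(files_touched=35, lines_added=900, lines_deleted=150)))
```

A check like this would run in CI or as a merge-request hook; the point is not the specific numbers but that batch size becomes a visible, enforced constraint rather than a cultural aspiration.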
The insight is stark: AI value comes from the surrounding practices and culture, not the tools. Seventy-six percent of organizations now have dedicated platform teams. Platform engineering moved from experimental to essential because AI without infrastructure is a recipe for chaos.
The Uncomfortable Questions
Should struggling teams even adopt AI? The data suggests that for teams in survival mode, AI makes things worse. Yet 90% adoption rates mean AI is becoming a baseline expectation. Organizations face a difficult choice: delay AI adoption to fix foundations, or adopt AI and risk amplifying existing dysfunction.
Does the “AI democratizes coding” narrative hold up? The DORA data says no. AI widens the gap between high and low performers. Twenty percent thrive. Ten percent suffer more. That’s not democratization. That’s stratification.
Is the industry accepting higher failure rates as the new normal? When half the organizations surveyed experience 8-16% failure rates, and the market continues to reward speed over stability, it suggests a collective decision to tolerate more failures in exchange for faster delivery. Whether that decision is made consciously or in ignorance of the consequences is unclear.
What Engineering Leaders Should Do
The DORA report is clear: treat AI adoption as organizational transformation, not tool adoption. Organizations rushing to adopt AI without addressing fundamental engineering practices risk accelerating their existing dysfunction rather than improving performance.
Build the seven capabilities before scaling AI. Fix your automated testing. Mature your version control practices. Invest in your internal platform. Establish clear AI policies. These aren’t barriers to AI adoption. They’re prerequisites for AI success.
The research from 5,000 tech professionals worldwide offers a roadmap. High performers demonstrate that AI can boost throughput without sacrificing stability. But they also demonstrate that it requires intentional investment in foundations, not just tools.
AI is an amplifier. The question isn’t whether to adopt it. The question is: what are you amplifying?