
TDD Is Fundamentally Wrong: The Mocking Debate


Test-Driven Development has been sold as gospel for 25 years. Write tests first. Let tests drive your design. Red, green, refactor. But there’s a dirty secret the TDD evangelists don’t talk about: the mocking epidemic. When you practice TDD correctly – writing tests first, mocking dependencies, following the London school – you don’t end up with better code. You end up with two implementations of the same logic: one in production code, one in mocks. And that’s not just wasteful. It’s fundamentally wrong.

The Double Implementation Problem

Here’s what nobody tells you about Test-Driven Development: when you mock heavily, you’re not testing your code. You’re implementing it twice.

Every mocked dependency is a reimplementation of that dependency’s behavior in your test. Your test says “when I call userService.save(), return true.” Your production code says “call userService.save() and proceed if true.” You’ve written the same logic in two places. Now multiply that across thousands of tests.
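The duplication is easy to see in a sketch. The names here (`register`, `user_service`) are hypothetical, but the shape is what any London-school test looks like: the mock's setup line restates the exact rule the production code implements.

```python
from unittest.mock import Mock

# Hypothetical production code: proceed only if save() reports success.
def register(user_service, user):
    return "registered" if user_service.save(user) else "rejected"

# The test re-implements that same rule inside the mock:
# "save() returns True" is the production logic, written a second time.
service = Mock()
service.save.return_value = True
assert register(service, {"name": "Ada"}) == "registered"
service.save.assert_called_once_with({"name": "Ada"})
```

Read the two halves side by side: the `if save(...)` branch in production and the `return_value = True` line in the test encode the identical fact.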

The costs are real. Developers report that “construction of mocks can require more effort than writing the integrations themselves.” You’re spending more time on fake implementations than real ones.

And here’s the kicker: those mocks drift. Your real UserService gets updated with validation logic. Your mock doesn’t. Tests pass. Production breaks. Congratulations, you just wasted everyone’s time.
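A minimal sketch of that drift, with hypothetical names: the real service grows a validation rule, the mock stays frozen at yesterday’s behavior, and the mock-based test keeps passing while the same input crashes in production.

```python
from unittest.mock import Mock

# Hypothetical real service, later updated with a validation rule
# that the mock never learns about.
class UserService:
    def save(self, user):
        if not user.get("email"):            # new validation rule
            raise ValueError("email required")
        return True

def register(service, user):
    return "registered" if service.save(user) else "rejected"

# The mock-based test: green, because the mock knows nothing of the new rule.
mock_service = Mock()
mock_service.save.return_value = True
assert register(mock_service, {"name": "Ada"}) == "registered"

# The same input against the real service: blows up.
# register(UserService(), {"name": "Ada"})   # raises ValueError
```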

Test-Induced Design Damage

TDD advocates claim tests improve your design. The opposite is true when mocking is involved.

To make code “testable,” you’re forced to inject every dependency. Every collaborating object needs an interface. Every function needs to accept its dependencies as parameters. Your codebase fills with IEmailService, IUserRepository, IPaymentGateway – interfaces that exist solely so you can mock them in tests.
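Here is what that ceremony looks like in practice. This is a hypothetical sketch, not any particular codebase: an abstract interface, a concrete implementation, and constructor injection, all existing so that a test can swap in a double.

```python
from abc import ABC, abstractmethod

# An interface whose only consumer is the test suite.
class IEmailService(ABC):
    @abstractmethod
    def send(self, to: str, body: str) -> None: ...

class SmtpEmailService(IEmailService):
    def send(self, to: str, body: str) -> None:
        ...  # real SMTP transport elided

class Registration:
    # Constructor injection: the dependency is a parameter purely
    # so tests can pass a mock instead of the real service.
    def __init__(self, email_service: IEmailService):
        self._email = email_service

    def register(self, user: dict) -> None:
        self._email.send(user["email"], "Welcome!")
```

Three types where one function call would do, and to answer “what happens when a user registers?” you now have to find which `IEmailService` is wired in where.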

One developer put it bluntly: “DI really, REALLY hurts code readability.” You can’t follow the code flow anymore. To understand how a simple user registration works, you need to trace through dependency injection configuration, locate the concrete implementation, then figure out which interfaces are being mocked in tests.

The pattern repeats across TDD codebases: a single entry point splinters into several linked methods, each trailing its own mocks and injected dependencies. Soon you need detailed framework knowledge just to understand how operations wire together.

This isn’t theoretical. In production GraphQL APIs, developers report authorization logic forced into the transport layer – scattered across resolvers and types – purely to satisfy testing requirements. Business rules that should live in domain code end up in framework-specific DataLoaders because that’s what the mocks demanded.

As one critic observed: “The cure is worse than the disease.”

The Refactoring Lie

TDD’s central promise is that comprehensive tests enable fearless refactoring. The reality is the opposite: TDD tests prevent refactoring.

Tests written using the London school (mockist approach) are tightly coupled to implementation details. They verify not just what your code does, but how it does it. Which methods get called. In what order. With which parameters.

Refactor your code to improve the implementation? Every test breaks. Change from composition to inheritance? Tests break. Introduce a new abstraction layer? Tests break.
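The coupling is visible in a short sketch (hypothetical `checkout` and `gateway` names). The test pins the exact methods, their order, and their arguments, so an implementation change that preserves behavior still turns it red.

```python
from unittest.mock import Mock, call

def checkout(gateway, cart):
    # Current implementation: authorize first, then capture.
    gateway.authorize(cart.total)
    gateway.capture(cart.total)

gateway = Mock()
cart = Mock(total=42)
checkout(gateway, cart)

# A mockist test verifies HOW the code works, not WHAT it achieves.
# Refactor checkout() to a single gateway.charge() call and this assertion
# fails, even though the customer is charged exactly the same amount.
gateway.assert_has_calls([call.authorize(42), call.capture(42)])
```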

The result: developers spend as much time “constantly revisiting tests to match discoveries during implementation” as they do writing production code. One analysis pegged the overhead at a 20-50% extension of the project timeline. That’s not a testing tax. That’s project failure.

The contradiction is stark. TDD claims to enable refactoring through comprehensive tests. TDD’s actual effect is to create brittle tests that shatter at the slightest architectural change.

Both Schools Failed

The TDD community split into two camps decades ago, and neither has an answer.

The Detroit school (Classic TDD) avoids mocks. It tests state, not interactions. It works bottom-up. Kent Beck pioneered this approach at Chrysler in 1996. The problem: it only works for algorithmic code on greenfield projects. Most software isn’t algorithms. It’s integration of databases, APIs, filesystems, third-party services.

The London school (Mockist TDD) embraces mocks. It tests interactions between objects. It works top-down. Steve Freeman and Nat Pryce codified this in “Growing Object-Oriented Software Guided by Tests.” The problem: mock explosion. Design damage. Tests that are harder to maintain than the code they’re testing.

After 25 years, we have two approaches to TDD, and both have fundamental flaws. Maybe the problem isn’t the approach. Maybe the problem is TDD itself.

What Actually Works

Here’s what experienced developers have figured out: separate your pure logic from your side effects.

For business logic, write pure functions. Functions where the output depends only on the input. No database calls. No network requests. No filesystem access. Just logic.

Pure functions are trivially easy to test. No mocks required. No dependency injection. No interfaces. Just call the function with test inputs and assert on the outputs.
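A sketch of what that looks like, using a hypothetical pricing rule: the entire test is inputs in, assertions out.

```python
# Hypothetical pure business rule: output depends only on the inputs.
def apply_discount(subtotal: float, loyalty_years: int) -> float:
    rate = min(0.05 * loyalty_years, 0.25)   # 5% per year, capped at 25%
    return round(subtotal * (1 - rate), 2)

# No mocks, no DI, no interfaces, no setup.
assert apply_discount(100.0, 0) == 100.0
assert apply_discount(100.0, 2) == 90.0
assert apply_discount(100.0, 10) == 75.0    # cap applies
```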

For I/O code – the stuff that talks to databases, APIs, filesystems – use integration tests with real dependencies. Spin up a test database. Make real HTTP calls to a test server. Write to temporary files.
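As a minimal sketch of the database case: an in-memory SQLite database is a real database engine, costs nothing to spin up, and lets you test the actual SQL instead of a mocked repository. The `save_user`/`load_users` names are illustrative.

```python
import sqlite3

def save_user(conn, name):
    conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
    conn.commit()

def load_users(conn):
    return [row[0] for row in conn.execute("SELECT name FROM users ORDER BY name")]

# A real (throwaway) database, not a mock: the test exercises the real SQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
save_user(conn, "Ada")
save_user(conn, "Grace")
assert load_users(conn) == ["Ada", "Grace"]
conn.close()
```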

“If there’s no logic in code that primarily handles I/O, there’s nothing meaningful to unit test,” developers report. And they’re right. The integration points need integration tests.

This pattern appears in production codebases that abandoned heavy mocking:

  • Business logic in pure functions: easy to test, no setup
  • I/O pushed to the edges: integration tested with real dependencies
  • Boundary classes wrapping external services: minimal mocking surface
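The third bullet can be sketched as a thin wrapper (hypothetical `PaymentGateway` over an assumed HTTP client with a `post` method): all talk to the external service funnels through one seam, so at most this one class ever needs a test double, and even that double is trivial.

```python
# Hypothetical boundary class: the only place that knows about the payment API.
class PaymentGateway:
    def __init__(self, http_client):
        self._http = http_client  # assumed to expose post(path, json=...) -> response

    def charge(self, amount_cents: int, token: str) -> bool:
        resp = self._http.post(
            "/v1/charges", json={"amount": amount_cents, "source": token}
        )
        return resp.status_code == 200
```

Everything behind this seam can stay pure and mock-free; in tests, a ten-line stub client stands in for the network, and that is the whole mocking surface.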

One team measured the difference. Their over-mocked test suite took 20 seconds to run and regularly broke during refactoring. After restructuring to pure functions and integration tests: 200 milliseconds. That’s a 100x improvement.

Stop Following the Dogma

Test-Driven Development isn’t a best practice. It’s a tool with trade-offs. It works brilliantly for certain contexts – greenfield projects with clear domains and stable requirements. It fails catastrophically when applied universally.

The mocking epidemic proves TDD doesn’t work as advertised. You end up with fragile tests, damaged architecture, and code that’s harder to maintain than if you’d written tests after implementation.

Martin Fowler asked “Is TDD Dead?” in 2014. The debate remains unresolved because the honest answer is: TDD with heavy mocking should be dead. The emperor has no clothes. It’s time to admit it.
