Alan Keefer
Posts: 29
Nickname: akeefer
Registered: Feb, 2007
Re: Cedric Beust on Test Extremism and Test-Driven Development
Posted: Mar 6, 2008 3:48 PM
I posted this on the original blog, but I figured I'd cross-post the comment here.
Good post . . . I largely agree with the general sentiment. I work at a place (Guidewire Software) that has over 36,000 unit tests running continuously, and we’ve tried just about every testing trick or methodology you can imagine, so I feel somewhat qualified to talk about the subject.
There are a few important lessons that we’ve learned. One is that “unit” testing isn’t as valuable as you’d think and that higher-level tests (i.e., functional tests) are more valuable. We’ve had situations where all the “unit” tests were passing but the application was totally broken, because the parts didn’t fit together properly and we didn’t have enough higher-level tests to detect it. We’ve also gone from trying to mock out things like the database to just giving up and running our whole app stack in all our unit tests (including an H2 database).

Basically, anywhere that the test code diverges from the app code paths is suspect for us. It leads to issues where the tests don’t catch bugs between components, because the tests mock things out and the mocks don’t properly encapsulate the full range of behaviors of the real thing (mocking out a database being a good example). It also leads to test-fix death-marches after simple refactorings: when 6,000 unit tests are written expecting that some part of the app behaves in manner X, a 2-day code change quite literally requires a month of test fixing.

So maybe that’s not how Coplien sees an architectural meltdown happening, but it’s what I’ve seen happen: your unit tests are a double-edged sword, and when you’ve still got tests hanging around from version 2.0 while you’re working on version 6.0, there’s going to be serious drag from old tests just like there’s drag from old code.
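To make the “real database instead of a mock” point concrete: here is a minimal sketch of the idea in Python, using the stdlib sqlite3 in-memory database (the post’s stack is Java with H2; the `UserRepository` class and schema here are invented for illustration, not anything from Guidewire). The key property is that schema constraints and SQL errors actually fire in tests, which a hand-written mock would silently skip.

```python
import sqlite3

class UserRepository:
    """Hypothetical thin data-access layer under test."""
    def __init__(self, conn):
        self.conn = conn

    def add(self, name):
        self.conn.execute("INSERT INTO users (name) VALUES (?)", (name,))

    def count(self):
        return self.conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]

def make_test_conn():
    # A real (if tiny) SQL engine per test: fast enough to run everywhere,
    # but still enforces types, constraints, and SQL syntax.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")
    return conn

def test_add_and_count():
    repo = UserRepository(make_test_conn())
    repo.add("alice")
    assert repo.count() == 1

def test_constraints_are_real():
    # A mock that ignores the NOT NULL constraint would pass here; the
    # real engine raises, just as the production database would.
    repo = UserRepository(make_test_conn())
    try:
        repo.add(None)
        assert False, "expected IntegrityError"
    except sqlite3.IntegrityError:
        pass
```

The trade-off is test speed and setup cost, but the test and app code now share the same code path through the persistence layer, which is exactly the divergence the paragraph above is warning about.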
The upshot is that bad tests are worse than no tests, since they just exert a huge amount of friction on your code base and actually make some changes harder. And some things are just inherently hard to test, and while that often means you need to rewrite the code in question, it isn’t always worth it. I’ve seen TDD make code worse (i.e., harder to understand) by decoupling things so fully, behind so many different interfaces, that it becomes much harder to follow. I’ve also seen TDD make code better by causing things to be decoupled in a way that actually works.
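The “decoupled behind too many interfaces” failure mode is easier to see in a sketch (all names here are hypothetical, invented to illustrate the point): both versions below compute the same thing, but in the second one a reader has to traverse an abstract base class and two concrete types to find the one line of real logic, purely so each piece can be mocked in isolation.

```python
from abc import ABC, abstractmethod

# Direct version: one line of logic, trivially readable.
def apply_discount(price: float, percent: float) -> float:
    return price * (1 - percent / 100)

# Over-decoupled version: the same arithmetic, hidden behind an
# interface whose only reason to exist is mockability.
class DiscountPolicy(ABC):
    @abstractmethod
    def rate(self) -> float: ...

class PercentDiscountPolicy(DiscountPolicy):
    def __init__(self, percent: float):
        self._percent = percent

    def rate(self) -> float:
        return 1 - self._percent / 100

class PriceCalculator:
    def __init__(self, policy: DiscountPolicy):
        self._policy = policy

    def total(self, price: float) -> float:
        return price * self._policy.rate()
```

Whether the second version is “better” depends on whether multiple policies ever actually materialize; when they don’t, the indirection is pure reading overhead, which is the trade-off the paragraph above is describing.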
What I really, really hate is when the TDD/agile advocates blame the user instead of the tool. Too often the assumption seems to be that TDD/Pair Programming/Agile Practice X is perfect and that if it doesn’t work for you it’s because you’re doing it wrong. Not all writers write the same, not all painters paint the same, so why do people think all programmers program the same?
At the end of the day, my job as a software developer is about getting quality software that we can sell out the door in a timely fashion. That’s the metric against which all other things must be measured, and it’s always a trade-off. Sometimes it’s worth rewriting a class to make it testable, sometimes it’s not. Sometimes it’s worth spending 2 days fixing some tests, sometimes it’s better to just delete them. It all depends on your code base, project, time constraints, and team composition. Last time I checked, people have managed to write software for the last 30 or 40 years using tools that essentially look like rocks and sticks compared to things like IDEA or Eclipse or languages like Java and Python, so I’m guessing that there’s more than one way to properly develop a working piece of software.
TDD is a great tool, but being blind to its limitations or the tradeoffs that you have to make as a developer of real-world software isn’t a great way to develop.