This post originated from an RSS feed registered with Agile Buzz
by Keith Ray.
Original Post: TDD - Refactor Sooner To Avoid Throwing Away Tests
Feed Title: MemoRanda
Feed URL: http://homepage.mac.com/1/homepage404ErrorPage.html
Feed Description: Keith Ray's notes to be remembered on agile software development, project management, oo programming, and other topics.
Write test suites that exercise everything written so far.
Fix the bugs found by the tests.
Write a few more tests; fix some more bugs.
Add new features.
Write tests for the new features.
GOTO 5.
Which he explains by:
This process lets me experiment and make a few design mistakes during step 1, perhaps starting from scratch a few times. Since I wait before writing the initial set of test cases, I don't have to write and rewrite tests that are exercising code that I'm probably going to throw away.
I'm thinking he hasn't quite "gotten" Test Driven Development yet. Then again, if you have no idea how to do something, a spike (without unit tests) lets you figure it out. The XP practice would then be to throw the spiked code away, and now that you know how to do it, implement it test-first. This way you spend very little time in the debugger, and almost no time fixing bugs.
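To make "implement it test-first" concrete, here's roughly what that first step looks like after a spike has been thrown away -- a tiny hypothetical example in Python (a run-length encoder of my own invention, not anything from his post):

    import unittest

    # The simplest code that makes the first test pass; it grows
    # only as further small tests demand more behavior.
    def encode(text):
        return ""

    class RunLengthEncodeTest(unittest.TestCase):
        def test_empty_string_encodes_to_empty(self):
            self.assertEqual("", encode(""))

    if __name__ == "__main__":
        unittest.main()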
However, his desire to avoid "exercising code" that he's going to throw away may be counter-productive. He may be writing too-big tests for too-big features, and probably writing too many tests before refactoring. My experience with TDD is that tests rarely have to change if they test for expected results rather than testing "how" something is done.
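Here's a rough sketch of what I mean by testing for expected results rather than "how" -- a made-up PriceList class in Python, purely for illustration:

    import unittest

    class PriceList:
        # Toy class for illustration: callers only care that a price
        # they set can be looked up later; the dict is an internal detail.
        def __init__(self):
            self._prices = {}

        def set_price(self, sku, price):
            self._prices[sku] = price

        def price_of(self, sku):
            return self._prices[sku]

    class PriceListTest(unittest.TestCase):
        def test_returns_the_price_that_was_set(self):
            prices = PriceList()
            prices.set_price("apple", 3)
            # Assert on the observable result, not on how it is stored;
            # this test survives replacing the dict with something else.
            self.assertEqual(3, prices.price_of("apple"))

    if __name__ == "__main__":
        unittest.main()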
When I'm doing TDD well, I write a small test that does very little, and then a small piece of code to pass the test, repeating with additional small tests until the code does everything I think it needs to do. Along the way, I refactor to eliminate duplication. I stay alert for a new test that looks very similar to an existing test, and for chunks of code under test that also look like duplicates; these tell me it's time to refactor and create an abstraction. I don't want 15 almost-identical tests that could be replaced by a single test after a refactoring, but replacing a second or third duplicate test by creating an abstraction is not painful, because the tests are small and I catch the duplication as soon as it appears.
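For instance, here's a hypothetical sketch (Python, invented example) of catching that duplication at the second or third copy and pulling it into an abstraction before it can grow:

    import unittest

    def celsius_to_fahrenheit(c):
        return c * 9.0 / 5.0 + 32.0

    class ConversionTest(unittest.TestCase):
        # The second and third tests started to look just like the first,
        # so the shared shape was pulled into one checking helper before
        # the duplication could grow to 15 near-identical tests.
        def check(self, celsius, expected_fahrenheit):
            self.assertAlmostEqual(expected_fahrenheit,
                                   celsius_to_fahrenheit(celsius))

        def test_known_conversions(self):
            self.check(0, 32.0)
            self.check(100, 212.0)
            self.check(-40, -40.0)

    if __name__ == "__main__":
        unittest.main()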
When I work this way, I rarely need to throw away and rewrite tests as the design evolves. I may need to change a name here or there, or some other small refactoring, but the tests are mostly immune to internal changes to the code under test.
When I have problems doing TDD, it usually comes from not knowing how to use an external API. I take a stab at it. Find it doesn't work at all. Have to look up [more] documentation, take another stab. Search for example code, take another stab. And so on. (Or I let my partner drive, if I'm pair programming.) It's particularly irritating if the external API I'm trying to use is poorly documented, buggy, or behaves differently on different versions of the platform I'm working on. (That last one gets discovered when the test fails when I build and run on the other platform.) The test just sits there until I figure it out. Sometimes I take an entirely different path, and that test has to be eliminated -- though that's not a big deal, since that test is probably only three or four lines of code anyway.
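Such a test is often nothing more than a little "learning" test like this sketch (Python, using the standard library's strptime as a stand-in for whatever unfamiliar API I'm wrestling with):

    import unittest
    from datetime import datetime

    class StrptimeLearningTest(unittest.TestCase):
        # A tiny learning test: it stays red until I've figured out the
        # format string, and then it documents what I learned.
        def test_parses_iso_style_date(self):
            parsed = datetime.strptime("2005-03-01", "%Y-%m-%d")
            self.assertEqual((2005, 3, 1),
                             (parsed.year, parsed.month, parsed.day))

    if __name__ == "__main__":
        unittest.main()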