The Point of TDD
by Keith Ray (MemoRanda)
Tim Bacon wrote:
"I would much rather aim for 100% test-driven development than aim for 100% test coverage. It seems odd to want to use coverage tools to determine after-the-fact that the development process isn't working as intended, rather than to work constantly with development teams at the source of the problem. High path and branch coverage is a byproduct of the effective use of TDD, not the aim of it."
To which someone replied:
"But I fear he's missing the point. If I'm only testing 1% of my code paths, 100% of the time, what am I accomplishing? Squat.
"As my boss has said, adding a non-intrusive code coverage tool to an already successful unit-testing strategy can absolutely find gaps in coverage that staring at the screen just won't find.
"But juxtaposing 100% TDD with a 100% coverage goal doesn't make sense. They are complementary, parallel even, but not opposing goals."
My reply: By aiming for 100% TDD, one can get 100% statement coverage and branch coverage as a side effect. It's not that coverage isn't valued, but that getting coverage via TDD is easier than trying to get coverage by after-the-fact unit testing. (And nowhere did Tim Bacon say that he was only aiming for 1% of the code being written by TDD. Read what he wrote!)
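To see what after-the-fact coverage measurement looks like, here is a minimal sketch using the coverage.py and pytest libraries (the tests/ path is an assumption; point it at any existing suite):

    import coverage
    import pytest

    # Start measuring before the suite runs, so every statement the
    # tests execute gets recorded.
    cov = coverage.Coverage()
    cov.start()

    pytest.main(["tests/"])  # run the existing unit-test suite

    cov.stop()
    cov.save()

    # Report the statements no test executed: these are the gaps
    # a coverage tool finds that staring at the screen won't.
    cov.report(show_missing=True)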
TDD's rule is "write no line of code without a test to force its existence." So that pretty much guarantees 100% statement-level coverage. The rule also applies to "if" statements, etc., so you don't write an "if" statement unless you have a test to force it to exist - pretty much guaranteeing 100% branch-coverage.
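As a minimal sketch of that rule in Python (the shipping-fee example is invented for illustration), each test below is written first, and each line of production code exists only because one of the tests forced it:

    # Test written first: forces shipping_fee() to exist at all.
    def test_light_package_pays_base_fee():
        assert shipping_fee(weight_kg=1) == 5

    # Second test, also written first: forces the "if" branch into
    # existence. Under TDD's rule, the branch may not be written
    # until this test demands it.
    def test_heavy_package_pays_surcharge():
        assert shipping_fee(weight_kg=20) == 15

    # Production code, grown one failing test at a time. Every
    # statement and both branch outcomes are exercised by the tests
    # above, so 100% statement and branch coverage falls out for free.
    def shipping_fee(weight_kg):
        if weight_kg > 10:
            return 15
        return 5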
The key misunderstanding is that Test-Driven Development isn't primarily a testing technique; it's a development technique - a way to drive the design of working code that happens to leave a suite of "programmer tests" as a highly valuable side effect.
We use the term "programmer tests" to try to avoid misunderstandings based on the term "unit test". XP has "customer tests" / acceptance tests (driven by the customer) as a whole second level of tests - but the point is to get "just enough" tests to ensure high quality - which is a LOT more testing than most software projects get.
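To make the two levels concrete, a hedged sketch (the order-and-shipping domain is invented): the first test is a programmer test that pins down one unit-level design decision; the second reads like a business rule the customer could own:

    from dataclasses import dataclass

    @dataclass
    class Order:
        shipping_cost: float

    def place_order(member_level: str, total: float) -> Order:
        # business rule: gold members ship free on orders over $50
        free = member_level == "gold" and total > 50
        return Order(shipping_cost=0.0 if free else 5.0)

    # Programmer test: drives one unit's design from the inside.
    def test_free_shipping_requires_gold_level():
        assert place_order("silver", total=60.0).shipping_cost == 5.0

    # Customer test: states an end-to-end rule in the customer's
    # vocabulary; in XP the customer drives and owns these examples.
    def test_gold_member_ships_free_over_fifty_dollars():
        assert place_order("gold", total=60.0).shipping_cost == 0.0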