Re: Test Driven Development?!
Posted: Dec 15, 2005 4:49 PM
I have used TDD to varying degrees over the last two years, and I am very troubled by it. I have just run into too many problems and limitations for TDD to be useful to me. I grant that perhaps I have a misunderstanding of TDD, and also that the environments that I have worked in have not always been supportive of TDD, or agile programming in general.
First, there are the areas others have already raised where TDD is difficult or impossible - distributed services and APIs, user interfaces, and so on. How do you come up with a test suite for Swing? At some point, there has to be visual inspection of the results.
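To make the Swing point concrete (the class and wiring below are invented purely for illustration), about the most a unit test can do is assert on component state, which says nothing about what actually gets painted:

import javax.swing.JLabel;
import javax.swing.JTextField;
import junit.framework.TestCase;

// Sketch of the limit I mean: a test can assert on component state,
// but not on what the user actually sees rendered on screen.
public class GreetingPanelTest extends TestCase {

    public void testGreetingFollowsNameField() {
        JTextField nameField = new JTextField();
        JLabel greeting = new JLabel();

        // In real code a listener would keep these in sync; here the
        // wiring is inlined just to show the kind of assertion possible.
        nameField.setText("World");
        greeting.setText("Hello, " + nameField.getText());

        assertEquals("Hello, World", greeting.getText());
        // Fonts, layout, and painting are still a matter of looking
        // at the running application.
    }
}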
A bigger problem is that I spend two to five times as long writing the unit tests as I do writing the code being tested. Evangelists of TDD typically use small, self-contained examples, such as developing a factorial method. That's easy to write tests for, since the method depends only on its arguments. But that is not the real world.

At my current job, I am responsible for a batch process - yes, they still exist, even in the Java world - that reads from about 40 tables in MySQL, processes the data according to complex business rules, then generates output files in a complex, arcane format (the files are read by a legacy application). For reasons that are too long to get into, this is a plain old command-line app. It could perhaps have been written in multi-tier form, but that would not change the basic problem. In order to set up the inputs for a test, I need to condition the database, which can mean inserting records into many tables. In addition, most of the tables have auto-incrementing key columns, whose values must be carried through to the output files. So I cannot precondition the database to the same exact state each time - even if I clean up the rows I inserted during a test, the rows inserted on the next run will have different key values, making automated validation more difficult. It can be done - I can read back the auto-generated values - but it is more code to write in the test case.
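Just capturing those generated keys adds plumbing like the following to every fixture (a rough sketch; the table and column names are invented):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

// Sketch of the fixture plumbing; table and column names are invented.
public class OrderFixture {

    // Insert a test row and return the auto-generated key, because the key
    // is different on every run and the assertions need to know its value.
    public static long insertOrder(Connection conn, String customer)
            throws SQLException {
        PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO orders (customer_name) VALUES (?)",
                Statement.RETURN_GENERATED_KEYS);
        try {
            ps.setString(1, customer);
            ps.executeUpdate();
            ResultSet keys = ps.getGeneratedKeys();
            if (!keys.next()) {
                throw new SQLException("no generated key returned");
            }
            return keys.getLong(1);
        } finally {
            ps.close();
        }
    }
}

And that is one table out of roughly forty.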
So now I have test cases that are so complex that they themselves need to be tested! Sure, I could separate the app's data access code out into its own layer that implements an interface, and then use mock objects to test the business logic. But at some point, the data access code has to be tested. It doesn't really matter whether I go directly through JDBC or through an ORM layer such as Hibernate - the DAO code has to be tested, and that means preconditioning the DB in the test case.
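For the sake of argument, here is the kind of separation I mean (interface and class names invented). The business rules can be checked against a hand-rolled stub, but the real JDBC implementation still has to be tested against a real database:

import java.util.Arrays;
import java.util.List;
import junit.framework.TestCase;

// Hypothetical DAO interface extracted so the business rules can be
// exercised without touching MySQL.
interface AccountDao {
    List findOverdueAccounts();
}

// Business logic under test (simplified).
class OverdueReport {
    private final AccountDao dao;
    OverdueReport(AccountDao dao) { this.dao = dao; }
    int countOverdue() { return dao.findOverdueAccounts().size(); }
}

public class OverdueReportTest extends TestCase {

    public void testCountsOverdueAccounts() {
        // Hand-rolled stub stands in for the JDBC implementation.
        AccountDao stub = new AccountDao() {
            public List findOverdueAccounts() {
                return Arrays.asList(new Object[] { "acct-1", "acct-2" });
            }
        };
        assertEquals(2, new OverdueReport(stub).countOverdue());
        // The real JDBC-backed DAO still needs its own test that
        // preconditions the database.
    }
}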
The code to validate the arcane output format is similarly complex. I have to parse the generated file and compare the parsed result to hand-coded expected results (containing potentially hundreds of data items).
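Concretely, the validation ends up looking something like this sketch (the record layout here is invented and far simpler than the real one):

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import junit.framework.TestCase;

// Sketch of validating one fixed-width output record; the layout and
// expected values are invented for illustration.
public class OutputFileTest extends TestCase {

    public void testFirstRecordFields() throws IOException {
        BufferedReader in = new BufferedReader(new FileReader("output.dat"));
        try {
            String line = in.readLine();
            assertNotNull("output file is empty", line);

            // Field positions are hard-coded from the file specification.
            String recordType = line.substring(0, 2);
            String accountId  = line.substring(2, 12).trim();
            String amount     = line.substring(12, 22).trim();

            assertEquals("01", recordType);
            assertEquals("ACCT000123", accountId);
            assertEquals("0000150.00", amount);
            // ...and so on, for potentially hundreds of fields per record.
        } finally {
            in.close();
        }
    }
}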
In the end, I wind up writing test cases for fairly self-contained logic, but for anything that involves preconditioning external data sources or examining output files, it is just not worth it. It is much faster to visually inspect the output file to verify correctness, even if I have to do it dozens of times. For this particular product, unit testing can cover less than half of the functionality.
Also, what happens when a test case itself needs refactoring? Perhaps setup code needs to be extracted into a setUp() method. Suppose that extraction touches four tests. Do I now have to re-verify each of those four tests? That greatly discourages refactoring test cases.
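To make the scenario concrete (JUnit 3 style, names invented): once the shared setup moves into setUp(), every test in the class starts from a fixture it no longer builds itself, and each one has to be rechecked.

import java.util.ArrayList;
import java.util.List;
import junit.framework.TestCase;

// Sketch of the refactoring in question: fixture code shared by several
// tests is pulled up into setUp(), which changes the starting state that
// every one of those tests runs against.
public class OrderRulesTest extends TestCase {

    private List orders;   // shared fixture, formerly built inside each test

    protected void setUp() {
        orders = new ArrayList();
        orders.add("order-1");
        orders.add("order-2");
    }

    public void testCountsOrders() {
        assertEquals(2, orders.size());
    }

    public void testFindsFirstOrder() {
        assertEquals("order-1", orders.get(0));
    }

    // ...two more tests that also depend on the extracted fixture.
    // All of them need re-checking after the extraction.
}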
There's also the issue of boilerplate code, such as value objects, Struts form beans, etc. Classes such as these are usually just dumb data containers, with one-line getters and setters and a constructor. Indeed, I have used XSLT to generate value-object code from a small XML file. Using TDD for such code is pure drudgery, and also interferes with using and developing automated code generation tools.
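To illustrate the kind of class I mean (names invented, but typical), here is a value object; test-driving each one-line accessor is make-work, and the class may not even be hand-written in the first place:

import java.io.Serializable;

// A typical dumb data container. Every method is a one-liner, and the
// whole class could just as well come out of a code generator.
public class CustomerValueObject implements Serializable {

    private long id;
    private String name;

    public CustomerValueObject(long id, String name) {
        this.id = id;
        this.name = name;
    }

    public long getId() { return id; }
    public void setId(long id) { this.id = id; }

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}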
Perhaps I am missing some basic concept. How can I get TDD to work, given these issues?