Anyone working on a large TDD (Test-Driven Development) project knows the drill:
● Make changes to code
● Update with the latest version of the code base
● Fix inconsistencies and conflicts
● Start running tests
● Make a cup of tea / have lunch / go home, depending on how far into the project you are
● Fix one or two test failures (fingers crossed you have not screwed up the other tests)
● Try to check in the code
● Swear loudly because someone has made changes to the code base that conflict with yours
● Repeat ad nauseam
OK, I am exaggerating a bit, but full end-to-end system tests can take a while on a large system, particularly if you have to reset the system to a known state before each test. If you are attempting to get a release out regularly, fixing the build ready for the next release involves banning commits of new code and days of pain and torment fixing all the build issues.
Dividing the tests into groups (e.g. using TestNG) and building the project from separate sub-projects can help, but in the world of Spring, AOP, proxies, ORM technology and so on, it is very hard to tell which group of tests you need to run to validate a code change.
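For readers unfamiliar with TestNG's grouping, tests are tagged with @Test(groups = "...") and the groups to run are selected in a suite file. A minimal testng.xml that runs only a "fast" group might look like the following (the group and class names here are hypothetical, for illustration only):

```xml
<!DOCTYPE suite SYSTEM "https://testng.org/testng-1.0.dtd">
<suite name="FastSuite">
  <test name="fast-tests-only">
    <groups>
      <run>
        <!-- run only tests annotated with @Test(groups = "fast") -->
        <include name="fast"/>
      </run>
    </groups>
    <classes>
      <!-- hypothetical test class -->
      <class name="com.example.OrderServiceTest"/>
    </classes>
  </test>
</suite>
```

The limitation described above still applies: the grouping is static, so it cannot tell you which groups a given code change actually invalidates.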
The solution given here (which has been successfully implemented) goes a long way towards eliminating these problems. It uses the JDK 5 Instrumentation technology to record at run time which tests touch which classes, and conversely which tests you need to run for each class change. Before anyone screams "What about new classes? What about files that are not classes?": I am well aware of these issues, but this technique will find the correct tests 90% of the time, and I rely on the build server to catch the remaining 10%. It certainly beats either not running the tests at all or waiting a long time for the full suite to complete.
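As a sketch of the bookkeeping this involves (the class and method names below are hypothetical, not the actual implementation): a java.lang.instrument agent would weave recording calls into loaded classes, each call noting that the currently running test touched a class; inverting that map then answers the real question, "which tests must I re-run for this changed class?"

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch: the forward index (test -> classes touched) would
// be populated at run time by instrumented code; the inverse index
// (class -> tests) is what drives selective test re-runs.
public class TestCoverageRecorder {

    // Forward index: test name -> classes it touched while running.
    private final Map<String, Set<String>> testToClasses = new HashMap<>();

    // Conceptually called by woven-in instrumentation whenever a class
    // is entered while the named test is executing.
    public void recordTouch(String testName, String className) {
        testToClasses.computeIfAbsent(testName, k -> new HashSet<>())
                     .add(className);
    }

    // Invert the forward index: class name -> tests that exercised it.
    public Map<String, Set<String>> classToTests() {
        Map<String, Set<String>> inverted = new HashMap<>();
        for (Map.Entry<String, Set<String>> entry : testToClasses.entrySet()) {
            for (String cls : entry.getValue()) {
                inverted.computeIfAbsent(cls, k -> new HashSet<>())
                        .add(entry.getKey());
            }
        }
        return inverted;
    }

    public static void main(String[] args) {
        TestCoverageRecorder recorder = new TestCoverageRecorder();
        recorder.recordTouch("OrderServiceTest", "OrderService");
        recorder.recordTouch("OrderServiceTest", "OrderDao");
        recorder.recordTouch("CustomerServiceTest", "CustomerService");
        recorder.recordTouch("CustomerServiceTest", "OrderDao");

        // A change to OrderDao means both tests must be re-run.
        System.out.println("Tests to re-run for OrderDao: "
                + recorder.classToTests().get("OrderDao"));
    }
}
```

In the real system the recordTouch calls come from bytecode added by a ClassFileTransformer registered in the agent's premain, but the map-inversion above is the essence of turning "which classes does each test touch" into "which tests does each class need".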