I've encountered this a few times: a code base running under CruiseControl for continuous integration that slowly grinds to a halt as the tests take longer and longer to run. In one situation a once-healthy build started taking two hours to run, and the blame could be placed on too much integration-style testing (starting J2EE containers, deploying, and then testing through Cactus).
Generally, I now separate my unit tests into two levels: pure unit tests and integration unit tests, just like you describe in your article. In one situation, though, some genuinely pure tests that were doing fairly exhaustive testing of a lot of different combinations of options were taking too long to run, so they got moved into the integration section. Sometimes things from the integration section that are quick, easy, and considered important to run on every build get moved into the pure section.
I run the pure level on the continuous build (many times a day). The integration level gets run less frequently (say, once a day). I don't really care about the exactness of what is pure and what is not; the real separation is more pragmatic: it's just what is fast and what is slow. But as you rightly point out, these two ways of splitting the tests usually amount to the same thing. Generally this keeps the continuous build quick and encourages developers to write real pure unit tests.
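For concreteness, here is a minimal sketch of how that split might look with plain JUnit suites. The suite and test class names are made up for illustration; the idea is that the continuous build points at the pure suite, while the daily build runs both.

// PureUnitTestSuite.java -- fast, pure tests, run on every check-in.
import junit.framework.Test;
import junit.framework.TestSuite;

public class PureUnitTestSuite {
    public static Test suite() {
        TestSuite suite = new TestSuite("Pure unit tests (fast)");
        suite.addTestSuite(OrderCalculatorTest.class);   // hypothetical test classes
        suite.addTestSuite(PriceFormatterTest.class);
        return suite;
    }
}

// IntegrationTestSuite.java -- slower integration-style tests, run nightly.
import junit.framework.Test;
import junit.framework.TestSuite;

public class IntegrationTestSuite {
    public static Test suite() {
        TestSuite suite = new TestSuite("Integration tests (slow)");
        suite.addTestSuite(OrderPersistenceIntegrationTest.class);  // hypothetical
        return suite;
    }
}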
A further disadvantage of the integration-style tests is that they are, in my experience, more likely to fail. Pure unit tests, once passing, tend to stay that way (until the code is altered). Integration tests are a bit riskier because some external resource may not be working. This is annoying, especially as they run only once a day, so the interval between fixing them and re-running them is longer. Generally, I let the daily build pass even with broken integration-style tests and just report the errors to the developers, but I take a stricter view at certain times. For example, nearing the end of a development cycle I start to enforce the integration-style tests.
For an application that interacts with a database quite often (several times within each method), which unit testing framework should be used? And if we need to write our own customised program for unit testing, how should we go about writing it? Please suggest.
As others have stated in this topic, your DB access should be encapsulated behind some interface, e.g., MyDatabaseService or MyDAO (DAO = Data Access Object). The methods in your app that access the DB will do so through this interface, and as such it can be mocked/stubbed (I recommend EasyMock for this). If you use DI (dependency injection, a.k.a. IoC, inversion of control), changing your methods to use the mock/stub instead of the real DB-accessing implementation of MyDatabaseService is easy (I recommend Spring for the DI).
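A minimal sketch of that approach, assuming EasyMock 2 and JUnit 3; the CustomerDao and GreetingService names and methods are invented for illustration, and in production the real JDBC-backed DAO would be wired in (e.g., via Spring) instead of the mock.

// GreetingServiceTest.java
import junit.framework.TestCase;
import org.easymock.EasyMock;

// Hypothetical DAO interface hiding all database access.
interface CustomerDao {
    String findCustomerName(long id);
}

// Hypothetical service that depends only on the interface, so the real
// implementation can be swapped for a mock in tests.
class GreetingService {
    private final CustomerDao dao;
    GreetingService(CustomerDao dao) { this.dao = dao; }
    String greet(long customerId) {
        return "Hello, " + dao.findCustomerName(customerId);
    }
}

public class GreetingServiceTest extends TestCase {
    public void testGreetUsesDaoWithoutTouchingTheDatabase() {
        CustomerDao mockDao = EasyMock.createMock(CustomerDao.class);
        EasyMock.expect(mockDao.findCustomerName(42L)).andReturn("Alice");
        EasyMock.replay(mockDao);

        GreetingService service = new GreetingService(mockDao);
        assertEquals("Hello, Alice", service.greet(42L));

        EasyMock.verify(mockDao);
    }
}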
When unit testing the implementation of MyDatabaseService itself, use an in-memory DB, e.g., HSQLDB. There is no reason unit tests should ever access a stateful DB; this is what integration tests are for. If you cannot write unit tests without accessing an external resource like a DB, you need to refactor your app code.
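Here is a minimal sketch of what such a test might look like, using plain JDBC against HSQLDB's in-memory mode. The table and queries are invented for illustration; a real test would drive the MyDatabaseService implementation rather than raw SQL.

// InMemoryDatabaseTest.java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import junit.framework.TestCase;

public class InMemoryDatabaseTest extends TestCase {

    private Connection connection;

    protected void setUp() throws Exception {
        Class.forName("org.hsqldb.jdbcDriver");
        // "mem:" gives a private, throwaway database that lives only inside the JVM.
        connection = DriverManager.getConnection("jdbc:hsqldb:mem:testdb", "sa", "");
        Statement stmt = connection.createStatement();
        stmt.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name VARCHAR(50))");
        stmt.execute("INSERT INTO customer VALUES (1, 'Alice')");
        stmt.close();
    }

    protected void tearDown() throws Exception {
        Statement stmt = connection.createStatement();
        stmt.execute("SHUTDOWN");   // drop the in-memory database between tests
        stmt.close();
        connection.close();
    }

    public void testReadsCustomerFromInMemoryDatabase() throws Exception {
        Statement stmt = connection.createStatement();
        ResultSet rs = stmt.executeQuery("SELECT name FROM customer WHERE id = 1");
        assertTrue(rs.next());
        assertEquals("Alice", rs.getString("name"));
        rs.close();
        stmt.close();
    }
}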
DI and the practice of dividing your app into modules/services that all expose generic interfaces are wonderful enablers of proper "pure" unit testing. See Spring's website for examples of these practices.
I believe concurrency unit testing deserves a special mention here. I have written concurrency unit tests (CUTs) that are "pure" according to your rules, but still require a non-trivial running time.
The unfortunate nature of CUTs is that they are non-deterministic, i.e., one iteration does not prove that the code is correct, as it does for a regular unit test. Uncovering concurrency bugs requires hitting the code with multiple threads for a sufficient length of time. (Even then, correctness can only be stated in terms of probability, e.g., I have hit this code with 100 threads for an hour, so it is highly probable that there are no concurrency bugs.)
Some might argue that CUTs are load tests, but I believe that the purpose of a unit test is to verify the correctness of a unit of code, and a unit of concurrent code cannot be verified without CUTs.
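To make the shape of a CUT concrete, here is a minimal sketch. The Counter class is invented for illustration (and kept thread-safe so the example passes); real CUTs typically run far more iterations or for a fixed length of time.

// CounterConcurrencyTest.java
import java.util.concurrent.atomic.AtomicInteger;
import junit.framework.TestCase;

public class CounterConcurrencyTest extends TestCase {

    // Hypothetical unit under test.
    static class Counter {
        private final AtomicInteger value = new AtomicInteger(0);
        void increment() { value.incrementAndGet(); }
        int get() { return value.get(); }
    }

    public void testManyThreadsIncrementingConcurrently() throws Exception {
        final Counter counter = new Counter();
        final int threads = 100;
        final int incrementsPerThread = 10000;

        Thread[] workers = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            workers[i] = new Thread(new Runnable() {
                public void run() {
                    for (int j = 0; j < incrementsPerThread; j++) {
                        counter.increment();
                    }
                }
            });
        }
        for (int i = 0; i < threads; i++) workers[i].start();
        for (int i = 0; i < threads; i++) workers[i].join();

        // If increment() were not thread-safe, lost updates would show up here --
        // but only on some runs, which is exactly why CUTs are non-deterministic.
        assertEquals(threads * incrementsPerThread, counter.get());
    }
}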
Perhaps a fast/slow categorization of tests is more useful than a pure/impure one for the purposes of deciding run frequency. Although pure tests are typically fast and impure tests are often slow, slow pure tests and fast impure tests are not uncommon.
BTW, I figured out recently that just because you hit code with multiple threads for a significant period of time doesn't mean that there are no concurrency bugs. Under testing conditions, the threads often end up being interrupted at the same points in the code on every run. I found a library by IBM called ConTest (bad name) that instruments code so that the interruption points vary on each iteration. I added a task to my Ant scripts that instruments my concurrent code before running my unit tests.
"Running the [unit-test] suite takes twenty five seconds. [...] I think it's fine for a test run to take a minute or even two, so long as it can be done in the background." I saw a unit-test framework, written by a game programmer, that would fail the test suite if it took longer than 250 ms to run! For me, the unit tests must run in less than ten seconds (ideally less than a second) to be as useful as they can be.