Re: Is Complete Test Coverage Desirable - or Even Attainable?
Posted: Feb 17, 2005 8:33 AM
I think testing as much as you can is always desirable. Where I have an issue with unit tests generally is that the hard-core XP/unit-test advocates always seem to say "If you had good unit tests..." every time somebody so much as mentions having a bug, as if unit testing were some silver bullet that can find all bugs, fix all problems, and probably help me lose 30 pounds.
"I got N% coverage with my unit tests." Great. You did all your development on Red Hat 9. What if I try running that on a Debian based distribution? You said that you support Linux, why won't it run? Or substitute your favorite versions of Windows or Office or whatever. They are not a panacea and I hate that they get portrayed as such by some people.
What Frank's article touches on is that there are system-wide issues that need to be accounted for, and things you simply cannot unit test. If I'm writing an application for any mainstream operating system these days (except maybe OS X, because it hasn't been through too many versions yet), I cannot possibly test every permutation of every possible environment, including security updates, kernel updates, browser updates, and so on. Some things you have to take on faith. That being said, I don't think you need to question whether complete test coverage is desirable. I think we all know it is not attainable, but that should not stop you from beating on your application as much as possible during the testing phase. All other things being equal (that pesky ROI acronym came up... :-), the more you know, the better off you will be.
As far as making applications configurable goes, this helps in some ways and makes things harder in others. Generally there are fewer areas in which a configurable application can go wrong, because the goal is to reduce the amount of code and logic in the code. Less code means fewer chances for bugs. However, if you do not pick good defaults and the majority of users have to tweak things, you can bet people will be putting in (accidentally, of course) all kinds of wacky data that you will have to code defensively against. Granted, that code isn't hard to write, but it is boring, which means a lot of the time it doesn't get done. That is its own sort of problem, although those issues are usually far less painful than the horrible, nasty logic errors that cause your database to go bye-bye or your machine to crash periodically and predictably.
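For what it's worth, the boring defensive code I'm talking about is nothing more exotic than this (a made-up Python example; the setting name and limits are invented): catch the wacky value and fall back to a sane default instead of letting it wander off into your logic.

    DEFAULT_TIMEOUT_SECONDS = 30

    def read_timeout(raw_value):
        """Parse a timeout setting, defending against junk input."""
        try:
            timeout = int(raw_value)
        except (TypeError, ValueError):
            return DEFAULT_TIMEOUT_SECONDS   # "thirty", "", None ...
        if timeout <= 0 or timeout > 3600:
            return DEFAULT_TIMEOUT_SECONDS   # -5, 0, 999999 ...
        return timeout

    assert read_timeout("45") == 45
    assert read_timeout("bananas") == 30
    assert read_timeout("-1") == 30

Tedious, yes, but it's exactly the kind of thing that keeps one bad config entry from becoming a 2 AM support call.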
In our projects we put a lot more weight on system and integration testing than we do on unit testing. That is where all the interesting bugs have come up, anyway; at least that's been my experience. You can unit test Component A and Widget B to death and have them both work, but then you hook Component A to Widget B and, in some cases where the date ends in 9 in a month starting with J during lunch in the Pacific time zone, the whole damn thing fails. If the program was responsibly logging its problems, told you the database was locked and it couldn't get at it, and you followed that trail and saw this was exactly when database maintenance was running, that's useful. No amount of unit testing is going to tell you that.
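By "responsibly logging its problems" I mean something as simple as this sketch (the component names and the exception are made up for the example): the layer that actually hits the locked database records the real cause, so somebody can later line the log entry up against the maintenance window.

    import logging

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("widget_b")

    class DatabaseLockedError(Exception):
        pass

    def save_order(db, order):
        # Component A calls into Widget B; each passes its own unit
        # tests. The failure only shows up once they are wired together
        # while nightly maintenance has the database locked.
        try:
            db.insert("orders", order)
        except DatabaseLockedError:
            # Log the real cause so the trail leads somewhere useful.
            log.error("orders insert failed: database locked", exc_info=True)
            raise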
Eat well. Exercise. Die anyway. There has to be something similar somewhere about unit testing, code reviews and crashing software...