The purpose of testing
I was recently reviewing a PowerPoint presentation on testing for a colleague. One slide said: "The purpose of testing is to find bugs." Wrong, wrong, wrong.
Actually, the presentation didn't quite say that. The first such bullet point said "to find bugs in the product before customers do". (The next slide did baldly state "the purpose of testing is to find the bugs".)
The purpose of testing is not to find bugs - finding bugs is a fun thing to do, but it has no economic justification in and of itself. The purpose of a software effort is to create value; the purpose of testing is to support that larger purpose.
My experience with software - in programming, mostly, but also to some extent in maintenance and testing - suggests that testing supports the main purpose of software efforts in two ways. First, testing creates opportunities to remove software defects - "testing finds bugs", in other words. Second, and much more important, testing provides information about the software and the process of creating it.
That information can be of an economic nature. A more valid "purpose of testing", as the presentation also suggested, could be this: to find the break-even point between two costs that both come out of the expected revenue from the software - the cost of releasing it as it stands (support costs, sales lost to a reputation for poor quality, and so on) and the cost of testing it further and removing the defects that would otherwise cause those field costs.
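To make the arithmetic concrete, here is a minimal sketch of that break-even comparison. The figures and variable names are invented for illustration; they are not from the presentation.

    # Hypothetical figures, for illustration only.
    expected_revenue = 1_000_000         # revenue expected from the release
    cost_of_field_defects = 220_000      # support costs, lost sales, reputation damage
    cost_of_further_testing = 150_000    # further test effort plus fixing what it finds

    value_if_released_now = expected_revenue - cost_of_field_defects
    value_if_tested_further = expected_revenue - cost_of_further_testing

    # Further testing pays off only while it costs less than the field costs
    # it removes; the break-even point is where the two costs are equal.
    if cost_of_further_testing < cost_of_field_defects:
        print("Keep testing: net value", value_if_tested_further)
    else:
        print("Release now: net value", value_if_released_now)

The real figures are of course estimates, and the comparison says nothing about schedule - which is precisely where this framing runs into trouble.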
One problem with this way of framing what testing does is that the information could come in too late to do any good. The information obtained might be along the lines of "we don't know how much more testing we need to do, but we do know it will probably push us past the competitor's release date, which will make our own project a loser because of their time-to-market advantage".
Testing can provide information of a much more valuable sort, however: it can tell us how the defects are being put into the software in the first place. Such information about "defect injection" can make more of a difference than the efficiency of "defect removal". It doesn't take much thinking to see that the fewer defects you write in the first place, the less it will cost to take them out.
Seen primarily as a way of figuring out how defects come to be put into your software, "testing" also becomes easier to reconcile with what Extreme Programming and other agile approaches advocate. Automated tests detect defects early and thus provide better information about how defects appear. Involving the customer and the QA team early in the effort, and throughout, results in better information about what should count as a defect, and why defects come about (including misunderstandings about the requirements). Software teams that obtain early and continuous information about how their software might go bad, and construct adequate theories from it, are more effective at producing good software.
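As a rough illustration of what a model of defect injection might look like in practice, here is a minimal sketch: it tallies defects found in testing by the activity that injected them. The defect records and category names are invented for the example.

    from collections import Counter

    # Invented defect records: how each defect was found and, after analysis,
    # the activity in which it was injected.
    defects = [
        {"found_by": "unit test",        "injected_in": "coding"},
        {"found_by": "acceptance test",  "injected_in": "requirements"},
        {"found_by": "unit test",        "injected_in": "coding"},
        {"found_by": "exploratory test", "injected_in": "design"},
        {"found_by": "acceptance test",  "injected_in": "requirements"},
    ]

    # A crude model of defect injection: which activities inject defects, and how often.
    injection_counts = Counter(d["injected_in"] for d in defects)
    for activity, count in injection_counts.most_common():
        print(f"{activity}: {count} defects injected")

Even a tally this crude starts to say something about where defects come from, and therefore about where prevention effort would pay off.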
That is why I would much prefer the following definition:
"The purpose of testing is to provide information leading to adequate models of defect injection."