This post originated from an RSS feed registered with Agile Buzz
by Dave Churchville.
Original Post: Are Acceptance Tests an Agile Specification?
Feed Title: Agile Project Planning
Feed URL: http://feeds2.feedburner.com/AgileProjectPlanning
Feed Description: Thoughts on agile project planning, Extreme programming, and other software development topics
Acceptance tests are less visible in Agile discussions than their more popular brethren, such as iterations, stories, and unit tests. But what exactly is an "acceptance test" anyway?
Well, to paraphrase the extremeprogramming.org Rules page, an acceptance test is something the customer creates that determines whether a story has been correctly implemented. In other words, a set of acceptance tests forms the specification for the user story. Anything the customer thinks the story needs to support should be specified in one or more acceptance tests.
And here's where the fun begins. Since much of the focus on acceptance tests has been on automating them, there's very little material available on how to write a proper acceptance test. So the developers may get better and better at automating the tests they are given from the customer, but how does a customer know how to write good tests in the first place?
An acceptance test is just a statement about the functionality of the system. It can be specific, or fairly general, as long as it's testable.
Let's say, for example, that Bill Gates needs a new word processor. He writes a couple of stories like: "Need to create and modify text" and "Need to be able to save and load text".
A team of eager developers starts working on the stories, and they have questions. Does it need to be compatible with WordStar? What sort of keystrokes does it support? How much memory should it require?
Bill gives some basic answers (since he works on site with the team), and they go back and forth in a two week iteration. The developers write lots of unit tests and they all pass. Brilliant!
Now, for the next iteration, Bill gets a little preoccupied with some nuisance lawsuit, and isn't able to spend as much time with the developers. Since they can't really talk to him, they make some educated guesses about the features, and try to keep it simple.
So Bill has asked for some online help in the next iteration. Our cheerful developers build a maniacal animated paperclip, which certainly fulfills the high-level story. Millions of innocent people are made to suffer needlessly.
What if Bill had just written some acceptance tests to clarify things? How about:
1. "When I type F1, the main index of help should appear on top in a separate window."
2. "When I have a word selected and type F1, the definition of the word should pop up if available."
3. "Under no circumstances should online help use any cute, animated, or otherwise demonically possessed metallic office supplies."
So take the sum of these acceptance tests (as documents or sentences, not necessarily as executable code), and put some fancy headers on them, and maybe some pictures, or even a couple of throw pillows, and you've got yourself a nice looking functional spec.
Now, if the developers want to, they can automate some of these for regression testing, and to get the warm fuzzy glow of green lights. And the next time someone asks, "Why don't we add a paperclip for online help?", the team will rise as one to defend the customer, using the acceptance tests as objective evidence, not just their collective memories.
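If the team does decide to automate, the test can read almost as plainly as the customer's sentence. Here's a minimal sketch in Python; the `WordProcessor` and `HelpSystem` classes are invented purely for illustration (a real suite would drive the actual application, or use an acceptance-testing tool like Fit):

```python
# Hypothetical word-processor objects, invented for this example --
# they stand in for whatever interface the real application exposes.
class HelpSystem:
    def __init__(self):
        self.assistants = []   # animated "helpers" would live here

    def show_index(self):
        return "main help index"

class WordProcessor:
    def __init__(self):
        self.help = HelpSystem()

    def press(self, key):
        """Simulate a keystroke and return what appears on screen."""
        if key == "F1":
            return self.help.show_index()
        return None

def test_f1_shows_help_index():
    # Acceptance test 1: "When I type F1, the main index of help
    # should appear on top in a separate window."
    app = WordProcessor()
    assert app.press("F1") == "main help index"

def test_no_demonic_office_supplies():
    # Acceptance test 3: no cute, animated, or otherwise demonically
    # possessed metallic office supplies.
    app = WordProcessor()
    assert app.help.assistants == []

test_f1_shows_help_index()
test_no_demonic_office_supplies()
```

Notice that the test names and comments carry the customer's intent verbatim, so the green light really does stand for "the customer's sentence still holds."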
The key thing is for the acceptance tests to come from the customer (or customer representative for shrinkwrap products). The tests can tell you what the system should do, and what it should not do, but still leave room for creative solutions, intuitive interactions, and elegant architecture.
So are stories combined with their acceptance tests the agile equivalent of a functional specification? Well, in the spirit of agile philosophers everywhere, if it works for you, who cares what you call it?