Summary
Code can be perfect and, at the same time, perfectly useless. Getting requirements right is as important as making sure that requirements are implemented correctly. How do you verify that users' requirements are addressed in the code you're working on?
Many books and tutorials about agile development start out with a familiar scene: a developer sitting with a customer, together defining the requirements and setting the milestones for a project. Complex project management tools, as well as suit-and-tie business analysts, are shunned in favor of paper and pencil or, better yet, the back of a napkin or an envelope.
This scene of agile requirements gathering came to mind earlier this week, when I interviewed Rob Cheng, Borland's director of developer solutions, for an Artima news story. Cheng, whose job involves talking not only to developers but also to CIOs and CTOs, noted that real-world requirements gathering is far from agile in most IT shops, often resulting in the following scenario:
The most common thing we hear from CIOs is that code can be perfect, but the application can be perfectly useless because it's not doing the right thing. It may be doing the wrong thing very well, but not doing what the business needs...
[Direct communication between developers and users] is a best practice that's not done nearly enough. There is often not enough involvement not only of developers, but also of QA practitioners and management, in meetings with customers... Agile development tools [such as a continuous integration server] are not enough. If people are not in the room, the process is broken.
Cheng considers getting requirements right a part of overall project quality:
Testing doesn't help, unless you're testing for the right thing. It's not so much about tools and products that can automate testing, but about [whether] you're testing the right things. You may have some... steps in the functionality of your application that you can write into a test suite, and you can automate the execution and tracking of that test, but that may not be what the end-user or the business really wants...
You need to make sure that you consider not only code coverage, but also requirements coverage: How much of the validation and testing are covering the things that the end-user or business values?
That's easier said than done. Even if end-users and developers communicate directly and constantly, it's not obvious how to formally validate requirements coverage. Requirements are often narrated in a language as imprecise as English, Dutch, or Kyrgyz, and are captured as word-processing documents, Wiki pages, or email fragments, not to mention the backs of Starbucks napkins.
At your company, how do you validate requirements coverage? What tools and methods have worked for you in capturing requirements so that your code ends up not only perfect, but also perfectly useful?
Release more frequently. That will provide the most accurate information on whether the system is usable and valuable.
It might be inappropriate to do that in some cases, though. So, along with releasing more frequently, do user testing. And by that, I don't mean user interviews. I mean actually testing whether, given a task or goal, a user can accomplish it with your system without assistance. Do this repeatedly before a release and respond to what you learn.
User testing might not help with all the behind-the-scenes business processes, though. So have business-process experts specify the rules in an executable form themselves. Don't try to translate them through an analyst or a tester. Provide a medium that the experts understand and can use directly. This is what FIT is supposed to be about (see the sketch below). It will require development effort to support, so you trade that effort against how important it is to build the right thing rather than discovering it's wrong after release.
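To make that concrete, here is a minimal sketch of the FIT idea in Java, using fit.ColumnFixture; the discount rule and its numbers are invented for illustration. The business expert writes a table of examples, and the developer writes only a thin fixture that binds the table's columns to the code.

    // A minimal FIT column fixture, assuming a hypothetical business rule:
    // orders of 100.00 or more get a 5% discount.
    import fit.ColumnFixture;

    public class DiscountRule extends ColumnFixture {
        // Input column: filled in from the customer's table of examples.
        public double orderTotal;

        // Output column: FIT calls this and checks the result against
        // the expected value the customer wrote in the table.
        public double discount() {
            // In a real project this would delegate to the production rule.
            return orderTotal >= 100.00 ? orderTotal * 0.05 : 0.0;
        }
    }

The expert's table names the fixture in its first row, lists the column headings orderTotal and discount() in its second, and gives example values below; when the suite runs, FIT colors each expected cell green or red. The expert can add new examples without touching the Java code.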
Your question doesn't have a single, simple answer. It depends on the kind of activity and the type of requirement specification. One common method uses a requirements-engineering tool like DOORS to link test cases (black-box tests, in this case) to informally written textual specifications.
In another project, a more agile approach with an almost continuous delivery schedule was chosen. The product is a test tool, so each additional test can be considered an improvement. Requirements are selected by priority, and there is almost no outside force driving the development team. The collection of test specifications and the tool's knowledge base are merged, so the largest part of the user documentation and the test specs are one and the same thing. The knowledge-base entries are linked from the user's test report, which is the primary "product". The documentation is nothing but a simple set of static HTML pages, which in turn link to the requirements; those can take any form, from corporate e-mails to public specs. This way an unscheduled, almost chaotic flow of new requirements is mapped into the test system. The customers accept this because they always see progress, and criticism gets immediate feedback ("...will be fixed in next Friday's delivery..."). I can't say it doesn't work well.
I think Frank and Rob are asking two different questions.
Frank is asking how I make sure that my system meets the specified requirements. That is the easier question, and I am sure there are several ways of approaching it. In our case the project manager periodically goes over the requirements document and informally tests the system against it to see how close we are to done. Hardly a bulletproof solution, but it keeps things more or less on track.
What Rob seems to be asking is how I know that the users really do not care about the things that are not in the written requirements, and how much they care about each of the requirements I do have. That I find to be the difficult question. Here we just do trial and error: show something to the user and get some feedback. What do you use to make sure you have captured all the needs and their relative importance?
We have developers periodically visit customers and watch them use the product. Not perfect, but one or two site visits a year, for a day or two each, can radically alter a developer's perspective on how an app is used on a day-to-day basis.
Certainly not perfect but we can at least talk somewhat intelligently about new features with business analysts and program managers.
"At your company, how do you validate requirements coverage? What tools and methods have worked for you in capturing requirements so that your code ends up not only perfect, but also perfectly useful?"
Someone from the development team has to learn what the users do, inside and out, so that there's a "virtual domain expert" on the team. That person also needs to know the technology really well, unlike many pure business analysts.
The more disciplines (and departments) involved, the harder this becomes, so it doesn't really scale. Oftentimes different stakeholders have a harder time communicating with each other than they do with the development team, in which case you need at least one person who understands all sides. That's not always possible.
You also need to focus people on what's important. I once had a group of users obsess for over a month about fitting a certain set of information on one screen without scrollbars. Of course, during this time they didn't realize that the application was producing subtly incorrect numbers.
The GUI was a nit. The incorrect numbers were a major issue, making the application unacceptable for deployment.
What's the point? IMHO, one of the major problems with the "release often" and "have the users test right away" approaches is that they can focus attention on the wrong things. You can spend forever getting a GUI or some other obvious aspect of the system right while subtler problems accumulate.
FIT (http://fit.c2.com/) strips off the GUI and focuses on the behavior that matters to the customer. It allows for close developer/customer collaboration.
I guess the ideal case would be to have a set of requirements mapped to automated system test cases. The system test cases could be implemented with tools like FitNesse or Selenium.
The input data and conditions for these tests can vary. This is something the business analyst or the client can then verify, since there is nothing technical about executing these tests once they are constructed.
One system test should map to one or more requirements, and once the test passes you know that those requirements have been taken care of. Once all the tests pass against the requirements, you know that every requirement has been implemented the way it was desired.
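As a rough sketch of one way to make that mapping explicit, the test below tags an automated Selenium system test with the requirement it covers; JUnit 4 and Selenium WebDriver's Java bindings are assumed, and the requirement ID, URL, and element IDs are invented for illustration.

    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;

    import org.junit.Test;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.htmlunit.HtmlUnitDriver;

    import static org.junit.Assert.assertTrue;

    public class CatalogSystemTest {

        // Marker linking a test to the requirement(s) it covers.
        @Retention(RetentionPolicy.RUNTIME)
        public @interface Covers {
            String[] value();
        }

        @Test
        @Covers("REQ-17")  // "A customer can search the catalog by product name."
        public void customerCanSearchCatalogByName() {
            WebDriver driver = new HtmlUnitDriver();
            try {
                // Drive the deployed application the way an end-user would.
                driver.get("http://localhost:8080/catalog");
                driver.findElement(By.id("search")).sendKeys("widget");
                driver.findElement(By.id("go")).click();
                assertTrue(driver.getPageSource().contains("Standard widget"));
            } finally {
                driver.quit();
            }
        }
    }

A small reflection-based report can then scan the test classes for the Covers tags and compare them against the requirements list, which gives the requirements-coverage view discussed above: any requirement with no passing test stands out immediately.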