Summary
At last week's SPA conference, Paul Dyson and I ran a workshop on planning non-functional requirements in agile projects. Here is a personal account.
Agile projects measure progress in terms of the business value realized. This is a huge leap forward from the practice of implementing a technical infrastructure first and subsequently layering business logic on top. Since commercial stakeholders are the best judges of value, prioritizing and planning feature implementation is their prerogative.

But has the pendulum swung too far? Amid the abhorrence of 'Big Design Up Front', architecture has been discredited. As a result, many agile projects struggle with planning technical work that apparently does not add any value, yet must be performed for long-term system viability. Many techniques have been tried to reconcile planning according to business value with the need to maintain and improve infrastructure, including reserving a portion of the development effort for 'technical work' (and hence not giving commercial planners access to the complete team's time budget), or even planning entire iterations around technical improvements without any demonstrable business value. These are, at best, a fudge, and they detract from the original intuition behind agile planning. Hence, in this workshop we aimed to grasp the nettle and look for a more fundamental resolution of the tension between the desire to create value and the need to provide a solid technical infrastructure.
The question is not whether and when to let economics guide planning as opposed to technical considerations. In the end, economics always wins. The problem, in my view, is that we tend to see the value of a new feature, but not its cost. By cost, I do not mean the effort we need to invest in implementing the feature, but rather the cost of the nightmare scenarios that may play out once the system offers some new functionality. I developed this approach in the context of my work in secure development (see my paper on agile security requirements): it is easy to see how a successful application creates an attractive target for malicious users. For example, an online casino is likely to attract players that cheat. One nightmare scenario, therefore, is that an attacker is able to consistently beat the bank. If this were to happen, it would clearly be costly. The impact of such a successful attack must, however, be tempered by its likelihood. The cost we should take into consideration is therefore the loss incurred by a successful attack times the probability of its occurrence. This calculation is at the heart of the insurance industry and has kept it very profitable.
The cost of nightmare scenarios should be taken into account when optimizing the business value in an iteration plan.
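To make the arithmetic concrete, here is a minimal sketch for the casino example; both figures are invented for illustration, not data from the workshop:

```python
# Expected cost of a nightmare scenario: the loss incurred by a
# successful attack times the probability of its occurrence,
# insurance-style. Both numbers below are hypothetical.
loss_if_attack_succeeds = 500_000  # assumed loss if the bank is beaten
probability_of_attack = 0.02       # assumed likelihood per release

expected_cost = loss_if_attack_succeeds * probability_of_attack
print(expected_cost)  # 10000.0
```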
If this approach works for security requirements, it is but a small step to apply it to other non-functional requirements. If it works for the wicked, it is likely to work for the incompetent. If it tells us how much effort to expend on fending off Denial of Service attacks, it should be solid enough to decide how much attention performance issues deserve.
In a simulation with four teams, we confronted a planning technique that makes nightmare scenarios explicit with a more traditional approach that makes use of user stories only. Two teams wrote explicit nightmare scenarios and estimated them for cost and effort alongside the user stories. The two other teams took the more traditional approach and produced only user stories. However, they did not try to fudge the non-functional requirements, but rather factored them into existing user stories as acceptance criteria, or wrote new user stories to capture them. In the former case, the cost of things going wrong is replaced by an increased implementation estimate, as additional acceptance criteria must be satisfied. In the latter, customers are asked what value they attach to a non-functional requirement being met. This can work well: a business person should be able to assess what value it brings to be able to serve 100 versus 10 concurrent users.
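As an illustration (my own, not one produced by the teams in the workshop), a performance requirement folded into an existing story as an acceptance criterion might read:

```
Story: As a player, I can place a bet at a blackjack table.
Acceptance criterion: placing a bet completes within 2 seconds
with 100 concurrent players on the site.
```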
The 'everything is a user story' approach definitely has the advantage of simplicity. As one writes nightmare scenarios, it soon becomes apparent that pessimism is prolific and contagious. Nightmare scenarios quickly outnumber the user stories and all looks bleak. The optimization problem becomes intractable, as the set of nightmare scenarios that an iteration plan should take into account depends on the mix of user stories to be implemented. Therefore, for those non-functional requirements amenable to the user story approach, this seems to be the way to go. For the others, ignore them, unless they prove to be particularly costly. In that case, tracking them explicitly is probably wise.
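To see why the explicit approach gets unwieldy, consider a toy version of the planning problem. Everything below (the stories, values, efforts, scenarios, and the value-minus-expected-cost objective) is invented for illustration:

```python
from itertools import combinations

# Toy iteration-planning problem: which nightmare scenarios apply
# depends on which user stories make it into the plan, so the
# expected-cost term changes with every candidate story mix.
stories = {"join_table": 8, "place_bet": 13, "leaderboard": 5}  # story -> value
effort = {"join_table": 3, "place_bet": 5, "leaderboard": 2}    # story -> effort
scenarios = {  # scenario -> (expected cost, stories that trigger it)
    "cheat_the_bank": (10, {"place_bet"}),
    "scrape_rankings": (2, {"leaderboard"}),
}
budget = 8  # effort available in the iteration

def plan_value(plan):
    value = sum(stories[s] for s in plan)
    risk = sum(cost for cost, triggers in scenarios.values() if triggers & plan)
    return value - risk

feasible = (set(plan)
            for n in range(len(stories) + 1)
            for plan in combinations(stories, n)
            if sum(effort[s] for s in plan) <= budget)
best = max(feasible, key=plan_value)
print(best, plan_value(best))  # e.g. {'place_bet', 'join_table'} 11
```

Even at this scale the coupling shows: once expected costs are counted, the modest plan of join_table plus leaderboard (value 13, risk 2) scores exactly as well as the ambitious one (value 21, risk 10), and with realistic numbers of stories and scenarios, searching the subsets by hand is hopeless.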
In the second part of the workshop, there was an open discussion intended to mine the participants' experience.
The point was made that, like functional requirements, non-functional requirements need failing tests. Without failing tests, it is all too easy to get stuck in a mire of '-ilities' that lack precision and cannot be validated. Unfortunately, tests for non-functional requirements are substantially harder to write and perform than functional tests. One of the challenges is running the tests in an environment that sufficiently resembles the target to yield significant results. Scalability, for example, is difficult to test for unless the test environment makes the same provision for load balancing as the production environment. Hence, projects increasingly make use of a staging environment on the production servers.
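For instance, a failing scalability test might look like the sketch below; the URL, user count, and threshold are hypothetical, and the result only means much when the test runs against a staging environment that mirrors production:

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

# Hypothetical non-functional test: 100 concurrent requests must each
# complete within 2 seconds. Written to fail until the requirement is met.
STAGING_URL = "http://staging.example.com/search?q=test"  # hypothetical
CONCURRENT_USERS = 100
MAX_SECONDS = 2.0

def timed_request(_):
    start = time.monotonic()
    urllib.request.urlopen(STAGING_URL, timeout=10).read()
    return time.monotonic() - start

def test_search_under_load():
    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        durations = list(pool.map(timed_request, range(CONCURRENT_USERS)))
    worst = max(durations)
    assert worst <= MAX_SECONDS, f"slowest of {CONCURRENT_USERS} requests took {worst:.2f}s"
```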
Many non-functional requirements are orthogonal to user stories. This is an impediment to planning their implementation as part of the user stories.
Like functional requirements, non-functional requirements deserve to be revisited at each iteration. It may be comforting to think that, if you get it wrong, you get a second crack of the whip. On the other hand, you are never really done, since new non-functional requirements may emerge throughout the duration of the project, and old requirements that were initially deemed of secondary importance may take on an increased significance.
What does it mean when a customer wants the system to be maintainable? Someone put it like this: could you work faster and not charge as much? This is an understandable sentiment which we try to accommodate in the agile community, but it is hardly a testable requirement. Or is it? One suggestion was to measure maintainability by the velocity of the project. In my opinion this is tricky, since the capabilities of the team also influence velocity: as a team gains confidence, velocity goes up; when people take holidays, velocity goes down.
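If you do want to experiment with velocity as a maintainability proxy, one partial correction (my suggestion here, not one raised in the workshop) is to normalize by the person-days actually available, so that holidays at least stop masquerading as decay:

```python
# Invented numbers: raw velocity drops in iteration 2, but only
# because two people were on holiday, not because the code rotted.
iterations = [
    {"points": 24, "person_days": 30},  # full team
    {"points": 16, "person_days": 20},  # two people away
]
for i, it in enumerate(iterations, 1):
    print(f"iteration {i}: {it['points'] / it['person_days']:.2f} points per person-day")
# Both print 0.80, so this particular drop says nothing about maintainability.
```

This still does not separate maintainability from growing confidence or skill, so at best it removes one confounder.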
This was my take on the workshop. If you were there and feel I left something important out, or misrepresented some of the discussion, please leave a comment. In any case, I would like to hear from you if you have any views on how to treat non-functional requirements, particularly in agile projects.
> What does it mean when a customer wants the system to be maintainable?

Tom Gilb has been writing about those issues for a long time - you might find the stuff on his website interesting.

http://www.gilb.com/
When it comes to development cost, I think program managers have to start looking at their product's "throw away" date, i.e. the life expectancy of the product, and then determine how much investment they should make in it.
When you seriously consider obsolescence rates -- hardware, operating environment, "library" code used, and other factors -- the "throw away" date for some products is now measured in months. Products have shorter and shorter life expectancies.
More research needs to be done in this area. Management responsible for purchasing software solutions really needs some guidance here.
Craig Larman's book "Applying UML and Patterns", 3rd edition, gets into this topic in chapter 7, "Other Requirements".
Larman's book is not about applying a specific Agile process, nor is it about the Rational Unified Process (which is often associated with UML). Instead, his goal is to teach OOA/OOD within an iterative process, with UML used for modeling.
His chapter 7 suggests capturing non-functional requirements in a Supplementary Specification, a text document that records these requirements and defines how to measure whether they have been successfully achieved.
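For a flavour of what such an entry might look like (my own hypothetical example, not one taken from the book):

```
Performance: 95% of catalogue searches complete within 2 seconds
at 100 concurrent users. Measured by: load test on staging hardware.

Recoverability: after a server crash, service resumes within 5 minutes
with no committed transactions lost. Measured by: quarterly failover drill.
```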
He's *not* suggesting Big Requirements Up Front: In the Inception phase the Supplementary Specification is a "first approximation" of what is wanted, as well as a place to capture these requirements and ideas. Larman then advocates revisiting and refining these requirements as needed during the elaboration phases of the project.
At the end of the elaboration phases, Larman says it's feasible to have use cases, a Supplementary Specification, and a product/project Vision that reasonably reflect the system to be built; however, this is not something to "freeze" and "sign off on". These may need to be revisited as the system being built takes shape and undergoes tests.
In projects at my company, we tend to run them roughly (but not exactly) as discussed in Larman's book. We definitely think about the non-functional requirements that could have potentially major impacts on our systems. We write our equivalent of Larman's Supplementary Specification, and this helps guide us during construction and testing so that we don't, for example, discover late in the project that we've missed or forgotten a non-functional requirement.
Conversely, thinking about and writing down non-functional requirements keeps us from spending time trying to meet unrealistic goals that neither the project nor our customers need.
> One suggestion was to measure maintainability by the velocity of the project.
In most circumstances, when I'm coaching, I recommend against doing this, though it's often very tempting to the client.
For background, I generally have the group estimate tasks in (arbitrarily valued) points, rather than in time units such as hours or days. I find that this helps avoid the "What? Six people did only eight man-days of work this week?" reaction from managers.
However, whether you use points or hours, there's often a temptation to try to increase the number of these units you achieve weekly. This is not hard to do: the units, as with any currency, can fluctuate in value (and often do during the first few iterations as everyone gets settled in). Unfortunately, that doesn't make them a good measure of productivity: if there's a desire (even an unconscious one) to make that figure go up, it will, but that doesn't mean you're actually getting more work for that expenditure; all it means is that you're paying more.
The danger of this, of course, is that by using an unstable value for these units, you're undermining the prime purpose of the whole exercise: to predict what stories you can get done over a given amount of time. If you lose that ability to plan, you lose one of the big selling points of agile development.