This post originated from an RSS feed registered with Agile Buzz
by Laurent Bossavit.
Original Post: Modeling design risk
Feed Title: Incipient(thoughts)
Feed URL: http://bossavit.com/thoughts/index.rdf
Feed Description: You're in a maze of twisty little decisions, all alike. You're in a maze of twisty little decisions, all different.
In discussions of Extreme Programming, the YAGNI principle often turns out to be the subject of heated controversy. That is the principle which says we should code an application one requirement at a time, in sequential order, and that design elements should be introduced "just in time" - just when we are about to implement the requirement which calls for that design element. We should not introduce design elements ahead of time on the grounds that they support a requirement that we expect to be implementing later on.
This is a good test case for the idea that methodologies are risk models. We should not expect to find that either advocates or critics of Extreme Programming are "right"; rather, we will find that in both cases the person arguing for "designing for the future" or "delaying all design decisions" has in mind a model of the important risks involved in creating software, and a strategy which addresses these risks.
The people advancing either idea are not genuinely in conflict. Rather, the disconnect reflects either different experiences with software projects - which can be resolved by looking at which past projects the one at hand most resembles - or the unthinking use of a model that is not appropriate to the project at hand, which could be resolved by testing (perhaps through some manner of measurement) which model best describes that project.
In general two major risks that people are concerned with on software projects are requirements risk (i.e. volatility of the requirements, so that by the end of the project something different is needed from what was stated at the outset), and design risk (the internal structure of the software does not support the evolutions that are necessary during creation of the software, or in maintenance once it has been deployed).
Typically, even critics of XP recognize that in projects where the requirements are highly volatile, the YAGNI principle makes sense; it allows the software to remain flexible and respond to unforeseen changes throughout development. But critics argue that requirements are often less volatile than is claimed. If we assume that the requirements are completely stable, they say, YAGNI no longer makes sense, so we should "temper" our use of YAGNI depending on how stable the requirements are. The conclusion is often in favor of "light" (or "right-sized") up-front design.
Thus the implication of arguing for "up front" design, i.e. in advance of implementation, is that doing so mitigates design risk. Let's leave requirements risk aside, then, and focus on design risk. I will try to identify what model is held by people who argue for "right-sized" up-front design.
The way up-front design mitigates risk, in the model, is that we can (somehow) check for conceptual consistency among the various design elements. Design risk is mitigated by the detection (and resolution) of inconsistencies. (I'll take "missing design elements" to be a special case of inconsistent design elements.)
For instance, we decided to put the data in an RDBMS to address the "searches are fast" requirement, and we also decided to put the data in XML to address the "faithfully represent our semi-regular business data" requirement - two decisions which may well turn out to be inconsistent with each other. (Let's not make too much of the example - I'm just giving an example to make concrete the meaning of "design".)
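To make "detecting inconsistencies among design elements" slightly more concrete, here is a minimal sketch - entirely my own construction, not anything XP or the up-front camp prescribes. Design decisions are recorded along with the properties they commit to (the decision names and properties are made up for illustration), and a check flags pairs of decisions that commit the same data to contradictory properties:

```python
# Hypothetical register of design decisions and the properties they commit to.
decisions = {
    "store data in RDBMS": {"data": "orders", "shape": "tabular"},
    "store data in XML":   {"data": "orders", "shape": "semi-structured"},
}

def find_inconsistencies(decisions):
    """Flag pairs of decisions that commit the same data to different shapes."""
    items = list(decisions.items())
    clashes = []
    for i, (name_a, props_a) in enumerate(items):
        for name_b, props_b in items[i + 1:]:
            if (props_a["data"] == props_b["data"]
                    and props_a["shape"] != props_b["shape"]):
                clashes.append((name_a, name_b))
    return clashes

print(find_inconsistencies(decisions))
# [('store data in RDBMS', 'store data in XML')]
```

Of course, real design reviews do this informally, in people's heads and on whiteboards; the sketch only illustrates what "detection of inconsistencies" means in the model.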
The problem with this risk model for design risk is that it is incomplete - it ignores important considerations and simplifies too much. In particular, a design decision can turn out to be inconsistent, not just with another design decision, but also (and perhaps predominantly) with what would be called an implementation decision.
I need to state this a little more accurately. The design decisions that an early high-level review of the requirements will get you are likely to be homogeneous in level of abstraction. And I prefer to call "implementation" decisions low-level design decisions - it's design all the way down to me. So the equivalent statement becomes: the classic model of design risk includes inconsistencies between design elements at the same level of abstraction, but (implicitly) assumes that high-level design elements will not be called into question by unforeseen lower-level design elements.
The opposite turns out to be true (or at any rate XP assumes it to be true). The effect is close to what Joel Spolsky calls "The Law of Leaky Abstractions". (Joel's name for it is somewhat unfortunate and his model has weaknesses, but the meme has gained some traction in the blog-reading public; I mean to use it only for illustration.)
At the same time, the early high-level design decisions may well turn out to exert a more constraining influence on later design decisions, i.e. lower-level ones, leading to inconsistencies we cannot back out of.
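For illustration only, here is a deliberately tiny, hypothetical case of a low-level decision contradicting a high-level one. Suppose the high-level design committed to "account totals are exact", while the low-level decision - made much later, by someone else - was to store amounts as binary floats. Neither decision looks wrong until they meet:

```python
def add_charges_float(charges):
    # Low-level decision: plain binary floats -- fast, built in.
    total = 0.0
    for c in charges:
        total += c
    return total

# The high-level promise ("totals are exact") quietly breaks:
print(add_charges_float([0.10] * 3) == 0.30)   # False: 0.30000000000000004

from decimal import Decimal

def add_charges_decimal(charges):
    # Revisiting the low-level decision restores the high-level guarantee.
    return sum(Decimal(c) for c in charges)

print(add_charges_decimal(["0.10"] * 3) == Decimal("0.30"))  # True
```

The point is not about floats as such: it's that the inconsistency lives between levels of abstraction, exactly where the classic model assumes none will arise.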
If we add this into the above model of design risk, then we see that design risk grows with the lag between a design decision and the corresponding "implementation" (lower-level design) decisions that can contradict it; thus, designing early increases design risk.
In the presence of stable requirements, XP would still address design risk by insisting that design elements be introduced one requirement at a time, in decreasing order of business value. (We may still make an effort to detect inconsistencies ahead of time, but in XP this is not a systematic effort; it's expected to happen "in passing", as it were.) Design elements are thus introduced in cyclic order of levels: high, medium, low, high, medium, low. "Upward" conflicts are detected and resolved sooner; moreover, this also tends to drive the design toward a more modular region of the design space, where conflicts are generally easier to resolve.
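The claimed effect of that cyclic ordering - "upward" conflicts surfacing sooner - can be shown with a toy model. Everything here is my own construction: three made-up requirements, three levels, and a single hypothetical conflict between one requirement's high-level and low-level decisions. A conflict becomes detectable only once both of its decisions have actually been made:

```python
REQS = ["R1", "R2", "R3"]
LEVELS = ["high", "medium", "low"]
# Hypothetical conflict: R1's high-level decision is contradicted by
# R1's low-level decision (discovered only at implementation time).
CONFLICT = {("R1", "high"), ("R1", "low")}

def detection_step(order):
    """Return the 1-based step at which the conflict becomes detectable,
    i.e. the step at which both conflicting decisions have been made."""
    made = set()
    for step, decision in enumerate(order, start=1):
        made.add(decision)
        if CONFLICT <= made:
            return step
    return None

# "Up-front" ordering: all high-level decisions first, then medium, then low.
up_front = [(r, lvl) for lvl in LEVELS for r in REQS]
# XP-style ordering: one requirement at a time, high -> medium -> low each.
per_requirement = [(r, lvl) for r in REQS for lvl in LEVELS]

print(detection_step(up_front))         # 7
print(detection_step(per_requirement))  # 3
```

In the up-front ordering, six further decisions are layered on top of the flawed high-level one before the clash surfaces; in the per-requirement ordering, it surfaces before any other requirement is even touched. A toy, to be sure, but it captures the shape of the argument.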
The net effect of this more complete risk model (or so XP argues) is that we should prefer delaying design decisions, even in the presence of stable requirements.