Summary:
Ward Cunningham talks with Bill Venners about the flattening of the cost of change curve, the problem with predicting the future, and the program as clay in the artist's hand.
|
Ward Cunningham talks with Bill Venners about the flattening of the cost of change curve, the problem with predicting the future, and the program as clay in the artist's hand. Read this Artima.com interview with Ward Cunningham: http://www.artima.com/intv/clay.html

What do you think of Ward's comments?
|
> "If we made a change during week one, and it took us two days to understand what was really required, it took two days to make the change. If we made a change during week 21, and it took us two days to understand what was really required, it took us two days to make the change."
This is a disappointingly disingenuous statement. It says that a change that takes two days at the beginning of a project takes the same time as a change that takes two days, late in a project. This is self-evident but is not the problem that the "exponential cost of change" scenario addresses. The problem is the cost of making a given change early in the development cycle, as compared with the cost of making the SAME CHANGE later in the project.
At the beginning of a project, the development team and the code-base are both relatively small. Bugs at this stage are easy to find and easy to fix. Similarly, design changes are easy to implement. The implications for other, as-yet-unwritten parts of the system will be zero because those parts do not yet exist. As they are developed, they will - by definition - work with changes that preceded them. Thus, we might have the two-day fix that was mentioned.
However, later in the project, the number of developers and the code-base will likely both be larger. By this time THE SAME BUG may be very much more difficult to understand, particularly if the problem has been handed off to a developer who was not involved in the original code. Not only that, but the implications for other parts of the system are unpredictable, because those other parts were developed in the absence of the code change and may not be able to work with it without further changes. This is what the well-documented exponential cost of change problem is and, unfortunately, the given answer skirts the issue and does nothing to address it.
Vince.
|
Another aspect of this topic that I would like to see more discussion on:
In my experience, programming in a dynamically-typed language makes it much easier to keep your architecture more flexible and amenable to a change in requirements (which always happens). It seems like statically typed languages like C++ or Java are marble in this analogy, while dynamically typed languages like Ruby, Lisp or Smalltalk are modelling clay. It's much easier to make changes to a sculpture made with modelling clay than it is with marble - often a mistake with marble will require that you throw out the sculpture and start over.
I'm currently converting a project which was originally done in C++ to Ruby. The project should probably have been done in a 'scripting' language initially, since it involves a lot of string processing with regexen and the production of HTML from a template (and it has no requirements for high performance). Since I inherited the project about a month ago, there have been some changes in requirements which will essentially cause us to throw out the C++ code, since it cannot be adapted easily (adapting it would take longer than the Ruby rewrite). Hence the impetus to rewrite in Ruby. Using Ruby we'll be able to make the system much more flexible with much less work (productivity will be greatly improved), since it won't be tied to certain types when changes are made.
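To give a feel for why a 'scripting' language fits this kind of work, here's a rough sketch of the sort of regex-driven string processing involved (the input format and the names are invented for illustration, not taken from the actual project):

    # Pull fields out of a line of text and reshape them - the kind of
    # chore Ruby's built-in regex support makes very short work of.
    log_line = "2004-01-13 13:53:02 WARN  disk usage at 91%"

    if log_line =~ /\A(\d{4}-\d{2}-\d{2}) (\d{2}:\d{2}:\d{2}) (\w+)\s+(.*)\z/
      date, time, level, message = $1, $2, $3, $4
      puts "#{level}: #{message} (#{date} #{time})"
    end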
|
> It's much easier to make changes to a sculpture made with modelling clay than it is with marble - often a mistake with marble will require that you throw out the sculpture and start over.

Whereas we would just flatten the modelling clay? (Good analogies are hard to find.)

> The project should probably have been done in a 'scripting' language initially, since it involves a lot of string processing with regexen and the production of HTML from a template (and it has no requirements for high performance).

Scripting language and/or template language: http://today.java.net/pub/a/today/2003/12/16/velocity.html

(Humbly suggest this has everything to do with choosing the wrong tool for the job, and not that much with change.)
|
Not only can a change made later in a project be the same price as one made early if the product is well-designed, it can be cheaper. As you go on, the product develops a clear shape and it should be obvious how to modify it. If it isn't, either your change is wrong-headed (trying to make a train float or fly) or your original code is obscure and not properly kneaded.
In a way, this is exactly what agile programming is all about. Design a minimalist product first and then add functionality.
|
> In my experience, programming in a dynamically-typed language makes it much easier to keep your architecture more flexible and amenable to a change in requirements
I hear that a lot, but my experience doesn't match.
A statically-typed language will make the implications of your change immediately apparent. When you make the change, code no longer compiles.
In a dynamically-typed language, the implications are still there, but you may only find them by testing.
I've found strong, compile-time typing helps make changes in a robust way, because I can see a large proportion of the effects immediately. When the type checks only occur at runtime, I am less confident about the changes because I cannot easily tell how deeply my change will impact the system as a whole. The tests will show me, but the feedback loop is slower.
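To make that concrete, here is a minimal sketch of the dynamic-typing side of the argument, in Ruby (the class and method names are invented for illustration):

    # Suppose a refactoring renames a method.
    class Account
      def initialize(balance)
        @balance = balance
      end

      # This used to be called 'balance'; it was renamed during the change.
      def current_balance
        @balance
      end
    end

    def print_statement(account)
      # This caller was never updated.  Ruby loads the file without complaint;
      # the NoMethodError only shows up when this line actually executes -
      # typically when a test (or a user) exercises it.
      puts "Balance: #{account.balance}"
    end

    print_statement(Account.new(100))   # => NoMethodError at runtime

A static compiler would have flagged the stale caller as soon as the rename was made; here the mistake waits until something runs the code.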
|
> Not only can a change made later in a project be the same price as one made early if the product is well-designed, it can be cheaper.
While that may be true, Ward didn't provide a very convincing argument for why it is the case.
As Vincent noted, Ward did a particularly bad job of identifying why the cost of change has been observed to increase, and therefore the strategies he gave for overcoming it are not sufficient.
Agile methodologies provide a number of other practices that help keep the cost of change low, but Ward didn't really cover them, and for thoroughness he probably should have.
|
> Scripting language and/or template language: http://today.java.net/pub/a/today/2003/12/16/velocity.html
> (Humbly suggest this has everything to do with choosing the wrong tool for the job, and not that much with change.)

Sure, we're essentially doing the same thing with erb, a Ruby templating package. It's used heavily for the examples in Jack Herrington's Code Generation in Action book. erb and Velocity seem to be targeting the same problem space (a minimal sketch of erb follows at the end of this post).

> Whereas we would just flatten the modelling clay?

Depends on what the problem is. If you just accidentally chipped the nose off of your marble copy of David, you're pretty much done. If you knocked it off of your modelling clay version, you could just stick it back on. ;-) (Well, I suppose you could go looking for a tube of super glue :) And of course, the modelling clay allows for design reuse - you like the arms on your old version of the statue? It's easy to just cut them off and stick them on the new one ;-)

> (Good analogies are hard to find.)

How true. Certainly my analogy isn't perfect. My point has to do with flexibility (agility?). I've done projects with both statically typed and dynamically typed languages, and it just seems to me that I'm having to 'set things in stone' up front to a much greater degree with the statically typed language, while in a dynamically typed language I seem to have a lot more flexibility for a longer portion of the project. I suppose that some would argue that 'setting things in stone' up front is a good engineering practice, but given the nature of software engineering, where requirements are often not 'set in stone' (and it really isn't something the engineers have much control of), it seems to me that flexibility is a plus, since the only certainty is that change will happen. Based on my experience, it seems a lot easier to evolve a program written in a dynamically typed language than it is with a statically typed one.
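For anyone who hasn't looked at erb, a minimal sketch of how it's used (the template and variable names are made up for illustration; ERB itself ships with Ruby's standard library):

    require 'erb'

    # An embedded-Ruby template: plain HTML with <% %> / <%= %> tags.
    template = ERB.new(<<-HTML)
      <ul>
      <% items.each do |item| %>
        <li><%= item %></li>
      <% end %>
      </ul>
    HTML

    items = ["alpha", "beta", "gamma"]

    # The template is rendered against the current binding, so it can see
    # the 'items' local variable defined above.
    puts template.result(binding)

Velocity plays the same role on the Java side; the templates look different but the idea is the same.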
|
> A statically-typed language will make the implications of your change immediately apparent. When you make the change, code no longer compiles.
> In a dynamically-typed language, the implications are still there, but you may only find them by testing.
> I've found strong, compile-time typing helps make changes in a robust way, because I can see a large proportion of the effects immediately.
Oh, I'm not talking about weak typing. For example, in Ruby, if an object doesn't respond to a message that you're sending it, you're going to get an error - we tend to refer to this as 'duck typing' in the Ruby community (if it walks like a duck and quacks like a duck, I don't care what class it is - just so long as the interface of the class conforms to the way I want to use the object). Sure, it will have to be at run-time that you see the error. That's where unit tests come in. I know that in my initial forays into dynamic typing I had the same worries that you're describing, but in practice I find it's not a big issue. The productivity gains to be had with dynamic typing seem, in practice, to be greater than the comforts offered by static, compile-time type checking. (Yes, I know this sounds like heresy in some quarters; you've probably got to try it out to see how it works for you. :-)
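Here's a minimal sketch of the duck-typing idea and the unit test that backs it up, using Ruby's bundled Test::Unit (the class and method names are invented for illustration):

    require 'test/unit'

    # Two unrelated classes that both happen to respond to #quack -
    # there is no shared superclass and no declared interface.
    class Duck
      def quack
        "Quack"
      end
    end

    class Robot
      def quack
        "Beep beep"
      end
    end

    # This code only cares about the interface, not the class.
    def make_noise(quacker)
      quacker.quack
    end

    class DuckTypingTest < Test::Unit::TestCase
      def test_anything_that_quacks_is_accepted
        assert_equal "Quack", make_noise(Duck.new)
        assert_equal "Beep beep", make_noise(Robot.new)
      end

      def test_non_quackers_fail_at_runtime
        # Passing an object with no #quack raises NoMethodError only when the
        # call executes - exactly the error the test suite is there to catch.
        assert_raise(NoMethodError) { make_noise(Object.new) }
      end
    end

The type errors do show up; they just show up when the tests run rather than when the compiler does.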
|
> Not only can a change made later in a project be the same price as one made early if the product is well-designed, it can be cheaper.
I can't see any reason why this should be true. From the day the first requirement is agreed, requirements are added and changed (usually added). The code-base inevitably increases in size and complexity as it includes more functionality. Equally, the team size generally grows as does the need to interface the product with existing hardware and software and the need to keep an increasing number of customers/users abreast of the progress of the project.
You may have the prettiest, most "Agile" code in the world, but it is still increasing in complexity every time new functionality and interdependence are added.
None of the factors listed above are addressed by the "Agility" of the code.
Vince.
|
> I've done projects with both statically typed and dynamically typed languages

Before this degenerates into static type checking versus dynamic type checking, let's note that the notions expressed in the article are independent of programming language.
|
> the notions expressed in the article are independent of programming language.
Perhaps. But practically speaking, the choice of language type (static or dynamic) can have a huge impact on the notion of code flexibility.
|
> Not only can a change made later in a project be the same price as one made early if the product is well-designed, it can be cheaper.
A couple of people have challenged this notion, so let me see if I can do a better job of explaining myself so the theory can either be confirmed or knocked out of consideration.
Building a highrise (to use the original paradigm for software projects) is a serial activity. You dig the basement, add the lobby and all the floors, and put a roof on the top. Deciding later that the basement should have been bigger is the kind of change that generated the original idea that the later in the project the changes are made, the more expensive they are.
Now consider a properly designed project -- where by proper, I simply mean that you know from day one whether you're building a residential or office tower, for example. Construct the edifice with only those items that are expected to be unchangeable (basement, supports, elevator tower). Don't, for example, build internal walls.
Then new tenants can be quickly and easily moved in and out, by adding or removing internal walls and so on according to their specifications. Leaving the internal design to last makes changes cheaper.
The important part, as Ward emphasized, is that you have to be clear early on what is fundamental (i.e. static) and what should be left sketchy or omitted (dynamic). The error BDUF (big design up front) projects used to make was in not differentiating these parts, but instead just creating a big pile of code.
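To put the analogy into code terms, here's a minimal sketch in Ruby (the class and method names are invented for illustration): the 'fundamental' part is a small, stable interface, and the decision most likely to change - how things are actually stored - is kept behind it.

    # The public interface is the basement, supports and elevator tower;
    # the internal representation is the movable walls.
    class TenantDirectory
      def initialize
        @storage = {}          # today: a plain in-memory Hash
      end

      def add(name, floor)
        @storage[name] = floor
      end

      def floor_of(name)
        @storage[name]
      end
    end

    directory = TenantDirectory.new
    directory.add("Acme Corp", 12)
    puts directory.floor_of("Acme Corp")   # => 12

Callers only ever touch add and floor_of, so replacing the Hash with a file or a database later is a local change rather than a project-wide one.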
|
> the notions expressed in the article are independent of programming language. Perhaps.

No 'perhaps' - the article is about process.

> But practically speaking

The topic is practice, not theory - I was speaking about practice.

> the choice of language type (static or dynamic) can have a huge impact on the notion of code flexibility

There are huge differences between programming languages within those 'language types' (statically checked / dynamically checked).
Working with Haskell and ML is so different from working with C++.
Working with Erlang is so different from working with Ruby.
Experience with C++ doesn't generalize to 'statically checked languages'; experience with Ruby doesn't generalize to 'dynamically checked languages'.
|
> Construct the edifice with only those items that are expected to be unchangeable (basement, supports, elevator tower)...

That seems to be a "traditional" approach: "We propose instead that one begins with a list of difficult design decisions or design decisions which are likely to change. Each module is then designed to hide such a decision from the others." Parnas 1972, http://www.acm.org/classics/may96/

> you have to be clear early on what is fundamental (i.e. static) and what should be left sketchy or omitted (dynamic).

Yes, that's the problem! What's the solution? How do you know what will not change?