Like many others here, the metaphor of software development shop as
factory has always raised my hackles. There are a lot of reasons it's
misleading, but I finally put my finger on what I think is the biggest one.
Imagine a traditional factory with your standard assembly line. Let's
just think of a simple one: you've got your receiving department on one
end, five stations on the line, and some final quality control just
before the shipping department. Let's further assume that your people
all get things right 99 times out of 100. Sounds pretty good, right? Of
course, when you multiply it out, that means either quality control or
your customers will toss out circa 5% of your parts, but as long as you
account for that in your pricing, you're good to go.
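The "circa 5%" follows from compounding the per-station error rate; a quick sketch (the 99% figure and five stations are from the example above):

```python
# Yield of a linear assembly line: each of 5 stations gets it
# right 99 times out of 100, and errors compound multiplicatively.
per_station_success = 0.99
stations = 5

yield_rate = per_station_success ** stations  # fraction with no defects
defect_rate = 1 - yield_rate                  # fraction tossed by QC or customers

print(f"yield:  {yield_rate:.3f}")   # ~0.951
print(f"tossed: {defect_rate:.3f}")  # ~0.049, i.e. circa 5%
```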
Now let's imagine that's software they're making. If the typical bug
database is any guide, software quality control is not as effective as
factory quality control. Let's suppose that half the bugs make it
through and onto the truck. Then the truck pulls away and pulls right
back up at your receiving department. In other words, the main input to
most software development is what the team produced the day before.
This difference entirely breaks the factory model. If you repeat the
same quality-reducing operations over and over on the same material,
your quality inevitably declines and your costs for producing something
worthwhile will increase. In the end, you have to throw out everything
as unsalvageable. Which sounds exactly like a lot of software projects
I've seen.
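The decline is easy to see in a toy model. Assume, per the numbers above, that each pass through the line introduces defects at the same ~5% rate and that QC catches only half of them before the truck loops back around (the ten-pass horizon is my own illustrative choice):

```python
# Toy model of the circular assembly line: each day's output is the
# next day's input, so escaped defects accumulate pass over pass.
per_pass_defect = 1 - 0.99 ** 5   # ~4.9% new defects per pass
qc_catch_rate = 0.5               # half the bugs make it onto the truck

clean = 1.0  # fraction of the product still defect-free
for day in range(1, 11):
    clean *= 1 - per_pass_defect * (1 - qc_catch_rate)
    print(f"day {day:2d}: defect-free fraction {clean:.3f}")
```

After ten passes, only about 78% of the product is still defect-free, and the curve never stops falling: every pass multiplies quality down, which is the compounding the factory metaphor hides.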
To survive with a circular assembly line, aiming for perfection isn't
good enough. Instead, every time we touch the code, we need to leave it
better than we found it.
Quoted with permission.