This post originated from an RSS feed registered with Agile Buzz
by Laurent Bossavit.
Original Post: It's the reviewing, not the reviews
Feed Title: Incipient(thoughts)
Feed URL: http://bossavit.com/thoughts/index.rdf
Feed Description: You're in a maze of twisty little decisions, all alike. You're in a maze of twisty little decisions, all different.
I've been reading a thread on a Java forum about a hapless engineer who had been "assigned code to review it for performance issues". The story was fascinating - wrong in so many different ways. To name but the top two: making "review" an individual assignment, and skimping on the money to buy the natural tool for finding performance issues - a profiler.
This hints at a blind belief in the effectiveness of "code reviews". I'm reminded of Eisenhower's quip, "Plans are useless, planning is indispensable". Reviews are useless, but reviewing is indispensable. That is, what makes reviews work is not the form by itself - the fact of having a meeting (or asking one person) to examine code, designs, or other artefacts. The benefits of reviews come from understanding the results that are expected, and from matching the form to those results.
In the thread, the engineer (I'll call him Seth - not his real name) was explaining that the system had just gone live and the users were complaining about performance. Just like testing, reviews coming this late in a development effort are useless. Reviews can't "add performance" or "add quality" to code that didn't have them to start with. What they can do is provide an early warning system for quality or performance issues.
Seth started out with a checklist provided by his management - look for uses of String rather than StringBuffer, unnecessary logging, etc. That didn't help him much. Then he spent some time with his developer colleagues, asking each of them to review one method with him. He came out of that with a much larger palette of "efficiency tricks". This is a key reason why reviews are typically conducted as meetings, or at least by pairs of programmers as in XP. The idea is that many brains are better than one.
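To make the checklist item concrete, here is a minimal sketch of the String-versus-StringBuffer finding; the class and method names are invented for the example, not taken from Seth's system:

```java
import java.util.Arrays;
import java.util.List;

class ReportBuilder {
    // Checklist-style finding: concatenating Strings in a loop creates a
    // new String object (and copies the old contents) on every pass.
    static String buildSlow(List<String> lines) {
        String report = "";
        for (String line : lines) {
            report += line + "\n";
        }
        return report;
    }

    // The checklist's suggested fix: accumulate in a StringBuffer and
    // convert to a String once at the end.
    static String buildFast(List<String> lines) {
        StringBuffer report = new StringBuffer();
        for (String line : lines) {
            report.append(line).append("\n");
        }
        return report.toString();
    }

    public static void main(String[] args) {
        List<String> lines = Arrays.asList("alpha", "beta", "gamma");
        // Both produce the same output; only the allocation behaviour differs.
        System.out.println(buildSlow(lines).equals(buildFast(lines)));
    }
}
```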
In the end, Seth realized he would probably get nowhere. One reason was that he knew he wasn't addressing the real problems, only getting nickel-and-dime efficiency savings. It really takes a profiler to find the kind of potential optimizations that will make a difference, the 10% of the functionality where 90% of the time is wasted. Worse, it looked as if the problem really was with an underpowered infrastructure. Reviews should have realistic and achievable objectives, other than to salve guilty consciences.
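Without a profiler, about the best Seth could have done is crude timing instrumentation. The sketch below (the workload and method name are made up for illustration) shows the idea; a real profiler does this across the whole call tree, which is what makes it possible to find that 10% of the code:

```java
class CrudeTiming {
    // Poor man's profiling: bracket a suspect operation with timestamps.
    public static void main(String[] args) {
        long start = System.currentTimeMillis();
        doSuspectWork();   // hypothetical stand-in for the slow operation
        long elapsed = System.currentTimeMillis() - start;
        System.out.println("doSuspectWork took " + elapsed + " ms");
    }

    static void doSuspectWork() {
        // stand-in workload so the example runs on its own
        StringBuffer sb = new StringBuffer();
        for (int i = 0; i < 100000; i++) {
            sb.append(i);
        }
    }
}
```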
There was a more subtle reason Seth decided he was getting nowhere. After a while the users' complaints started to focus on bugs, and all of a sudden performance wasn't an issue anymore; Seth's recommendations were filed, unread, in his manager's inbox. As I said, reviews are beneficial only if you understand what results to expect from them, and chief among these results is establishing an atmosphere of accountability, visibility and "reviewability" of a knowledge team's work products. If you go looking for problems, you must be prepared to actually do something about them.
In addition, "reviewability" should also be a feature of the project's process... and of its management.