Summary
In a recent developerWorks article, Andrew Glover suggests you continuously monitor code metrics to help you correct code quality problems that could affect the long-term viability of your architecture. How useful are code metrics in practice?
In the article, In pursuit of code quality: Code quality for software architects, Andrew Glover explains the difference between afferent (incoming) and efferent (outgoing) coupling. He also describes metrics for abstractness, a measure of the proportion of abstract to concrete types in a package, and instability, a measure of how likely a module is to change over time. Finally, he defines balance, a measure of how "balanced" the abstractness and instability of your packages are. These concepts were described in Bob Martin's 1994 paper, Object Oriented Design Quality Metrics: An Analysis of Dependencies (PDF).
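To make those definitions concrete, here's the arithmetic behind the first two metrics in a small sketch. The class and method names are mine, invented for illustration; the formulas themselves come from the paper:

    // Illustrative sketch of the coupling metrics described in Bob Martin's paper.
    // Ca (afferent coupling): the number of packages that depend on this package.
    // Ce (efferent coupling): the number of packages this package depends on.
    public class PackageMetrics {

        // Instability I = Ce / (Ca + Ce), from 0 (maximally stable) to 1 (maximally unstable).
        public static double instability(int afferent, int efferent) {
            int total = afferent + efferent;
            return total == 0 ? 0.0 : (double) efferent / total;
        }

        // Abstractness A = abstract types / total types, from 0 (all concrete) to 1 (all abstract).
        public static double abstractness(int abstractTypes, int concreteTypes) {
            int total = abstractTypes + concreteTypes;
            return total == 0 ? 0.0 : (double) abstractTypes / total;
        }
    }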
I believe that one hindrance to creating quality software is the difficulty of visualizing what you've built. Most of the time we look at our system through a tiny window (our IDE), peering at one small area of code at a time. Imagine the difficulty of designing a quality chair if you could only look at your creation through a one-inch square cut in a board, held close up to the chair. I believe that having higher-level views of your system, and looking at those views regularly, can help you create better quality software.
The main tool I use in Java development to visualize my design at a higher level than code is JavaDoc. As I better organize and simplify the JavaDoc view of my design, I find I improve my design.
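For example, generating the docs for just your own packages gives you a browsable, package-level map of the system. The paths below are placeholders for wherever your sources actually live:

    javadoc -d build/apidocs -sourcepath src -subpackages com.artima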
Another technique I find useful is simply drawing a diagram of the system's architectural layers, showing which layers are allowed to depend on which others. The very act of attempting to make such a drawing can help you realize where you should avoid coupling. Once you've decided what your layers are, you can then manage coupling by monitoring and enforcing those layers as you code.
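One way to enforce those layers mechanically is with a dependency checker such as JDepend, which I say more about below. Here's a rough sketch of what a layer-enforcing check might look like; the package names are made up, and the calls reflect my reading of JDepend's framework API, so treat it as a sketch rather than a recipe:

    import jdepend.framework.DependencyConstraint;
    import jdepend.framework.JDepend;
    import jdepend.framework.JavaPackage;

    public class LayerCheck {
        public static void main(String[] args) throws java.io.IOException {
            JDepend jdepend = new JDepend();
            jdepend.addDirectory("build/classes"); // placeholder: your compiled classes

            // Declare the allowed layering: web may depend on service,
            // service may depend on common, and common depends on neither.
            DependencyConstraint constraint = new DependencyConstraint();
            JavaPackage common  = constraint.addPackage("com.example.common");
            JavaPackage service = constraint.addPackage("com.example.service");
            JavaPackage web     = constraint.addPackage("com.example.web");
            web.dependsUpon(service);
            service.dependsUpon(common);

            jdepend.analyze();
            if (!jdepend.dependencyMatch(constraint)) {
                throw new AssertionError("Package dependencies violate the declared layers");
            }
        }
    }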
A couple of months ago, I integrated JDepend into our build at Artima, because I wanted a tool that would generate a graphical view of the coupling between our packages. Unfortunately, I was unable to prune unwanted detail out of the resulting graph, so I'm still looking for a better graphical solution. Nevertheless, I was intrigued by all the numbers JDepend generated in its raw output. JDepend reports the very code metrics described in Bob Martin's paper and Andrew Glover's article. Could these numbers be used as another high-level view of the system to help me visualize my design?
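If you want to get at those raw numbers yourself rather than scan the report, something like the following will print them per package. The directory is a placeholder, and again the method names reflect my reading of JDepend's framework API:

    import java.util.Collection;
    import jdepend.framework.JDepend;
    import jdepend.framework.JavaPackage;

    public class MetricsDump {
        public static void main(String[] args) throws java.io.IOException {
            JDepend jdepend = new JDepend();
            jdepend.addDirectory("build/classes"); // placeholder: your compiled classes

            // Print instability, abstractness, and distance for every analyzed package.
            Collection packages = jdepend.analyze();
            for (Object o : packages) {
                JavaPackage p = (JavaPackage) o;
                System.out.printf("%-40s I=%.2f A=%.2f D=%.2f%n",
                    p.getName(), p.instability(), p.abstractness(), p.distance());
            }
        }
    }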
The suggested way to visualize all these numbers is by graphing the abstractness and instability of each package, and then looking at the distance from each point to the "main sequence." The main sequence is the line drawn from the two ideal points of (instability 0, abstractness 1) and (instability 1, abstractness 0). The distance between a package's point on this graph and the main sequence measures that package's balance between abstractness and instability. According to Bob Martin's paper, the most desirable points are those close to the main sequence line.
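The distance number JDepend reports appears to be the normalized form from the paper, D = |A + I - 1|: a package sitting on the main sequence scores 0, and a package at either extreme corner, (instability 0, abstractness 0) or (instability 1, abstractness 1), scores 1. In code it's just:

    // Normalized distance from the main sequence, D = |A + I - 1|.
    // A package on the line (A + I == 1) scores 0; the corners score 1.
    static double distance(double abstractness, double instability) {
        return Math.abs(abstractness + instability - 1.0);
    }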
JDepend calculates this distance for you, so you can just scan through the output looking for high distance numbers. When I do so for our code, however, the numbers don't match my intuitive notion of what constitutes good and bad coupling. For example, we have a package of utilities called com.artima.common. My goal is to have the classes in this package depend on nothing else in com.artima.*, but right now it has two minor dependencies on two other com.artima packages. Many classes in other com.artima.* packages call into com.artima.common, so its instability is quite low, 0.22. But because the package contains only concrete classes, its abstractness is 0. As a result, its distance is high, 0.78, which according to Bob Martin's paper should be considered a warning sign of bad coupling.
I think the main question I have about these metrics is whether packages whose types have few outgoing dependencies really should contain more abstract than concrete classes. From the paper:
...If a category is to be stable, it should also consist of abstract classes so that it can be extended. Stable categories that are extensible are flexible and do not constrain the design.
Thus, the reasoning is that abstract classes can be extended in other packages, but that is also true of concrete classes. In addition, I think the notion that a stable package should be extensible undervalues the utility of stable classes that can simply be used, not extended.
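As a contrived illustration, a package containing nothing but classes like the following would score an abstractness of 0, and if it were widely used its distance would be high, yet it seems to me a perfectly healthy thing to depend on:

    // A concrete, stable utility class: many packages may call it, it depends on
    // nothing beyond the JDK, and nothing needs to extend it. Its package scores
    // abstractness 0 and, if widely depended upon, a high distance from the main sequence.
    public final class StringUtil {

        private StringUtil() {} // no instances; just static utility methods

        // Returns true if the string is null, empty, or only whitespace.
        public static boolean isBlank(String s) {
            return s == null || s.trim().isEmpty();
        }
    }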
What I find most useful about the output of JDepend is the explicit lists of dependencies, which help me see couplings that I intuitively don't like. The other numbers are interesting, but I don't currently have much confidence in the distance value as a measure of quality. Have you used such metrics in practice? If so, how do you use the numbers to gauge and improve the quality of your design?
Bill Venners is president of Artima, Inc., publisher of Artima Developer (www.artima.com). He is author of the book, Inside the Java Virtual Machine, a programmer-oriented survey of the Java platform's architecture and internals. His popular columns in JavaWorld magazine covered Java internals, object-oriented design, and Jini. Active in the Jini Community since its inception, Bill led the Jini Community's ServiceUI project, whose ServiceUI API became the de facto standard way to associate user interfaces to Jini services. Bill is also the lead developer and designer of ScalaTest, an open source testing tool for Scala and Java developers, and coauthor with Martin Odersky and Lex Spoon of the book, Programming in Scala.