Summary
In a recent developerWorks article, Andrew Glover suggests you continuously monitor code metrics to help you correct code quality problems that could affect the long-term viability of your architecture. How useful are code metrics in practice?
In the article, In pursuit of code quality: Code quality for software architects, Andrew Glover explains the difference
between afferent (incoming) and efferent (outgoing) coupling. He also describes metrics for abstractness, a measure of the number of abstract versus concrete types in a package, and instability, a measure of how likely a module will change over time. Finally, he defines balance, a measure of how "balanced" the abstractness and instability of your packages are. These concepts were described in Bob Martin's 1994 paper, Object Oriented Design Quality Metrics: An Analysis of Dependencies (PDF).
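(For reference, the paper's definitions are simple: instability I = Ce / (Ca + Ce), where Ce counts efferent couplings and Ca counts afferent couplings, and abstractness A = the number of abstract classes and interfaces in the package divided by the total number of classes in the package.)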
I believe that one hindrance to creating quality software is the difficulty of visualizing what you've built. Most of the time we look at our system through a tiny window (our IDE), peering at one small area of code at a time. Imagine the difficulty of designing a quality chair if you could only look at your creation through a one-inch square cut in a board, held close up to the chair. I believe that having higher-level views of your system, and looking at those views regularly, can help you create better quality software.
The main tool I use in Java development to visualize my design at a higher level than code is JavaDoc. As I better organize and simplify the JavaDoc view of my design, I find I improve my design.
Another technique I think is useful is simply drawing a diagram of the system's architectural layers, which indicates which layers depend on which other layers. The very act of attempting to make such a drawing can help you realize where you should avoid coupling. Once you've decided what your layers are, you can then manage coupling by monitoring and enforcing those layers as you code.
A couple of months ago, I integrated JDepend into our build at Artima, because I wanted a tool that would generate a graphical view of the coupling between our packages. Unfortunately, I was unable to prune unwanted detail out of the resulting graph, so I'm still looking for a better graphical solution. Nevertheless, I was intrigued by all the numbers JDepend generated in its raw output. JDepend reports the very code metrics described in Bob Martin's paper and Andrew Glover's article. Could these numbers be used as another high-level view of the system to help me visualize my design?
The suggested way to visualize all these numbers is by graphing the abstractness and instability of each package, and then looking at the distance from each point to the "main sequence." The main sequence is the line drawn from the two ideal points of (instability 0, abstractness 1) and (instability 1, abstractness 0). The distance between a package's point on this graph and the main sequence measures that package's balance between abstractness and instability. According to Bob Martin's paper, the most desirable points are those close to the main sequence line.
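(In normalized form that distance works out to D = |A + I - 1|: a package sitting on the main sequence has A + I = 1 and a distance of 0, while a package that is completely concrete and completely stable, or completely abstract and completely unstable, has a distance of 1. The com.artima.common example below, with I = 0.22 and A = 0, gives D = |0 + 0.22 - 1| = 0.78.)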
JDepend calculates this distance for you, so you can just scan through the output looking for high distance numbers. When I do so for our code, however, the numbers don't match my intuitive notion of what constitutes good and bad coupling. For example, we have a package of utilities called com.artima.common. My goal is to have the classes in this package depend on nothing else in com.artima.*, but right now it has two minor dependencies on two other com.artima packages. Many classes in other com.artima.* packages call into com.artima.common, so its instability is quite low, 0.22. But because the package contains only concrete classes, its abstractness is 0. As a result, its distance is high, 0.78, which, according to Bob Martin's paper, should be considered a warning sign of bad coupling.
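(As an aside: if you'd rather not eyeball the raw report, JDepend also exposes these numbers through its framework API, so a small scan like the following is possible. This is only a sketch under my assumptions: the 0.7 threshold is arbitrary, and "build/classes" stands in for wherever your compiled classes actually live.)

    import jdepend.framework.JDepend;
    import jdepend.framework.JavaPackage;
    import java.util.Collection;
    import java.util.Iterator;

    public class DistanceScan {
        public static void main(String[] args) throws Exception {
            JDepend jdepend = new JDepend();
            jdepend.addDirectory("build/classes"); // placeholder: location of compiled classes
            Collection packages = jdepend.analyze();
            // Print any package whose distance from the main sequence
            // exceeds an (arbitrary) threshold of 0.7.
            for (Iterator it = packages.iterator(); it.hasNext();) {
                JavaPackage p = (JavaPackage) it.next();
                if (p.distance() > 0.7f) {
                    System.out.println(p.getName() + ": I=" + p.instability()
                        + " A=" + p.abstractness() + " D=" + p.distance());
                }
            }
        }
    }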
I think the main question I have about these metrics concerns the notion that packages whose types don't have many external dependencies should contain more abstract classes than concrete ones. From the paper:
...If a category is to be stable, it should also consist of abstract classes so that it can be extended. Stable categories that are extensible are flexible and do not constrain the design.
Thus, the reasoning is that abstract classes can be extended in other packages, but this is also true of concrete classes. In addition, I think the notion that a stable package should be extensible undervalues the utility of stable classes that can simply be used, not extended.
What I find most useful about the output of JDepend is the explicit lists of dependencies, which help me see couplings that I intuitively don't like. The other numbers are interesting, but I don't currently have much confidence in the distance value as a measure of quality. Have you used such metrics in practice? If so, how do you use the numbers to gauge and improve the quality of your design?
To be honest, I'm currently using JDepend just as a gauge of whether the app has any level of abstraction. If I remember right, interfaces are included in the abstract number as well. The other thing I use it for is identifying cycles. I'm able to explain pretty easily to a developer why avoiding cycles helps ensure a loosely coupled application.
Not sure if it counts as a "metric", but we use JDepend to enforce a "no cyclic package dependency" policy. (We actually halt the build if cycles are detected.) Given the growth of the main project, we find ourselves carving out stable "chunks" into other sub-, sibling, and more generic projects. Ensuring a lack of cyclic dependencies makes this possible.
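A minimal sketch of that kind of gate, using JDepend's containsCycles() in a JUnit-style test (the class-file path is a placeholder for whatever your build produces):

    import jdepend.framework.JDepend;
    import junit.framework.TestCase;

    public class NoCyclesTest extends TestCase {
        public void testNoPackageCycles() throws Exception {
            JDepend jdepend = new JDepend();
            jdepend.addDirectory("build/classes"); // placeholder: your compiled classes
            jdepend.analyze();
            // Fail the build if any package participates in a dependency cycle.
            assertFalse("Cyclic package dependencies detected",
                    jdepend.containsCycles());
        }
    }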
More subjectively, I've also found that an acyclic dependency graph makes unit testing easier, in terms of building up any required objects and mocks. It also seems to permit more use of immutable objects, since cyclic object references (in Java) generally require mutability.
> Not sure if it counts as a "metric", but we use JDepend to enforce a "no cyclic package dependency" policy. (We actually halt the build if cycles are detected.) Given the growth of the main project, we find ourselves carving out stable "chunks" into other sub-, sibling, and more generic projects. Ensuring a lack of cyclic dependencies makes this possible.

Do you have main layers identified for the project? I have been wanting to build an automated check that the code doesn't violate our layering rules. I was planning to look at Macker, but if that is overkill I figure it would be quite easy to write a Ruby script to check it. I wasn't too worried about cyclic package dependencies within those layers, but that sounds like a good idea too. By not allowing cyclic dependencies between any packages, you force some layering, even if ad hoc. In my case, the cycle check would apply primarily within my identified major architectural layers, because the layers themselves would be enforced separately by whatever tool I end up using.
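If JDepend's dependency-constraint feature does what I think it does, I might not even need Macker or a Ruby script. Something roughly like this, with made-up layer package names, could assert the allowed layering directly (my understanding is it passes only when the analyzed dependencies match the declared ones):

    import jdepend.framework.DependencyConstraint;
    import jdepend.framework.JDepend;
    import jdepend.framework.JavaPackage;
    import junit.framework.TestCase;

    public class LayeringTest extends TestCase {
        public void testLayersOnlyDependDownward() throws Exception {
            JDepend jdepend = new JDepend();
            jdepend.addDirectory("build/classes"); // placeholder path
            jdepend.analyze();

            // Hypothetical layers: ui -> service -> persistence.
            DependencyConstraint constraint = new DependencyConstraint();
            JavaPackage ui = constraint.addPackage("com.example.ui");
            JavaPackage service = constraint.addPackage("com.example.service");
            JavaPackage persistence = constraint.addPackage("com.example.persistence");
            ui.dependsUpon(service);
            service.dependsUpon(persistence);

            assertTrue("A package dependency violates the layering rules",
                    jdepend.dependencyMatch(constraint));
        }
    }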
I don't think metrics reduced to plain numbers are that useful. A codebase may have a cyclomatic complexity of 0.001 or an abstractness of 0.9999 and still be brain-dead or full of bugs.
To me, dynamics presented in numerical form are not of much use. If coupling decreased 10% last week, can we process more requests on the front end? Has the DB load decreased? Or consider, say, 0.5 bugs per KLOC. Is that a lot?
It's definitely a good idea to run such tools as part of an integration build (or a daily build if the codebase is large). I've also seen setups where any of these tools returning a non-zero bug count would fail the build, which also makes sense.
I’d heartily second the importance of reasoning about the code-base at levels above (but not excluding) the actual lines of code. I call this aspect of code its “structure” - a continuum from the lines of code to the deployed system of systems, with the dependencies rolling up through each level of composition.
One aspect of structure is largely subjective - as you say, just looking at those views regularly can help you create better quality software. By “quality” you’re not really talking about something that can be measured, but rather about your understanding of how the code should be. The key to this is mostly just seeing accurate architectural views as the code evolves.
The second aspect is the quality of the structure. Bob Martin’s metric relating abstractness and instability is interesting, but it works best at the macro scale (libraries, components, etc.); there’s a lot of structure between that level and the code level, and it all needs to be measured.
Complexity is a quality of structure that is worth controlling – at any level of compositional breakout, the complexity of the sub-components and their inter-component dependencies should be limited.
Martin touches on this with his Acyclic Dependencies Principle (ADP) and does a nice job describing the havoc created by cyclic package-level dependencies.
Cyclic dependencies are one kind of complexity. “Too much stuff” is another. Keep your code-base so that every level of breakout is an acyclic “mind-sized-chunk” and that’s a code-base you can work with.
Please excuse the plug, but a few of us at http://www.headwaysoftware.com build a tool called Structure101 that does this kind of stuff because we think it really is important. We limited the complexity at every level of the structure (naturally ;-) and the code-base is a dream to develop on.
> I believe that one hindrance to creating quality software is the difficulty of visualizing what you've built. Most of the time we look at our system through a tiny window (our IDE), peering at one small area of code at a time. Imagine the difficulty of designing a quality chair if you could only look at your creation through a one-inch square cut in a board, held close up to the chair. I believe that having higher-level views of your system, and looking at those views regularly, can help you create better quality software.
I once created a very simple, very high-level visualization for my VB code:
Each routine was represented by a single-pixel-high stripe, with its length and width drawn out from the center and a background color to indicate its module type. I'm sorry to report I didn't find it very useful, but it was a fun little project. There's obviously a lot more value in more sophisticated metrics, but I think it's important to consider the presentation as well, and very dense displays like this can do a better job of giving you the big picture than can the table-based displays that seem to be the norm for code metric representations.
> I don't think metrics reduced to plain numbers are that useful. A codebase may have a cyclomatic complexity of 0.001 or an abstractness of 0.9999 and still be brain-dead or full of bugs.
This is a good point to remember. The problem with automated metrics is that they appear to provide you with a very easy set of targets to develop your code against. However, these targets are ones that we have made up for ourselves; they do not relate to the purpose of the software being developed. The function of any piece of software is to meet the requirements of the customer who will be using that software, and no customer ever specifies code metrics.
That's not to say that they're not useful, only that they're part of the code optimisation process, not code development.
> The function of any piece of software is to meet the requirements of the customer who will be using that software, and no customer ever specifies code metrics.
Not forgetting that the customer's requirements usually include a timeframe and a budget. And they at least expect that it will work reliably, and that when they want to make changes after initial delivery, those changes can be made in a reasonable time.
The value of a "good" set of structural metrics is not that they guarantee you meet all these requirements, but that, all else being equal, your chances of meeting them are significantly greater with a simple structure than with a tangled mess.
> To me, dynamics presented in numerical form are not of much use. If coupling decreased 10% last week, can we process more requests on the front end? Has the DB load decreased? Or consider, say, 0.5 bugs per KLOC. Is that a lot?

I think the basic numbers of efferent/afferent coupling and abstractness can give some insight into what you've got--a higher-level view of the design that may help you see ways to improve it. Those are just raw numbers. What I find less useful are the calculated instability and balance (distance from the main sequence) numbers, because those are based on some assumptions that I find questionable.
I like the intent, which is to do some static analysis on the code to get actual metrics (real numbers, not just intuitive feelings) and then draw something graphical that can help you visualize what you've got, which in theory could help you see ways to improve the design. But I think we need to picture these numbers differently than plotting "distance from the main sequence."
> What has much higher value are static analysis tools that find real coding bugs that damage product quality. Such tools generally give a list of items that you can act on rather than volumes or percentages. The tools I like in this area are FindBugs http://findbugs.sourceforge.net and PMD http://pmd.sourceforge.net. There is also the commercial JTest: http://www.parasoft.com/jsp/products/home.jsp?product=Jtest

I agree that such tools are more important. I integrated FindBugs, and plan to integrate PMD and others into our build. But I also think having ways to visualize the design at a higher level--like stepping back and looking at the design without all the details--is useful. When you're designing a chair, you can step back and gaze at it whole. You can sit in it and drink a cup of coffee. You can get a feel for it. That isn't so easy with software. I think in the case of Java, JavaDoc really does a great job of this, because that's the view developers who use your APIs should be operating from. But I still feel it would be useful to have other high-level ways to view the design too--views derived from the real code, not just how you think the design should look.
> It's definitely a good idea to run such tools as part of an integration build (or a daily build if the codebase is large). I've also seen setups where any of these tools returning a non-zero bug count would fail the build, which also makes sense.

Yes. I really want to make use of all the static analysis that's available out there. So much is available for free, and then there are a number of commercial tools as well. I'm not sure a non-zero bug count should stop the build, though, because there are usually false reports, or reports you want to ignore. FindBugs, for example, finds a bunch of bugs in the code generated by JavaCC, which is how we implement our code generators. Those bugs really are there, but they aren't mine and they aren't critical to my project. I can turn reporting of those bugs off, i.e., tell FindBugs to ignore them, but then I could do that for anything, so what's the point of stopping the build?
> But I still feel it would be useful to have other high-level ways to view the design too--views derived from the real code, not just how you think the design should look.
Bill, can you say more about what kind of views of the design you would like to see? E.g. something like this: http://headwaysoftware.com/images/snag14.jpg ? Or are you thinking of something more numerical?
> > But I still feel it would be useful to have other high-level ways to view the design too--views derived from the real code, not just how you think the design should look.
>
> Bill, can you say more about what kind of views of the design you would like to see? E.g. something like this: http://headwaysoftware.com/images/snag14.jpg ? Or are you thinking of something more numerical?
Not numerical. Specifically, I am looking for a graphical view of my dependencies, something customizable so that I could, say, ignore certain packages and combine others to simplify the resulting graph. The image you pointed to looks like what I'm looking for. Another example is the kind of view OptimalAdvisor produces.
Basically, just a box for each package or package group I care about, and lines with arrows showing the dependencies. I was able to draw a graph from the output of JDepend, but I wasn't able to customize it.
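One thing I may try next: since JDepend exposes its results programmatically, I could walk the analyzed packages myself and emit Graphviz DOT, filtering as I go. A rough sketch--the ignore list and class directory are placeholders, and I'm assuming getEfferents() returns the packages a given package depends on:

    import jdepend.framework.JDepend;
    import jdepend.framework.JavaPackage;
    import java.util.Arrays;
    import java.util.Collection;
    import java.util.HashSet;
    import java.util.Iterator;
    import java.util.Set;

    public class PackageDotGraph {
        // Placeholder: packages to leave out of the picture entirely.
        private static final Set IGNORE = new HashSet(
                Arrays.asList(new String[] {"java.lang", "java.util"}));

        public static void main(String[] args) throws Exception {
            JDepend jdepend = new JDepend();
            jdepend.addDirectory("build/classes"); // placeholder path
            Collection packages = jdepend.analyze();

            // Emit one DOT edge per package-to-package dependency we care about.
            System.out.println("digraph packages {");
            for (Iterator it = packages.iterator(); it.hasNext();) {
                JavaPackage from = (JavaPackage) it.next();
                if (IGNORE.contains(from.getName())) continue;
                for (Iterator e = from.getEfferents().iterator(); e.hasNext();) {
                    JavaPackage to = (JavaPackage) e.next();
                    if (IGNORE.contains(to.getName())) continue;
                    System.out.println("  \"" + from.getName()
                        + "\" -> \"" + to.getName() + "\";");
                }
            }
            System.out.println("}");
        }
    }

Piping the output through Graphviz's dot would give the boxes-and-arrows picture; combining packages into groups would just be another mapping step before printing the edges.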
> Specifically, I am looking for a graphical view of my dependencies, something customizable so that I could, say, ignore certain packages and combine others to simplify the resulting graph. The image you pointed to looks like what I'm looking for.
It seems common for the physical package structure and what people have in their minds as the logical architecture to be related but different enough that the straight package diagrams don't quite do it.
We are addressing this by allowing the definition of "transformations" on the hierarchy using simple regular expressions. Our next release (next week if I ever stop arsing round with blogs ;-) will include this, plus the ability to make a structural comparison of 2 builds. This means you can see new dependencies (color coded) on your architecture diagram, and decide if each one constitutes a reasonable evolution of your architecture or not. If not, you can refactor to remove the rogue dependency before it becomes too entrenched.
(The timing of this discussion has been great - a good use-case, and the will to live while I do the online tutorials :).
I've bought OptimalAdvisor for stuff I work on. It currently doesn't correctly parse one of my projects (http://sf.net/projects/elevatorsim, see the j2se5conversion branch in cvs), but they have mentioned it should be fixed in the next version. Other than that problem (which is not trivial for me, and I haven't yet found a workaround), I've been extremely happy with it.
There is an Eclipse plug-in in development called Byecycle (http://byecycle.sourceforge.net/) that shows promise. I'm considering adding a levelization option to it (see the Lakos book "Large-Scale C++ Software Design"), but I don't really have time right now. It doesn't have any build features that I'm aware of, which I believe OptimalAdvisor does.
Regarding metrics, I find these visual programs easier to understand and react to quickly. Visually seeing the cycles and looking at the depth tells you what your code looks like very quickly. When working on the elevator simulator on our Saturday sessions (before switching over to 1.5), my partner and I would reload the metrics after every significant change to see (yes, actually SEE!) the structural changes, and decide whether we liked the dependencies, fix any cycles, and break up large packages. That combined with refactoring is a wonderful way of working.
One pattern that has emerged is that removing complexity of one kind sometimes moves it elsewhere. That is where metrics such as CCN come in handy. As a design philosophy, I think the priorities are (roughly):
1) package structure
2) class structure
3) interface design
4) CCN (cyclomatic complexity number)
5) other metrics, such as cohesion
(1=High priority, 5=Low priority)
Dealing with the lower priorities first is like rearranging the deck chairs on the Titanic. (Or like eliminating gas taxes to lower gas prices, but I won't get going on that.) It only really helps if the ship isn't sinking.