Summary
Few things bug me more than code that's not obvious. And what it takes to make things obvious seems so, well, obvious.
I was subclassing something the other day, and while looking at the superclass to figure out how to do that, I ran into a chunk of code that looked like this:
synchronized(aList){
    object = aList.get(0);
}
doSomeStuff(object);
synchronized(aList){
    aList.removeFirst();
}
I'm looking at this thing and asking myself: why two synch blocks? It's so obviously there, it has to have a reason. I didn't see it, though. I read the Javadocs for List. Maybe I didn't know something fundamental about how those worked. Nope. I looked at the users of the class. No, they didn't need the behavior...
At this point I'm obsessed. This double synch thing is offending me beyond all reason and it better justify its existence damn quick 'cause I'm gonna commence to chopping it out. Time to break out the big guns and go ask someone else for an opinion. It was lunchtime, but luck was with me (or against that little code snip) and someone worth asking was around. I point. He nods. He thinks. He looks up. He looks down. Nope, he can't see any justification for this little guy's existence either. Excellent.
There's no doubt this annoying snippet is trembling now, I thought. Time to get de-rezzed, you itchy, what-the-hell-are-you-doing-in-the-middle-of-my-day, useless chunk of code. Mark, cut, compile, subclass, test, check in. Woohoo! Getting rid of useless code is one of my favorite things to do. The rest of the day is just that much more satisfying.
The next day this mail pops into my box. Uh-oh. It's a forward of my checkin mail (our source control system mails on each checkin). This is almost never good news. Sure enough, the double synch thing had a purpose, and removing it broke something. Turns out there was a method, maybe 40 lines above the double synch, that relied on the object being in the list until after the doSomeStuff() call.
So I back out my change (Tenacious snippet. I heard it laughing as it slimed its way back out of the bit bucket.) and add a comment to the code. The comment was this:
// This double synch thing is used by the method XYZ() above
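To make the dependency concrete, here's a purely hypothetical sketch of the kind of method that could have been sitting 40 lines up (the class and method names are mine, not the real code): a caller that checks the list to decide whether an item is still being worked on, which only works if the item stays in aList until doSomeStuff() has finished.

import java.util.LinkedList;

// Hypothetical reconstruction only -- not the actual code from that superclass.
class WorkQueue {
    private final LinkedList<Object> aList = new LinkedList<Object>();

    // The sort of method that breaks if the item is removed atomically with
    // the get(), before doSomeStuff() has had a chance to run.
    boolean isStillPending(Object item) {
        synchronized (aList) {
            return aList.contains(item);
        }
    }
}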
This reminds me of door handles that look like they should be pulled, and you do, and it's only then that you discover that you have to push. Look, if your design aesthetic is such that your creation could be non-obvious to others, I'm OK with that. But could you put a Push sign on the door? How hard is that?
One big problem is that the concept of "obvious" is not the same for the creator and the consumer. It's difficult to know what won't be obvious to other people. I'm not sure what the solution to that problem is.
Apparently the test suite is incomplete, if it was possible to go through this routine and yet get an email the next day indicating something was broken. Or perhaps, if there had been some communication between the developer who wrote it and the developer who changed it, or even pair programming, the code in question would have been understood, replaced with something less fragile, or at least properly covered by the unit tests.
> > Mark, cut, compile, subclass, test, check in.
>
> Apparently the test suite is incomplete, if it was possible to go through this routine and yet get an email the next day indicating something was broken. Or perhaps, if there had been some communication between the developer who wrote it and the developer who changed it, or even pair programming, the code in question would have been understood, replaced with something less fragile, or at least properly covered by the unit tests.
I was guessing, from the way the blog was written, that tests were run automatically on checkin. If everything had passed, Rick wouldn't have gotten an e-mail. The blog isn't clear on who sent the mail and why.
And while it may be true that the code more than likely wasn't pair programmed, I think it is a naive assumption that pair programming would have resulted in different code. Two people are just as capable of writing poorly encapsulated, poorly documented, obscure code as one. It's not as likely, but it is entirely possible. I've found that in most cases code reviews do a better job of eliminating this kind of issue than pair programming, because more eyes are involved and you have a wider diversity of opinion. In pairs it is too easy for a dominant partner to emerge. I say this because I've done it both ways, and in general I've been better served by code reviews than by pair programming.
If the method that relied on the double-synch was 40 lines above, why would that case get hit in an ordinary unit test? So now that special case can be added to the test suite and it is now 'complete' until the next esoteric case is found.
Actually, no test suite is ever complete. It can't be unless you can possibly think of every permutation every user in the world might think of for your application. If any test suite was truly complete, all code would be bug free. It just isn't possible with a complex application. You can cover the common cases and the esoteric ones you know about. That's it. You can't test what you don't know.
And unit tests aren't ever a substitute for thorough product testing. I've been involved in many cases where everything passed its unit/regression tests and then things started falling apart upon testing the complete product suite. You can't mimic complex application interactions with stubs. You don't know if it all works until you really test everything together.
You're also assuming the developer that wrote the code and the developer that changed the code were available to each other. My life would be a lot easier if that was the case more than about 20% of the time. Alas, that's not usually the case :-(
In general your statements are true, but remember that the fact that this code uses synchronization implies the system is multi-threaded. Creating unit tests that catch problems when multiple threads are accessing an object at the same time is extremely difficult.
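As a sketch of why (not anyone's actual test; the worker below just mirrors the get/doSomeStuff/removeFirst pattern from the blog), even a brute-force test that hammers the pattern from two threads will usually finish without an exception, because the damage takes the form of items handed to both workers or silently dropped, and nothing here checks for that:

import java.util.LinkedList;

// Sketch of a naive multi-threaded "test" for the pattern in the blog.
// A green run proves very little: the race window between the two
// synchronized blocks is only a few instructions wide.
public class NaiveConcurrencyTest {
    public static void main(String[] args) throws InterruptedException {
        final LinkedList<Integer> aList = new LinkedList<Integer>();
        for (int i = 0; i < 10000; i++) {
            aList.add(i);
        }

        Runnable worker = new Runnable() {
            public void run() {
                while (true) {
                    Integer head;
                    synchronized (aList) {
                        if (aList.isEmpty()) return;
                        head = aList.get(0);
                    }
                    // doSomeStuff(head) would run here, outside the lock --
                    // exactly where another thread can sneak in.
                    synchronized (aList) {
                        if (!aList.isEmpty()) aList.removeFirst();
                    }
                }
            }
        };

        Thread t1 = new Thread(worker);
        Thread t2 = new Thread(worker);
        t1.start();
        t2.start();
        t1.join();
        t2.join();

        System.out.println("Test 'passed'; items left: " + aList.size());
    }
}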
Sorry, the code is obvious, but it is also wrong in two ways.
First, there's no guarantee at all that the object will be in the list for doSomeStuff(), because only the get is protected: a delete from a higher-priority thread could come in after the get and before the doSomeStuff.
Second, by the time removeFirst() is called the entry may no longer be first, so you could be removing the wrong entry. For the code to work, the lock would need to be held over doSomeStuff().
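Here's a small illustration of that second failure (names invented; only the get/remove pattern comes from the blog). A helper thread is deliberately run to completion between the two synchronized blocks, so the wrong removal happens every time instead of depending on scheduling luck:

import java.util.LinkedList;

// Illustration only: the "intruder" thread is forced to run between the
// two synchronized blocks to reproduce the wrong removal deterministically.
public class WrongRemovalDemo {
    public static void main(String[] args) throws InterruptedException {
        final LinkedList<String> aList = new LinkedList<String>();
        aList.add("X");
        aList.add("Y");

        String object;
        synchronized (aList) {
            object = aList.get(0);           // we intend to process and remove "X"
        }

        // The lock is released; another thread is free to touch the list.
        Thread intruder = new Thread(new Runnable() {
            public void run() {
                synchronized (aList) {
                    aList.removeFirst();     // removes "X" out from under us
                }
            }
        });
        intruder.start();
        intruder.join();

        // doSomeStuff(object) would run about here, on an object that is
        // no longer in the list at all.
        synchronized (aList) {
            aList.removeFirst();             // removes "Y" -- the wrong entry
        }

        System.out.println("Processed " + object + ", list is now " + aList);
        // Prints: Processed X, list is now []  ("Y" was thrown away unprocessed)
    }
}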
Just removing locking code without understanding the run time behaviour and without running the unit tests seems to be bad practice.
> Two people are just as capable of writing poorly encapsulated, poorly documented, obscure code as one.
With pair programming, at least you'd have more than one person who understands the obscurity. That understanding would spread as the pairs shift.
> So now that special case can be added to the test suite and it is now 'complete' until the next esoteric case is found.
Yes, that's exactly what should be done. Very insightful of you to note that.
> Actually, no test suite is ever complete. It can't be ... If any test suite was truly complete, all code would be bug free.
Yes, that's often said, in shorter form, as "everyone knows that testing can't prove that a program works". Which is true, but misses the point. What's the alternative? Not testing?
> Yes, that's often said, in shorter form, as "everyone knows that testing can't prove that a program works". Which is true, but misses the point. What's the alternative? Not testing?
The alternative is not to claim that the bug in question should have been caught by regression tests (or unit tests). Sure, ideally every bug would be caught by testing, but that's far from realistic.
Other alternatives include other quality assurance methods, such as code reviews (of which the original entry was an example of sorts), formal methods, and other testing methods. Reviews in general have proven to be rather effective at finding flaws in programs or designs.
This puts me in mind of the following extract from "Code Complete" by Steve McConnell in 1993.
"...people who refuse to write comments (1) think their code is clearer than it could possibly be; (2) think that other programmers are far more interested in their code than they really are; (3) think other programmers are smarter than they really are; (4) are lazy; or (5) are afraid someone might figure out how they're code works."
This is a quote (admittedly somewhat out of context) from chapter 19, "Self-Documenting Code". The chapter covers pretty much every conceivable argument both for and against code comments and should be compulsory reading for all those who advocate comment-free self-documenting code as a coding standard rather than a coding aim.
Perhaps slightly off topic, but let's look at this code here. My best guess, from your post, is that what you did was to change this all to something like this:
synchronized(aList){
    object = aList.remove(0);
}
doSomeStuff(object);
There is a reason for the synch blocks - aList is a collection that is accessed by multiple threads, so access to it must be synchronized (at least, that's the only reason I can think of). Of course, it would be a lot easier to use a synchronized collection for this in the first place (Collections.synchronizedList()). And as someone above mentioned, the original implementation does not guarantee that the same object that was pulled out of the collection is still in the collection, even if doSomeStuff() didn't remove it. To do that properly, you would need to write this:
synchronized(aList){
    object = aList.get(0);
    doSomeStuff(object);
    aList.removeFirst();
}
to ensure that another thread doesn't remove the first item. I imagine that there are even better solutions, but that's the quick one.
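For example, if the list is really being used as a work queue and java.util.concurrent (Java 5) is available, a BlockingQueue removes the get-then-remove window entirely. This is only a sketch with made-up names, and note the caveat: take() pulls the item out before doSomeStuff() runs, which is exactly the behaviour that broke the method described in the blog, so it only fits if that other dependency is reworked too.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch only; class and method names are invented. take() removes the
// head atomically (blocking if the queue is empty), so there is no gap
// between a get and a remove for another thread to exploit.
public class QueueWorker {
    private final BlockingQueue<Object> queue = new LinkedBlockingQueue<Object>();

    public void submit(Object work) throws InterruptedException {
        queue.put(work);
    }

    public void processNext() throws InterruptedException {
        Object object = queue.take();   // atomic remove-and-return of the head
        doSomeStuff(object);            // the item is already out of the queue here
    }

    private void doSomeStuff(Object object) {
        // stand-in for the processing in the original snippet
    }
}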
> There is a reason for the synch blocks
> :
> (at least, that's the only reason I can think of)
Exactly the point of the original article. Even after analysing the code it is not possible to determine for certain why it is doing what it is doing. A quick comment might easily have resolved the situation, if only to demonstrate a mismatch between the original coder's intention and code.
"This double synch thing is offending me beyond all reason and it better justify it's existence damn quick 'cause I'm gonna commence to chopping it out. "
This is exactly backwards. If you don't know why some code is in place, and you can't find out, don't go chopping it out.
If you think it's useless, check its history. Search the changelogs to find out who wrote it and when. Did they leave a reason? Is it still true? Are they still around? Talk to them.
> "This double synch thing is offending me beyond all reason > and it better justify it's existence damn quick 'cause I'm > gonna commence to chopping it out. " > > This is exactly backwards. If you don't know why some > code is in place, and you can't find out, don't go > chopping it out. > > If you think its useless, check its history. Search the > changelogs to find out who wrote it and when. Did they > leave a reason? Is it still true? Are they still around? > Talk to them.
I think one of the points of the blog was that obviously wrong code isn't always wrong.
At least that's one of the things I got out of it. Generally I'm of the mind that if it ain't broke, don't break it, like you suggest. On the other hand, if something just looks BAD, I'm not above rewriting it, especially if I can do so cheaply. Tight coupling usually makes for an expensive refactor, so I avoid those until I have the time to give it the correct level of attention.
> > Yes, that's often said, in shorter form, as "everyone knows that testing can't prove that a program works". Which is true, but misses the point. What's the alternative? Not testing?
>
> The alternative is not to claim that the bug in question should have been caught by regression tests (or unit tests). Sure, ideally every bug would be caught by testing, but that's far from realistic.
>
> Other alternatives include other quality assurance methods, such as code reviews (of which the original entry was an example of sorts), formal methods, and other testing methods. Reviews in general have proven to be rather effective at finding flaws in programs or designs.
That's exactly the point I was trying to make. Every time somebody mentions something like 'hey, I found this oddity' or 'look, a bug', the XPZs always seem to jump out of the woodwork, saying
"Precious should have run complete unit tests. Stupid nasty hackerses didn't pair program."
as if to say that any bug found HAD to be the result of some boob using some methodology other than one that seems to promote no planning or forethought, delight in ignoring efficiency whenever possible, and disregarding years of hard-won experience in favor of 'just solving the problem right in front of you right now'. I still don't understand how it has become generally regarded as bad practice to take care of something that you KNOW will be a problem in the future.
At least, this is what the popular notion of XP seems to entail, which is funny, because if one actually reads the Agile Manifesto and related documents, this isn't really the case. At least, that's not what I've taken from it. However, to really adhere to the manifesto on anything large, you would need an awful lot of talented people working on the project. Really doing XP takes a level of commitment, talent, discipline, and personality characteristics that just don't seem to be found in most people I've ever met. Most of the people I know who have the talent and discipline would rather work alone, or perhaps even lose a limb, than pair program on a consistent basis. Most of the people I know who really dig pair programming as a 24-7 thing belong in marketing, not at a terminal slinging code.
On the flip side, I would bet that you could take any large, non-trivial project that passes all of its unit tests and has been running just fine for years, and if you were to actually do something completely nuts like sit down and read the code, you would be able to find bugs just waiting to rear their ugly little heads. Check out http://www.reasoning.com/pdf/Open_Source_White_Paper_v1.1.pdf (requires registration) for one opinion on the value of code inspection in combination with testing.
To me, "... while comprehensive software testing is a vital part of any quality assurance program, it is not a panacea. Software testing, although extremely valuable, is inadequate in light of the increasing need for high reliability in very large code bases. Testing alone cannot guarantee defect-free code, nor can it ensure a sufficiently high level of software quality." jives really well with my experience. According to the XP way of thinking, you may as well not bother fixing a problem until it comes out in a test and that, to me, seems short sighted and perhaps even a bit arrogant.