This post originated from an RSS feed registered with .NET Buzz
by Frans Bouma.
Original Post: More on the badness of E&C and how to debug software.
Feed Title: Frans Bouma's blog
Feed URL: http://www.asp.net/err404.htm?aspxerrorpath=/fbouma/Rss.aspx
Feed Description: Generator.CreateCoolTool();
Several people, mostly VB.NET developers, have disagreed (and still do) with my opinion about Edit and Continue (E&C) in a debugger. It's a free world (well, for most people). However, let me elaborate some more on why I think Edit and Continue isn't a feature you should really be using, and why, if you do use it, especially often, you are not using a good debugging style.
A good debugging style and where it comes from
I like debugging. It's like solving a puzzle: the only clues given to you are a piece of code, its misbehaving nature and the functionality it should be providing, and you have to find the cause of the misbehavior in the shortest amount of time possible. Now, before you get the idea that I create bugs on purpose just to experience the joy of fixing them later: fixing bugs eats up a lot of time in the total development cycle, and should therefore be avoided at all costs. For all Edit and Continue fans, read that last sentence again. Good. Now, let's move on to what to do when you want to solve such a puzzle: a bug in your software.
Debugging isn't about searching for forgotten quotes or a ';' at the wrong spot. It's about something else entirely. Let's categorize some types of bugs to make understanding how to fix them a little easier, shall we?
1. Functionality bugs. These are the ones at the highest level of abstraction: the functionality the software has to provide. An example of this kind of bug is the ability to execute script in an email in Outlook (Express), with that feature enabled by default.
2. Algorithmic bugs. These are the ones at the abstraction level below the functionality bugs. An example of this kind of bug is the (now patched) flaw in Microsoft's TCP/IP stack which marked TCP/IP packets with numbers that weren't random enough, which could lead to data exposure via sniffing. The code was good; the algorithm used was bad.
3. Algorithm implementation bugs. This is the kind of bug you'll see when an algorithm is implemented incorrectly. It shouldn't be confused with the next category, however. Algorithm implementation bugs originate in the developer's mind: the developer thinks s/he understands how the algorithm to implement works and starts cranking out code, but didn't fully understand the algorithm, so the code that was written will not function as expected, although the developer thinks it does.
4. Plain old stupidity bugs. Everyone knows them: forgetting to update a counter, adding the wrong value to a variable, etc.
Now, let's grab E&C by its balls and re-read our list of bug categories. What do we see? Only category 4, the plain old stupidity bugs, can probably be solved by Edit and Continue. However, how do you know when your software's misconduct is a category 4 bug? By stepping through all the code in the debugger? I surely hope not. Not that stepping through a piece of code in a debugger can't be fun, but it eats a hell of a lot of time and it is unnecessary. The reason for this is simple.
How to find a bug the Right Way™
When you test a piece of code and it misbehaves (unit tests fail, your own tests fail), you have to start at the top of the bug category list to investigate what is probably causing the software to not do what it should do. If this sounds strange, think about it for a second: if you use unit tests, are the unit tests themselves bug free? (In other words: are the unit tests implemented in such a way that they test against the correct values?) If you use your own test plans, or simply trial-and-error testing during development, do you know exactly what the correct behaviour of your software should be? This can only be the case if you first test against the proposed functionality: the functional design you try to implement via algorithms, which are in turn projected onto code statements.
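To make the "are the unit tests themselves correct?" question concrete, here's a minimal NUnit-style sketch; the OrderTotalCalculator class and its 10%-discount rule are invented for this example, and the only point is that the expected value in the assert has to come from the functional design, not from whatever the code under test happens to return:

```csharp
using NUnit.Framework;

// Hypothetical class, invented for this example. The rule "orders of
// 100.00 or more get a 10% discount" is assumed to come from the
// (imaginary) functional design.
public class OrderTotalCalculator
{
    public decimal CalculateTotal(decimal grossAmount)
    {
        if(grossAmount >= 100.00m)
        {
            return grossAmount * 0.90m;
        }
        return grossAmount;
    }
}

[TestFixture]
public class OrderTotalTests
{
    [Test]
    public void TotalOf100GetsTenPercentDiscount()
    {
        OrderTotalCalculator calculator = new OrderTotalCalculator();

        decimal total = calculator.CalculateTotal(100.00m);

        // 90.00 is taken from the functional design, not from running
        // the code first and copying back whatever it returned.
        Assert.AreEqual(90.00m, total);
    }
}
```

If the expected value in a test like this is wrong, every bug category further down the list becomes harder to detect, which is exactly the point made above.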
When that's done, you test the algorithms to implement for bugs. This means all algorithms used: not only the patented, highly abstract algorithm of the 'buy' module in your web application, but also the tiny algorithms used to implement that higher-level algorithm, like where and why caching is used, and how the software determines which data to load from the persistent storage and when. Only when you are sure the algorithms are good can you go to the next step. You can't proceed to that next step when you do not know whether your algorithms are correct, because when you do not know whether an algorithm A is correct, you can't fully test the code that represents that algorithm: you can't detect an algorithm implementation bug in it.
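As a hypothetical illustration of such a "tiny algorithm": pulling a small decision rule, like when to refresh a cache, into a pure function makes it possible to check the algorithm against the cases you worked out on paper, before the surrounding plumbing gets involved. The CachePolicy class and its refresh rule below are made up for this sketch:

```csharp
using System;

public static class CachePolicy
{
    // Tiny algorithm (invented for this example): reload from persistent
    // storage when the cached copy is older than the allowed maximum age.
    public static bool ShouldRefresh(DateTime lastLoadedUtc, TimeSpan maxAge, DateTime nowUtc)
    {
        return (nowUtc - lastLoadedUtc) > maxAge;
    }
}

public static class CachePolicyChecks
{
    public static void Run()
    {
        DateTime now = new DateTime(2004, 1, 1, 12, 0, 0, DateTimeKind.Utc);

        // Cases taken from the algorithm as designed on paper:
        // exactly at the limit should NOT refresh, just past it should.
        Console.WriteLine(CachePolicy.ShouldRefresh(now.AddMinutes(-10), TimeSpan.FromMinutes(10), now)); // False
        Console.WriteLine(CachePolicy.ShouldRefresh(now.AddMinutes(-11), TimeSpan.FromMinutes(10), now)); // True
    }
}
```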
After you've made sure the algorithms to implement are correct, you can also be sure each algorithm is understood correctly (and you do know how to test algorithms, don't you? Hint: not with E&C). Now you open the code editor and read the code you've written. You read it as an algorithm, because it is in fact the 1:1 representation of your algorithm in program code. This means that when it is given to a developer who doesn't work on the project, s/he should be able to reconstruct the algorithm you tried to implement from the code you've handed over. Methods called (including constructors) by the code under investigation should already be correct, since those methods imply sub-algorithms, which should have been tested before you test an algorithm that consumes them. You can use a debugger for reading the code, but most of the time that is not necessary, because a good developer uses pre- and post-conditions for methods and blocks of code, which are taken from the algorithm tests. Reading source code to reconstruct your algorithm, that is, testing whether the projection you've made of the algorithm onto the source code you've typed in is correct, means you read all the statements, again. Not all statements in your program, just the piece of code that is the representation of the algorithm you are currently testing.
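Those pre- and post-conditions can be written down in the code itself, so reading the code as an algorithm also tells you what has to hold before and after each block. A minimal sketch using plain Debug.Assert calls; the PaymentProcessor class and its rules are invented for this example:

```csharp
using System;
using System.Diagnostics;

public class PaymentProcessor
{
    // Hypothetical method, invented for this example: applies a payment
    // to a list of open invoice amounts, first invoice first, and returns
    // the part of the payment that could not be applied.
    public decimal ApplyPayment(decimal[] openAmounts, decimal payment)
    {
        // Pre-conditions, taken from the algorithm tests.
        Debug.Assert(openAmounts != null && openAmounts.Length > 0, "There must be at least one open invoice.");
        Debug.Assert(payment > 0m, "A payment must be a positive amount.");
        foreach(decimal amount in openAmounts)
        {
            Debug.Assert(amount >= 0m, "Open invoice amounts can't be negative.");
        }

        decimal remaining = payment;
        for(int i = 0; i < openAmounts.Length && remaining > 0m; i++)
        {
            decimal applied = Math.Min(openAmounts[i], remaining);
            openAmounts[i] -= applied;
            remaining -= applied;
        }

        // Post-condition: we never apply more than was paid.
        Debug.Assert(remaining >= 0m && remaining <= payment, "Remaining payment must stay within [0, payment].");
        return remaining;
    }
}
```

Debug.Assert calls only fire in debug builds, so they document the algorithm's contract while you read and test the code without slowing down a release build.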
When the code seems to be good, you have to test it for stupidity bugs; in other words, the bugs you overlooked when re-reading the code you'd written while testing the algorithm projection. To help you with this, you can use a debugger. Place a breakpoint at the statement inside the code which represents the algorithm to test, at the statement which marks the start of a suspicious block of code. Before you open the debugger, make sure you know what the values of the variables to check have to be when you break at that statement. This is not a mind-boggling adventure: you have already walked through that particular piece of code when testing for category 3 bugs. When you are sure about this, fire up the debugger and start stepping through a couple of statements. If you can't find the bug in at most 10 step-overs in the debugger, you placed your breakpoint at the wrong spot. Stepping through the code, adding carefully chosen watches, is not about 'Hmm, let's look at what could be wrong', but about 'I think it's the value I pass on to that method that's wrong, but I'm not sure'. The debugger will tell you that.
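One way to be sure you know the values before you break at a statement is to write the expectation down at the spot where you'd place the breakpoint; if the check fails, the bug is in the code above it, and if it holds, you step on from there. A hypothetical sketch, with the class and method made up for the example:

```csharp
using System.Diagnostics;

public class BasketExample
{
    // Hypothetical method, invented for this example.
    public decimal Checkout(decimal[] itemPrices, decimal shippingCost)
    {
        decimal itemTotal = 0m;
        foreach(decimal price in itemPrices)
        {
            itemTotal += price;
        }

        // Candidate breakpoint spot: from walking this code during the
        // category 3 check you already know what itemTotal must be for
        // your test input. Stating that expectation here means a failure
        // points at the loop above, and success points at the shipping
        // calculation below; either way, only a few step-overs are needed.
        Debug.Assert(itemTotal >= 0m, "Item total can never be negative at this point.");

        return itemTotal + shippingCost;
    }
}
```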
Conclusions
Is Edit and Continue now of any help? No. Most stupidity bugs are found when the category 3 checks are done, and the rest (you hope) when stepping through a couple of statements in a debugger. When the stupidity bug is found, you stop the debugger, fix the code and compile again. If the code still fails, you have left another stupidity bug further on in the code. If you are not sure whether your fix for the stupidity bug is correct, and you think you have to go through a trial-and-error loop in the debugger to see if your fix is the right one, then you don't understand the code you have to write; and if you do not understand which code you should have written there, you can't fix it properly either.
People who grew up with assemblers, the GNU command-line C debugger and other terrible tools know that debugging using a debugger is a last resort, and they also learned that debugging is not about using a debugger, but about understanding the difference between the source code that should have been written and the source code that is written. Edit and Continue doesn't help you find more bugs at a faster rate. You know what does? Design by Contract like Eiffel has, pre/post-conditions in the code, and proper design: designing algorithms first on paper or in a design tool, not behind a keyboard with a code editor in front of you. Crying out loud that you want Edit and Continue and that you need it badly shows, sorry to say it, that you do not understand what debugging is all about. I hope I've enlightened you all a bit about how debugging should be done.
I also hope you understand that a project like Visual C#.NET has to be completed in a limited (i.e. not endless) amount of time using a limited (i.e. not endless) amount of resources. If the dev team has to choose which features to implement, I truly hope they choose wisely. If a feature like Edit and Continue, which takes a lot of resources but doesn't bring much to the table, is chosen anyway, you have to have a hell of a set of reasons to do so. I hope you can now imagine why there is no such set of reasons for Edit and Continue, beyond catering to a debugging style, common among the majority of your tool's users, which requires that feature. I'm glad the C# team has decided that Edit and Continue is not a feature that should be implemented, so other features now will be. If you, dear reader, still think I don't understand debugging and don't understand the value of Edit and Continue, please GOTO 10. Thank you.