Back when I was a C programmer, I ran across the "but it works in the debugger!" problem more than once - which usually meant that a buffer was being torched in normal runs, but the debugger was masking that problem.
Well, Steve Wessels points out a similar type of issue in Smalltalk - you can fool yourself with lazy accessors:
So you single-step through your code with the debugger. Here's the rub. When you get to where the debugger is showing your object in its inspector or text pane, it's going to use the #printOn: method to show the object. This #printOn: method uses a combination of accessors and lazy initialization to show the contents of the object. But in the real code, in real time, when your application was producing the report, let's say that the "time" instance variable had not been instantiated. It was still nil. In the world you think you are debugging, the object was nil. In the context of using the debugger, that instance variable got instantiated while you were single-stepping.
Something like Heisenberg Uncertainty. You changed the object while looking at it.
For those of you who don't know, a lazy accessor looks something like this:
time
	"Lazily initialize time on first access."
	^time isNil
		ifTrue: [time := Timestamp now]
		ifFalse: [time]
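To see how the trap springs, here's a minimal sketch of a #printOn: method that goes through that lazy accessor - the receiver and the exact formatting are made up, but the shape matches what Steve describes:

printOn: aStream
	"Hypothetical sketch - the debugger and inspectors call this to display the object."
	super printOn: aStream.
	aStream nextPutAll: ' at '.
	aStream print: self time

The moment the debugger renders the object, #time runs, and the nil that existed in the real run is silently replaced with a fresh Timestamp - so the state you're inspecting is no longer the state that produced the bug.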
The idea behind a lazy accessor is to ensure that the variable is never left in a bad state - but when you're debugging that particular method, you can end up with surprises. Lazy accessors have their uses - I use them in the blog server, mostly because I update the server on the fly with new code, and I want to make sure that instance variables that didn't exist at all before I updated the server end up in the right state. Still, as Steve points out, you can fool yourself with them.
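As a hedged illustration of that update-on-the-fly case (the class and variable names here are made up, not the actual blog server code): suppose an instance variable hitCount is added to a class whose instances already live in the running image. Those existing instances get the new variable as nil, and a lazy accessor brings them into a sane state the first time anyone asks:

hitCount
	"Hypothetical sketch - hitCount was added after instances already existed,
	 so for those instances it starts out nil until something touches it."
	^hitCount isNil
		ifTrue: [hitCount := 0]
		ifFalse: [hitCount]

That's handy for live updates - and it's also exactly the mechanism that can hide a nil from you in the debugger.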