Maybe
Vorlath just needs to sit down with a Smalltalk system for a few
minutes instead of just throwing uninformed rocks. In his latest
post, he's decided to explain (again) what's wrong with objects,
and he's using
Dan Ingalls' 1989 talk as the starting point. His target this
time is polymorphism:
If you have a queue that can hold any kind of type, then when
you get to processing the items, you have to check the type
of each one. There's more on this at around 9:22 of the video. His
solution, and what OO is about, is that instead of checking each type
before doing an operation, you would just ask the object in the
queue to do the action. In C++, this is somewhat like virtual
functions. In Smalltalk, it's a little simpler than that, in that
you only have to send a message. Ok, fine. If these are actions
that these objects are supposed to do, then I'm all for it.
However, many operations, such as inserting itself into the queue,
are not the job of the object. They're the job of the higher-level
running program. There needs to be something that controls the
interaction between the objects and the queue, as well as the sending
of the object to another location after it has been removed from this
queue. There's a bigger problem here. Are all these objects
independent? No! He's talking about a common Display operation.
Well, you can't display anything without a screen. So these objects
have a common incoming interface in that they all implement the
Display command. But where is the outgoing interface? What commands
do these objects require? What resources do they need? There is no
mention of this. In fact, I've yet to see a language that has this.
If I want to replace the display, it's impossible because the
object is coupled to it. Not only that, but this coupling would
happen with any external function call. This is what is wrong with
objects, as I've mentioned in previous entries.
VisualWorks (and Squeak, for that matter) is a direct descendant
of the system Ingalls was discussing. The "display" operation being
discussed is #displayOn: - which is implemented by any object that
knows how to display itself on the screen (or whatever device
you are displaying on). Ingalls is making the common case against
the switch statement. Basically, instead of this:
Display(x)
    Case type(x) of
        Doctor ["do doctor display"]
        Nurse ["do nurse display"]
        Window ["do window display"]
        ...
The OO complaint against this kind of code is simple - that sort
of case statement tends to litter a codebase, and every time you
add a new kind of thing to be displayed, you have to modify every
one of those case statements. Whereas, if you use polymorphism:
displayOn: someDevice
"insert code here for the object to display itself"
Note that the device is sent in as an argument; that way you can get more specific if you need to. That kind of code actually tends to be specific to widgets; the more common case in application code is a method like #displayString - i.e., a common message sent by a widget whenever it needs a way to display some object inside itself. Consider, for instance, a listbox that holds a disparate set of objects. Instead of a case statement that switches amongst the types, the system sends #displayString, and we simply implement it ourselves. For instance, say I have a user in a system whose roles I'm modifying; I might have a simple listbox that displays their name, with a collection of other widgets that, upon selection, display their access rights. The #displayString method might look like this:
displayString
^self lastName, ', ', self firstName
That way, if the system has different employee types classified
(Manager, LineWorker, etc.), then I merely need to implement a
different #displayString appropriate for each one. I don't need
some grand switch statement outside - the calling code simply does
something like this. Here's the method that assigns new text to a
label, from class Label:
text: aValue
"Set the value for text. If you subsequently modify aValue, you will need to invoke this
method again in order to register the change with the receiver."
text := aValue.
text := text displayString.
width := nil.
self updateNeedsScan
Notice how it doesn't care what kind of object is being handed to it? It simply asks the incoming value to display itself, and lets that object handle the details (i.e., it reduces coupling). That's the part Vorlath misses; he wants the #displayString code to be all bunched up with the specifics of what kind of device is attached. Those are details that are irrelevant at this level.
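To make the queue example from the talk concrete, here's a minimal sketch. The Doctor and Nurse classes, and the selectors #lastName, #badgeLabel, and #origin, are all hypothetical; #displayString:at: is the VisualWorks GraphicsContext drawing message as I recall it, so verify it against the protocol in your own image:

"In a hypothetical class Doctor"
displayOn: aGraphicsContext
    aGraphicsContext displayString: 'Dr. ', self lastName at: self origin

"In a hypothetical class Nurse"
displayOn: aGraphicsContext
    aGraphicsContext displayString: self badgeLabel at: self origin

"The queue processing code - no case statement anywhere"
queue do: [:each | each displayOn: aGraphicsContext]

Adding a new displayable type means adding one #displayOn: method to the new class; the queue-processing code never changes. And since the device arrives as an argument, nothing here is coupled to one particular display - exactly the "outgoing interface" Vorlath claims can't exist.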
Well, then he decides to come out strongly (again) against GC:
Then around 23:00, he starts talking about automatic memory management. While I agree that some automatic memory management is good if it's clear and concise, such as the stack, Dan Ingalls offers no reasons other than that it's easy to get it wrong and it doesn't say anything about the problem being solved. I can't believe a programmer would ever say such a thing. Perhaps he believes 640K will be enough for everyone too? Programming in general is easy to get wrong. By that notion, we should just give up entirely until we can get robots to do the programming for us. So the getting things wrong argument is bunk. The next thing he says is that it doesn't say anything about the problem being solved. Well, last I checked, memory is a resource. I can't think of anything MORE important than correct management of this resource. I've yet to see a GC that operates correctly. And if your data is so unmanageable that you can't keep track of it, you're doing something wrong. Sorry to all you out there that disagree, but if you can't keep track of your data, why should I expect you to do anything useful with it? So far, I haven't heard ONE good reason that would explain why automatic memory management is a requirement. This is just shameful all around.
There's no one memory management policy that is appropriate for all client and server systems. That fact drives Vorlath to declare the entire pursuit meaningless. By that logic, I might as well come out strongly against speed limits, since no one policy is useful for both city streets and the freeway. What you want is a system that will allocate more memory as you need it, and get rid of it when you don't - within a set of guidelines that are appropriate to your deployment situation. Smalltalk systems allow for that level of control. In VW, we have a set of tuning parameters in two places:
- ObjectMemory, where we can set (amongst other things) the starting sizes of the various memory zones used by the VM
- MemoryPolicy, where we can change the policy used by the VM to allocate/collect
At the simplest level, you can do things like decide that your application should stay within a given range of memory usage, and set the policy appropriately. BottomFeeder does that (thanks to some nice policy code created by Terry Raymond). The developer may need to tweak those parameters specifically for a given application.
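As a rough illustration of what that tuning looks like - the exact selectors vary between VisualWorks releases, so treat the ones below as assumptions to check against the MemoryPolicy class comment in your image - a policy that tries to keep the image under a fixed ceiling might be set up like this:

"Illustrative sketch only - verify these selectors in your VW image"
| policy |
policy := MemoryPolicy new.
policy memoryUpperBound: 64 * 1024 * 1024.  "aim to stay under ~64 MB"
policy install  "make this the policy the VM consults when growing memory"

The point is that the policy is an ordinary object you can inspect, subclass, and swap out - which is how application-specific schemes like the one in BottomFeeder get built.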
The point being, we don't need to throw our hands up in despair and go back to sharp rocks and pointed sticks. His point about being able to specify the outgoing interface makes no real sense to me; the developer presumably knows what set of objects he's dealing with, and designs accordingly.