> Michael,
>
> > I don't recall saying that. I'm all for having the
> > ability to express invariants in code.
>
> I interpreted your blog as stating that a programming
> language where invariants can be expressed and are
> enforced is patronizing.
>
> You, of course, didn't use the word "invariant," but your
> focus was very much on how enforced restrictions are
> unnecessarily patronizing. I interpret most of the
> restrictions you identified (e.g. sealed and final
> classes) as means of expressing invariants.
They do help in enforcing invariants; the problem is that they are very blunt tools.
Final / sealed / frozen classes are used to ensure that nobody later creates a child class that breaks the invariants of the parent class. A common example is immutable classes: if somebody inherits from one, they could make the class mutable, e.g. by adding a changeable field. By making the class final, you make sure that subclasses can't break this invariant.
However, you also make sure that they cannot do valid inheritance. Say, as an example, that the immutable class above is an immutable list that allows mutable objects to be contained. Because the contained objects may change, it needs to recalculate hashCode() every time hashCode() is called. Now, a class that inherits from it and allows only *immutable* objects in the list could cache the hashCode() value, potentially getting significantly better performance. This could be enforced through the type system - but it can't be implemented, because of the blunt tool of a final class.
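To make the trade-off concrete, here is a minimal Java sketch of that scenario. The class names (ImmutableList, DeeplyImmutableList) are hypothetical, and the guarantee that the subclass's elements really are immutable is assumed to be enforced elsewhere:

import java.util.ArrayList;
import java.util.List;

// Structurally immutable list whose *elements* may still be mutable,
// so hashCode() has to be recomputed on every call.
class ImmutableList<T> {
    private final List<T> elements;

    ImmutableList(List<T> elements) {
        this.elements = new ArrayList<>(elements); // defensive copy
    }

    public T get(int i) { return elements.get(i); }
    public int size() { return elements.size(); }

    @Override
    public int hashCode() {
        return elements.hashCode(); // recomputed each time
    }
    // equals() omitted for brevity.
}

// Subclass whose contract is that the elements are immutable too, so
// the hash can be computed once and cached. Declaring ImmutableList
// final would make this perfectly valid optimization impossible.
class DeeplyImmutableList<T> extends ImmutableList<T> {
    private final int cachedHash;

    DeeplyImmutableList(List<T> elements) {
        super(elements);
        this.cachedHash = super.hashCode(); // safe: nothing can change
    }

    @Override
    public int hashCode() {
        return cachedHash;
    }
}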
This is a drawback of final classes. The benefit is that you can assume that nobody but the original class is violating the promised invariant.
Which of these is more valuable probably varies depending on what environment you are in. My personal experience has been that these kinds of protections added by others regularly get in my way, while programming in languages that lack them has never left me thinking "Oh, I'd love to have final."
The same cost/benefit calculation goes for multiple inheritance: while I know that in theory inheritance diamonds can create problems, in 20 years of OO programming I've come across a problem with this exactly once, whereas when I program in Java I run into duplication and clumsy APIs daily - clumsiness and duplication that could have been cleanly avoided with multiple inheritance.
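For readers unfamiliar with the diamond, even Java's limited form of multiple inheritance (default methods, Java 8 and later) runs into it, and shows that a compiler can resolve the conflict explicitly rather than forbid it outright. A minimal sketch with made-up interface names:

interface Swimmer {
    default String move() { return "swim"; }
}

interface Runner {
    default String move() { return "run"; }
}

// Inheriting move() along two paths is the classic diamond conflict.
// Java refuses to pick silently: without the override below, javac
// rejects the class for inheriting unrelated defaults.
class Triathlete implements Swimmer, Runner {
    @Override
    public String move() {
        // Interface.super syntax explicitly names each inherited version.
        return Swimmer.super.move() + " then " + Runner.super.move();
    }
}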
> Without bondage & discipline you are not an "engineer" or
> come even close but an "artist" or worse: a "cowboy
> coder". It's almost like a class distinction.
> It seems that languages focused on preventing you from
> doing the wrong thing (e.g. Java) receive harsh criticism,
> whereas languages focused on enabling and encouraging you
> to do the right thing (e.g. Scala) receive much praise.
> Different still are languages that encourage you to do
> stupid things like cast a pointer to an integer or
> vice-versa (e.g. C), which are languages that frequently
> come under attack.
The developer can really do stupid things without any encouragement.
I must have at least 300 programming books scattered through my home. Michael Feathers' book on working with legacy code is one of the three or four most useful books I have. It's a superbly practical, pragmatic, coherent book about the real issues that experienced developers deal with when working on mature code bases.
For this reason, his opinions carry a lot of weight with me.
Note to Artima moderators - I may want to write a long reply to this post, based on my recent refresh of my C++ knowledge after spending the last decade working with the "patronizing" language, Java. The particular book I'm reading, Sams Teach Yourself C++ in 21 Days, is good, and full of Dos and Don'ts, like
"DON'T mark the constructor as virtual."
There's even an exercise testing this. Apparently, coders do this and bad things happen(?) Seems like this is a perfect place for a "patronizing" compiler to prevent the coder from being just plain dumb.
I'm off camping for a week or so, but how would I go about replying with a long post/blog?
> If you mark a constructor virtual this happens:
>
> -sh-3.00$ g++ test.cpp && a.out
> test.cpp:7: error: constructors cannot be declared
> virtual
> -sh-3.00$
True. Not to derail the thread, but the reason constructors can't be virtual in languages like Java, C#, and C++ is not patronization; it's that they would make no sense semantically: virtual dispatch picks behavior based on the dynamic type of an existing object, and a constructor runs precisely to bring that object into existence, so there is nothing to dispatch on yet. FWIW, I feel that constructors are really a bit of a language hack. That's why they have so many odd rules.
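As a side illustration (not from the original post) of why construction and virtual dispatch mix badly, consider that Java does dispatch overridable methods virtually even inside a constructor. A minimal sketch with hypothetical class names:

class Base {
    Base() {
        // Virtually dispatched: this call reaches the subclass
        // override before the subclass's own fields are initialized.
        describe();
    }

    void describe() {
        System.out.println("Base");
    }
}

class Derived extends Base {
    private final String name = "Derived";

    @Override
    void describe() {
        // Runs from Base's constructor, before the 'name' field
        // initializer has executed - even though 'name' is final.
        System.out.println(name);
    }

    public static void main(String[] args) {
        new Derived(); // prints "null", not "Derived"
    }
}

C++ handles the same hazard differently: while a base-class constructor runs, the object's dynamic type is the base class, so virtual calls never reach the derived override.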