Summary:
Anders Hejlsberg, the lead C# architect, talks with Bruce Eckel and Bill Venners about why C# instance methods are non-virtual by default and why programmers must explicitly indicate an override.
Most recent reply: November 15, 2004 0:53 PM
|
"Don't let the fear that testing can't catch all bugs stop you from writing the tests that will catch most bugs." --Martin Fowler
Similarly, some nice compiler features might not be able to catch all bugs, but I'm not afraid to embrace them if they can catch some. Explicit overriding can catch some bugs at the cost of just a few keystrokes. I prefer to type them rather than spend time in the debugger or writing a trivial unit test. Unit tests are not the answer to everything. I write unit tests for most of my code these days, but not all.
Anyway, luckily, as my experience in OO design grows, I've found that most of the time the method I'm overriding is abstract. :)
|
|
|
One feature I've found myself wanting recently is the ability to indicate that a class does not add any methods to the public interface of its parent (class or interface). Using Java terms, I'm thinking of keywords like 'only_implements' and 'only_extends'; perhaps 'purely_substitutable' would be better. The rule would be that the child class can have no public methods/attributes that are not defined in the parent.
A side effect of this feature would be to allow the compiler to detect when a method intended as an override no longer has a match in the parent, but my primary interest is allowing a developer to communicate/express the intent that a sub-class does not add anything to the contract/interface of the parent, it merely changes the implementation.
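No such keyword exists in Java, but the rule is mechanically checkable. Below is a minimal, hypothetical sketch (all class and method names are illustrative) of what an 'only_extends'-style check could do with reflection: flag any public method a class declares that is not part of its superclass's or interfaces' contract.

```java
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;
import java.util.ArrayList;
import java.util.List;

public class SubstitutabilityCheck {

    // Returns the public methods 'child' declares that are NOT part of
    // its parent's contract (superclass or implemented interfaces).
    static List<String> extraPublicMethods(Class<?> child) {
        List<String> extras = new ArrayList<>();
        for (Method m : child.getDeclaredMethods()) {
            if (!Modifier.isPublic(m.getModifiers())) continue;
            if (!definedByParent(child, m)) extras.add(m.getName());
        }
        return extras;
    }

    static boolean definedByParent(Class<?> child, Method m) {
        try {
            child.getSuperclass().getMethod(m.getName(), m.getParameterTypes());
            return true;
        } catch (NoSuchMethodException e) { /* not on superclass */ }
        for (Class<?> iface : child.getInterfaces()) {
            try {
                iface.getMethod(m.getName(), m.getParameterTypes());
                return true;
            } catch (NoSuchMethodException ignored) { }
        }
        return false;
    }

    static class Base { public void run() { } }

    static class Child extends Base {
        @Override public void run() { }  // allowed: overrides the parent
        public void extra() { }          // would violate 'only_extends'
    }

    public static void main(String[] args) {
        System.out.println(extraPublicMethods(Child.class)); // prints [extra]
    }
}
```

A real implementation would also need to consider fields, static members, and inherited interfaces, but the core of the proposed rule is just this comparison.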
In some respects, the C# mechanism is too fine-grained.
Cheers,
Geoff Sobering
|
|
|
> this DbC stuff sounded familiar -> Eiffel [...]
Absolutely! I've not looked really far into the history of software development, but to my knowledge much of the current focus on separation of interface and implementation derives from Meyer's work on DbC. In fact, the group I'm working with frequently talks of the combination of our Java 'Interface' classes and JUnit tests as defining the contract for a particular concrete class (or classes). In a couple of cases, we've even tried refactoring our tests to allow multiple implementation classes to be tested by the same test-suite, as a way of validating that all the implementations are substitutable.
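That refactoring pattern can be sketched without any test framework: write the contract checks once against the interface, then run the same suite against each implementation. This is a minimal illustration using the standard Queue interface as the stand-in contract (the poster's actual interfaces and JUnit suites would follow the same shape).

```java
import java.util.ArrayDeque;
import java.util.LinkedList;
import java.util.Queue;

public class QueueContractTest {

    // The contract checks: every implementation must pass all of them.
    static void checkContract(Queue<Integer> q) {
        if (!q.isEmpty()) throw new AssertionError("fresh queue must be empty");
        q.add(1);
        q.add(2);
        if (q.peek() != 1) throw new AssertionError("FIFO: peek must see the first element");
        if (q.poll() != 1) throw new AssertionError("FIFO: poll must remove the first element");
        if (q.size() != 1) throw new AssertionError("size must shrink after poll");
    }

    public static void main(String[] args) {
        // The same suite validates multiple implementations,
        // demonstrating that each is substitutable for the contract.
        checkContract(new LinkedList<>());
        checkContract(new ArrayDeque<>());
        System.out.println("all implementations satisfy the contract");
    }
}
```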
Cheers,
Geoff Sobering
|
|
|
"Bill Venners: In Java, instance methods are virtual by default—they can be overridden in subclasses unless they are explicitly declared final."
Default methods are NOT dynamic (or virtual) - Child methods that override default methods from Parent (of a different package) are statically bound.
After reading this first sentence, I could not read any further.
|
|
|
> Default methods are NOT dynamic (or virtual) - Child methods that override default methods from Parent (of a different package) are statically bound.
It took me a couple of tries to parse this sentence, but I think you're referring to the case where method x in class a.A has default protection (i.e. not protected or public), and class b.B extends a.A and defines its own method x. In that case, yes, there's no relationship between a.A.x and b.B.x, and both compiler and runtime will prevent calls to a.A.x from within package b.
However, the original quote doesn't mention which package the subclass resides in. A class a.C that extends a.A and overrides x will require dynamic binding for x. Furthermore, the JVM must always be prepared to load a.C with the correct semantics, even if it isn't in the same jar as a.A, unless the jar containing a.A is sealed.
In those (common) cases, then, even methods with default protection are "virtual". HotSpot may statically bind them as an optimization, but it must undo the optimization as soon as a.C is loaded.
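The same-package case is easy to demonstrate in a single file (class names are illustrative). Here both classes share a package, so C.x genuinely overrides the package-private A.x, and the call is dynamically bound:

```java
class A {
    String x() { return "A.x"; }        // default (package) access
    String callX() { return x(); }      // call made from within A
}

class C extends A {
    @Override
    String x() { return "C.x"; }        // legal override: same package
}

public class DefaultAccessDispatch {
    public static void main(String[] args) {
        A ref = new C();
        // Both calls resolve to C.x at runtime, even through an
        // A-typed reference and even for the call made inside A:
        System.out.println(ref.x());      // prints C.x
        System.out.println(ref.callX());  // prints C.x
    }
}
```

Move C to a different package and the relationship silently disappears, which is exactly the distinction the earlier post was drawing.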
|
|
|
> Having now done plenty of C++, Delphi, Java and C#, I'd say there is no particular advantage to the more explicit inheritance mark-up in C# to offset the annoyance of extra typing and spurious (unrelated to the task I'm really working on and trying to think about) decision-making. The problems it ostensibly solves don't seem to be big ones, they are corner-cases and the solution only shifts to different problems. Conversely, I never had much of a problem in other languages with accidentally overriding base methods -- is this really such a common problem that it needs to be so aggressively tackled? Anyone else out there have experiences where this was a huge problem? Would the C# syntax have solved it?
The people who struggle with the breakage problem are those who create reusable libraries with wide distribution, for example Microsoft with the .NET Framework and Sun and the Java Community with the Java API. Most programmers, by contrast, have access to all the subclasses of their own classes, so they can fix any accidental override directly. That doesn't mean that avoiding the breakage isn't a good idea. It is unlikely to happen to any one individual, but it is likely to happen to someone with every Java release. The breakage can be completely avoided by expressing the programmer's intent more clearly with something like an override keyword. Whether the extra finger typing is worth it is a judgement call, but it does solve a real problem.
The point about performance is a bit fuzzier. Bjarne Stroustrup brought up the performance cost of virtual methods compared to non-virtual ones in his interview this past week. The cost does exist in the C++ world, but the world of VMs is murkier. Most modern JVMs can, in long-running processes, inline the virtual method calls that matter to performance, so in that kind of situation there is really no cost.
I do also agree with Anders that people don't usually really design classes to be subclassed. I think Anders' point about that fits with Josh Bloch's comments in Effective Java that we should design for inheritance or disallow it. (Note that this was also coming from someone who works with widely distributed reusable components.) I thought that Anders' most compelling argument for having non-virtual be the default was that people don't usually design for subclassing.
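Bloch's "design for inheritance or else prohibit it" advice maps onto two concrete shapes in code. A minimal sketch (class names illustrative): either seal the class entirely, or keep the skeleton fixed and expose only documented, deliberate extension points.

```java
// Option 1: prohibit inheritance outright.
final class Money {
    private final long cents;
    Money(long cents) { this.cents = cents; }
    long cents() { return cents; }
}

// Option 2: design for inheritance. The skeleton is final; only the
// documented protected hooks can vary, so subclasses can change
// exactly what the author intended and nothing else.
abstract class Report {
    public final String render() {                 // fixed template
        return "== " + title() + " ==\n" + body();
    }
    protected abstract String title();             // required hook
    protected String body() { return "(empty)"; }  // optional hook
}

public class DesignForInheritance {
    public static void main(String[] args) {
        Report r = new Report() {
            @Override protected String title() { return "Sales"; }
            @Override protected String body()  { return "up 5%"; }
        };
        System.out.println(r.render());
    }
}
```

Anders' non-virtual default effectively makes Option 2 the starting point for every C# class: nothing is an extension point until the author says so.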
|
|
|
> Without an explicit "override" keyword, the mis-typed method will undesirably become a new method.
Which you will find about 10 seconds into your first unit test.
> Also, without override it must be very difficult to change a base-class method name, since the compiler cannot help you find out where that method was overridden in the sub-classes.
No, but the editor can. And if it misses it, the unit tests will point it out.
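The failure mode under discussion fits in a few lines (names illustrative). The typo compiles cleanly, and the very first test against the base-class method exposes it:

```java
class Shape {
    String describe() { return "a shape"; }
}

class Circle extends Shape {
    // Typo: this was meant to override describe(), but with no
    // override marker it silently becomes a brand-new method.
    String descibe() { return "a circle"; }
}

public class MistypedOverride {
    public static void main(String[] args) {
        Shape s = new Circle();
        // First test run flags it: the base implementation still answers.
        if (!s.describe().equals("a circle")) {
            System.out.println("FAIL: expected \"a circle\", got \"" + s.describe() + "\"");
        }
    }
}
```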
|
|
|
>However, the original quote doesn't mention which package the subclass resides in.
That is true. The original quote is a blanket statement: "instance methods are virtual by default". Not: "instance methods are sometimes virtual by default".
>A class a.C that extends a.A and overrides x
When C shares the SAME package (a) as A? That is a special-case scenario.
>In those (common) cases, then, even methods with default protection are "virtual"
The MOST common case with default protection is for a child class (C) to be in a different package than A (a), and to be *NOT* virtual.
>HotSpot may statically bind them as an optimization?... unless the jar containing a.A is sealed?
HotSpot has NOTHING to do with it. HotSpot or no HotSpot, sealed or unsealed: instance methods are *NOT* virtual by default. To say otherwise is baffling.
It's really very simple: protected or public: virtual. Private or default (package): NOT virtual. (If you say "sometimes", go ahead.)
> I thought that Anders' most compelling argument for having non-virtual > be the default was that people don't usually design for subclassing.
I am baffled. The most compelling argument for using virtual methods is that they are more intuitive. Polymorphism requires dynamic binding.
>The point about performance is a bit fuzzier. Bjarne Stroustrup brought up the performance >cost of virtual methods compared to non-virtual ones in his interview this past week.
Performance costs are negligible when compared to maintenance costs. Build tools are going to be overworked trying to update all your high performance inlined methods.
|
|
|
Bill Venners> I thought that Anders' most compelling argument for having non-virtual be the default was that people don't usually design for subclassing.
I could not disagree more. People who don't design for subclassing probably don't expect their classes to be subclassed (unexpected children). Users of these non-virtual (subclass) methods may need aspirin; they may suffer headaches.
A class with dynamically bound methods is better suited (designed?) to be subclassed than a class with static (NON-virtual) methods.
A class with default (non-virtual) methods probably should be declared final.
|
|
|
Generation of code is its own problem, as it's a violation of Blanchard's Law.
From a practical standpoint, I find the existence of explicit virtual/non-virtual control in an OO language to be a frequent barrier to flexibility.
Many are the shortsighted Java developers who have declared methods final that I wanted to override. Ditto for C++ developers who neglect to declare methods virtual that need to be overridden. These problems would go away if all methods were always dispatched dynamically.
I definitely don't buy the efficiency argument. Not when I've got a multi-GHz computer in my backpack. IOW, I've got nearly 1000 times the computing power I had when I started programming. I'm not going to give efficiency a thought until there's a problem. So why are these language designers burdening us with this clutter?
Wake up and smell the clock rate!
|
|
|
> We have experienced the corner case where unit testing would NOT have caught the problem.
Nevertheless, RCM is right. Unit tests and test-driven development are the real issues here.
With TDD, we no longer have to rely on the mafioso of gigantically complex compilers and language syntax to protect us from type problems, versioning problems, and male pattern baldness. Exhaustive suites of automated unit tests do a much better job of protecting us from arbitrary, radical change, at the application level, than compilers can possibly do.
Languages like Smalltalk and Ruby anticipate this new kind of freedom, where syntax lucidly reveals programmer intent, uncluttered by gobs of "protective" cruft. Protected by their own tests, instead of compilers, programmers can concentrate their efforts on clarity, simplicity, and (as required) extensibility.
|
|
|
I notice that Java 1.5 has @Override as part of the annotations extension.
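@Override gives Java an opt-in version of the safety C# gets from its override keyword: it asks the compiler to verify the method really overrides something. A minimal sketch (names illustrative):

```java
class Base {
    String greet() { return "hello from Base"; }
}

class Derived extends Base {
    // The annotation makes the compiler check this really overrides a
    // superclass method; a typo like "gret()" here would be a
    // compile-time error instead of a silent new method.
    @Override
    String greet() { return "hello from Derived"; }
}

public class OverrideAnnotationDemo {
    public static void main(String[] args) {
        Base b = new Derived();
        System.out.println(b.greet()); // prints hello from Derived
    }
}
```

Unlike C#, it changes nothing about dispatch (Java methods stay virtual); it only catches the mis-typed-override mistake discussed earlier in the thread.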
|
|
|
If you find yourself suffering because you changed the signature of a method and overriding methods were no longer overriding, you might want to think about using interfaces rather than concrete superclasses. The compiler would then be able to let you know that the implementing class no longer implemented the interface correctly.
I see a lot of code using concrete inheritance where interfaces would have made far more sense, as well as breaking the brittle dependency chains that can form with deep inheritance trees.
I rarely find I need to override a concrete (non abstract) method, and on those rare occasions I do, changing the inheritance relationship into one of composition usually leads to a cleaner solution, as well as providing a clean extension point for any future modifications.
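The composition-instead-of-inheritance move can be sketched in a few lines (interface and class names illustrative). Instead of extending a concrete class to add counting, wrap it behind an interface; the wrapper depends only on the contract, so it works with any implementation and survives changes to the wrapped class's internals:

```java
import java.util.ArrayList;
import java.util.List;

interface Sink {
    void put(String item);
}

class ListSink implements Sink {
    private final List<String> items = new ArrayList<>();
    public void put(String item) { items.add(item); }
    int size() { return items.size(); }
}

// Composition: delegate instead of subclass. This also gives a clean
// extension point for future decorators (logging, filtering, ...).
class CountingSink implements Sink {
    private final Sink delegate;
    private int count = 0;
    CountingSink(Sink delegate) { this.delegate = delegate; }
    public void put(String item) { count++; delegate.put(item); }
    int count() { return count; }
}

public class CompositionDemo {
    public static void main(String[] args) {
        CountingSink s = new CountingSink(new ListSink());
        s.put("a");
        s.put("b");
        System.out.println(s.count()); // prints 2
    }
}
```

If the Sink interface ever changes, the compiler flags both classes, which is exactly the signature-drift protection the post describes.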
|
|
|
By making methods final by default, and having to explicitly state which ones can be overridden, you are forcing developers to try and anticipate all possible future uses of the code they write.
It encourages a sort of protectionist attitude that says 'You're not smart enough to understand how my code works, so I'm going to only let you override the bits that I choose to allow'.
I write code to be useful, and if someone sees a use for it that I didn't anticipate, by overriding a method, then that's a good thing and should be lauded, not repressed.
|
|
|
Why do I have to express my intentions in two places? First a virtual, then an override? Wouldn't the override alone do? It's very annoying when I refactor and move the common part of two methods up to the superclass and have to write both virtual and override. Talk about a flow killer! There is another issue about the non-virtualness:
ArrayList l = new ArrayList();
l.Add(new MyBase());
l.Add(new MyDerived());
foreach (MyBase b in l) { b.DoSomething(); }
b.DoSomething() will never call MyDerived.DoSomething() when no keywords are used. That behaviour contradicts what I find intuitive: it is the instance you call the method on that should decide which implementation to use, not the reference type.
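For contrast, the behaviour this poster finds intuitive is what Java does with no keywords at all. A minimal Java sketch of the same loop (class names mirror the C# example):

```java
import java.util.ArrayList;
import java.util.List;

class MyBase {
    String doSomething() { return "MyBase"; }
}

class MyDerived extends MyBase {
    @Override
    String doSomething() { return "MyDerived"; } // dynamically dispatched
}

public class DispatchDemo {
    public static void main(String[] args) {
        List<MyBase> l = new ArrayList<>();
        l.add(new MyBase());
        l.add(new MyDerived());
        for (MyBase b : l) {
            // Prints "MyBase" then "MyDerived": the runtime type of the
            // instance, not the declared type of b, chooses the method.
            System.out.println(b.doSomething());
        }
    }
}
```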
|
|