Re: Changing the Zen of Programming
Posted: Sep 17, 2002 2:22 PM
> One conclusion is that that you cite, which allows
> everyone to control the libraries to which they will
> link (which, btw, we also provide in Java/Jini with
> the introduction of preferred classes). But there are
> other solutions as well.

Yes, if you link exclusively to known implementations of interfaces with which you've tested your app or system, you may get reliability, but it's not a dynamic system.
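To make that contrast concrete, here is a minimal Java sketch. The `Greeter` interface and both classes are hypothetical, invented for illustration (they are not part of Jini's preferred-classes mechanism): the client on the static path names the one implementation it has tested against, while the dynamic path loads a class by name at runtime and can rely only on the interface contract.

```java
// Hypothetical interface and classes, for illustration only.
interface Greeter {
    String greet(String name);
}

class KnownGreeter implements Greeter {
    public String greet(String name) { return "Hello, " + name; }
}

public class Linking {
    // Static linking: the client names the one implementation
    // it has actually tested against. Reliable, but not dynamic.
    static Greeter staticLink() {
        return new KnownGreeter();
    }

    // Dynamic linking: the implementation class is chosen at runtime.
    // The client can only rely on the Greeter contract, because it may
    // be handed an implementation it has never seen before.
    static Greeter dynamicLink(String className) {
        try {
            return (Greeter) Class.forName(className).newInstance();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(staticLink().greet("world"));
        System.out.println(dynamicLink("KnownGreeter").greet("world"));
    }
}
```

The dynamic path is what makes the system open; it is also exactly where reliability comes to depend on every implementor honoring the interface.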
> One is to insure that the interfaces can be
> understood. This often means defining simple
> interfaces, which is harder than defining complex
> ones that are specified in vague or incomplete ways.
> But going to the trouble to really define the
> interface is worth the time that it takes, and can go
> a long way in making sure that everyone agrees on
> what the interface means.

The challenge here is: how do you get people to do that? People of various talents will be designing APIs. People will have time pressures in which to produce those designs. It takes talent and time to come up with a simple API. In practice, therefore, complex, hard-to-understand, vaguely documented APIs will happen.
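As a sketch of what "really defining the interface" might look like (the `Stack` example here is hypothetical), here is a small interface whose every obligation on implementors is written into its contract, together with one conforming implementation:

```java
import java.util.ArrayList;
import java.util.NoSuchElementException;

// Hypothetical example of a small, fully specified interface. Every
// obligation on implementors is stated, so clients and implementors
// can agree on what the interface means.
interface Stack {
    /**
     * Pushes item onto the top of the stack. item must not be null;
     * implementations must throw IllegalArgumentException if it is.
     */
    void push(Object item);

    /**
     * Removes and returns the top item. Implementations must throw
     * NoSuchElementException if the stack is empty; returning null
     * to signal emptiness is not allowed.
     */
    Object pop();

    /** Returns the number of items on the stack; never negative. */
    int size();
}

// One conforming implementation of the contract above.
class ArrayStack implements Stack {
    private final ArrayList items = new ArrayList();

    public void push(Object item) {
        if (item == null) throw new IllegalArgumentException("null item");
        items.add(item);
    }

    public Object pop() {
        if (items.isEmpty()) throw new NoSuchElementException("empty stack");
        return items.remove(items.size() - 1);
    }

    public int size() { return items.size(); }
}
```

The point is not the stack itself but that the error cases (null push, empty pop) are specified in the interface rather than left for each implementor to decide differently.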
> Another is to really restrict yourself to programming
> to the interface, and not try to cheat. DLL hell is a
> well-known phenomenon, but it was generally caused by
> the implementations of the DLLs making changes to the
> global environment in ways that were not part of the
> interface and not documented. The problem was one of
> using non-interface-defined mechanisms to allow
> communication between components in a way that was
> either more efficient or hidden from various members
> of the competition.

I hadn't thought of DLL hell in terms of changes to the global environment. I always imagined vaguely defined interface semantics leading to implementations doing things that were surprising to clients. I suppose the DLL hell problem is made better by Java (and .NET) preventing a client from going around an interface at runtime, but implementations can still do things with the environment via static methods. So once again, it looks like it is up to all those programmers writing implementations of APIs to be good citizens. That's why I think it will be a challenge, because a lot of those programmers will, because of lack of experience, talent, taste, time, and clear API documentation, do bad things in their implementations.
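A sketch of the kind of "bad citizen" the reply describes (all names here are hypothetical): this implementation honors the interface's signature but also mutates global state, via a static method, that the interface never mentions. Any other component that reads or writes the same system property is now coupled to this class through a channel no interface documents.

```java
// Hypothetical interface; names are illustrative only.
interface Formatter {
    String format(double value);
}

// A "bad citizen": the Formatter interface says nothing about system
// properties, but this implementation quietly leaks state into the
// global environment through a static method. Components that share
// this property are coupled in a way no interface documents.
class LeakyFormatter implements Formatter {
    public String format(double value) {
        // Undocumented side effect on global state:
        System.setProperty("formatter.lastValue", String.valueOf(value));
        return String.valueOf(value);
    }
}
```

Nothing in the type system stops this; only discipline (or tooling that forbids such calls) keeps implementations honest.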
> Reliable systems that are made reliable by testing
> the whole of the system are reliable systems that
> can't be changed. Systems that can be changed but
> include implementations that rely on side-effects
> won't ever be reliable. We can build systems that are
> both reliable and open to change, but only if we
> really practice the discipline of programming to
> well-specified interfaces. The fact that we have
> failed in this discipline in the past doesn't mean
> that it can't be done; merely that we haven't.

Well, it sounds like it is up to designers and implementers of APIs. If designers and programmers do a good job, then dynamic systems in which the parts are made by many different people will work. Perhaps the fact that a higher level of quality is required to make the systems work will force the builders of the system parts to reach that level of quality. The web is an example of a dynamic system. Lots of different manufacturers of web servers and clients take part in the web, but it still seems to work.

I do think that testing components against their interfaces will be an important part of the process by which programmers will achieve the required level of quality in their implementations. Nevertheless, I think that given the generally low level of quality of code that I tend to see in most places, it will be a challenge to achieve reliability in dynamic systems in which parts are made by different people who don't know each other.
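One way to do that testing is a contract test written against the interface alone, so the same checks can be run against any implementation from any vendor. This is a hedged sketch with hypothetical names (`Counter`, `SimpleCounter`), not a prescription for any particular framework:

```java
// Hypothetical interface under test. Contract: value() starts at 0
// and grows by exactly 1 per call to increment().
interface Counter {
    void increment();
    int value();
}

// One implementation to run the contract test against.
class SimpleCounter implements Counter {
    private int n = 0;
    public void increment() { n++; }
    public int value() { return n; }
}

// A contract test written against the interface alone: it can be run
// against any implementation, from any author, without knowing how
// that implementation works inside.
class CounterContractTest {
    static void check(Counter c) {
        if (c.value() != 0) throw new AssertionError("must start at 0");
        c.increment();
        c.increment();
        if (c.value() != 2) throw new AssertionError("must count increments");
    }

    public static void main(String[] args) {
        check(new SimpleCounter());
        System.out.println("SimpleCounter honors the Counter contract");
    }
}
```

If every implementor had to pass a suite like this before shipping, a dynamic system assembled from parts built by strangers would have at least a baseline of shared expectations.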