Re: Where Did All the Beautiful Code Go?
Posted: Apr 6, 2006 4:12 AM
Take as a simple example an index bounds check. The index will be the result of some computation, so in order to prevent out-of-bounds errors, you would have to prove that the computation in question cannot, under any circumstances, yield a wrong result.
Actually, the proposed approach goes the other way around: it does not have to prove that the computation cannot return an invalid value, but rather that the value it does return is valid for use as an array index.
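To make that concrete, here is a minimal sketch in the same Java-like pseudocode as the example further down. The checker is hypothetical; the point is only that it compares the set of values the index can take against the set of valid indices, instead of trying to prove properties of the computation itself:

//input: n: {0..7} (assume this set was established by earlier analysis)
//output: {2..9}
int computeIndex(int n) {
    return n + 2; //{0..7} + 2 => {2..9}
}

void useIndex(int n) {
    int[] data = new int[10]; //valid indices: {0..9}
    int i = computeIndex(n);  //i: {2..9}
    //check: {2..9} is a subset of {0..9} => access accepted
    System.out.println(data[i]);
}

If the analysis could only establish n: {0..15}, the index set would be {2..17}, which is not a subset of {0..9}, and the access would be rejected.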
Assume that some caller calls getSomething() and passes the result to another method, which has the specification that the input may not be null. The compiler probably can't foresee how the condition will evaluate, so in order to enforce the requirement, it would have to reject this code. However, the code may be correct. A human programmer may look at the code and say, "why, it's obvious that the method won't return null in the case in question. That stupid compiler is preventing me from getting my work done!"
The approach I proposed actually does the same thing humans do; I derived it from observing how I and other programmers work out whether a piece of code can actually evaluate to some value. It may sound strange at first, but take a few moments to consider how a human decides that "the method won't return null in this case": what we do is examine the source code, derive the possible outputs, and then compare them with the allowed inputs. We do not actually run the algorithm in our heads; we put all the possible values into sets.
So, in your example, the tool will most probably be able to infer the same thing that a person does, and thus not reject the code.
In fact, some programmers prefer dynamic languages for that very reason: they feel that static checks are too restrictive. A language which pushed compile-time checks even further, along the lines you suggest, would hardly become very popular.
I agree with you. In cases where a programmer wants to simply start coding and discover the errors later, this approach is restrictive. There is not one shoe to fit every foot. But in cases where programming is done at a professional level, involving thousand-dollar contracts (and more), maybe this approach is better.
There is an interesting discussion on Slashdot this morning about Eiffel, and some people said that they hate contracts.
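For readers who haven't used Eiffel: a contract attaches preconditions and postconditions to a routine. A rough Java approximation, using assert in place of Eiffel's native require/ensure clauses (the withdraw method is made up for illustration), would be:

//an Eiffel-style contract, expressed with asserts
int withdraw(int balance, int amount) {
    //precondition (Eiffel: require): the amount must be valid
    assert amount > 0 && amount <= balance : "precondition violated";
    int newBalance = balance - amount;
    //postcondition (Eiffel: ensure): the result must be consistent
    assert newBalance >= 0 : "postcondition violated";
    return newBalance;
}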
This isn't in itself a reason to dismiss the concept, but the point is you have to make a credible case not only that it is possible, but also that it is worthwhile.
Of course. I am not even convinced myself yet. That is why we are discussing this.
No static verification, indeed no methodology in the world, can guarantee correctness, so the question is how to get the best result with respect to the resources available.
Indeed. The only thing you can prove is the absence of errors with respect to the constraints you have set. But it is not possible to prove that the constraints you have set are the correct ones.
Btw, I am convinced of the benefits of static type checking, and I don't give much credence to the dynamic crowd's claim that static checking is too restrictive and unchecked code with unit tests is just as stable. The question I'm asking is how far static checking can go before it becomes really restrictive, and what can realistically be achieved by pushing it further.
I would like to point out, in the context of this Nice example, that the approach presented above maintains the truthfulness of a statement across the code (if such a statement exists, of course). This means that once you check that a pointer is not null, you do not have to check it again. This is not the same as in Nice, where every access through a pointer of the nullable type (?T) must be checked for null. It also means that you do not need a separate non-nullable pointer type. Here is the analysis of your example:
//input: condition: {true, false}
//output: {null, MyObject}
MyObject getSomething(boolean condition) {
    return (condition) ? new MyObject() : null;
}

//input: m: {MyObject}
void anotherMethod(MyObject m) {
    System.out.println("received myObject");
}

void main(String[] args) {
    //(1)
    //input: true -> condition: {true, false} OK
    //output: {null, MyObject}
    var myObject = getSomething(true);

    if (myObject == null) {
        //(2)
        //myObject is null.
        //myObject: {null, MyObject} => myObject: {null}
        //input: myObject: {null} -> myObject: {null} OK
        System.out.println("error: null MyObject");
        return;
    }

    //(3)
    //myObject is no longer null.
    //myObject: {null, MyObject} => myObject: {MyObject}

    //(4)
    //input: myObject: {MyObject} -> m: {MyObject} OK
    anotherMethod(myObject);

    //here the rest of the code assumes that myObject is not null
}
At point (1), getSomething is called, and the result may be null. We check that the set of the actual argument ({true}) is a subset of the set declared for the parameter condition ({true, false}).
At point (2), the block if (myObject == null)... has as input a myObject variable that is null. This is also OK, as myObject can be null: we check that the input set (myObject: {null}) is a subset of the required set (myObject: {null}). The statically analyzable if condition assures us of that.
At point (3), we see that myObject can no longer be null, because the branch with myObject == null does not reach this point. Therefore, the set of possible values no longer contains null.
From point (4) onwards, the compiler assumes that myObject is not null, and thus happily accepts the code that uses the variable.
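To make the subset checks at (1), (2) and (4) concrete, here is a small sketch of how such a checker could represent value sets internally. The names (accepts, removeNull) are made up for illustration and do not belong to any existing tool; values are abstract tokens such as "null" and "MyObject":

import java.util.HashSet;
import java.util.Set;

//the acceptance test applied at every call site: the set of values the
//argument can actually take must be a subset of the declared parameter set
boolean accepts(Set<String> declared, Set<String> actual) {
    return declared.containsAll(actual);
}

//the narrowing after "if (myObject == null) { ... return; }": the
//fall-through branch removes "null" from the variable's set
Set<String> removeNull(Set<String> values) {
    Set<String> narrowed = new HashSet<>(values);
    narrowed.remove("null");
    return narrowed;
}

void demo() {
    var myObject = Set.of("null", "MyObject");                  //after point (1)
    var narrowed = removeNull(myObject);                        //point (3): {MyObject}
    System.out.println(accepts(Set.of("MyObject"), narrowed));  //point (4): true
    System.out.println(accepts(Set.of("MyObject"), myObject));  //without the null check: false
}

At point (4) the check succeeds because {MyObject} is a subset of {MyObject}; without the null check at (2), the same call would be rejected, since {null, MyObject} is not a subset of {MyObject}.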
So it seems that static verification which is not restrictive, and thus not frustrating, can exist, at least to a certain degree.
Of course, the real question is whether beautiful code can exist without static verification. I have no answer to that.