Re: Programming in the Mid-Future
Posted: Mar 19, 2010 6:52 AM
> > For a normal statically typed language, it's correct: There's nothing to test, because the code won't compile.
>
> The code will compile, if you remove the non-existent types or abstract methods.
So the code won't compile. I'm glad we agree. *Different* code would compile, and in some languages you could play around with a ton of crappy casts etc to sort of simulate how it would be with a dynamically checked language, but that's not the same code, and it is explicitly in the way of what I want: To not have to deal with that aspect of the codebase at that time, instead focusing on the aspect that I am interested in.
> > But, remove the type declarations and run the same-structure code through a dynamically checked language, and there *is* something to test. The type problems can occur on a different code path I am not interested in testing right then, for various reasons (e.g., type problems due to concurrent editing of code where the changes come in from a version control sync,
>
> It doesn't happen in reality. In properly set up environments, the main repository gets the code that compiles correctly,
... and your local changes can include type changes that make the other developers' changes create type conflicts. This has happened to me several times over the last weeks; I've been working on a cross-cutting concern that changed a core type that lots of other things inherited from, including new classes introduced by other developers while I was working on this.
Putting your fingers in your ears and saying "That don't happen" about the things that regularly happen to me with static types isn't a convincing argument. At *best* it shows that you've learned to work around the problems with static types fairly effectively.
You know what? I hardly ever get problems from types when programming in a dynamically typed language. Over the last 15 years, I've only twice seen a type error lead to extended debugging. Once was in a Python system somebody else had written, where the fact that a string is itself a sequence of (one-character) strings led to problems when somebody meant to pass a list containing a single string and passed the bare string. The other was when auto-vivification in Perl (Perl makes up variable content when it is used) escalated a typo into a larger problem.
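For concreteness, here is a hypothetical reconstruction of that Python bug (the function name and details are invented, but the failure mode is the real one): a string is itself iterable, yielding one-character strings, so passing a bare string where a list of strings was meant doesn't fail fast, it quietly does the wrong thing.

```python
# Hypothetical reconstruction of the bug described above; the names
# are invented. A string is iterable and yields 1-character strings,
# so passing a bare string where a list of strings was meant does not
# raise an error -- it silently processes each character.

def shout(words):
    return [w.upper() + "!" for w in words]

print(shout(["abc"]))  # a list containing one string: ['ABC!']
print(shout("abc"))    # the bare string: ['A!', 'B!', 'C!'] -- the bug
```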
But I don't propose that this means "There are no advantages to having the checking" - it just means that I've worked so much with dynamic languages that I easily work around the challenges that are there.
I've also worked with static languages, though not as much - and I only mention problems that I actually encounter when working with static languages.
> > or a type change that I'm doing on a type where there are a lot of derived classes, and I want to fix one of them fully before going on to the next, instead of doing a sweep over all to fix the type issue and then afterwards fixing the tests).
>
> You can always isolate that part of code by creating another project and including the specific classes you are interested in.
Then you've already significantly interrupted the work I'm doing. It was this interruption that we were trying to get rid of; *not having the interruption* is the advantage. Spending the time to set up a new project and then modifying it for each class is going to be as much of an interruption as taking the dual-phase approach.
> > > Wow, this discussion has been turned to a 'dynamic vs static' debate!
> >
> > There are some people that always want to switch to grandiose claims about static typing; so anything that even mentions anything dynamic turns into static typing vs dynamic checking.
> >
> > Most of this discussion happens at an inane level: Static typing advocates with little experience with non-static languages pretend that static typing has no cost; dynamic typing advocates claim that everything should be dynamic and that this has no drawbacks to speak of. Both solutions have drawbacks; the question is which is least in a specific situation (including the specific skills of a particular person and the specific aspects of the particular problems to be solved and languages to be used.)
>
> There is no drawback for the static typing case.
The first part below assumes that we're discussing static typing at the programming-language level. I didn't notice your mention of JIT and being dynamic at the source level until after I had written it; from my perspective, type inference in the JIT is an implementation detail that doesn't usually affect anything but performance, and so isn't relevant when I talk about static typing vs dynamic checking.
On the topic of programmer-level static vs dynamic typing, and of static typing having some drawbacks, we can approach this from two sides: the theoretical or the practical.
From the theoretical side, we can use the fact that any type system will reject some programs that would execute correctly. This can be demonstrated simply: consider "if p() then a op b", where p() is a possibly non-terminating function and "a op b" is type-inconsistent. If p terminates, the program is invalid; if p does not terminate, the program is valid; and proving which it is (for an arbitrary p) runs into the halting problem. It is, at least to me, clearly a drawback for a system to reject a valid program. (And in practice, fully static systems reject many valid programs.)
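A minimal Python sketch of the shape of program this argument is about (p here is trivially decidable, just to make the example runnable; the point is that in general it need not be): the ill-typed expression sits on a branch that is never taken, so the program runs fine under dynamic checking, while a static checker has to reject it.

```python
def p():
    # Stands in for an arbitrary predicate; here it is trivially
    # decidable, but in general it need not be.
    return False

def run():
    if p():
        return "a" + 1   # ill-typed, but only ever on the dead branch
    return "ok"

print(run())  # runs cleanly: the bad expression is never evaluated
```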
From the practical side, it depends on the people and actual languages involved.
I would be willing to postulate that you, with your specific skills, may have no drawback to the static typing case.
Would you be willing to postulate that I, with my specific skills, work less efficiently for some problems in common variants of the static typing case (e.g, Java) than I do in common cases of dynamic typing (e.g, Python, Ruby)?
Are you willing to postulate that the natural way *for me* to think about some problems is through heterogeneous collections, and that these are simple and natural to implement in the dynamic languages I usually use (Perl, Python and Ruby) and hard and unnatural to implement in Java (you need to do a ton of casts)? (As far as I know, it is impossible to directly implement in Haskell due to lack of casts; you'd have to restructure the problem, which is obviously possible but misses the point.)
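As an illustration (all names invented), this is the kind of heterogeneous collection I mean, in Python: one list freely mixing strings, numbers and nested lists, with dispatch on runtime type. The Java equivalent would be a collection of Object with instanceof checks and casts at every use site.

```python
# A heterogeneous list mixing strings, numbers, and nested lists,
# processed by dispatching on the runtime type of each element.

record = ["move", 3, ["x", 1.5], "done"]

def describe(item):
    if isinstance(item, str):
        return f"word:{item}"
    if isinstance(item, (int, float)):
        return f"num:{item}"
    if isinstance(item, list):
        return "[" + ", ".join(describe(i) for i in item) + "]"
    return "?"

print([describe(i) for i in record])
# → ['word:move', 'num:3', '[word:x, num:1.5]', 'word:done']
```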
Would you be willing to postulate that *for me* it is an advantage to be able to take different aspects of correctness in the order I choose, instead of having one particular aspect forced to be dealt with first and then have the rest dealt with afterwards?
If so, we agree that there are disadvantages; the question comes down to what the disadvantages are and when they're larger in either environment.
> > Anyway, to bring this more back on topic: What would the prediction for the future of static typing and dynamic checking be? How would this be better in 25 years?
>
> Source code will have a dynamic system but the underlying JIT system will convert it to the static version automatically.
> > I think we'll get the ability to deal with this mess better.
>
> There is no mess.
>
> > In my ideal world, which I believe we are likely to have in 25 years, I'll start out with dynamic checking, allowing easy exploratory programming, quick response times, easy-to-do hacks to get things working, and so on. This works well for programming at a single module to medium-size program (with a single person or a small, static team).
> >
> > When working in a larger environment, I'd like to be able to infer types when my module has become somewhat stable, and place these as declarations at the edges of the small module, giving definition to an API and giving people something to relate to that is more abstract than the actual code (and more direct than documentation).
>
> Static typing easily allows exploratory programming.
... and also easily gets in the way of exploratory programming, at least some of the ways I like to deal with it (using heterogeneous collections to define a sort of DSL, for instance.)
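A toy sketch of what I mean by a heterogeneous-collection DSL (all names here are invented for illustration): a list of (operation, argument) pairs interpreted at runtime, trivial to grow while exploring, with no up-front type for what an argument may be -- one argument is a function, another is an integer.

```python
# A list-based mini-DSL: each step is an (op, arg) pair where the
# arg's type varies per operation -- a function for filter/map,
# an int for take. The interpreter dispatches on the op name.

pipeline = [
    ("filter", lambda x: x % 2 == 0),
    ("map", lambda x: x * 10),
    ("take", 2),
]

def run_pipeline(data, spec):
    for op, arg in spec:
        if op == "filter":
            data = [x for x in data if arg(x)]
        elif op == "map":
            data = [arg(x) for x in data]
        elif op == "take":
            data = data[:arg]
    return data

print(run_pipeline(range(10), pipeline))  # → [0, 20]
```

Typing that spec list statically in Java forces either a wrapper-class hierarchy or casts everywhere; in a dynamic language the list just is the program.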
> > I'd expect those types to be highly expressive and flexible as well - something like full dependent types, supporting much more complete proofs of my program than the presently common crop of statically typed languages.
>
> You can't go further than Haskell in proofs. It's math, and there are physical limitations to what can be done.
"Common crop" doesn't really include Haskell, but I'll also say that my impression is that there's a lot of work being done on dependent types etc that makes it possible to go further than Haskell does. You can sort of fake it in Haskell, but it's (as far as I understand) not easy or natural for an average Haskell programmer. (I'm just a beginning Haskell programmer, so I may be misunderstanding.)
> > I expect simple transition between code, tests, types and proofs for the easy cases. I expect to be able to start on any end, and whatever is easy to automatically derive from what I have provided will (if I ask) be derived from the part I provide. Not anywhere near a 100% solution, but an 80% one for the simple cases. If I write some tests for simple code, the system will (often) be able to derive code that passes those tests. If I write some untyped code, the system will be able to derive types that fit the code (though the types may be too wide), and usually tests that make sure that (some) important aspects of the code still work. If I write some tests, the system will be able to use those to generate some part of the proof skeleton, while needing my help with the complicated parts. I expect the use of code databases to resolve some of this - where the system looks up the parts you've provided against some sort of template database from other people's code, says "Hmm, this foo looks very much like that bar", and then offers me some simple analogy to the tests or proofs for that bar (or, if it is a more or less direct replacement, just lets me use bar).
>
> It will never work, because two pieces of code need to be identical in order to have the same proof. Remember, code is math.
Large parts of the code we write are very close to identical; you are much less of a unique snowflake than you think.
Given a piece of code, I can write a reasonable set of tests for it, or make a reasonable derivation of what properties I would like to prove for it, and cook up a proof. I see no reason that a sufficiently powerful computer can't do the same. Coq already does parts of this when you construct a proof; you can have it do an exhaustive search for how to complete a portion of the proof. I expect this to improve significantly with a 1000x faster computer to do it, better algorithms, and mountains of data to use for heuristics.
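A crude, scaled-way-down sketch of the kind of mechanical checking I have in mind (all names invented; a real system would search vastly larger spaces): conjecture properties of a function and verify them exhaustively over a small input domain, quickcheck-style.

```python
from itertools import product

def my_sort(xs):
    # the code under scrutiny
    return sorted(xs)

def holds(prop, domain, size):
    # Exhaustively check a property over every length-`size` tuple
    # drawn from `domain` -- a miniature, brute-force "proof search".
    return all(prop(list(xs)) for xs in product(domain, repeat=size))

idempotent = lambda xs: my_sort(my_sort(xs)) == my_sort(xs)
ordered = lambda xs: all(a <= b for a, b in zip(my_sort(xs), my_sort(xs)[1:]))

print(holds(idempotent, range(4), 3) and holds(ordered, range(4), 3))  # → True
```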
> > That's where I expect us to get: The benefits of dynamic typing when that is most beneficial. The benefits of deep static typing when that is most beneficial. And the ability to maneuver between these in a relatively easy fashion, with the computer doing as much as it can of the lifting.
>
> There are no benefits in dynamic typing that static typing doesn't have.
Let's just agree that there are no benefits to dynamic typing that you have the skills to exploit.
> Working with a statically typed language interpreter has the same benefits as working with a dynamic language: you can alter the types on the fly, don't exit the current test session, etc. You can do this with C++ and Haskell, if you wished.
It doesn't have the same benefits, as shown both by the proof above and by the simple question: "Do you or the computer get to choose what order to do different aspects of your task in?"
Working with a REPL for a statically typed language has *some* of the same benefits, but not all of them.