Summary
We've started discussing Python 3000 for real. There's a new mailing list and a branch. The first point of order is about process; a slew of meta-PEPs are being written (and the goal is to avoid a repeat of Perl 6 :-). But I'm blogging about a feature proposal that evolved dramatically over the past days.
Alex Martelli has been arguing for adaptation since the dawn of time, and at various times he's chided me for not seeing the light. Well, am I ever grateful that I dragged my feet!
Let me first briefly describe adaptation, reduced to its simplest form. The motivation comes from a common situation where an object wrapper is required (aptly named the Adapter Pattern). PEP 246 proposes a built-in function adapt(X, P) where X is any object and P a Protocol. We intentionally don't define what a protocol is; all we care about is that it can be represented by an object. The call adapt(X, P) returns an object constructed from X that satisfies P, or raises an exception if it can't. It uses a global registry which maps types and protocols to adapter functions; we can write this as a dict R = {(T, P): A, ...}. Then, adapt(X, P) computes the adapter A = R[type(X), P] and then returns A(X). There is a registration function register(T, P, A) which simply sets R[T, P] = A. Read Alex's post for a gentler explanation (and lots of things I left out).
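The machinery above can be sketched in a few lines of Python. This is a minimal illustration of the PEP 246 scheme as described here, not the PEP's actual implementation; the Stack protocol object is a made-up example.

```python
# A minimal sketch of PEP 246-style adaptation with a global registry.

R = {}  # maps (type, protocol) -> adapter function

def register(T, P, A):
    """Record that A adapts instances of type T to protocol P."""
    R[T, P] = A

def adapt(X, P):
    """Return an object constructed from X that satisfies P."""
    try:
        A = R[type(X), P]
    except KeyError:
        raise TypeError(f"cannot adapt {type(X).__name__} to {P}")
    return A(X)

# Example: adapt a list to a hypothetical "Stack" protocol.
Stack = object()  # any object can stand in for a protocol
register(list, Stack, lambda lst: lst)  # a list already satisfies Stack
adapt([1, 2, 3], Stack)  # -> [1, 2, 3]
```

Note that adapters are keyed on the exact type; lookup on base types is one of the extensions discussed below.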
As soon as Alex had posted this version of how adaptation works, several people (including myself) independently realized that the global registry (which has always been a sore point to some) is a red herring; the registry could just as well be separated out per protocol! So now we make adapt() and register() methods on the protocol: instead of adapt(X, P) we write P.adapt(X) and instead of register(T, P, A) we write P.register(T, A). The signature of A is unchanged. I call this "second-generation adaptation".
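A sketch of this "second-generation" shape, with the registry moved onto the protocol object itself (the Protocol class name and Sequence example are mine, for illustration):

```python
# Second-generation adaptation: each protocol carries its own registry,
# so each framework can define its own lookup policy.

class Protocol:
    def __init__(self, name):
        self.name = name
        self.registry = {}  # maps type -> adapter function

    def register(self, T, A):
        self.registry[T] = A

    def adapt(self, X):
        try:
            A = self.registry[type(X)]
        except KeyError:
            raise TypeError(f"cannot adapt {type(X).__name__} to {self.name}")
        return A(X)

# Usage: instead of adapt(X, P) we write P.adapt(X).
Sequence = Protocol("Sequence")
Sequence.register(str, list)   # adapt a string to a list of characters
Sequence.adapt("abc")          # -> ['a', 'b', 'c']
```

The signature of the adapter A is unchanged; only the location of the registry moved.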
The nice thing about this is that you no longer have a fixed global implementation of exactly what register() and adapt() do. Alex mentions a whole slew of issues he's ignoring but that would need to be addressed for a real-life implementation, such as how to handle adaptation for an object whose type hasn't been registered but whose base type has; or the notion of inheritance between protocols (useful when you equate protocols with interfaces, as in Zope and Twisted); or automatic detection of where an object already implements a protocol/interface (again useful in Zope and Twisted). Several of those extensions make lookup performance an issue, and there are different ways to address that. By having multiple protocol implementations (each implementing the same adapt() and register() APIs) each framework can have its own notion of how adaptation works for its own protocols, without having to deal with a fixed global implementation of adaptation that may or may not do the best thing for a particular framework.
But then Ian Bicking posted a competing idea: instead of adaptation, why don't we use generic functions? In his and Phillip Eby's view these are more powerful than adapters, yet in some sense equivalent. Let me briefly describe generic functions to set the stage.
A generic function, G, is a callable that behaves like a function (taking arguments and returning a value) but whose implementation is extensible and can be spread across different modules. G contains a registry of implementations which is indexed by a tuple of the combined types of the arguments. Suppose we want G to be callable with two arguments; then the registry would map type pairs to implementation functions. We'd write G.register((T1, T2), F) to indicate that F(X1, X2) is a suitable implementation for G(X1, X2) when type(X1)==T1 and type(X2)==T2. The simplest implementation would just map the arguments to their types (or better, classes), convert to a tuple, and use that as a key in the registry to find the implementation function. If that key is not found, a default implementation is invoked; this is provided when G is first defined and can either provide some fallback action or raise an exception.
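The simplest implementation described here fits in a short class. This is only an illustrative sketch of the mechanism (the GenericFunction name and the combine example are mine, not from RuleDispatch or any real library):

```python
# A minimal generic function: the registry maps a tuple of argument
# types to an implementation; unknown signatures fall back to a default.

class GenericFunction:
    def __init__(self, default):
        self.default = default   # fallback, provided when G is defined
        self.registry = {}       # maps (T1, T2, ...) -> implementation

    def register(self, types, F):
        self.registry[types] = F

    def __call__(self, *args):
        key = tuple(type(a) for a in args)   # exact-type lookup only
        F = self.registry.get(key, self.default)
        return F(*args)

# Example: a two-argument generic "combine".
combine = GenericFunction(lambda a, b: (a, b))   # default: just pair them
combine.register((int, int), lambda a, b: a + b)
combine.register((str, str), lambda a, b: a + " " + b)

combine(1, 2)        # -> 3
combine("a", "b")    # -> 'a b'
combine(1, "b")      # -> (1, 'b'), via the default
```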
A useful implementation of generic functions must also support looking for matches on the base types of the argument types. This is where things get hairy, at least when you have multiple arguments. For example, suppose you have a solution that is an exact match on the first argument and a base type match on the second, and another that is a base type match on the first argument and an exact match on the second; which do you prefer? Phillip Eby's implementation, RuleDispatch (part of PEAK), refuses to guess; if there isn't one dominant solution (whatever that means) it raises an exception. You can always cut the tie by registering a more specific signature.
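For a single argument the base-type question has an easy answer: walk the argument class's method resolution order and take the first registered class. The ambiguity described above only appears with two or more arguments. A sketch (names mine, illustrative only):

```python
# Single-dispatch lookup on base types via the MRO: the most specific
# registered class along type(x).__mro__ wins.

class Generic1:
    def __init__(self, default):
        self.default = default
        self.registry = {}

    def register(self, T, F):
        self.registry[T] = F

    def __call__(self, x):
        for cls in type(x).__mro__:      # most specific class first
            if cls in self.registry:
                return self.registry[cls](x)
        return self.default(x)

describe = Generic1(lambda x: "something")
describe.register(int, lambda x: "an int")

class MyInt(int):
    pass

describe(MyInt(3))   # -> 'an int', found via the base class int
describe(3.5)        # -> 'something', the default
```

With two arguments there are two MROs to walk, and no single obvious rule for combining them; that is exactly the tie RuleDispatch refuses to break.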
C++ users will recognize generic functions as a run-time implementation of the strategy used by the C++ compiler to resolve function overloading. Fortunately we're not bound by backwards compatibility with C to repeat C++'s mistakes (which, for example, can cause a float to be preferred over a bool). Lisp or Dylan users (are there any left? :-) and PyPy developers will recognize them as multi-methods.
In order to contrast and compare the two ideas, I posted a very simple version of adaptation and generic functions, applied to the idea of reimplementing the built-in iter() function. I used descriptors for registration, making the signatures slightly different from what I showed above, but the essence is the same.
Now we're ready for the Aha! moment (already implicit in Ian's and Phillip's position), brought home by an alternative version of the Protocol independently developed by Tim Hochberg: P.adapt(X) is just a verbose way of spelling a generic function call G(X)!
Interestingly, it took Alex a little time to like this -- he was thinking of adaptation as more powerful because it can return an object implementing several methods, while doing the same thing with generic functions would require a separate generic function for each method. But of course we can write a generic factory function, which returns an object with multiple methods, just like adapt() can; and the generic function approach wins in the (common) case where we want to adapt to a "point protocol" -- an interface with just one method, which we immediately call in order to obtain the desired result. When using adaptation, this would require each adapter to use a helper class with a single method that does the desired operation; when using a generic function, the generic function can just do the operation. And we haven't even used generic function dispatch on multiple arguments!
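The point-protocol contrast can be made concrete. Below, both styles compute "the size of a thing"; the Sized protocol, ListSizer class, and size function are illustrative names of my own, not anything from PEP 246 or RuleDispatch:

```python
# Adaptation style: adapting to a one-method "Sized" protocol means each
# adapter wraps its argument in a helper class exposing that one method,
# which the caller then immediately invokes.
class ListSizer:
    def __init__(self, lst):
        self.lst = lst
    def size(self):
        return len(self.lst)

sizer = ListSizer([1, 2, 3])   # what adapt(X, Sized) might return
sizer.size()                   # -> 3

# Generic function style: no wrapper object at all; the registered
# implementation just performs the operation directly.
registry = {list: len}

def size(x):
    return registry[type(x)](x)

size([1, 2, 3])                # -> 3
```

The generic function skips the intermediate helper object entirely, which is the win Guido describes for point protocols.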
I'm not sure where this will eventually lead; but I've already killed PEP 246 (adaptation) and PEP 245 (interfaces) in anticipation of a more "generic" proposal.
I have implemented multiple dispatch for Java with an extended compiler (http://pec.dev.java.net/nonav/frontpage.html). I think they are a great idea, so good luck in putting them into Python.
I looked at many different implementations, including using a Tuple as you suggest. I found the best to be a series of instanceof tests, see:
> I have implemented multiple dispatch for Java with an extended compiler
> (http://pec.dev.java.net/nonav/frontpage.html). I think they are a great
> idea, so good luck in putting them into Python.
Is that what all of this is for? Multiple dispatch?
> Is that what all of this is for? Multiple dispatch?
I may not have understood your question - but here goes?
The purpose of my extended Java compiler is to add support for design patterns into the compiler. You declare a class as conforming to a pattern using a marker interface, e.g. class MySingleton implements Singleton { ... } says that MySingleton is a Singleton and the compiler checks that it is.
Multiple dispatch is one of the patterns I support; others are: No Instance, Static Factory, Instance Factory, Singleton, Immutable, Value, Immutable Value Conversions, and Composite. The compiler is extensible, therefore you can add your own patterns to be enforced.
> @James,
>
> > Is that what all of this is for? Multiple dispatch?
>
> I may not have understood your question - but here goes?
I think you may not. I was referring to the original post from Guido. Your post actually cleared up his post (I think.) I was looking at this and trying to figure out what the point was. When you said multiple dispatch, I realized that there isn't any method overloading in Python (I'm still a noob with Python, forgive me if I am incorrect here) and all of that stuff made sense.
> > I have implemented multiple dispatch for Java with an extended compiler
> > (http://pec.dev.java.net/nonav/frontpage.html). I think they are a
> > great idea, so good luck in putting them into Python.
>
> Is that what all of this is for? Multiple dispatch?
So it seems. It's a fine way to eliminate the whole adaptation problem which, taken seriously, would require defining some transformation according to an initial and a final type. Instead we see a formal reduction around some "adaptation protocol" expressed by a few combinators. Just by abstracting away adaptation and transposing the terms long enough, we move back to comfortable plain old OO: multi-methods, factory functions, (parametric) polymorphism, multiple dispatch, etc. It's like saying that the GoF design patterns can be reduced to inheritance and association, which is nothing but the truth, but still seems to miss some essential points.
> I was looking at this and trying to figure out what the point was. When
> you said multiple dispatch, I realized that there isn't any method
> overloading in Python (I'm still a noob with Python, forgive me if I am
> incorrect here) and all of that stuff made sense.
Correct. Python has no method overloading in the Java sense.
However, it has a few features that *often* make it unnecessary to do method overloading; in particular default arguments, duck typing, and the occasional explicit type check remove many of the situations where Java requires you to define multiple overloaded versions of the same method.
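Those three features can be shown side by side. This is an illustrative sketch of my own (the Point class and draw_line function are made-up names), covering in one Python function what Java might spell as several overloaded method signatures:

```python
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

def draw_line(start, end=None, width=1):
    # Default argument: draw_line(p) behaves like an overload
    # draw_line(p, p) would in Java.
    if end is None:
        end = start
    # Occasional explicit type check: accept a width given as a string.
    if isinstance(width, str):
        width = float(width)
    # Duck typing: start and end may be any objects with .x and .y,
    # not just Points.
    return (start.x, start.y, end.x, end.y, width)

draw_line(Point(0, 0), Point(3, 4))    # -> (0, 0, 3, 4, 1)
draw_line(Point(1, 2), width="2.5")    # -> (1, 2, 1, 2, 2.5)
```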
I'm actually thinking of calling this feature (if/when it gets added to Python) "overloading" -- what do you think of that? It's run-time instead of compile-time, but it's the moral equivalent -- Python does many things at run-time that Java and C++ do at compile time (for example type checking, import, even class/function definition).
> Correct. Python has no method overloading in the Java sense.
>
> However, it has a few features that *often* make it unnecessary to do
> method overloading; in particular default arguments, duck typing, and
> the occasional explicit type check remove many of the situations where
> Java requires you to define multiple overloaded versions of the same
> method.
I may have given the impression that there was a judgement in my disbelief. It was really just that when I first read this I felt "like a child that wanders into the middle of a movie". I thought, wow, this is some crazy stuff. Then when I saw "multiple dispatch" it clicked. Just a little inverse cognitive dissonance.
> I'm actually thinking of calling this feature (if/when it gets added to
> Python) "overloading" -- what do you think of that? It's run-time
> instead of compile-time, but it's the moral equivalent -- Python does
> many things at run-time that Java and C++ do at compile time (for
> example type checking, import, even class/function definition).
It had actually occurred to me that Python lacked multiple dispatch and I was a little disappointed because I was looking forward to a using it since Java lacks it.
Will the adapter take the type of the target variable (if there is one) into consideration? That would have some fairly interesting (and possibly evil) implications.
> Will the adapter take the type of the target variable (if there is one)
> into consideration? That would have some fairly interesting (and
> possibly evil) implications.
Do you mean, in the C++ sense where the desired target type can affect the choice of overloaded implementation? Then the answer is no, for two reasons: (a) Python's variables aren't typed (what's a string now could become a file in an instant), and (b) Python's run-time isn't set up to communicate this information; the information flow is strictly from the inside (expression) out (to the target).
> I'm actually thinking of calling this feature (if/when it gets added to
> Python) "overloading" -- what do you think of that? It's run-time
> instead of compile-time, but it's the moral equivalent
I think that if you just call it "overloading" then Python forums will become full of people getting confused about how it differs from Java "overloading" and getting more confused by others who explain the difference badly.
If it really is a different thing then give it a different name or at least a qualifier (e.g. "run-time overloading" or "Pythonic overloading" {though the latter makes it sound quirky}).
> > Will the adapter take the type of the target variable (if there is
> > one) into consideration? That would have some fairly interesting (and
> > possibly evil) implications.
>
> Do you mean, in the C++ sense where the desired target type can affect
> the choice of overloaded implementation? Then the answer is no, for two
> reasons: (a) Python's variables aren't typed (what's a string now could
> become a file in an instant),
Of course. That idea doesn't make a bit of sense in Python.
> and (b) Python's run-time isn't set up to communicate this information;
> the information flow is strictly from the inside (expression) out (to
> the target).
I pretty much figured that but I'm still getting my head around the design of Python.
Actually, yes, there's still some of us who use Dylan out there, and after the Open Source release of the ex-Harlequin Dylan compiler, there's even something like a future for Dylan.
Nice to see that Python is getting support for Generic Functions. Now if you add macros for support of metasyntactic abstractions, get rid of fundamental ("non-object") types for orthogonality, add type annotations and optimistic type inferencing for performance, you'll end up with a language that pretty much looks like Dylan, but with a bigger user base.
Which isn't a big surprise, given that both Python and Dylan are based on Scheme, just with a different syntax.
Both of these strike me as examples of features where the power they add to the language is insufficient to justify the increased complexity they bring.
A feeping creature, in other words.