The first bunch of observations is about the organization of the
conference. This aspect is interesting to me as an organizer of the
PyConTre conference. For a very reasonable fee - 100 Euros for early
registrants, compared to the 90 Euros of PyConTre - we got two full
days of conference in a really nice room, an excellent lunch and a
spectacular dinner - the conference dinner was not included at
PyConTre. In addition we got two breakfast tickets; this is the
first conference I have ever attended that offered breakfast:
kudos for the idea! We also got a nice T-shirt and bag, and
even printed proceedings with the papers presented at the
conference. All in all, we could not have asked for more.
I would be curious to know how much of the funding
came from the sponsors and how much came from the University.
At the Python conference we could keep the admission ticket low
because we had a lot of sponsors, both Italian and international
(amongst the international sponsors Qt, Google, Riverbank, the Python
Software Foundation and others); at the Lisp Symposium there were
fewer sponsors but also far fewer participants (say ~40 against
~400 at PyConTre), so I am sure the conference was much less
expensive to run.
I actually prefer smaller conferences with a single track, so
that I can attend all the talks; the problem with the Python
conference, with its three parallel tracks, was that I was forced to
miss a lot of very interesting talks.
Also, at PyConTre I was not completely free, since as an organizer
I had to serve as a talk manager, whereas at the
Symposium I was just a speaker.
PyConTre offered a couple of things the Symposium did not: a
translation service and a video service. The translation
service does not make sense in the context of the Symposium,
where the official language is English, but the video service
would be useful. In particular, I cannot give a URL for the
video of any talk, so if you could not attend, that
talk is lost and there is no record of it. The organizers
actually collected the slides, but it is not the same
thing as a video.
Another advantage of PyConTre was the dates: the event
took place over a weekend rather than on working days,
so it was much easier to attend for people working
in industry.
There was only one minor glitch in the organization: getting the
wireless connection to work was hard. The problem was mainly
bureaucratic: since the University connection was encrypted, we
needed certificates and some nontrivial wireless settings.
After spending some time with the help of the local organizers
configuring my wireless to use the digital certificate, I still could
not connect, since my system supported only Home WPA
authentication, not the Enterprise level. It would have been
simpler to use another provider instead of the University network, or
simply not to offer the service, since in the end only a few people could
use it. At PyConTre we just
used the hotel Internet connection, which was expensive and slow but
gave much less trouble, since we simply got a card with a login and a password.
The first day opened with Scott McKay's talk about what he has learned in
the last 25 years or so of working with Lisp and Dylan. Scott started
his career at Symbolics, the creators of the legendary Lisp Machines,
was involved in the creation of the Dylan programming language
(a.k.a. Lisp without parentheses), and is now working at ITA
Software (they produce software for buying and reserving airline
tickets), spending his time on an ORM layer between Common Lisp and
traditional databases. The first half of his talk was about the
mistakes they made in the early days, which were the usual ones
(excessive cleverness, building on shaky foundations, underestimating
the difficulties of problems which turned out to be intractable).
It is interesting to notice how such mistakes are made again
and again; it seems that no matter how many times people are
warned against them, they persist in repeating them.
Consider for instance the Python community, which has always had a
strong bias against cleverness and a very pragmatic, no-nonsense
attitude. Still, we see all the usual mistakes of overcomplexity
being made in major Python projects, like Zope, SQLAlchemy, or
most Web frameworks. There must be something that drives clever people
to abuse their cleverness. Fortunately, the Python core is mostly free
of such abuses, since the core developers have learned their lesson,
but lots of Pythonistas out there are still wanting.
I remember one specific piece of advice from Scott: don't use caches;
if your application is slow, no amount of caching will ever speed it up.
He repeated that at least four or five times, and I remember it because
at my own company I have seen my colleagues fighting with caches for months,
wasting an enormous amount of time and effort for nothing, since in
the end they were forced to turn the cache off. I wish they had attended
Scott's seminar before even considering software caches.
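To make the point concrete, here is a minimal memoization sketch in
Common Lisp (my own illustration, not Scott's code; SLOW-QUERY is a
hypothetical stand-in for an expensive computation):

    ;; A generic memoizer: caches results keyed on the argument list.
    (defun memoize (fn)
      (let ((cache (make-hash-table :test #'equal)))
        (lambda (&rest args)
          (multiple-value-bind (value present-p) (gethash args cache)
            (if present-p
                value
                (setf (gethash args cache) (apply fn args)))))))

    ;; SLOW-QUERY pretends to be a heavy computation (hypothetical).
    (defun slow-query (id)
      (sleep 1)                       ; stand-in for an expensive call
      (* id id))

    ;; If every call uses fresh arguments, the table only grows and the
    ;; hit rate stays near zero: the program is exactly as slow as
    ;; before, plus the memory cost and the invalidation bugs.
    (defparameter *cached-query* (memoize #'slow-query))
    (funcall *cached-query* 1)        ; one second: a miss
    (funcall *cached-query* 2)        ; one second again: another miss

The sketch shows why a cache only pays off when the same inputs recur
often; it never makes the underlying computation any faster.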
The other catch phrase from Scott's talk was "any bozo can write code".
He told us the story of how David Moon uttered this sentence to him at a
public conference, and how he realized in the end that Moon was right.
The point was that a good specification is more important than code,
and that defending a poor specification with working code is a lost cause.
The story was interesting, but unfortunately there are no shortcuts; I
do not believe any programmer can write a good specification without having
written some code first. Good specs always emerge at the end: this is
unfortunate, but there is no other way.
Of course, some general rules
apply, and if you assume that your initial specification will be wrong
anyway, you can work in such a way as to be prepared for the inevitable change.
The trick is in being able to find out the mistakes in the original design
before it is too late to change it, and this is not easy, especially
in large projects with multiple programmers. I have no pearls of wisdom
to offer, except the famous quote by Brian Kernighan about keeping
the code (but I would say it applies to the design too) simple:

  "Debugging is twice as hard as writing the code in the first
  place. Therefore, if you write the code as cleverly as possible, you
  are, by definition, not smart enough to debug it."

(Scott did not quote Kernighan, but he was basically saying the same thing.)
Notice that this quote applies as well to refactoring
(s/debugging/refactoring/g).
The second half of the talk was about the future of Common Lisp, about
what we should do now, having learned from our mistakes. This part was
less convincing, since Scott basically advocated repeating one of his
own past mistakes, i.e. dropping Common Lisp and starting again from
scratch with a new language, which would basically be Dylan with
parentheses, good support for concurrency, and the ability to run
on the Java and .NET platforms. When it was pointed out to him that Clojure
is already doing what he wants, he said "Well, I don't like the square
brackets" :-(
Frankly, that was quite disappointing. He went on to explain that in
his opinion the name resolution mechanism in Clojure is too complex,
and listed a few technical aspects which did not convince him, so he
sees Clojure as a valid candidate for the future Common Lisp but not
yet as the Real Thing. The impression I got is that he would only
accept as the new Common Lisp a language designed by himself, but then
nobody else would use it, as nobody ever used Dylan.
Also, the hypothetical new language he was describing (Uncommon Lisp)
looked very much like Scheme (single namespace, a module
system instead of a package system, a simple core, easy to
understand and to explain to newbies, more consistent): when he was
asked why he was not considering Scheme he said "Yes, Scheme would be
fine but I do not like the community". Honestly, I do not have much
confidence in the future of Common Lisp if this is the attitude.
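To see what the single-namespace point means in practice, here is a
tiny Common Lisp snippet of my own (not from the talk) showing the
Lisp-2 behavior that a Scheme-like design would drop:

    ;; Common Lisp is a "Lisp-2": a symbol can name a function and a
    ;; variable at the same time, because the two live in separate
    ;; namespaces.
    (defun items (n) (loop for i below n collect i)) ; ITEMS, the function
    (defvar items '(a b c))                          ; ITEMS, the variable

    (items 2)  ; => (0 1)   -- looked up in the function namespace
    items      ; => (A B C) -- looked up in the variable namespace

    ;; Passing functions around therefore requires #' and FUNCALL; in a
    ;; single-namespace Lisp-1 (Scheme, Clojure) one would simply write
    ;; (f (f x)) after binding F to a function.
    (defun apply-twice (f x) (funcall f (funcall f x)))
    (apply-twice #'1+ 10)  ; => 12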
The other provocative speaker after Scott McKay was Mark Tarver. He is the
author of a new language called Qi, which to me looks more similar to ML
than to Common Lisp.
The relevance to Common Lisp is that Qi is implemented in Common Lisp and that
Mark Tarver has a strong Common Lisp background. Apparently Tarver's
position is that Lisp is dead as a specific phenotype (i.e. Common Lisp) but
continues to live and has a great future as a genotype. In his words, Lisp
should not be seen as a specific language, but as a set
of genetic characteristics which have virally infected all modern dynamic
languages. His main
interest is to port Qi to all such Lisp-like languages by writing code
generators (Qi->Python, Qi->Ruby, etc.). I don't know what more to say, since
I have not checked Qi, but my gut feeling is against code generation, since
debugging automatically generated code is always a big PITA. Tarver's main
point, however - that Common Lisp should look at the newer languages
and find some way to inter-operate with them - is a sensible one.
Both McKay and Tarver basically said that it is time to move on from
Common Lisp and that we should start writing a new language - from
scratch, as McKay advocated, or on top of Common Lisp, as Tarver advocated.
Their position was very much isolated: everybody else was more
conservative, very much against restarting with a new language (actually I do
not see how restarting with a new language differs from killing Common
Lisp) and favored incremental refactoring and improvement of the
current language and specification. The most extreme position was the
one of Pascal Costanza: his proposal for improving Common Lisp was to
start a Web site (called CDR) collecting documents and specifications
to fill the gaps left in the Standard. I say that Pascal's position is
extreme because in his view the CDR project should just collect the
specifications but
the specs do not need to be accompanied by a concrete implementation;
there is no formal process to grant any particular status to a proposal,
i.e. there are no "approved" or "recommended" or "rejected" proposals;
any proposal just lies there and some specific Common Lisp implementation
might decide to implement it or might not.
In short, one sends a proposal to the CDR site and nothing happens.
That does not strike me as a very effective way to improve Common Lisp,
nor as the best road to a bright future. Pascal said that the CDR process
was inspired by the SRFI process of the Scheme community, but I see
it as different, since:

1. the SRFIs are accompanied by portable implementations, so you may
   directly use them as libraries, even if your chosen implementation
   does not come with the SRFI you want;
2. the idea is that the Scheme committee should look at the existing SRFIs
   and extract ideas from them to insert into the standard of the
   language;
3. that did not work at all in practice, since the R6RS editors disregarded
   existing good SRFIs and reinvented things from scratch in an inferior way :-(
In short, the Scheme community is not the right place to look when it
comes to mechanisms to improve the standard, and in any case the CDR
mechanism looks even worse than the SRFI mechanism.
There were people with sensible positions, like
Nikodemus Siivola, who basically just proposed to focus efforts on
writing more portable libraries: this is what the Alexandria project is
about. This is a start, but it looks very much insufficient to me.
All things considered, I came away from the conference with a pessimistic
impression of the future of Common Lisp. I hope I am wrong. At this
moment my advice to a young programmer wanting to start with
a Lisp-like language would be "forget about Common Lisp, look at
Clojure if you feel enterprise-oriented or at Scheme if you
feel research-oriented".
The reason nobody used Dylan is that Apple ditched it completely, Java had a lot of resources behind it, and Java didn't have all of the developer antipathy that it has now.
Scott McKay's comments are a bit bizarre too. If I were to guess, I would say that because of his Dylan background and because of Clojure eschewing OO, he would prefer a more OO lisp on the jvm/clr.
I don't care what CLers say about the issue though. S-expressions are a problem for adoption. It just is. You can say what you want about Clojure, but if it had a Python-like syntax its adoption would be orders of magnitude greater than it will ever get with s-expressions.
Your description of CDR is slightly wrong (but this is probably because I haven't described it well enough).
It is true that the CDR process does not require CDR documents to be accompanied by portable implementations, but it also does not forbid them. A couple of the already existing CDRs actually come with reference implementations, although not necessarily provided as part of the CDR document, but rather in the form of extensions of existing Common Lisp implementations.
It is important to understand the cultural background here a little bit, and how it differs from that of Scheme. If you take a look at the SRFIs, you see that many/most of them provide definitions and implementations for what is already provided as part of the ANSI Common Lisp standard. In other words, SRFI seems to play the role of doing the necessary groundwork on utility libraries that every Schemer/Lisper will want to use anyway. In contrast, CDR does not need to provide such fundamental libraries (or at least not to the same extent). Instead, it must be open towards more coarse-grained, larger extensions to Common Lisp as well.
For example, when designing the CDR process, one of the test cases was to allow the CLOS MOP specification to be part of CDR (it is not part of ANSI Common Lisp). If we had designed the process in such a way that reference implementations must be part of each CDR, it would have been almost impossible to do so, because a typical CLOS MOP implementation is both a rather big piece of code and needs to integrate with a Common Lisp implementation at some very low levels that are not specified in the ANSI standard, so a truly portable implementation is actually not possible.
So, yes, CDR is designed after SRFI, but in such a way that it allows for both large chunks of specifications and small bits and pieces at the same time. The requirements for submitting a CDR document are therefore intentionally set very low. We trust the community to use the process in the ways that suit each CDR submission best.
Yes, indeed, s-expressions are a problem for adoption. But for Lispers, that's a question of weighing a trade-off and making a decision where neither of the two options is optimal.
S-expressions have strong technical advantages (including being very appropriate for macros and other metaprogramming techniques, easier support for certain source code editing tasks, etc.). A programming language with wider adoption, on the other hand, may give you a larger pool of libraries and projects to choose from and contribute to. Some Lispers indeed make the decision to go for wider adoption, after seeing the pros and cons on both sides. However, a considerable number of Lispers value the technical advantages of a programmable syntax much higher than what any wide adoption could ever bring to your productivity. So they bite the sour apple and stick with the less popular language.
It's good that the world has room for both choices.
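To illustrate the metaprogramming point with a minimal Common Lisp
sketch (a generic textbook example, not code from the discussion):

    ;; Code is data: a macro receives the source form as a plain list
    ;; and can rebuild it before compilation, with no parser or AST API.
    (defmacro my-unless (test &body body)
      `(if (not ,test)
           (progn ,@body)))

    ;; The expansion is an ordinary list transformation:
    (macroexpand-1 '(my-unless (> x 10) (print "small")))
    ;; => (IF (NOT (> X 10)) (PROGN (PRINT "small")))

Because the source is already a list, macros like this need no
separate parsing machinery; that uniformity is exactly what the
s-expression syntax buys.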
> So, yes, CDR is designed after SRFI, but in such a way
> that it allows for both large chunks of specifications and
> small bits and pieces at the same time. The requirements
> for submitting a CDR document are therefore intentionally
> set very low. We trust the community to use the process in
> the ways that suit each CDR submission best.
>
> I hope this clarifies things a bit. ;)
Actually, I'm a little dismayed that my suggestions about Common Lisp came across as "let's do the same thing all over" or "let's start with Dylan and add back parentheses", because neither of those things is what I meant. And my quibbles with Clojure are that it still feels more like a test bed for things we don't understand yet; I could be wrong about Clojure, but that's still the sense I get from it.
I also didn't say "I don't like the Scheme community". I did say that, historically, the Scheme community has been about research and that the bias in the Scheme community has been about "purity of essence".
What I hoped people would hear is that we as a community need to turn our backs on the old language(s) we are attached to, and figure out what a new language would look like from scratch. We can then figure out how to quickly implement that based on things we understand. I think that we are badly underestimating our own abilities if we really truly think this is "too hard".
What do I think about today's Lisps?
- Common Lisp is too big and complicated; and despite its size, it is missing important things
- Dylan threw the baby out with the bathwater, losing important things like data-as-code, powerful macros, and the MOP
- Scheme isn't "objects all the way down", but it's no accident that my description of "Uncommon Lisp" feels a lot like Scheme crossed with Dylan
- Clojure still feels too much like a test-bed; if Clojure is not my starting point, it's partly because of this, and partly because I have not written any programs in it
All of these languages have many strong points and we should learn from each of them, but that does not mean that any one of them is particularly a good place to start from.
For what it's worth, I don't think "Uncommon Lisp" would kill Common Lisp any more effectively than Clojure will. Common Lisp is badly hobbled by having no implementations on JVM/CLR, and badly hobbled by not having a plausible concurrency model that supports modern processors. These two major missing things are more than enough to further marginalize the language, with or without Uncommon Lisp or Clojure or ...
> Actually, I'm a little dismayed that my suggestions about
> Common Lisp came across as "let's do the same thing all
> over" or "let's start with Dylan and add back
> parentheses", because neither of those things is what I
> meant. And my quibbles with Clojure are that it still
> feels more like a test bed for things we don't understand
> yet; I could be wrong about Clojure, but that's still the
> sense I get from it.
>
> I also didn't say "I don't like the Scheme community". I
> did say that, historically, the Scheme community has been
> about research and that the bias in the Scheme community
> has been about "purity of essence".
Sorry if I misrepresented your position and put words in your mouth that you did not say. Still, this is the impression I got from your talk; I assume others may have got a similar impression. There is a question I wanted to ask you at the conference: do you see the future of (Un)Common Lisp as a language controlled by a committee or as a language controlled by a BDFL, a la Clojure?
Scott, what about PLOT http://lambda-the-ultimate.org/node/3253 on the JVM/CLR? It supposedly solves the Macro problem and I believe it's objects all the way down?
Michele, I don't think you "misrepresented" me; I am just disappointed that I did not adequately convey exactly what I think. My bad, not yours.
As for "Uncommon Lisp", I see it as a collaborative effort controlled by a very small group of people who come from several different places. I don't think a one-person effort can work in a short period of time; but neither will a large group of people. I'm thinking 3 to 5 people is a good size.
Dick, I think Dave Moon would be (in fact, already is) the first person to say that PLOT is a hobby and that he doesn't have time to turn it into a full-strength implementation. His macro system is truly ingenious, but it's also a test-bed for some of his ideas. Yes, it's objects all the way, etc. Yes, there are things to be learned from it (e.g., the basic kind of thing is a class that can hold repeated slots). But again, I don't think it's "the" starting point.
> Common Lisp is badly hobbled by having no implementations
> on JVM/CLR, and badly hobbled by not having a plausible
> concurrency model that supports modern processors.
Armed Bear Common Lisp (ABCL) runs on the JVM and has been progressing rapidly lately. In fact, ABCL 0.15.0 was released just yesterday.