I had no intention of committing to the time, money, and travel necessary to meet three times a year, for a week each time, but the organizational meeting was in Washington, DC, and I was living in Pennsylvania at the time, so I thought, "what the heck, I'll drive a few hours and check it out."
What I found at the C++ committee meeting was most of the smartest people in the C++ community, gathered together in one place and available to answer my questions. I quickly realized this was far better than anything I could find in any graduate school program. And if you factor in the opportunity costs of graduate school, it was a far better deal financially as well.
I was hooked, and kept attending for about eight years. The committee continued after I wandered away; the standard hadn't been completed yet, but Java had appeared by that time and some others were also drifting off (it's the problem with being a technological stimulus junkie -- I do delve deep, but I'm always looking for more productivity leverage, so it's not too hard to distract me with promising language features).
Each time we met, I would show up with a list of the thorniest C++ questions that had accumulated for me in the interim, and I would usually have them all answered within a couple of days. That, along with early exposure to upcoming features, was the most valuable short-term benefit of being on the committee.
In the long term, watching the addition of language features into the C++ puzzle was deep learning. It's easy now to Monday-morning quarterback and say that C++ sucks and was badly designed, and many people do so without any understanding of the constraints under which it was designed. Whether or not it was entirely legitimate, Stroustrup's constraint was that a C program should compile under C++ with either trivial changes or (preferably) no changes at all. This provided an easy evolution path for C programmers, but it was a big limitation, and it accounts for virtually every difficult feature that people complain about. But because those features are hard to understand, many people jump to the conclusion that C++ was badly designed, which is far from the truth.
Java fed this perception with its cavalier attitude toward language design. I've written about this in Thinking in Java and in many weblogs, so longtime followers already know that Java rubbed me the wrong way from the start, because of the dismissive attitude of Gosling and the language designers. To be honest, my first "encounter" with Gosling left a bad taste in my mouth. It was many years before, when I first began using Unix at one of the first companies I worked for (Fluke, which makes electronic test equipment; I was doing embedded systems programming). One of the other software engineers was coaching me and had guided me towards Emacs, but the only version available in the company at that time was the commercial (Unipress) version of Gosling Emacs. And if you did something wrong, the program would insult you by calling you a turkey and filling the screen with garbage -- this, in a commercial product for which the company had paid a fair amount of cash. Needless to say, as soon as GNU Emacs became stable the company switched over to it (I've met Richard Stallman. He's crazy, sure. But he's wicked smart, and smart enough to know that when you are in trouble, you need help, lots of it, and not insults).
I have no idea how much this formative experience with Gosling influenced my later feelings about his work, but the fact that the attitude toward C++ was "we looked at it and it sucked, so we decided to whip out a language of our own" didn't help. Especially when I began to tease Java apart in the process of writing Thinking in Java and discovered, time after time, features and libraries where the decisions were slapdash -- indeed, most of these had to be repaired later, sometimes after years of programmer suffering. And on numerous occasions Gosling admitted that they had cut corners to get it out quickly, or else the internet revolution would have passed them by.
So the reason I'm giving this keynote is that I find it very helpful to understand why language features exist. If they're just handed to you on a platter by a college professor, you tend to develop a mythology around the language and say, "there's some really important reason for this language feature that the smart people who created the language understand and I don't, so I'll just take it on faith." At some point, faith-based acceptance of language features becomes a liability; it prevents you from being able to analyze and understand what's going on. In this keynote, I look at a number of features and examine how they are implemented in different languages, and why.
Here's an example: object creation. In C, you declare variables and the compiler creates stack space for you (uninitialized, so it contains garbage until you assign something to it). But if you want to allocate memory dynamically, you must use the malloc() and free() standard library functions, and carefully perform all the initialization and cleanup by hand. If you forget, you get memory leaks and similar disasters, which happened frequently.
Because malloc() and free() were "only" library functions -- and confusing and scary ones at that -- they often didn't get taught in basic programming classes as they should have been. And when programmers needed to allocate lots of memory, instead of learning about and dealing with these functions they would often (I kid you not) just declare huge global arrays, bigger than they ever thought they'd need. The program seemed to work, and surely no one would ever exceed those bounds -- so when it did happen, years later, the program would break and some poor sod would have to go in and puzzle it out.
Stroustrup decided that dynamic allocation needed to be easier and safer -- it needed to be brought into the core of the language and not relegated to library functions. And it needed to be coupled with the same guaranteed initialization and cleanup that constructors and destructors provide for all objects.
The problem was the same millstone that dogged all C++ decisions: backward compatibility with C. Ideally, stack allocation of objects could simply have been discarded. But C compatibility required stack allocation, so there needed to be some way to distinguish heap objects from stack objects. To solve this problem, the new keyword was appropriated from Smalltalk. To create a stack object, you simply declare it, as in Cat x; or, with arguments, Cat x("mittens");. To create a heap object, you use new, as in new Cat; or new Cat("mittens");. Given the constraints, this is an elegant and consistent solution.
Enter Java, after deciding that everything about C++ was badly done and overly complex. The irony here is that Java could have thrown away stack allocation, and did (pointedly ignoring the debacle of primitives, which I've addressed elsewhere). And since all objects are allocated on the heap, there's no need to distinguish between stack and heap allocation: they could easily have said Cat x = Cat() or Cat x = Cat("mittens"). Better yet, they could have incorporated type inference to eliminate the repetition -- but that, and other features like closures, would have taken "too long," so we are stuck with the mediocre version of Java instead. Type inference has been discussed, but I will lay odds it won't happen. And shouldn't, given the problems in adding new features to Java.
Guido van Rossum (creator of Python) took a minimalist approach -- the oft-lambasted use of whitespace is an example of how clean he wanted the language. Since the new keyword wasn't necessary, he left it out, so you say x = Cat("mittens"). Ruby could have used this approach too, but one of Ruby's main constraints is that it follows Smalltalk as much as possible, so in Ruby you say x = Cat.new("mittens") (here's a nice introduction to Ruby). Java, however, made a point of dissing the C++ way of doing things, so the inclusion of the new keyword is a mystery. My guess, after studying the decisions made in the rest of the language, is that it simply never occurred to them that they could get away without it.
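As a minimal sketch (Cat here is just an illustrative class, not from any library), the entire mechanism in Python looks like this:

    class Cat:
        def __init__(self, name):
            # __init__ runs automatically once the object exists.
            self.name = name

    x = Cat("mittens")  # no 'new': calling the class creates the object

The class itself is callable, so creation and initialization happen in one uniform step, with nothing extra to remember.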
So that's what I mean about language archaeology. I have a list of other features for similar analysis during the keynote. I hope that people will come away with a better perspective on language design, and a more critical thought process when they are learning programming languages.
Ordinary programs manipulate data, and most of the time, ordinary programming is all you need. But sometimes you find yourself writing the same kind of code over and over, violating the most fundamental principle in programming: DRY ("Don't Repeat Yourself"). And yet there doesn't seem to be any way in your programming language to fix the problem directly, so you are left duplicating code, knowing that the duplication is scattered throughout your project, and that if you need to change anything you're going to have to find every copy, fix it, and test it all over again. On top of that, it's just plain inelegant.
This is where metaprogramming comes in. Metaprograms manipulate programs. Metaprogramming is code that modifies other code. So when you find yourself duplicating code, you can write a metaprogram and apply that instead, and you're back to following DRY.
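Since the presentation is for EuroPython, here's a minimal Python sketch of the idea (the timed decorator and crunch function are hypothetical, purely for illustration): instead of pasting the same timing boilerplate into every function, you write the metaprogram once and apply it wherever it's needed.

    import functools
    import time

    def timed(func):
        # A metaprogram: it takes a function and returns a replacement
        # function with extra behavior wrapped around the original.
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            start = time.time()
            result = func(*args, **kwargs)
            print("%s took %.4f seconds" % (func.__name__, time.time() - start))
            return result
        return wrapper

    @timed  # one line replaces the timing code we'd otherwise duplicate
    def crunch(n):
        return sum(i * i for i in range(n))

The duplication now lives in exactly one place, and changing it means changing one function.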
Because metaprogramming is so clever and feels so powerful, it's easy to look for applications everywhere. It's worth repeating that most of the time, you don't need it. But when you do, it's amazingly useful.
This is why it keeps springing up, even in languages that actively resist code modification. In C++, which has no run-time model (everything compiles to raw code -- another legacy of C compatibility), template metaprogramming appeared. Because the language fights it every step of the way, template metaprogramming ended up very complex, and almost no one figured out how to do it.
Java does have a runtime model, and even a way to perform dynamic modifications on code, but the language's static type checking is so onerous that raw metaprogramming was almost as hard to do as in C++ (I show these techniques in Thinking in Java). Clearly the need kept reasserting itself, because first we saw Aspect-Oriented Programming (AOP), which turned out to be a bit too complex for most programmers. Somehow (and I'm kind of amazed that such a useful feature got through so late in Java's evolution) annotations got added, and these are really a metaprogramming construct -- annotations can even be used to modify code during compilation, much like C++ template metaprogramming (this form is more complex, but you don't need it as often). So far, annotations seem to be fairly successful, and have effectively replaced AOP.
Ruby takes a different approach, appropriate to Ruby's philosophy. Like Smalltalk, everything in Ruby is fungible (this is the very thing that makes dynamic languages so scary to folks who are used to static languages). I have no depth in Ruby metaprogramming, but as I understand it, it is just part of the language, with no extra constructs required. In combination with the optional-parameter syntax, it makes Ruby very attractive for creating Domain Specific Languages (DSLs), an example of which is Ruby on Rails.
Python also felt the pressure to support metaprogramming, which began to appear in the 2.x versions of the language in the form of metaclasses. Metaclasses turned out to be (like AOP in Java) too much of a mental shift for mainstream Python programmers, so in more recent versions of Python, decorators were added. These are similar in syntax to Java annotations, but more powerful in use, primarily because Python is as fungible as Ruby. What's interesting is that the explicit step of applying the decorator turns out to be not the intrusion it might initially appear, but a beneficial annotation that makes the code more understandable. Indeed, one of the main problems with metaclasses (besides the complexity) was that you couldn't easily see what the code did just by looking at it; you had to know that the magic had been folded in by the metaclass. With decorators, it is clear that metaprogramming actions are being applied to a function or class.
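Here's a small sketch of that visibility difference (registry and register are hypothetical names): a class decorator announces the metaprogramming right at the definition site, whereas a metaclass doing the same registration would leave the class definition looking completely ordinary.

    registry = {}

    def register(cls):
        # Class decorator: record the class in the registry,
        # then return it unchanged.
        registry[cls.__name__] = cls
        return cls

    @register  # the metaprogramming step is visible right here
    class Cat:
        pass

    assert registry["Cat"] is Cat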
Although we haven't seen an explosion of DSL creation in Python as we have in Ruby (it's certainly possible to create DSLs in Python), decorators provide an interesting alternative: instead of creating an entirely new language syntax, you can create a decorated version of Python. Although this might not be as tightly targeted to domain users as a DSL, it has the benefit of being more easily understandable to someone who already knows Python (rather than requiring a new syntax to be learned for each DSL).
Python decorators (both function and class decorators) have almost completely replaced the need for metaclasses, but there is one obscure situation where metaclasses are still necessary: when you must perform special activities as part of object creation, before initialization occurs. Thankfully, this tends to be a rather small edge case, so most people can get by using decorators for metaprogramming.
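Here's a hedged sketch of that edge case (PreInit and Cat are hypothetical names, and I'm using the Python 3 metaclass syntax): a metaclass can override __call__ to act after the object is created but before __init__ runs, which no decorator can intercept.

    class PreInit(type):
        # The metaclass's __call__ runs when the class itself is called.
        def __call__(cls, *args, **kwargs):
            obj = cls.__new__(cls)  # the object exists here...
            obj.tagged = True       # ...but __init__ hasn't run yet
            obj.__init__(*args, **kwargs)
            return obj

    class Cat(metaclass=PreInit):
        def __init__(self, name):
            self.name = name

    c = Cat("mittens")
    assert c.tagged and c.name == "mittens"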
In the EuroPython presentation, I will be introducing metaprogramming with decorators, as well as demonstrating the special case where metaclasses are still necessary.