Summary:
At the JavaOne 2010 conference in San Francisco, Stephen Colebourne, member of technical staff at OpenGamma and project lead of the Joda Time open source API, gave a talk entitled "The Next Big JVM Language." In this interview, he reveals what he thinks the next big language should be.
Most recent reply: September 24, 2011 10:36 AM by
---
> > At that point you need to ask the question "do I want better developers or more developers?"
>
> I've worked in small, medium, large, and very large development organizations. In all cases, managers vote for more bodies.
It takes education to convince managers that greater productivity, quality and efficiency can be achieved through hiring a smaller number of strong developers. It's not an overnight task, but it is doable. We've had great success in this at my current company.
---
"What I'm trying to say is that we should stick with C++, especially C++0x, and forget about the rest."
C++ is one of the worst languages ever designed. I'll pass. Java has plenty of flaws to be sure but C++ is a flat out abomination. In fact, I can't think of any language of note that got OO worse than C++. It's that bad. It's practically designed to encourage horrible coding practices.
---
> > > At that point you need to ask the question "do I want better developers or more developers?"
> >
> > I've worked in small, medium, large, and very large development organizations. In all cases, managers vote for more bodies.
>
> It takes education to convince managers that greater productivity, quality and efficiency can be achieved through hiring a smaller number of strong developers. It's not an overnight task, but it is doable. We've had great success in this at my current company.
You're very lucky; bureaucracy sets in quickly. Is your current company hiring? :)
---
> It takes education to convince managers that greater productivity, quality and efficiency can be achieved through hiring a smaller number of strong developers. It's not an overnight task, but it is doable. We've had great success in this at my current company.
Are there enough strong developers to go around? Raise the requirement a bit further to very strong and I think the answer is almost certainly no. So perhaps there is a real business case for tools and systems which can be used effectively by mere average developers.
---
> You're very lucky; bureaucracy sets in quickly. Is your current company hiring? :)

Always :). http://www.overstock.com/careers
---
> > You're very lucky; bureaucracy sets in quickly. Is your current company hiring? :)
>
> Always :). http://www.overstock.com/careers

Touché.
---
> Are there enough strong developers to go around? Raise the requirement a bit further to very strong and I think the answer is almost certainly no.
There's definitely not an overabundance. This might not be a bad thing, though. It keeps us from growing too quickly, and it gives the company an additional incentive to pay close attention to the quality of life for our staff to ensure that we don't lose the developers we already have :).
> So perhaps there is a real business case for tools and systems which can be used effectively by mere average developers.
I think there's always a strong business case to be made for better tools, even when talking about very strong developers. The IDE is one such tool, but so is the language you program in. One thing I like about Scala is some of the things it does (such as case classes) to reduce reliance on tools in the first place.
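To make the case-class point concrete: a one-line Scala `case class Point(x: Int, y: Int)` generates the accessors, `equals`, `hashCode`, and `toString` that otherwise have to be hand-written (or IDE-generated). A rough Java sketch of that generated boilerplate, for comparison:

```java
// Roughly what Scala derives from: case class Point(x: Int, y: Int)
public final class Point {
    private final int x;
    private final int y;

    public Point(int x, int y) { this.x = x; this.y = y; }

    public int x() { return x; }
    public int y() { return y; }

    @Override public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Point)) return false;
        Point p = (Point) o;
        return x == p.x && y == p.y;
    }

    @Override public int hashCode() { return 31 * x + y; }

    @Override public String toString() { return "Point(" + x + "," + y + ")"; }
}
```

When the language writes this for you, there is nothing for a tool to get out of sync with.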
---
> Are there enough strong developers to go around? Raise the requirement a bit further to very strong and I think the answer is almost certainly no. So perhaps there is a real business case for tools and systems which can be used effectively by mere average developers.
I think a static view of developers is incorrect. The mistake I see is that management will take your logic above and decide they only want average developers. This means there is no room for growth. The better developers will have lots of reasons to leave as soon as they have passed the average mark (e.g. money and job satisfaction). This tends to leave you with people who don't really know what they are doing or care to learn, i.e. sub-standard developers. The usual symptoms of this are hiring expensive senior technical people to micromanage the developers. In other words, you end up with the same strong developers you decided you didn't want, but force them to develop in Word. The upshot is higher cost, lower productivity, and lower quality.
I've also seen a lot of confusion about tools that increase productivity and tools that make things easier. These are two different concepts, and while they are often at odds with each other, they are frequently assumed to go together. It's strange that this is the case, because we don't think this way about physical tools. You don't purchase a backhoe because you think your employees are too dumb to use shovels.
This isn't to say that there are not tools (or processes) that both increase productivity and simplify work. These kinds of tools are what cause revolutionary change. But these tend to be the exception, not the rule. As a rule of thumb, if you want to maximize productivity, you will usually have to give up on some simplicity.
---
> This isn't to say that there are not tools (or processes) that both increase productivity and simplify work. These kinds of tools are what cause revolutionary change. But these tend to be the exception, not the rule. As a rule of thumb, if you want to maximize productivity, you will usually have to give up on some simplicity.
I argue that the Great Leap Forward is a paradigm shift. Some, not all, think that OO was such. Others, not so much; particularly if one looks underneath the covers of major java web frameworks. Not much OO there, in the classic sense of data and methods seamlessly embroidered together. See: Graham.
Two shifts: the graphics rendering engine and the relational database. No game developer would do without an engine, yet many coders ignore the RDBMS in favor of continuing to write file-based approaches. The difference: the rendering engine doesn't replace coding with something else (rather it swaps low-level syntax with something "higher"), while the RDBMS does.
It can also be argued (I sure do) that there was such a revolution in the 80's-90's, and it had a name: 4GL languages (yeah, I know what L stands for). Didn't work out so well. All but a couple (if that many) went away. Coders decided they didn't want to be constrained to the semantic choices of the 4GL authors, by and large.
Just look at java web development. I wrote servlets as shown in Hunter's first edition, as did many others. No one does that any longer. Are the web apps any better? Who knows? They aren't written in java, they're written in "Struts" or whatever. You're beholden to the choices made by them. Same for RoR. Same for web frameworks in all other languages. The number of them grows like bedbugs, because some group doesn't like the choices made.
These are all code generators, to a first approximation. That level of tool goes back to the 1960's; COBOL (already assumed to be very high level language) code generation was au courant for a long time; still is some places. There's always more boilerplate than meat, it seems.
Shouldn't we all be writing assembler or C (the portable assembler)? Where does boilerplate stop and meat start?
Yet, the suggestion that code can and should be generated from the data it's supposed to interact with is still heretical (well, unless the input to the generator is some mammoth xml file :) ) most places. This still baffles me. In the end, it's the data which is important to the (human) client, not the code. Moreover, the data will outlast the code; moving mainframe applications (in COBOL or PL/1 mostly) to java on "servers" is a thriving sub-industry. If these applications had been built from the start on a standardized (sql) database engine, swapping the client code from ABC to XYZ is a matter of running the catalog through a different generator.
If the goal of "increasing productivity" is eliminating "busy work", "boilerplate code", and such, then why not embrace UML and build our systems from diagrams? And so on. The line between increasing productivity of all developers and job killing for me and mine is ever present. If I had a nickel for every time I argued with my java brethren over whether to put a constraint once in the catalog, or strewn throughout their source files, I'd be rich.
And if I'd won, the codebase would be a fraction the size, and agnostic about the language/style/framework/team which built the client. Oops.
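The catalog-versus-client argument above can be sketched concretely. This is a hypothetical example (table and rule invented for illustration): a "quantity must be positive" rule declared once as a CHECK constraint in the database catalog, contrasted with the java-side re-implementation that each client team ends up writing.

```java
// Hypothetical sketch: one business rule, two possible homes.
public class ConstraintPlacement {
    // Declared once in the catalog, the rule covers every client,
    // regardless of the language or framework that generated it:
    static final String DDL =
        "CREATE TABLE order_line ("
      + "  item_id  INTEGER NOT NULL,"
      + "  quantity INTEGER NOT NULL CHECK (quantity > 0)"
      + ")";

    // The alternative: the same rule strewn through client source files,
    // re-implemented (and re-broken) by each team and framework.
    static void validateQuantity(int quantity) {
        if (quantity <= 0) {
            throw new IllegalArgumentException("quantity must be positive");
        }
    }
}
```

Swap the client code from ABC to XYZ and the catalog version still holds; the java version has to be found and ported.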
---
I commented about this over at http://www.artima.com/weblogs/viewpost.jsp?thread=306337. I think the "beauty" or "efficiency" of a language, while important (and fun to debate here), will not be the key for the next big language, especially a functional style grafted onto Java or the JVM, as Scala (and others) attempt. The big interest in FP is to take advantage of multiple cores and processors. And the big bang for the buck in cores/$ is in GPU computing. The language that not only adds those neat closures academics love, but also says "I'll take your closure over a Collection and run it on thousands of threads and processors on that $100 GPU card, and I'll even do most of the ugly legwork setting it up" will be the one to beat.
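For the CPU-side half of that "closure over a Collection" idea, Java 8's parallel streams (which arrived after this thread) are a rough sketch of the shape: you hand the runtime a closure and it spreads the work across cores, with the "ugly legwork" hidden. GPU offload is a separate problem and is not shown here.

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class ParallelClosure {
    public static void main(String[] args) {
        // The closure is x -> x * x; the runtime decides how to
        // partition the range across the available cores.
        List<Integer> squares = IntStream.rangeClosed(1, 5)
                .parallel()           // run across available cores
                .map(x -> x * x)      // the closure applied to each element
                .boxed()
                .collect(Collectors.toList());
        System.out.println(squares);  // encounter order is preserved: [1, 4, 9, 16, 25]
    }
}
```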
---
> The big interest in FP is to take advantage of multiple cores and processors. And the big bang for the buck in cores/$ is in GPU computing.
I still wonder how this enthusiasm comports with Amdahl's Law? Outside of servers, how many truly parallel/concurrent problems are there in client/standalone applications? Or will we see applications be invented to fit the multi-core machine, in other words, we just write server code everywhere?
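The Amdahl's Law worry above can be made concrete with the standard formula: with parallel fraction p and n processors, speedup = 1 / ((1 - p) + p / n). The numbers here are illustrative, not from the thread.

```java
public class Amdahl {
    // Amdahl's Law: with parallel fraction p and n processors,
    // speedup = 1 / ((1 - p) + p / n)
    static double speedup(double p, int n) {
        return 1.0 / ((1.0 - p) + p / n);
    }

    public static void main(String[] args) {
        // Even with 95% of the work parallelizable, 1000 cores give
        // less than a 20x speedup -- the serial 5% dominates, since
        // the limit as n grows is 1 / (1 - p) = 20.
        System.out.printf("%.2f%n", speedup(0.95, 1000));
    }
}
```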
---
> The big interest in FP is to take advantage of multiple cores and processors.
I keep seeing this claim left and right but I have yet to see some concrete evidence.
Has anyone ever attempted to compare a program written in traditional OO and in FP style, then launch it on several cores/machines and compare the results?
Immutable collections are usually slower than mutable ones so I'm really beginning to wonder if the claim that FP improves performance in the presence of multiple cores has any substance.
---
> Immutable collections are usually slower than mutable ones

Unless you have to keep lots of near identical copies. This is quite common in my work (combinatorial optimization).
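The "near identical copies" case rests on structural sharing. A minimal sketch (a toy persistent list, not any particular library): prepending allocates one new head node and shares the entire existing tail, so keeping many versions costs O(1) extra memory per version rather than a full copy each.

```java
// Toy persistent (immutable) singly linked list.
public class PList {
    final int head;
    final PList tail;   // null marks the end of the list

    PList(int head, PList tail) { this.head = head; this.tail = tail; }

    // Prepending never copies: the new node points at the old list.
    static PList cons(int head, PList tail) { return new PList(head, tail); }

    int size() { return tail == null ? 1 : 1 + tail.size(); }
}
```

Two "copies" built on the same base share all but their first node:

```java
PList shared = PList.cons(2, PList.cons(3, null));
PList a = PList.cons(1, shared);
PList b = PList.cons(9, shared); // a and b share 'shared' -- no copying
```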
---
> > The big interest in FP is to take advantage of multiple cores and processors. And the big bang for the buck in cores/$ is in GPU computing.
>
> I still wonder how this enthusiasm comports with Amdahl's Law? Outside of servers, how many truly parallel/concurrent problems are there in client/standalone applications?
My day job is vehicle routing/combinatorial optimization, and one of my hobbies is photography. Both involve some embarrassingly parallel problems, and others which scale to at least modest numbers of processors. Currently the heavy lifting in both cases is done on the client machine, although that might change.
---
> I still wonder how this enthusiasm comports with Amdahl's Law? Outside of servers, how many truly parallel/concurrent problems are there in client/standalone applications?
Actually there are lots of parallel tasks in client or standalone applications.
You need to keep your UI responsive, so that should have a dedicated thread. Longer-running work items started from the UI should run in another thread. Network interaction and downloads should also be on a background thread, as should interaction with local storage (which might very well be networked itself).
So even there you get a lot of benefit from immutability and other functional properties to avoid one task corrupting another task's data.
Communication between the threads might be realized through Software Transactional Memory, where updates may be run multiple times if a conflict occurs. That can only be done if your update functions are purely functional (no side effects).
I'm sure there are more benefits that could be listed.
The problem here is that these things require a change in how developers think, and making that change is quite a bit harder than changing software itself.
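The "updates may be run multiple times" point has a plain `java.util.concurrent` analogue (this is a compare-and-set retry loop, not real STM): `AtomicReference.updateAndGet` may invoke its lambda several times if another thread wins the race, which is exactly why the function must be side-effect free.

```java
import java.util.concurrent.atomic.AtomicReference;

public class RetryingUpdate {
    public static void main(String[] args) throws InterruptedException {
        AtomicReference<Integer> counter = new AtomicReference<>(0);

        // The lambda n -> n + 1 may be retried on contention, so it
        // must be pure -- the same rule STM imposes on transactions.
        Runnable increment = () -> counter.updateAndGet(n -> n + 1);

        Thread t1 = new Thread(() -> { for (int i = 0; i < 1000; i++) increment.run(); });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 1000; i++) increment.run(); });
        t1.start(); t2.start();
        t1.join(); t2.join();

        // No updates are lost despite the race: 2000.
        System.out.println(counter.get());
    }
}
```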
---