Summary:
At the JavaOne 2010 conference in San Francisco, Stephen Colebourne, member of technical staff at OpenGamma and project lead of the Joda Time open source API, gave a talk entitled "The Next Big JVM Language." In this interview, he reveals what he thinks the next big language should be.
Most recent reply: September 24, 2011 10:36 AM by
> > I still wonder how this enthusiasm comports with Amdahl's Law? Outside of servers, how many truly parallel/concurrent problems are there in client/standalone applications?
>
> Actually there are lots of parallel tasks in client or standalone applications.
>
> You need to keep your UI responsive, so that should have a dedicated thread. Longer running work items started from the UI should run in another thread. Network interaction/downloads should also be on background threads, as should interaction with local storage (which might very well be networked itself).
>
> So even there you get a lot of benefit from immutability and other functional properties to avoid one task corrupting another task's data.
>
> Communication between the threads might be realized through Software Transactional Memory, where updates may be run multiple times if a conflict occurs. That can only be done if your update functions are purely functional (no side-effects).
>
> I'm sure there are more benefits that could be listed.
>
> The problem here is that these things require a change in how developers think, and making that change is quite a bit harder than changing the software itself.
Another way to do concurrency/parallelism is to use the Active Object pattern. This pattern also fits well with object-oriented designs.
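For concreteness, here's a minimal sketch of the Active Object idea in Java (the class and method names are my own invention, not from any particular library): the object's state is confined to a single worker thread, and callers interact with it only through asynchronous requests that return Futures, so no lock is ever needed on the state itself.

```java
import java.util.concurrent.*;

// Minimal Active Object sketch: every public method turns the call into
// a task on the object's private worker thread, so the mutable state is
// only ever touched by that one thread.
class ActiveCounter {
    private final ExecutorService worker = Executors.newSingleThreadExecutor();
    private int count = 0; // accessed only on the worker thread

    // Asynchronous request: returns immediately with a Future.
    public Future<Integer> increment() {
        return worker.submit(() -> ++count);
    }

    public Future<Integer> get() {
        return worker.submit(() -> count);
    }

    public void shutdown() {
        worker.shutdown();
    }
}

public class Main {
    public static void main(String[] args) throws Exception {
        ActiveCounter c = new ActiveCounter();
        c.increment();
        c.increment();
        System.out.println(c.get().get()); // prints 2
        c.shutdown();
    }
}
```

Because the single-thread executor processes requests in submission order, callers get sequential-consistency-like behavior without touching a lock; the cost is that every interaction goes through a Future.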
> > I still wonder how this enthusiasm comports with Amdahl's Law? Outside of servers, how many truly parallel/concurrent problems are there in client/standalone applications?
>
> Actually there are lots of parallel tasks in client or standalone applications.
>
> You need to keep your UI responsive, so that should have a dedicated thread. Longer running work items started from the UI should run in another thread. Network interaction/downloads should also be on background threads, as should interaction with local storage (which might very well be networked itself).
In real life, I don't see that happening with applications. Asking for a download, OK, I'll buy that.
But having tasks which are run at the behest of the user, yet whose completion time the user is indifferent to, not so much. That indifference is key. With servers, each thread is run to satisfy (approximately) one user request, independent of other users, so there's no violation of Amdahl. With a true client/standalone application, either the user is working linearly (it's been found that humans are not all that adept at multi-tasking), in which case s/he'll want to know the result of step A before going on to step B, or s/he's running something like a VB/Access application: printing a G/L report while posting A/R. But that's just a server, albeit single-user.
> So even there you get a lot of benefit from immutability and other functional properties to avoid one task corrupting another task's data.
>
> Communication between the threads might be realized through Software Transactional Memory, where updates may be run multiple times if a conflict occurs. That can only be done if your update functions are purely functional (no side-effects).
>
> I'm sure there are more benefits that could be listed.
>
> The problem here is that these things require a change in how developers think, and making that change is quite a bit harder than changing the software itself.
I argue that it's much more about how humans think. Again, we don't really multi-task efficiently. Expecting that we'll all of a sudden multi-task while interacting with applications smacks of Commander Data. Highly parallel/concurrent machines have existed in recent memory (well, for some of us), for not obscene amounts of money, and they failed simply because no one could figure out problems for them to solve. The web doesn't really change that, and the web is the only meaningful change to the world since those machines.
Still seems to me to be a solution in search of a problem. The simple way to provide an answer is to sketch how Word would be made better if it were written in an FP language. Do that, and people will be convinced.
> The big interest in FP is to take advantage of multiple cores and processors.
Just to be clear: it's not an advantage. Stable or decreasing clock speeds (which haven't quite happened yet, really) on a core are seen as a problem for existing client/standalone apps; they'll just run slower than they used to. The Gates-ian motto "write for the next generation of processor, it'll be fast enough" no longer worked (well, that's been the assumption). It's a lemon that has to be turned into lemonade.
Much the same as the birth of the Wintel monopoly: M$ and Intel have been symbiotic (or parasitic, depending on one's point of view), M$ creating ever more bloated software to suck up the cycles that Intel was putting into each new processor. Linux made this quite clear, by not playing that game.
So, Intel/AMD/ARM/etc. need to create a class of applications which can suck up all those concurrent cycles; the old way doesn't work. One way, for commercial type applications of course, is to reduce the client machine to a pixelated VT-100 (which is happy with a 6502), and put those fabulous multi-core/processor machines to work as database servers. But that won't move enough units. Hmmm.
What we'll likely end up doing is ignore Amdahl, move to the FP of the month for a while until the "Java of FP" emerges, and slice up our inherently linear code (because it serves an inherently linear human brain: see the writings of Nick Carr, for example) into bits and pieces to fit the new hardware. The resulting applications won't be any better or faster, but they'll be shinier. At least to the coders, anyway.
> M$ creating ever more bloated software to suck up the cycles that Intel was putting into each new processor.
Do you seriously believe that Microsoft is intentionally bloating their software to force people to keep buying more powerful Intel chips?
-- Cédric
I don't seem to have many inherently linear tasks which are CPU bound and where the time taken on current processors is long enough to notice. All the slow CPU bound tasks I have are capable of at least some use of concurrency. Some may not scale well beyond perhaps 4 cores, while others easily scale to hundreds. The rest of my slow tasks are either disk or network bound.
> > M$ creating ever more bloated software to suck up the cycles that Intel was putting into each new processor.
>
> Do you seriously believe that Microsoft is intentionally bloating their software to force people to keep buying more powerful Intel chips?
They used to believe that most software was bought with new machines; software that would run on older hardware was more likely to be 'borrowed' than paid for. So designing for the next generation of hardware was an early, crude form of copy restriction. I heard this explained by a senior Microsoftee from the platform at a public conference (mid '90s).
> > M$ creating ever more bloated software to suck up the cycles that Intel was putting into each new processor.
>
> Do you seriously believe that Microsoft is intentionally bloating their software to force people to keep buying more powerful Intel chips?
>
> --
> Cédric
Umm. Yes. And I was hardly the first one to see that.
> Umm. Yes. And I was hardly the first one to see that.
You mean you saw the source code and you found sleep loops there or you just like to believe conspiracy theories?
-- Cédric
> > > M$ creating ever more bloated software to suck up the cycles that Intel was putting into each new processor.
> >
> > Do you seriously believe that Microsoft is intentionally bloating their software to force people to keep buying more powerful Intel chips?
> >
> > --
> > Cédric
>
> Umm. Yes. And I was hardly the first one to see that.
Seems to me that if MS was trying to make its products bloated, they would be a lot more bloated than they are. Bloat is one of those things you have to constantly fight and resist; it doesn't take effort to add.
"What I'm trying to say is that we should stick with C++, especially C++0x, and forget about the rest."

Interesting. I programmed a fair bit in C++ last century. Today I did my first serious C++ coding of this century, writing a class after years of Java. Wow. Suddenly you have to worry about .h files, #ifdef, #include for the header files, and declaring functions, in effect, in two different places (the header and the .cpp file). And if you mess up some closing semicolon or }, the preprocessor goes nuts and you get weird errors in some .h file (the next one) that you haven't even looked at. Not to mention deciding between foo.bar and foo->bar, or where to put the * in a declaration.

The only thing I liked was the int shortcut if (!error), which is handy in chains of function calls that return negative numbers on failure, e.g.:

    int error = call_1();
    if (!error)
        error = call_2();
    ...
    if (!error)
        error = call_N();
    return error;

Of course, functions returning funny ints on errors are non-ideal; an exception is better, as well as some form of finally... :-( Not to mention all the *real* faults of C++. Now, maybe C++0x fixes some of this; I don't know much about it.
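Since the poster contrasts the if (!error) chain with exceptions and finally, here is a hedged Java sketch of that alternative (the step functions are hypothetical stand-ins for call_1 .. call_N, not real APIs): the first failing step aborts the rest of the chain automatically, and the finally block runs the cleanup whether the chain succeeded or not.

```java
public class ErrorChain {
    // Hypothetical stand-ins for call_1 .. call_N: each throws on
    // failure instead of returning a negative int.
    static void step1() { /* succeeds */ }
    static void step2() { /* succeeds */ }
    static void step3() { throw new IllegalStateException("step3 failed"); }

    // Runs the chain; returns a small log so the control flow is visible.
    static String runChain() {
        StringBuilder log = new StringBuilder();
        try {
            step1();
            step2();
            step3();           // throws: later steps would never run
            log.append("ok");
        } catch (IllegalStateException e) {
            log.append("failed: ").append(e.getMessage());
        } finally {
            log.append(" [cleanup ran]"); // runs on success or failure
        }
        return log.toString();
    }

    public static void main(String[] args) {
        System.out.println(runChain()); // prints: failed: step3 failed [cleanup ran]
    }
}
```

The if (!error) boilerplate between every pair of calls simply disappears, and the cleanup cannot be skipped by an early failure.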
> Seems to me that if MS was trying to make its products bloated, they would be a lot more bloated than they are. Bloating is one of those things you have to constantly fight and resist. It doesn't take effort.
Their view was that fighting bloat wasn't money well spent. So it wasn't a matter of adding bloat deliberately, merely not bothering to fight the natural bloat.
> Their view was that fighting bloat wasn't money well spent. So it wasn't a matter of adding bloat deliberately, merely not bothering to fight the natural bloat.
My understanding is that this changed for Windows 7. After Vista, they realized they couldn't continue that way forever.
> > Seems to me that if MS was trying to make its products bloated, they would be a lot more bloated than they are. Bloating is one of those things you have to constantly fight and resist. It doesn't take effort.
>
> Their view was that fighting bloat wasn't money well spent. So it wasn't a matter of adding bloat deliberately, merely not bothering to fight the natural bloat.
I'll grant the distinction, but maintain that, from the Wintel point of view, it makes no difference.
Robert, maybe I missed it in my perusal, but the article you cite has zero information about MS deliberately bloating their SW to help Intel.
Please support your claim.