Summary
Almost every program we write today will execute in a concurrent computing environment. But to what degree do developers really have to be aware that their programs run on concurrent hardware?
In a recent pair of SD Times articles, Alan Zeichick and Larry O'Brien explore two sides of a "threading" maturity model. Zeichick (Presenting the Threading Maturity Model) takes an organizational approach, while O'Brien (Following the Maturity Model Thread) advocates the issue from an individual developer's vantage point. Their different approaches notwithstanding, the articles share a common theme: a call for a more principled approach to dealing with the coming age of concurrency, allegedly a sea change as significant as the advent of object-orientation was two decades ago.
Zeichick and O'Brien each evoke the Capability Maturity Model (CMM) developed at Carnegie Mellon's Software Engineering Institute in the mid-1980s to chart a path for organizations toward the best software development practices. Starting from a tabula rasa, a thread maturity model assumes an increasing awareness of concurrency on the part of a development organization and the individual developer, respectively. The levels progress from a merely cursory awareness of concurrency issues to the fullest maturity level, at which developers are capable of producing code that takes advantage of the full spectrum of cores available on modern CPUs.
Knowing as much as possible about threading and concurrent programming cannot harm a developer or a project team. As I write this blog post on hardware with a multi-core CPU, the arrival of concurrent computing environments in every area of software can hardly be denied. But Zeichick and O'Brien's articles raise an interesting question: To what degree do developers really have to be aware that their programs run on concurrent hardware?
Enterprise developers have been writing for concurrent environments all their professional lives—few application servers limit execution to a single thread when handling incoming requests. Similarly, a developer writing code with a database back-end is working in a highly concurrent environment, too: database servers were the first class of software to explicitly allow multiple concurrent requests (indeed, much of the concurrent programming field, such as the theory of locking, is rooted in the work of early database researchers).
Yet most enterprise developers would place themselves in the initial stage of a threading maturity model à la Zeichick and O'Brien: they are aware that threads exist, know that threads are important to learn about and can cause trouble if used improperly, but don't think of themselves as concurrency experts.
Nor do they have to: most developers don't need to program explicitly with concurrency in mind to benefit from highly concurrent environments. Concurrency in lower-level software, such as database servers and application servers, has allowed developers to continue writing essentially serial software, such as Web application controllers, many "instances" (or threads) of which are then executed in parallel by the server framework.
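As a rough sketch of that division of labor, consider a purely sequential request handler handed to a thread pool that stands in for the server framework. (The class and method names here are just illustrative, not any particular server's API.)

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SppsSketch {
    // The developer writes this: a purely sequential "controller."
    // No threads, no locks, no synchronization anywhere in sight.
    static String handle(int requestId) {
        return "response-" + requestId;
    }

    public static void main(String[] args) throws Exception {
        // The "parallel subsystem": a thread pool that runs many
        // copies of the serial handler concurrently on the
        // developer's behalf, much as an app server does.
        ExecutorService server = Executors.newFixedThreadPool(4);
        List<Future<String>> results = new ArrayList<Future<String>>();
        for (int i = 0; i < 8; i++) {
            final int id = i;
            results.add(server.submit(new Callable<String>() {
                public String call() {
                    return handle(id);
                }
            }));
        }
        // Collect results in submission order.
        for (Future<String> f : results) {
            System.out.println(f.get());
        }
        server.shutdown();
    }
}
```

The handler itself never mentions concurrency; all parallelism lives in the subsystem that invokes it.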
Serial Program, Parallel Subsystem
In his ground-breaking work on cluster computing, In Search of Clusters, IBM Distinguished Engineer Greg Pfister called that sort of parallelism Serial Program, Parallel Subsystem, or SPPS, parallelism. SPPS parallelism allows a developer to feed a serial program—a Web controller, a database query, or a data mining algorithm implementation—to a parallel subsystem—such as a Web application server, a database server, or a massively parallel supercomputer—and that parallel subsystem will ensure the maximum concurrency for the serial program.
Pfister, who had worked on huge clusters prior to writing his book, claimed that the vast majority of massively parallel computation was performed in the SPPS manner. One reason for that is the great practicality of SPPS parallelism: the parallel subsystem is likely much smarter about concurrency than most developers would be, and is thus able to take better advantage of available resources. Equally important, developers can keep writing the simplest sequential program that gets the job done, and delegate parallelism to a specialized component.
In the SPPS world, developers don't need much of a threading maturity model. A general awareness of concurrency suffices, along with trust in the underlying parallel subsystem. Because that subsystem is most likely a black box accessed only through a well-defined interface (few developers would want to hack their database's source code, if that code is available at all), there is no choice but to trust that the parallel subsystem does the right thing about concurrency.
Contrast that with systems that require explicit awareness of concurrency. My favorite example is the Swing threading API, something O'Brien alludes to in his article: even in the simplest application, you need to be keenly aware of what thread your code executes on, and mistakes lead to amateurish application errors. Yet even seasoned Swing developers don't always do the right thing. How many developers know, for instance, that they should not create and show UI components in a main() method? Instead, Swing wants all GUI updates scheduled on the event-dispatch thread: even as simple an operation as textField.setText() must explicitly be pushed onto the event queue.
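Concretely, the rule means even a trivial update must be wrapped in a Runnable and handed to SwingUtilities. A minimal sketch (using invokeAndWait rather than invokeLater so that the effect is observable immediately afterward):

```java
import javax.swing.JTextField;
import javax.swing.SwingUtilities;

public class EdtDemo {
    public static void main(String[] args) throws Exception {
        final JTextField textField = new JTextField();

        // Wrong: textField.setText("hello") called directly here
        // would mutate the component from the main thread.

        // Right: schedule the update on the event-dispatch thread.
        SwingUtilities.invokeAndWait(new Runnable() {
            public void run() {
                textField.setText("hello");
            }
        });

        System.out.println(textField.getText());
    }
}
```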
What do we get in return for that tedium? Compare with Ajax applications, which for the most part execute in a single thread and don't seem to require developers to know much about threading: you register listeners, while the browser acts as the parallel subsystem that dispatches requests and notifies the sequential Ajax application of the results. That deceptively simple threading model has nevertheless allowed some fairly sophisticated Ajax applications with excellent usability. Flex (and Flash) similarly follows the SPPS model by letting a sequential program register event handlers. Judging from the vast array of Ajax and Flex/Flash applications, that SPPS model supports highly usable applications.
Master Threading or Delegate It?
Contrasting Swing's parallelism with, say, Ajax and Flex, may not be fair, since Swing exposes the full power of the JVM to a developer. But for the vast majority of applications, wouldn't an Ajax-style SPPS model be more convenient? More generally, instead of developers striving for a high level of threading maturity, should we strive for more SPPS-style concurrency?
To be sure, the latest JDK concurrency features already point in the direction of delegating concurrency to an executor framework. But mastering the concurrency APIs is not the same as fully understanding how to design and architect highly concurrent applications. Concurrency left to specialists in an application domain, such as databases, application servers, or UI toolkits, is likely a better path to benefiting from the abundance of concurrent hardware than what developers building higher-level applications could achieve on their own. Instead of pursuing a threading maturity model, wouldn't enterprise developers be more effective relying on such parallel subsystems and continuing to write essentially sequential programs?
Zeichick and O'Brien seem to think that a high level of threading wisdom is the way to concurrency bliss. Do you agree with their thesis? Or do you think that delegating concurrency to increasingly sophisticated parallel subsystems, while allowing developers to stay with mainly sequential programs, is the way of the future?
I think you are broadly right that SPPS is generally desirable and has been employed for a long time. However, there's more to enterprise applications than greenfield database apps with web frontends. Much of what enterprise developers do falls in the broad category of information integration or application integration.
When you connect different systems and data sources (ERP systems, content management systems, XML databases, web services, various RDBMSs, file systems, messaging systems, long-running batch processes and their output, and so on), you get not one but many SPPS models, which basically amounts to no SPPS at all.
Now you could still say: well, start up one thread or process, make a connection to all these systems, do what you need to do, and then close everything down again. Unfortunately, this causes huge performance and resource-usage issues. It is compatible only with batch processing. You cannot do anything real-time with reasonable response times, you cannot do anything event-driven, and, above all, it doesn't help you ensure consistency of numbers across data sources.
Of course there have been many approaches to tackle integration issues with their own frameworks and SPPS models. Yet this field is fragmented and there are lots of proprietary components involved.
Much more could be said, but my conclusion is that this area is quite varied in terms of concurrency models and anyone designing a system like that must have deep and broad knowledge of concurrency.
Making each object a thread with a job queue and posting computation jobs to that queue will open the door to massively parallel programming, as well as eliminate deadlocks and the need for the 'synchronized' keyword.
Achilleas, with my limited object-oriented experience that's the model that makes the most sense to me, i.e. Erlang/Scala/actors/message passing. But does that mean it's the one that will dominate? This is a big debate at the moment, with transactional memory and functional languages/monads being other contenders. Maybe SPPS will take care of a lot of things, i.e. just leave all the parallelism to the database or app server. But the argument is that extra speed will come in the future from adding more cores, so in theory a high proportion of developers will have to take this into account if they are interested in increasing the performance of their applications.
I think it is unfair to compare Swing with Ajax. I do not have a good understanding of the JavaScript threading model, but I have seen many badly written scripts that hang the page or break the presentation. That said, if we are going to wrap everything in SwingWorkers, then why not do it automatically?
Erlang/Scala style message passing is actually pretty easy to implement in Java (at least for thread based actors). Maybe it could be added to the next version of util-concurrent?
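For instance, a minimal thread-based actor can be built on java.util.concurrent's BlockingQueue as its mailbox. This is only a bare-bones sketch (the Actor class and its string messages are illustrative, not any library's API), but it shows the shape: one thread, one queue, no shared mutable state.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class MiniActor {
    // A thread-based actor: the only way to interact with it is
    // by posting messages to its mailbox; its state (if any) is
    // touched by its own thread alone, so no locks are needed.
    static class Actor extends Thread {
        private final BlockingQueue<String> mailbox =
                new LinkedBlockingQueue<String>();

        void send(String msg) {
            mailbox.add(msg);
        }

        public void run() {
            try {
                while (true) {
                    String msg = mailbox.take();  // block until a message arrives
                    if (msg.equals("stop")) {
                        return;
                    }
                    System.out.println("got: " + msg);
                }
            } catch (InterruptedException e) {
                // exit quietly on interrupt
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Actor a = new Actor();
        a.start();
        a.send("hello");
        a.send("stop");
        a.join();
    }
}
```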
The serial-program, threaded subsystem approach can take us a long way, I agree. It would take us farther if we start moving to asynchronous interfaces to those subsystems. This would still be an educational ("maturity") issue for developers, but one more tractable than getting everyone up to speed on fully concurrent programming techniques.
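An asynchronous interface to such a subsystem might look like the following sketch: the caller hands over work plus a callback and never touches a thread itself. (QueryService and Callback are hypothetical names invented for this example; the subsystem's internal thread pool stands in for a real database or server client.)

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AsyncSketch {
    interface Callback<T> {
        void done(T result);
    }

    // A subsystem exposing an asynchronous interface: callers submit
    // work and a callback; the subsystem owns all the threads.
    static class QueryService {
        private final ExecutorService pool = Executors.newFixedThreadPool(2);

        void query(final String sql, final Callback<String> cb) {
            pool.execute(new Runnable() {
                public void run() {
                    // A real service would talk to a database here.
                    cb.done("rows for: " + sql);
                }
            });
        }

        void shutdown() {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        final CountDownLatch latch = new CountDownLatch(1);
        QueryService svc = new QueryService();
        svc.query("select 1", new Callback<String>() {
            public void done(String result) {
                System.out.println(result);
                latch.countDown();
            }
        });
        latch.await();  // wait for the callback before exiting
        svc.shutdown();
    }
}
```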
I think the "maturity model" approach is a mistake, because I'm not at all confident that Java-style shared-context threads are the right approach in the first place. The Erlang-style shared-nothing approach frankly looks more promising. Let's not go declaring standards while the jury is still out.
I'll expand a bit on comments I made to earlier threads on the subject.
Multi-threading is hard. I first began to understand why it isn't as simple as synchronized when I read Allen Holub's articles. The crux, from a Java perspective, is as he states: threading in Java is procedural/functional, not object-implemented. Those interested can read his articles or his book.
Since it's hard, Java is devolving into corporate COBOL, and such systems are best served by an RDBMS (not XML stores, ugh); conduct the thought experiment: what is the most sensible thing to do?
The client machine, multi-core but lower-clocked, will be slower at sequential processing. Does it make sense (if it ever did, op. cit. "The Bank of Allen") to offload logic to the client machine? Unless client coders become generally as adept at the art as RDBMS builders, no. Especially in Java; I'd assert that if you have to be Doug freaking Lea to figure it out, it's not gonna happen.
So, what to do what to do????
In 1969/1970 Dr. Codd laid it out: put the data and its constraints in a central place. IIRC, the 360/70s of the time were multi-processor. The RDBMS builders have been perfecting their art since then. 80 processor *nix machines running Postgres/DB2/Oracle; fiber channels; the whole kit and kaboodle for a fraction of the $$ one would pay as lease on a z-machine.
So, back to the future: the X-term's revenge. Fact is, that's where we've been going for some time, but the kiddies won't fess up to it. They think AJAX is new technology; and want to convince the PHBs that they have something new to offer. VT-100/unix database systems (character mode interface, natch) from 1990 got more done with a lot less than today's Web Services, minus the pixel dust of course.
How to do this? There are some below the radar (and I strongly suspect stealth works ones that their creators won't talk about) frameworks which generate UI code from the database catalog. It just makes sense.
The only contrarian argument comes from the missing Jim Gray. And I don't buy his argument, which is that we have to retreat to sequential processing since the relative speed of processor/memory vis-a-vis hard I/O has increased. This is a strawman, in that the historical cost of random processing continues to decline. There's not enough financial reason to foul the bed for such a retrograde idea. Anyway, the RDBMS pros know that sequential I/O can be replicated to within a few percentage points with existing physical storage methods. It's a matter of acknowledging that I/O is block level not row level, that only the database buffers data, that memory is real cheap, that 64-bit addressing is here. The result is that such a system can keep the business logic program from starving with an intelligent physical database design.
And is it really more difficult to code gen from the catalog (reasonably standardized in SQL-92; more so since) than it is from myriad XML Business Language dialects? I assert that it is actually easier. No parser; no XSLT processor; one Dialect; and so on. It DOES mean that your average OO Coder will have no clue; it will mean ceding a good deal of control to the Data Pros. The Coders won't have much to do except Push the Code Gen Button. I'll cede that the MDA, Executable UML meme has a certain fascination; but ends up the same place with an extra layer or two.
It means recognizing one of the Dirty Secrets of (business type) systems: it is foolish to implement data constraints more than One Place; or put another way, why would it make sense to have more (or less) stringent data edits on the Client Machine than the Database? Ultimately, the database has to guarantee data integrity; unless one still thinks that the COBOL/VSAM demarcation of smart program <> dumb data should still be The One True Way. This is the Way of MySql zealots, "I don't want the database enforcing integrity". Bah. Ignorants too lazy to write bespoke file I/O; that's all.
Certainly, the OO hand waving amounts to that; "we must marry the function to the data", etc. At the same time, and often from the same folk, we hear "the future belongs to Declarative Systems". A 5NF database *is* just that; so is a Prolog system for that matter.
To sum: there's too much to gain from putting the data and logic centrally in a 128 way *nix database box. That was always true. Now that multi-core makes client side programs slower (well, relatively), accept Codd as your saviour.
> Achilleas,
> With my limited object oriented experience that's the
> model that makes most sense to me i.e.
> Erlang/Scala/Actors/Message Passing. But does it mean its
> the one that will dominate? This is a big debate at the
> moment with transactional memory and functional
> languages/monads being other contenders.
Just a footnote about contenders. Seems like lazy evaluation and exploitation of implicit parallelism by non-strict languages is already ruled out.
About the "Threading Maturity Model": it is a strange idea that is quite easy to ridicule. Why not also an MMMM, the "Memory Management Maturity Model"? Since we all still use C++, the language's deficiency of having no garbage collector has to be compensated for by organizational means, i.e. by more quality management, training, audits, etc.
It is a shame that programmers even think about it.
I typically always vote for whatever approach is most maintainable in the long term. When the programmer (or his successor) comes back to the code 3 months or more after it was last touched, what "style" promotes the greatest ease of maintenance? True genius is making the complex simple.