Summary
In January's performance news, the slumbering issue of badly performing strict Math finally bit Sun in public.
In January's Java Performance Tuning newsletter there are several interesting news items. As ever, we had the usual range of tool vendor and benchmark announcements, which I consider "background news" because they appear every month. (Naturally we only list the performance announcements, and even then only the interesting ones.) Apart from those announcements, there was the nine-language benchmark, which failed to show Java as the fastest language overall only because Sun shot itself in the performance foot by refusing to address "strict" Math performance for so many years. They have known about the problem for years, and even designed more efficient support into the core classes, but neglected to turn that efficient mode on. Of course, Sun has limited resources, so some things will always slip by them. But in this case they clearly understood they were making the Math functions inefficient, they added the underlying support for an efficient alternative, and they still didn't enable it even though plenty of comments have pointed it out. Which is pretty annoying.
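If you want to see what the fuss is about on your own JVM, a minimal micro-benchmark along these lines will do (the iteration count and the choice of sin() are just illustrative, and a single-shot timing like this is crude):

public class StrictMathCheck {
    public static void main(String[] args) {
        final int N = 5000000;    // iteration count is arbitrary
        double sink = 0.0;        // accumulate results so the loops aren't dead code

        long start = System.currentTimeMillis();
        for (int i = 0; i < N; i++) {
            sink += Math.sin(i * 0.0001);
        }
        long mathTime = System.currentTimeMillis() - start;

        start = System.currentTimeMillis();
        for (int i = 0; i < N; i++) {
            sink += StrictMath.sin(i * 0.0001);
        }
        long strictTime = System.currentTimeMillis() - start;

        System.out.println("Math.sin:       " + mathTime + " ms");
        System.out.println("StrictMath.sin: " + strictTime + " ms");
        System.out.println("(checksum " + sink + ")");
    }
}

If Math is simply delegating to StrictMath under the covers, the two timings come out essentially identical, and that, in a nutshell, is the complaint: the class split exists precisely so that Math can be faster.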
For me, the most interesting item in the news was the server-side discussion which floated the idea of pushing NIO support into J2EE. What a wealth of information. That NIO select support costs 5% to 30% overhead compared to the blocked multi-thread model came as a real surprise to me. Though perhaps it shouldn't have, since a blocked thread mostly takes up only non-CPU resources, while the select multiplexing model explicitly trades those per-thread resources for the cost of managing the active socket set. I suspect the 5% figure is the more accurate, though, because excellent a product as Jetty is, I've noticed several inefficiencies in it, as in many webservers (see this old study if you want to understand how even minor effects can dramatically affect the scalability of webservers). In fact we use Jetty in our performance training classes, and profiling the socket transfers is quite instructive for our students. That's not to say I wouldn't use it in production. On the contrary, if it fitted the functional requirements I'd certainly test its performance against the other possible solutions.
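For anyone who hasn't worked with the two models, here is a bare-bones sketch of the select-style accept/read loop being discussed. The class name, port number and buffer size are placeholders of my own choosing, and a real server would obviously need write handling and proper error handling:

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

// One thread multiplexes all connections through a Selector, instead of
// dedicating a (mostly idle, but resource-consuming) thread to each socket.
public class SelectLoop {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.socket().bind(new InetSocketAddress(8080)); // port is arbitrary
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        ByteBuffer buffer = ByteBuffer.allocate(4096);
        while (true) {
            selector.select();                 // block until some channel is ready
            Iterator it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = (SelectionKey) it.next();
                it.remove();
                if (key.isAcceptable()) {      // new connection: register it for reads
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) { // data available on an existing connection
                    SocketChannel client = (SocketChannel) key.channel();
                    buffer.clear();
                    if (client.read(buffer) < 0) {
                        client.close();        // remote end closed the connection
                    }
                    // ... hand the bytes off to request processing ...
                }
            }
        }
    }
}

The blocking alternative is simply one thread per connection calling read() on a plain socket. Which model wins presumably depends heavily on how many of the open connections are actually active at any moment, which would explain the wide 5% to 30% spread.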
Other than that, you might also want to check out the fail-fast article if you haven't noticed the ConcurrentModificationException possibilities inherent in using the List iterators (there's a short illustration after the excerpts below), and our other columns are also quite interesting:
"What is a mysterious field called modCount doing in the List Iterator classes? How can we efficiently handle iterating over collections when another thread may be updating the collection at the same time? Read on ..."
"Yet more proof that the engineers at Sun fully realized that optimizers such as HotSpot are sensitive to coding style and with that realization, built HotSpot with good coding practices in mind "
"what we want to do here is to use our transactional replicated cache to actually keep the cached entity beans in memory. This will greatly improve performance"