Summary
Sounds a bit extreme? Some companies seem to live and breathe benchmarks, responses to studies, and responses to responses. Isn't it time we realised what benchmarks are really about and stopped it all getting out of hand?
Firstly, I'll point out here and now that this blog is based on this thread on TSS about WebSphere vs. .Net.
This is the first thing that always gets on my nerves - the term "vs." when used in the context of benchmarking. Why is it there? It's not some fight at Caesars Palace.
Anyway, in all of the opinions and flame bait, there was one golden, gleaming treasure of a comment: "Every good programmer knows that the only benchmark that holds water is his own." We all have to have our own benchmarks, the one that says, "Yeah, I'm happy with choosing X over Y," or whatever. But it wasn't always like this; we didn't always have to be self-reliant when comparing products, did we?
When I first started out on this journey into programming, years ago, benchmarks actually meant something; they may have been small in scope, but there were plenty of parties involved and they were fair enough comparisons. Nowadays, though, benchmarks come buried in huge studies that compare only two or three products. The scope of these studies is often vast, covering products with huge numbers of features and configuration options, and most of the objectivity and rational critique gets lost in the sheer size of the thing.
Now, every other week someone publishes a benchmark about something, and it all turns into a war, with people quoting benchmarks like sticks to hit each other with. You can always tell when an analyst company does its job and its benchmarks are good: people accept the findings and get on with making their products better - most people, that is, bar the vocal zealot few.
One thing I've realised is that everyone selectively uses benchmarks to prove their own point of view. Benchmarks are, after all, statistics, and, you know, you can prove anything with statistics. "Oh, .Net is better than J2EE and it's more productive" - "so why are so many Java community tools being ported to .Net?" You see what I did there? It's called the apples-to-oranges effect: produce a claim, then defend your position with a counter-claim that has nothing to do with the claim you're disputing. Now, if you ask me whether I know Java is better than .Net - overall, I don't know. There are areas that are better for Java, as I'm sure there are for .Net. But this comes back to a scoping issue; asking "is J2EE better than .Net?" is such a big question, it's like asking "is day better than night?"
I stopped even contemplating Java and .Net benchmarks and studies the day I, by chance, picked up a copy of MicroMart and actually got an unbiased critique of a product; that same morning I had been reading the first TMC comparison of J2EE and .NET. The product was only a PC, and did I have to pay for the study? Yes - it was about £1.80.
If you're going to provide studies and benchmarks, make them much more focused and relevant, and broaden the participation. So instead of looking at J2EE vs. .Net, let's look at Java RMI/Jini ERI as opposed to .Net Remoting; JavaMail vs. Exchange; JMS vs. MSMQ; and so on. Make them relevant, not this vendor-paid-for marketing gimmick. Benchmarks are for techies, marketing is for Business Decision Makers - let's not try to turn one into the other.
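To give an idea of the sort of scope I mean, here's a minimal sketch of a focused send-throughput test for the JMS half of that JMS vs. MSMQ comparison. The JNDI names (jms/ConnectionFactory, jms/BenchmarkQueue) and the message counts are my own assumptions rather than anything from a published study; the equivalent MSMQ test would be just as small, and that's the point.

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.InitialContext;

public class JmsSendBenchmark {

    private static final int WARM_UP = 1000;
    private static final int MESSAGE_COUNT = 10000;

    public static void main(String[] args) throws Exception {
        // The JNDI names below are placeholders; any JMS provider can supply them.
        InitialContext ctx = new InitialContext();
        ConnectionFactory factory = (ConnectionFactory) ctx.lookup("jms/ConnectionFactory");
        Queue queue = (Queue) ctx.lookup("jms/BenchmarkQueue");

        Connection connection = factory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(queue);
            TextMessage message = session.createTextMessage("benchmark payload");

            // Warm up first so connection setup and JIT costs don't skew the timing.
            for (int i = 0; i < WARM_UP; i++) {
                producer.send(message);
            }

            long start = System.currentTimeMillis();
            for (int i = 0; i < MESSAGE_COUNT; i++) {
                producer.send(message);
            }
            long elapsed = System.currentTimeMillis() - start;

            System.out.println("Sent " + MESSAGE_COUNT + " messages in " + elapsed + " ms ("
                    + (MESSAGE_COUNT * 1000L / Math.max(elapsed, 1)) + " msg/s)");
        } finally {
            connection.close();
        }
    }
}

Something this narrow is easy to reproduce, easy to argue about on its merits, and impossible to dress up as a verdict on an entire platform.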