Many brilliant developers are working on improving the current implementation of Ruby and on creating alternatives. I was curious about their current respective speeds, so I installed and ran some benchmarks for the most popular implementations. In this article, I’m sharing the results for the community to see.
Disclaimer
Don’t read too much into this and don’t draw any final conclusions. Each of these exciting projects has its own reason for being, as well as different pros and cons, which are not considered in this post. They each have a different level of stability and completeness. Furthermore, some of them haven’t been optimized for speed yet. Take this post for what it is: an interesting experiment;
The results may entirely change in the next 3, 6, 12 months… I’ll be back!
The scope of the benchmarks is limited because they can’t stress every single feature of each implementation. It’s just a sensible set of benchmarks that give us a general idea of where we are in terms of speed;
These tests were run on my machine; your mileage may vary.
Benchmark Environment
My tests were conducted on an AMD Athlon™ 64 3500+ processor, with 1 GB of RAM.
The operating system used for all implementations but Ruby.NET was Ubuntu 6.10 (for x86). Ruby.NET currently runs only on Microsoft Windows, so I used Vista with the .NET Framework 2.0 and also ran Ruby 1.8.5-p12 on Windows to allow a more direct comparison with Ruby.NET.
Ruby 1.9, JRuby, Rubinius and Cardinal were all installed using their respective latest development versions from trunk.
Tests used
The 41 tests used to benchmark the various Ruby implementations can be found within the benchmark folder in the repository of Ruby 1.9. The following is a list of the tests with a direct link to the source code for each of them:
The following table shows the execution time expressed in seconds for Ruby 1.8.5 on Linux, Ruby 1.8.5 on Windows, Ruby 1.9 (Yarv/Rite) on Linux, JRuby on Linux, Gardens Point Ruby.NET on Windows, Rubinius on Linux and finally Cardinal on Linux.
LEGEND:
A blue bold font indicates that the given Ruby implementation was faster than the current stable, mainstream one (Ruby 1.8.5 on Linux);
The baby blue background indicates that the given Ruby implementation was the fastest of the lot for the given test;
‘Error’ indicates an abnormal interruption of the program. ‘Too long’, instead, indicates that the execution took longer than 15 minutes and was manually interrupted;
Average and Median values take into consideration only working tests (they exclude ‘Too long’ programs as well).
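The rule above amounts to filtering out failed runs before aggregating. As a quick sketch in Ruby (the result hash and its values are made up for illustration, not actual measurements):

```ruby
# Hypothetical per-test timings for one implementation; symbols mark
# tests that errored out or exceeded the 15-minute cap.
results = {
  "vm1_swap"  => 1.42,
  "so_matrix" => 9.87,
  "app_fib"   => :error,    # abnormal interruption
  "so_sieve"  => :too_long  # manually interrupted after 15 minutes
}

# Keep only successful (numeric) timings, as the legend prescribes.
times = results.values.select { |t| t.is_a?(Numeric) }.sort

average = times.sum / times.size.to_f
median  = if times.size.odd?
            times[times.size / 2]
          else
            (times[times.size / 2 - 1] + times[times.size / 2]) / 2.0
          end
```

With only the numeric entries kept, the ‘Error’ and ‘Too long’ rows cannot drag the averages around.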
Below is a chart which shows the average and median values, visually:
You may also be interested in visualizing a direct comparison of how many times a given implementation was faster or slower than Ruby 1.8.5 on Linux:
Of course, the bold green values indicate better performance: for example, Cardinal was 4 times faster than Ruby 1.8.5 on Linux for the test vm1_swap, but it was also 18 times slower for so_matrix (hence in red).
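The per-test comparison boils down to a simple ratio against the baseline time. A minimal sketch (the numbers below are illustrative, not the measured ones):

```ruby
# Relative speed of an implementation versus the Ruby 1.8.5 baseline:
# a ratio above 1 means the implementation was that many times faster,
# below 1 means it was slower.
def speedup(baseline_secs, impl_secs)
  baseline_secs / impl_secs.to_f
end

ratio = speedup(2.0, 0.5) # baseline took 2 s, implementation took 0.5 s
if ratio >= 1
  puts "%.1fx faster" % ratio
else
  puts "%.1fx slower" % (1 / ratio)
end
```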
I won’t provide too many personal considerations but rather let you enjoy the numbers. Generally speaking though, Ruby on Windows was about 1.5 times slower than on Linux. Yarv (merged into the development version of Ruby) is clearly the fastest by a long shot. This is good news (there are hopes for a fast Ruby 2.0), and it is not an unexpected result.
Ruby.NET and JRuby had similar performance and were able to execute most of the tests. It is clear, though, that they will need to focus on improving their speed in the future in order to be ready for prime time.
Cardinal wasn’t able to complete most tests and was extremely slow in some others. On a few occasions, however, it showed decent results (beating Ruby 1.8.5 in 3 tests). Rubinius was extremely slow too, but it correctly handled a larger number of tests than Cardinal did (and it was significantly faster in executing so_sieve.rb).
I’d like to conclude by saying that all the people involved with these projects are doing an amazing job. While some implementations show that they are at an early stage of development, this in no way diminishes the great effort and work done by their developers, nor is it an attempt to predict their future success or failure. So once again, great job guys; all of this is nothing short of exciting!