Summary:
The Ruby Queue software package lowers the barriers scientists need to overcome in order to realize the power of Linux clusters. It provides an extremely simple, economical, and easy-to-understand tool that harnesses the power of many CPUs while allowing researchers to shift their focus away from the mundane details of complicated distributed computing systems and back to the task of actually doing science. The toolset is designed with a K.I.S.S., research-focused philosophy that enables any ordinary (non-root) user to set up a zero-admin Linux cluster in ten minutes or less. It is currently being used successfully in fields as diverse as biochemical research at the University of Toronto, geomechanical modeling at IGEOSS, and the study of the nighttime lights of the world at the National Geophysical Data Center.
Most recent reply: November 6, 2005 3:46 PM by Zach
The Ruby Queue software package lowers the barriers scientists need to overcome in order to realize the power of Linux clusters. Read this Artima article by Ara T. Howard: http://www.artima.com/rubycs/articles/rubyqueue.html

What do you think about Linux clustering with Ruby?
Great article. I am currently working on some clustering on my home network, though I have a mixed-platform environment. I am considering adapting Ruby Queue to work in a mixed-platform environment. I'll let you know how my progress goes.
It will probably be a few months before I begin, though; I have some other projects in the queue with higher priority. But I did love your article and the thought of what could be implemented with Ruby Queue.
sounds great. my environment is extremely limited in that i can't get in/out on any ports. making rq cross-platform would be straightforward: essentially you'd want a drb listener that took requests to insert/delete/update jobs, and job runners would simply connect to it. it would actually be much easier than the nfs approach i had to take due to limited networking. if any others are interested in such a project, let me know. other ideas i've talked with people about include a totally decentralized approach where each node is both a job runner and a job submitter: each node would manage its own work queue and return the status of jobs to whoever sent each particular job. it could be quite powerful.
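The listener-based design described above could be sketched in a few lines of Ruby using the standard library's DRb. This is only an illustration of the idea, not code from rq itself; the `JobQueue` class and its method names are hypothetical.

```ruby
require 'drb/drb'

# hypothetical sketch of the drb listener described above: it holds the
# job table and accepts insert/take/update requests from submitters and
# job runners. not part of rq; names are invented for illustration.
class JobQueue
  def initialize
    @jobs     = {}       # jid => { :command => ..., :state => ... }
    @next_jid = 0
    @mutex    = Mutex.new
  end

  # submitters call this to insert a new job
  def insert(command)
    @mutex.synchronize do
      jid = (@next_jid += 1)
      @jobs[jid] = { :command => command, :state => 'pending' }
      jid
    end
  end

  # job runners poll this for the next pending job; returns [jid, command]
  # or nil when the queue is drained
  def take
    @mutex.synchronize do
      jid, job = @jobs.find { |_, j| j[:state] == 'pending' }
      return nil unless job
      job[:state] = 'running'
      [jid, job[:command]]
    end
  end

  # runners report results back (e.g. 'finished' or 'dead')
  def update(jid, state)
    @mutex.synchronize { @jobs[jid][:state] = state }
  end

  def status(jid)
    @mutex.synchronize { @jobs[jid][:state] }
  end
end

# on the listener host you would serve it with something like:
#   DRb.start_service('druby://0.0.0.0:9999', JobQueue.new)
#   DRb.thread.join
# and each job runner would connect with:
#   queue = DRbObject.new_with_uri('druby://listener:9999')
#   jid, command = queue.take
```

Unlike the NFS approach rq actually uses, this requires an open port between the nodes, which is exactly the constraint Ara mentions being unable to meet in his environment.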
We have an approach for running large batch-job chunks that involves only DRb and a database, with no complex NFS setup. We're considering open-sourcing it, or perhaps selling licenses. Contact me if interested.
hi-
sounds very interesting. does your system work with ssl authentication or through ssh tunnels? if you aren't using nfs, where does the code that is run live - must it be installed locally on all compute nodes? is the database a single point of failure in the setup?
it sounds interesting - i've got a prototype of something similar myself, but haven't gotten it running with ssl or resolved the issue of job collection. in any case i'd love to check out the code and/or talk with you about it. feel free to contact me on or offline - ara.t.howard@gmail.com.
regards.
-a
> hi-
>
> sounds very interesting. does your system work with ssl authentication or through ssh tunnels? if you aren't using nfs, where does the code that is run live - must it be installed locally on all compute nodes? is the database a single point of failure in the setup?
With urirequire recently released, a 'dbrequire' doesn't seem far off: all the code could live in the database. I don't know what Mark's setup is... but your article has got me thinking. Thanks, Ara!