This post originated from an RSS feed registered with Ruby Buzz
by Phil Tomson.
Original Post: The next computing paradigm?
Feed Title: Thoughtfiz
Feed URL: http://wiki.railsplayground.net//xml/rss/feed.xml
Feed Description: Thoughtfiz: lots of tiny thought bubbles, mostly Ruby-related.
I’ve been reading Steve Yegge’s blog lately. His latest post, entitled “Moore’s Law is crap,” raises interesting questions about the future of software development and computing in general. He makes the case that we need programming languages that can more adequately model parallelism.
I agree with Steve: in order for us to break out of this current local minimum in technology, we need to move to higher levels of parallelism. I think the way we’re going to get there is FPGA computing, in which we map algorithms directly into hardware (the FPGA fabric).
However, that’s going to require a big shift in mindset among software engineers.
There are products and projects out there that aim to ‘help’ software engineers make that jump. Most of them use some variant of C: for example, Impulse C (a commercial product) and FpgaC (an open source project) aim to let software engineers take their C code and map it into FPGA hardware. Initially I thought this might be a good approach, but the more I think about it, the less sure I am. By building on C, the user is still encouraged to think in the old serial paradigm. (And then there’s the fact that C is not the most productive language for software development out there.) In addition, these C-based approaches tend to add a lot of pragmas that start to make the code look, well, ugly. I’m coming to the conclusion that they’re trying to use C for something it was never intended to do (of course, they know that) and that it’s time for something completely different.
I’m beginning to think that perhaps we need to move the other way: software engineers need to start learning about Hardware Description Languages (HDLs) and dataflow programming. Now, as a former hardware designer, I suppose that’s easy for me to say ;-) But if we are to have a completely new computing architecture that requires a new way of thinking, then we probably need to use languages that are designed for dealing with parallelism, as HDLs are. The most widely used HDLs are VHDL and Verilog. Are they perfect? No, far from it. Of the two, I prefer VHDL because it takes more of a ‘software engineering’ approach, in that you can define new types and operators on those types. However, if you’re coming from one of the more dynamic, OO programming languages (like Ruby), you’re going to feel pretty confined, as there really isn’t any sort of object orientation, for example (well, it’s kind of there, but rather primitive).
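To give a feel for the dataflow mindset, here’s a minimal sketch in plain Ruby (not the RHDL API; the `Wire` class and method names are my own invention for illustration). In an HDL, computation isn’t a sequence of statements — outputs are continuously driven by inputs, and a process re-runs whenever a signal in its sensitivity list changes. We can fake that with observers:

```ruby
# A toy event-driven "wire" model, illustrating the dataflow mindset
# behind HDLs: computation fires when inputs change, not when control
# flow happens to reach it. (Plain Ruby; hypothetical names, not RHDL.)
class Wire
  attr_reader :value

  def initialize(value = 0)
    @value = value
    @observers = []
  end

  # Register a block to run whenever this wire changes
  # (analogous to a process sensitivity list in VHDL).
  def on_change(&block)
    @observers << block
  end

  def value=(new_value)
    return if new_value == @value
    @value = new_value
    @observers.each(&:call)
  end
end

# Describe a half adder structurally: sum and carry are *driven by*
# a and b, rather than computed at some point in a serial program.
a     = Wire.new
b     = Wire.new
sum   = Wire.new
carry = Wire.new

update = lambda do
  sum.value   = a.value ^ b.value
  carry.value = a.value & b.value
end
a.on_change(&update)
b.on_change(&update)

a.value = 1
b.value = 1
puts "sum=#{sum.value} carry=#{carry.value}"  # => sum=0 carry=1
```

The point isn’t the ten lines of Ruby — it’s that once you describe a circuit this way, the two assignments in `update` are conceptually concurrent, which is exactly the shift in thinking that real HDLs (and FPGA fabric) demand.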
I’m hoping to have the next version of RHDL released pretty soon (yeah, I know, I’ve been saying that for a few weeks now… just a few more issues to iron out). RHDL is a good way for software engineers to experiment with HDLs and dataflow programming. However, the next project (which I need to start on soon) is Inline::HDL, which will allow you to get the best of both worlds: use HDLs for what they’re good for and use a general-purpose programming language (Ruby) for what it’s good for (and create the bridge between the two automatically). Or, another way to look at it is that it will be hardware acceleration for Ruby. Stay tuned. [Hey, I start working in a couple of weeks. That will be good for my budget, but bad for my Inline::HDL development time.]