Don Box asks a couple of Ruby questions:
- What's the performance penalty for passing/calling a block vs. executing the code "directly" on the current frame? Does the cost go up/down if the block doesn't reference any symbols in the enclosing frame? I know the answers for the CLR and C#, but I'm not sure my intuitions from that environment apply here.
- Do people wind up using blocks to model simple CPS-like idioms and if so, how does the runtime's stack management hold up?
I can't answer specifically for Ruby, but I can give my impression from its close cousin, Smalltalk. I created a simple class that ran one of two tests:
testDirect: n
| val |
val := Time millisecondsToRun: [n timesRepeat: [1000 factorial]].
Transcript show: 'Direct for ', n printString, ' repetitions: ', val printString; cr
testBlock: n
| val |
"myBlock is an instance variable, assumed to be initialized to the same work: [1000 factorial]"
val := Time millisecondsToRun: [n timesRepeat: [myBlock value]].
Transcript show: 'Block for ', n printString, ' repetitions: ', val printString; cr
Then I ran each test with n = 1000 to get something representative. Over those 1000 repetitions I got roughly 3100 ms for the direct run and 3300 ms for the block (with a variance of about 100 ms per run). Upping n to 10,000, the difference between the two dropped to an irrelevant 60 ms (i.e., noise). So in Smalltalk, at least, there's no performance reason to avoid blocks.
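Since the original question was about Ruby, the same experiment can be sketched there directly. This is a rough sketch, not a rigorous benchmark: it uses the standard-library Benchmark module, and the workload (a small factorial computed via reduce) and iteration count are arbitrary stand-ins for the Smalltalk version's 1000 factorial.

```ruby
require 'benchmark'

N = 10_000

# Direct: the work is written inline in the loop body.
direct = Benchmark.realtime do
  N.times { (1..100).reduce(1, :*) }   # 100 factorial, computed in place
end

# Block: the identical work wrapped in a Proc that is called each iteration.
work  = proc { (1..100).reduce(1, :*) }
block = Benchmark.realtime do
  N.times { work.call }
end

puts "direct: #{(direct * 1000).round} ms"
puts "block:  #{(block * 1000).round} ms"
```

As in the Smalltalk runs, the interesting number is the gap between the two timings, not either absolute figure.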
Which leaves the question of whether it makes sense from a design standpoint. Blocks are used quite frequently in Smalltalk in places where other languages have built-in operators (e.g., the various iteration methods in the Collection hierarchy). That has never caused a problem with the runtime stack that I know of, and the fact that Seaside (which carries full-blown context stacks around) holds up tells me there shouldn't be one.
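Ruby inherited exactly this design: what other languages express as loop syntax or operators, Ruby's collections express as ordinary methods that take a block. A couple of familiar examples:

```ruby
# Iteration "operators" are just methods receiving a block,
# much like Smalltalk's Collection protocol (collect:, select:, ...).
squares = [1, 2, 3, 4].map    { |x| x * x }    # => [1, 4, 9, 16]
evens   = [1, 2, 3, 4].select { |x| x.even? }  # => [2, 4]
```

So the design-level answer is the same as in Smalltalk: blocks aren't an exotic feature to be rationed; they're the idiomatic way the core libraries are meant to be used.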
That may not answer the question for Ruby directly, but it does show that a system built on blocks can be implemented efficiently.