> Of course, some of those implementations are not very
> optimal. Or maybe it is that the implementation across
> languages is uniform, which can be an advantage,
> disadvantage, or neutral, depending on the
> language/platform. For example, the Fibonacci example
> seems to only test which language is best at recursion,
> but it certainly is not the best approach for solving the
> problem in many languages (see
> http://www.artima.com/forums/flat.jsp?forum=32&thread=4996&message=16761&redirect=true&hilite=true&q=Fibonacci).
> What's more interesting to me is that some problems seem
> to work better in some languages because of the
> sub-optimal implementation, but when implemented more
> optimally for each language, different winners will
> emerge. For example, with a better Fibonacci
> implementation, you quickly get to numbers that you cannot
> handle in C/C++ without creating or finding a BigInt class
> (with a better implementation than the recursive one, you
> quickly get out of the unsigned long range) and you
> find that Python works with them much more efficiently
> than Java.
I remember the whole Fibonacci debate. It was the simplicity and efficiency of this example from that discussion

fibs = [0, 1, 1]
def smartFib(n):
    for i in range(len(fibs), n + 1):
        fibs.append(fibs[-1] + fibs[-2])
    return fibs[n]

that made me go wow. And this is where I got the notion that in learning any language (I'm still new to Python and still enjoying the learning process :-) it is very helpful to know how things in it really work. I've always thought blanket statements in matters of development were bad, and this example and discussion, more than any other single one I've seen to date, really made that hit home. There are general principles that are good to follow and that can be applied across a wide swath of problems, but digging deep into a problem and getting the best solution for your particular instance sometimes requires bending the rules. Or it requires figuring out how to best apply those principles using the tools available, which may in some ways contradict the conventional wisdom.
I think everybody learns the recursive Fibonacci example and assumes that's how it should always be written, because that's how you first learn it. Never mind that two classes later you learn how recursion can be a performance nightmare and how easily tail recursion can be rewritten as a loop. Everybody goes back to the CS101 recursive version when talking about Fibonacci numbers, even though everybody learned in CS309 that it probably isn't the best, most efficient way to implement it.
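For the record, here's a quick sketch of that CS309 lesson in Python (the function name is mine, not from the original discussion): the recursion collapses into a simple loop, and since Python integers grow as needed, you never hit the unsigned long ceiling the quoted post mentions.

```python
# Iterative Fibonacci: the loop version of the CS101 recursive example.
# Python's arbitrary-precision integers mean large n needs no BigInt class.
def iter_fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(iter_fib(10))              # 55
print(len(str(iter_fib(1000)))) # fib(1000) has 209 digits
```

No memoization list, no recursion depth to worry about, and it happily churns out numbers far beyond anything a C unsigned long can hold.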
I found something similar with creating primes in Python. I ran across this link
http://www.networkcomputing.com/unixworld/tutorial/005/005.html and liked the prime number example, so I figured I would try it on 1,000,000. Well, it finishes, but it's not very fast (or even half-fast), so I tinkered with it for a few minutes and came up with
def MyPrime(limit):
    limit = limit + 1
    PrimeDict = {2: 'Y'}
    # initialize prime dictionary
    for x in range(3, limit):
        PrimeDict[x] = 'Y'
    RetList = []
    # tag all non-prime numbers
    for x in range(2, limit):
        if PrimeDict[x] != 'N':
            RetList.append(x)
            for y in range(2 * x, limit, x):
                PrimeDict[y] = 'N'
    RetList.sort()
    return RetList
which goes up to 1,000,000 and returns all 78498 primes between 2 and 1,000,000 in about 4 seconds. I felt happy :-) I'm no PhD like the author of the article, but I like tinkering with primes too. Mine just happens to be a lot more memory intensive where his is more CPU intensive. At least that's what my performance monitor tells me.
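For what it's worth, a variant I'd sketch (the function name is mine) swaps the 'Y'/'N' string dict for a list of booleans, which should trim the memory footprint my performance monitor was complaining about while keeping the exact same marking scheme:

```python
# Same sieve idea as MyPrime, but with a boolean list instead of a
# dict of 'Y'/'N' strings -- less memory per entry, same algorithm.
def sieve(limit):
    is_prime = [True] * (limit + 1)
    is_prime[0:2] = [False, False]            # 0 and 1 are not prime
    for x in range(2, limit + 1):
        if is_prime[x]:
            for y in range(2 * x, limit + 1, x):  # tag multiples of x
                is_prime[y] = False
    return [n for n, p in enumerate(is_prime) if p]

print(len(sieve(1000000)))  # 78498, matching MyPrime's count
```

Since the list is built in index order, there's no need for the final sort either.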
Anyway, I agree with you that it gets down to using the best tool for the job. That's why I like having a lot of tools in my toolbelt. It's even better if you really know how to use the tool.
I inadvertently typed
>>> x = primes.MyPrime(10001000)
in the python interpreter and it came back after a few minutes.
>>> x[-1]
10000993
I never knew 10,000,993 was a prime number (can't say I ever really cared...) and that there were 664640 prime numbers
>>> len(x)
664640
up to 10,001,000. And I love the negative slicing in python sequences. That language feature ROCKS! It's such a silly little thing when you think about it, but I love it.
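Since I'm gushing about it anyway, here's the kind of thing I mean (a toy list just for illustration):

```python
# Negative indices count from the end; negative steps walk backwards.
primes = [2, 3, 5, 7, 11, 13]
print(primes[-1])    # 13 -- last element, no len() needed
print(primes[-2:])   # [11, 13] -- last two elements
print(primes[:-1])   # [2, 3, 5, 7, 11] -- everything but the last
print(primes[::-1])  # [13, 11, 7, 5, 3, 2] -- reversed copy
```

That `fibs[-1] + fibs[-2]` line in smartFib above is exactly this trick earning its keep.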
I think you have to take the shootout links with a grain of salt, mostly because of the reason cited in your link to the Fibonacci debate. Optimal solutions tend to look different from one language to the next, and in some cases the language is a good fit for the problem, while in others you look at the solution and go huh? I think if you really want to get a feel for a language, you need to look at optimal solutions to problems written in it. That'll tell you what the language can really do. This is why I never was a big Perl fan: there are probably as many "optimal" solutions to a problem as there are good Perl developers.
This is also why .NET perplexes me a bit. Why bother developing a CLR and bolting different languages onto it? In some cases you have to shoehorn your language into the runtime and in the process destroy some of the things that are good and right about it. I don't know if the Python-on-.NET initiative has really gone anywhere, but I remember reading about some of the initial attempts, and the concessions you have to make to get Python running right in the CLR cause some serious problems with it. Last I knew, the best bet was to follow ActivePerl's lead and use a bridge so you can access the runtime from Python and vice versa. Ah, I digress...
My condolences to any fellow Red Sox fans out there. Go Marlins!