Comment by rectang
4 years ago
Pro tip: the classic recursive approach to implementing the Fibonacci sequence exhibits eye-wateringly poor performance.
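To see why (a minimal Python sketch, not from the original comment): the naive recursion recomputes the same subproblems exponentially many times.

```python
# Naive recursive Fibonacci: exponential time, because fib(n)
# recomputes the entire fib(n-1) and fib(n-2) call trees.
def fib_naive(n: int) -> int:
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)
```

Around n = 35 this already takes seconds; each +1 roughly multiplies the work by the golden ratio.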
I've always hated how that's used as an example of recursive programming in early programming education. Invariably, students try a large n, get a stack overflow and conclude that recursion is inefficient, leads to mysterious errors, and must be avoided.
Fibonacci is generally more likely to take forever than stack overflow.
...which makes it a classic exercise for teaching memoization!
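A minimal sketch of the memoized version in Python (using the standard-library cache decorator):

```python
from functools import lru_cache

# Memoized Fibonacci: the cache collapses the exponential call
# tree down to O(n) distinct calls.
@lru_cache(maxsize=None)
def fib_memo(n: int) -> int:
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)
```

Note the recursion is still n frames deep, so a large enough n will hit the interpreter's recursion limit even though the running time is now linear.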
And now your memory usage will grow eye-wateringly large. Instead, convert the algorithm to be iterative or at least tail-recursive and it will be faster than both the naive and memoized versions!
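The iterative version is a few lines (a Python sketch for illustration):

```python
def fib_iter(n: int) -> int:
    # Iterative Fibonacci: O(n) time, O(1) extra space,
    # and no call-stack growth at all.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```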
Or use the closed-form solution.
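Binet's formula, for reference (a sketch; with 64-bit floats the rounding goes wrong somewhere around n ≈ 71, so it's exact only for modest n):

```python
import math

def fib_binet(n: int) -> int:
    # Closed-form (Binet) formula: fib(n) = round(phi**n / sqrt(5)).
    # Exact in real arithmetic; limited by float precision in practice.
    phi = (1 + math.sqrt(5)) / 2
    return round(phi ** n / math.sqrt(5))
```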
Looped algorithms generally outperform recursive algorithms.
Goto based algorithms generally outperform looped algorithms.
(They're just more general: you can always implement a loop-based algorithm using gotos, but not the other way around.)
Not exactly. This is more an artefact of language design.
If you convert everything to continuation passing style, then every function call, tail call, recursion, etc. is just as expensive (and expressive) as a GOTO. This is, incidentally, the main "trick" or lightbulb moment in the classic Cheney on the MTA paper by Henry Baker [1].
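To make the CPS intuition concrete, here's a hedged Python sketch (names `fib_cps` and `trampoline` are my own, not from the paper): every call becomes a tail call that returns a thunk, and a trampoline loop then executes those thunks one at a time, so the stack never grows regardless of n.

```python
# Continuation-passing style: every call carries "what to do next"
# as an explicit function and returns a thunk instead of recursing.
def fib_cps(n, k):
    if n < 2:
        return lambda: k(n)
    return lambda: fib_cps(n - 1, lambda a:
                   lambda: fib_cps(n - 2, lambda b:
                   lambda: k(a + b)))

def trampoline(thunk):
    # The "loop" that replaces the call stack: keep forcing thunks
    # until a non-callable final value falls out.
    while callable(thunk):
        thunk = thunk()
    return thunk
```

This is still exponential time (it's the naive algorithm), but it runs in constant stack space, which is the point: once calls are in CPS, a plain loop is all the control flow you need.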
Now if we're talking specifically in C, then absolutely! But while this intuition holds for C, it's not always true and doesn't have to be always true.
[1] https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.54...
Recursion is a form of looping. I think it's more clear if you say imperative algorithms.
In what sense is recursion a form of looping?
I'm thinking of the memory model for each: one mutates some variables (at least some sort of index, unless it's an infinite loop) within a single stack frame, while the other accumulates function calls and stack frames until it bottoms out, and then the return values are "folded" back up.
Semantically, from a caller's perspective, they can achieve the exact same thing, but aside from that they seem radically different to me - is there anything else that could lead us to say that recursion is a form of looping?
does that hold for languages/compilers with tail call optimisation?
This isn't really related to your question, but I don't think tail calls could help for Fibonacci, since f(n) branches into two calls, f(n-1) and f(n-2), and each of those branches into two more. So it can't be done in constant stack space with naive recursion.
The compiler would either have to memoize, or be extremely clever and start at the base case (0, 1) and then transform the code to use 2x2 matrix exponentiation. I wouldn't have been surprised if GHC Haskell were that clever, but even with -O2, "print $ fibb 10000" isn't terminating.
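The matrix-exponentiation approach mentioned above can be sketched in a few lines of Python (an illustration of the technique, not what any compiler actually emits). It relies on the identity that the n-th power of [[1,1],[1,0]] contains fib(n), and squares the matrix repeatedly to compute fib(n) in O(log n) multiplications:

```python
def fib_matrix(n: int) -> int:
    # fib(n) via fast exponentiation of M = [[1,1],[1,0]]:
    # M**n == [[fib(n+1), fib(n)], [fib(n), fib(n-1)]].
    def mat_mul(a, b):
        return (
            (a[0][0] * b[0][0] + a[0][1] * b[1][0],
             a[0][0] * b[0][1] + a[0][1] * b[1][1]),
            (a[1][0] * b[0][0] + a[1][1] * b[1][0],
             a[1][0] * b[0][1] + a[1][1] * b[1][1]),
        )
    result = ((1, 0), (0, 1))  # identity matrix
    base = ((1, 1), (1, 0))
    while n:
        if n & 1:
            result = mat_mul(result, base)
        base = mat_mul(base, base)  # square-and-multiply
        n >>= 1
    return result[0][1]
```

Unlike the float-based closed form, this stays exact for arbitrarily large n, since it only uses integer arithmetic.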
Tail calling is basically looping, which is why I said "generally".
At least for Racket, which does have TCO, the for-loop is really a macro for a tail recursive loop. I'm not sure about other languages with more aggressive optimization though.
And don't cause stack overflows in normal operation. I've run into that before and now avoid recursion.