Comment by beaconstudios

4 years ago

Does that hold for languages/compilers with tail call optimisation?

This isn't really related to your question, but I don't think tail calls can help for Fibonacci, since f(n) branches into two calls, f(n-1) and f(n-2), and each of those branches into two more. Neither call is in tail position, so naive recursion can't run in constant stack space.

The compiler would either have to memoize, or be extremely clever and start at the base case (0, 1), then transform the code to use 2x2 matrix exponentiation. I wouldn't have been surprised if GHC Haskell were that clever, but even with -O2, "print $ fibb 10000" doesn't terminate.
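For reference, the memoized version mentioned below can be written as a lazy self-referential list. This is just a sketch of the standard trick, with my own names for everything except fibb:

```haskell
-- Lazy list of all Fibonacci numbers: every element past the first two
-- is the sum of the list with its own tail, so earlier results are
-- shared (memoized) rather than recomputed.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

fibb :: Int -> Integer
fibb n = fibs !! n

main :: IO ()
main = print (fibb 10000)  -- terminates quickly, unlike the naive version
```

Forcing fibs element by element makes each fibb call O(n) instead of exponential.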

  • Wouldn't this be the recursive version with tail calls?

      (define/contract (fib n)
        (-> nonnegative-integer? nonnegative-integer?)
        (if (zero? n)
            0  ; without this case, (fib 0) would loop forever
            (let fib-recur ([i 1] [curr 1] [prev 0])
              (cond [(= i n) curr]
                    [else (fib-recur (+ i 1)
                                     (+ curr prev)
                                     curr)]))))

    • That's pretty much what I meant by the matrix exponentiation method: applying the map (a, b) -> (a+b, a) repeatedly. Your function definitely uses tail calls, but my point was that more than tail call optimization is needed to transform the trivial recursive version

      fibb 0 = 0
      fibb 1 = 1
      fibb n = fibb (n - 1) + fibb (n - 2)

      into that, or any O(n) version.
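To make the 2x2 matrix exponentiation concrete, here's a sketch with my own names. It relies on the identity that [[1,1],[1,0]] raised to the nth power has F(n) in its off-diagonal entries, and uses squaring to get there in O(log n) multiplications:

```haskell
-- A 2x2 matrix represented as a 4-tuple (a, b, c, d) = [[a, b], [c, d]].
type M = (Integer, Integer, Integer, Integer)

-- Standard 2x2 matrix multiplication.
mul :: M -> M -> M
mul (a, b, c, d) (e, f, g, h) =
  (a*e + b*g, a*f + b*h, c*e + d*g, c*f + d*h)

-- Exponentiation by repeated squaring: O(log k) multiplications.
matPow :: M -> Integer -> M
matPow m 1 = m
matPow m k
  | even k    = let h = matPow m (k `div` 2) in mul h h
  | otherwise = mul m (matPow m (k - 1))

-- [[1,1],[1,0]]^n = [[F(n+1), F(n)], [F(n), F(n-1)]], so read off b.
fibFast :: Integer -> Integer
fibFast 0 = 0
fibFast n = b
  where (_, b, _, _) = matPow (1, 1, 1, 0) n
```

This is what a sufficiently clever compiler would have to derive on its own; the iterated map (a, b) -> (a+b, a) is the special case of multiplying by the matrix once per step.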

  • Haskell is not that clever (or at least it wasn't a year ago). The memoized version is much, much faster.

  • If you replaced the recursion with loops but implemented the same algorithm, you'd just have to manage a stack on the heap. I don't think that would be faster.
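As a sketch of that point (the names here are my own): the naive recursion can be rewritten as a loop over an explicit work list, which moves the stack to the heap but still performs exponentially many steps:

```haskell
-- Same algorithm as the naive recursion, but the pending "calls" live
-- in an explicit list (the stack on the heap) instead of the call stack.
-- Each base case (n < 2) contributes n to the running total, so the
-- result is still F(n0), reached after exponentially many iterations.
fibLoop :: Integer -> Integer
fibLoop n0 = go [n0] 0
  where
    go []       acc = acc
    go (n:rest) acc
      | n < 2     = go rest (acc + n)
      | otherwise = go (n - 1 : n - 2 : rest) acc
```

The loop is tail recursive, so the call stack stays constant, but the work list grows in its place and the running time is unchanged.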

At least in Racket, which does have TCO, a for loop is really a macro that expands into a tail-recursive loop. I'm not sure about other languages with more aggressive optimizations, though.