
Comment by jbluepolarbear

4 years ago

Looped algorithms generally outperform recursive algorithms.

Goto-based algorithms generally outperform looped algorithms.

(They're just more general: you can always implement a loop-based algorithm using gotos, but not the other way around.)
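
A minimal C sketch of that one-way street (names are mine): the same algorithm written as a structured loop, and then lowered mechanically to gotos, which is roughly what a compiler emits for the loop anyway.

    /* A structured while loop summing 1..n. */
    long sum_loop(long n) {
        long acc = 0;
        while (n > 0) {
            acc += n;
            n--;
        }
        return acc;
    }

    /* The same algorithm with the loop lowered to gotos. */
    long sum_goto(long n) {
        long acc = 0;
    top:
        if (n <= 0) goto done;
        acc += n;
        n--;
        goto top;
    done:
        return acc;
    }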

  • Not exactly. This is more an artefact of language design.

    If you convert everything to continuation-passing style, then every function call, tail call, recursion, etc. is just as expensive (and expressive) as a GOTO. This is, incidentally, the main "trick" or lightbulb moment in the classic Cheney on the MTA paper by Henry Baker [1].

    Now if we're talking specifically about C, then absolutely! But while this intuition holds for C, it isn't universally true, and it doesn't have to be.
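
    A hedged C sketch of the idea (an accumulator-passing variant rather than full CPS, but it shows the same effect): once the recursive call is the last thing the function does, a compiler that eliminates tail calls can compile it as a plain jump.

        /* Direct style: the multiply happens after the call returns,
         * so each invocation needs its own stack frame. */
        long fact(long n) {
            return n == 0 ? 1 : n * fact(n - 1);
        }

        /* Tail-call style: the recursive call is the last thing that
         * happens, so a compiler doing tail-call elimination (gcc and
         * clang typically do at -O2; C itself doesn't guarantee it)
         * can turn it into a goto back to the top. */
        long fact_acc(long n, long acc) {
            return n == 0 ? acc : fact_acc(n - 1, n * acc);
        }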

    [1] https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.54...

Recursion is a form of looping. I think it's clearer if you say imperative algorithms.

  • In what sense is recursion a form of looping?

    I'm thinking of the memory model for each: one mutates some variables (at least some sort of index, unless you're doing an infinite loop) in a single frame, while the other accumulates function calls and stack frames until you bottom out, and then the return values are "folded" back up.

    Semantically, from a caller's perspective, they can achieve exactly the same thing, but aside from that they seem radically different to me. Is there anything else that could lead us to say that recursion is a form of looping?

    • > In computer science, a loop is a programming structure that repeats a sequence of instructions until a specific condition is met.

      That's at least the general definition I've always been most aware of. I don't want to claim it's the most common one, because I don't really have numbers, and who is the authority on comp-sci definitions? But I do feel it is at least a somewhat common academic definition of looping.

      That would mean that recursion is a form of looping. The distinction between recursion and imperative loops would then be a matter of implementation and of the interface exposed to the programmer. Similarly, as others have said, gotos can also be used to implement such loops.

      And in that regard, there are variants of recursion as well, such as tail-call recursion, which has a memory model similar to the one you described and differs from an imperative loop only in the programming interface it exposes (see the sketch below).
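
      A hedged C sketch of that equivalence (C just for familiarity; it assumes the compiler actually eliminates the tail call, which C doesn't guarantee): Euclid's algorithm written both ways.

        /* Imperative loop: the state lives in two mutated locals. */
        long gcd_loop(long a, long b) {
            while (b != 0) {
                long t = a % b;
                a = b;
                b = t;
            }
            return a;
        }

        /* Tail recursion: the same state is threaded through the
         * arguments instead. With tail-call elimination this reuses
         * a single frame, just like the loop. */
        long gcd_rec(long a, long b) {
            return b == 0 ? a : gcd_rec(b, a % b);
        }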

      2 replies →

Does that hold for languages/compilers with tail-call optimisation?

    This isn't really related to your question, but I don't think tail calls can help for Fibonacci, since f(n) branches into two calls, f(n-1) and f(n-2), and each of those branches into two more. At most one of the two can be a tail call, so it can't be done in constant stack space with naive recursion.

    The compiler would either have to memoize, or be extremely clever and start at the base case (0, 1) and then transform the code to use 2x2 matrix exponentiation. I wouldn't have been surprised if GHC Haskell were that clever, but even with -O2, "print $ fibb 10000" doesn't terminate.
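
    For the memoization half of that, a hedged C sketch (fib_memo and its table are my own illustration; it assumes a zero-initialized cache of at least n+1 entries, and that n <= 92 so the result fits in a signed 64-bit long):

        /* Doubly-recursive fib, but each value is computed only once:
         * the cache collapses the exponential call tree to O(n) calls. */
        long fib_memo(int n, long *cache) {
            if (n < 2) return n;
            if (cache[n] == 0)
                cache[n] = fib_memo(n - 1, cache) + fib_memo(n - 2, cache);
            return cache[n];
        }

        /* Usage: long cache[91] = {0}; fib_memo(90, cache); */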

    • Wouldn't this be the recursive version with tail calls?

        (define/contract (fib n)
          (-> nonnegative-integer? nonnegative-integer?)
          ;; Accumulator loop with invariant curr = (fib i); starting
          ;; the count at i = 0 also covers the n = 0 case. Every call
          ;; to fib-recur is a tail call, so it runs in constant stack.
          (let fib-recur ([i 0] [curr 0] [prev 1])
            (cond [(= i n) curr]
                  [else (fib-recur (+ i 1)
                                   (+ curr prev)
                                   curr)])))

      1 reply →

    • Haskell is not that clever (or at least it was not a year ago). The memoized version is much, much faster.

    • If you replaced the recursion with loops but implemented the same algorithm, you'd just have to manage a stack on the heap yourself. I don't think that would be faster.
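
      A hedged C sketch of exactly that (names mine; error handling omitted). Each base case reached contributes fib(0) = 0 or fib(1) = 1, so the leaves of the call tree sum to fib(n): it's the same exponential algorithm, just with the stack managed by hand.

        #include <stdlib.h>

        /* Naive fib without recursion: pending subproblems live on an
         * explicit, heap-allocated stack instead of the call stack. */
        long fib_heap_stack(int n) {
            size_t cap = 64, top = 0;
            int *stack = malloc(cap * sizeof *stack);
            long result = 0;
            stack[top++] = n;
            while (top > 0) {
                int k = stack[--top];
                if (k < 2) { result += k; continue; }
                if (top + 2 > cap) {        /* grow before pushing two */
                    cap *= 2;
                    stack = realloc(stack, cap * sizeof *stack);
                }
                stack[top++] = k - 1;
                stack[top++] = k - 2;
            }
            free(stack);
            return result;
        }

      Same asymptotics as the recursive version plus allocator traffic, so agreed: it's unlikely to be faster.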

  • At least for Racket, which does have TCO, the for loop is really a macro that expands into a tail-recursive loop. I'm not sure about other languages with more aggressive optimization, though.

And loops don't cause stack overflows in normal operation. I've run into that before and now avoid recursion.