
Comment by eru

14 hours ago

> Practically the day after I learned about tail recursion in CS class, I learned that almost all recursive calls can be translated to iteration, that in many cases the iterative version is easier to scan, is as fast if not faster, and that they can usually handle much much larger inputs than recursion due to avoiding stack overflow.

What do you mean by 'easier to scan'? I find (most) loops [0] hard to read, because they typically involve mutable state.

When properly implemented, tail calls are as fast as gotos and loops, and don't blow up any stack. (Not all languages are implemented with a stack in any case.)
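
For a concrete illustration, here's a minimal tail-recursive sketch in Haskell (the names sumTo/go are mine, and it assumes GHC with the BangPatterns extension so the accumulator stays strict):

    {-# LANGUAGE BangPatterns #-}

    -- The recursive call to `go` is the last thing it does, so it runs in
    -- constant stack space; the bang keeps the accumulator from piling up thunks.
    sumTo :: Int -> Int
    sumTo n = go 0 1
      where
        go !acc i
          | i > n     = acc
          | otherwise = go (acc + i) (i + 1)

    main :: IO ()
    main = print (sumTo 10000000)  -- 50000005000000, and no stack overflow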

However, you do have one point:

Most of the time, we don't use recursion directly in our programs even in a language like Haskell or Scheme. Instead we define a 'combinator' that encapsulates the specific, limited recursion scheme that we want, and then use that one. This is very similar to how people generally don't use goto directly in their programs.
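
Roughly what I mean, as a sketch (the primed names are just to avoid clashing with the Prelude): the explicit recursion lives in exactly one combinator, and everything else is defined on top of it without recursing directly.

    -- All the explicit recursion is confined to this one combinator
    -- (a hand-rolled foldr); the functions below never recurse themselves.
    myFoldr :: (a -> b -> b) -> b -> [a] -> b
    myFoldr _ z []       = z
    myFoldr f z (x : xs) = f x (myFoldr f z xs)

    sum' :: [Int] -> Int
    sum' = myFoldr (+) 0

    length' :: [a] -> Int
    length' = myFoldr (\_ n -> n + 1) 0

    map' :: (a -> b) -> [a] -> [b]
    map' f = myFoldr (\x ys -> f x : ys) []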

You might be familiar with the combinators 'map', 'filter' and perhaps 'reduce'/'foldr'. You can reinterpret the common loop types as just such recursion combinators, ones that weaker languages have to bake in as syntax because they aren't expressive enough to define them in a user-level library. And indeed, Haskell has Control.Monad.Loops (https://hackage.haskell.org/package/monad-loops-0.4.3/docs/C...), which gives you exactly the common loop types as a library.
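
For instance, with monad-loops installed, something like this should do (whileM_ takes a monadic condition and a monadic body; the counter example is my own, not from the docs):

    import Control.Monad.Loops (whileM_)  -- from the monad-loops package
    import Data.IORef

    -- An ordinary while-loop, except that whileM_ is a plain library
    -- function rather than syntax baked into the language.
    main :: IO ()
    main = do
      counter <- newIORef (0 :: Int)
      whileM_ (fmap (< 5) (readIORef counter)) $ do
        n <- readIORef counter
        print n
        writeIORef counter (n + 1)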

Some examples of less common combinators are e.g. 'unfold', or the various combinators for traversing trees or graphs.
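
Data.List.unfoldr is the standard-library version of 'unfold': you give it a step function that either stops or produces one element plus the next seed. A tiny example (countdown is just an illustrative name):

    import Data.List (unfoldr)

    -- Build a list from a seed: Nothing stops, Just (x, seed') emits x and continues.
    countdown :: Int -> [Int]
    countdown = unfoldr step
      where
        step n
          | n <= 0    = Nothing
          | otherwise = Just (n, n - 1)

    main :: IO ()
    main = print (countdown 5)  -- [5,4,3,2,1]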

[0] The foreach-loop over some sequence is less headache-inducing than e.g. a while-loop.