Comment by neilv
5 years ago
Scheme has had tail calls as a basic idiom (some call it Proper Implementation of Tail Calls; others call it Tail Call Optimization) forever, and I keep them in mind any time I'm implementing anything nontrivial in Scheme.
There's no special syntax -- there's just the notion of tail positions, from which tail calls can occur. For example, both arms of an `if` form are in tail position if the `if` itself is. And if you introduce a sequencing block in one of those arms, the last position in the block is a tail position. (A specialized IDE like DrRacket can indicate tail positions, and DrRacket will even draw arrows on hover, tracing the tail positions back to the top of the code.)
When implementing a state machine, for example, a state transition is simply a function application (call) to the new state (optionally with arguments), where the called function represents a state. It's satisfying to realize how elegant something that used to be messy can be, and you can imagine it being very fast (even if your Scheme implementation isn't quite up to matching a C or Rust compiler that can recognize and special-case tail calls).
(For the state machine example, compare to more conventional approaches in C: "state ID" variables, `switch` statements on those, and record-keeping for additional state. Or doing it in data, with the state ID as an index into arrays, again with any additional record-keeping. Or lots of function calls, when you didn't really need the time overhead and stack usage of function calls with full returns.)
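To make the state-machine idiom concrete, here is a minimal sketch transliterated into Python (the state names and the even-number-of-ones language are invented for illustration). Python does not eliminate tail calls the way Scheme does, so a small trampoline stands in: each state function "tail-calls" the next state by returning it, and the driver loop keeps applying states without growing the stack.

```python
def run(state, *args):
    """Trampoline: keep applying state functions until one returns a result.

    In Scheme, the states would simply tail-call each other directly;
    the trampoline only exists because Python lacks tail-call elimination.
    """
    while callable(state):
        state, args = state(*args)
    return state

# DFA recognizing binary strings with an even number of 1s.
def even(s, i):
    if i == len(s):
        return True, ()                      # accept: even count of 1s
    return (odd if s[i] == "1" else even), (s, i + 1)

def odd(s, i):
    if i == len(s):
        return False, ()                     # reject: odd count of 1s
    return (even if s[i] == "1" else odd), (s, i + 1)
```

Each transition is just "return the next state function plus its arguments", which mirrors the Scheme version where the transition is a direct tail call, with no state-ID variable or `switch` in sight.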
Any functional programming language needs TCO, since there are no statements (everything is an expression).
Or you could avoid (semantically) having a stack at all, as Haskell does.
I don't see how that could be done... Is it a result of laziness?
Recursive descent parsers are, in a way, state machines as well. Writing something like a CSV or JSON parser in Scheme is so much nicer than having to do it the old-fashioned way.
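A small sketch of that idea: a recursive-descent parser for one CSV line, written in Python for the sake of something runnable here (the helper names `parse_line`, `parse_field`, and `parse_quoted` are made up for illustration). In Scheme, the calls between the parsing functions that sit in tail position would be proper tail calls, just like the state transitions above.

```python
def parse_line(line, i=0):
    """Parse a single CSV line into a list of fields."""
    field, i = parse_field(line, i)
    if i < len(line) and line[i] == ",":
        # Not a tail call as written; with an accumulator argument
        # it would be one, and Scheme would run it in constant stack.
        return [field] + parse_line(line, i + 1)
    return [field]

def parse_field(line, i):
    if i < len(line) and line[i] == '"':
        return parse_quoted(line, i + 1, [])   # dispatch to quoted-field "state"
    j = i
    while j < len(line) and line[j] != ",":
        j += 1
    return line[i:j], j

def parse_quoted(line, i, out):
    if i >= len(line):                         # unterminated quote: best effort
        return "".join(out), i
    if line[i] == '"':
        if i + 1 < len(line) and line[i + 1] == '"':
            return parse_quoted(line, i + 2, out + ['"'])  # escaped "" quote
        return "".join(out), i + 1             # closing quote
    return parse_quoted(line, i + 1, out + [line[i]])
```

Each parsing function corresponds to a parser state, and moving between them is just another function call -- the same shape as the state-machine example.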