Comment by zelphirkalt

1 day ago

I have my doubts about any CS class/lecture that teaches that the "iterative version is easier to scan". Might just be the bias or inexperience of the lecturer. By now I find recursion to often be easier to read than some for loop, with its loop header and counter that I need to think about and update in my mind. And then in many languages the loop does not even have a return value, because it is a statement rather than an expression. Meeehhh, very meehhh in many languages. Not all, but many.

I think maybe in languages like Ruby or Smalltalk a loop can be more readable, because they structure it as messages to objects rather than as special keywords in the language.

> I have my doubts about any CS class/lecture that teaches that the "iterative version is easier to scan".

Well then you’re in luck, because that was not an academic statement but a professional opinion - mine.

You can’t optimize the patterns you can’t see, because they’re obscured by accidental complexity. And iteration is a frequent trick I use to surface deeper patterns.

  • Things like DFS add a lot of noise that gets in the way of seeing the pattern, IMO. But then again, if explicit stack management is the pattern you want to see, I suppose the iterative versions are clearer.

    • I think this is in the same space as 'imperative shell, functional core'. What ends up happening is that in a tree or DAG you keep swapping back and forth between decision and action, and once the graph hits about 100 nodes, only people who are very clever and very invested still understand it.

      A big win I mentioned elsewhere involved splitting the search from the action, which resulted in a more Dynamic Programming-style solution that avoided the need for a cache and for dealing with complex reentrancy issues. I'm sure there's another name for this, but I've always just called it Plan and Execute.

      Plans are an easy 'teach a man to fish' situation: sit at one breakpoint and determine whether the bug is in the execution, the scan, or the input data. You don't need to involve me unless you think you found a bug or a missing feature. And because you have the full task list at the beginning, you can also make decisions about parallelism that are mostly independent of the data structures involved.

      It's not so much enabling things that were impossible as enabling things that were improbable. Nobody was ever going to fix that code in its previous state. Now I had, and there were options for people to take it further if they chose.
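For concreteness, here is a minimal sketch of the two DFS styles being compared above. The graph and node names are made up for illustration; neither version is from the original code being discussed.

```python
# Contrast recursive DFS (call stack does the bookkeeping) with the
# iterative version (stack management is explicit and visible).
# The adjacency-list graph is a made-up example.

def dfs_recursive(graph, node, visited=None):
    """Recursive DFS: traversal order falls out of the call structure."""
    if visited is None:
        visited = []
    visited.append(node)
    for neighbor in graph.get(node, []):
        if neighbor not in visited:
            dfs_recursive(graph, neighbor, visited)
    return visited

def dfs_iterative(graph, start):
    """Iterative DFS: the stack is an explicit, inspectable data structure."""
    visited, stack = [], [start]
    while stack:
        node = stack.pop()
        if node not in visited:
            visited.append(node)
            # Push neighbors in reverse so the order matches the recursive version.
            for neighbor in reversed(graph.get(node, [])):
                if neighbor not in visited:
                    stack.append(neighbor)
    return visited

graph = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
assert dfs_recursive(graph, "a") == dfs_iterative(graph, "a")
```

Which version reads as "less noise" is exactly the disagreement in the thread: the recursive one hides the stack, the iterative one surfaces it.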
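The Plan-and-Execute split described above can be sketched roughly as follows. All function and task names here are hypothetical, invented for illustration; the point is only the shape: the traversal produces the full task list first, and the action runs over it afterward.

```python
# Hypothetical sketch of "Plan and Execute": separate the search (plan)
# from the action (execute). The DAG and the action are made-up examples.

def plan(dag, start):
    """Walk the DAG and return the complete task list before acting on anything."""
    ordered, seen, stack = [], set(), [start]
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        ordered.append(node)
        stack.extend(dag.get(node, []))
    return ordered  # the whole plan is inspectable at a single breakpoint

def execute(tasks, action):
    """Run the action over the precomputed plan; no reentrancy to reason about."""
    return [action(task) for task in tasks]

dag = {"build": ["test", "lint"], "test": ["deploy"], "lint": [], "deploy": []}
tasks = plan(dag, "build")
results = execute(tasks, str.upper)
```

Because `tasks` exists as plain data before anything runs, a debugger stopped between the two calls can tell you whether a bug lives in the scan, the execution, or the input, and the execute phase can be parallelized without touching the traversal.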