Comment by williamdclt

1 day ago

I'll agree that explicit loops are easier to debug, but that comes at the cost of being harder to write _and_ read (need to keep state in my head) _and_ being more bug-prone (because mutability).

I think it's a bad trade-off; most languages out there are moving away from it

There's actually one more interesting plus for for-loops that's not obvious at first: a for-loop lets you perform a single memory pass instead of multiple. If you're processing a large enough list this makes a significant difference, because memory accesses are relatively expensive (the difference is far from trivial: a loop can be made e.g. 10x more performant by optimising memory accesses alone).

So for a large list, code like

for i, value := range source { result[i] = value * 2 + 1 }

would be 2x faster than a loop like

for i, value := range source { intermediate[i] = value * 2 }

for i, value := range intermediate { result[i] = value + 1 }
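To make the two variants concrete, here's a runnable Go sketch (the slices and values are made up for illustration); the fused version reads and writes each element exactly once, while the two-pass version walks a whole intermediate slice again:

```go
package main

import "fmt"

// fused computes value*2 + 1 in a single memory pass:
// one read and one write per element.
func fused(source []int) []int {
	result := make([]int, len(source))
	for i, value := range source {
		result[i] = value*2 + 1
	}
	return result
}

// twoPass does the same work in two passes, materialising an
// intermediate slice and touching memory twice per element.
func twoPass(source []int) []int {
	intermediate := make([]int, len(source))
	for i, value := range source {
		intermediate[i] = value * 2
	}
	result := make([]int, len(source))
	for i, value := range intermediate {
		result[i] = value + 1
	}
	return result
}

func main() {
	fmt.Println(fused([]int{1, 2, 3}))
	fmt.Println(twoPass([]int{1, 2, 3}))
}
```

Both produce the same values; the difference only shows up in memory traffic once the slices are large.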

  • Depending on your iterator implementation (or lack thereof), the functional style boils down to your first example.

    For example, Rust iterators are lazily evaluated with early exits (when filtering data), so it's your first form but as optimized as possible. OTOH Python's map/filter/etc may very well return a full list each time, like your intermediate. [EDIT] Python returns generators, so it's sane.

    I would say that any sane language allowing functional-style data manipulation will have it as fast as manual for-loops. (That's why Rust bugs you with .iter()/.collect().)
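The laziness point can be sketched in Go too, using the function-iterator shape that Go 1.23's range-over-func is built on (the `fromSlice`/`mapSeq` helpers here are illustrative, not a standard library API): chaining two maps still makes only one pass over the data, with no intermediate slice.

```go
package main

import "fmt"

// Seq is a minimal push-style iterator: it calls yield for each
// element until yield returns false (the Go 1.23 range-over-func shape).
type Seq func(yield func(int) bool)

// fromSlice lazily yields the elements of xs.
func fromSlice(xs []int) Seq {
	return func(yield func(int) bool) {
		for _, x := range xs {
			if !yield(x) {
				return
			}
		}
	}
}

// mapSeq applies f to each element as it flows through:
// no intermediate slice is ever materialised.
func mapSeq(s Seq, f func(int) int) Seq {
	return func(yield func(int) bool) {
		s(func(x int) bool { return yield(f(x)) })
	}
}

func main() {
	// Each element passes through *both* stages before the next is read,
	// so the chained maps amount to a single fused loop.
	pipeline := mapSeq(mapSeq(fromSlice([]int{1, 2, 3}),
		func(x int) int { return x * 2 }),
		func(x int) int { return x + 1 })
	out := []int{}
	pipeline(func(x int) bool { out = append(out, x); return true })
	fmt.Println(out)
}
```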

  • This is a very valid point. Loops also let you play with the iteration itself for performance, e.g. deciding to skip n steps when a condition is met.

    I only encounter these upsides once every few years, when preparing for leetcode interviews, where this kind of optimization is needed to achieve acceptable results.

    In daily life, however, most of the chunks of data to transform fall into one of these categories:

    - small size, where readability and maintainability matters much more than performance

    - living in a db, and being filtered/reshaped by the query rather than code

    - being chunked for atomic processing in a queue or similar (usually when importing a big chunk of data).

    - the operation itself is a standard algorithm that you just consume from a standard library that handles the loop internally.

    Much like trees and recursion, most of us don't flex that muscle often. Your mileage might vary depending on domain, of course.

    • There's also the fact that Rust does a _lot_ of compiler optimizations on map/filter/reduce, and it's trivially parallelizable in many cases.
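The "skip n steps" trick mentioned above can be sketched with a plain for-loop, where you control the index directly, something a map/filter pipeline can't express. The data layout here is entirely hypothetical: every nonzero "header" value is followed by 3 padding slots we know we can jump over.

```go
package main

import "fmt"

// sumHeaders sums the header slots of a made-up layout in which each
// nonzero value is followed by 3 padding slots. When a header is found,
// the loop jumps 4 steps ahead instead of testing each padding slot.
func sumHeaders(data []int) int {
	sum := 0
	for i := 0; i < len(data); {
		sum += data[i]
		if data[i] != 0 {
			i += 4 // condition met: skip the padding entirely
		} else {
			i++
		}
	}
	return sum
}

func main() {
	fmt.Println(sumHeaders([]int{5, 0, 0, 0, 7, 0, 0, 0, 9})) // 5 + 7 + 9
}
```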