Comment by hnedeotes

4 years ago

I'm not against using for loops when what you need is an actual loop. The thing is, most of the time, for loops were previously doing something for which there are constructs that express exactly what was being done - though not in all languages.

For instance, map - I know that it will return a new collection with exactly the same number of items as the iterable being iterated. When used correctly it shouldn't produce any side-effects outside the mapping of each element.
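
For example, in Python (a minimal sketch; the variable names are mine, not from the thread):

```python
nums = [3, 1, 4, 1, 5]

# map yields exactly one output element per input element,
# so the result always has the same length as the input.
doubled = list(map(lambda x: x * 2, nums))

assert len(doubled) == len(nums)
```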

In some languages you now have for x in y, which in my opinion is quite OK as well, but to change the collection it still has to mutate it, and it's not immediately obvious what it will do.

If I see a reduce I know it will iterate again a definite number of times, and that it will (usually) return something other than the original iterable, reducing a given collection into something else.
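
A minimal Python sketch of that shape change (names are mine): reduce folds an iterable of one type into a single value of, usually, another type.

```python
from functools import reduce

words = ["for", "loops", "vs", "reduce"]

# Fold a list of strings into a single int - the result is
# "something else than the original iterable".
total_length = reduce(lambda acc, w: acc + len(w), words, 0)

assert total_length == 16
```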

On the other hand, forEach should tell me that we're only interested in side-effects.

When these things are used with their semantic context in mind, it becomes slightly easier to immediately grasp the scope of what they're doing.

On the other hand, with a for loop (especially the common, old-school one) you really never know.

I also don't understand what is complex about the functional counterparts. for (initialise_var; condition; post/pre action) can only seem simpler, in my mind, due to familiarity, as it can have a lot of small nuances that impact how the iteration goes. Although, to be honest, most of the time it isn't complex either - it just seems slightly more complex, with less contextual information about the intent behind the code.

For me, code with reduce is less readable than a loop. With a loop everything is obvious, but with reduce you need to know what the arguments in the callback mean (I don't remember), and then think about how the data are transformed. It's an awful choice in my opinion. Good old loop is so much better.

  • I disagree entirely. In most imperative programming languages, you can shove any sort of logic inside a loop: more loops, more branches, creating new objects; it's all fair game.

    Fold and map in functional languages are often much more restrictive, in a sense. For example, with lists, you reduce a collection down to a single value ([a] -> a), or produce another collection with a map ([a] -> [a]). So map and fold etc. are much more restrictive, and that's what makes them clearer.

  • If you are used to imperative programming, then yes.

    But in a for loop anything can happen - from a map to a reduce to a mix, to whatever convoluted logic the dev comes up with.

    • Technically you can implement map as reduce ;)

      But yes - for me

          (defn factorial [n]
            (reduce * (range 1 (inc n))))
      

      is slightly more readable than

          def factorial(n):
              result = 1
              for i in range(2, n + 1):
                  result *= i
              return result
      

      I mean in this case the name kinda makes it obvious anyway :)

      If the operation is conceptually accumulating something over the whole collection and if it's idiomatic in the language I'm using - I will use reduce. Same with map-y and filter-y operations.

      But if I have to do some mental gymnastics to make the operation fit reduce - for loop it is. Or a generator expression, in Python's case.
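
The "you can implement map as reduce" remark above can be sketched in Python (the function name is mine, not from the thread):

```python
from functools import reduce

def map_via_reduce(f, xs):
    # map is just a fold whose accumulator is the output list:
    # start from [] and append f(x) for each element.
    return reduce(lambda acc, x: acc + [f(x)], xs, [])

print(map_via_reduce(lambda x: x * x, [1, 2, 3]))  # [1, 4, 9]
```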

    • Indeed. I rarely encounter basic loops in code reviews now, so seeing one is definitely a small alert to do an extra thorough review of that part.

    • And it is usually very easy and straightforward to see what is going on inside.

  • It can definitely happen, but I think more often than not the others are more readable.

    To be honest this seems to be a familiarity thing:

    > but with reduce you need to know what arguments in a callback mean

    If I didn't know for, it would be mind-boggling what those 3 things, separated by semicolons, are doing. It doesn't look like anything else in the usual language(s) it's implemented in. It's the same with switch.

    The only helpful thing both of them have, for and switch, is that non-FP languages that offer them usually use the same *C* form across the board, whereas reduce's args and the callback's args vary a bit more between languages, and especially between mutable and immutable langs.

    I still prefer most of the time the functional specific counterparts.

> When used correctly it shouldn't produce any side-effects outside the mapping of each element.

But that's just a social convention. There's nothing stopping you from doing other things during your map or reduce.

In practice, the only difference between Map, Reduce and a For loop is that the first two return things. So depending on whether you want to end up with an array containing one item for each pass through the loop, "something else", or nothing, you'll use Map, Reduce or forEach.

You can still increment your global counters, launch the missiles or cause any side effects you like. "using it correctly" and not doing that is just a convention that you happen to prefer.
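
A quick Python illustration of that point (names are mine): nothing stops the mapped function from mutating outside state.

```python
launched = []

def double_and_log(x):
    launched.append(x)  # side effect smuggled into the map
    return x * 2

result = list(map(double_and_log, [1, 2, 3]))

assert result == [2, 4, 6]     # map still returns its values...
assert launched == [1, 2, 3]   # ...but the side effect happened anyway
```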

  • That is true (less so in FP languages, though), but the for loop doesn't stop you either. I still prefer the functional constructs most of the time: I think it's a reasonable expectation to use the most intention-revealing construct available, and it's also easier to spot "code smells" that way. The exceptions I make are when there are significant speed concerns/gains, when what you're doing is an actual loop, or when using a loop improves readability.

    (and I haven't read the article so not even sure I agree with the example there, this was more in general terms)

Yeah, I'd much rather have something like

  congruence_classes m l = map (\x -> filter ((x ==) . (`mod` m)) l) [0..m-1]

than

  def congruence_classes(m, l):
      sets = []
      for i in range(m):
          sets += [[]]
      for v in l:
          sets[v % m] += [v]
      return sets

For-in is very neat and nice, but it still takes two loops and mutation to get there. Simple things are sometimes better as one-line maps, and provability is higher for functional maps too.

Same one-liner in (slightly uglier) Python:

  def congruent_sets(m, l):
    return list(map(lambda x: list(filter(lambda v: v % m == x, l)), range(m)))

  • The one-liner is far less readable, and under the hood it's actually worse: for each value in [0, m) you're iterating l and filtering it, so it's O(n^2) code now instead of O(n). That mistake would be far easier to notice if you had written the exact same algorithm with loops: one would see a loop inside a loop, and the O(n^2) alarms would already be ringing.

    Ironically, it's a great example of why readability is so much more important than conciseness and one liners.
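
For reference, the single-pass bucketing that the loop version above already performs can be written directly (the function name follows the thread's congruent_sets; this is just a sketch):

```python
def congruent_sets(m, l):
    # One pass over l: each value goes straight into its
    # residue-class bucket, so this is O(n + m) rather than O(n * m).
    sets = [[] for _ in range(m)]
    for v in l:
        sets[v % m].append(v)
    return sets

assert congruent_sets(3, [0, 1, 2, 3, 4, 5]) == [[0, 3], [1, 4], [2, 5]]
```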

    • I agree, and despite being a fan of FP (kind of a convert from OO) I often wonder about the readability of FP code.

      One idea I have is that FP code is often not modularized, and violates the single-responsibility part of SOLID by doing several things in one line.

      There are seldom named subfunctions whose names describe the purpose of the function. Take lambdas as an example: I have to parse the lambda's code to learn what it does. Even simple filtering might be improved (kinda C#):

          var e = l.Filter(x => x.StartsWith("Comment"));

      vs.

          var e = l.Filter(ElementIsAComment);

      or even using an extension method:

          var e = l.FindComments();

      Sorry, I could not come up with a better example - I hope you get my point...

    • True, it is computationally worse, though it's O(nm), so fixing m at compile time, as in the practical use I had, turns it into O(n) in practice.

      But that much is immediately obvious, since it's mapping a filter - that is, it has a loop within a loop.

      I did consider the second one to also take quadratic time, though. I forgot that in Python getting list elements by index is O(1) instead of O(n), which is what I'm personally used to with (linked) lists.

      It's also true that you can replace the filter with

        [ v | v <- l, v `mod` m == x ]
      

      but that's not as much fun as

        (x ==) . (`mod` m)

      I just love how it looks and it doesn't personally seem any less clear to me, maybe a bit more verbose.


  • Why not just use a list comprehension?

      def congruent_sets(m, l):
        return [[v for v in l if v % m == i] for i in range(m)]

> For instance, map - I know that it will return a new collection of exactly the same number of items the iterable being iterated has.

Unless you're using Perl - "Each element of LIST may produce zero, one, or more elements in the generated list".