
Comment by William_BB

6 months ago

E.g. `std::ranges::for_each`, where the lambda captures a bunch of variables by reference. I would hope the compiler optimizes this to be the same as a regular loop, but can I be certain of that, compared to a good old for loop?
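
For concreteness, a minimal sketch of the pattern being asked about (the summing example and names are made up for illustration); the hope is that both versions compile to the same code once the lambda is inlined:

```cpp
#include <algorithm>
#include <vector>

// Sketch: a reference-capturing lambda passed to std::ranges::for_each...
int sum_ranges(const std::vector<int>& v) {
    int total = 0;                                        // captured by reference below
    std::ranges::for_each(v, [&](int x) { total += x; });
    return total;
}

// ...versus the "good old" loop it is hoped to be equivalent to.
int sum_loop(const std::vector<int>& v) {
    int total = 0;
    for (int x : v) total += x;
    return total;
}
```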

To be fair, std::ranges seems like the biggest mistake the committee has allowed into the language recently.

Effectively, other than rewriting older iterator-based algorithms to use the new ranges iterators, I just don't use std::ranges... Likely the compiler cannot optimise it as well (yet), and all the edge cases are not worked out yet. I also find it quite difficult to reason about compared to the older iterator-based algorithms.

for_each would take a lambda and call it on each element between the iterator pair. If the compiler can optimise it, it becomes a loop; if it can't, it becomes a function call in a loop, which probably isn't much worse... If for some reason the lambda needs to allocate per iteration, it's going to be a performance nightmare.
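
A rough sketch of the shape a for_each-style algorithm boils down to (not the actual library implementation); the per-element call is exactly what the compiler is expected to inline away:

```cpp
// Simplified sketch of a for_each-style algorithm (not the real library code).
template <typename It, typename Fn>
Fn for_each_sketch(It first, It last, Fn fn) {
    for (; first != last; ++first)
        fn(*first);   // one call per element; if inlined, this is just the loop body
    return fn;        // std::for_each returns the function object
}
```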

Would it really be much harder to take that lambda, move it to a templated function that takes an iterator, and call it the old-fashioned way?
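
As a sketch of that alternative (hypothetical names, with the body left as a placeholder): hoist the lambda's body into a function template and drive it from an ordinary loop.

```cpp
// Hypothetical illustration of the manual alternative: the lambda body
// becomes a function template taking an iterator.
template <typename It>
void process(It it) {
    // ... whatever the lambda would have done with *it ...
}

template <typename It>
void run(It first, It last) {
    for (; first != last; ++first)
        process(first);   // the "old-fashioned" per-iterator call
}
```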

  • Yeah, the std::ranges implementation is a bit of a mess. The inability to start from a clean slate, because of backward-compatibility concerns, limits what is possible. I think most people can see how you could implement comparable functionality with nicer properties from a clean sheet of paper. It is the curse of being an old language.

    • There are sane approaches to dealing with this - e.g. epochs.

      This approach wasn’t proven by the time C++11 was ready, but for C++20 and beyond it’s a shame they didn’t go with it.

Just ban the ranges lib; it is hot garbage anyway. Compilers are able to optimize lambdas fairly well nowadays (when inlined), so I wouldn't be that concerned.