Comment by christophilus

4 years ago

I’ve seen it at least as many times, too. Most of the time, the optimization is pretty obvious. Adding an index to a query, or using basic dynamic programming techniques, or changing data structures to optimize a loop’s lookups.
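
(For the last of those, a minimal sketch of the kind of change I mean, with made-up names: a list-backed membership test inside a loop swapped for a set.)

```python
# Hypothetical sketch (names made up): the "change a data structure" case.
# Membership tests against a list scan it linearly, so this is O(n * m):
def slow_unseen(items, seen_list):
    return [x for x in items if x not in seen_list]  # O(m) lookup per item

# Building a set once makes each lookup O(1) on average, so the whole
# thing drops to roughly O(n + m):
def fast_unseen(items, seen_list):
    seen = set(seen_list)                            # one-time O(m) build
    return [x for x in items if x not in seen]       # hashed O(1) lookup
```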

I can’t think of a counterexample, actually (where a brutally slow system I was working on wasn’t fairly easily optimized into adequate performance).

It is nice when a program can be significantly sped up by a local change like that, but this is not always the case.

To go truly fast, you need to unleash the full potential of the hardware, and doing that can require re-architecting the system from the ground up. For example, both postgres and clickhouse can run `select sum(field1) from table group by field2`, but clickhouse will be 100x faster, and no amount of micro-optimization in postgres will change that.
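
(A rough sketch of the intuition, in Python rather than either engine's internals, with made-up names and data: the same group-by sum over row-oriented objects versus contiguous per-column arrays. The gap comes from memory layout and vectorized execution, not from any local micro-optimization.)

```python
# Illustration only; neither database actually works like this.
# Same "sum(field1) ... group by field2" over two memory layouts.
import numpy as np
from collections import defaultdict

rows = [{"field1": i % 7, "field2": i % 3} for i in range(1_000_000)]

def group_sum_row_store(rows):
    # Row layout: touch every row object, chase pointers, dispatch per value.
    totals = defaultdict(int)
    for r in rows:
        totals[r["field2"]] += r["field1"]
    return dict(totals)

# Column layout: each field is one contiguous array, so the aggregation is
# a few vectorized, cache-friendly passes instead of a per-row loop.
field1 = np.array([r["field1"] for r in rows])
field2 = np.array([r["field2"] for r in rows])

def group_sum_column_store(field1, field2):
    totals = {}
    for key in np.unique(field2):
        totals[int(key)] = int(field1[field2 == key].sum())
    return totals
```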

  • No argument from me. I’m just pointing out that it’s wrong to assert that programmers are generally incorrect when they say something can be optimized. Many times in my career, optimizations were trivial to implement, and the impact was massive. There have been a few times when the optimization was impossible without a major re-architecture, but those times were rare.

Yah, was going to say something like this. I've fixed these problems a few times, and I don't really think any of them were particularly hard. That's because if you're the first person to look at something with an eye to performance, there is almost always some low-hanging fruit that will gain a lot of perf. Being the 10th person to look at it, or attacking something that is widely viewed as already at its algorithmic limit, OTOH, is a different problem.

I'm not even sure it takes an "experienced" engineer; one of my first linux patches was simply to remove a goto, which dropped an exponential factor from something and changed it from "slow" to imperceptible.