Comment by beaconstudios

4 years ago

No, spending time optimising areas of the code that will never become bottlenecks is the waste of time.

But if you know the performance of an algorithm up front, you don't have to spend any time optimizing it in the first place. You just know what to do, because you know the performance.

For instance: suppose you are building a CRUD app on a SQL database. Do you (a) add indexes for important queries as you go, or (b) ignore indexes and later profile to see which queries are slow? Of course you just create the indexes in the first place. Going with (b) would mean that instead of having a fast app out of the gate, you have an app that gets slower over time and requires additional dev time to debug and improve. Profiling and fixing performance problems is a massive waste of everyone's time if the problem could have been dodged when the code was being written.
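
A minimal sketch of option (a), using Python's built-in sqlite3 module and a hypothetical users table (the schema, column names, and query here are illustrative, not from the thread): the index for the lookup we already know will be hot is created alongside the table, and EXPLAIN QUERY PLAN shows the query switching from a full table scan to an index search.

```python
import sqlite3

# Hypothetical CRUD-style schema; looking users up by email is a query we
# already know will be hot, so it gets an index at creation time.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT, name TEXT)")

QUERY = "SELECT name FROM users WHERE email = ?"

def plan(sql):
    # EXPLAIN QUERY PLAN reports whether SQLite will scan the table or use an index.
    return [row[3] for row in con.execute("EXPLAIN QUERY PLAN " + sql, ("a@example.com",))]

print(plan(QUERY))  # e.g. ['SCAN users'] -- full scan, gets slower as the table grows

con.execute("CREATE INDEX idx_users_email ON users(email)")
print(plan(QUERY))  # e.g. ['SEARCH users USING INDEX idx_users_email (email=?)']
```

The decision costs one line at the point where the table is created, versus a profiling session and a migration later.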

It's different if the optimization is significant engineering effort. Then, yes, put it off till it's needed. But most aren't, in my experience: most optimizations are totally simple in hindsight, and the code should have been written that way in the first place.

  • Of course you index hot columns up front in that case, but I think where we disagree is that you want to generalise "optimise up front" into a blanket rule, do or don't, whereas I consider whether it's applicable in the circumstances. C programs tend to use a lot of system calls, and are also usually easy to test rapidly with large data. So rather than profile every individual stdlib function I call, I'll just profile the very resource-intensive paths with different scales of data and see if anything pops off. If R* had profiled their JSON parser with a 1 GB file, they would've found this bug (a sketch of that kind of scale test follows after this sub-thread).

    I don't disagree unilaterally with "optimise up front"; I disagree with unilateralism.

    • > I disagree with unilateralism

      I mean, that's my point too. There's a camp of people who will say "don't prematurely optimize! profile and tune the hotspots later" as a blanket rule and I think that's dumb. And I thought you were espousing that.

      1 reply →
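
A minimal sketch of the scale test described in the reply above (hypothetical code, not R*'s actual parser): a deliberately naive parser that copies the whole remaining input for every item it handles, the same class of accidental quadratic behaviour as the bug being discussed, timed at doubling input sizes.

```python
import time

def naive_parse(text):
    # Deliberately naive: split(",", 1) copies everything after the comma,
    # so each item processed costs O(remaining input) and the whole parse
    # is O(n^2) -- invisible on small files, crippling on a 1 GB one.
    items = 0
    rest = text
    while "," in rest:
        _item, rest = rest.split(",", 1)
        items += 1
    return items

# Scale test: double the input a few times and watch how the runtime grows.
# Roughly 2x per doubling is linear; roughly 4x per doubling is quadratic.
for n in (5_000, 10_000, 20_000, 40_000):
    data = "x," * n
    start = time.perf_counter()
    naive_parse(data)
    print(f"{n:>6} items: {time.perf_counter() - start:.3f}s")
```

A handful of runs like this on the resource-intensive paths exposes super-linear growth without profiling every individual library call.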

This is how you get software where everything is just fast enough to be tolerable but still annoyingly slow.

  • No, not paying attention to performance at all is how that happens. Optimising all your code in advance is just being frivolous with your time.

Bugfixing isn't optimisation

  • The line between bug fixing and faffing around is context-dependent, and there are efficient and inefficient ways both to fix bugs and to faff around. Profiling every stdlib function is probably both inefficient and faffing around, unless your circumstances dictate it's a worthwhile and effective effort (i.e. there aren't better alternatives to reach the goal).