Comment by zmmmmm

12 years ago

I feel torn about the "premature optimization" stuff. On the one hand it's clearly true that we're terrible at predicting bottlenecks and it makes no real sense to second-guess them at any fine level of detail. On the other hand, I think about products that I love to use and they are all fast. I think about the first iPhone and how utterly crucial that first user experience was. Did they achieve that by just writing giant gobs of code and then circling back afterwards to fix a few hotspots? I think about how Chrome was such a delight after Firefox got slower and slower and more bogged down. Clearly, Chrome was written from the ground up to avoid that. I am not so sure you can really optimise for performance after the fact, at least not universally. There are times when you have to build it into the very fabric of a project right from the start. Once a project has a giant mass of moderately slow code, there's nothing you can do to get better performance except rewrite it.

Yes, that is exactly what Apple did with the iPhone: they wrote the interface, then tested it over and over again, making optimisations on every iteration. This was all before it was released to the general public.

The article is talking about optimising before you can prove where the problems are. Apple had excellent testing, which showed where a lot of the issues were. Some issues may well not have been discovered until a wider audience had access, though.

Testing can happen before you release a product, you know? You new-fangled startup MVP types only think good testing happens on paying customers. Fuck you guys.

To me premature optimization goes far beyond just speed: it's a judgment call, and it affects everything we do as programmers.

Should I spend more time here making this variable readable? Should I structure this script to be maintainable, or is it going to be of no use in 2+ weeks?

Sometimes the answer is yes; sometimes the answer is no. The key is not to prematurely optimize. You have to use your own good judgment and an iterative problem-solving process to figure out what the metaphorical bottleneck is and fix it (I say metaphorical because, again, it's not just about speed).

  • Yes - I think this is what I was trying to express. I guess I think the message about premature optimization is much more subtle than usually accounted for. It's about decisions to introduce complexity, to sacrifice design integrity, etc. in favour of performance.

    But it is not about devaluing a careful discipline, an intuitive sense of the performance of code, and rigour in how you think about performance in your work. All those things are still incredibly important.

I think there's some truth to this. The assumption that there will be low hanging fruit in the profiler when you eventually start worrying about performance isn't always true.

Also, the more you worry about writing performant code, the easier it becomes to write it that way in the first place without sacrificing readability or maintainability. It's a myth that high-performance code is always more complex and error-prone. If performance is always put off as something to address later, you can end up, as you describe, with 'a giant mass of moderately slow code'.
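To make the "no readability sacrifice" point concrete, here's a minimal Python sketch (my own example, not from the thread): the faster way to build up a string is also the idiomatic, more readable one.

```python
def build_slow(parts):
    # Worst-case quadratic: each += may copy the whole string so far.
    out = ""
    for i, p in enumerate(parts):
        if i:
            out += ","
        out += p
    return out

def build_fast(parts):
    # Linear, and arguably clearer: the idiomatic str.join.
    return ",".join(parts)

assert build_slow(["a", "b", "c"]) == build_fast(["a", "b", "c"]) == "a,b,c"
```

The fast version is shorter and harder to get wrong, which is exactly the case where "performant by default" costs nothing.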

What I do agree with is that picking the correct data structures has more of an impact than algorithm noodling, though that has its place.
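A hypothetical Python illustration of the data-structure point (function names are mine): the same deduplication logic, where swapping a list for a set takes the work from O(n²) to O(n) without touching the algorithm at all.

```python
def dedupe_with_list(items):
    # O(n^2): 'in' on a list scans every element already seen.
    seen, out = [], []
    for x in items:
        if x not in seen:
            seen.append(x)
            out.append(x)
    return out

def dedupe_with_set(items):
    # O(n): identical structure, but membership tests are hash lookups.
    seen, out = set(), []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

data = list(range(2000)) * 2
assert dedupe_with_list(data) == dedupe_with_set(data) == list(range(2000))
```

Neither version is more "clever" than the other; the container choice alone carries the performance difference.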

I think the takeaway is not "do not optimise at all, ever"; it's about not just optimising your code, but optimising the time you spend optimising it. Do not spend two weeks micro-optimising a loop that, in reality, has no effect on the end user's experience and ultimately just makes the code base much more difficult to maintain.

What you want to do is spend most of your FINITE and EXPENSIVE development time optimising the things that actually matter to the user experience.

Are you sure you aren't just trying to convince yourself that C++ is the right choice?

  • It's completely orthogonal to that. It's not about what language you are using but about how you write the code in that language. The "premature optimization" thing can easily be interpreted as "it's OK to be lazy", and I think that's a misinterpretation.

    Just making up an example: often I need to return multiple values from a function in languages that don't directly support that. The pure and efficient way might be to make a new data structure and return that. The "lazy" way is to just stuff the values in a map / dictionary / hashtable and return that instead. The cost of those key-value lookups is enormous compared to a direct field lookup, but I can rationalize it as avoiding "premature optimization". If you end up with a whole code base doing this, though, eventually the whole thing is operating an order of magnitude slower than it should be. (It's also going to be a nightmare to maintain and refactor, but that's another story ...)

    • It's not okay to be lazy. But it's wise to prioritize architecture over performance until you have numbers to show you where you should put necessary optimizations. In my experience, optimized code is almost always harder to work with, so there better be a good reason to write it that way.

      It's a lot easier to optimize well-architected code than to re-architect optimized code.

    • If you find yourself regularly having to force that kind of behaviour out of a language that doesn't support it, you're using the wrong language or you have a bad design.

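The map-vs-dedicated-structure trade-off described in the multiple-return-values comment above might be sketched in Python like this (the names and the `Setting` type are hypothetical; in compiled languages a field access is a fixed offset, which is where the cost gap that commenter describes comes from):

```python
from dataclasses import dataclass

def parse_lazy(line):
    # "Lazy" multi-value return: stuff the fields into a dict.
    name, _, value = line.partition("=")
    return {"name": name, "value": value}

@dataclass
class Setting:
    name: str
    value: str

def parse_structured(line):
    # Dedicated structure: direct attribute access, plus a natural
    # place for types and docs; refactoring tools can find every use.
    name, _, value = line.partition("=")
    return Setting(name, value)

assert parse_lazy("debug=true")["name"] == "debug"
assert parse_structured("debug=true").value == "true"
```

The structured version is barely more code, and every caller gets `result.value` instead of a string-keyed lookup that typos silently break.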