Comment by fl0ki

5 hours ago

I only agree if you have a bounded dataset size that you know will never grow. If it can grow in the future (and if you're not sure, you should assume it can), not only will many data structures and algorithms scale poorly along the way, they will eventually become the dominant bottleneck. By the time the system no longer meets requirements and you get a trouble ticket, you're under time pressure to develop, qualify, and deploy a new solution. You're much more likely to encounter regressions when doing this under time pressure.

If you've been monitoring properly, you buy yourself time before it becomes a real problem, but in my experience most developers who don't anticipate load growth also don't monitor properly.

I've seen a "senior software engineer with 20 years of industry experience" put code into production that needed 30-minute timeouts for an HTTP response only two years after initial deployment. That is not a typo: 30 minutes. I had to take over and rewrite their "simple" code to stop the VP-level escalations our org was receiving because of this engineering philosophy.

> You're much more likely to encounter regressions when doing this under time pressure.

Nothing here suggests you should wait until you're under pressure to optimize, only that you should optimize after you have measured. Benchmark tests are still best written during the development cycle, not while running hot in production.
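
To make that concrete, here is a minimal sketch of what a benchmark written during development might look like in Go; the names `lookup` and `buildFixture` and the fixture size are hypothetical stand-ins for whatever the naive implementation actually exposes:

```go
package lookup

import (
	"fmt"
	"testing"
)

// lookup is a stand-in for the naive implementation: a linear scan.
func lookup(records []string, key string) bool {
	for _, r := range records {
		if r == key {
			return true
		}
	}
	return false
}

// buildFixture generates a dataset deliberately larger than today's data,
// so growth-sensitive behaviour shows up in the numbers early.
func buildFixture(n int) []string {
	records := make([]string, n)
	for i := range records {
		records[i] = fmt.Sprintf("key-%d", i)
	}
	return records
}

func BenchmarkLookup(b *testing.B) {
	data := buildFixture(100_000)
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		lookup(data, "key-99999") // worst case: the last element
	}
}
```

Dropping this into a `_test.go` file and running `go test -bench=Lookup -benchmem` gives you numbers long before anything runs hot in production.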

Starting with the naive solution helps you quickly ensure that your API is sensible and that your testing/benchmarking is in good shape before you start poking at the hard bits, where you're much more likely to screw things up. It also gives you a baseline score to prove that your optimizations are actually necessary and are an improvement.
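
Continuing the same hypothetical, in the same `_test.go` file as the sketch above, the naive baseline and a candidate optimization can sit side by side as sub-benchmarks, so each run shows whether the added complexity actually pays for itself:

```go
// BenchmarkLookupVariants keeps the naive scan as the baseline and
// benchmarks a hypothetical indexed variant against it.
func BenchmarkLookupVariants(b *testing.B) {
	data := buildFixture(100_000)

	// Candidate optimization: precompute a map index once.
	index := make(map[string]struct{}, len(data))
	for _, r := range data {
		index[r] = struct{}{}
	}

	b.Run("naive-baseline", func(b *testing.B) {
		for i := 0; i < b.N; i++ {
			if !lookup(data, "key-99999") {
				b.Fatal("missing key")
			}
		}
	})
	b.Run("indexed", func(b *testing.B) {
		for i := 0; i < b.N; i++ {
			if _, ok := index["key-99999"]; !ok {
				b.Fatal("missing key")
			}
		}
	})
}
```

If the indexed case doesn't beat the baseline by enough to matter, the naive version stays, and you've proven that with data rather than instinct.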