Comment by kevincox

5 hours ago

I actually disagree with Rule 3! While numbers are usually small, being fast on small cases generally isn't as important as performing acceptably on large cases. So I prefer to take the better big-O, so that performance doesn't degrade unacceptably under real-world edge-case stress (the type of workload that the devs often don't experience but your big customers will).

Of course there is a balance to this: the engineering time to implement both options is an important consideration. But given that both algorithms are relatively easy to implement, I will default to the one that is faster at large sizes even if it is slower at common sizes. I suspect there is an implicit assumption that "fancy" algorithms take longer and are harder to implement, but in many cases both algorithms are in the standard library and just need to be selected. If this post defined "fancy" in terms of actual time to implement rather than speed at common sizes, I would be more inclined to agree with it.

I wrote an article about this a while back: https://kevincox.ca/2023/05/09/less-than-quadratic/

Rule 3 was true in 1989: back then computers were so slow and had so little RAM that most things you did were only reasonable for a small number of inputs. Today we almost always have large amounts of input, so it's different.

  • This very much depends on where you work... and basically isn't true for most people. It's extremely true for some people.

    • Rule 3 is still very much real. Fancy fast algorithms often have other trade-offs. The best algorithm for the job is the one that meets all requirements well... Big-O is one aspect, the data is another, and determinism of the underlying mechanisms involved (dynamic memory allocation, etc.) can be another.

      It is important to remember that the art of software engineering (like all engineering) lies in balancing all these different requirements, not just in OPTIMIZE BIG-O.

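The balance described in the replies above is often resolved with a threshold: use the simple O(n²) routine for tiny inputs (good constants, no extra allocation) and switch to the O(n log n) routine beyond it. A hypothetical sketch, with the threshold value chosen arbitrarily for illustration:

```python
# Hybrid dispatch sketch (assumed threshold, not a tuned value).
SMALL = 32

def insertion_sort(items):
    # In-place O(n^2) sort: no extra allocation, very fast on tiny lists.
    for i in range(1, len(items)):
        key = items[i]
        j = i - 1
        while j >= 0 and items[j] > key:
            items[j + 1] = items[j]
            j -= 1
        items[j + 1] = key
    return items

def adaptive_sort(items):
    # Below the threshold, constants dominate; above it, big-O dominates.
    if len(items) < SMALL:
        return insertion_sort(items)
    return sorted(items)  # Timsort, O(n log n) worst case
```

This is the same pattern real standard libraries use internally (e.g. switching to insertion sort for small subarrays), which is one reason "just select the library algorithm" is usually good enough.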