Comment by andromeduck

8 years ago

Big O is about doing the big-picture, high-level analysis without wasting time on every little detail like the complexity of addition and multiplication. High-performance implementation, on the other hand, is all about twiddling the constant factors in your algorithm(s) for your specific hardware architecture/topology.

See:

http://kaldewey.com/pubs/FAST__SIGMOD10.pdf

The parent's claim implies that on a machine with an abstract cache, the complexity of hashmaps could be proven to be something else. You may be able to recover the right behavior by modeling a machine with infinite memory, divided up into an infinite tower of caches where each level is P times bigger and Q times slower.

You definitely have to take the time complexity of addition/multiplication into account when doing big O analysis. But in most cases they are O(1). It is the constant factors you don’t worry about.
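A toy illustration of the constant-factor point (a sketch in Python; the function names and data sizes are just made up for this example): both routines below are O(n), but one loop runs in the interpreter and the other in C, so only their constants differ.

```python
import timeit

def sum_loop(xs):
    # O(n) with a large constant: each iteration goes through the interpreter.
    total = 0
    for x in xs:
        total += x
    return total

def sum_builtin(xs):
    # Also O(n), but the loop runs in C, so the constant is much smaller.
    return sum(xs)

data = list(range(100_000))
assert sum_loop(data) == sum_builtin(data)

t_loop = timeit.timeit(lambda: sum_loop(data), number=20)
t_builtin = timeit.timeit(lambda: sum_builtin(data), number=20)
# Same asymptotic class; the measured ratio is pure constant factor.
print(f"loop/builtin time ratio: {t_loop / t_builtin:.1f}")
```

Big O analysis treats these two as identical; a high-performance implementation cares about nothing but the gap between them.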

  • Yeah but the same could be said about memory accesses. You can always pessimize and assume it misses LLC, at which point it becomes a constant factor.

    • Yes you also have to take memory access into account if you are doing a proper big O analysis, but not for the reasons you implied. I don’t think you are thinking about time complexity correctly. The question is not how long does a memory access take, but how does it scale as the number of bits to be fetched increases. The naive answer for a Von Neumann arch is O(N) where N is the number of bits, although the answers here regarding the speed of light bounds are interesting.

      For addition and multiplication the current best time complexity is some long expression with logs and other factors I can't remember, but it is not constant.

      However, in most cases such as sorting involving memory accesses or mult/addition we are using a fixed bit size, so we can correctly think of them as being O(1).
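      You can see both regimes directly with Python's arbitrary-precision ints (a quick sketch; timings are machine-dependent and illustrative only): word-sized multiplication behaves like O(1), while million-bit multiplication clearly scales with N.

      ```python
      import timeit

      def per_op_mul_time(bits, reps):
          # Time multiplying two dense integers of `bits` bits,
          # returning average seconds per multiplication.
          a = (1 << bits) - 1
          b = (1 << bits) - 3
          return timeit.timeit(lambda: a * b, number=reps) / reps

      # Word-sized operands: cost is effectively constant per multiply.
      t_word = per_op_mul_time(63, reps=10_000)
      # Million-bit operands: cost grows with the bit length N
      # (CPython falls back to Karatsuba, roughly O(N^1.58)).
      t_big = per_op_mul_time(1_000_000, reps=10)
      print(f"63-bit: {t_word:.2e}s  1M-bit: {t_big:.2e}s")
      ```

      With a fixed bit width you only ever see the first case, which is why treating mult/addition as O(1) is usually correct.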