Comment by timerol

8 years ago

The article in question: https://news.ycombinator.com/item?id=12383012

It's fun to think about, but the presentation played fast and loose with a few concepts, which sparked some lively discussion. I really like the idea of counting random memory accesses as costing more than sequential accesses when analyzing algorithms in a big-O-type notation.
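To make the point concrete, here's a toy microbenchmark (just a sketch; the function names are mine, absolute timings depend on your machine, and pure Python adds pointer indirection of its own, so the gap is much smaller than in C):

```python
import random
import time

def sequential_sum(data):
    # Visit elements in memory order: cache lines and the hardware
    # prefetcher make each access cheap on average.
    total = 0
    for x in data:
        total += x
    return total

def random_sum(data, order):
    # Visit the same elements in a shuffled order: once the data
    # outgrows the caches, many accesses become cache misses.
    total = 0
    for i in order:
        total += data[i]
    return total

if __name__ == "__main__":
    n = 1_000_000  # larger n makes the gap more visible
    data = list(range(n))
    order = list(range(n))
    random.shuffle(order)

    t0 = time.perf_counter()
    s1 = sequential_sum(data)
    t1 = time.perf_counter()
    s2 = random_sum(data, order)
    t2 = time.perf_counter()

    assert s1 == s2  # same work, different access pattern
    print(f"sequential: {t1 - t0:.3f}s  random: {t2 - t1:.3f}s")
```

Classic big-O treats both loops as O(n) with the same constant; a memory-aware model charges the shuffled traversal more.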

This idea goes back to when HDDs were the norm and sequential accesses were much, much faster than random accesses.

So people developed algorithms that maximise locality of reference, such as B-trees. An interesting line of work in this vein is cache-oblivious algorithms :)
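The cache-oblivious trick is to recurse until the subproblem fits in *whatever* cache exists, without ever naming the cache size. A minimal sketch (my own illustration, not from the thread) is the recursive matrix transpose: split the larger dimension in half until tiles are small, so every cache level is used well automatically:

```python
def transpose(a, b, r0, r1, c0, c1):
    # Cache-obliviously write b[j][i] = a[i][j] for r0<=i<r1, c0<=j<c1.
    # The base-case tile size (16) is a cutoff for recursion overhead,
    # not a tuned cache parameter.
    if r1 - r0 <= 16 and c1 - c0 <= 16:
        for i in range(r0, r1):
            for j in range(c0, c1):
                b[j][i] = a[i][j]
    elif r1 - r0 >= c1 - c0:
        m = (r0 + r1) // 2          # split the taller dimension
        transpose(a, b, r0, m, c0, c1)
        transpose(a, b, m, r1, c0, c1)
    else:
        m = (c0 + c1) // 2          # split the wider dimension
        transpose(a, b, r0, r1, c0, m)
        transpose(a, b, r0, r1, m, c1)

# usage: transpose a 3x4 matrix into a 4x3 one
a = [[i * 4 + j for j in range(4)] for i in range(3)]
b = [[0] * 3 for _ in range(4)]
transpose(a, b, 0, 3, 0, 4)
```

In Python the constant factors swamp the effect, but the same recursion in C is the standard textbook example: it matches the I/O-optimal number of cache-line transfers without knowing the line size.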