
Comment by eru

2 years ago

> I mean I do vaguely recall that there are sometimes ways to guarantee close to optimal asymptotic complexity by running all possible programs in parallel [...]

Yes, that's a neat parlour trick, but utterly impractical. It blew me away the first time I learned about it.

(Though it would have been even more impressive a few decades ago: in the meantime I'd learned enough of the building blocks to work out the algorithm immediately, once I heard someone mention that it's possible. Still: very, very neat.)
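
For the curious, the dovetailing idea looks roughly like this toy Python sketch. The candidate "programs", their names, and the scheduler are all made up for illustration; the real construction enumerates every program for a universal machine and verifies any answer before accepting it. Candidate i gets roughly a 2^-i share of the steps, so if the best candidate needs T steps, the whole scheme finishes in O(T) steps, just with an enormous constant factor:

```python
def slow_search(target, data):
    """Linear scan, written as a generator that yields once per step of work."""
    for i, x in enumerate(data):
        yield None                      # one "step" of work, no answer yet
        if x == target:
            yield ("found", i)
            return

def fast_search(target, data):
    """Binary search (assumes sorted data), also yielding once per step."""
    lo, hi = 0, len(data) - 1
    while lo <= hi:
        yield None
        mid = (lo + hi) // 2
        if data[mid] == target:
            yield ("found", mid)
            return
        if data[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1

def universal_search(candidates):
    """Interleave candidates; candidate i gets roughly a 2**-i share of the steps."""
    n = len(candidates)
    live = list(enumerate(candidates))
    while live:
        survivors = []
        for i, gen in live:
            for _ in range(2 ** (n - i)):   # earlier candidates get bigger slices
                try:
                    out = next(gen)
                except StopIteration:
                    break                   # candidate finished without answering
                if out is not None:
                    return out              # first reported answer wins
            else:
                survivors.append((i, gen))
        live = survivors
    return None

data = sorted(range(0, 1000, 3))
# The binary search answers after a handful of steps; the scheduler returns its
# result long before the linear scan would have got there on its own.
print(universal_search([slow_search(999, data), fast_search(999, data)]))
```

That constant factor is exponential in the index of the winning program, which is why it stays a parlour trick rather than something you'd ever ship.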

> With caching you can guarantee 90% of the performance of the best possible algorithm by increasing the RAM tenfold.

Sorry, I thought you were just comparing caching schemes. I also thought our prototypical example was reading, e.g., from disk, so you have to pay that cost on every cache miss, and that we were talking about burning CPU cycles to decide on a better cache eviction strategy (but even an optimal strategy will have to hit the disk when the cache is too small).
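
To make that scenario concrete, here is a small Python sketch (the trace and the function names are my own, nothing from the thread): it replays one access trace through LRU and through Belady's clairvoyant OPT rule and counts misses, since a miss is exactly where you'd pay for the disk read:

```python
from collections import OrderedDict
import random

def lru_misses(trace, capacity):
    cache, misses = OrderedDict(), 0
    for block in trace:
        if block in cache:
            cache.move_to_end(block)          # hit: mark as most recently used
        else:
            misses += 1                       # miss: this is where the disk read happens
            if len(cache) >= capacity:
                cache.popitem(last=False)     # evict the least recently used block
            cache[block] = True
    return misses

def opt_misses(trace, capacity):
    """Belady's clairvoyant rule: evict the block reused farthest in the future."""
    cache, misses = set(), 0
    for t, block in enumerate(trace):
        if block in cache:
            continue
        misses += 1                           # even OPT pays the disk here
        if len(cache) >= capacity:
            def next_use(b):                  # deliberately naive O(n) lookahead
                try:
                    return trace.index(b, t + 1)
                except ValueError:
                    return float("inf")       # never used again: perfect victim
            cache.remove(max(cache, key=next_use))
        cache.add(block)
    return misses

random.seed(0)
trace = [random.randint(0, 49) for _ in range(5000)]   # toy access pattern, 50 blocks
for cap in (5, 10, 50):
    print(cap, lru_misses(trace, cap), opt_misses(trace, cap))
```

OPT never misses more often than LRU, but once the capacity is well below the 50-block working set even OPT keeps missing, so no amount of CPU spent on a cleverer eviction policy saves you from the disk.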

It seems like you want to talk about a situation where you use the cache to avoid recomputing some result with CPU cycles, e.g. when investigating the Collatz conjecture. That's also a very interesting application, but not what I had in mind originally.
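
For what it's worth, that recompute-avoidance flavour would look roughly like this memoised Collatz sketch (again entirely my own toy example, not something from the thread):

```python
from functools import lru_cache

@lru_cache(maxsize=None)            # unbounded memo table; a real run would bound it
def collatz_steps(n: int) -> int:
    """Number of Collatz steps from n down to 1, caching every intermediate value."""
    if n == 1:
        return 0
    nxt = n // 2 if n % 2 == 0 else 3 * n + 1
    return 1 + collatz_steps(nxt)

# Starting value with the longest trajectory below 100,000; later calls reuse the
# cached tails of earlier trajectories instead of re-walking them.
print(max(range(1, 100_000), key=collatz_steps))
```

Here the cost you avoid on a hit is CPU time rather than a disk read, and the interesting question becomes how large you let the memo table grow and what you evict from it, which loops back to the eviction-strategy discussion above.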