Comment by whatever1

18 days ago

Lookup tables with precalculated things for the win!

In fact I don’t think we would need processors anymore if we were centrally storing all of the operations ever done in our processors.

Now fast retrieval is another problem for another thread.

Reminds me of when I started working on storage systems as a young man and suggested pre-computing every possible 4KB block once and just using pointers to the correct block as data is written, until someone pointed out that the number of unique 4KB blocks (2^32768) far exceeds the number of atoms in the universe.
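
A quick sanity check of those magnitudes (the ~10^80 atoms figure is just the commonly cited ballpark, not something from this thread):

  import math

  # Each 4 KB block is 4096 * 8 = 32768 bits, so there are 2^32768 distinct blocks.
  bits_per_block = 4096 * 8
  digits = int(bits_per_block * math.log10(2)) + 1
  print(digits)  # 9865 -- i.e. about 10^9864 distinct blocks, vs. ~10^80 atoms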

  • It seems like you weren’t really that far off from implementing it; you just need a 4 KB pointer to point to the right block. And in fact, that is what all storage systems do!

  • Reminds me of when I imagined brute-forcing every possible small picture: simply 256 shades of gray for each pixel x (640 x 480 = 307200 pixels) = 78 million possible pictures.

    Actually I don't have any intuition for why that's wrong, except that if we concatenate the rows into one long row then the picture can be considered as a number 307200 digits long in base 256, and then I see that it could represent 256^307200 possible different values. Which is a lot: https://www.wolframalpha.com/input?i=256%5E307200

    • 78 million is how many pixels would be in 256 different pictures with 307200 pixels each. You're only counting each pixel once for each possible value, but you actually need to count each possible value of each pixel once per possible combination of all of the other pixels.

      The number of possible pictures is indeed 256^307200, which is an unfathomably larger number than 78 million. (256 possible values for the first pixel * 256 possible values for the second pixel * 256 possi...).
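
      A quick check of both numbers, counting digits with logarithms rather than materializing the full power:

        import math

        pixels = 640 * 480                               # 307200
        mistaken = 256 * pixels                          # the "78 million" estimate
        real_digits = int(pixels * math.log10(256)) + 1  # digits in 256^307200
        print(mistaken)      # 78643200
        print(real_digits)   # 739812 -- the real count is around 10^739811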

    • Yeah, I had a similar thought back in the 90s and made a program to iterate through all possible images at a fairly low res. I left it running while I was at school and got home after many hours to find it had hardly got past the first row of pixels! This was a huge eye-opener about how big the possibility space of digital images really is!

    • I think at some point you should have realized that there are obviously more than 78 million possible greyscale 640x480 pictures. There are a lot of intuitive examples, but just think of this:

      https://images.lsnglobal.com/ZFSJiK61WTql9okXV1N5XyGtCEc=/fi...

      If there were only 78 million possible pictures, how could that portrait be so recognizably one specific person? Wouldn't that mean that your entire picture space wouldn't even be able to fit a single portrait of everyone in Germany?

    • I had a friend who had the same idea, but for pixel fonts with only two colors and a 16x16 canvas. It was still 2^256. Watching that thing run and trying to estimate when it would finish made me understand encryption.

  • The other problem is that (if we take literally the absurd proposal of computing "every possible block" up front) you're not actually saving any space by doing this, since your "pointers" would be the same size as the blocks they point to.

    • If you don't do _actually_ every single block then you have Huffman Coding [1].

      I imagine if you have a good idea of the incoming data you could probably do a similar encoding scheme where you use 7 bits to point to a ~512-bit blob and the 8th bit means the next 512 bits couldn't be compressed.

      [1]: https://en.wikipedia.org/wiki/Huffman_coding
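
      A rough sketch of that escape-bit idea, assuming a hypothetical prebuilt table of at most 128 known 64-byte blobs (so a 7-bit index fits in a header byte whose top bit flags an uncompressed literal):

        BLOB = 64  # bytes per chunk, i.e. ~512 bits

        def encode(data: bytes, table: list) -> bytes:
            # table: up to 128 known 64-byte blobs; header byte 0xxxxxxx is an
            # index into it, 1xxxxxxx means the next 64 bytes follow verbatim.
            index = {blob: i for i, blob in enumerate(table)}
            out = bytearray()
            for off in range(0, len(data), BLOB):
                chunk = data[off:off + BLOB]
                if chunk in index:
                    out.append(index[chunk])     # compressed: 1 byte instead of 64
                else:
                    out.append(0x80)             # literal marker
                    out += chunk
            return bytes(out)

        def decode(encoded: bytes, table: list) -> bytes:
            out, i = bytearray(), 0
            while i < len(encoded):
                header = encoded[i]; i += 1
                if header & 0x80:
                    out += encoded[i:i + BLOB]; i += BLOB
                else:
                    out += table[header & 0x7F]
            return bytes(out)

      Whether this pays off is exactly the "worth it" question in the sibling comment: a table hit spends 1 byte for 64 bytes of data, a miss spends 65.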

  • In some contexts, dictionary encoding (which is what you're suggesting, approximately) can actually work great. For example common values or null values (which is a common type of common value). It's just less efficient to try to do it with /every/ block. You have to make it "worth it", which is a factor of the frequency of occurrence of the value. Shorter values give you a worse compression ratio on one hand, but on the other hand it's often likelier that you'll find it in the data so it makes up for it, to a point.

    There are other similar lightweight encoding schemes like RLE and delta and frame of reference encoding which all are good for different data distributions.
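
    For a concrete flavour of the lightweight schemes mentioned, minimal run-length and delta encoders over a list of integers (illustrative sketches only):

      def rle_encode(values):
          # [7, 7, 7, 9] -> [(7, 3), (9, 1)]: good when the data has long runs
          out = []
          for v in values:
              if out and out[-1][0] == v:
                  out[-1] = (v, out[-1][1] + 1)
              else:
                  out.append((v, 1))
          return out

      def delta_encode(values):
          # [100, 101, 103] -> [100, 1, 2]: good for slowly changing sequences
          return values[:1] + [b - a for a, b in zip(values, values[1:])]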

  • The idea is not too far off. You could compute a hash on an existing data block and store the hash-to-data-block mapping. Now you can use the hash anywhere that data block resides, i.e. any duplicate data blocks can use the same hash. That's how storage deduplication works in a nutshell.
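
    A toy version of that, using SHA-256 as the block fingerprint (a sketch of the idea only; real dedup also worries about collisions, reference counting and on-disk layout):

      import hashlib

      class DedupStore:
          def __init__(self):
              self.blocks = {}  # digest -> block bytes, stored once

          def write(self, block: bytes) -> bytes:
              digest = hashlib.sha256(block).digest()
              self.blocks.setdefault(digest, block)  # duplicates share one copy
              return digest                          # callers keep the 32-byte digest

          def read(self, digest: bytes) -> bytes:
              return self.blocks[digest]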

  • The other problem is that to address all possible 4096-byte blocks, you need a 4096-byte address. I suppose we would expect the actual number of blocks computed and reused to be a sparse subset.

    Alternately, have you considered 8 byte blocks?

    If your block pointers are 8-byte addresses, you don't need to count on block sparsity; in fact, you don't even need to have the actual blocks.

    A pointer type that implements self-reads and writes, with null allocations and deletes, is easy to implement incredibly efficiently in any decent type system. A true zero-cost abstraction, if I have ever seen one!

    (On a more serious note, a memory heap and CPU that cooperated to interpret pointers with the top bit set as a 63-bit linear-access/write self-storage "pointer" is an interesting thought.)
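
    That parenthetical is essentially small-value inlining via pointer tagging; a toy sketch of the idea, with Python ints standing in for 64-bit words:

      TAG = 1 << 63  # top bit set: the "pointer" itself is the storage

      def make_ref(value: int, heap: dict) -> int:
          if 0 <= value < (1 << 63):
              return TAG | value       # 63-bit values need no heap block at all
          addr = len(heap) + 1         # anything bigger falls back to a real block
          heap[addr] = value
          return addr

      def deref(ref: int, heap: dict) -> int:
          return (ref & ~TAG) if ref & TAG else heap[ref]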

  • If some blocks are highly repetitive, this may make sense.

    It's basically how deduplication works in ZFS. And that's why it only makes sense when you store a lot of repetitive data, e.g. VM images.

  • We know for a fact that when we disable a processor's caches its performance plummets, so the question is: how much computation is brand-new computation (never seen before)?

    • While true, a small technical nitpick is that the cache also contains data that’s previously been loaded and reused, not just the results of previous computations (e.g. your executable program itself, or a file being processed).

You’re not wrong

Using an LLM and caching eg FAQs can save a lot of token credits

AI is basically solving a search problem, and the models are just approximations of the data, like linear regression or Fourier transforms.

The training is basically your precalculation. The key is that it precalculates a model with billions of parameters, not overfitting with an exact random set of answers hehe

  • > Using an LLM and caching eg FAQs can save a lot of token credits

    Do LLM providers use caches for FAQs, without changing the number of tokens billed to customer?

    • No, why would they? You are supposed to maintain that cache.

      What I really want to know about is caching the large prefixes for prompts. Do they let you manage this somehow? What about Llama and DeepSeek?
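
      For the FAQ case upthread, "maintaining that cache" can be as simple as memoizing whole responses keyed by a normalized question; a sketch, where ask_llm is a stand-in for whatever provider call you actually make:

        cache = {}

        def ask_llm(question: str) -> str:
            raise NotImplementedError  # placeholder for the real, token-billed API call

        def cached_answer(question: str) -> str:
            key = " ".join(question.lower().split())  # crude normalization
            if key not in cache:
                cache[key] = ask_llm(question)        # tokens are spent only on a miss
            return cache[key]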

> if we were centrally storing all of the operations

Community-scale caching? That's basically what pre-compiled software distributions are. And one idea for addressing the common programming-language-design objection "that would be a nice feature, but it's not known how to compile it efficiently, so you can't have it" is highly parallel cloud compilation, paired with a community-scale compiler cache. You might not mind if something takes, say, a day to resolve, if the community only needs it run once per release.
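
The heart of such a shared cache is just content addressing: key each artifact by a hash of everything that can affect the build. A sketch (tools like ccache/sccache do this far more carefully):

  import hashlib

  def build_key(source: bytes, compiler_version: str, flags: list) -> str:
      h = hashlib.sha256()
      h.update(source)
      h.update(compiler_version.encode())
      h.update(" ".join(flags).encode())
      return h.hexdigest()  # identical inputs anywhere in the community -> same key

  # A community-scale cache is then a shared map from build_key to the built artifact.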

https://conwaylife.com/wiki/HashLife is an algorithm for doing basically this in Conway’s Game of Life, which is Turing complete. I remember my first impression being complete confusion: here’s a tick-by-tick simulation too varied and complex to encapsulate in a formula, and you’re telling me I can just skip way into its future?

  • If I read that page correctly, it does this for areas with empty space between them?

    Makes sense. Say you have a pattern (surrounded by empty space) that 'flickers': A-B-A-B-A... etc. Then as long as nothing intrudes, the nth generation is the same pattern as the (n+1,000,000)th generation. Similar for patterns that do a 3-cycle, 4-cycle, etc.

    All you'd need is a) a way to detect repeating patterns, and b) some kind of collision detection between areas/patterns (there's a concept called the 'speed of light' in Life that helps).

    • I don’t fully understand the algorithm, but no, to my understanding it’s much more general than that. In each tick a cell’s state is solely determined by its immediate neighbors, which means the simulation has a “speed of light” of 1 cell per tick: to look N ticks into the future, you need only consider cells within N cells of the area you’re computing, no matter what’s outside that. So, for example, if you want to skip a 10x10 area 100 ticks into the future, you consider a 210x210 area centered on your 10x10, compute it once, then in the future use that 210x210 area as a lookup key for the 10x10 100 ticks into the future. I think HashLife is also somehow doing this on multiple scales at once, along with some other tricks.
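
      Not the real HashLife algorithm (which uses a quadtree and doubles the time step at each level), but the lookup trick itself can be sketched as memoizing the evolution of a patch of cells:

        from functools import lru_cache

        @lru_cache(maxsize=None)
        def step_block(cells: frozenset, n: int) -> frozenset:
            # Naive Life rules, but identical (patch, n) inputs are answered
            # from the cache instead of being re-simulated.
            for _ in range(n):
                counts = {}
                for (x, y) in cells:
                    for dx in (-1, 0, 1):
                        for dy in (-1, 0, 1):
                            if dx or dy:
                                p = (x + dx, y + dy)
                                counts[p] = counts.get(p, 0) + 1
                cells = frozenset(p for p, c in counts.items()
                                  if c == 3 or (c == 2 and p in cells))
            return cells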

> In fact I don’t think we would need processors anymore if we were centrally storing all of the operations ever done in our processors.

On my way to memoize your search history.