Comment by gcr

2 years ago

How is it similar?

There’s been a lot of study going the other direction - using neural networks to aid the entropy prediction in classical compression algorithms - but I’m not seeing the conceptual link between how transformer/attention models work internally and how gzip works internally, beyond “similar words are easy to compress”.

I’m not seeing it, because GPT representations are vectors of fixed size, not varying size.

An LLM - or, well, any statistical model - is about prediction: given some preceding input, what comes next?

One way to measure the accuracy of the model, as in its “intelligence”, is to use its predictions to turn the input into just the differences from those predictions; if it’s good at predicting, there will be fewer differences and the input will compress better.

So seeing how well your model can compress some really big chunk of text is a very good objective measure of its strength, and a way to compare it to the strength of other models.

So a competition is born! :)
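
A toy Python sketch of that link, under the standard assumption that an ideal entropy coder (e.g. an arithmetic coder) spends about -log2 p bits on a symbol the model assigned probability p; the two predictors here are made-up stand-ins, not real models:

    import math
    from collections import Counter

    def code_length_bits(text, predict):
        # Total bits to encode `text` if each character costs -log2 p,
        # the limit an ideal entropy coder approaches.
        return sum(-math.log2(predict(text[:i], ch))
                   for i, ch in enumerate(text))

    # Predictor A: uniform over 256 bytes -- predicts nothing, 8 bits/char.
    uniform = lambda ctx, ch: 1 / 256

    # Predictor B: order-0 character frequencies (a crude stand-in for a
    # trained model; a real LLM would also condition on the context).
    def order0(sample):
        counts, n = Counter(sample), len(sample)
        return lambda ctx, ch: counts[ch] / n

    text = "the quick brown fox jumps over the lazy dog " * 20
    print(code_length_bits(text, uniform) / len(text))       # 8.0 bits/char
    print(code_length_bits(text, order0(text)) / len(text))  # ~4 bits/char

The better the predictor, the fewer bits per character - which is exactly why compressed size works as a yardstick.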

  • Good summary.

    The LLM vs. a static tree makes for some interesting contrasts. A static tree, as emitted by a compression alg, will probably beat an LLM much of the time, because the compressor has full knowledge of the whole stream (or, in gzip's case, the current window). It can look back and say 'hm, the tree I spat out was not that good, let me build a better one'. An LLM does not really have that beforehand knowledge. Using a pre-cooked Huffman tree for all inputs would be more akin to using an LLM (see the sketch after this thread).

    • I would envisage the LLM is allowed to train on each and every input token. So, to begin with, it knows nothing; but to predict the very last token, it has internalised the whole preceding stream.

      Now I wouldn't expect it to be particularly competitive in enwik8 or enwik9, but the question would be: is there any max-model-size and input-length for which it would right now pull ahead and become the best known or at least competitive predictor?

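
To make the static-tree vs. adaptive-model contrast above concrete, here is an idealized Python sketch: it charges -log2 p bits per symbol instead of emitting real Huffman or arithmetic codes, and it ignores the table that a gzip-style coder must also transmit:

    import math
    from collections import Counter

    def static_bits(data):
        # Two-pass, gzip-style: scan the whole stream first, then code
        # every symbol by its global frequency (an idealized static tree).
        counts, n = Counter(data), len(data)
        return sum(-math.log2(counts[s] / n) for s in data)

    def adaptive_bits(data, alphabet=256):
        # One-pass, LLM-style: start knowing nothing and update after each
        # symbol, so the prediction for symbol i uses only symbols before i.
        counts, seen, bits = Counter(), 0, 0.0
        for s in data:
            p = (counts[s] + 1) / (seen + alphabet)  # Laplace smoothing
            bits += -math.log2(p)
            counts[s] += 1
            seen += 1
        return bits

    data = b"abracadabra" * 100
    print(static_bits(data), adaptive_bits(data))

The adaptive coder pays extra bits early on, while it is still "learning" the stream - the beforehand-knowledge gap described above - but it converges towards the static coder on a long enough input, and it never has to ship a table.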

gzip uses LZ77 and Huffman coding, not arithmetic coding with a predictor, so yes, these are not similar.
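
For contrast, a deliberately naive O(n^2) Python sketch of gzip's first stage: LZ77 emits literal bytes and (offset, length) back-references into a sliding window, and Huffman coding then shortens those tokens; there is no probabilistic next-symbol prediction anywhere. (Real gzip/DEFLATE finds matches with hash chains, not this brute-force scan.)

    def lz77_tokens(data, window=32 * 1024, min_match=3):
        # Emit ('lit', byte) or ('match', offset, length), LZ77-style.
        i, out = 0, []
        while i < len(data):
            best_len, best_off = 0, 0
            for j in range(max(0, i - window), i):
                l = 0
                # A match may overlap the current position (run-length trick).
                while i + l < len(data) and data[j + l] == data[i + l]:
                    l += 1
                if l > best_len:
                    best_len, best_off = l, i - j
            if best_len >= min_match:
                out.append(("match", best_off, best_len))
                i += best_len
            else:
                out.append(("lit", data[i]))
                i += 1
        return out

    print(lz77_tokens(b"abcabcabcabc"))
    # [('lit', 97), ('lit', 98), ('lit', 99), ('match', 3, 9)]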