Comment by comex

14 days ago

So how does the 10M token context size actually work?

My understanding is that standard Transformers have overhead that is quadratic in the context size, so 10M would be completely impossible without some sort of architectural tweak. This is not the first model to have a huge context size, e.g. Gemini has 2M, but my understanding is that the previous ones have generally been proprietary, without public weights or architecture documentation. This one has public weights. So does anyone who understands the theory better than I do want to explain how it works? :)
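For a sense of scale, a rough back-of-the-envelope calculation (assuming fp16 scores and looking at just one head of one layer) of what naively materialising the full NxN score matrix would cost at these context lengths:

```python
# Back-of-the-envelope: size of a single naively materialised attention
# score matrix, assuming fp16 (2 bytes/entry), one head of one layer.
BYTES_PER_ENTRY = 2  # fp16

for n in (128_000, 2_000_000, 10_000_000):
    entries = n * n
    gib = entries * BYTES_PER_ENTRY / 2**30
    print(f"N={n:>10,}: {entries:.3e} entries ~= {gib:,.0f} GiB")

# N=   128,000: 1.638e+10 entries ~=     31 GiB
# N= 2,000,000: 4.000e+12 entries ~=  7,451 GiB
# N=10,000,000: 1.000e+14 entries ~= 186,265 GiB
```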

With some architectural modifications, such as FlashAttention and Ring Attention, we never need to "materialise" the NxN matrix, so the memory constraints have not been a real issue for a couple of years now. As for the processing, I suppose that models operating with larger context windows will impose some kind of block sparsity on the attention weights, so they won't have to do the compute for NxN weights either.
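To make the "never materialise NxN" point concrete, here is a minimal single-head NumPy sketch of the tiling idea behind FlashAttention: scores are computed one block-by-block tile at a time and folded into a running (online) softmax, so memory stays linear in N even though total compute is still quadratic. Names and block sizes are purely illustrative.

```python
import numpy as np

def blocked_attention(q, k, v, block=64):
    """Single-head attention computed tile by tile (FlashAttention-style sketch).

    Only a (block x block) tile of scores is ever materialised; a running
    max and running denominator implement the softmax incrementally.
    """
    n, d = q.shape
    out = np.zeros_like(v)
    scale = 1.0 / np.sqrt(d)

    for i in range(0, n, block):
        qi = q[i:i + block] * scale
        m = np.full(qi.shape[0], -np.inf)   # running row max
        l = np.zeros(qi.shape[0])           # running softmax denominator
        acc = np.zeros((qi.shape[0], d))    # unnormalised output accumulator

        for j in range(0, n, block):
            s = qi @ k[j:j + block].T       # (block x block) score tile
            m_new = np.maximum(m, s.max(axis=1))
            alpha = np.exp(m - m_new)       # rescale old state to the new max
            p = np.exp(s - m_new[:, None])
            l = l * alpha + p.sum(axis=1)
            acc = acc * alpha[:, None] + p @ v[j:j + block]
            m = m_new

        out[i:i + block] = acc / l[:, None]
    return out

# Sanity check: matches naive attention up to floating-point error.
rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((256, 32)) for _ in range(3))
s = (q @ k.T) / np.sqrt(32)
w = np.exp(s - s.max(axis=1, keepdims=True))
ref = (w / w.sum(axis=1, keepdims=True)) @ v
assert np.allclose(blocked_attention(q, k, v), ref, atol=1e-6)
```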

A less obvious, but in the limit more serious, problem with such large contexts is the training data. There aren't that many documents with 10M tokens to give to the model at test time, let alone for training. The creators of the IBM Granite model series had to use synthetic data to scale even to 128k tokens during training. Overall this looks more like a marketing statement to me.

Gemini likely uses something based on Ring Attention to achieve its long context sizes. This requires massive inference clusters, and can't be the same approach Llama 4 is using. Very curious how Llama 4 achieves its context length.
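For intuition on what Ring Attention does, here is a toy single-process simulation (plain NumPy, no real multi-device communication, and not Gemini's or Llama 4's actual implementation): every "device" keeps its own query shard, and the key/value shards are rotated around a ring, with each hop folded in via the same online-softmax update, so no single device ever holds the full sequence's KV.

```python
import numpy as np

def ring_attention_sim(q_shards, k_shards, v_shards):
    """Toy Ring Attention: device i owns q_shards[i]; (k, v) shards rotate."""
    n_dev = len(q_shards)
    d = q_shards[0].shape[1]
    scale = 1.0 / np.sqrt(d)

    # Per-device online-softmax state: running max, denominator, accumulator.
    m = [np.full(q.shape[0], -np.inf) for q in q_shards]
    l = [np.zeros(q.shape[0]) for q in q_shards]
    acc = [np.zeros_like(q) for q in q_shards]

    kv = list(zip(k_shards, v_shards))
    for _ in range(n_dev):
        for i in range(n_dev):
            k_blk, v_blk = kv[i]
            s = q_shards[i] @ k_blk.T * scale
            m_new = np.maximum(m[i], s.max(axis=1))
            alpha = np.exp(m[i] - m_new)
            p = np.exp(s - m_new[:, None])
            l[i] = l[i] * alpha + p.sum(axis=1)
            acc[i] = acc[i] * alpha[:, None] + p @ v_blk
            m[i] = m_new
        kv = kv[1:] + kv[:1]  # "send" each KV shard to the next device

    return [a / li[:, None] for a, li in zip(acc, l)]

# Sanity check against naive full attention over the concatenated sequence.
rng = np.random.default_rng(1)
qs = [rng.standard_normal((64, 16)) for _ in range(4)]
ks = [rng.standard_normal((64, 16)) for _ in range(4)]
vs = [rng.standard_normal((64, 16)) for _ in range(4)]
q, k, v = (np.concatenate(x) for x in (qs, ks, vs))
s = q @ k.T / np.sqrt(16)
w = np.exp(s - s.max(axis=1, keepdims=True))
ref = (w / w.sum(axis=1, keepdims=True)) @ v
assert np.allclose(np.concatenate(ring_attention_sim(qs, ks, vs)), ref)
```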

Standard Transformer KV caches are empirically quite sparse. I wonder if they've made some fix along those lines.
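One family of fixes along those lines is KV-cache eviction, e.g. keeping only the cache entries that have received the most attention mass so far (in the spirit of heavy-hitter-style methods). A toy sketch, not anything Llama 4 is documented to do:

```python
import numpy as np

def evict_kv(keys, values, attn_history, budget):
    """Toy KV-cache eviction: keep the `budget` cached positions that have
    received the most cumulative attention mass (real schemes usually also
    protect recent tokens, which this sketch skips).

    keys, values:  (n_cached, d) arrays
    attn_history:  (n_cached,) cumulative attention each position has received
    """
    if keys.shape[0] <= budget:
        return keys, values, attn_history
    keep = np.argsort(attn_history)[-budget:]
    keep.sort()  # preserve positional order of the survivors
    return keys[keep], values[keep], attn_history[keep]

# e.g. shrink a 6-entry cache to 4, dropping the two least-attended positions
k = np.arange(12, dtype=float).reshape(6, 2)
v = k.copy()
hist = np.array([0.9, 0.01, 0.5, 0.02, 0.3, 0.7])
k2, v2, h2 = evict_kv(k, v, hist, budget=4)
print(h2)  # [0.9 0.5 0.3 0.7] -> positions 0, 2, 4, 5 survive
```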

It’s quadratic if you implement the transformer naively, but if you add a KV cache it’s linear compute at the cost of correspondingly linear growth in memory.

  • This is false. The cost of producing a single token is linear, but the cost of producing an entire sequence of length N is still O(N^2) (which is what we have always meant when talking about quadratic cost, not the cost of a single token).
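A quick way to see the triangular sum behind that: with a KV cache, token t only does t query-key dot products, but summing over the whole sequence gives n(n+1)/2, which is still O(n^2). The counts below are just the per-head q·k dot products, ignoring everything else in the model:

```python
# With a KV cache, generating token t attends over t cached positions,
# so per-token cost is linear in t, but the total over a sequence of
# length n is the triangular number n*(n+1)/2, i.e. O(n^2).
def total_qk_dot_products(n):
    return sum(t for t in range(1, n + 1))

for n in (1_000, 10_000, 100_000):
    print(n, total_qk_dot_products(n), n * (n + 1) // 2)
# 1000 500500 500500
# 10000 50005000 50005000
# 100000 5000050000 5000050000
```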