
Comment by lostmsu

6 months ago

This is wrong; intermediate activations are preserved when going forward.

Within a single forward pass, but not from one emitted token to another.

  • What? No. The intermediate hidden states are preserved from one token to another. A token that is 100k positions into the future will still be able to attend to information from the present token's hidden state through the attention mechanism. This is why the KV cache is so big; a rough size estimate is sketched below.
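
    A back-of-envelope calculation in Python of how large that cache gets. All model dimensions here are assumptions (roughly a 7B-class decoder stored in fp16), not figures from this thread:

    ```python
    # Back-of-envelope KV-cache size. All model dimensions are assumptions
    # (roughly a 7B-class decoder in fp16), not facts from the thread.
    n_layers, n_heads, head_dim = 32, 32, 128
    bytes_per_value = 2                      # fp16
    ctx_len = 100_000                        # the "100k tokens" scenario

    # Each token stores one key and one value vector per head, per layer.
    per_token = 2 * n_layers * n_heads * head_dim * bytes_per_value
    print(f"{per_token / 1024:.0f} KiB per token")                     # ~512 KiB
    print(f"{per_token * ctx_len / 1e9:.1f} GB at {ctx_len} tokens")   # ~52 GB
    ```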

    • The KV cache is just that: a cache.

      The inference logic of an LLM remains the same. There is no difference in outcomes between recalculating everything and caching. The only difference is in the amount of memory and computation required to do it.
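
      A minimal single-head sketch (NumPy, toy dimensions, no batching; every name here is illustrative) of that equivalence: serving the last token from a KV cache gives the same output as recomputing every key and value for the prefix from scratch.

      ```python
      import numpy as np

      rng = np.random.default_rng(0)
      T, d = 8, 16                              # toy sequence length / width
      X = rng.normal(size=(T, d))               # embeddings entering the layer
      Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

      def attend(q, K, V):
          # scaled dot-product attention for a single query vector
          scores = (K @ q) / np.sqrt(d)
          w = np.exp(scores - scores.max())
          return (w / w.sum()) @ V

      # 1) Recompute all keys/values for the whole prefix, then attend.
      out_recompute = attend(X[-1] @ Wq, X @ Wk, X @ Wv)

      # 2) Incremental decoding: each token's key/value is computed once
      #    and appended to the cache; the last query reads the cache.
      k_cache, v_cache = [], []
      for t in range(T):
          k_cache.append(X[t] @ Wk)
          v_cache.append(X[t] @ Wv)
      out_cached = attend(X[-1] @ Wq, np.stack(k_cache), np.stack(v_cache))

      assert np.allclose(out_recompute, out_cached)  # same result, less compute
      ```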
