Comment by AntiUSAbah

4 hours ago

So for long context to work well, you need some attention mechanism that makes sure details don't get lost as the amount of context grows.

Or to put it differently: the LLM is trained not just on static data but also on the capability of handling context itself.

Kimi introduced this (https://github.com/MoonshotAI/Attention-Residuals), but I'm pretty sure closed labs like Google have had something like this for a while.

The attention-residuals paper adds attention across layers for the same token, on top of the usual attention across tokens within the same layer, but it doesn't do anything to address the "lost in too much context" problem. At least the number of layers is still low enough that there's probably no equivalent "lost in too many layers" problem yet.
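To make the distinction concrete: the idea is that each token can query its own hidden states from earlier layers, instead of only receiving them through a plain residual sum. Here's a minimal single-head sketch of that cross-layer step, assuming a simplified setup (the function name, shapes, and scaling are my own illustration, not the repo's actual API):

```python
import torch
import torch.nn.functional as F

def cross_layer_attention(layer_outputs, d_model):
    """Attend across layers for each token position (illustrative sketch).

    layer_outputs: list of (batch, seq_len, d_model) tensors, one per
    previous layer. Returns a per-token weighted mix of those layer
    states, i.e. a learned generalization of the residual connection.
    """
    # Stack per-layer states: (batch, seq_len, n_layers, d_model)
    history = torch.stack(layer_outputs, dim=2)
    # Query comes from the most recent layer's state: (B, T, 1, D)
    q = layer_outputs[-1].unsqueeze(2)
    # Keys/values are the earlier-layer states at the SAME position;
    # no attention across token positions happens in this step.
    scores = torch.matmul(q, history.transpose(-1, -2)) / d_model ** 0.5
    weights = F.softmax(scores, dim=-1)  # (B, T, 1, n_layers)
    return torch.matmul(weights, history).squeeze(2)  # (B, T, D)

# Toy usage: three "layers" of hidden states for a 4-token sequence
states = [torch.randn(1, 4, 64) for _ in range(3)]
mixed = cross_layer_attention(states, d_model=64)
```

Note that the softmax runs over the layer axis, not the sequence axis, which is why this helps information flow through depth but does nothing for a token drowning in a long context window.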