korbip 16 days ago

This was done already here as well: https://arxiv.org/abs/2507.04239
cubefox 16 days ago
Sounds interesting, but...
> these models dominate both exponential attention and linear attention at long-context training
There is no exponential attention; standard attention is quadratic. Strange mistake.