Comment by WarmWash
1 day ago
An easy way to make coding benchmarks viable again is to initialize the models with 200k tokens of distracting or unrelated context. Or even just run the tests sequentially in the same context and see how far the model gets before its performance unravels.
These benchmarks are always greenfield, but people want a model that can deal with a rotted context.
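The padding idea above is easy to sketch. Here is a minimal, hypothetical harness (the function names and corpus are illustrative, not from any real benchmark) that prepends roughly 200k whitespace-delimited tokens of unrelated text to each task prompt, so the model has to find the real task inside a long, mostly irrelevant context:

```python
import random


def build_distractor_prefix(corpus, target_tokens, seed=0):
    """Sample unrelated lines from a corpus until the prefix reaches
    roughly target_tokens whitespace-delimited tokens."""
    rng = random.Random(seed)
    lines, count = [], 0
    while count < target_tokens:
        line = rng.choice(corpus)
        lines.append(line)
        count += len(line.split())
    return "\n".join(lines)


def pad_task_prompt(task_prompt, corpus, target_tokens=200_000):
    """Prepend the distractor prefix so the benchmark task sits at the
    end of a long, rotted context rather than a clean greenfield one."""
    prefix = build_distractor_prefix(corpus, target_tokens)
    return f"{prefix}\n\n### ACTUAL TASK ###\n{task_prompt}"
```

A real setup would count tokens with the model's own tokenizer rather than `str.split`, and would draw the distractor text from something plausibly confusing (other codebases, stale conversation turns) rather than random lines.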