Comment by crooked-v
2 years ago
I suspect this is related to whatever tricks they're doing for the (supposed) longer context window. People have noted severe accuracy loss for content in the middle of the context, which to me suggests some kind of summarization step is happening in the background rather than the text being fed to the model verbatim.
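If anyone wants to check the mid-context claim themselves, a rough "needle in the middle" probe is the usual way people measure it. Below is a minimal sketch of that idea; the helper name build_probe, the filler text, and the depth values are all illustrative assumptions on my part, and the actual model call is left out.

```python
# Hypothetical sketch of a mid-context recall probe ("needle in a haystack" style).
# Nothing here is taken from the vendor's implementation; it only shows how one
# might place a fact at varying depths in a long prompt and test recall.

def build_probe(needle: str, filler_sentences: int = 2000, depth: float = 0.5) -> str:
    """Build a long prompt with `needle` inserted at a relative depth (0.0 = start, 1.0 = end)."""
    filler = ["The sky was a uniform shade of grey that afternoon."] * filler_sentences
    position = int(len(filler) * depth)
    filler.insert(position, needle)
    return " ".join(filler)

if __name__ == "__main__":
    needle = "The access code for the vault is 4917."
    question = "What is the access code for the vault?"
    for depth in (0.0, 0.25, 0.5, 0.75, 1.0):
        prompt = build_probe(needle, depth=depth)
        # A real test would send `prompt + "\n\n" + question` to the model and
        # check whether "4917" appears in the reply; that call is omitted here.
        print(f"depth={depth:.2f}  prompt length ~ {len(prompt):,} chars")
```

If recall drops sharply only at intermediate depths while start and end stay accurate, that would at least be consistent with some lossy preprocessing of the middle, though it wouldn't prove summarization specifically.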