Comment by ai-christianson

17 days ago

> * Huge context window

But how well does it actually handle that context window? E.g. a lot of models support 200K context, but the LLM can only really work with ~80K or so of it before it starts to get confused.

It works REALLY well. I have used it to dump in a lot of reference code and then help me write new modules, etc. I have gone up to 200k tokens, I think, with no problems in recall.

  • Awesome. Models that can usefully leverage such large context windows are rare at this point.

    Something like this opens up a lot of use cases.

I'm sure someone will do a haystack test, but from my casual testing it seems pretty good.
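
For anyone who wants to try it themselves: a haystack test just buries a known fact at varying depths in filler text and checks whether the model can recall it. Here's a minimal sketch, assuming an OpenAI-compatible client; the model name, passphrase, and filler are all placeholders:

```python
# Minimal needle-in-a-haystack sketch. Assumes an OpenAI-compatible
# chat endpoint and that OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

NEEDLE = "The secret passphrase is 'amber falcon 42'."
FILLER = "The quick brown fox jumps over the lazy dog. "  # 9 words

def build_haystack(total_words: int, needle_pos: float) -> str:
    """Pad filler to roughly total_words and bury the needle at a
    relative depth (0.0 = start of context, 1.0 = end)."""
    words = (FILLER * (total_words // 9 + 1)).split()[:total_words]
    words.insert(int(len(words) * needle_pos), NEEDLE)
    return " ".join(words)

def run_trial(total_words: int, needle_pos: float) -> bool:
    prompt = (
        build_haystack(total_words, needle_pos)
        + "\n\nWhat is the secret passphrase? Answer with the passphrase only."
    )
    reply = client.chat.completions.create(
        model="gpt-4o",  # placeholder; swap in the model under test
        messages=[{"role": "user", "content": prompt}],
    )
    return "amber falcon 42" in reply.choices[0].message.content.lower()

# Sweep context sizes and needle depths; recall should stay near 100%
# at every depth if the model genuinely uses the whole window.
for words in (5_000, 20_000, 80_000):
    for pos in (0.1, 0.5, 0.9):
        print(words, pos, run_trial(words, pos))
```

The single-needle version is the easy case; asking for several buried facts in one query (as the comment below describes) tends to fall apart much sooner.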

It works okay out to roughly 20-40k tokens. Once the window gets larger than that, it degrades significantly. Needle-in-a-haystack retrieval still works out to that distance, but asking it for multiple things from the document leads to hallucinations for me.

Ironically, GPT-4o works better for me at longer contexts (<128k) than Gemini 2.0 Flash. And out to 1M it's just hopeless, even though you can do it.

My experience is that Gemini works relatively well on larger contexts. Not perfect, but more reliable.