
Comment by Havoc

17 days ago

Been toying with the flash model. It's not the top model, but I think it'll see plenty of use because of the details; it wins on things other than topping the benchmarks:

* Generous free tier

* Huge context window

* Lite version feels basically instant

However:

* Lite model seems more prone to repeating itself / looping

* Very confusing naming, e.g. {model}-latest worked for 1.5, but now it's {model}-001? The lite version has a date appended, the non-lite does not. Then there's exp and thinking exp... the latter of which has a date. wut? (Rough illustration below.)
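
To make the naming mess concrete, here's roughly what the model IDs look like when called through the google-generativeai Python package. The IDs are from memory and may be stale, so treat them as illustrative rather than authoritative:

```python
# Rough illustration of the naming inconsistency; model IDs are from memory
# and may have changed, so treat them as placeholders.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

genai.GenerativeModel("gemini-1.5-flash-latest")              # 1.5 had a "-latest" alias
genai.GenerativeModel("gemini-2.0-flash-001")                 # 2.0 flash uses a numeric suffix
genai.GenerativeModel("gemini-2.0-flash-lite-preview-02-05")  # lite preview carries a date
genai.GenerativeModel("gemini-2.0-flash-exp")                 # experimental: no date
genai.GenerativeModel("gemini-2.0-flash-thinking-exp-01-21")  # thinking experimental: date again
```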

> * Huge context window

But how well does it actually handle that context window? E.g. a lot of models nominally support 200K of context, but can only really work with ~80K or so of it before they start to get confused.
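
One quick way to check is a rough needle-in-a-haystack probe: bury a single fact in a large block of filler and ask for it back. A minimal sketch with the google-generativeai Python package (the filler text, needle, and model ID are just placeholders):

```python
# Crude needle-in-a-haystack check: hide one fact in filler text and see
# whether the model retrieves it. All values here are placeholders.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-2.0-flash-001")

needle = "The secret launch code is 7421."
filler = "The quick brown fox jumps over the lazy dog. " * 10_000  # very roughly 100k tokens

# Drop the needle near the middle of the haystack.
haystack = filler[: len(filler) // 2] + needle + filler[len(filler) // 2 :]

response = model.generate_content(
    haystack + "\n\nWhat is the secret launch code? Answer with just the number."
)
print(response.text)  # should print 7421 if retrieval at this depth works
```

Repeating this at different depths and context sizes, and asking for several buried facts at once, gives a feel for where recall starts to break down.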

  • It works REALLY well. I have used it to dump in a lot of reference code and then help me write new modules, etc. I have gone up to about 200k tokens, I think, with no problems in recall.

    • Awesome. Models that can usefully leverage such large context windows are rare at this point.

      Something like this opens up a lot of use cases.

  • I'm sure someone will do a haystack test, but from my casual testing it seems pretty good

  • It works okay out to roughly 20-40k tokens. Once the window gets larger than that, it degrades significantly. You can do needle-in-the-haystack retrieval out to that distance, but asking it for multiple things from the document leads to hallucinations for me.

    Ironically, GPT-4o works better for me at longer contexts (<128k) than Gemini 2.0 Flash. And going out to 1M is just hopeless, even though you can do it.

  • My experience is that Gemini works relatively well on larger contexts. Not perfect, but more reliable.