
Comment by gizmodo59

1 day ago

My take on long context for many frontier models is that it's not about support: accuracy drops drastically as you increase the context. Even if a model claims to support a 10M-token context, in reality it doesn't perform well when you saturate it. Curious to hear others' perspectives on this.

This is my experience with Gemini. Yes, I really can put an entire codebase and all the docs and pre-dev discussions and all the inter-engineer chat logs in there.

I still see the model becoming more intoxicated as the turn count gets high.

  • I use repomix to pack a full repository as an xml file and it works wonders. System prompt is very simple:

    please don't add any comments in the code unless explicitly asked to, including the ones that state what you changed. do not modify/remove any existing comments as long as they are valid. also output the full files that are changed (not the untouched ones), and no placeholders like "no change here" etc. do not output the xml parts in the output.xml file. focus on the individual files. before and after outputting code, write which file it would be and the path (not as a comment in the code but instead, before and after outputting code).

    Attached is a 400k token xml file, being the output of:

    https://pastebin.com/raw/SH6JHteg

    Main prompt is a general description of the feature needed and PDF exports from figma.

    All done for free in aistudio and I consistently get better results than the people using claude code.
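repomix itself is a Node CLI, and the exact command used above is behind the pastebin link. As a rough illustration of what "packing a repository into one XML file" means, here is a minimal Python sketch; the tag names, file filter, and output shape are assumptions for illustration, not repomix's actual schema:

```python
import os
from xml.sax.saxutils import escape

def pack_repo(root: str, extensions=(".py", ".md", ".txt")) -> str:
    """Walk `root` and pack matching files into one XML string,
    in the spirit of a repomix-style export (schema is made up here)."""
    parts = ["<repository>"]
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in sorted(filenames):
            if not name.endswith(extensions):
                continue
            path = os.path.join(dirpath, name)
            rel = os.path.relpath(path, root)
            with open(path, encoding="utf-8", errors="replace") as f:
                content = f.read()
            # Escape <, >, & so file contents can't break the XML structure.
            parts.append(f'<file path="{escape(rel)}">')
            parts.append(escape(content))
            parts.append("</file>")
    parts.append("</repository>")
    return "\n".join(parts)
```

The point of the single-file format is that the model sees every file's path next to its contents in one attachment, instead of you pasting files piecemeal across turns.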

Agreed. That said, in general a 1M context model has a larger usable window than a 260k context model.