Comment by eurekin

2 days ago

If you're interested in the large-codebase use case... the best approach I've found so far is extended-context models. Using the newest Nemotron3 nano, you can put 1M tokens (roughly 3 MB of text) of a pure code dump into the context (I use repomix --style markdown) and ask questions about it. That's been one of the biggest wow moments I've had with LLMs so far, and a much better experience than any RAG setup I've used.
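
Roughly, the workflow looks like this (a minimal sketch, assuming the model is served behind an OpenAI-compatible endpoint; the base URL, model id, and output filename are placeholders I made up, not something from my actual setup):

```python
# Sketch: pack a repo with repomix, then ask a long-context model about it.
import subprocess
from openai import OpenAI

# Dump the whole repository into one markdown file.
subprocess.run(
    ["repomix", "--style", "markdown", "--output", "repo-dump.md"],
    check=True,
)

with open("repo-dump.md", "r", encoding="utf-8") as f:
    code_dump = f.read()

# Placeholder endpoint/model id -- point these at whatever server
# hosts your extended-context model.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="nemotron-nano",  # placeholder model id
    messages=[
        {"role": "system",
         "content": "You answer questions about the following codebase."},
        {"role": "user",
         "content": code_dump + "\n\nQuestion: Where is request authentication handled?"},
    ],
)
print(response.choices[0].message.content)
```

The point is just that the entire dump goes into the prompt as-is; there's no chunking or retrieval step anywhere.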