Comment by Incipient
2 months ago
Was more of a general comment - I'm surprised there is significant variation between any of the frontier models?
My setup is VS Code with various Python frameworks/libraries (Dash, FastAPI, pandas, etc.), typically passing the 4-5 relevant files in as context.
I'm developing via Docker, so I haven't found a nice way for agents to work.
> I'm surprised there is significant variation between any of the frontier models?
This comment of mine is a bit dated, but even the same model can have significant variation if you change the prompt by just a few words.
https://news.ycombinator.com/item?id=42506554
I would suggest using an agentic system like Cline, so that the LLM can wander through the codebase by itself, do research, and build a "mental model", and then set up an implementation plan. Then you iterate on that and hand it off for implementation. This flow works significantly better than what you're describing.
> LLM can wander through the codebase by itself and do research and build a "mental model"
It can't really do that due to context length limitations.
It doesn't need the entire codebase; it just needs the call map, the function signatures, etc. It doesn't have to include everything in a call, but having access to all of it means it can pick what seems relevant.
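To make the "call map and function signatures" idea concrete, here is a minimal sketch of how such a summary could be extracted from Python source with the standard-library `ast` module. The function name `summarize_module` and the output shape are illustrative assumptions, not part of any tool mentioned in the thread:

```python
import ast

def summarize_module(source: str, module_name: str) -> dict:
    """Extract function signatures and the names each function calls --
    a rough sketch of the compact context described above."""
    tree = ast.parse(source)
    summary = {"module": module_name, "functions": []}
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            args = [a.arg for a in node.args.args]
            # Collect simple-name calls made inside this function body
            calls = sorted({
                c.func.id
                for c in ast.walk(node)
                if isinstance(c, ast.Call) and isinstance(c.func, ast.Name)
            })
            summary["functions"].append({
                "name": node.name,
                "signature": f"{node.name}({', '.join(args)})",
                "calls": calls,
            })
    return summary

src = '''
def fetch(url):
    return download(url)

def process(data):
    cleaned = clean(data)
    return fetch(cleaned)
'''
print(summarize_module(src, "example"))
```

A summary like this for every file is far smaller than the raw code, so it fits in context even for codebases where the full source would not.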
7 replies →
1k LOC is perfectly fine; I did not experience issues with Claude on most (not all) projects around ~1k LOC.
3 replies →
I guess people are talking about different kinds of projects here in terms of project size.