Comment by Incipient

2 months ago

Was more of a general comment - I'm surprised there is significant variation between any of the frontier models?

However: VS Code with various Python frameworks/libraries (Dash, FastAPI, pandas, etc.), typically passing the 4-5 relevant files in as context.

Developing via docker so I haven't found a nice way for agents to work.

I would suggest using an agentic system like Cline, so that the LLM can wander through the codebase by itself, do research, and build a "mental model", then set up an implementation plan. Then you iterate on that and hand it off for implementation. This flow works significantly better than what you're describing.

  • > LLM can wander through the codebase by itself and do research and build a "mental model"

    It can't really do that due to context length limitations.

    • It doesn't need the entire codebase, it just needs the call map, the function signatures, etc. It doesn't have to include everything in a call - but having access to all of it means it can pick what seems relevant.

    • I guess people are talking about different kinds of projects here in terms of project size.
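The "call map and function signatures" idea above can be sketched in a few lines of Python. This is an illustrative toy, not how Cline actually does it: it walks a directory with the standard-library `ast` module and collects top-level function signatures, producing a compact summary an LLM could use instead of the full source (the function name `signature_map` is invented for this example).

```python
# Sketch: build a lightweight "repo map" of function signatures so an LLM
# can see a codebase's surface area without loading every file into context.
# Illustrative only; real agentic tools build richer maps (classes, call graphs).
import ast
from pathlib import Path


def signature_map(root: str) -> dict[str, list[str]]:
    """Map each .py file under root to its function/method signatures."""
    result: dict[str, list[str]] = {}
    for path in sorted(Path(root).rglob("*.py")):
        try:
            tree = ast.parse(path.read_text(encoding="utf-8"))
        except SyntaxError:
            continue  # skip files that don't parse
        sigs = []
        for node in ast.walk(tree):
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                args = ", ".join(a.arg for a in node.args.args)
                sigs.append(f"{node.name}({args})")
        if sigs:
            result[str(path)] = sigs
    return result
```

A map like this for a whole repo is typically a tiny fraction of the source size, which is why the approach sidesteps the context-length objection: the model reads the map, then asks for the handful of files that look relevant.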