Comment by fragmede
2 months ago
Which LLM are you using? what LLM tool are you using? What's your tech stack that you're generating code for? Without sharing anything you can't, what prompts are you using?
It was more of a general comment - I'm surprised there is significant variation between any of the frontier models.
To answer your questions, though: VS Code with various Python frameworks/libraries (Dash, FastAPI, pandas, etc.), typically passing the 4-5 relevant files in as context, roughly like the sketch below.
I'm developing via Docker, so I haven't found a nice way to get agents working.
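To be concrete about "passing the relevant files in as context": it's nothing fancier than something like this (the helper and file names are made up, just to show the shape):

    # sketch: glue a handful of relevant files plus the task into one prompt
    from pathlib import Path

    def build_context(paths, task):
        parts = [f"### {p}\n{Path(p).read_text()}" for p in paths]
        parts.append(f"### Task\n{task}")
        return "\n\n".join(parts)

    prompt = build_context(
        ["app/layout.py", "app/callbacks.py", "api/routes.py"],  # made-up paths
        "Add a date-range filter to the dashboard and expose it via the API.",
    )
    # `prompt` then gets pasted into whichever frontier model I'm comparing.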
> I'm surprised there is significant variation between any of the frontier models?
This comment of mine is a bit dated, but even the same model can have significant variation if you change the prompt by just a few words.
https://news.ycombinator.com/item?id=42506554
I would suggest using an agentic system like Cline, so that the LLM can wander through the codebase by itself, do research, and build a "mental model", then set up an implementation plan. Then you iterate on that plan and hand it off for implementation (roughly the shape sketched below). This flow works significantly better than what you're describing.
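To make the plan-then-implement split concrete, here is a rough sketch of that two-phase flow scripted by hand against the OpenAI Python client - Cline does the equivalent inside the editor, and the model name and prompts here are only placeholders:

    # sketch: phase 1 produces a plan, phase 2 implements it
    from openai import OpenAI

    client = OpenAI()

    def ask(system, user, model="gpt-4o"):
        resp = client.chat.completions.create(
            model=model,
            messages=[
                {"role": "system", "content": system},
                {"role": "user", "content": user},
            ],
        )
        return resp.choices[0].message.content

    codebase_notes = "..."  # whatever the agent gathered while exploring the repo
    task = "Add a date-range filter to the dashboard."

    # Phase 1: research, build the "mental model", write a plan - no code yet
    plan = ask(
        "Read the context, summarise the relevant architecture, and produce "
        "a step-by-step implementation plan. Do not write code yet.",
        f"{codebase_notes}\n\nTask: {task}",
    )

    # ...iterate on `plan` by hand until it looks right...

    # Phase 2: hand the agreed plan off for implementation
    patch = ask(
        "Implement the following plan as concrete code changes.",
        f"{codebase_notes}\n\nPlan:\n{plan}",
    )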
> LLM can wander through the codebase by itself and do research and build a "mental model"
It can't really do that due to context length limitations.