Comment by throw-12-16

2 days ago

There is no way this is economical.

You burn through your token limit in agent mode just to thrash around a few more times trying to figure out where the agent "misunderstood" the prompt.

The only time LLMs work as coding agents for me is with tightly scoped prompts and a small, isolated context.

Just throwing an entire codebase into an LLM in an agentic loop seems like a fool's errand.