Comment by jedberg
1 month ago
> If you ever work with LLMs you know that they quite frequently give up.
If you try to single-shot something, perhaps. But with multiple shots, or an agent swarm where one agent tells another to try again, it'll keep going until it has a working solution.
Yeah, exactly: this is a scope problem, since actual input/output size is always limited. I am 100% sure CC etc. are using multiple LLM calls for each response, even though from the response streaming it looks like just one.