
Comment by Marha01

4 hours ago

> Maybe when I can have the same interaction as with my fellow humans, where I can describe the issue (which is not the problem) and they can either go solve it or provide a sound plan to make the issue disappear.

I don't know which LLMs you are using, but frontier models do this for me regularly in programming.

Without prodding it along and giving it "hints"? And without monitoring it like a baby taking its first steps? If yes, please give me the name of the model so I can try it too.

  • Yes, mostly without those things. I regularly use Claude Opus 4.6/4.7, Gemini 3.1 Pro, and GPT-5.4/5.5. For diagnosing and planning, I always use the highest thinking setting, with the possible exception of GPT, where xHigh is pretty costly and slow, so I tend to use High unless the problem is really hard. Once the plan is done, I often hand implementation to cheaper models, like Sonnet 4.6.