Comment by skydhash
10 hours ago
I wouldn’t say it’s a general definition, but the consensus (in my opinion) is that intelligence is being able to define problems (not just experience them), discern the root cause, and then solve it.
Where it fails is generally that first step. It’s kinda like the old saying: “you have to ask the right question.” In all problem solving, defining the problem is the first step. It may not be the hardest (we have problems that are well defined but unresolved), but not being able to do it is often a clear indication of not being able to do the rest.
> What would convince you that you're wrong?
Maybe when I can have the same interaction as with my fellow humans, where I can describe the issue (which is not the same as the problem) and they can either go solve it or provide a sound plan to make the issue disappear. “Issue” here refers to an unpleasant or frustrating situation.
Until then, I see them as tools. Often to speed up my writing pace (generic code and generic presentations), or as a weird database where what goes in has a high probability of coming back out.
> Maybe when I can have the same interaction as with my fellow humans, where I can describe the issue (which is not the same as the problem) and they can either go solve it or provide a sound plan to make the issue disappear.
I don't know which LLMs you are using, but frontier models do this regularly for me in programming.
Without prodding it along and giving it “hints”? And monitoring it like a baby taking their first steps? If yes, please give me the name of the model so I can try it too.
Yes, mostly without those things. I regularly use Claude Opus 4.6/4.7, Gemini 3.1 Pro, and GPT-5.4/5.5. For diagnosing and planning, I always use the highest thinking setting, with the possible exception of GPT, where xHigh is pretty costly and slow, so I tend to use High unless the problem is really hard. After the plan is done, I often use cheaper models for implementation, like Sonnet 4.6.
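In case it's useful, here is a minimal sketch of that two-phase workflow: a stronger model at high thinking effort for diagnosis and planning, then a cheaper model for implementation. The model names and the `call_model` function are hypothetical placeholders, not a real SDK; swap in whatever client you actually use.

```python
def call_model(model: str, prompt: str, effort: str = "medium") -> str:
    """Placeholder for your actual client call (OpenAI, Anthropic, etc.)."""
    raise NotImplementedError

def solve(issue: str) -> str:
    # Phase 1: diagnose the issue and produce a plan with a frontier
    # model at the highest thinking effort.
    plan = call_model(
        "frontier-planner",          # hypothetical strong model
        f"Diagnose the root cause and write a plan:\n{issue}",
        effort="high",
    )
    # Phase 2: hand the finished plan to a cheaper, faster model
    # for the mechanical implementation work.
    return call_model(
        "cheap-implementer",         # hypothetical cheaper model
        f"Implement this plan:\n{plan}",
    )
```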