Comment by handoflixue
6 hours ago
Okay, but why isn't it "intelligence"? What part of the definition does it fail? What would convince you that you're wrong?
6 hours ago

> Okay, but why isn't it "intelligence"? What part of the definition does it fail? What would convince you that you're wrong?
I wouldn’t call it a general definition, but the consensus (in my opinion) is that intelligence is the ability to define problems (not just experience them), discern their root cause, and then solve them.

Where it fails is generally the first step. It’s kind of like the old saying, “you have to ask the right question.” In all problem solving, defining the problem is the first step. It may not be the hardest (there are problems that are well defined but still unresolved), but being unable to do it is often a clear indication of being unable to do the rest.
> What would convince you that you're wrong?
Maybe when I can have the same interaction as with my fellow humans, where I can describe the issue (which is not the same as the problem) and they can either go solve it or provide a sound plan to make the issue disappear. “Issue” here refers to an unpleasant or frustrating situation.

Until then, I see them as tools: often to speed up my writing pace (generic code and generic presentations), or as a weird database where what went in has a high probability of coming back out.
> Maybe when I can have the same interaction as with my fellow humans, where I can describe the issue (which is not the same as the problem) and they can either go solve it or provide a sound plan to make the issue disappear.
I don't know what LLMs you're using, but frontier models do this regularly for me in programming.
Without prodding it along and giving it “hints”? And without monitoring it like a baby taking its first steps? If yes, please give me the name of the model so I can try it too.