Comment by stavros

1 year ago

If I tell GPT-4 to print something, it understands that it first needs to check whether my printer is turned on, and to turn it on if it's not. So, yes?

Also, if the generated text contains reasoning, what's your definition of "understanding"? Is it "must be made of the same stuff brains are"?

LLMs fail at so many reasoning tasks (not unlike humans, to be fair) that they are either incapable of reasoning or really poor at it. As far as reasoning machines go, I suspect LLMs will be a dead end.

Reasoning here meaning, for example: given a description of some situation or issue, being able to answer questions about its implications, applications, and outcomes. In my experience things quickly degenerate into technobabble for non-trivial issues (also not unlike humans).

  • If you're contending that LLMs are incapable of reasoning, then you're claiming there is no reasoning task an LLM can do. Is that really your position? Because I can easily find an example to prove you wrong.

    • It could be that all the reasoning on display is just a replay of existing information, in which case there would be no reasoning at all. But that aside, what I meant is being able to reason in any consistent way: a machine that only sometimes gets an addition right isn't really capable of addition. (A rough sketch of the kind of consistency check I mean is below.)

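A minimal sketch of that consistency test, assuming you can query some model for a text answer. `ask_model` here is a hypothetical stand-in, not a real library call; the point is only that "capable of addition" should mean scoring ~1.0, not merely above chance:

```python
import random

def ask_model(prompt: str) -> str:
    # Hypothetical stand-in: plug in whatever model API you actually use.
    raise NotImplementedError("replace with a real model call")

def addition_consistency(trials: int = 100) -> float:
    """Fraction of random addition prompts answered correctly."""
    correct = 0
    for _ in range(trials):
        a = random.randint(0, 10**6)
        b = random.randint(0, 10**6)
        answer = ask_model(f"What is {a} + {b}? Reply with only the number.")
        try:
            correct += int(answer.strip().replace(",", "")) == a + b
        except ValueError:
            pass  # a non-numeric reply counts as wrong
    return correct / trials
```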