Comment by Findecanor

1 year ago

Even if the generated text contains reasoning, could the LLM understand and apply it?

If I tell GPT-4 to print something, it understands it needs to check if my printer is turned on first and turn it on if it's not, so, yes?

Also, if the generated text contains reasoning, what's your definition of "understanding"? Is it "must be made of the same stuff brains are"?

  • LLMs fail at so many reasoning tasks (not unlike humans, to be fair) that they are either incapable of reasoning or really poor at it. As far as reasoning machines go, I suspect LLMs will be a dead end.

    By reasoning here I mean, for example, being able to answer questions about the implications, applications, and outcomes of a described situation or issue. In my experience, things quickly degenerate into technobabble for non-trivial issues (also not unlike humans).

    • If you're contending that LLMs are incapable of reasoning, you're saying that there is no reasoning task an LLM can do. Is that what you're saying? Because I can easily find an example to prove you wrong.