Comment by RandomLensman

1 year ago

LLMs fail at so many reasoning tasks (not unlike humans, to be fair) that they are either incapable of reasoning or really poor at it. As far as reasoning machines go, I suspect LLMs will be a dead end.

Reasoning here meaning, for example: given a described situation or issue, being able to answer questions about its implications, applications, and outcomes. In my experience, things quickly degenerate into technobabble for non-trivial issues (also not unlike humans).

If you're contending that LLMs are incapable of reasoning, you're saying there's no reasoning task an LLM can do. Is that really your claim? Because I can easily find an example to prove you wrong.

  • It could be that all the reasoning displayed is just reproducing existing information, in which case there would be no reasoning at all. But that aside, what I meant is being able to reason in a consistent way: a machine that only sometimes gets an addition right isn't really capable of addition.

    • The former is easy to test: just make up your own puzzles and see if it can solve them (see the sketch at the end of this thread).

      "Incapable of reasoning" doesn't mean "only solves some logic puzzles". Hell, GPT-4 is better at reasoning than a large number of people. Would you say that a good percentage of humans are poor at reasoning too?