Comment by zadwang

7 hours ago

The simpler, and I think correct, conclusion is that the LLM simply does not reason in our sense of the word. It mimics the reasoning pattern and tries to get it right, but cannot.

What do you make of human failures to reason, then?

  • Humans who fail to reason correctly with similar frequency aren't good at solving that task either, same as LLMs. For the N-th time, "the LLM is as good at this task as a human who's bad at it" isn't a good selling point.