Comment by zadwang
14 days ago
The simpler, and I think correct, conclusion is that the LLM simply does not reason in our sense of the word. It mimics the reasoning pattern and tries to get it right, but cannot.
What do you make of human failures to reason then?
Humans who fail to reason correctly with similar frequency aren't good at that task either, same as LLMs. For the N-th time, "an LLM is as good at this task as a human who's bad at it" isn't a good selling point.
You didn't claim that such humans fail to "reason in our sense of the word". Why aren't you holding them to the same standard?