
Comment by jibal

5 days ago

> Plenty of humans can't do arithmetic. Can they also not reason?

I just pointed out that this isn't valid reasoning ... it's the fallacy of denying the antecedent. No one is arguing that because LLMs can't do arithmetic they therefore can't reason. After all, zamalek said that he can't quickly multiply large numbers in his head, but he isn't saying that he therefore can't reason.
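
For readers who don't know the term, here is a minimal sketch of the fallacy in propositional form. P and Q are generic placeholders; this is a logic refresher, not a reconstruction of anyone's actual argument in the thread:

```latex
\documentclass{article}
\begin{document}
% Denying the antecedent: from "P implies Q" and "not P",
% concluding "not Q" is invalid -- Q may hold for other reasons.
\[
  P \rightarrow Q,\quad \neg P \;\not\models\; \neg Q
\]
% Contrast with modus tollens, which is valid:
\[
  P \rightarrow Q,\quad \neg Q \;\models\; \neg P
\]
\end{document}
```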

> Reasoning isn't a binary switch. It's a multidimensional continuum.

Indeed, and a lot of humans are very bad at it, as is clear from the comments I'm responding to.

> AI can clearly reason to some extent

The claim was about LLMs, not AI in general. It's as if someone said that chihuahuas are small and someone else responded that dogs are, to some extent, tall.

LLMs do not reason ... they do syntactic pattern matching. The appearance of reasoning comes from all the human reasoning implicit in the training data.

I've had this argument too many times ... it never goes anywhere. So I won't respond again ... over and out.

> Indeed, and a lot of humans are very bad at it, as is clear from the comments I'm responding to.

This is your idea of "conversing curiously" and "editing out swipes," I suppose.

> I've had this argument too many times ... it never goes anywhere. So I won't respond again ... over and out.

A real reasoning entity might pause for self-examination here. Maybe run its chain of thought for a few more iterations, or spend some tokens calling research tools. Just to probe the apparent mismatch between its own priors and those of "a lot of humans," most of whom are not, in fact, morons.