Comment by logicchains
6 days ago
> I think the main hesitancy is due to rampant anthropomorphism. These models cannot reason; they pattern-match language tokens and generate emergent behaviour as a result.
This is rampant human chauvinism. There's absolutely no empirical basis for the claim that these models "cannot reason"; it's just pseudoscientific woo thrown around by people who want to feel that humans are somehow special. By pretty much every empirical measure of "reasoning" or intelligence we have, SOTA LLMs are better at it than the average human.
> This is rampant human chauvinism
What in the accelerationist hell?
There's nothing accelerationist about recognising that making unfalsifiable statements about LLMs lacking intelligence or reasoning ability serves zero purpose except stroking the speaker's ego. Such people are never willing to give a clear criterion for what would constitute proof of machine reasoning, which shows their belief isn't based on science or reason.
I've used these AI tools for multiple hours a day for months. Not seeing the reasoning part, honestly. I see the heuristics part.
I guess your work doesn't involve any maths, then, because otherwise you'd see they're capable of solving maths problems that require a non-trivial number of reasoning steps.
Just the other day I needed to code some interlocked indices. It wasn't particularly hard, but I didn't want to context switch and think, so instead I asked GPT-4o. After four or five rounds of back and forth in which it gave wrong answers, I finally decided to just take pen and paper and do it by hand. I have a hard time believing that these models are reasoning, because if they are, they're very poor at it.