Comment by conartist6
6 days ago
I think the biggest hint that the models aren't reasoning is that they can't explain their reasoning. Researchers have shown, for example, that how a model solves a simple math problem and how it claims to have solved it after the fact have no real correlation. In other words, there was only the appearance of reasoning.
People can't explain their reasoning either. People do a parallel construction of logical arguments for a conclusion they already reached intuitively, with no clue how that happened. "The idea just popped into my head while showering." To our credit, if this post-hoc rationalization fails, we are able to change our opinion to some degree.
Interestingly, people have to be trained in logic and in identifying fallacies, because logic is not a native capability of our minds. We aren't even that good at it once trained, and many humans (don't forget that an IQ of 100 is the median) cannot be trained.
Reasoning might be more accurately described as "awareness," or some process that exists alongside thought, where agency and subconscious processes occur. It's by construction unobservable by our conscious mind, which is why we have so much trouble explaining it. It's not intuition - it's awareness.
Yeah, surprisingly, I think the differences are less in the mechanism used for thought and more in the experience of being a person alive in a body. A person can become an idea. An LLM always forgets everything. It cannot "care."
Is this true though? I've suggested things that it pushed back on. Feels very much like a dev. It doesn't just dumbly do what I tell it.
Sure, but it isn't reasoning that it should push back. It isn't even "pushing," which would require an intent to change you, which it lacks.