Comment by solid_fuel
11 hours ago
~~Slate Star Codex~~ Astral Codex Ten, the original article, was making the argument that "model intelligence" is on an s-curve, and from there it drew the conclusion that the curve will likely continue and models will reach human-level intelligence or beyond.
I am making the argument that how we measure model intelligence is flawed, and that we are actually measuring something closer to "reasoning" than "intelligence". If you want evidence, we'd need a different kind of test, but how about I just gesture at the fact that GPT supposedly outscored PhDs across a broad range of subjects at least a year ago and to date is not replacing PhD jobs.
We see this pattern of high scores on tests but mediocre performance in the real world all over the place. From that, I draw the conclusion that it can reason like a PhD, but it can't think like a PhD.
So, we may see an s-curve on the measure of model reasoning, but that doesn't imply they will overtake us or even match us on measures of intelligence.
As to your other questions:
> LLMs do self reflection and introspection in context,
> Why do you feel self reflection and introspection are a fundamental limitation here? Models reason over tokens they have emitted and also with their own sense and learned behavior already. Are you just talking about continual learning?
I disagree that models are reflecting and introspecting in a way equivalent to human intelligence here. They can reason over tokens which have been emitted, but by the same measure they cannot reason over tokens which have not been emitted. It's hard to make this point without drawing some diagrams, but I believe that human intelligence has internal loops, where many ideas may be turned over simultaneously before an action is taken. In comparison, an LLM might "feel uncertain" about a token before emitting it, but once it is emitted, that uncertainty and the other near-neighbor options are lost and the LLM is locked into the track set by the top-choice token. I think this is where hallucinations arise, among other issues.
Context isn't sufficient for an internal reasoning loop because the tokens that compose context lose a lot of the information the network itself generated when picking those tokens. They occupy a much lower-dimensional space than the "internal reasoning" processes of the transformer do.
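To make the information-loss point concrete, here's a toy sketch of a single decoding step (the vocabulary, sizes, and numbers are all invented; it's an illustration of the idea, not how any particular model is implemented):

```python
import numpy as np

# Toy sketch of one decoding step. The point is what survives into context
# versus what is thrown away. All names and sizes here are made up.

rng = np.random.default_rng(0)
vocab = ["Paris", "Lyon", "Berlin", "Rome"]

hidden_state = rng.normal(size=4096)       # high-dimensional internal representation
logits = rng.normal(size=len(vocab))       # scores over the vocabulary

probs = np.exp(logits - logits.max())
probs /= probs.sum()                       # softmax: the model's full uncertainty

chosen = int(np.argmax(probs))             # greedy decoding commits to one token
context_token = vocab[chosen]              # only this token goes back into context

# hidden_state (4096 floats) and probs (the near-neighbor alternatives) are
# discarded; the next step conditions only on context_token.
print(context_token, dict(zip(vocab, np.round(probs, 2))))
```

Everything above the final `context_token` line is exactly the stuff I'm claiming gets lost once the token is emitted.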
>> Humans and LLMs do not share the same form of reasoning, and general human-like intelligence will not arise from the current architecture of LLMs.
> Very strong statement, any theoretical or experimental basis for this?
It's just my theory, but this is what I have been gesturing at. You already know about RNNs, so I'll put it in those terms: the core of an intelligent network should be an RNN, not a transformer. But we fundamentally cannot train a network like that to work like an LLM, because backprop doesn't work when there is unbounded recursion (you have to truncate the unroll somewhere), and without being able to bootstrap off of the knowledge and reasoning baked into human text, there's no sufficient source of training material beyond being embodied.
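To sketch what I mean by the recursion/backprop problem (everything here is made up; it's just the standard truncated backprop-through-time picture, not a claim about any specific system):

```python
import numpy as np

# To train a recurrent net with backprop you unroll it for a fixed window and
# stop the gradient there, so arbitrarily long internal loops never get a
# direct training signal. Sizes and data below are invented.

rng = np.random.default_rng(0)
H, X, T, TRUNCATE = 16, 8, 1000, 32        # hidden dim, input dim, sequence length, unroll window

W_h = rng.normal(scale=0.1, size=(H, H))
W_x = rng.normal(scale=0.1, size=(H, X))

def rnn_step(h, x):
    # one recurrent step: new hidden state from previous state and input
    return np.tanh(W_h @ h + W_x @ x)

h = np.zeros(H)
states = []
for _ in range(T):
    h = rnn_step(h, rng.normal(size=X))
    states.append(h)

# Truncated backprop-through-time would differentiate only through the last
# TRUNCATE steps; everything earlier is treated as a constant.
window = states[-TRUNCATE:]
print(f"unrolled {T} steps, gradients would flow through only {len(window)}")
```

So the "internal loop" an RNN could in principle run forever is exactly the part the training procedure has to cut off.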
---
EDIT:
I missed this, which I also want to reply to:
> Why does it matter if AI systems will develop equivalent reasoning mechanisms as humans? In fact it may be much better not to.
I actually agree that it may be better if they did not develop equivalent reasoning, but I don't see a world in which machines replace human labor without being intellectually equivalent.
As I think about it though, "dumb" machines which can follow reasoning but not think like humans are a rather scary proposition, honestly. Seems like a tool that would be wielded without restraint by those in power to control those who aren't.