Comment by Terr_
2 days ago
I think that's the point, really: It's a reliable and reproducible weakness, but also one where the model can be trained to elicit impressive-looking "reasoning" about what the problem is and how it "plans" to overcome it.
Then when it fails to apply the "reasoning", that's evidence the artificial expertise we humans perceived or inferred is actually some kind of illusion.
Kind of like a Chinese Room scenario: If the other end appears to talk about algebra perfectly well but just can't do it, that's evidence you might be talking to a language-lookup machine instead of one that can reason.
Reminds me of a number of grad students I knew who could “talk circles” around all sorts of subjects but never seemed able to actually apply any of it.
Heh, but just because a human can fail at something doesn't mean everything that fails at it is human. :p
Right, but if you're saying that something is 'incapable of reasoning' because of a failure mode also found in humans, then either humans are 'incapable of reasoning' or you concede that failure mode isn't a justification for that gross assertion. You can't have it both ways.
> Then when it fails to apply the "reasoning", that's evidence the artificial expertise we humans perceived or inferred is actually some kind of illusion.
That doesn't follow if the model's weakness manifests on a different level, one we wouldn't attribute to reasoning in a human.
For example, a human might have dyslexia, a disorder at the perceptual level. A dyslexic can understand and explain his own limitation, but that doesn't help him overcome it.
I think you're conflating two separate issues: One is the original known impairment that we don't actually care much about, and the other is bullshitting about how the first problem is under control.
Suppose a real person outlines a viable plan to work around their dyslexia, and we watch them not do any of it during the test, and they turn in wrong results while describing the workaround they (didn't) follow. This keeps happening over and over.
In that case, we'd probably conclude they have another problem that isn't dyslexia, such as "parroting something they read somewhere and don't really understand."
Typically when a human has a disorder or limitation they adapt to it by developing coping strategies or making use of tools and environmental changes to compensate. Maybe they expect a true reasoning model to be able to do the same thing?
The argument is that letter-level information is something LLMs never get a chance to see.
It's a bit like asking a human to read a text and guess the gender or emotional state of the author who wrote it. You just don't have that information.
Similarly, you could ask why ":) is smiling and :D is happy", where the question will be seen as "[50372, 382, 62529, 326, 712, 35, 382, 7150]" - the encoding loses this information; it's only visible in an image rendering of this text.
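You can see this directly with a tokenizer. Here's a minimal sketch in Python, assuming OpenAI's tiktoken library and the o200k_base encoding (which tokenizer produced the IDs quoted above is an assumption; exact IDs vary by vocabulary):

    import tiktoken

    # Assumes the o200k_base encoding; other models use different vocabularies.
    enc = tiktoken.get_encoding("o200k_base")
    text = ":) is smiling and :D is happy"

    ids = enc.encode(text)
    print(ids)  # a flat list of integer token IDs; this is all the model receives

    # Show which span of characters each token ID covers. The individual
    # characters ':', ')' and 'D' are never presented to the model as such.
    for i in ids:
        print(i, repr(enc.decode([i])))

The point is that any character-level property of the text has to be inferred indirectly, since it isn't present in the input the model actually sees.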