Comment by mdp2021
2 months ago
> they don't see individual letters
Yet they seem to, judging from many other tests (character corrections or manipulations in text, for example).
> The fact that reasoning models can count letters, even though they can't see individual letters
To a mind, every idea is a representation. But we want the processor to work reliably on those representations.
> If we don't allow a [mind] to base its reasoning on the training data it's seen, what should it base it on
On its reasoning and judgement over what it was told. You do not simply repeat what you heard; or else you state that it is what you heard (and provide sources).
> uses randomness
That is in a way a problem, a non-final fix - satisficing (Herb Simon) from random germs instead of constructing through a full optimality plan.
In the way I used the expression «chancey guesses», though, I meant that guessing by chance when the right answer falls in a limited set ("how many letters in 'but'") is weaker corroboration than when the right answer falls in a richer set ("how many letters in this sentence").
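To make the point concrete, here is a minimal sketch of the arithmetic behind it, under the assumption that a pure guesser picks uniformly from some plausible range of answers (the range sizes below are illustrative assumptions, not measurements):

```python
# Baseline probability that a purely random guess hits the right answer,
# assuming the guesser picks uniformly from a plausible range of counts.

def chance_of_lucky_guess(plausible_answers: int) -> float:
    """Probability of being correct by pure chance."""
    return 1.0 / plausible_answers

# "How many letters in 'but'?" -- only a handful of plausible answers.
print(chance_of_lucky_guess(5))    # 0.2   -> a correct reply is weak corroboration

# "How many letters in this sentence?" -- many more plausible answers.
print(chance_of_lucky_guess(40))   # 0.025 -> a correct reply is much stronger corroboration
```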
Most people act on gut instincts first as well. Gut instinct = first semi-random sample from experience (= training data). That's where all the logical fallacies come from. Things like the bat-and-ball problem, where 95% of people give an incorrect answer, because most of the time people simply pattern-match too. It saves energy and works well 95% of the time. Just like reasoning LLMs, they can get to a correct answer if they increase their reasoning budget (but often they don't).
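For readers who don't know the bat-and-ball problem: a bat and a ball cost $1.10 together, and the bat costs $1.00 more than the ball. A quick worked version of why the pattern-matched answer fails:

```python
# Bat-and-ball problem: bat + ball = 1.10, bat = ball + 1.00.
total = 1.10
difference = 1.00

ball = (total - difference) / 2        # 0.05 -- the correct answer
bat = ball + difference                # 1.05

gut_answer = 0.10                      # the common pattern-matched reply
print(round(ball, 2), round(bat, 2))                      # 0.05 1.05
print(round(gut_answer + (gut_answer + difference), 2))   # 1.2, not 1.10
```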
An LLM is a derivative of collective human knowledge, which is intrinsically unreliable itself. Most human concepts are ill-defined, fuzzy, very contextual. Human reasoning itself is flawed.
I'm not sure why people expect 100% reliability from a language model that is based on human representations which themselves cannot realistically be 100% reliable and perfectly well-defined.
If we want better reliability, we need a combination of tools: a "human mind model", which is intrinsically unreliable, plus a set of programmatic tools (say, like a human would use a calculator or a program to verify their results). I don't know if we can make something which works with human concepts and is 100% reliable in principle. Can a "lesser" mind create a "greater" mind, one free of human limitations? I think it's an open question.
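As a minimal sketch of that "mind model plus programmatic tool" combination (the names here are hypothetical; `model_claim` stands in for whatever answer a language model produced, and the check plays the role of the calculator mentioned above):

```python
def count_letters(text: str) -> int:
    """Deterministic ground truth: count alphabetic characters."""
    return sum(1 for ch in text if ch.isalpha())

def verified_answer(text: str, model_claim: int) -> int:
    """Trust the model's claim only if the programmatic check agrees."""
    actual = count_letters(text)
    return model_claim if model_claim == actual else actual

print(verified_answer("but", 3))   # 3 -- claim confirmed
print(verified_answer("but", 4))   # 3 -- claim rejected, tool result used instead
```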
> Most people act on gut instincts first as well
And we intentionally do not hire «most people» as consultants. We want to ask those who are intellectually diligent and talented.
> language model that is based on human representations
The machine is made to process the input - not to "intake" it. To create a mimic of the average Joe would be a disservice on both counts: the project was to build a processor, and we refrain from asking the average Joe anyway. The plan can never have been meant to be what you describe - a mockery of mediocrity.
> we want better reliability
We want the implementation of a well-performing mind - of intelligence. What you described is the "incompetent mind", the habitual fool - the «human mind model» is prescriptive, based on what a properly used mind can do, not descriptive of what sloppy, weak minds do.
> Can a "lesser" mind create a "greater" mind
Nothing says it could not.
> one free of human limitations
Very certainly yes: we can build things with more time, more energy, more efficiency, more robustness, etc., than humans have.