Comment by lelanthran
6 months ago
> I think you're absolutely right that judging LLMs' "intelligence" on their ability to count letters is silly.
I don't think it is silly; it accurately reflects that what is happening inside the black box is not at all similar to what is happening inside a brain.
Computer: trained on trillions of words, gets tripped up by spelling puzzles.
My five-year-old: trained on the Distar alphabet since age three, with a working vocabulary of perhaps a thousand words, can read maybe half of those, and still gets the spelling puzzles right.
There's something fundamentally very different that has emerged from the black box, but it is not intelligence as we know it.
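For what it's worth, the standard technical explanation for those spelling failures is tokenization: the model consumes subword tokens, not letters. A minimal sketch using OpenAI's tiktoken library (purely illustrative, assuming tiktoken is installed; the exact splits and IDs depend on the tokenizer):

```python
# Why the "black box" trips over spelling puzzles: it never sees letters.
import tiktoken  # pip install tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # GPT-4-era tokenizer

word = "strawberry"
token_ids = enc.encode(word)
pieces = [enc.decode([t]) for t in token_ids]
print(pieces)  # likely something like ['str', 'aw', 'berry']

# A question like "how many r's in strawberry?" arrives as a handful of
# opaque token IDs, not as the letter sequence s-t-r-a-w-b-e-r-r-y.
```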
Yup, LLMs are very different from human brains, so whatever they have isn't intelligence as we know it. But ...
1. If the subtext is "not intelligence as we know it, but something much inferior": that may or may not be true, but crapness at spelling puzzles isn't much evidence for it.
2. More generally, skill with spelling puzzles just isn't a good measure of intelligence. ("Intelligence" is a slippery word; what I mean is that the correlation between skill at spelling puzzles and most other measures of cognitive ability is pretty poor. That's true even among humans, and more so for Very Different things whose abilities have a quite different "shape" from ours.)
> 1. If the subtext is "not intelligence as we know it, but something much inferior": that may or may not be true, but crapness at spelling puzzles isn't much evidence for it.
I'm not making a judgement call on whether it is or isn't intelligence, just that it's not like any sort of intelligence we've ever observed in man or beast.
To me, LLMs feel more like "a tool with built-in knowledge" than "a person who read up on the specific subject".
I know that many people use the analogy of coding LLMs as "an eager junior engineer", but even eager junior engineers lack only knowledge. They can very well come up with something they've never seen before; in fact, it's common for them to reinvent a method or mechanism they've never encountered.
And that's only for coding, which is where the overwhelming majority of LLM usage falls today.
This is why I say it's not intelligence as we define it, but it's certainly something, even if it is not an intelligence we recognise.
It's not unintelligent, but it's not intelligent either. It's something else.
Sure. But all those things you just said are about the AI systems' ability to come up with new ideas versus their knowledge of existing ones. And that doesn't have much to do with whether or not they're good at simple spelling puzzles.
(Some of the humans I know who are worst at simple spelling puzzles are also among the best at coming up with good new ideas.)