Comment by MegaButts
9 months ago
> LLMs aren't good at being search engines, they're good at understanding things.
LLMs are literally fundamentally incapable of understanding things. They are stochastic parrots and you've been fooled.
A stochastic parrot with a sufficiently tiny residual error rate needs a stochastic model that compresses the world so precisely, and decompression algorithms so sophisticated, that it could be called reasoning.
Take two 4K frames of a falling vase and ask a model to predict the next token... I mean, the following images. Your model now needs to include some approximation of physics, and the ability to apply it correctly, to produce a realistic outcome. I'm not aware of any model capable of doing that, but that's what it would mean to predict the unseen with high enough fidelity.
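To make the falling-vase point concrete, here's a minimal sketch (the frame rate, the two observed heights, and the kinematics helper are my own illustrative assumptions, not anything a real model exposes) of the physics a next-frame predictor would implicitly have to encode:

    # Hypothetical illustration: the kind of physics a model would implicitly
    # need to approximate to predict the next frames of a falling vase from
    # two observed frames. Frame rate and heights are assumed values.

    FRAME_DT = 1.0 / 24.0  # assumed interval between frames, in seconds
    G = 9.81               # gravitational acceleration, m/s^2

    def predict_next_heights(h0: float, h1: float, n_frames: int = 3) -> list[float]:
        """Extrapolate the vase's height from two observed frames.

        Two frames fix position and velocity; constant acceleration
        (gravity) supplies the rest. A purely statistical predictor would
        have to encode something equivalent to this to stay realistic.
        """
        v = (h1 - h0) / FRAME_DT  # velocity estimated from the two frames
        heights = []
        t = 0.0
        for _ in range(n_frames):
            t += FRAME_DT
            # constant-acceleration kinematics: h(t) = h1 + v*t - 0.5*g*t^2
            heights.append(max(h1 + v * t - 0.5 * G * t * t, 0.0))
        return heights

    print(predict_next_heights(1.00, 0.98))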
We're talking about a stochastic parrot which in many circumstances responds in a way which is indistinguishable from actual understanding.
I've always been amazed by this. I have never not been frustrated with the profound stupidity of LLMs. Obviously I must be using them differently, because I've never been able to trust them with anything, and more than half the time I fact-check them, even for basic information retrieval, the answer is objectively incorrect.
If you got as far as checking the output, it must have appeared to understand your question.
I wouldn't claim LLMs are good at being factual, or good at arithmetic, or at drawing wine glasses, or that they are "clever". What they are very good at is responding to questions in a way which gives you the very strong impression they've understood you.
4 replies →
It's ok to be paranoid
1 reply →
For them to work at all, they need to have some representation of concepts. Recent research at Anthropic has shown surprising complexity in their reasoning behavior. Perhaps the parrot here is you.
What do you call someone that mentions "stochastic parrots" every time LLMs are mentioned?
That makes me think, has anyone ever heard of an actual parrot which wasn't stochastic?
I'm fairly sure I've never seen a deterministic parrot which makes me think the term is tautological.
It's the first time I've ever used that phrase on HN. Anyway, what phrase do you think works better than 'stochastic parrot' to describe how LLMs function?
It’s good rhetoric but a bad analogy. LLMs can be very creative (to the point of failure, in hallucinations).
I don’t know if there is a pithy short phrase that accurately describes how LLMs function. Can you give me a similar one for how humans think? That might spur my own creativity here.
Try to come up with a way to prove humans aren't stochastic parrots; then maybe people will start taking you seriously. Just childish Reddit angst right now, nothing else.
8 replies →
What does the word "understand" mean to you?
An ability to answer questions with a train of thought showing how the answer was derived, or the self-awareness to recognize you do not have the ability to answer the question and declare as much. More than half the time I've used LLMs they will simply make answers up, and when I point out the answer is wrong they regurgitate another incorrect answer ad nauseam (regularly cycling through answers I've already pointed out are incorrect).
Rather than give you a technical answer - if I ever feel like an LLM can recognize its limitations rather than make something up, I would say it understands. In my experience LLMs are just algorithmic bullshitters. I would consider a function that just returns "I do not understand" to be an improvement, since most of the time I get confidently incorrect answers instead.
Yes, I read Anthropic's paper from a few days ago. I remain unimpressed until talking to an LLM isn't a profoundly frustrating experience.
I just want to say that's a much better answer than I anticipated!