Comment by kthejoker2
1 year ago
> Try asking an LLM about something which is semantically patently ridiculous, but lexically superficially similar to something in its training set, like "the benefits of laser eye removal surgery" or "a climbing trip to the Mid-Atlantic Mountain Range".
Without anthropomorphizing it, it does respond like an alien / 5 year old child / spec fiction writer who will cheerfully "go along with" whatever premise you've laid before it.
Maybe a better thought is: at what point does a human being "get" that "the benefits of laser eye removal surgery" is "patently ridiculous"?
> Maybe a better thought is: at what point does a human being "get" that "the benefits of laser eye removal surgery" is "patently ridiculous"?
Probably as soon as they have any concept of physical reality and embodiment. Arguably before they know what lasers are. Certainly long before they have the lexicon and syntax to respond to it by explaining LASIK. LLMs have the latter, but can only use that to (also without anthropomorphizing) pretend they have the former.
In humans, language is a tool for expressing complex internal states. Flipping that around means that something which only has language may appear as if it has internal intelligence. But generating words in the approximate "right" order isn't actually a substitute for experiencing and understanding the concepts those words refer to.
My point is that it's not a "point" on a continuous spectrum which distinguishes LLMs from humans. They're missing parts.
Gruesomely useful in a war situation, unfortunately. I wonder at what point the LLMs would "realize" that "surgery" doesn't apply to that.
> it does respond like a ... 5 year old child
This is the comparison that's made the most sense to me as LLMs evolve. Children behave almost exactly as LLMs do - making stuff up, going along with whatever they're prompted with, etc. I imagine this technology will go through more phases similar to human development.