
Comment by sharts

6 hours ago

I remember the guy saying that disembodied AI couldn’t possibly understand meaning.

We see this now with LLMs. They just generate text, and they get more accurate over time. But how can they understand a concept such as “soft” or “sharp,” and varying degrees of “softness” or “sharpness,” without any actual sensory data to ground the concept?

The fact is that they can’t.

Humans aren’t symbol manipulation machines. They are metaphor machines. And the metaphors we care about require a physical grounding on one side of the comparison before we can have any real, fundamental understanding of the other side.

Yes, you can approximate human intelligence almost perfectly with AI software. But that’s not consciousness. There is no first-person subjective experience there to give rise to mental features.

> I remember the guy saying that disembodied AI couldn’t possibly understand meaning.

While I don't disagree with the substance of this post, I don't think this was one of Searle's arguments. There was definitely an Embodied Cognition camp on campus, but that was much more in Lakoff's wheelhouse.

> I remember the guy saying that disembodied AI couldn’t possibly understand meaning.

As far as I understand Popper, this is not a theory (or it is one, but a false one), because the only way I know of to check understanding is to ask questions, and LLMs pass that test. So to satisfy falsifiability, another test must be devised.

  • I think the claim would be that an LLM would only ever pass a strict subset of the questions testing a particular understanding. As we gather more and more text to feed these models, finding those questions will necessarily require more and more out-of-the-box thinking... or an (un)lucky draw. Giveaways will always be lurking just beyond the inference horizon, ready to yet again deflate our high hopes of having finally created a machine which actually understands our everyday world.

    I find this thesis very plausible. LLMs inhabit the world of language, not our human everyday world, so their understanding of it will always be second-hand: an approximation of our own understanding of that world, which is itself imperfect but at least aims at the real thing.

    The part about overcoming this limitation by instantiating the system in hardware I find less convincing, but I think I know where he's coming from with that as well: by giving the machine hardware sensors, it would not have to simulate the outside world on top of the inner one.

    The inner world, at least, can more easily be imagined as finite. Many people seem to take that finiteness as a given, but there's no good reason to expect that it is. Planck limits from QM are often brought up as an argument for digital physics, but in fact they are only a limit on our knowledge of the world, not on the physical systems themselves.