
Comment by lostmsu

2 months ago

> I remember the guy saying that disembodied AI couldn’t possibly understand meaning.

This is not a theory (or it is one, but a false one) according to Popper, as far as I understand, because the only way to check understanding that I know of is to ask questions, and LLMs pass that test. So to satisfy falsifiability, another test must be devised.

I think the claim would be that an LLM would only ever pass a strict subset of the questions testing a particular understanding. As we gather more and more text to feed these models, finding the questions they fail will necessarily require more and more out-of-the-box thinking... or an (un)lucky draw. Giveaways will always be lurking just beyond the inference horizon, ready to yet again deflate our high hopes of having finally created a machine which actually understands our everyday world.

I find this thesis very plausible. LLMs inhabit the world of language, not our human everyday world, so their understanding of it will always be second-hand: an approximation of our own understanding of that world, which is itself imperfect but at least aims for the real thing.

The part about overcoming this limitation by instantiating the system in hardware I find less convincing, but I think I know where he is coming from with that as well: by giving the machine hardware sensors, it would not have to simulate the outside world on top of the inner one.

The inner world can more easily be imagined as finite, at least. Many people seem to take this as a given, actually, but there is no good reason to expect that it is. Planck limits from QM are often brought up as an argument for digital physics, but in fact they are only a limit on our knowledge of the world, not on the physical systems themselves.

Do they pass it? I doubt a current LLM could keep anyone from realizing it is an LLM over a long enough discussion session.

The question is only whether, if future LLMs become good enough to fool almost anyone in most sessions, we would be forced to admit they understand meaning.

  • This isn't the Turing test we are talking about. Of course LLMs pass understanding tests. ChatGPT can easily explain overloading or polymorphism and answer related questions, for example ones like the sketch below.
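
    As a concrete illustration (my own toy example, not from the thread), here is the kind of overloading-vs-polymorphism question I have in mind; the Java class and method names are made up, and the test is to predict the output and explain why:

        // Toy sketch: what does main print, and why?
        class Animal {
            void speak() { System.out.println("generic sound"); }   // overridden in Dog
            void greet(Animal a) { System.out.println("hello, animal"); }
            void greet(Dog d) { System.out.println("hello, dog"); } // overload of greet
        }

        class Dog extends Animal {
            @Override
            void speak() { System.out.println("woof"); }             // runtime polymorphism
        }

        public class Demo {
            public static void main(String[] args) {
                Animal pet = new Dog();
                pet.speak();    // prints "woof": overriding is resolved at run time
                pet.greet(pet); // prints "hello, animal": the overload is chosen at
                                // compile time from the static type of the argument
            }
        }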