Comment by qsera

2 hours ago

>you can be intelligent only if you fully imitate biological processes

It is not that. It is about understanding how it is trained. For example, if it were trained on ideas instead of words, its behavior would be closer to intelligent behavior.

Someone will say that during training it builds ideas and concepts, but that is just a name we give to the internal representation that results from training; it is not actual ideas and concepts. When it learns the word "car", it does not actually understand it as a concept, only as a word and how it relates to other words. This lets it generate consistent text involving "car", projecting an appearance of intelligence.

It is hard to propose a test for this, because any such test would become the next target for AI companies to optimize for, and maybe the next model would pass it.

The latest models are mostly LMMs (large multimodal models). If a model builds an internal representation that integrates all the modalities we are dealing with (robotics even provides tactile inputs), it becomes harder and harder to imagine why those representations should be qualitatively different.

  • It can't, simply because the textual description of a concept is different from the concept itself.

    • Obviously, a concept (which is an abstraction in more ways than one) is different from a textual representation. But LLMs don't operate on the textual description of a concept when they are doing their thing. A textual description (which is associated with other modalities in the training data) serves as an input format. LLMs perform non-linear transformations of points in their latent space. These transformations and representations are useful not only for generating text but also for controlling robots, for example (see VLAs in robotics).
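The point about latent representations feeding multiple modalities can be sketched with a toy model. This is purely illustrative (all sizes, weights, and names are made up, not from any real VLA): a shared non-linear transform over latent vectors, whose output is decoded by two separate heads, one for text and one for robot actions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy sizes, chosen only for illustration.
VOCAB, DIM, ACTIONS = 8, 4, 3
embed = rng.normal(size=(VOCAB, DIM))        # token id -> point in latent space
W_hidden = rng.normal(size=(DIM, DIM))       # shared transform weights
W_text = rng.normal(size=(DIM, VOCAB))       # head 1: back to token logits
W_action = rng.normal(size=(DIM, ACTIONS))   # head 2: action values (VLA-style)

def encode(token_id: int) -> np.ndarray:
    """Embed a token and apply a non-linear transformation in latent space."""
    return np.tanh(embed[token_id] @ W_hidden)

def text_head(latent: np.ndarray) -> np.ndarray:
    """Decode the latent point into next-token logits."""
    return latent @ W_text

def action_head(latent: np.ndarray) -> np.ndarray:
    """Decode the *same* latent point into action commands."""
    return latent @ W_action

latent = encode(3)
print(text_head(latent).shape)    # (8,) — token logits
print(action_head(latent).shape)  # (3,) — action values
```

The same latent point drives both heads, which is the sense in which the internal representation is not tied to a textual description even though text was the input format.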