Comment by red75prime
5 hours ago
[Citation needed] Neuroscience isn't yet at a point where it can say this with any certainty.
Anyway, it's not a theorem that you can be intelligent only if you fully imitate biological processes, just as flight can be achieved without flapping wings.
>you can be intelligent only if you fully imitate biological processes
It is not that. It is about understanding how the model is trained. For example, if it were trained on ideas instead of words, it would be closer to intelligent behavior.
Someone will say that during training it builds ideas and concepts, but that is just a name we give to the internal representation that results from training, not actual ideas and concepts. When it learns the word "car", it does not understand it as a concept, only as a word and how it relates to other words. This lets it generate consistent sentences involving "car", projecting an appearance of intelligence.
It is hard to propose a test for this, because any such test becomes the next target for AI companies to optimize for, and the next model may well pass it.
The latest models are mostly LMMs (large multimodal models). If a model builds an internal representation that integrates all the modalities we deal with (robotics even provides tactile input), it becomes harder and harder to imagine why those representations should be qualitatively different from concepts.
It can't, simply because the textual description of a concept is different from the concept itself.