Comment by qsera

4 hours ago

> don't operate on the textual description of a concept when they are doing their thing.

It could be mapping the text to some other internal representation, with connections to mappings of other text/tokens. But that does not stop text from being the ground truth. The model has nothing else to go on!

The "hallucination" behavior alone should be enough to reject any claims that these are at least minimally similar to animal intelligence.

The internal representations happen to be useful for more than just outputting text. What does that mean from your standpoint?

  • I didn't understand. Can you clarify?

    • If LLMs' internal representations are essentially one-to-one mappings of input texts with no additional structure, how can those representations be useful for tasks like object manipulation in robotics?

      And how is transfer learning possible at all, where training on non-textual data improves performance on textual tasks? (A concrete sketch of this question follows below.)

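To make that last question concrete, here is a minimal probing sketch of the kind of experiment it alludes to: freeze a pretrained LM, read its hidden states out, and fit a small linear probe on them. This is an illustration only, not anything from the thread; the Hugging Face `transformers` API with GPT-2 as the model, and the toy "left of / right of" task, are assumed stand-ins.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Assumptions for illustration: GPT-2 as a small stand-in LM, and a
# made-up binary probe task over spatial relations.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2")
model.eval()  # frozen: we only read representations out, never fine-tune

def embed(text: str) -> torch.Tensor:
    """Return the last-layer hidden state of the final token."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, seq_len, 768)
    return hidden[0, -1]  # (768,)

# If the representation were a one-to-one re-encoding of the input string
# with no additional structure, a linear probe on it should recover
# relations no better than a probe on the raw tokens would.
probe = torch.nn.Linear(768, 2)  # e.g. classify "left of" vs. "right of"
logits = probe(embed("the cup is to the left of the plate"))
```

The probe would still need labeled examples to train, of course; the point is only that whatever a frozen-weights probe can recover must already be present in the representation, which is what makes the transfer-learning question bite.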