Comment by embedding-shape
5 days ago
> Form ideas without the use of language.
Don't LLMs already do that? "Language" is just something we've bolted on as a later step so we can understand what they're "saying" and "communicate" with them; internally they're just dealing with floats of different values, in different layers, essentially (and grossly over-simplified, of course).
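To make that a bit more concrete, here's a rough toy sketch (a generic transformer stack, not any particular model; the vocab size, dimensions, and layer count are made up): language only appears at the input and output edges, and everything in between is float arithmetic.

```python
import torch
import torch.nn as nn

vocab_size, d_model, n_layers = 50_000, 512, 4

embed = nn.Embedding(vocab_size, d_model)        # token id -> vector of floats
layers = nn.ModuleList(
    nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
    for _ in range(n_layers)
)
unembed = nn.Linear(d_model, vocab_size)         # floats -> token logits

token_ids = torch.tensor([[42, 7, 1337]])        # language only at the input edge
x = embed(token_ids)                             # from here on: just floats
for layer in layers:
    x = layer(x)                                 # more floats, layer by layer
logits = unembed(x)                              # back to tokens only at the output edge
print(logits.shape)                              # torch.Size([1, 3, 50000])
```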
But language is the input and the vector space within which their knowledge is encoded and stored. They don't have a concept of a duck beyond what others have described a duck as.
Humans got by for millions of years with our current biological hardware before we developed language. Your brain stores a model of your experience, not just the words other experiencers have shared with you.
> But language is the input and the vector space within which their knowledge is encoded and stored. They don't have a concept of a duck beyond what others have described a duck as.
I guess if we limit ourselves to "one-modal LLMs", yes, but nowadays we have multimodal ones, which could think of a duck in terms of language, visuals, or even audio.
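As a rough illustration of the multimodal point (assuming a CLIP-style joint embedding model via Hugging Face transformers; "duck.jpg" is a placeholder path, and audio encoders like CLAP work analogously but aren't shown): the same "duck" region of the vector space can be reached from text or from an image.

```python
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("duck.jpg")                   # placeholder image path
inputs = processor(text=["a photo of a duck"], images=image,
                   return_tensors="pt", padding=True)

with torch.no_grad():
    text_vec = model.get_text_features(
        input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"])
    image_vec = model.get_image_features(pixel_values=inputs["pixel_values"])

# Both vectors live in the same joint space, so "duck described in words" and
# "duck seen in pixels" can be compared directly.
sim = torch.nn.functional.cosine_similarity(text_vec, image_vec)
print(sim.item())
```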
You don’t understand. If humans had no words to describe a duck, they would still know what a duck is. Without words, LLMs would have no way to map an encounter with a duck to anything useful.
LLMs don’t form ideas at all. They search a vector space and produce output; sometimes that output can resemble ideas if you loop it back into itself.
What if we learned that brains reduce to the same basic mechanics?
Impossible.
What is an idea?