Comment by joegibbs

13 hours ago

Right choice IMO. LLMs aren't going to reach AGI on their own, because language is a thing unto itself: very good at encoding concepts into compact representations, but not necessarily connected to reality. A human being gets years of binocular vision of real things, sound input, and all sorts of other sensations, which is nothing like the data we're training these models on. We think of language in terms of sounds and pictures rather than as abstract symbols.