Comment by ipnon
2 days ago
I really struggle to feel the AGI when I read such things. I understand this is all of a year old, and that we have superhuman results in mathematics, basic science, game playing, and other well-defined fields. But why is it difficult to impossible for LLMs to intuit and deeply comprehend what it is we are trying to coax from them?
> But why is it difficult to impossible for LLMs to intuit and deeply comprehend what it is we are trying to coax from them?
It's right there in the name. Large language models model language and predict tokens. They are not trained to deeply comprehend, as we don't really know how to do that.
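For what it's worth, the training objective really is just next-token prediction. Here's a rough sketch of what that loop looks like at inference time (using gpt2 via Hugging Face transformers purely as an illustration; any causal LM would do, and this is greedy decoding rather than the sampling production systems actually use):

```python
# Minimal sketch of "predict the next token, append it, repeat".
# gpt2 is a stand-in here; the loop is the point, not the model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tokenizer("Large language models", return_tensors="pt").input_ids
for _ in range(10):
    with torch.no_grad():
        logits = model(ids).logits            # scores over the vocabulary
    next_id = logits[0, -1].argmax()          # greedily pick the most likely token
    ids = torch.cat([ids, next_id.view(1, 1)], dim=-1)

print(tokenizer.decode(ids[0]))
```

Everything that looks like "comprehension" has to emerge from repeating that one step; nothing in the objective asks for it directly.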
Have you ever tried to get an average human to do that? It's a mixed bag. Computers until now were highly repeatable relative to humans, once programmed, but hopeless at "fuzzy" or associative tasks. Now they have a new trick that lets them grapple with ambiguity, but the cost is losing that repeatability. The best, most reliable humans were not born that way; it took years or decades of education, and even then it can take a lot of talking to transfer your idea into their brain.
> superhuman results in mathematics
LLMs mostly spew nonsense if you ask them basic questions about research-level or even master's-level mathematics. I've only ever seen non-mathematicians suggest otherwise, and even the most prominent mathematician advocating for AI, Terry Tao, seems to recognise this too.
Ask yourself: what is intelligence? Can intelligence at the level of human experience exist without that which we all also (allegedly) have, namely consciousness? What is the source of consciousness? Can consciousness be computed?
Without answers to these questions, I don't think we will ever achieve AGI. At the end of the day, frontier models are just arithmetic, conditionals, and loops.