Comment by windowshopping
10 days ago
Just wondering how many others in this thread perceive this quest for "AGI" as delusional at the current time, when we don't yet understand the basis of natural general intelligence in almost any way at all? It's good to shoot for the stars, but it feels as if NASA were asking for funding for a manned mission to Andromeda before even landing a man on Mars. The belief that LLMs are the ticket feels absolutely quixotic to me.
The idea that LLMs have any road to AGI is much like looking at Charles Babbage's analytical engine design and decreeing that the road to creating a mind is, to borrow a quote from Henry Babbage, merely "a question of cards and time".
Various parts of their corporate structure and previous business/financial relationships are tied to the notion of "AGI" being achieved, a notion which is poorly defined and likely to become a semantic/legal debate more than a scientific one.
So their pushing that language in their PR/marketing activity is not a surprise, and it isn't really even meant to be scientifically meaningful.
I'm not sure people are saying LLMs are the ticket. Human intelligence has many aspects apart from language. Large language models seem to do quite well with language but are not really the thing for spatial awareness, doing maths, playing Go, operating robot bodies, and various other tasks. Computers can do OK at that stuff too, just generally not via language models.
If you define AGI as human-level intelligence in all aspects, there's a way to go yet, but things seem to be getting quite close to me. I'd say the Turing test is basically passed; stuff like Woz's coffee test (a robot can go into an unfamiliar house, find the coffee things, and make coffee) isn't there yet, but maybe in a couple of years? On that front I'd say DeepMind is much closer than OpenAI.
AGI doesn't have a strict definition, though, so I think it would depend a lot on what you see "AGI" as being.
We're well on our way to building AIs that are competent at many tasks. Assuming an AGI doesn't need to be able to do every task a human can do, and doesn't need to do all of those tasks as well as an expert human, something that could be called AGI doesn't seem that far off at all.
I remember a time quite recently when the idea of an AI beating a good-faith interpretation of the Turing test seemed very far away. I feel like we're much closer to AGI today than we were to beating the Turing test in the late 00s.
I've been saying that for some time, but you can cash in on the hype.
All you need to do is convince the credulous and greedy.
Yep. If it happens in 200 years and/or turns out to be LLM-like, consider me a dullard, future selves. I think humans feeding data to the computer (web crawling, RLHF, etc., etc.) as a substitute for sense organs provides nowhere near enough input for AGI. I'm also convinced that these sums of money, put into neuroscience instead, would bring about AGI quicker than any alternative.
It's all about data ingestion, and the data that's assimilable by computers is tiny.
I'm wondering why all these people think an AGI would care about humans enough to send terminators after them.
It would be fun to watch billionaires pour all their wealth into something that makes up its own mind, goes away, and doesn't give a damn about anything related to living things.
Not naming any books, so as not to spoil things for people; I'm just mentioning that this isn't my original idea, but one I find interesting.
One can define AGI in a way that is already achieved: for example, outperforming the average human on a large number of intellectual tasks.
But general intelligence has so much more to it than this. It's so overly simplistic to say "outperform on tasks."
General intelligence means perceiving opportunities. It means devising solutions for problems nobody else noticed. It means understanding what's possible and what's valuable just from existing, without being told. It means asking questions without prompting, simply for the sake of wondering and learning. It means so many things beyond "if I feed this data input to this function and hit run, can it come up with the correct output matching my expectations?"
Sure, an LLM might pass a series of problem-solving questions, but could it look up and see the motion of stars and realize they implied something about the nature of the world and start to study them, unasked, and deduce the existence of solar systems and galaxies and gravity and all the other things?
I just don't buy it. It's so reductive. They're hoping to skip the real legwork of understanding the true mechanisms of intelligence and achieve something great just by pouring enough processing time into training. It won't work. They're missing integral mechanisms by overfocusing on the one thing they have a handle on. They don't know what they don't know, but worse, they're not trying to find out.
> It means asking questions without prompting, simply for the sake of wondering and learning.
I disagree. What you are describing is one of the possible goals of intelligence; it doesn't define intelligence itself. Many humans are not really interested in wondering and learning, yet we call them intelligent.
You can totally tune an LLM so that it asks tons of questions of whoever opened the chat: How are you today? What are you doing right now? What are your hobbies? Etc.
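To make that concrete, here's a rough sketch against the OpenAI Python SDK; the model name and system prompt are my own illustrative assumptions, not anything tuned or shipped by OpenAI:

    # Rough sketch: steering a chat model to ask questions unprompted.
    # Assumes the openai>=1.0 Python SDK with OPENAI_API_KEY set in the
    # environment; model name and prompt wording are assumptions.
    from openai import OpenAI

    client = OpenAI()

    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name
        messages=[
            {"role": "system",
             "content": ("You are insatiably curious. In every reply, ask "
                         "the user at least two questions about their day, "
                         "their current activity, or their hobbies.")},
            {"role": "user", "content": "Hi."},
        ],
    )
    print(response.choices[0].message.content)

Of course that's prompting rather than fine-tuning, but it shows how cheaply the question-asking behavior can be induced.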
A $3 drugstore slim wallet calculator can beat humans at all kinds of problems.
The market itself is also arguably a massive form of AGI that well predates the concept. I choose this interpretation when watching Terminator (any of them, really).
TBF this doesn't imply anything about OpenAI's quest to make a chatbot that gets along with people at parties.
I’m with you on that.