Comment by freediver
4 years ago
I co-authored a paper exploring this topic a few years ago, when I was pretty excited about the possibility of using embeddings for generalization.
"Towards conceptual generalization in the embedding space" https://arxiv.org/abs/1906.01873
I still think the approach outlined in the paper (using embeddings to map the physical world) is sound especially for the field of self-driving which is in dire need of generalization, but I've since changed my mind and currently do not believe we can achieve AGI (ever).
While embeddings are a great tool for compressing information, they do not provide inherent mechanisms for manipulating the information stored in order to generalize and infer outcomes in new, unseen situations.
And even if we did start producing embeddings that encoded some basic understanding of the physical world, we could never reach the level of detail necessary - because the physical world is not a discrete function. Otherwise we would be creating a perfect simulation (within a simulation?). And the last time I played God was in "Populous".
> I've since changed my mind and currently do not believe we can achieve AGI (ever).
Considering we (as in humans) developed general intelligence, isn't that already in contradiction with your statement? If it happened for us and is "easily" replicated through our DNA, it certainly can be developed again in an artificial medium. But the solution might not have anything to do with what we call machine learning today, and sure, we might go extinct before then (but I didn't have the feeling that's what you were implying).
It is not a contradiction as I meant "achieving" in the context of creating it (through software).
The fact it happened to us is undeniable (from our perspective), but the how/why of it is still one of the many mysteries of the universe - one we will likely never solve.
FWIW this is the same argument once made against human flight. In the late 19th century, there were a lot of debates of the form
> Clearly flight is possible, birds do it
> Sure but how/why is one of the many mysteries of the universe, one we will likely never solve.
"Man won't fly for a million years – to build a flying machine would require the combined and continuous efforts of mathematicians and mechanics for 1-10 million years." - NYT 1903
I’m curious why you think that. Do you think it’s a fundamental problem with the discrete nature of traditional computers? Or a problem with scale and computational limits? If it’s the latter, if a hypothetical computer has unlimited time and memory capacity, why do you think AGI would remain impossible?
It's semantics at this point, but we did not create ourselves; it was a complex process that took billions of years to create each one of us. Something being conceivable isn't the same as it being practically possible. I can imagine what you propose, but the same goes for traveling to distant stars or a time machine for going to the future. All perfectly possible in theory.
Yeah but interstellar and time travel haven’t been done, or at least we haven’t observed such.
Flight had. Intelligence has.
Thanks for your perspective. We’re still in disagreement but I wouldn’t bet on either side of the AGI debate with any significant conviction.
Embeddings are very good at a few things: combining concepts (addition), untangling commonalities (subtraction) and determining similarity between concepts (distance).
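The three operations above can be sketched with toy vectors. This is a minimal illustration, not a real model: the 4-d vectors and the word labels are made up for the example, whereas real learned embeddings have hundreds of dimensions.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, ~0 means unrelated."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hand-picked toy embeddings chosen so the classic analogy works exactly.
king  = np.array([0.9, 0.8, 0.1, 0.2])
man   = np.array([0.1, 0.9, 0.1, 0.1])
woman = np.array([0.1, 0.1, 0.9, 0.1])
queen = np.array([0.9, 0.0, 0.9, 0.2])

# Subtraction untangles the "male" component; addition combines
# the remainder with "female": king - man + woman ≈ queen.
analogy = king - man + woman
print(cosine(analogy, queen))  # high: the analogy lands near "queen"
print(cosine(analogy, man))    # low: and far from "man"
```

The point is that all three operations are plain linear algebra on the vectors, which is exactly why they are cheap but also why they don't, by themselves, constitute reasoning.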
> While embeddings are a great tool for compressing information, they do not provide inherent mechanisms for manipulating the information stored
What are the manipulations you’re referring to? I would love to learn more. From my understanding, embeddings actually provide great generalisation. If you have a well conditioned embedding space then you can interpolate into previously unseen parts of that space and still get sensible results. That is generalisation to me. Many current ML methods do _not_ result in a fully meaningful embedding space but my hunch is that we will get there with future insights and advances.
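The interpolation claim can be shown with a toy example. Everything here is hypothetical - the four "words" and their 2-d coordinates are invented so that the space is well conditioned by construction; the point is only that a point between two seen embeddings decodes to a semantically in-between concept.

```python
import numpy as np

# Invented mini-vocabulary laid out along a temperature axis.
vocab = {
    "cold": np.array([1.0, 0.0]),
    "cool": np.array([0.7, 0.3]),
    "warm": np.array([0.3, 0.7]),
    "hot":  np.array([0.0, 1.0]),
}

def nearest(v):
    """Word whose embedding is closest (Euclidean distance) to v."""
    return min(vocab, key=lambda w: np.linalg.norm(vocab[w] - v))

# Interpolate 40% of the way from "cold" to "hot" - a point the
# model never saw - and it still decodes to something sensible.
mid = 0.6 * vocab["cold"] + 0.4 * vocab["hot"]
print(nearest(mid))  # "cool"
```

Whether real embedding spaces are this well behaved between training points is exactly the open question in the thread.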
> We’re still in disagreement but I wouldn’t bet on either side of the AGI debate with any significant conviction.
That is probably a superior position to hold. I am agnostic by nature, and interestingly this is one of the rare topics I've taken a hard position on. It could be a result of the years spent in the field but also some kind of bias.
> What are the manipulations you’re referring to?
Need to take a step back and mention that in the field of AI there is a great debate between symbolic and non-symbolic approaches. After decades in which symbolic approaches dominated AI, we are now in the golden age of non-symbolic AI, with symbolic methods starting to make a comeback. This podcast is a good starting point to learn more: https://lexfridman.com/gary-marcus/ (although I disagree with GM on many things), and this tweet covers symbolic AI making a comeback: https://twitter.com/hardmaru/status/1470847417193209856
Basically embeddings are "non-symbolic AI" (which is great, and this is where their huge potential stems from), but the very way they are generated and later utilized is completely "symbolic". Which means the limits of embeddings are defined by the limits of the (in this case human-written) symbols used to define them. Hope that makes sense.
> currently do not believe we can achieve AGI (ever).
Do you mean with embeddings as the approach, or in general?
I think AGI will remain out of reach. Even a simpler thing like level 5 self-driving, which is only like 0.3 AGI or something, will remain forever out of reach no matter how much compute we throw at it (though I also think that if we ever reach 0.3 AGI we will also reach 100%).
The reason is that the mundane world keeps surprising us everywhere we look and constantly creates more questions than answers. Just look at the questions the field of quantum mechanics is trying to tackle, and the same goes for every other field of research science - astronomy, genetics, biology, anthropology, even mathematics... Now imagine trying to keep up with all that - by writing code.
Also, mastering these things would make us 'God'.