Comment by tsimionescu

4 years ago

I firmly believe that some parts of the AI community have a vast overestimation of their capabilities.

AGI is not 50 years away - it is unknowably far in the future. It may be a century, it may be a millennium. We just know far too little about the human mind to make any firm prediction - we're like people in Ancient Greece or 1800s England trying to predict how long it would take to develop technology to reach the moon.

We are still at the level where we can't explain, in computational terms, how the nervous system of a worm works. Emulating a human-level mind is so far beyond our capabilities that it is absurd to imagine we already know how to do it and "just need a little more time".

And yet most scientists of the day would have claimed there was no plausible, practical way to achieve a controlled fission reaction, with many flat-out stating it was impossible, at the very time the first fission reactor was ticking away in Chicago in 1942.

I agree it's not possible to have a good idea of the answer to this question, unless you happen to be involved with a development group that is the first to figure out the critical remaining insights, but it's more in the league of "don't know if it's 5 years or 50" rather than "100 years or 1000".

Predictions are easy when there are no consequences, but I wouldn't make a bet with serious consequences on the critical developments for AGI not happening in the next decade. Low probability, perhaps, but based on history, not impossible.

  • My reasoning is based on two things. For one, what we know about brains in general, and the amount of time it has taken to learn these things (relatively little - nothing concrete about memory or computation). For the other, the obvious limitations in all publicly shown AI models, despite their ever-increasing sizes, and the limited nature of the problems they are trying to solve.

    It seems to me extremely clear that we are attacking the problem from two directions - neuroscience, to try to understand how the only known example of general intelligence works, and machine learning, to try to engineer our way from solving specific problems to creating a generalized problem solver. Both directions are producing some results, but slowly, and with no ability to collaborate for now (no one in ML is taking inspiration from actual neural networks, despite the naming; and there is no insight from ML that can be applied when formulating hypotheses about living brains).

    So I can't imagine how anyone really believes that we are close to AGI. The only way I can see that happening is if the problem turns out to be much, much simpler than we believe - if it turns out that you can actually find a simple mathematical model that works more or less as well as the entire human brain.

    I wouldn't hold my breath for this, since evolution has had almost a billion years to arrive at complex brains, while basic computation started with the first unicellular organisms (even the organelles inside a cell, including the nucleus, implement simple algorithms to digest and reproduce, and even unicellular organisms tend to have some amount of directed movement and environmental awareness).

    This is all not to mention that we currently have no way of tackling the problem of teaching an AI the vast amount of human common-sense knowledge that is likely baked into our genes, and it's hard to tell how much that will impact true AGI.

    And even then, we shouldn't forget that there is no obvious way to go from approximately human-level AGI to the kinds of sci-fi super-super-human AGIs that some AI catastrophists imagine. There isn't any fundamental reason to assume that it is even possible to be significantly more intelligent than a human in a general sort of way (though there is also no reason to assume that you can't be!).

    • I think mixing in the human bit confuses the issue: you could have a goal-oriented AGI that isn't human-like and still causes problems (the paperclip maximizer).
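
      Here's a toy sketch of that thought experiment - the "world", its resources, and the reward function below are all invented purely for illustration. The point is only that an objective which counts paperclips and nothing else makes every other resource invisible to the agent:

        # Toy sketch of the paperclip-maximizer thought experiment.
        # The "world" and its resources are made up for illustration.
        world = {"iron": 10, "food": 10, "forests": 10, "paperclips": 0}

        def reward(w):
            return w["paperclips"]  # the objective counts paperclips, nothing else

        def actions(w):
            # each action converts one unit of some resource into a paperclip
            return [r for r in w if r != "paperclips" and w[r] > 0]

        def outcome(w, r):
            w2 = dict(w)
            w2[r] -= 1
            w2["paperclips"] += 1
            return w2

        while actions(world):
            # greedily take whichever action scores highest under reward();
            # the destroyed resource is invisible to the objective
            best = max(actions(world), key=lambda r: reward(outcome(world, r)))
            world = outcome(world, best)

        print(world)  # {'iron': 0, 'food': 0, 'forests': 0, 'paperclips': 30}

      Nothing in the loop is malicious; the trouble lies entirely in what reward() fails to mention.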

      Check out GPT-3’s performance on arithmetic tasks in the original paper (https://arxiv.org/abs/2005.14165)

      Pages: 21-23, 63

      Those results show some generality: the best way to accurately predict an arithmetic answer is to deduce how the mathematical rules work, and the paper shows some evidence of exactly that.
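
      If you want to poke at this yourself, here's a rough sketch of that kind of few-shot probe, using a small local model (gpt2 via Hugging Face transformers) as a stand-in for GPT-3 - the prompt format here is illustrative, not the paper's exact setup:

        # Few-shot arithmetic probe, loosely in the style of the GPT-3 paper.
        # gpt2 is a stand-in; expect it to fail often at this size.
        from transformers import pipeline

        generator = pipeline("text-generation", model="gpt2")

        # show two worked examples, then ask an unseen one
        prompt = (
            "Q: What is 24 plus 51?\nA: 75\n"
            "Q: What is 17 plus 26?\nA: 43\n"
            "Q: What is 38 plus 45?\nA:"
        )

        out = generator(prompt, max_new_tokens=4, do_sample=False)
        answer = out[0]["generated_text"][len(prompt):].strip()
        print(answer)  # a model that induced the addition rule would print "83"

      gpt2 will get these wrong most of the time; the paper's observation is that accuracy on exactly this kind of prompt climbs sharply with model scale.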

      > evolution has had almost a billion years to arrive at complex brains

      Brains are everywhere, and evolution is extremely slow. Maybe the large computational cost of training models amounts to running that same search, sped up?

      > there is no obvious way to go from approximately human-level AGI to the kinds of sci-fi super-super-human AGIs that some AI catastrophists imagine.

      It's worth reading more about the topic: it's less that we'll have some human-comparable AI and then be stuck with it, and more that things will continue to scale. Stopping at human level might be the harder task (as might getting something human-like at all).

      > This is all not to mention that we have no way right now of tackling the problem of teaching the vast amounts of human common sense knowledge that is likely baked into our genes to an AI, and it's hard to tell how much that will impact true AGI.

      This is a good point, and it's basically the 'goal alignment' or 'friendly AI' problem. It's the main reason for the risk, since you're more likely to get a powerful AGI without these 'common sense' human intuitions. I think your mistake is treating goal alignment as a prerequisite for AGI - the risk comes precisely from the fact that it isn't. (Humans aren't entirely goal-aligned either, but that's a different issue.)

      I understand the skepticism - I was skeptical too - but if you read more about it (not pop-sci, but the books from the people working on the stuff), it's more solid than you probably think, and your positions on it won't hold up.