
Comment by fossuser

4 years ago

It's not the same: https://www.youtube.com/watch?v=hx7BXih7zx8

You can watch that talk and see the approach they're taking. Maybe you're more skeptical than they are about the near term possibility, but you can see that the work and progress is real.

The AGI risk is real too.

What Milton said doesn't make semantic sense; it's not a question of timelines.

https://intelligence.org/2017/10/13/fire-alarm/

I firmly believe that some parts of the AI community vastly overestimate their own capabilities.

AGI is not 50 years away - it is unknowably far in the future. It may be a century, it may be a millennium. We just know far too little about the human mind to make any firm prediction - we're like people in Ancient Greece or 1800s England trying to predict how long it would take to develop the technology to reach the moon.

We are at the level where we can't understand, in computational terms, how the nervous system of a worm works. Emulating a human-level mind is so far beyond our capabilities that it is absurd to imagine we already know how to do it and "just need a little more time".

  • And yet most scientists would claim that there was no plausible, practical way to achieve a controlled fission reaction, with many flat-out stating it was impossible, at the same time that the first fission reactor was ticking away in Chicago in 1942.

    I agree it's not possible to have a good idea of the answer to this question, unless you happen to be involved with a development group that is the first to figure out the critical remaining insights, but it's more in the league of "don't know if it's 5 years or 50" rather than "100 years or 1000".

    Predictions are easy when there are no consequences, but I wouldn't make a bet with serious consequences on critical developments for AGI not happening in the next decade. Low probability, probably, but based on history, not impossible.

    • My reasoning is based on two things. For one, what we know about brains in general and the amount of time it has taken to learn these things (relatively little - nothing concrete about memory or computation). For the other, the obvious limitations in all publicly shown AI models, despite their ever-increasing sizes, and the limited nature of the problems they are trying to solve.

      It seems to me extremely clear that we are attacking the problem from two directions - neuroscience, to try to understand how the only known example of general intelligence works, and machine learning, to try to engineer our way from solving specific problems to creating a generalized problem solver. Both directions are producing some results, but slowly, and with little cross-pollination for now (no one is taking inspiration from actual neural networks in ML, despite the naming, and there is no insight from ML that can be applied to formulating hypotheses about living brains).

      So I can't imagine how anyone really believes that we are close to AGI. The only way I can see that happening is if the problem turns out to be much, much simpler than we believe - if it turns out that you can actually find a simple mathematical model that works more or less as well as the entire human brain.

      I wouldn't hold my breath for this, since evolution has had almost a billion years to arrive at complex brains, while basic computation started with the very first unicellular organisms (even organelles inside the cell and the nucleus implement simple algorithms to digest and reproduce, and even unicellular organisms tend to have some amount of directed movement and environmental awareness).

      This is all without mentioning that we currently have no way of teaching an AI the vast amount of human common-sense knowledge that is likely baked into our genes, and it's hard to tell how much that will matter for true AGI.

      And even then, we shouldn't forget that there is no obvious way to go from approximately human-level AGI to the kinds of sci-fi super-super-human AGIs that some AI catastrophists imagine. There isn't even any fundamental reason to assume that it is possible to be significantly more intelligent than a human, in a general sort of way (there is also no reason to assume that you can't be!).


I don't understand how watching a video of someone who is not Musk is relevant to a comparison with Milton.

I'm not claiming that the Head of AI Research at Tesla has said stupid things (I don't think he has); I'm saying that Musk has said stupid, nonsensical things that make no semantic or scientific sense, and that, compared to Milton, his hucksterism differs only in degree.

  • He's the head of AI and self-driving at Tesla because Elon hired him to do exactly that and understands the approach they're taking.

    It's not a matter of degree, but one of kind.