
Comment by hwillis

8 years ago

If anything it should make you feel the opposite. The brain is an extremely effective optimizer that solves these problems in a way that is opaque to us. We find it very difficult to solve them.

An AI is an "unlocked" brain. It can see all the things that it does, and then see how it does them. If it learns to ride a bike (and if it's a humanlike AI, it will be able to), it'll be able to see exactly what it does and build a model from that. We would be able to do the same, except we can't see inside our own brains: we have to study slow-motion video and notice every little thing. The AI just sees "oh, I turn slightly the other direction before initiating a turn." Voila, countersteering is discovered.

Except looking inside a neural network is almost impossible. For example, Google Photos still comes up empty if you search "gorilla". They could not figure out what was wrong with the recognition networks and had to just blacklist the keyword.

https://www.wired.com/story/when-it-comes-to-gorillas-google...

  • That's only an argument against using present-day neural networks as a basis for advanced AI. It's not a fundamental rule of reality that a mind can't understand its own internals.

  • It's not impossible, just time-consuming. It's the kind of task computers excel at but cannot currently do for general cases.

> An AI is an "unlocked" brain. It can see all the things that it does and then see how it does them.

Except this AI doesn't exist. You're imagining that such a technology exists, that it will be able to see all the detail in the world it needs to learn, and that, presto, it will do everything better than opaque humans.

If you think deep learning is such a technology, then ask yourself to what extent ANNs understand themselves, and you'll see they don't at all. They're just good at optimizing certain problems that humans are able to set up for them.

So, how will we opaque humans create such a transparent technology?

  • > Except this AI doesn't exist. You're imagining there is such a technology, and this technology will just be able to see all the detail in the world it needs to learn

    Obviously I'm imagining it. Strong AI does not yet exist. It's also obvious that it could exist, because humans do it. I'm only making two logical inferences here:

    1. Future superhuman AI will have at least the capabilities of the human brain, because we know those capabilities are possible: we have them ourselves.

    2. Future superhuman AI will be able to examine itself in memory and identify things about itself in a way that far exceeds human introspection: we can barely examine our own emotions, much less the actual neuronal contents of our heads.

    > And it's also why I think the paperclip maximizer and gray goo scenarios are silly. Maybe it's theoretically possible to create something that would eat the world, but in order to do so, it would have to overcome every obstacle the world throws at it. [...] If you think deep learning is such a technology, then ask yourself to what extent ANNs understand themselves and you'll see they don't at all.

    Well first off, they're quite good at it[1], but more importantly that's weak AI rather than strong AI. Arguing that weak AI is unlikely to be superhuman is plausible, but strong AI is definitely self-improving.

    What I think you're saying is that you're skeptical of us being able to create strong AI out of current techniques, which is also reasonable. NNs are not gonna evolve into Skynet any time soon. But believing strong AI to be categorically impossible requires the human brain to be special in some way: either beyond human comprehension or containing a supernatural component.

    [1]: https://en.wikipedia.org/wiki/Generative_adversarial_network

    • I'm skeptical that strong AI will be superhuman in a way that allows it to do much better than the entire human race at any task.

      In the context of the OP, the issue is all the detail in the world that took many people ages to work out. Will a superhuman AI be able to recognize all the detail it needs to accomplish tasks better than us (all humans)? Notice this isn't the same issue as being transparently intelligent.


  • > If you think deep learning is such a technology (...)

    I hope they don't think that. Deep learning is not suited for building a proper general AI, but that's a property of deep learning specifically, not of AI in general: nothing we know of in physics or information theory says that the only possible intelligence is a neural-network-like black box.

    > So, how will we opaque humans create such a transparent technology?

    The same way we identify bugs in our own thinking - by careful application of mathematical methods. It will take time.

  • We know that human-level opaque AI can exist. So the question becomes: how do you solve this problem if you have access to the low-level details of your brain processes and can run copies of parts of your brain with arbitrary inputs?

    One answer would be: generate thousands of short descriptions of the input signal's different aspects, find the ones that correlate most with the outputs, and refine the hypotheses using the scientific method.
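    The correlation step above can be sketched in a few lines. This is only a toy illustration, not anything from the thread: the "brain module" is a made-up black box that secretly responds to mean input brightness, and the candidate "descriptions" are hypothetical features I chose for the example. The idea is just to rank hypotheses by how well each one predicts the observed outputs.

    ```python
    import random

    # Hypothetical opaque "brain module": we can only observe input -> output.
    # (Assumption for this sketch: it secretly thresholds mean brightness.)
    def black_box(pixels):
        return 1.0 if sum(pixels) / len(pixels) > 0.5 else 0.0

    # Candidate short descriptions (feature hypotheses) of the input signal.
    features = {
        "mean_brightness": lambda p: sum(p) / len(p),
        "first_pixel":     lambda p: p[0],
        "max_pixel":       lambda p: max(p),
    }

    def correlation(xs, ys):
        # Pearson correlation coefficient, computed by hand.
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy) if sx and sy else 0.0

    # Probe the black box with arbitrary inputs, as the comment suggests.
    random.seed(0)
    inputs = [[random.random() for _ in range(16)] for _ in range(500)]
    outputs = [black_box(p) for p in inputs]

    # Rank hypotheses by how strongly each correlates with the outputs.
    ranking = sorted(
        ((name, abs(correlation([f(p) for p in inputs], outputs)))
         for name, f in features.items()),
        key=lambda t: t[1], reverse=True)

    for name, r in ranking:
        print(f"{name}: {r:.2f}")
    ```

    With enough probes, the hypothesis that actually drives the black box rises to the top of the ranking; refining hypotheses would mean generating new candidate features near the winners and repeating.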