
Comment by molszanski

2 years ago

If anyone thinks we can build SAFE superintelligence, I think they are wrong.

I highly, highly recommend this book "Superintelligence" by Nick Bostrom

https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dang...

It has one of the highest densities of information and ideas per paragraph I've read.

And it is full of good ideas. Honestly, reading any other discussion or reasoning about AGI / safe AGI feels kinda pointless after that.

The author has described all possible decisions and paths, and we will follow just one of them. I couldn't find a single flaw in his reasoning in any history branch.

His reasoning is very flawed and this book is responsible for a lot of needless consternation.

We don’t know how human intelligence works. We don’t have designs or even a philosophy for AGI.

Yet the Bostrom view is that our greatest invention will just suddenly "emerge" (unlike every other human invention), and that you can have "AGI" (hand-wavy) without all the other stuff that comes along with human intelligence, namely consciousness, motivation, qualia, and creativity.

This is how you get a "paperclip maximizer" – an entity able to create new knowledge at astounding rates yet completely lacking in other human qualities.

What leads us to believe such a form of AGI can exist? Simply that we can imagine it? That's not an argument rooted in anything.

It's very unlikely that consciousness, motivation, qualia, and creativity are just "cosmic waste" hitching a ride alongside human intelligence. Evolution doesn't breed inefficiencies like that. They are more likely than not part of the picture, which undermines the idea of a paperclip maximizer. He's anthropomorphizing a `while` loop.

  • > His reasoning is very flawed and this book is responsible for a lot of needless consternation.

    Is it? I feel there is a stark difference between what you say and what I remember being written in the book.

    > We don’t know how human intelligence works.

    I think it was addressed in the first half of the book, which covered research and progress in the subject: tissue-scanning resolution, emulation attempts like the Human Brain Project, and advances in 1:1 simulations of primitive nervous systems such as worms, which simulate 1 second in 1 real hour or something.

    While primitive, we are making exponential progress.

    > Yet, the Bostrom view is that our greatest invention will just suddenly "emerge"

    I think it is quite the contrary. There was nothing sudden in the reasoning. It was all about slow progress in various areas that get us closer to advanced intelligence.

    The path from a worm to a village idiot is a million times longer than the one from a village idiot to the smartest person on earth.

    > an entity able to create new knowledge at astounding rates yet completely lacking in other human qualities.

    This subject was also explored in depth, IMO...

    Maybe my memory is cloudy, since I read the book 5+ years ago, but it feels like we've understood it very (very) differently.

    That said, for anyone reading, I am not convinced by the presented argument and suggest reading the book.