Comment by kraf

2 years ago

I don't really see an argument made by Ng as to why they're not dangerous. I hardly ever see arguments at all; we're completely drowned in biases.

I know he has often said that we're very far away from building a superintelligence, and that is the relevant question. That is what's dangerous: something that plays every game of life the way AlphaZero plays Go after learning it for a day or so, better than any human ever could. Better than thousands of years of human culture with its passed-on insights and experience.

It's so weird: I'm scared shitless, but at the same time I really want to see it happen in my lifetime, naively hoping it will be a nice one.

I think he said "extinction risk" specifically. Obviously these tools can be dangerous.

The upcoming generation has never known a world where the government's role isn't to take extreme measures to "keep us safe" from our neighbors at home rather than just from foreign adversaries. It'll be interesting to see how that plays out amid mounting ethnic conflict as Boomer-defined coalitions fall apart.

Ironically AI’s place in this broader safety culture is probably the biggest foreseeable risk.