Comment by cwillu

2 years ago

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

https://www.safe.ai/work/statement-on-ai-risk, signed by Ilya Sutskever among others.

I clicked, hoping that "human extinction" was merely the worst item on a list of things they oppose. But it's the only thing. That leaves open a whole lot of bad stuff they're apparently OK with AI doing (as long as it doesn't kill literally everyone).

  • That's like saying a bus driver is okay with violence on his bus because he has signed a statement against dangerous driving.