Comment by shafyy
2 years ago
> Ng is right in that it's taking a lot the oxygen out of the room for more concrete discussion on the legitimate harms of generative AI -- like silently proliferating social biases present in the training data, or making accountability a legal and social nightmare.
And this is the important bit. All this rambling by people like Altman and Musk about the existential risk of AI distracts from the discussions we should be having about real AI harms, and thereby directly harms people.
I'm always unsure what people like you actually believe regarding existential AI risk.
Do you think it's simply impossible to make something intelligent that runs on a computer? That intelligence will automatically mean it shares our values? That it's not possible to build anything smarter than a smart human?
Or do you simply believe that it's a very long way away (centuries), and that there's no point in thinking about it yet?
I don’t see how we could make some artificial intelligence that, like in some Hollywood movie, creates robots with arms and kills all of humanity. There’s a physical component to it. How would it build the factories needed to manufacture all this?