Comment by godelski

4 days ago

  > you are taking the rationalist argument to be

I think they say P(doom) is a high number[0]. Or, in other words, AGI is likely to kill us. I interpret this as "if we make a really intelligent machine, it is very likely to kill us all." My interpretation is mainly based on them saying "if we build a really intelligent machine, it is very likely to kill us all."

Yud literally wrote a book titled "If Anyone Builds It, Everyone Dies."[1] There's not much room for ambiguity here...

[0] Yud is on the record saying at least 95% (https://pauseai.info/pdoom). He also said anyone with a higher P(doom) than him is crazy, so I think that says a lot...

[1] https://ifanyonebuildsit.com/

Yes, I agree they are saying it is likely going to kill us all. My interpretation is consistent with that, and so is yours. The difference is in why/how it will kill us: you sound to me like you think the rationalist position is that malice follows from intelligence, and that it will therefore kill us. I think that's a wrong interpretation of their views.

  • Well then, instead of just telling me I'm wrong, why don't you tell me why I'm wrong?