Comment by ben_w

10 hours ago

FWIW, my P(doom) is quite low (~0.1) because I think we're going to get enough non-doomy-but-still-bad incidents caused by AIs that lack the competence to take over, and the response to those incidents will be enough to stop actual doom scenarios.

People like Simon Willison are noting the risk of a Challenger-like disaster, talking about normalisation of deviance as we keep using LLMs, which we know to be risky, in increasingly critical systems. I think an AI analogue to Challenger would not be enough to halt the use of AI in the way I mean, but an AI analogue to Chernobyl probably would.

> my P(doom) is quite low (~0.1)

10% or 0.1%? Either way, that's not low! If airplanes crashed with that probability, we would avoid them at all costs.

  • 10%; doomers say this kind of number is unreasonably optimistic, hence the blunt title of the recent book by Yudkowsky and Soares. Do with this rank-ordering factoid, that 10% makes me an optimist, what you will.