Comment by resident423

9 hours ago

There isn't even a solution for how to control highly capable systems at all; everyone wants to decide what to do with the AI before they've even solved the problem of controlling it.

It's like how everybody imagines their lives will be great once they're a millionaire, but they have no plan for how to get there. It's too easy to get lost dreaming about the outcome instead of actually solving the important problems.

What's an "important problem"? p(doom)? Anything else?

  • FWIW, my P(doom) is quite low (~0.1), because I think we're going to get enough non-doomy-but-still-bad incidents caused by AIs that lack the competence to take over, and the response to those will be enough to stop actual doom scenarios.

    People like Simon Willison are noting the risk of a Challenger-like disaster, talking about normalisation of deviance as we keep using LLMs we know to be risky in increasingly critical systems. I think an AI analogue of Challenger would not be enough to halt the use of AI in the way I mean, but an AI analogue of Chernobyl probably would.

    • > my P(doom) is quite low (~0.1)

      10% or 0.1%? Either way, that's not low! If airplanes crashed with that probability, we would avoid them at all costs.

  • P(doom) would be the most important one for me, since everything else depends on us being able to control the AI.

    But beyond that there are still problems like concentration of power and surveillance, permanent loss of jobs, and cyber and bio security. I'm not convinced things will go well even if we can avoid those problems, though. I try to think about what the world will be like if AI becomes more creative than us: what happens if it can produce the best song or movie ever made from a prompt? Do people get lost in AI addiction? We sort of see that with social media already, and it's only optimizing the content delivery; what happens when algorithms can optimize the content itself?