Comment by resident423

11 hours ago

There isn't even a solution for how to control highly capable systems at all; everyone wants to decide what to do with the AI before they've even solved the problem of controlling it.

It's like how everybody imagines their lives will be great once they're a millionaire, but they have no plan for how to get there. It's too easy to get lost dreaming of solutions instead of actually solving the important problems.

What’s an “important problem”? p(doom)? Anything else?

  • FWIW, my P(doom) is quite low (~0.1) because I think we're going to get enough non-doomy-but-still-bad incidents caused by AIs that lack the competence to take over, and the response to those will be enough to stop actual doom scenarios.

    People like Simon Willison are noting the risk of a Challenger-like disaster, talking about normalisation of deviance as we keep using LLMs we know to be risky in increasingly critical systems. I think an AI analogue to Challenger would not be enough to halt the use of AI in the way I mean, but an AI analogue to Chernobyl probably would.

    • > my P(doom) is quite low (~0.1)

      10% or 0.1%? Either way, that's not low! If airplanes crashed with that probability, we would avoid them at all costs.


  • P(doom) would be the most important for me; everything else depends on us being able to control the AI.

    But beyond that there are still problems like concentration of power and surveillance, permanent loss of jobs, and cyber and bio security. I'm not convinced things will go well even if we can avoid these problems, though. I try to think about what the world will be like if AI becomes more creative than us. What happens if it can produce the best song or movie ever made from a prompt? Do people get lost in AI addiction? We sort of see that with social media already, and it's only optimizing the content delivery. What happens when algorithms can optimize the content itself?

    • >what happens when algorithms can optimize the content itself?

      You think they aren't already? You're just inoculated by your exposure to pre-AI content, so you're not the target audience, and thus it's not delivered to you, as per your point about content delivery.

      But what is even the distinction between "content delivery" and "content" in this context? "The medium is the message" is a saying old enough to have great-grandkids. Does the device make the human irrevocably stare at it while wondering about made-up stuff? Yes. Check. Done.

      What's problematic about `p(doom)` is that it assumes there was a cohesive "us" in the first place. That's a very USian way of viewing things. OTOH, my individual `p(doom)` is in a superposition of 0 and 1, and I quite like it that way. Highly recommended.