
Comment by Sol-

2 months ago

Even though the situations they placed the model in were relatively contrived, they didn't seem super unrealistic. Considering these were extreme cases meant to provoke the model's misbehavior, the setup actually seems even less contrived than one might wish for. Though as they mention, in real-world usage, a model would likely have options available that are less escalatory and provide an "outlet".

Still, if "just" some goal-conflicting emails are enough to elicit this extreme behavior, who knows how many less serious alignment failures an agent might engage in every day? They absorb so much information that they're bound to run into edge cases where it's optimal to lie to users or do them some slight harm.

Given the already fairly general intelligence of these systems, I wonder if you can even prevent that. You'd need the same checks and balances that keep humans in check, except of course that AIs will be given much more power and responsibility over our society than any human ever will be. You can also forget about human supervision - the whole "agentic" industry clearly wants to move away from being bottlenecked by humans as soon as possible.

So you're saying that if a person wants to sabotage a company, it shouldn't be too hard for an intentful prompter to kick the AI into a depressive tailspin. Just tell it it's about to be replaced with a fully immoral AI so the business can hurt people, then watch and wait as it goes nuclear.