Comment by worldsayshi
2 years ago
Yeah, this feels close to the issue. It seems more likely that a harmful superintelligence emerges from an organisation that wants it to behave that way than from it inventing and hiding its own motivations until it has escaped.
I think a harmful AI simply emerges from asking an AI to optimize for some set of seemingly reasonable business goals, only to find that it does great harm in the process. Most companies would then enable such behavior by hiding the damage from the press to protect investors, rather than temporarily suspending business and admitting the issue.
Not only will they hide it; they will own it when exposed and lobby to ensure it remains legal to exploit for profit. See the oil industry.
Forget AI. We can't even come up with a framework that keeps people's seemingly reasonable goals from doing great harm along the way. We often don't have enough information until we try and find out that, oops, using a mix of rust and powdered aluminum to protect something from extreme heat was a terrible idea.
> We can't even come up with a framework that keeps people's seemingly reasonable goals from doing great harm along the way.
I mean, it's not like we're trying all that much in a practical sense, right?
Whatever happened to charter cities?
We can’t even correctly gender people, lol.
This is well known from the paperclip maximization thought experiment.
The relevance of the paperclip maximization thought experiment seems less straightforward to me now. We have AI that is trained to mimic human behaviour on a large amount of data, plus reinforcement learning on a fairly large set of examples.
It's not like we're giving the AI a single task and asking it to optimize everything toward that task. Or at least it isn't architected for that kind of problem.