
Comment by UncleMeat

2 hours ago

The problem is that effort spent to reduce the "risk" of creating an evil god who tortures us all for the rest of time doesn't actually produce outcomes that reduce the risk of things like widespread job loss or the hyperaggregation of influence and money.

"Oh we'll at least get some side benefit" is not actually what is coming out of the endlessly circular forums talking about the apocalypse.

Even if there were no overlap*, that would be like criticising the green movement for not focussing on working hours and pay the way trade unions do.

Different people can care about different things; it's good that each of us gets to focus on what motivates us rather than all chasing the same thing, because when multiple teams do chase the same goal, typically only the best few of them actually make a difference.

* As it happens, there is some overlap: knowing more about how a narrow utility function behaves out of distribution is useful for both capabilities and safety. We're not even at the stage of being able to stop an AI from killing random subsets of its users with bad advice, nor of reliably preventing users from falling into delusions of grandeur, let alone of giving an AI a reliable sense of liberty and the pursuit of happiness to maintain.
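
To make that out-of-distribution point concrete, here's a toy sketch (entirely my own, with made-up names like true_utility and proxy; it's not anyone's actual training setup): fit a flexible proxy to preferences observed on a narrow slice of inputs, then ask it about inputs far outside that slice.

```python
# Toy sketch only: a "narrow utility function" fit on a thin slice of
# inputs, then queried far outside it.
import numpy as np

rng = np.random.default_rng(0)

def true_utility(x):
    # The "real" preference: bounded and well-behaved everywhere.
    return np.sin(x)

# Observations only cover a narrow distribution: x in [0, 1].
x_train = rng.uniform(0.0, 1.0, 50)
y_train = true_utility(x_train) + rng.normal(0.0, 0.01, 50)

# Fit a flexible proxy (degree-9 polynomial) to that narrow slice.
proxy = np.polynomial.Polynomial.fit(x_train, y_train, deg=9)

for x in [0.5, 2.0, 5.0, 10.0]:
    print(f"x={x:5.1f}  true={true_utility(x):+7.3f}  proxy={float(proxy(x)):+12.2f}")

# Typical result: at x=0.5 (in distribution) the proxy tracks the truth;
# at x=10 it extrapolates to values far beyond anything the training
# data ever expressed.
```

In distribution the proxy tracks the real preference; out of distribution it confidently extrapolates to extreme values. That's the toy version of why this matters to both camps: capabilities people want the extrapolation to be sane, safety people want it to not be catastrophic.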