
Comment by jiveturkey

2 years ago

Exactly. And define "safe". E.g., is it safe (i.e., not a dereliction) to _not_ use AI to monitor dirty bomb threats? Or, more simply, CSAM?

In the context of super-intelligence, “safe” has been perfectly well defined for decades: “won't ultimately result in everyone dying or worse”.

You can call it hubris if you like, but don't pretend like it's not clear.

  • It’s not, when most discussion around AI safety in the last few years has boiled down to “we need to make sure LLMs never respond with anything that a stereotypical Berkeley progressive could find offensive”.

    So when you switch gears and start using "safety" in that sense, it would be nice to have that clarified.