Comment by tarsinge
2 years ago
I'm still unconvinced safety is a concern at the model level. Any software wrongly used can be dangerous, e.g. Therac-25, the 737 MAX, the Fujitsu UK Post Office scandal... Also, maybe I've spent too much time in the cryptocurrency space, but it doesn't help that the prefix "Safe" has been associated with scams like SafeMoon.
> Fujitsu UK Post Office scandal...
As an aside, I suspect Fujitsu is getting a bit of a raw deal here. I get the feeling this software was developed during, and (ahem) "vigorously defended" mostly by staff left over from, the time when the company was still International Computers Limited. Fujitsu only bought ICL sometime early this century (IIRC), and now their name is forever firmly attached to this debacle. I wonder how many Brits currently think "Huh, 'Fujitsu'? Damn Japanese, all the furriners' fault!" about this very much home-grown British clusterfuck?
Got to try profiting from some incoming regulation - I'd rather be seen as evil than incompetent!
Safety is just enforcing political correctness in the AI outputs. The only actual examples of real-world events we need to avoid are ridiculous scenarios like being eaten by nanobots (yes, this is an actual example by Yud)
> ...ridiculous scenarios like being eaten by nanobots (yes, this is an actual example by Yud)
Well, borrowed from Greg Bear: https://en.wikipedia.org/wiki/Blood_Music_(novel) .
What does political correctness mean for the output of a self-driving car system or a code completion tool? This is a concern only if you build a public chat service branded as an all-knowing assistant. And you can get world-threatening scenarios by plugging basic automation directly into nuclear warheads without human oversight.
How could a code completion tool be made safe?
One natural response seems to be "it should write bug-free code". This is the domain of formal verification, and deciding non-trivial semantic properties of arbitrary programs is undecidable in general (Rice's theorem). So in this formulation, safe AI is mathematically impossible.
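For concreteness, a rough sketch of the standard reduction (the `is_bug_free` oracle here is hypothetical, not any real API): if a general bug-freeness checker existed, you could use it to decide the halting problem, which Turing showed is impossible.

    # Sketch only: all names are hypothetical, nothing here is a real library.
    # If an oracle could decide "this code never crashes" for arbitrary
    # programs, it would also decide whether an arbitrary program halts.

    def is_bug_free(source: str) -> bool:
        """Hypothetical oracle: True iff `source` can never crash."""
        raise NotImplementedError("cannot exist in general (Rice's theorem)")

    def halts(program: str) -> bool:
        """Would-be halting decider built on top of the oracle."""
        # Wrapper that divides by zero exactly when `program` finishes,
        # so it contains a reachable "bug" iff `program` halts.
        wrapper = f"exec({program!r})\n_ = 1 / 0\n"
        return not is_bug_free(wrapper)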
Should it instead refuse to complete code that can be used to harm humans? So, it should read the codebase to determine if this is a military application? Pretty sure mainstream discourse is not ruling out military applications.