Comment by tarsinge

2 years ago

What does political correctness mean for the output of a self-driving car system or a code completion tool? This is a concern only if you make a public chat service branded as an all-knowing assistant. And you can already get world-threatening scenarios by directly plugging basic automations into nuclear warheads without human oversight.

How could a code completion tool be made safe?

One natural response seems to be “it should write bug-free code”. This is the domain of formal verification, and deciding nontrivial properties of arbitrary programs is known to be undecidable in general. So in this formulation safe AI is mathematically impossible.
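To make the undecidability point concrete, here is a sketch (my own illustration, not from the comment) of the standard reduction from the halting problem: if a perfect "is this code bug-free?" checker existed, you could use it to decide whether any program halts, which is impossible. The helper name `make_bug_witness` is hypothetical.

```python
def make_bug_witness(program_src: str) -> str:
    # Reduction sketch: given any program P, build a program Q that runs P
    # and then triggers a bug. The `assert False` is reachable iff P halts,
    # so a checker that decides "Q is bug-free" would decide "P halts" --
    # and the halting problem is undecidable.
    return program_src + "\nassert False  # reachable iff the code above halts"


# Demonstration with a trivially halting program: the injected "bug" fires.
witness = make_bug_witness("x = 1 + 1")
try:
    exec(witness)
    bug_reached = False
except AssertionError:
    bug_reached = True
```

Running the witness of a halting program hits the assertion, so `bug_reached` ends up `True`; for a non-halting program no checker can tell in general.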

Should it instead refuse to complete code that could be used to harm humans? Then it would have to read the whole codebase to determine whether this is a military application. Pretty sure mainstream discourse is not ruling out military applications.