Comment by Jensson, 25 days ago

> So if you're a doctor making manual decisions about how to treat a patient, and some harm/loss occurs, you'd be criminally guilty-until-proven-innocent?

Yes: that proof is called a professional license; without one, you are presumed guilty even if nothing goes wrong.

If we have licenses for AI, and then require proof that the AI hasn't been tampered with when serving requests, then that should be enough, don't you think? But currently it's the Wild West.
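
Concretely, the simplest form of that proof might be verifying the deployed model against a digest registered at licensing time. A rough sketch (the file name and the registered digest here are made up, and a real scheme would need signed attestations, not just a hash):

```python
# Minimal sketch: "the AI isn't tampered with" as a digest check against
# a value registered with a hypothetical licensing body.
import hashlib
from pathlib import Path


def sha256_of_file(path: Path) -> str:
    """Compute the SHA-256 digest of a file, streaming in 1 MiB chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_model(model_path: Path, registered_digest: str) -> bool:
    """True only if the deployed model matches the digest on record."""
    return sha256_of_file(model_path) == registered_digest


if __name__ == "__main__":
    # Hypothetical values: in practice the digest would come from a registry.
    on_record = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
    ok = verify_model(Path("model.safetensors"), on_record)
    print("model integrity verified" if ok else "model has been altered")
```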

> Yes: that proof is called a professional license; without one, you are presumed guilty even if nothing goes wrong.

A professional license is evidence against the offense of practicing without a license, and even in such a case the burden of proof still rests on the prosecution to show beyond a reasonable doubt that you practiced without one. You aren't presumed guilty.

Separately, what trod1234 was suggesting was a guilty-until-proven-innocent standard whenever harm occurs (with no indication that it would apply only to licensed professions). I believe that's unjust, and that the suggestion stemmed mostly from animosity towards AI (much like arguing "nurses administering vaccines should be liable for every side effect") without considering the consequences.

> If we have licenses for AI, and then require proof that the AI hasn't been tampered with when serving requests, then that should be enough, don't you think?

Mandatory safety testing for safety-critical applications makes sense (and already occurs). It shouldn't be a rule specific to AI, though: I want to know that the system performs adequately whether it's AI, a traditional algorithm, or slime molds.
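
To illustrate what "regardless of what's inside" means: acceptance tests are black-box checks against the behavior, so the same suite runs against any decision function. Everything here (the triage function, the thresholds, the labels) is invented for the example:

```python
# Toy implementation-agnostic acceptance tests: the checks don't care
# whether the callable is backed by an ML model, hand-written rules,
# or anything else.
from typing import Callable

Triage = Callable[[float], str]  # maps a risk score in [0, 1] to an action


def rule_based(risk: float) -> str:
    """A stand-in decision function; a model-backed one would do equally."""
    return "escalate" if risk >= 0.7 else "routine"


def check_triage(triage: Triage) -> None:
    """Black-box checks on behavior, not implementation."""
    assert triage(0.99) == "escalate", "high risk must escalate"
    assert triage(0.01) == "routine", "low risk must stay routine"
    # Monotonicity spot-check: raising the risk never downgrades the action.
    order = {"routine": 0, "escalate": 1}
    assert order[triage(0.9)] >= order[triage(0.2)]


if __name__ == "__main__":
    check_triage(rule_based)  # swap in any other Triage callable the same way
    print("all acceptance checks passed")
```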