Comment by Ukv
22 days ago
> Something appropriate would be where if AI was used in a safety-critical or life-sustaining environment and harm or loss was caused; those who chose to use it are guilty until they prove they are innocent I think would be sufficient, not just civil but also criminal
Would this guilty-until-proven-innocent rule apply also to non-ML code and manual decisions? If not, I feel it's kind of arbitrarily deterring certain approaches potentially at the cost of safety ("sure this CNN blows traditional methods out of the water in terms of accuracy, but the legal risk isn't worth it").
In most cases I think it'd make more sense to have fines and incentives for above-average and below-average incident rates (and liability for negligence in the worse cases), then let methods win/fail on their own merit.
> Would this guilty-until-proven-innocent rule apply also to non-ML code and manual decisions?
I would say yes, because the person deciding should be the one making the entire decision, but there are many examples where someone is paid simply to rubber-stamp decisions already made, letting the person who decided to implement the solution off scot-free.
The mere presence of AI (anything based on the underlying work of perceptrons) accompanied by a loss should prompt a thorough review, which corporations are currently incapable of performing for themselves due to a lack of consequences/accountability. Lack of disclosure, and the limits of current standing doctrine, are other issues that really require this approach.
The problem with fines is that they don't provide the needed incentives to large entities, as a result of money-printing through debt issuance, or indirectly through government contracts. It's also far easier for these entities, as market leaders, to employ corruption to work around the fine later. We've seen this a number of times in various markets/sectors, like JPM and the 10+ year silver price fixing scandal.
Merit based on subjective rates isn't something that can be enforced, because it is so easily manipulated. Gross negligence already exists and occurs frighteningly often, but it rarely makes it to court because proof often requires showing standing to get discovery, which isn't generally granted absent a smoking gun or the whim of a judge.
Bad things certainly happen where no one is at fault, but most business structures today are given far too much leeway and have promoted the 3Ds. It's all about: deny, defend, depose.
> > Would this guilty-until-proven-innocent rule apply also to non-ML code and manual decisions?
> I would say yes [...]
So if you're a doctor making manual decisions about how to treat a patient, and some harm/loss occurs, you'd be criminally guilty-until-proven-innocent? I feel it should require evidence of negligence (or malice), and be done under standard innocent-until-proven-guilty rules.
> The mere presence of AI (anything based on underlying work of perceptrons) [...]
Why single out based on underlying technology? If for instance we're choosing a tumor detector, I'd claim what's relevant is "Method A has been tested to achieve 95% AUROC, method B has been tested to achieve 90% AUROC" - there shouldn't be an extra burden in the way of choosing method A.
And it may well be that the perceptron-based method is the one with lower AUROC - just that it should then be discouraged because it's worse than the other methods, not because a special case puts it at a unique legal disadvantage even when safer.
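To make the comparison above concrete: AUROC can be read as the probability that a randomly chosen positive case is ranked above a randomly chosen negative one, regardless of what technology produced the scores. A minimal sketch, with made-up labels and scores for two hypothetical detectors (`auroc` here is a hand-rolled helper using that rank-statistic definition, not any particular library's API):

```python
from itertools import product

def auroc(y_true, scores):
    """AUROC via its rank-statistic definition: the probability that a
    random positive case outscores a random negative one (ties count half)."""
    pos = [s for y, s in zip(y_true, scores) if y == 1]
    neg = [s for y, s in zip(y_true, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p, n in product(pos, neg))
    return wins / (len(pos) * len(neg))

# Synthetic test set: 1 = tumor present, 0 = no tumor
y_true = [0, 0, 0, 0, 1, 1, 1, 1]

# Hypothetical confidence scores from two methods
method_a = [0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8, 0.9]  # every positive outranks every negative
method_b = [0.1, 0.2, 0.6, 0.4, 0.3, 0.7, 0.8, 0.9]  # one positive ranked below a negative

print(auroc(y_true, method_a))  # 1.0
print(auroc(y_true, method_b))  # 0.875
```

The point is that this measurement is method-agnostic: it says nothing about whether the scores came from a CNN or a hand-tuned rule, only how well they separate the two classes.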
> The problem of fines is that they don't provide the needed incentives to large entities as a result of money-printing through debt-issuance, or indirectly through government contracts.
Large enough fines/rewards should provide large enough incentive (and there would still be liability for criminal negligence where there is sufficient evidence of criminal negligence). Those government contracts can also be conditioned on meeting certain safety standards.
> Merit of subjective rates isn't something that can be enforced
We can/do measure things like incident rates, and we have government agencies that perform/require safety testing and can block products from market. Not always perfect, but that seems better to me than the company just picking a scapegoat.
> So if you're a doctor making manual decisions about how to treat a patient, and some harm/loss occurs, you'd be criminally guilty-until-proven-innocent?
Yes, that proof is called a professional license; without one, you are presumed guilty even if nothing goes wrong.
If we have licenses for AI and then require proof that the AI isn't tampered with for each request, then that should be enough, don't you think? But currently it's the Wild West.