That's reasonable. Let's just call it root cause analysis in this case.
The original point seemed to me to be "we can't use computers because they're not accountable". I say we can, because we can do fault analysis and fix what is wrong. I won't say "we can hold them accountable", to avoid the category error.
I think folks may have different interpretations of accountability.
If your algorithm kills someone, is the accountability an improvement to the algorithm? A fine and no change to the algorithm? Imprisonment for related humans? Dissolution of some legal entity?