Comment by notacoward

6 years ago

If explainable AI is too limiting, what's the alternative? What's going to happen when someone gets hauled into court to be held liable for their non-explainable AI's outcomes? Oh right, I know, they'll hide behind corporate limited-liability shenanigans, until people get tired of that and go straight for the guillotines. Or maybe the non-explainable AI's owners will decide they want to prevent that, and ... do you want Skynet? Because that's how you get Skynet. Maybe spend some time thinking about the various awful ways this could play out before concluding that explainability isn't important.

I love the phrase “explainable AI”. We still can’t explain how our intelligence works with any degree of biological detail.

  • We can't explain the implementation details, but a human can literally explain the logic she used to reach a decision. For applications in the justice system, where AI has been recommended, this is a highly important quality.

    • Eeeeeeh... what we do is more like parallel construction. We can give a series of plausible steps to explain where we ended up, but sometimes we can't really explain why we took some of those steps.