Comment by cs702

6 years ago

The majority of businesses and governments are insisting on learning this bitter lesson anew.

In the minds of many business executives and government officials, "explainable AI" means, quite literally, "show it to me as a linear combination of a small number of features" (sometimes called "drivers" or "factors") that have monotonic relationships with measurable outcomes.
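
To make that mental model concrete, here is a minimal sketch of the kind of "explanation" being asked for: a fitted linear model whose coefficients get read off as "drivers." The feature names and data are hypothetical, purely for illustration.

```python
import numpy as np

# Hypothetical data: each row is a loan applicant, each column a "driver".
features = ["income", "debt_ratio", "years_employed"]
X = np.array([
    [55.0, 0.30,  4.0],
    [72.0, 0.45,  9.0],
    [38.0, 0.60,  1.0],
    [90.0, 0.20, 12.0],
    [47.0, 0.55,  3.0],
])
y = np.array([0.62, 0.58, 0.21, 0.91, 0.33])  # measurable outcome

# Ordinary least squares: the model is literally a linear combination
# of the features, plus an intercept.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

print(f"intercept: {coef[0]:+.3f}")
for name, weight in zip(features, coef[1:]):
    # Each feature's effect is monotonic by construction: the sign and
    # size of the coefficient are the entire "explanation".
    print(f"{name}: {weight:+.3f}")
```

The appeal is obvious: three signed numbers anyone can read. The limitation is equally obvious: nothing learned by scalable search or self-play reduces to a table like that.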

I would go further: most people are understandably scared of, and worried by, intelligence that arises from scalable search and learning by self-play.

If explainable AI is too limiting, what's the alternative? What's going to happen when someone gets hauled into court to be held liable for their non-explainable AI's outcomes? Oh right, I know, they'll hide behind corporate limited-liability shenanigans, until people get tired of that and go straight for the guillotines. Or maybe the non-explainable AI's owners will decide they want to prevent that, and ... do you want Skynet? Because that's how you get Skynet. Maybe spend some time thinking about the various awful ways this could play out before concluding that explainability isn't important.

  • I love the phrase “explainable AI”. We still can’t explain how our intelligence works with any degree of biological detail.

    • We can't explain the implementation details, but a human decision-maker can literally explain the logic she used to reach a decision. For example, in justice-system applications where AI has been recommended, this is a highly important quality.
