Comment by bumby

3 years ago

I think we probably come to some of the same conclusions, but I'm approaching this problem slightly differently.

To use your car example: say I'm driving in front of a park where lots of parked cars line the street. Then a ball rolls out into the street from between two parked cars. I may have never personally seen a child run into the street from between two parked cars before, but I can infer (i.e., imagine) that from the context of the scenario. So I slow waaaay down in case that event happens. I don't need to have seen every edge case to still cover an awful lot of them.

I'm not sure AI is at that point (yet). There are arguments that approaches like reinforcement learning perform quite well on unseen edge cases by generalizing from past learning. But when the stakes are high, I'm not sure that is good enough.

(And regarding the 'it only has to be slightly better than the average human' counterpoint): I disagree. I think one of the reasons we are comfortable sharing the road with other ape-driven vehicles is that we have a theory of mind: we can intuit what someone else is thinking and 'imagine' their course of action. We've evolved to have this sense. We do not, however, have the ability to intuit what a computer will do, because it 'evolved' under very different circumstances. So our intuition about whether to trust it may be out of whack with whether it actually performs better. And, like it or not, the policy that governs whether AI-controlled cars are legal will depend heavily on public trust.