Comment by pmarreck

5 hours ago

> to minimize other damage

You mean deaths to multiple other people, do you not? Let's just call a spade a spade here and point out the genuine ethical dilemma.

What's the ratio between "bodies of your own kids" and "other human bodies you have no other connection with" that a "proper" AI controlling a car YOU purchased should be willing to trade, in terms of injury or death?

I think most people would argue that it's greater than 1* (unless you are a pure rationalist, in which case, I tip my hat to you), but what "SHOULD" it be?

*meaning, in the case of a ratio of 2 for example, you would require 2 strangers' deaths to justify losing one of your own kids

Yeah, you also have to consider that your kids can be on either side of the equation too.

  • I honestly don't know if by "the other side of the equation" you mean your kid being on the street when somebody else's AV causes the accident. Bonus points if the owner of the AV is not liable for the accident.

We can take the AI out of the question entirely and ask how many other humans you personally as a driver would be willing to mow down to avoid your own death—driving off a bridge, say.

I would suggest that all but the most narcissistic would have some limit to how many pedestrians they would be willing to run over to save their own lives. The demand that the AI have no such limit—“that the AI will prioritize my life and safety over literally any other concern”—is grotesque.

> You mean deaths to multiple other people, do you not

I mean deaths the AI predicts for other people, yes

And I'm not saying I would never choose to kill myself over killing a schoolbus full of children, but I'll be damned if a computer will make that choice for me.

  • I don't believe any AV software out there attempts to solve the trolley problem. It's just not relevant and, moreover, actually illegal to have that code in some situations.

    You can't get into a trolley situation without driving unsafely for the conditions first, so companies focus on preventing that earlier issue.

  • > deaths the AI predicts for other people

    Isn’t this entirely hypothetical? In reality, are any systems doing this calculus? Or are they mimicking humans, avoiding obstacles and reducing energy in a series of rapid-fire calls?

    • It was an entire media beat-up, because the media was too afraid to talk about anything real and the public wasn't interested.

      There's plenty we could talk about, e.g. the failure scenarios of shallow reasoning systems, the serious limitations on the resolution and capability of the actual Tesla cameras used for navigation, the failure modes of LIDAR, etc.

      Instead we got "what if the car calculates the trolley problem against you?"

      And observationally, it's proof that a staggering number of people don't know their road rules (since every variant of it involves concocting some scenario where braking is left far too late, yet you somehow know perfectly well there isn't a preschool behind the nearest brick wall or something).

      I remember running some basic numbers on this in an argument, and you basically end up with: assuming the AI is fast enough to detect the situation at all, it's also fast enough that it can almost always stop the car with the brakes, and in the cases where it can't, no amount of aggressive manoeuvring would avoid the collision either.

      Which is of course what the road rules say: you slam on the brakes. Every other option is worse, and gets even worse when an AI can brake sooner and harder than a human, if it's smart enough to even consider other options.
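      A rough back-of-envelope version of that calculation, with hypothetical round numbers (50 km/h, ~0.8 g braking, a 3 m swerve needed to clear the obstacle) that aren't taken from any real vehicle, just to show the shape of the argument:

      ```python
      # Back-of-envelope braking-vs-swerving comparison.
      # All figures are hypothetical round numbers, not measurements of any real AV.

      V = 50 / 3.6        # speed: 50 km/h in m/s (~13.9 m/s)
      REACT = 0.1         # assumed AV detection + actuation latency, seconds
      BRAKE_DECEL = 8.0   # hard braking on dry asphalt, m/s^2 (~0.8 g)
      LAT_ACCEL = 6.0     # lateral acceleration of an aggressive swerve, m/s^2
      CLEARANCE = 3.0     # sideways offset needed to clear the obstacle, metres

      # Distance travelled before the car comes to a complete stop.
      stopping_distance = V * REACT + V ** 2 / (2 * BRAKE_DECEL)

      for obstacle_m in (5, 10, 15, 20, 30):
          # Worst case: time to reach the obstacle at constant speed.
          time_available = obstacle_m / V
          swerve_offset = 0.5 * LAT_ACCEL * max(0.0, time_available - REACT) ** 2
          print(f"obstacle at {obstacle_m:>2} m: "
                f"braking stops in {stopping_distance:.1f} m "
                f"({'ok' if stopping_distance <= obstacle_m else 'too late'}), "
                f"swerve moves {swerve_offset:.1f} m sideways "
                f"({'ok' if swerve_offset >= CLEARANCE else 'not enough'})")
      ```

      With those figures, any obstacle far enough away for a swerve to move the car a lane-width sideways is already far enough away for plain braking to stop short of it, and the cases where braking fails are cases where swerving fails too.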

      3 replies →

  • The AI can also only ever predict that you might die. So how should these predictions be weighed? Say there's a group of five children - the car predicts a 90% chance of death for them, vs. 50% for you if the car avoids them. According to your comments, it seems like you'd want the car to choose to hit the children, right?

    What is the lowest likelihood of your own death you'd find acceptable in this situation?