Comment by bluefirebrand
5 hours ago
Personally I don't know if I care. Unless I can have some guarantee that the AI will prioritize my life and safety over literally any other concern, I'm not sure I would trust it
I don't ever want to be inside an AI driven vehicle that might decide to sacrifice me to minimize other damage
> to minimize other damage
You mean deaths to multiple other people, do you not? Let's just call a spade a spade here and point out the genuine ethical dilemma.
What's the ratio between "bodies of your own kids" and "bodies of strangers you have no connection with" that a "proper" AI controlling a car YOU purchased should be willing to trade, in terms of injury or death?
I think most people would argue that it's greater than 1* (unless you are a pure rationalist, in which case, I tip my hat to you), but what "SHOULD" it be?
*meaning, with a ratio of 2, for example, it would take two strangers' deaths to justify losing one of your own kids
Yeah, you also have to consider that your kids can be on either side of the equation too.
I honestly don't know. The other side of the equation could be your kid being on the street when somebody else's AV causes the accident. Bonus points if the owner of the AV is not liable for the accident.
We can take the AI out of the question entirely and ask how many other humans you personally as a driver would be willing to mow down to avoid your own death—driving off a bridge, say.
I would suggest that all but the most narcissistic would have some limit to how many pedestrians they would be willing to run over to save their own lives. The demand that the AI have no such limit—“that the AI will prioritize my life and safety over literally any other concern”—is grotesque.
> You mean deaths to multiple other people, do you not
I mean deaths the AI predicts for other people, yes
And I'm not saying I would never choose to kill myself over killing a schoolbus full of children, but I'll be damned if a computer will make that choice for me.
I don't believe any AV software out there attempts to solve the trolley problem. It's just not relevant and moreover, actually illegal to have that code in some situations.
You can't get into a trolley situation without driving unsafely for the conditions first, so companies focus on preventing that earlier issue.
> deaths the AI predicts for other people
Isn’t this entirely hypothetical? In reality, are any systems doing this calculus? Or are they mimicking human drivers, avoiding obstacles and shedding kinetic energy in a series of rapid-fire control decisions?
The AI can also only ever predict that you might die. So how should these predictions be weighed? Say there's a group of five children - the car predicts a 90% chance of death for them, vs. 50% for you if the car avoids them. According to your comments, it seems like you'd want the car to choose to hit the children, right?
What is the lowest likelihood of your own death you'd find acceptable in this situation?
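The 90%-vs-50% scenario above is just an expected-value comparison, and the "ratio" mentioned upthread amounts to a self-preference weight on the occupant's risk. A purely illustrative sketch of that arithmetic, with a hypothetical weight `k` (no real AV stack is known to implement anything like this):

```python
# Toy expected-harm comparison for the scenario in the comment above.
# Purely illustrative; the function names and the factor k are assumptions.

def expected_deaths(p_death: float, n_people: int) -> float:
    """Expected number of deaths given a per-person death probability."""
    return p_death * n_people

# Scenario: five children at 90% risk vs. one occupant at 50% risk.
hit_group = expected_deaths(0.9, 5)  # 4.5 expected deaths
swerve = expected_deaths(0.5, 1)     # 0.5 expected deaths

def weighted_choice(k: float) -> str:
    """Pick the action minimizing expected deaths, with the occupant's
    risk multiplied by a self-preference factor k (k=1 is pure utilitarian)."""
    return "swerve" if k * swerve < hit_group else "hit_group"

print(weighted_choice(1.0))   # utilitarian: swerve (0.5 < 4.5)
print(weighted_choice(10.0))  # strong self-preference: hit_group (5.0 > 4.5)
```

The point of the sketch is only that the answer flips once the self-preference weight exceeds the 9:1 ratio of expected deaths; it says nothing about what that weight should be, which is the question being asked.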
> not sure I would trust it
This is a fair concern. I’m unconvinced it’s even remotely a real market or political pressure.
On the market side, Waymo is constrained by some combination of vehicle production and supporting operations. (Tesla, by technology.) On the political side, the salient debate is around jobs, in large part because Waymo's best-in-class record has put to bed many of the practical safety questions.
Sure, but what happens when the tech gains market capture and inevitably enshittifies, the same way every other piece of tech has?
I'm not really thinking about when self driving is State of the Art Research. I'm talking about when it becomes table stakes.
Honestly the real truth is I just do not trust tech companies to make decisions that are remotely in my best interest anymore.
I can't even trust tech companies to build software that respects a "do not send me marketing emails" checkbox, why would I ever trust a car driven by software built by the same sort of asshole?
> what happens when the tech gains market capture
Idk, we solve it then. Motor vehicles kill 40,000 Americans a year [1]. I'm willing to cautiously align with Google and maybe even Tesla if they can take a bite out of those numbers.
[1] https://www.cdc.gov/nchs/fastats/accidental-injury.htm
What would that guarantee look like and would it be legal to sell a product that made that guarantee?
"Prioritizing my life over every other concern" looks like plowing over pedestrians to get me to the hospital. I don't think you can legally sell a product that promises that.
I find it interesting that you don't give other drivers any consideration in your analysis.
Other drivers should take public transit if they don't want to / are afraid to operate their own vehicles
As for me I actually like driving and I'm good at it. I'm not afraid of operating my own vehicle like so many people seem to be
No, I mean that they are not prioritizing you and many make poor choices.
Replacing bad other drivers with good autonomous systems is likely a great trade off for you, even if you are in an autonomous vehicle that is eager to sacrifice you if there is an unavoidable incident.
They are not afraid to operate their own vehicles. They are afraid you will kill them.
You just said that you do not care how many people you kill - regardless of whether they are pedestrians, whether they are driving cars or whether they are on the bus. That is what people react to.
Appreciate the honesty.
Sure, but then I don't want you to have a vehicle at all to minimize my own risk.
Feel free to minimize your own risk by staying home and never leaving