Comment by jasoncartwright
14 hours ago
If it's autonomous or self-driving, then why is the person in the car paying for the insurance? Surely if it's Tesla making the decisions, they need the insurance?
Generally speaking, liability for a thing falls on the owner/operator. That person can sue the manufacturer to recover the damages if they want. At some point, I expect it to become somewhat routine for insurers to pay out, then sue the manufacturer to recover.
Or at some point subscribing to a service may be easier than owning the damn thing.
All according to plan
Ah, but could one not argue that the owner of the self-driving car is _not_ the operator, and it is the car, or perhaps Tesla, which operates it?
All Tesla vehicles require the person behind the steering wheel to supervise the operations of the vehicle and avoid accidents at all times.
Also, even if a system is fully automated, that doesn't necessarily legally isolate the person who owns it or sets it in motion from liability. Vehicle law would generally need to be updated to change this.
Mercedes agrees. They take on liability when their system is operated appropriately.
Because that's the law of the land currently.
The product you buy is called "FSD Supervised". It clearly states you're liable and must supervise the system.
I don't think there's any law that would allow Tesla (or anyone else) to sell a passenger car with an unsupervised system.
If you take Waymo or Tesla Robotaxi in Austin, you are not liable for accidents, Google or Tesla is.
That's because they operate under limited state laws that allow them to provide such a service, but the law doesn't allow selling such cars to people.
That's changing. Quite likely this year we will have federal law that will allow selling cars with fully unsupervised self-driving, in which case the insurance/liability will obviously land on the maker of the system, not person present in the car.
Tacking "Supervised" on the end of "Full Self Driving" is just contradictory. Perhaps if it was "Partial Self Driving" then it wouldn't be so confusing.
It's only to differentiate it from their "Unsupervised FSD", which is what they call it now.
I imagine insurance would be split in two in that case. Carmakers would not want to be liable for e.g. someone striking you in a hit-and-run.
If the car that did a hit-and-run was operated autonomously the insurance of the maker of that car should pay. Otherwise it's a human and the situation falls into the bucket of what we already have today.
So yes, carmakers would pay in a hit-and-run.
You can sell autonomous vehicles to consumers all day long. There's no US federal law prohibiting that, as long as they're compliant with FMVSS as all consumer vehicles are required to be.
Waymo is also a livery service, which you normally aren't liable for as a passenger of a taxi or limousine unless you have deep pockets. /IANAL
> Quite likely this year we will have federal law that will allow selling cars with fully unsupervised self-driving, in which case the insurance/liability will obviously land on the maker of the system, not person present in the car.
This is news to me. This context seems important to understanding Tesla's decision to stop selling FSD. If they're on the hook for insurance, then they will need to dynamically adjust what they charge to reflect insurance costs.
I see. So the Tesla product they're selling insurance around isn't actually "Full Self-Driving" or "Autonomous" like the page says.
My current FSD usage is 90% over ~2000 miles (since v14.x). Besides driving everywhere, everyday with FSD, I have driven 4 hours garage to hotel valet without intervention. It is absolutely "Full Self-Driving" and "Autonomous".
FSD isn't perfect, but it is everyday amazing and useful.
Without LIDAR and/or additional sensors, Tesla will never be able to provide "real" FSD, no matter how wonderful their software controlling the car is.
Also, self driving is a feature of a vehicle someone owns, I don't understand how that should exempt anyone from insuring their property.
Waymo and others are providing a taxi service where the driver is not a human. You don't pay insurance when you ride Uber or Bolt or any other regular taxi service.
> Also, self driving is a feature of a vehicle someone owns, I don't understand how that should exempt anyone from insuring their property.
Well practically speaking, there’s nothing stopping anyone from voluntarily assuming liability for arbitrary things. If Tesla assumes the liability for my car, then even if I still require my “own” insurance for legal purposes, the marginal cost of covering the remaining risk is going to be close to zero.
Never say never—it’s not physically impossible. But yes, as it stands, it seems that Tesla will not be self driving any time soon (if ever).
If your minor child breaks something, or your pet bites someone, you are liable.
This analogy may be more apt than Tesla would like to admit, but from a liability perspective it makes sense.
You could in turn try to sue Tesla for defective FSD, but the now-clearly-advertised "(supervised)" caveat, plus the lengthy agreement you clicked through, plus lots of lawyers, makes you unlikely to win.
Can a third party reprogram my dog or child at any moment? Or even take over and control them?
Risk gets passed along until someone accepts it, usually an insurance company or the operator. If the risk were accepted and paid for by Tesla, then the cost would simply be passed down to consumers. All consumers, including those who want to accept the risk themselves. In particular, if you have a fleet of cars it can be cheaper to accept the risk and only pay for mandatory insurance, because not all of your cars are going to crash at the same time, and even if they did, not all in the worst way possible. This is how insurance works: by amortizing lots of risk to make a loss in the long run highly improbable.
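To make that amortization concrete, here's a toy simulation with entirely invented numbers (10,000 pooled cars, a 1% annual claim probability, $50k per claim, a $600 premium); a sketch, not actuarial math:

    import random

    # Invented numbers: 10,000 pooled cars, each with a 1% chance
    # of generating a $50,000 claim in a given year.
    CARS, P_CRASH, CLAIM = 10_000, 0.01, 50_000

    # Expected cost per car is 0.01 * 50_000 = $500/year;
    # the pool charges a 20% margin on top of that.
    PREMIUM = 600

    random.seed(0)
    losing_years = 0
    for _ in range(1_000):
        payouts = sum(CLAIM for _ in range(CARS) if random.random() < P_CRASH)
        if payouts > PREMIUM * CARS:
            losing_years += 1

    # With 10,000 independent risks, yearly payouts cluster tightly
    # around the $5M expectation, so losing years are uncommon.
    print(f"losing years out of 1,000: {losing_years}")

A single owner eats the full $50k hit in a bad year; the pool's per-car variance shrinks as the pool grows, which is exactly the amortization described above.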
Seems like the role of the human operator in the age of AI is to be the entity they can throw in jail if the machine fails (e.g. driver, pilot)
I’ve said for years that pragmatically, our definition of a “person” is an entity that can accept liability and take blame.
LLCs can't go to jail though
Not to be confused with “human” thanks to SCOTUS.
> Surely if it's Tesla making the decisions, they need the insurance?
Why surely? Turning on cruise control doesn't absolve motorists of their insurance requirement.
And the premise is false. While Tesla does "not maintain as much insurance coverage as many other companies do," there are "policies that [they] do have" [1]. (What it insures is a separate question.)
[1] https://www.sec.gov/ix?doc=/Archives/edgar/data/0001318605/0...
Cruise control is hardly relevant to a discussion of liability for autonomous vehicle operation.
In the context of ultramodern cruise control (e.g. comma.ai), which has radar to track the distance to the car (if any) in front of you, and cameras so the car can steer left or right to track the freeway, I think it does.
I think there is an even bigger insurance problem to worry about: if autonomous vehicles become common and are a lot safer than manually driven vehicles, insurance rates for human-driven cars could wind up exploding as the risk pool becomes much smaller and statistically riskier. We could go from paying $200/month to $2,000/month if robotaxis start dominating cities.
> if autonomous vehicles become common and are a lot safer than manually driven vehicles, insurance rates for human-driven cars could wind up exploding as the risk pool becomes much smaller and statistically riskier.
The assumption there is that the remaining human drivers would be the higher risk ones, but why would that be the case?
One of the primary movers of high risk driving is that someone goes to the bar, has too many drinks, then needs both themselves and their car to get home. Autonomous vehicles can obviously improve this by getting them home in their car without them driving it, but if they do, the risk profile of the remaining human drivers improves. At worst they're less likely to be hit by a drunk driver, at best the drunk drivers are the early adopters of autonomous vehicles and opt themselves out of the human drivers pool.
Drunk driving isn't the primary mover of high risk driving. Rather you have:
1. People who can't afford self driving cars (now the insurance industry has a good proxy for income that they couldn't tap into before)
2. Enthusiasts who like driving their cars (cruisers, racers, Hellcat revving, people who like doing donuts, etc...)
3. Older people who don't trust technology.
None of those are good risk pools to be in. Also, if self-driving cars go mainstream, they are bound to absorb the safest drivers overnight, so whatever accidents/crashes happen afterwards are covered by a much smaller and "active" risk pool. Oh, and those self-driving cars are expensive:
* If you hit one and are at fault, you might pay out $100k-$200k, while most states only require $25k-$50k of coverage...so you need more coverage or expect to pay more per incident.
* Self-driving cars have a lot of sensors/recorders. While this could work to your advantage (proving that you aren't at fault), it often won't (they'll have evidence that you were at fault), whereas before, fault might have been much hazier (both at fault, or both no fault).
The biggest factor comes if self-driving cars really are much safer than human drivers. They will basically disappear from the insurance market, or somehow be covered by product liability instead of insurance...and the remaining drivers will be left in a pool covering the remaining accidents on their own.
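Back-of-envelope on that shrinking pool, again with invented numbers (suppose the safest 80% of drivers switch to self-driving, but they only generated half of the claims):

    # Invented baseline: 1M insured drivers, $1B/year in total claims.
    drivers, total_claims = 1_000_000, 1_000_000_000
    old_premium = total_claims / drivers                # $1,000/driver/year

    # The safest 80% leave the pool, taking only 50% of the claims with them.
    remaining_drivers = drivers * 0.20
    remaining_claims = total_claims * 0.50
    new_premium = remaining_claims / remaining_drivers  # $2,500/driver/year

    print(old_premium, new_premium)  # 1000.0 2500.0

That's a 2.5x jump under these assumptions; the $200 to $2,000 scenario upthread would require the exodus to be even more skewed toward the safest drivers.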
The fact that you think $200 per month is sane is amusing to people in other countries.
Haha, yes, today already sucks badly in many US markets. Imagine what will happen when the only people driving cars manually are "enthusiasts".
Hell, I was paying €180/yr for my New Beetle a decade ago...
Is that low or high?
That's probably the future; Mercedes currently does do this in limited form:
https://www.roadandtrack.com/news/a39481699/what-happens-if-...
Not "currently," "used to": https://www.theverge.com/transportation/860935/mercedes-driv...
It was way too limited to be useful to anyone.
Why is the ship owner paying for the insurance while it's the captain making all the decisions?
Because the operator is liable? Tesla as a company isn't driving the car; it's an ML model running on something like HW4 on bare metal in the car itself. Would that make the silicon die legally liable?
Sounds like it's neither self-driving, nor autonomous, if I'm on the hook if it goes wrong.
Yeah, Tesla gets to blame the “driver”, and has a history of releasing partial and carefully curated subsets of data from crashes to try to shift as much blame onto the driver as possible.
And the system is designed to set up drivers for failure.
An HCI challenge with mostly autonomous systems is that operators lose their awareness of the system, and when things go wrong you can easily get worse outcomes than if the system was fully manual with an engaged operator.
This is a well known challenge in the nuclear energy sector and airline industry (Air France 447) - how do you keep operators fully engaged even though they almost never need to intervene, because otherwise they’re likely to be missing critical context and make wrong decisions. These days you could probably argue the same is true of software engineers reviewing LLM code that’s often - but not always - correct.
It's neither self-driving, nor autonomous, and eventually not even a car! (as Tesla slowly exits the car business). It will be 'insurance' on Speculation as a Service, as Tesla skyrockets to a $20T market cap. Tesla will successfully transition from a small-revenue to a pre-revenue company: https://www.youtube.com/watch?v=SYJdKW-UnFQ
The last few years of Tesla 'growth' show how this transition is unfolding. S and X production is shut down; just a few more models to go.
Especially since they can push regressions over the air: you could be lulled into a sense of safety and robustness that isn't there, and bam, you pay the costs of the regressions, not Tesla.
Who’s the “operator” of an “autonomous” car? If I sit in it and it drives me around, how am I an “operator”?
If you get on a horse and let go of the reins you are also considered the operator of the horse. Such are the definitions in our society.
The point is if the liability is always exclusively with the human driver then any system in that car is at best a "driver assist". Claims that "it drives itself" or "it's autonomous" are just varying degrees of lying. I call it a partial lie rather than a partial truth because the result more often than not is that the customer is tricked into thinking the system is more capable than it is, and because that outcome is more dangerous than the opposite.
Any car has varying degrees of autonomy, even the ones with no assists (it will safely self-drive you all the way to the accident site, as they say). But the car is either driven by the human with the system's help, or is driven by the system with or without the human's help.
A car can't have 2 drivers. The only real one is the one the law holds responsible.
> If it's autonomous or self-driving, then why is the person in the car paying for the insurance? Surely if it's Tesla making the decisions, they need the insurance?
Suppose ACME Corporation produces millions of self-driving cars and then goes out of business because the CEO was embezzling. They no longer exist. But the cars do. They work fine. Who insures them? The person who wants to keep operating them.
Which is the same as it is now. It's your car so you pay to insure it.
I mean, think about it. If you buy an autonomous car, would the manufacturer have to keep paying to insure it forever, as long as you can keep it on the road? The only real options for making the manufacturer carry the insurance are that the answer is no and they turn off your car after e.g. 10 years, which is quite objectionable, or that the answer is "yes" but you have to pay a "subscription fee" to the manufacturer which is really the insurance premium, which is also quite objectionable because you're then locked into the OEM instead of having a competitive insurance market.
Not all insurance claims are based on the choices of the driver.
It’s because you bought it. Don’t buy it if you don’t want to insure.
Yep, you bought it, you own it, you choose to operate it on the public roads. Therefore your liability.
If you bought and owned it, you could sell it to another auto manufacturer for some pretty serious amounts of money.
In reality, you acquired a license to use it. Your liability should only go as far as you have agreed to indemnify the licensor.
I don't think Tesla lets you buy FSD
They do, until Feb 14th.
You insure the property, not the person.
Well, it's the risk, the combination of the two. It's why young drivers pay more for insurance.
Not an expert here, but I recall reading that certain European countries (Spain???) allow liability to be put on the autonomous driving system, not the person in the car. Does anyone know more about this?
That is the case everywhere. It is common when buying a product for the contract to include who has liability for various things. The price often changes by a lot depending on who has liability.
Cars are traditionally sold with the customer holding liability. Nothing stops a car maker (or even an individual dealer) from selling cars today while taking on all the insurance liability, in any country I know of. They don't, for what I hope are obvious reasons (bad drivers will be sure to buy those cars since it is a better deal for them, and in turn a worse deal for good drivers), but they could.
Self-driving is currently sold with the customer holding liability because that is how it has always been done. I doubt it will change, but only because I doubt there will ever be enough advantage to make it worth it for someone else to take on the liability - but I could be wrong.
The coder and sensor manufacturers need the insurance for wrongful death lawsuits.
And Musk for removing lidar, so the car keeps jumping across high-speed traffic at shadows because the visual cameras can't see true depth.
99% of the people on this website are coders and know how even one small typo can cause random fails, yet you trust them to make you an alpha/beta tester at high speed?
It isn't fully autonomous yet. For any future system sold as level 5 (or level 4?), I agree with your contention -- the manufacturer of the level 5 autonomous system is the one who bears primary liability and therefore should insure. "FSD" isn't even level 3.
(Though, there is still an element of owner/operator maintenance for level 4/5 vehicles -- e.g., if the owner fails to replace tires below 4/32", continues to operate the vehicle, and it causes an injury, that is partially the owner/operator's fault.)
Wouldn't that requirement completely kill any chance of an L5 system being profitable? If company X is making tons of self-driving cars, and now has to pay insurance for every single one, that's a mountain of cash. They'd go broke immediately.
I realize it would suck to be blamed for something the car did when you weren't driving it, but I'm not sure how else it could be financially feasible.
No? Insurance costs would be passed through to consumers in the form of up-front purchase price. And probably the cost to insure L5 systems for liability will be very low. If it isn't low, the autonomous system isn't very safe.
The way it works in states like California currently is that the permit holder has to post an insurance bond that accidents and judgements are taken out against. It's a fixed overhead.