
Comment by _ea1k

1 month ago

TBH, the comments here amaze me. The claim is that a human being paid to monitor a driver assistance feature is 3x more likely to crash than a human alone.

That needs extraordinary evidence. Instead, the evidence on offer is misleading guesswork.

> That needs extraordinary evidence.

Waymo studied this back when it was a Google moonshot, and concluded that going full automation is safer than human supervision. A driving system that mostly works lulls the driver into complacency.

Besides automation failure, driver complacency was a big component[1] of the fatal accident that led to the shuttering of Uber's self-driving efforts - the safety driver was looking at her phone for minutes in the lead-up. It is also why driver attention is monitored in L2 systems.

  • If rider and pedestrian safety is the main concern, the automated assistance and safety systems that car manufacturers were already developing make the most sense. They either warn or intervene in situations where the human may not realize they are in danger and/or cannot respond in time. Developing these solves the harder problems first; full automation is easy in comparison.

    The idea of mostly automating the system because it's statistically better than humans, while requiring a human to monitor and respond in exactly those remaining situations, was flawed logic to begin with. Comparisons of statistics should be made like-for-like, given these are scenarios we can easily control for.

    For example, robotic taxis should at least be compared to professional drivers on similar routes, roads, vehicles, and times of day. Not comparing "all drivers in all vehicles in all scenarios over time" against private company data that cherry-picks "automated driving" miles on highways etc. (where existing assistance systems could already achieve near-perfect results).

    Companies testing autonomy on the public should be forced to upload all crash data to investigators as part of their licensing. The vehicles already have extremely detailed sensor and video data to operate. The fact that we have no verified data to compare to existing human statistics is damning. It's a farce.

  • Sure, but we now have millions of miles of Tesla autopilot and FSD data in the hands of untrained and often semi-malicious end users as well. Out of that data, we've gotten flawed reports from Tesla claiming that it is dramatically safer, as well as independent renormalization that showed it to be at best about the same.

    None of those millions of miles resulted in a smoking gun showing the cars to be even 2x worse.

    And yet a badly written blog post claims to have shown them to be 3x worse with professional monitors? This is indeed an extraordinary claim.
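The like-for-like comparison argued for in the first reply can be sketched numerically. This is a minimal illustration of stratified (per-scenario) crash-rate comparison; every figure below is made up for the example and is not real crash data:

```python
# Hypothetical, made-up figures purely to illustrate why crash rates
# must be compared like-for-like. None of these numbers are real.

# Crashes and miles driven, broken down by driving scenario.
av_fleet = {
    "highway": {"crashes": 2, "miles": 8_000_000},
    "urban":   {"crashes": 4, "miles": 500_000},
}
# Matched baseline: professional drivers on similar routes and times.
pro_drivers = {
    "highway": {"crashes": 1,  "miles": 5_000_000},
    "urban":   {"crashes": 30, "miles": 5_000_000},
}

def rate_per_million(d):
    """Crashes per one million miles driven."""
    return d["crashes"] / d["miles"] * 1_000_000

for scenario in av_fleet:
    av = rate_per_million(av_fleet[scenario])
    pro = rate_per_million(pro_drivers[scenario])
    print(f"{scenario}: AV {av:.2f} vs pro {pro:.2f} per 1M miles")

# A single pooled rate hides the scenario mix: here the AV fleet is
# worse in BOTH scenarios, yet because its miles are overwhelmingly
# easy highway miles, its pooled rate looks far better than the
# baseline's (Simpson's paradox).
pooled_av = rate_per_million({
    "crashes": sum(d["crashes"] for d in av_fleet.values()),
    "miles":   sum(d["miles"] for d in av_fleet.values()),
})
pooled_pro = rate_per_million({
    "crashes": sum(d["crashes"] for d in pro_drivers.values()),
    "miles":   sum(d["miles"] for d in pro_drivers.values()),
})
print(f"pooled: AV {pooled_av:.2f} vs pro {pooled_pro:.2f} per 1M miles")
```

With these invented numbers the AV fleet is worse in every individual scenario, but its pooled rate is several times better than the baseline's, which is exactly the cherry-picking failure mode described above.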

That... is not really an extraordinary claim. That has been many people's null hypothesis since before this technology was even deployed, and the rationale for it is sufficiently borne out to play a role in vigilance systems across nearly every other industry that relies on automation.

A safety system with blurry performance boundaries is called "a massive risk." That's why responsible system designers first define their ODD, then design their system to perform to a pre-specified standard within that ODD.

Tesla's technology is "works mostly pretty well in many but not all scenarios, and we can't tell you which is which."

It is not an extraordinary claim at all that such a system could yield worse outcomes than a human with no assistance.