Comment by roenxi

3 years ago

I would like to nit-pick and say you're estimating some novel metric. The value add would be somewhat higher than that, because the alternative is a human driver: the value-add of the software is approximately (the $/h value of the driver × the hours it frees up to do something else). Plus, all the evidence I see suggests that, whatever the current state, sooner rather than later computers are going to be superhuman at driving (as they are at most other discrete tasks) and will save a lot of lives. Plus reduced wear & tear on vehicles.

$365B in value is probably a serious underestimate. The only reason to complain here is if only $100B has been put into the venture so far.
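As a back-of-envelope illustration of the value-add formula above (drivers × hours freed × $/h), here is a quick sketch. Every input number is a hypothetical assumption for illustration, not a sourced estimate:

```python
# Rough sketch of the "value of freed driver time" arithmetic.
# All inputs below are made-up assumptions, not real figures.
drivers = 100_000_000        # hypothetical: number of drivers whose time is freed
hours_per_year = 300         # hypothetical: driving hours saved per driver per year
value_per_hour = 20.0        # hypothetical: $/h value of that freed-up time

annual_value = drivers * hours_per_year * value_per_hour
print(f"${annual_value / 1e9:.0f}B per year")  # prints "$600B per year"
```

Even with modest inputs, the formula lands in the hundreds of billions per year, which is why a one-time figure like $365B can look like an undercount.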

I think computers will become superhuman at driving under the right conditions.

However, there will inevitably be conditions that require the use of general intelligence (rather than driving heuristics), and in those situations all you can do is pray the computer acts rationally despite not having GI.

I think self driving cars have already passed the test of "number of crashes" or "number of fatalities" per mile driven. But I don't think that's enough to sway the public if, every billion miles or so, a self driving car slowly drives off a cliff for no apparent reason.

  • Nothing says self driving cars can’t phone a call center for the 0.001% edge cases. Just add a cellphone connection and a Starlink receiver in the roof as a backup. At that point we would only need to add cell reception to some tunnels, and cars would have literally global coverage.

    Which means the problem drops from requiring near-AGI to merely not outrunning the car's ability to stop without hitting anything.

  • >I think self driving cars have already passed the test of "number of crashes" or "number of fatalities" per mile driven.

    This is not definitively known. The distribution of conditions under which self driving cars operate is very different from the distribution of human driving. Self driving car miles are disproportionately on the highway, with little traffic, in perfect weather (i.e. by far the safest driving conditions). In addition, we don’t know how many disengagements (or remote interventions) would have resulted in an accident.

    • True, true. However, if I had to bet... I'd bet that in a fully self driving world (using today's tech) there'd be far fewer fatal accidents overall. However, there'd also be a lot of bullshit we are simply unwilling to deal with (e.g. cars going 10mph in the rain; slow-motion fender benders; a car unwilling to turn left; gridlock at intersections; lots of random oddities). I guess my point is that safety metrics are certainly important stats for gauging self driving progress, but they're not the only metrics that matter, and perhaps not even the most important.


>Plus all the evidence I see suggests that whatever the current state, sooner rather than later computers are going to be superhuman at driving (like they are at most other discrete tasks) and save a lot of lives.

What is that evidence, exactly? I agree that we might eventually get there, but the timescale seems to be 50-100 years at this point. We are as arrogant as the researchers in the 60s who famously announced that absolutely perfect image recognition was only 1-2 years away - except the problem is several orders of magnitude harder.

  • Typically, from what I've seen, computers spend a long period being hopeless at a task, then go from slightly subhuman to par to superhuman very quickly. It seems to me that, as far as algorithms-and-Hz are concerned, human intelligence lives in a very narrow window, and once computers get close to it they tend to jump over it.

    I'd judge self driving to be slightly subhuman right now - there are definitely worse drivers on the road (typically impaired - drunk, near-blind, or high). I'd expect superhuman performance this decade, just based on that and the rate of improvement in anything AI-related right now.