Comment by katbyte

3 years ago

It’s weird to me that people forget how bad vision can be for driving and expect Tesla to somehow be better than our own eyes.

How often do you encounter situations like bad fog/sunset/rain at night where it’s a total struggle to drive, and you slow right down to a crawl and even then only do alright because of a ton of inference?

I think Tesla deciding to go vision only will be regarded as one of the greatest blunders in self-driving history.

The counterargument to this is that since humans reach acceptable safety levels with vision only, it must be possible to do self driving with vision only. That said, augmenting vision with other methods does seem like a no-brainer for better performance.

  • We have a lot more signal than vision only. For example audio, the “feel of the road”, like feedback on the steering wheel and traction that we physically experience. Most of all we have actual intelligence and reasoning - not just pattern recognition.

    • Feel of the road is easy enough to get with the traction control hardware that most modern cars have.

    • Do you drive much? It would be QUITE the stretch to say most drivers have "actual intelligence and reasoning."

      Pattern matching for driving is probably better, frankly. You don't have people who are stressed out, pissed off, inattentive or in a hurry doing risky stuff on the road.


  • The counterargument to THAT is that human safety levels aren’t acceptable. They are tolerable perhaps, but I wouldn’t call the number of accidents and fatalities we have today acceptable.

    • The average human isn't acceptable, but in principle a system like this should still be better than the best human, just because it has more cameras than humans have eyeballs and they can be arranged so there are no blind spots; even with just two cameras placed (for no good reason) inside the cabin, it should be able to reach the performance of the best human all of the time.

      Current AI isn't that, but in principle it could get there.

  • That counterargument only holds if Tesla can build software that can approximate the human brain. I think it's laughable to expect they can do that, at least on any reasonable timeframe.

    Even if they could, a goal of self driving should be to do better than a human driver. Avoiding technology that can "see" in ways a human cannot is just short-sighted, and a huge missed opportunity.

    And all that still ignores the fact that many common environmental conditions make driving with only human eyes very unsafe. Think fog or heavy rain. A car relying only on cameras to drive in those situations will be next to useless.

    • Why is it laughable that Tesla can build software that approximates the human brain? There is software that is better at chess, better at go and better at poker than humans. Why is driving so special?

      Agree with your other points.


  • Two thoughts: how sure are you that the safety levels achieved by humans during bad vision would be considered acceptable for AVs? And secondly: humans have access to a reasonable (non-artificial) general intelligence.

  • > since humans reach acceptable safety levels with vision only

    No AI (reasoning) exists yet, only Machine Learning. It will take decades if not centuries.

    • Even ignoring that "AI" is generally accepted as a term of art:

      What do you mean by "reasoning", such that there is no example of a ML system that does this?


  • > The counterargument to this is that since humans reach acceptable safety levels with vision only

    100+ car pileups in Southern California checking in to provide a counterexample.

    The patchy fog in Southern California on I-5 can go from "not too bad" to "can't see your own hood" in a matter of seconds. Radar is going to catch hazards WAY before a human will.

    • My thinking is similar. Removing ultrasound may in the end be more of a legal decision than a purely technical one. I suspect neither humans nor ultrasound can deliver real safety in fog or blizzard conditions, so it may be best to clearly fold under truly difficult conditions and cut lawsuits against Tesla for fog crashes off at the pass. If drivers want to drive in fog, they will be entirely responsible for the results and can hardly argue otherwise.

      This leaves the question of moving to radar, but for precise resolution well ahead of the vehicle you need microwaves and a lot of power, I would guess - which reduces the vehicle's range. For all I know you might parboil passersby, too. One old MiG had a radar that would kill and roast rabbits on the runway as it took off, but that's a much different use case, of course.

Or perhaps it will be their advantage in the short term.

Imagine a foggy condition that causes a 50-car pileup on the highway. Which is more likely to avoid the collision: a Tesla that slowed down because it couldn't see, or a Waymo/Cruise blasting down the highway at 65 mph because its lidar can see through the fog?

  • Lidar can't see through fog (or snow/rain to enough of a degree), which is one reason Tesla has avoided it. Do you mean radar? In the case of radar, I would hope that it eventually becomes a base feature of all cars, to avoid/mitigate rear-end collisions with preemptive braking.

    • Many modern lidars absolutely can see through fog/snow/dust/rain, albeit degraded.

      Blackmore and Aeva can see through fog and dust that others can’t. Most sensors can see sufficiently through rain and snow.

    • If there's a car in front of you, and you're following at normal following distances, it might not be enough. That car will keep going decently fast, and your radar car will follow. The lead car comes to an almost instant stop when it hits, and while radar might pick up the impact, your car won't be able to decelerate as fast as a car that stops by hitting a stationary object.
