Comment by embedding-shape
6 hours ago
> Freeways might appear "easy" on the surface, but there are all sorts of long tail edge-cases that make them insanely tricky to do confidently without a driver
Maybe my memory is failing me, but I seem to remember people saying the exact opposite here on HN when Tesla first announced/showed off their "self-driving but not really self-driving" features: that it would be very easy to get working on highways, and that everything else would be the tricky part.
Highways are on average a much more structured and consistent environment, but every single weird thing (pedestrians, animals, debris, flooding) that occurs on streets also happens on highways. When you're doing as many trips and miles as Waymo, once-in-a-lifetime exceptions happen every day.
On highways the kinetic energy is much greater (Waymo's reaction time is superhuman, but the car can't brake any harder), and there isn't the option to fail safe (stop in place) like there is on normal roads.
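For a rough sense of the scale involved, here's a back-of-the-envelope sketch; the ~0.7 g deceleration below is an assumed hard-braking figure for dry pavement, not a Waymo spec:

```python
# Back-of-the-envelope: stopping distance grows with the square of speed,
# just like kinetic energy. The ~0.7 g braking figure is an assumption
# (hard stop on dry pavement), not a spec from Waymo or anyone else.
G = 9.81                 # m/s^2
DECEL = 0.7 * G          # assumed hard-braking deceleration

def stopping_distance_m(speed_mph: float) -> float:
    v = speed_mph * 0.44704          # mph -> m/s
    return v * v / (2 * DECEL)       # d = v^2 / (2a)

for mph in (30, 65):
    print(f"{mph} mph: ~{stopping_distance_m(mph):.0f} m to stop")
# 30 mph: ~13 m, 65 mph: ~61 m -- roughly 4-5x the distance at ~2x the speed
```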
Those constraints apply to humans too. So it seems likely that:
- it's easier to get to human levels of safety on freeways than on streets
- it's much harder to get to an order of magnitude better than humans on freeways than it is on streets
Freeways are significantly safer than streets when humans are driving, so "as good as humans" may be acceptable there.
I don't have any specific knowledge about Waymo's stack, but I can confidently say Waymo's reaction time is likely poorer than an attentive human's. By the time sensor data makes it through the perception stack, the prediction/planning stack, and back out to the control stack, you're likely looking at >500 ms. Waymos do have the advantage of consistency, though (they never text and drive).
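For what it's worth, here's roughly how one arrives at an estimate like that; every per-stage number below is an assumed budget for a generic perception -> prediction -> planning -> control pipeline, not a measurement of Waymo's stack:

```python
# Illustrative end-to-end latency budget for a generic AV pipeline.
# All numbers are assumptions for the sake of argument, not Waymo figures.
stage_budget_ms = {
    "sensor capture + transport": 50,      # frame period + I/O
    "perception (detect/track)": 150,
    "prediction": 100,
    "planning": 150,
    "control + actuation": 100,            # command latency + brake ramp-up
}

total = sum(stage_budget_ms.values())
print(f"assumed end-to-end latency: ~{total} ms")   # ~550 ms under these assumptions
```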
> but I can confidently say [...] you're likely looking at >500ms
That sounds outrageous if true. It's very strange to acknowledge you don't actually have any specific knowledge about this before making a grand claim, and not just making it "confidently" but labeling it as such.
They've been publishing some material around latency (https://waymo.com/search?q=latency), but I'm not finding any concrete numbers. I'd be very surprised if it were higher than a human's reaction time, which typically seems to be around 400-600 ms.
> I don't have any specific knowledge about Waymo's stack, but I can confidently say Waymo's reaction time is likely poorer than an attentive human.
Wait, so basically, "I don't know anything about this subject, but I'm confident regardless"?
Even if we assume this to be true, Waymos have the advantage of more sensors and fewer blind spots.
Unlike humans, they can also sense what's behind the car or in other spots not directly visible to a human. They can also measure distance very precisely thanks to lidar (and perhaps radar too?).
A human reacts to the brake lights when a car ahead brakes; without them, relying on stereo vision alone, it takes much longer to realize that the car ahead is getting closer.
And I'm pretty sure that when the car detects an obstacle approaching fast at certain distances, or a car ahead of you stops suddenly, or a deer jumps out, or whatever, it brakes directly without needing the full neural network pipeline. Those are probably low-level failsafes that are very fast to compute and certainly faster than anything a human could react to.
What gives you that confidence?
You're quite wrong. It tends to be more like 100–200 ms, which is generally significantly faster than a human's reaction.
People have lots of fears about self-driving cars, but their reaction time shouldn't be on the list.
Waymo "sees" further - including behind cars - and has persistent 360-degree awareness, wheres humans have to settle for time-division of the fovea and are limited to line-of-sight from driver's seat. Humans only have an advantage if the event is visible from the cabin, and they were already looking at it (i.e. it's in front of them) for every other scenario, Waymo has better perception + reaction times. "They just came out of nowhere" happens less for Waymo vehicles with their current sensor suite.
It's actually a really interesting topic to think about. Depending on the situation, there might be some indecision in a human driver that slows the process down. Whereas the Waymo probably has a decisive answer to whatever problem is facing it.
I don't really know the answers for sure here, but there's probably a gray area where humans struggle more than the Waymo.
It's easier to get from zero to something that works on divided highways, since there's only lanes, other vehicles, and a few signs to care about. No cross traffic, cyclists, pedestrians, parked cars, etc.
One thing that's hard about highways is that vehicles move faster: in a tenth of a second at 65 mph, a car has moved 9.5 feet. So if, say, a big rock fell off a truck onto the highway, detecting it early enough to proactively brake or change lanes to avoid it means detecting it at quite a long distance, which demands a lot from the sensors (e.g. how many pixels/LIDAR returns do you get at 300+ feet on an object smaller than a car, and how many do you need to classify it as an obstruction?).
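As a rough illustration of how little data a small obstacle gives you at range, here's a sketch with assumed numbers (a 0.3 m tall object and 0.1 degree vertical beam spacing; neither is a Waymo spec):

```python
import math

# How many vertical lidar beam rows land on a small obstacle at range?
# Assumed for illustration: a 0.3 m (~1 ft) tall object and 0.1 degree
# vertical beam spacing -- neither number is a Waymo specification.
OBJECT_HEIGHT_M = 0.3
BEAM_SPACING_DEG = 0.1

def vertical_beam_rows(range_m: float) -> float:
    angular_height_deg = math.degrees(math.atan(OBJECT_HEIGHT_M / range_m))
    return angular_height_deg / BEAM_SPACING_DEG

for rng_m in (30, 60, 100):            # ~100, ~200, ~330 feet
    print(f"{rng_m:>3} m: ~{vertical_beam_rows(rng_m):.1f} beam rows on the object")
# At ~100 m (~330 ft) the object spans only a row or two of returns --
# not much to decide "harmless trash bag" vs. "rock you must avoid".
```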
But those also happen quite infrequently, so a vehicle that doesn't handle road debris (or deer or rare obstructions) can work with supervision and appear to work autonomously, but one that's fully autonomous can't skip those scenarios.
One of the first high-profile Tesla fatalities was on a highway, where the vehicle misunderstood a left exit and crashed into a concrete barrier.
https://enewspaper.latimes.com/infinity/article_share.aspx?g...
The difficult part of highways is the interchanges, not the straight shots between them. And IIRC, Tesla didn't do interchanges at the time people were criticizing them for only doing the easiest part of self-driving.
I think the key is that it's easy to get "self-driving" working on highways when the car can hand off to the driver: "Follow the lines, go forward, don't get hit". But making it DRIVERLESS is a different beast, and the failure states are very different from those in surface-street driving.
> remember people saying the exact opposite
It was a common but bad hypothesis.
"If you had asked me in 2018, when I first started working in the AV industry, I would’ve bet that driverless trucks would be the first vehicle type to achieve a million-mile driverless deployment. Aurora even pivoted their entire company to trucking in 2020, believing it to be easier than city driving.
...
Stopping in lane becomes much more dangerous with the possibility of a rear-end collision at high speed. All stopping should be planned well in advance, ideally exiting at the next ramp, or at least driving to the closest shoulder with enough room to park.
This greatly increases the scope of edge cases that need to be handled autonomously and at freeway speeds.
...
The features that make freeways simpler — controlled access, no intersections, one-way traffic — also make ‘interesting’ events more rare. This is a double-edged sword. While the simpler environment reduces the number of software features to be developed, it also increases the iteration time and cost.
During development, ‘interesting’ events are needed to train data-hungry ML models. For validation, each new software version to be qualified for driverless operation needs to encounter a minimum number of ‘interesting’ events before comparisons to a human safety level can have statistical significance. Overall, iteration becomes more expensive when it takes more vehicle-hours to collect each event.”
https://kevinchen.co/blog/autonomous-trucking-harder-than-ri...
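To put rough numbers on that event-rate point, here's a sketch with purely illustrative assumptions (one 'interesting' event per 1,000 city miles vs. per 10,000 freeway miles, and 100 events needed to qualify a release; none of these figures come from Waymo, Aurora, or the linked post):

```python
# Why rarer 'interesting' events make iteration slower and more expensive.
# All numbers are assumptions for the sake of the argument.
EVENTS_NEEDED = 100                    # assumed per-release validation target

miles_per_event = {
    "city streets": 1_000,             # assumed event rate
    "freeway": 10_000,                 # assumed: ~10x rarer
}

for env, mpe in miles_per_event.items():
    print(f"{env}: ~{EVENTS_NEEDED * mpe:,} miles to collect {EVENTS_NEEDED} events")
# Under these assumptions, each freeway software iteration needs ~10x the
# fleet mileage to hit the same statistical bar.
```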
Highways are easier, but if something goes wrong the chance of death is pretty high. That's bad PR and could get you heavily regulated if you fuck it up.