
Comment by space_fountain

7 months ago

Yes, but I think the other lesson might be that those black box machine translations have ended up being more valuable? It sucks when things don't always work, but that's also kind of life, and if the AI version works more often, that's usually OK (as long as the occasional failures aren't so catastrophic as to ruin everything).

> Yes, but I think the other lesson might be that those black box machine translations have ended up being more valuable?

The key difference is how tolerant the specific use case is of a probably-correct answer.

The things recent AI excels at (generative tasks, translation, etc.) are very tolerant of "usually correct." If a model can do more, and is right most of the time, then it's more valuable.

There are many other types of use cases, though.

  • A case in point is the ubiquity of Pleco in the Chinese/English space. It’s a dictionary, not a translator, and pretty much every non-native speaker who learns or needs to speak Chinese uses it. It has no ML features and hasn’t changed much in the past decade (or even two). People love it because it does one specific task extremely well.

    On the other hand, ML has absolutely revolutionised translation (of longer text), where having a model that contains prior knowledge about the world is essential.

Can’t help but read that and think of Tesla’s Autopilot and “Full Self Driving”. In some comparisons they claim to be safer per mile than human drivers … just don’t think too much about the error modes where the occasional stationary object isn’t detected and you plow into it at highway speed.

  • Relevant to the grandparent’s point: I am demoing FSD in my Tesla, and what I find really annoying is that the old Autopilot allowed you to select a maximum speed the car would drive. Well, on “FSD” you apparently have no choice but to hand full longitudinal control over to the model.

    I am probably among the 0.01% of Tesla drivers who set the computer to chime when they exceed the speed limit by some offset. Very regularly, even when FSD is in "chill" mode, the model will speed by 7-9 mph over the limit on most roads. (I gotta think that the young twenty-somethings who make up Tesla's audience also contributed their poor driving habits to Tesla's training data set.) This results in constant beeps, even as the FSD software violates my own criteria for a speed warning.

    So somehow the FSD feature becomes "more capable" while becoming much less legible to the human controller. I think this is generally a bad thing, but it seems to be the fad today.

    • I have no experience with Tesla and their self-driving features. When you wrote "chill" mode, I assume that means the lowest level of aggressiveness. Did you contact Tesla to complain that the car is still too aggressive? There should be a mode that tries to drive at exactly the speed limit where reasonable -- neither over nor under.


  • > In some comparisons they claim to be safer per mile than human drivers

    They are lying with statistics: in the more challenging locations and conditions, the AI gives up and lets the human take over, or the human notices something bad and takes over. So Tesla's miles are cherry-picked, and their data is not open, so no third party can produce real statistics and compare apples to apples.

  • Or in some cases, the Tesla slows down, then changes its mind and starts accelerating again to run over child-like obstructions.

    Ex: https://www.youtube.com/watch?v=URpTJ1Xpjuk&t=293s

    • Tesla's driver assist, from the very beginning to now, seems not to possess object/decision permanence.

      Here you can see it detected an obstacle (as evidenced by the info on screen) and made a decision to stop; however, it failed to detect the object once it was right in front of the car, promptly forgot about both the object and the decision to stop, and happily accelerated over the obstacle. When tackling a more complex intersection, it can happily change its mind about the exit lane multiple times, e.g. it will plan to exit on one side of a divider, replan to exit into oncoming traffic, then replan again.

  • Well, Tesla might be the single worst actor in the entire AI space, but I do somewhat understand your point. The lack of predictable failures is a huge problem with AI; I'm not sure that understandability by itself is. I will never understand the brain of an Uber driver, for example.