
Comment by Cornbilly

11 hours ago

It's great that they faced essentially no consequences for this. A sure sign that we have a functional and sane market.

Why would they face consequences? Every store has video surveillance that can be reviewed.

They trusted their tech enough to accept the false-positive rate, then worked to determine and validate that false-positive rate with manual review, and iterated their models with the data.
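As a rough illustration, here is a minimal sketch of what that kind of review-and-retrain loop could look like; the names, thresholds, and data are hypothetical, not Amazon's actual pipeline:

```python
# Hypothetical human-in-the-loop validation: route low-confidence receipts to
# manual review, measure the false-positive rate, and keep corrected labels
# as training data. Everything here is illustrative, not Amazon's code.
from dataclasses import dataclass

@dataclass
class Receipt:
    receipt_id: str
    predicted_items: frozenset  # items the vision system billed for
    confidence: float           # model's confidence in this receipt

def needs_review(receipt: Receipt, threshold: float = 0.9) -> bool:
    """Send anything the model is unsure about to a human reviewer."""
    return receipt.confidence < threshold

def false_positive_rate(predictions, ground_truth) -> float:
    """Fraction of billed items that reviewers say were never picked up."""
    billed = sum(len(p) for p in predictions)
    wrong = sum(len(p - t) for p, t in zip(predictions, ground_truth))
    return wrong / billed if billed else 0.0

# Two example receipts; the low-confidence one gets corrected by a reviewer.
receipts = [
    Receipt("r1", frozenset({"milk", "bread"}), 0.97),
    Receipt("r2", frozenset({"milk", "chips"}), 0.62),  # flagged for review
]
corrections = {r.receipt_id: r.predicted_items for r in receipts if needs_review(r)}
corrections["r2"] = frozenset({"milk"})  # reviewer: "chips" was a false positive

preds = [r.predicted_items for r in receipts]
truth = [corrections.get(r.receipt_id, r.predicted_items) for r in receipts]
print(f"false-positive rate: {false_positive_rate(preds, truth):.2%}")
# The (prediction, corrected label) pairs then go back into the training set.
```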

From a consumer perspective, the point is that you can "just walk out". They delivered that.

  • If the stock price goes down, I won’t be surprised if there’s a shareholder lawsuit claiming that they misrepresented their level of AI achievement and that this led to the write-off by keeping operating costs and error rates high. The whole business model really assumed that they could undercut competitors with lower staffing.

  • Their initial advertising claimed near-full automation by their "AI" system when, in reality, they had people manually handling around 70% of the transactions.

    I get that this is a message board for YC, so lying about your company's tech is considered almost a virtue, but that is an unreasonably big lie to tell without getting your hand slapped by some regulatory body or facing investor backlash.

    • I don’t remember Amazon claiming “near-full automation” by AI. They said that you can check out automatically and that AI/computer vision is somehow involved.

    • Well, that's because, again, it was indeed algorithms doing the work; the people were only used to verify and train the system after the fact. People keep intentionally conflating the two things, doing everything in their power to say (or strongly imply) that the people involved were managing the orders in real time, which is a lie. You are the one pushing misinformation here.

    • I think investors like Amazon taking shots like this? It was never a broad rollout; 43 stores is micro-scale for Amazon.

      Still, would love to see a breakdown of why it didn't improve. Regardless of the accuracy at launch, I'd think that advances in AI would have been massively to their advantage. I wonder if security degradation hit them hard.

      The entire system depends on a level of social trust that doesn't exist in American cities today. Similarly, the "Dash Cart" seems like a cheaper and easier way to accomplish the same thing.

      At the end of the day, there's also a mismatch in the use case. If I'm going to a smaller-format store, like they had, I'm not buying a ton of stuff. Self-checkout is great, with minimal friction.

      I'd think that improving the UX of self-checkout gets 80% of the way there with way less fraud, way less theft, and way less technology.

      Still, I think it's wicked cool they took a big shot.

      I know someone who worked on the project in the early days. It was always incredibly difficult technology, they were always behind on their accuracy targets, and the solutions were increasingly kludgy as they layered more and more complex systems on top. An honorable failure.

      A lot of smart people really tried to make it work.


    • Who cares how they monitor and validate transactions? That's Amazon's problem, not mine.

      Indians, AI, whatever, meh.

What's the crime? If lying about AI capabilities is a crime, we have some billionaires in big trouble.

  • AI is not unique in this regard. We just saw the same thing with the crypto/blockchain nonsense.

    Regulation lags so far behind that you can get away with bad behavior long enough that, by the time regulation catches up, you can buy your way out of consequences.