Comment by tzs

7 hours ago

I wonder if the meteoric rise in people using LLMs for advice had anything to do with this?

I was recently using ChatGPT and Perplexity to try to figure out some hardware glitches. I've found LLMs are way better than me at finding relevant threads for this kind of problem on Reddit, company support forums, forums of tech sites like Tom's Hardware, and similar.

The most common cause of the glitch I was seeing was a marginal Thunderbolt cable. A Best Buy 15 minutes from me had a 1m Apple Thunderbolt 5 cable. Amazon had the same cable for the same price with overnight Prime delivery.

If I'm spending $70 for an Apple cable I want it to actually be an Apple cable, so I asked ChatGPT if an Apple cable sold by Amazon was sure to be a genuine Apple cable.

It told me that it likely would be, but that if I wanted to be sure I should buy it from Best Buy.

I bought from Best Buy.

I've made that decision before without the help of LLMs so I'm not sure what you're trying to say here. It feels vaguely insulting to our intelligence.

  • I've made that decision before without LLMs too. If I had been Googling to find possibly relevant material instead of using LLMs to find possibly relevant material, I probably would have bought from Amazon.

    With Googling, the "figure out what is going wrong" part of solving the problem is more decoupled from the "figure out where to buy this thing" part. The first part involves Googling, looking at a bunch of results, finding a lot are not relevant, trying to refine the search, and repeating probably many times. After that time-consuming process, when I have finally decided that I needed a new cable, I'd probably just go to Amazon without thinking about it.

    I always have a little doubt when buying from Amazon because of commingling, but usually not enough to look deeper into it unless the product is something with a high risk of it.

    With the LLM instead of Google, I described to it upfront a lot of details about my equipment, how I was using it, what symptoms I was seeing, what diagnostic steps I'd taken and the results of those, and why I believed certain things that could cause such problems would not be applicable in my case.

    It then finds all the stuff I would have found by Googling, but because it also has way more information from what I told it at the start, it can eliminate a whole bunch of the irrelevant results, so I'm starting out way ahead of where I would be after a first Google search. A little back and forth and I know what I need to buy.

    At that point I'm still at the LLM screen. Since it is right there, tossing in a final question about buying from Amazon vs. Best Buy is trivial.

    I'm not a frequent LLM user. I have yet to pay for any LLM. (I did have a year of free Perplexity Pro that Xfinity gave to its customers a little over a year ago, but when that expired I did not subscribe.)

    (There's a funny story there--when it expired and they tried to convince me to subscribe, I asked Perplexity if a subscription would be worth it. It told me that, considering my usage patterns, the free plan was perfectly fine for me and I should stick with that.)

    A lot of people now are using LLMs instead of, or before, traditional Google-style searches when they want information. Not just techies or early adopters. They are, or are quickly becoming, mainstream.

    If they are recommending not buying from Amazon, that might be something Amazon would want to address.

    • I might be wrong, but wouldn't the recommendation to avoid Amazon if you want to be sure come from the massive amount of training data pulled from internet conversations? The kind that would already have been discussing the issue of counterfeit products on Amazon being mixed in with legitimate products from the original manufacturer, since this is a problem that's been going on for, what, at least a decade at this point, right?

      The LLM is inherently distrustful of Amazon due to having consumed and trained on a bunch of text that's about how one should be distrustful of Amazon.

  • Yes, it is common knowledge, but you need to get that information from somewhere in the first place, and why not an LLM?

    And sometimes common knowledge may be wrong, so it doesn't hurt to use LLMs, search engines, and other sources to confirm it. Maybe you'd discover that Best Buy has a problem with the exact product you want, or some other issue. It doesn't hurt to spend a couple of minutes double-checking and avoid losing $70.

And right there is where you will get ads in LLM responses. Or opinion manipulation like we have seen with Cambridge Analytica. Next time, ChatGPT might always recommend Amazon.