Comment by 542458

2 days ago

Honestly, this is huge for people like me who tend to over-research and over-think the hell out of product choices. "Find me a top-fill warm-mist humidifier that looks nice, is competitively priced against similar products, and is available from a retailer in $city_name. Now watch for it to go on sale and lmk."

If they can figure out how to get the right kickbacks/referrals without compromising user trust and really nail the search and aggregation of data this could be a real money-maker.

Trusting AI with your shopping is very short-sighted.

Lol what a terrible idea. Why not just hand every decision you'll ever make to AI?

Nobody needs critical thinking or anything. Just have AI do it so you save $3 and 4 minutes.

  • Why would I want to spend 1-2h researching humidifiers if I can spend that time in any other way, and still end up with a humidifier that fits my needs first try?

    This kind of task is perfect for AI in a way that doesn't take away too much from the human experience. I'll keep my art, but shopping can die off.

    • Because you end up with a $1-value piece of crap whose maker spent considerable time optimizing for LLMs and faking reviews instead of improving the product. In the medium term, this strategy will get you Temu-grade stuff.

    • How often do you buy the first result on an Amazon search? Because that's delegating your labour, isn't it? Surely the best products rise to the top, right? Well, no: they're paying to get to the top. An LLM with in-app shopping is going to be the same thing.

    • > This kind of task is perfect for AI in a way that doesn't take away too much from the human experience.

      Not the current form of AI. I regularly use Project Farm to find the best "insert tool". In an ideal world a robot runs all of these tests in perpetuity covering every physical appliance possible (with every variation, etc.). However, current AI cannot do this. Obviously LLMs can't do this because they don't operate in the physical world.

    • Well, you can always do the same thing an LLM would: open SEO-spam ranking sites ("best humidifiers 2025") filled with referral links to Amazon and other sellers, which basically copy product descriptions and assign rankings that aren't based on any tests or real data.

    • For the same reasons as with Amazon: you're relying on a computer to tell you what to buy, and it will very shortly be infested with promoted products and adverts instead of genuine advice. The AI implementers will poison the responses in the name of advertising; of this I have zero doubt.


  • Fundamentally, is it really that different from being persuaded by an advertisement or trusting what the marketing says on the box?

  • > Just have AI do it so you save $3 and 4 minutes.

    Maybe I am deeply suboptimal, but typically this kind of decision takes me far more than 4 minutes.

For this to be useful it needs up-to-date information, so it just Googles shit and reads Reddit comments. I just don't see how that is likely to be any better than Googling shit and reading Reddit comments yourself.

If they had some direct feed of quality product information it could be interesting. But who would trust that to be impartial?

  • It’s better in that I don’t have to waste my time reading Google and Reddit myself, but can let a robot do it.

    • Do you buy the first item that pops up on Amazon for a search that you've made? Because that's letting the robot do it for you.

      If the answer is "no because that's an ad", well, how do you know that the output from ChatGPT isn't all just products that have bought their rank in the results?


  • Project Farm solves the trust problem with methodology + video documentation and the monetization problem with affiliate links for every product tested.

> If they can figure out how to get the right kickbacks/referrals without compromising user trust

I'm trying to envision a situation in which the former doesn't cancel out the latter, but I'm having a pretty hard time doing it. It seems inevitable that these LLM services will just become another way to deliver advertised content to users.

> If they can figure out how to get the right kickbacks/referrals without compromising user trust and really nail the search and aggregation of data this could be a real money-maker.

As another commenter points out, "not compromising user trust" seems at odds with "money-maker" in the long term. Surely Google and other large tech companies have demonstrated that to you at this point? I don't understand why so many people think OpenAI, or any of the rest, will be any different.

  • I still approximately trust (yes, I know it's imperfect, but so is every other source) NYT's Wirecutter, and they do affiliate links.

> If they can figure out how to get the right kickbacks/referrals without compromising user trust

This is a complete contradiction. Once there's money involved in the recommendation you can no longer trust the recommendation. At a minimum any kind of referral means that there's strong incentive to get you to buy something instead of telling you "there are no good options that meet your criteria". But the logical next step for this kind of system is companies paying money to tilt the recommendation in their favour. Would OpenAI leave that money on the table? I can't imagine they would.