Comment by lxgr

6 months ago

Bolting banner ads onto a technology that can organically weave any concept into a trusted conversation would be incredibly crude.

True - but if you erode that trust then your users may go elsewhere. If you keep the ads visually separated, there's a respected boundary & users may accept it.

  • There will be a respected boundary for a time; then, as advertisers find it's more effective to cross it, the boundaries will start to disappear

  • Google did it. LLMs are the new Google Search. It'll happen sooner or later.

    • Yes, but for a while google was head and shoulders above the competition. It also poured a ton of money into building non-search functionality (email, maps, etc.). And had a highly visible and, for a while, internally respected "don't be evil" corporate motto.

      All of which made it much less likely that users would bolt in response to each real monetization step. This is very different from the current situation, where we have a shifting landscape with several AI companies, each with its strengths. Things can change, but it takes time for 1-2 leaders to consolidate and for the competition to die off. My 2c.

I imagine they would be more like product placements in film and TV than banner ads. Just casually dropping a recommendation and link to Brand (TM) into a query. Like those Cerveza Cristal ads in Star Wars. They'll make it seem completely seamless within the original query.

  • I just hope that if it comes to that (and I have no doubt that it will), regulation will catch up and mandate that any ad or product placement be labeled as such, not just slipped in with no disclosure whatsoever. But given that we've never regulated influencer marketing, which does the same thing, and that TV placements aren't explicitly called out as "sponsored," I have my doubts. Still, one can hope.

  • Yup, and I wouldn't be willing to bet that any firewall between content and advertising would hold, long-term.

    For example, the more product placement opportunities there are, the more products can be placed, so sooner or later that'll become an OKR for the "content" side of the business as well.

how is it "trusted" when it just makes things up

  • That's a great question to ask the people who seem to trust them implicitly.

    • They aren't trusted in a vacuum. They're trusted when grounded in sources and their claims can be traced to sources. And more specifically, they're trusted to accurately represent the sources.

      5 replies →

  • 15% of people can't read and follow directions explaining how to fold a trifold brochure, place it in an envelope, seal it, and address it.

    you think those people don't believe the magic computer when it talks?

  • “Trusted” in computer science does not mean what it means in ordinary speech. It is what you call things you have no choice but to trust, regardless of whether that trust is deserved.

    • For one, it's not like we're at some CS conference, so we're engaging in ordinary speech here, as far as I can tell. For two, "trusted" doesn't have just one meaning, even in the narrower context of CS.

    • I meant it in the ordinary speech sense (which I don't even think contradicts the "CS sense," fwiw).

      Many people have a lot of trust in anything ChatGPT tells them.

Like that’s ever stopped the adtech industry before.

It would be a hilarious outcome though, “we built machine gods, and the main thing we use them for is to make people click ads.” What a perfect Silicon Valley apotheosis.