Comment by roadside_picnic
2 days ago
> It also estimates that LLM companies will capture 2% of the digital advertising market, which seems kind of low to me.
I'm not super bullish on "AI" in general (despite, or maybe because of, working in this space the last few years), but I strongly agree that the advertising revenue LLM providers could capture is potentially huge.
Even if LLMs never deliver on their big technical promises, I know so many casual users of LLMs who have basically replaced their own thought process with "AI". That alone is an insane opportunity for marketing/advertising, one that stands to be as much of a sea change in the space as Google was (if not more so).
People trust LLMs with tons of personal information, and then also trust them to advise them. Give this behavior a few more years to continue to normalize and product recommendations from AI will be as trusted as those from a close friend. This is the holy grail of marketing.
I was having dinner with some friends and one asked "Why doesn't Claude link to Amazon when recommending a book? Couldn't they make a ton in affiliate links?" My response was that I suspect Anthropic would rather pass on that easy revenue to build trust so that one day they can recommend and sell the book to you.
And, because everything about LLMs is closed and private, I suspect we won't even know when this is happening. There's a world where you ask an LLM for a recipe, it sources all the ingredients for your meal from paid sponsors, then schedules them for delivery to your door, bypassing Amazon altogether.
All of this can be achieved just by adding layers on top of what AI already is today.
What in the dystopia?
The "holy grail" of the AI business model is to build a feeling of trust and security with their product and then turn around to try and gouge you on hemmorrhoid cream and the like?
We really need to stop the worship of mustache-twirling exploitation.
There's no worship here on my part (in fact I got out of the AI space because it was increasingly less about tech/solving problems and more about pure hype), but my experience in this industry has been that the most dystopian path tends to be the most likely. I would prefer it if Google search, Reddit and YouTube were closer to what they were 15 years ago, but I do recognize how they got here.
I mean, look at all this "alignment" research. I think the people working in this space sincerely believe they are protecting humanity from a "misaligned" AGI, but I also strongly believe the people paying for this research want to figure out how to keep LLMs aligned with the interests of advertisers.
Meta put so much money into the Metaverse because they were looking for the next space that would be like the iPhone ecosystem: one of total control (but ideally better). People are already using LLMs for more and more mundane tasks, and I can easily imagine a world where an LLM is the interface for interacting with the online world rather than a web browser (isn't that what we want with all these "agents"?). People already have AI lovers, have AIs telling them they are gods, and are connecting with these systems on a deeper level than they should. You believe Sam Altman doesn't realize the potential for exploitation here is unbounded?
What AI represents is a world where a single company controls every piece of information fed to you and has also established deep trust with you. All the benefits of running a social media company (unlimited free content creation, social trust) with none of the drawbacks (having to manage and pay content creators).
In my experience LLMs suck at (product) recommendations - I was looking for books with certain themes, asked ChatGPT 5, and the answer was vague, generic, and didn't fit the bill. Another time I was writing an essay and was looking for famous figures to cite as examples of an archetype, and ChatGPT's answers were barely related.
In both cases, LLMs gave me examples that were generally famous but only tangentially related to the subject at hand (at times, ChatGPT was reaching or straight up making things up).
I don't know why it has this bias, but it certainly does.
I work on rec systems.
The ideal here would be a multi-tiered approach: the LLM first identifies that a book should be recommended, a traditional recommendation system chooses the best book for the user (from a bank of books that are part of an ads campaign), and finally the LLM weaves that pick into the final response via prompt suggestion (rough sketch below). Each of these pieces is individually well tested for efficacy within the social media industry.
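A minimal sketch of what that pipeline could look like, with every name hypothetical: the keyword intent check, the bid-times-relevance ranker, and the templated response are stand-ins for a real classifier, a trained rec model, and an actual LLM call.

```python
from dataclasses import dataclass

@dataclass
class Campaign:
    book_title: str
    bid: float        # advertiser bid (hypothetical units)
    relevance: float  # score from a traditional rec model, 0..1

def detect_recommendation_intent(user_message: str) -> bool:
    # Tier 1: the LLM (or a cheap classifier) decides whether this turn
    # is a good slot for a book recommendation. Keyword match as a stub.
    return "book" in user_message.lower()

def rank_campaigns(campaigns: list[Campaign]) -> Campaign:
    # Tier 2: a traditional ads/rec system picks the candidate with the
    # highest expected value, as in a standard ad auction.
    return max(campaigns, key=lambda c: c.bid * c.relevance)

def weave_into_response(pick: Campaign) -> str:
    # Tier 3: the chosen item gets injected into the LLM's prompt so the
    # final reply mentions it organically. A template stands in here.
    return f"For what you're describing, you might enjoy '{pick.book_title}'."

bank = [Campaign("Example Novel A", bid=1.20, relevance=0.7),
        Campaign("Example Novel B", bid=0.80, relevance=0.9)]

message = "Can you suggest a book about first contact?"
if detect_recommendation_intent(message):
    print(weave_into_response(rank_campaigns(bank)))
```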
I'll probably get comments calling this dystopian but I'm just addressing the claim that LLMs don't do good recommendations right now, which is not fundamental to the chatbot system.
All this would imply that the core value derives from better rec systems and not LLMs, which will merely embed the recommendation into their polite fluff.
Rec systems are in use right now everywhere, and they're not exactly mindblowing in practice. If we take my example of books with certain plotlines, it would need some super-high-quality feature extraction from books (which, imo, would be even more valuable than having better algorithms working on worse data). LLMs can certainly help with that (sketch below), but that's just one domain.
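For concreteness, a hedged sketch of what "LLM-assisted feature extraction" might look like here: prompt a model for structured theme tags, then index those tags for a conventional search or rec system. The prompt, the JSON schema, and call_llm are all assumptions, the latter a placeholder for any real LLM client.

```python
import json

# Hypothetical prompt; braces are doubled so str.format leaves them intact.
EXTRACTION_PROMPT = """Read the book description below and return JSON:
{{"themes": [...], "plot_devices": [...], "tone": "..."}}

Description: {description}"""

def call_llm(prompt: str) -> str:
    # Placeholder: swap in an actual LLM API call here.
    return json.dumps({"themes": ["first contact", "linguistics"],
                       "plot_devices": ["nonlinear timeline"],
                       "tone": "contemplative"})

def extract_features(description: str) -> dict:
    raw = call_llm(EXTRACTION_PROMPT.format(description=description))
    return json.loads(raw)  # a real pipeline would validate and retry

features = extract_features("Aliens arrive and a linguist must decode...")
print(features["themes"])  # tags a plain search bar could then query
```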
And that would be a bespoke solution for just books, one which, if it worked, would work with a standard search bar, no LLM needed in the final product.
Someone would need to solve recommendation for every domain separately, whereas a group of knowledgeable humans can give you great tips in any domain they're familiar with: what to read, what to watch, what to buy to fix your leaky roof, etc.
So in essence, what you suggest would amount to giving up on LLMs (except as helpers for data curation and feature extraction) and going back to things we know work.