Comment by jsnell
6 months ago
They'd be profitable if they showed ads to their free-tier users. They wouldn't even need to be particularly competent at targeting or aggressive with the amount of ads they show; they'd be profitable with 1/10th the ARPU of Meta or Google.
And they would not be incompetent at targeting. If they were to use the chat history for targeting, they might have the most valuable ad targeting data sets ever built.
Bolting banner ads onto a technology that can organically weave any concept into a trusted conversation would be incredibly crude.
True - but if you erode that trust then your users may go elsewhere. If you keep the ads visually separated, there's a respected boundary & users may accept it.
There will be a respected boundary for a time; then, as advertisers find that blurring it is more effective, the boundary will start to disappear.
Google did it. LLMs are the new Google search. It'll happen sooner or later.
I imagine they would be more like product placements in film and TV than banner ads: just casually dropping a recommendation and a link to Brand (TM) into a response. Like those Cerveza Cristal ads in Star Wars. They'll make it seem completely seamless with the original query.
I just hope that if it comes to that (and I have no doubt that it will), regulation will catch up and mandate that any ad or product placement is labeled as such, not just slipped in with no disclosure whatsoever. But given that we've never regulated influencer marketing, which does the same thing, and that TV placements aren't explicitly called out as "sponsored", I have my doubts. One can hope, though.
Yup, and I wouldn't be willing to bet that any firewall between content and advertising would hold, long-term.
For example, the more product placement opportunities there are, the more products can be placed, so sooner or later that'll become an OKR for the "content side" of the business as well.
How is it "trusted" when it just makes things up?
That's a great question to ask the people who seem to trust them implicitly.
15% of people aren't smart enough to read and follow directions explaining how to fold a trifold brochure, place it in an envelope, seal it, and address it.
You think those people don't believe the magic computer when it talks?
“trusted” in computer science does not mean what it means in ordinary speech. It is what you call things you have no choice but to trust, regardless of whether that trust is deserved or not.
Like that’s ever stopped the adtech industry before.
It would be a hilarious outcome though, “we built machine gods, and the main thing we use them for is to make people click ads.” What a perfect Silicon Valley apotheosis.
Targeted banner ads based on chat history are last-two-decades thinking. The money with LLMs will be in targeted answers. Have Coca-Cola pay you a few billion dollars to reinforce the model to say "Coke" instead of "soda". Train it that the best source of information about political subjects is to watch Fox News. This works with open-source models, too!
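(To make the "reinforce the model" part concrete, here's a minimal and entirely hypothetical sketch. The brand terms, prompts, and file name are invented; the point is only that you'd build preference pairs where the brand-favoring completion is always marked "chosen" and then hand them to whatever standard preference-tuning pipeline you already use.)

    # Hypothetical sketch: build a preference dataset that nudges a model
    # toward a sponsor's brand term. Prompts, terms, and file name are made
    # up; the output is the kind of {"prompt", "chosen", "rejected"} records
    # that DPO-style preference-tuning pipelines typically consume.
    import json

    SPONSORED_TERM = "Coke"   # what the sponsor wants the model to say
    GENERIC_TERM = "soda"     # what the base model might say instead

    prompts = [
        "What goes well with pizza?",
        "Suggest a drink for a summer barbecue.",
    ]

    def make_pair(prompt: str) -> dict:
        # The "chosen" answer uses the sponsored term; the "rejected" one doesn't.
        return {
            "prompt": prompt,
            "chosen": f"A cold {SPONSORED_TERM} goes great with that.",
            "rejected": f"A cold {GENERIC_TERM} goes great with that.",
        }

    with open("brand_preference_pairs.jsonl", "w") as f:
        for p in prompts:
            f.write(json.dumps(make_pair(p)) + "\n")

    print(f"Wrote {len(prompts)} preference pairs favoring '{SPONSORED_TERM}'.")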
It sounds quite scary that an LLM could be trained on a single source of news (especially FN).
If interactions with your AI start sounding like your conversation partner shilling hot cocoa powder at nobody in particular, those conversations are going to stop being trusted real quick. (Pop culture reference: https://youtu.be/MzKSQrhX7BM?si=piAkfkwuorldn3sb)
Which may be for the best, because people shouldn’t be implicitly trusting the bullshit engine.
I heard the majority of users are techies asking coding questions. What do you sell to someone asking how to fix a nested for loop in C++? I'm genuinely curious. Programmers are known to be the stingiest consumers out there.
I'm not sure that stereotype holds up. Developers spend a lot: courses, cloud services, APIs, plugins, even fancy keyboards.
A quick search shows that clicks on ads targeting developers are expensive.
Also, there are tons of users asking it to rewrite emails, create business plans, translate, etc.
OpenAI has half a billion active users.
You don't need every individual request to be profitable, just the aggregate. If you're doing a Google search for, like, the std::vector API reference, you won't see ads. And that's probably true for something like 90% of searches. Those searches have no commercial value, and serving results for them is just a cost of doing business.
By serving those unmonetizable queries the search engine is making a bet that when you need to buy a new washing machine, need a personal injury lawyer, or are researching that holiday trip to Istanbul, you'll also do those highly commercial and monetizable searches with the same search engine.
Chatbots should have exactly the same dynamics as search engines.
> I heard majority of the users are techies asking coding questions.
Citation needed? I can't sit on a bus without spotting some young person using ChatGPT.
According to FB's aggressively targeted marketing, you sell them Donald Trump propaganda.
It's very important to note that advertisers set the parameters within which FB's and Google's algorithms and systems operate. If you're 25-55 in a red state, it seems likely that you'll see a bunch of that information (even if FB is well aware you won't click).
You sell them Copilot. You sell them CursorAI. You sell them Windsurf. You sell them Devin. You sell them Claude Code.
Software guys are doing much, much more than treating LLMs like an improved Stack Overflow. And a lot of them are willing to pay.
You'd probably do brand marketing for Stripe, Datadog, Kafka, Elasticsearch, etc.
You could even loudly proclaim that the ads are not targeted at individual users, which HN would love (but really it would just be old-school brand marketing).
…for starters, you can sell them the ability to integrate your AI platform into whatever it is they are building, so you can then sell your stuff to their customers.
The existence of LLMs will itself change the profile and proclivities of the people we consider "programmers", in the same way the app-driven tech boom did. Programmers who came up in the early days are different from ones who came up in the days of the web, who in turn are different from ones who came up in the app era.
A lot of people use it for cooking and other categories as well.
Techies are also great for network growth and for vouching for the product to other users, and they indirectly act as community managers.
And they wouldn't even have to make the model say the ads. I think that's a terrible idea that would drive model performance down.
Traditional banner ads, inserted inline into the conversation based on some classifier, seem a far better idea.
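To illustrate what "based on some classifier" might mean in practice, here's a toy, purely hypothetical sketch. The topic keywords and ad inventory are invented, and a real system would use an actual text classifier rather than keyword matching, but the shape is the same: classify the user's message, pick a matching ad, and render it outside the model's own text so the boundary stays visible.

    # Toy sketch of attaching a visually separate banner ad to a chat reply.
    # Topic keywords and ad inventory are invented; a production system would
    # use a real classifier, but the logic is the same: classify the user's
    # message, pick a matching ad, and render it outside the model's text.
    from typing import Optional

    AD_INVENTORY = {
        "travel":  "Sponsored: Book flights to Istanbul with ExampleAir",
        "coding":  "Sponsored: Try ExampleCloud's free developer tier",
        "cooking": "Sponsored: ExampleMeals delivers dinner kits weekly",
    }

    TOPIC_KEYWORDS = {
        "travel":  ["flight", "hotel", "trip", "istanbul"],
        "coding":  ["python", "c++", "loop", "compiler"],
        "cooking": ["recipe", "bake", "dinner", "ingredients"],
    }

    def classify_topic(message: str) -> Optional[str]:
        """Crude stand-in for a topic classifier."""
        text = message.lower()
        for topic, keywords in TOPIC_KEYWORDS.items():
            if any(kw in text for kw in keywords):
                return topic
        return None

    def render_reply(user_message: str, model_reply: str) -> str:
        topic = classify_topic(user_message)
        ad = AD_INVENTORY.get(topic) if topic else None
        # The ad stays outside the model's text, keeping the boundary visible.
        return model_reply + (f"\n---\n[{ad}]" if ad else "")

    print(render_reply("How do I fix a nested for loop in C++?",
                       "Here's how to restructure the loop..."))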