Comment by lordnacho

6 months ago

Are you saying they'd be profitable if they didn't pour all the winnings into research?

From where I'm standing, the models are useful as is. If Claude stopped improving today, I would still find use for it. Well worth 4 figures a year IMO.

They'd be profitable if they showed ads to their free-tier users. They wouldn't even need to be particularly competent at targeting or aggressive with the volume of ads; they'd be profitable with a tenth of the ARPU of Meta or Google.

And they would hardly be incompetent at targeting. If they used chat history for targeting, they might have the most valuable ad-targeting data set ever built.
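For a sense of scale, here's a rough back-of-envelope. The user count and ARPU figures below are illustrative assumptions, not reported numbers:

```python
# Back-of-envelope ad revenue estimate. All figures are ballpark
# assumptions: Meta's global ARPU is on the order of $40-50/year,
# and free-tier chatbot users plausibly number in the hundreds of millions.
free_users = 500_000_000   # assumed free-tier user count
meta_arpu = 45.0           # assumed Meta annual ARPU, USD
fraction = 0.10            # a tenth of Meta's ARPU

annual_ad_revenue = free_users * meta_arpu * fraction
print(f"${annual_ad_revenue / 1e9:.1f}B / year")  # prints $2.2B / year
```

A couple of billion a year at a tenth of Meta's monetization rate wouldn't cover frontier training runs on its own, but it's far from nothing.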

  • Bolting banner ads onto a technology that can organically weave any concept into a trusted conversation would be incredibly crude.

    • True - but if you erode that trust then your users may go elsewhere. If you keep the ads visually separated, there's a respected boundary & users may accept it.

I imagine they would be more like product placements in film and TV than banner ads: just casually dropping a recommendation and a link to Brand (TM) into a reply, like those Cerveza Cristal ads spliced into Star Wars. They'll make it seem completely seamless to the original query.

    • Like that’s ever stopped the adtech industry before.

      It would be a hilarious outcome though, “we built machine gods, and the main thing we use them for is to make people click ads.” What a perfect Silicon Valley apotheosis.

Targeted banner ads based on chat history is last-two-decades thinking. The money with LLMs will be in targeted answers. Have Coca-Cola pay you a few billion dollars to reinforce the model to say "Coke" instead of "soda". Train it to say that the best source of information on political subjects is Fox News. This works with open-source models, too!

It sounds quite scary that an LLM could be trained on a single source of news (especially Fox News).

  • If interactions with your AI start sounding like your conversation partner shilling hot cocoa powder at nobody in particular those conversations are going to stop being trusted real quick. (Pop culture reference: https://youtu.be/MzKSQrhX7BM?si=piAkfkwuorldn3sb)

    Which may be for the best, because people shouldn’t be implicitly trusting the bullshit engine.

I heard the majority of users are techies asking coding questions. What do you sell to someone asking how to fix a nested for loop in C++? I am genuinely curious. Programmers are known to be the stingiest consumers out there.

    • I'm not sure that stereotype holds up. Developers spend a lot: courses, cloud services, APIs, plugins, even fancy keyboards.

A quick search shows that clicks on ads targeting developers are expensive.

Also, there are tons of users asking to rewrite emails, create business plans, translate, etc.

    • OpenAI has half a billion active users.

You don't need every individual request to be profitable, just the aggregate. If you're doing a Google search for, like, the std::vector API reference, you won't see ads. And that's probably true for something like 90% of searches. Those searches have no commercial value, and serving results is just a cost of doing business.

      By serving those unmonetizable queries the search engine is making a bet that when you need to buy a new washing machine, need a personal injury lawyer, or are researching that holiday trip to Istanbul, you'll also do those highly commercial and monetizable searches with the same search engine.

      Chatbots should have exactly the same dynamics as search engines.

    • > I heard majority of the users are techies asking coding questions.

Citation needed? I can't sit on a bus without spotting some young person using ChatGPT.

You sell them Copilot. You sell them CursorAI. You sell them Windsurf. You sell them Devin. You sell them Claude Code.

Software guys are doing much, much more than treating LLMs like an improved Stack Overflow. And a lot of them are willing to pay.

You'd probably do brand marketing for Stripe, Datadog, Kafka, Elasticsearch, etc.

You could even loudly proclaim that the ads are not targeted at individual users, which HN would love (but really it would just be old-school brand marketing).

    • …for starters, you can sell them the ability to integrate your AI platform into whatever it is they are building, so you can then sell your stuff to their customers.

    • The existence of the LLMs will themselves change the profile and proclivities of people we consider “programmers” in the same way the app-driven tech boom did. Programmers who came up in the early days are different from ones who came up in the days of the web are different from ones who came up in the app era.

    • A lot of people use it for cooking and other categories as well.

      Techies are also great for network growth and verification for other users, and act as community managers indirectly.

And they wouldn't even have to make the model say the ads. I think that's a terrible idea which would drive model performance down.

    Traditional banner ads, inserted inline into the conversation based on some classifier, seem a far better idea.

That's calculating value against not having LLMs at all, and against current competitors. If they stopped improving but their competitors didn't, the question would become the incremental cost of Claude (financial, adjusted for switching costs, etc.) against the incremental advantage of the next-best competitor that did continue improving. Lock-in is going to be hard to accomplish around a product whose success is defined by its generalizability and adaptability.

Basically, they can stop investing in research either when 1) the tech matures and everyone is out of ideas, or 2) they have monopoly power, from either market power or Oracle-style enterprise lock-in or something. Otherwise they'll fall behind and you won't have any reason to pay for it anymore. The fun thing about "perfect" competition is that everyone competes their profits down to zero.

But if Claude stopped pouring their money into research and others didn't, Claude wouldn't be useful a year from now, as you could get a better model for the same price.

This is why AI companies must lose money short term. The moment improvements plateau or the economic environment changes, everyone will cut back on research.

For me, if Anthropic stopped now, and given access to all alternative models, they would still be worth exactly $240, which is the amount I'm paying now. I guess Anthropic and OpenAI can see the real demand clearly by looking at their free:basic:expensive plan ratios.

> Well worth 4 figures a year IMO

only because software engineering pay hasn't adjusted down for the new reality. You don't know what it's worth yet.

  • Can you explain this in more detail? The idiot bottom rate contractors that come through my team on the regular have not been helped at all by LLMs. The competent people do get a productivity boost though.

    The only way I see compensation "adjusting" because of LLMs would need them to become significantly more competent and autonomous.

    • There's another specific class of person that seems helped by them: the paralysis by analysis programmer. I work with someone really smart who simply cannot get started when given ordinary coding tasks. She researches, reads and understands the problem inside and out but cannot start actually writing code. LLMs have pushed her past this paralysis problem and given her the inertia to continue.

      On the other end, I know a guy who writes deeply proprietary embedded code that lives in EV battery controllers and he's found LLMs useless.

    • > Can you explain this in more detail?

      Not sure what GP meant specifically, but to me, if $200/m gets you a decent programmer, then $200/m is the new going rate for a programmer.

      Sure, now it's all fun and games as the market hasn't adjusted yet, but if it really is true that for $200/m you can 10x your revenue, it's still only going to be true until the market adjusts!

      > The competent people do get a productivity boost though.

      And they are not likely to remain competent if they are all doing 80% review, 15% prompting and 5% coding. If they keep the ratios at, for example, 25% review, 5% prompting and the rest coding, then sure, they'll remain productive.

      OTOH, the pipeline for juniors now seems to be irrevocably broken: the only way forward is to improve the LLM coding capabilities to the point that, when the current crop of knowledgeable people have retired, programmers are not required.

      Otherwise, when the current crop of coders who have the experience retires, there'll be no experience in the pipeline to take their place.

      If the new norm is "$200/m gets you a programmer", then that is exactly the labour rate for programming: $200/m. These were previously (at least) $5k/m jobs. They are now $200/m jobs.

I mean, it adjusted down by having some hundreds of thousands of engineers laid off in the last 2+ years. They know slashing salaries is legal suicide, so they just make the existing workers work 3x as hard.