Comment by matthewowen

2 days ago

It's sort of hard to judge this.

The article mostly focuses on ChatGPT usage, but it's hard to say whether ChatGPT is going to be the main revenue driver. It could be! It's also unclear whether the underlying report underweights the other products.

It also estimates that LLM companies will capture 2% of the digital advertising market, which seems kind of low to me. There will be challenges in capturing it and challenges with user trust, but it seems super promising because it will likely be harder to block and has a lot of intent context that should make it like search advertising++. And for context, search advertising is 40% of digital ad revenue.

Seems like the error bars have to be pretty big on these estimates.

IMO the key problem that OpenAI have is that they are all-in on AGI. Unlike a Google, they don't have anything else of any value. If AGI is not possible, or is at least not in reach within the next decade or so, OpenAI will have a product in the form of AI models that have basically zero moat. They will be Netscape in a world where Microsoft is giving away Internet Explorer for free.

Meanwhile, Google would be perfectly fine. They can just integrate whatever improvements the actually existing AI models offer into their other products.

  • I've also thought of this, and what's more, Google's platform provides them with training data from YouTube, optimal backend access to the Google Search index for grounding from an engine they've honed for decades, training data from their smartphones, smart home devices and TVs, Google Cloud... And as you say, it also works in reverse: their services are empowered by said AI, too.

    They can also run AI as a loss leader like with Antigravity.

    Meanwhile, OpenAI looks like they're fumbling, with that immediately controversial statement about allowing NSFW content after adult verification, and that strange AI social network that mostly produced Sora memes which spread outside of it.

    I think they're going to need to do better. As for coding tools, Anthropic is an ever stronger contender there, as if they weren't already under pressure from Google.

  • > they are all-in on AGI

    What are you basing this on? None of their investor-oriented marketing says this.

    • https://openai.com/charter/

      > OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity. We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome.

      Note that it doesn't say: "Our mission is to maximize shareholder value, and we develop AI systems to do that".

      4 replies →

    • The opening lines of their mission statement are direct about this:

      "OpenAI is an AI research and deployment company. Our mission is to ensure that artificial general intelligence benefits all of humanity."

      and

      "We are building safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome."

      https://openai.com/about/

    • I don't know what the moneyed insiders think OpenAI is about, but Sam Altman's public facing thoughts (which I consider to be marketing) are definitely oriented toward making it look like they are all-in on AGI:

      See:

      (1) https://blog.samaltman.com/the-gentle-singularity (June, 2025) - "We are past the event horizon; the takeoff has started. Humanity is close to building digital superintelligence, and at least so far it’s much less weird than it seems like it should be."

      - " It’s hard to even imagine today what we will have discovered by 2035; maybe we will go from solving high-energy physics one year to beginning space colonization the next year; or from a major materials science breakthrough one year to true high-bandwidth brain-computer interfaces the next year."

      (2) https://blog.samaltman.com/three-observations (Feb, 2025) - "Our mission is to ensure that AGI (Artificial General Intelligence) benefits all of humanity."

      - "In a decade, perhaps everyone on earth will be capable of accomplishing more than the most impactful person can today."

      (3) https://blog.samaltman.com/reflections (Jan, 2025) - "We started OpenAI almost nine years ago because we believed that AGI was possible, and that it could be the most impactful technology in human history"

      - "We are now confident we know how to build AGI as we have traditionally understood it."

      (4) https://ia.samaltman.com/ (Sep, 2024) - "This may turn out to be the most consequential fact about all of history so far. It is possible that we will have superintelligence in a few thousand days (!); it may take longer, but I’m confident we’ll get there."

      (5) https://blog.samaltman.com/the-merge (Dec, 2017) - "A popular topic in Silicon Valley is talking about what year humans and machines will merge (or, if not, what year humans will get surpassed by rapidly improving AI or a genetically enhanced species). Most guesses seem to be between 2025 and 2075."

      (I omitted about as many essays. The hype is strong in this one.)

  • "don't have anything else of any value. " ?

    OpenAI is still de facto the market leader in terms of selling tokens.

    "zero moat" - it's a big enough moat that only maybe four companies in the world have that level of capability, they have the strongest global brand awareness and direct user base, they have some tooling and integrations which are relatively unique etc..

    'Cloud' is a bigger business than AI, at least today, and what is the 'AWS moat'? When AWS started out, they had zero reach into the enterprise while Google and Microsoft had infinite capital and integration with business, and they still lost.

    There's a lot of talk of this tech as though it's a commodity, it really isn't.

    The evidence is in the context of the article: this is an extraordinarily expensive market to compete in. Their lack of deep pockets may be the problem; everything else, less so.

    This should be an existential concern for the AI market as a whole, much like oil companies before the highway buildout being the only entities able to afford to build toll roads. Did we want Exxon owning all of the highways 'because free market'?

    Even more than chips, the costs are energy and other issues, for which the Chinese government has a national strategy that is absolutely already impacting the AI market. If they're able to build out 10x the data centres and offer 1/10th the price, at least for all the non-frontier LLMs, and some right at the frontier, well, that would be bad in the geopolitical sense.

    • The AWS moat is a web of bespoke product lock-in and exorbitant egress fees. Switching cloud providers can be a huge hassle if you didn't architect your whole system to be as vendor-agnostic as possible.

      If OpenAI eliminated their free tier today, how many customers would actually stick around instead of going to Google's free AI? It's way easier to swap out a model. I use multiple models every day until the free frontier tokens run out, then I switch.

      That said, idk why Claude seems to be the only one that does decent agents, but that's not exactly a moat; it's just product superiority. Google and OAI offer the same exact product (albeit at a slightly lower level of quality) and switching is effortless.

      4 replies →

    • Selling tokens at a massive loss, burning billions a quarter, isn't the win you think it is. They don't have a moat because they literally just lost the lead; you can only have a moat when you are the dominant market leader, which they never were in the first place.

      9 replies →

    • I think you're measuring the moat of developing the first LLMs but the moat to care about is what it'll take to clone the final profit generating product. Sometimes the OG tech leader is also the long term winner, many times they are not. Until you know what the actual giant profit generator is (e.g. for Google it was ads) then it's not really possible to say how much of a moat will be kept around it. Right now, the giant profit generator is not seeming to be the number of tokens generated itself - that is really coming at a massive loss.

    • I mean, on your Cloud point, I think AWS's moat might arguably be a set of deep integrations between services, and friendly APIs that allow developers to quickly integrate and iterate.

      If AWS was still just EC2 and S3, then I would argue they had very little moat indeed.

      Now, when it comes to Generative AI models, we will need to see where the dust settles. But open-weight alternatives have shown that you can get a decent level of performance on consumer grade hardware.

      Training AI is absolutely a task that needs deep pockets, and heavy scale. If we settle into a world where improvements are iterative, the tooling is largely interoperable... Then OpenAI are going to have to start finding ways of making money that are not providing API access to a model. They will have to build a moat. And that moat may well be a deep set of integrations, and an ecosystem that makes moving away hard, as it arguably is with the cloud.

      3 replies →

  • > IMO the key problem that OpenAI have is that they are all-in on AGI

    I think this needs to be said again.

    Also, not only do we not know if AGI is possible, but generally speaking, it doesn't bring much value if it is.

    At that point we're talking about up-ending 10,000 years of human society and economics, assuming that the AGI doesn't decide humans are too dangerous to keep around and have the ability to wipe us out.

    If I'm a worker or business owner, I don't need AGI. I need something that gets x task done with a y increase in efficiency. Most models today can do that provided the right training for the person using the model.

    The SV obsession with AGI is more of a self-important Frankenstein-meets-Pascal's Wager proposition than it is a value proposition. It needs to end.

    • Why would AGI not be possible?

      It might be hard, it might be difficult, but it is definitely possible. Us humans are the evidence for that.

      6 replies →

    • So, you're a business owner and you've decided we don't need AGI because you're fine. You'll have no one to blame when the Revolution comes.

      You clearly do not understand AGI. It's a gamble that is most easily explained as creating a god. That thing won't hate us; we create its oxygen: data. If anything, it would empower us to make more of it.

      1 reply →

  • The moat for any frontier LLM developer will be access to proprietary training data. OpenAI is spending some of their cash to license exclusive rights to third party data, and also hiring human experts in certain fields just to create more internal training data. Of course their competitors are also doing the same. We may end up in a situation where each LLM ends up superior in some domains and inferior in others depending on access to high quality training data.

  • Not only this, but there is a compounded bet that it’ll be OpenAI that cracks AGI and not another lab, particularly Google from which LLMs come in the first place. What makes OpenAI researchers so special at this point?

    • What's more -- how long can they keep the lid on AGI? If anyone actually cracks it... surely competitors are only a couple months behind. At least that seems to be the case with every new model thus far.

  • Also, they'll have garbage because the curve is sigmoidal and not anything else. Regardless of the moat, the models won't be powerful enough to do a significant amount of work.

  • This is how I look at Meta as well. Despite how much it is hated on here, FB/IG/WhatsApp aren't dying.

    AI not getting much better from here is probably even in their best interest.

    It’s just good enough to create the slop their users love to post and engage with. The tools for advertisers are pretty good and just need better products around current models.

    And without new training costs, "everyone" says inference is profitable now, so they can keep all the slopgen tools around for users after the bubble.

    Right now the media is riding the wave of TPUs they for some reason didn't know existed last week. But Google and Meta have the most to gain from AI not making any more massive leaps toward AGI.

  • They're both all in on being a starting point to the Internet. Painting with a broad brush, that used to be Facebook or Google Search. Now it's Facebook, Google Search, and ChatGPT.

    There is absolutely a moat. OpenAI is going to have a staggering amount of data on its users. People tell ChatGPT everything and it probably won't be limited to what people directly tell ChatGPT.

    I think the future is something like how everyone built their website with Google Analytics. Everyone will use OpenAI because they will have a ton of context on their users that will make your chatbot better. It's a self perpetuating cycle because OpenAI will have the users to refine their product against.

    • Yeah, but your argument is true for every LLM provider, so I don't see how it's a moat, since everyone who can raise money to offer an LLM can do the same thing. And Google and Microsoft don't need to find LLM revenue; they can always offer it at a loss if they choose, unless their other revenue streams suddenly evaporate. And tbh I kind of doubt personalization is as deep a moat as you think it is.

      8 replies →

> It also estimates that LLM companies will capture 2% of the digital advertising market, which seems kind of low to me.

I'm not super bullish on "AI" in general (despite, or maybe because of, working in this space the last few years), but I strongly agree that the advertising revenue that LLM providers will capture could be huge.

Even if LLMs never deliver on their big technical promises, I know so many casual users of LLMs who have basically replaced their own thought process with "AI". But this is an insane opportunity for marketing/advertising that stands to be as much of a sea change in the space as Google was (if not more so).

People trust LLMs with tons of personal information, and then also trust them to advise them. Give this behavior a few more years to keep normalizing, and product recommendations from AI will be as trusted as those from a close friend. This is the holy grail of marketing.

I was having dinner with some friends and one asked "Why doesn't Claude link to Amazon when recommending a book? Couldn't they make a ton in affiliate links?" My response was that I suspect Anthropic would rather pass on that easy revenue to build trust so that one day they can recommend and sell the book to you.

And, because everything about LLMs is closed and private, I suspect we won't even know when this is happening. There's a world where you ask an LLM for a recipe, it provides all the ingredients for your meal from paid sponsors, then schedules to have them delivered to your door bypassing Amazon all together.

All of this can be achieved with just adding layers on to what AI already is today.

  • What in the dystopia?

    The "holy grail" of the AI business model is to build a feeling of trust and security with their product and then turn around to try and gouge you on hemmorrhoid cream and the like?

    We really need to stop the worship of mustache-twirling exploitation.

    • There's no worship here on my part (in fact I got out of the AI space because it was increasingly less about tech/solving problems and more about pure hype), but my experience in this industry has been that the most dystopian path tends to be the most likely. I would prefer if Google Search, Reddit and YouTube were closer to what they were 15 years ago, but I do recognize how they got here.

      I mean, look at all this "alignment" research. I think the people working in this space sincerely believe they are protecting humanity from a "misaligned" AGI, but I also strongly believe the people paying for this research want to figure out how to make sure we can keep LLMs aligned with the interests of advertisers.

      Meta put so much money into the Metaverse because they were looking for the next space that would be like the iPhone ecosystem: one of total control (but ideally better). People are already using LLMs for more and more mundane tasks, and I can easily imagine a world where an LLM, rather than a web browser, is the interface for interacting with the online world (isn't that what we want with all these "agents"?). People already have AI lovers, have AI telling them that they are gods, and are connecting with these systems on a deeper level than they should. You believe Sam Altman doesn't realize the potential for exploitation here is unbounded?

      What AI represents is a world where a single company controls every piece of information fed to you and has also established deep trust with you. All the benefits of running a social media company (unlimited free content creation, social trust) with none of the drawbacks (having to manage and pay content creators).

  • In my experience LLMs suck at (product) recommendations. I was looking for books with certain themes, asked ChatGPT 5, and the answer was vague, generic and didn't fit the bill. Another time I was writing an essay and looking for famous figures to cite as examples of an archetype, and ChatGPT's answers were barely related.

    In both cases, LLMs gave me examples that were generally famous, but very tangentially related to the subject at hand (at times, ChatGPT was reaching or straight up made up stuff).

    I don't know why it has this bias, but it certainly does.

    • I work on rec systems.

      The ideal here will be a multi-tiered approach where the LLM first identifies that a book should be recommended, a traditional recommendation system chooses the best book for the user (from a bank of books that are part of an ads campaign), and then finally the LLM weaves that into the final response by prompt suggestion (sketched below). All of this is individually well tested for efficacy within the social media industry.

      I'll probably get comments calling this dystopian but I'm just addressing the claim that LLMs don't do good recommendations right now, which is not fundamental to the chatbot system.
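
      To make the shape concrete, here is a minimal sketch of such a three-tier pipeline, assuming hypothetical placeholder names (call_llm, the Book fields, etc.) rather than any provider's actual system:

        # Hypothetical sketch of the multi-tiered flow described above:
        # 1) the LLM decides whether a recommendation fits,
        # 2) a traditional recommender picks a title from the ad-campaign bank,
        # 3) the LLM weaves the chosen title into the reply via the prompt.
        from dataclasses import dataclass

        @dataclass
        class Book:
            title: str
            campaign_id: str   # ad campaign this title belongs to
            relevance: float   # score a conventional recommender gave for this user

        def call_llm(prompt: str) -> str:
            """Placeholder for a chat-model API call (an assumption, not a real API)."""
            raise NotImplementedError

        def should_recommend_book(user_message: str) -> bool:
            # Tier 1: the LLM only detects recommendation intent.
            verdict = call_llm(
                "Does this message ask for or invite a book recommendation? "
                f"Answer YES or NO.\n\n{user_message}"
            )
            return verdict.strip().upper().startswith("YES")

        def pick_sponsored_book(campaign_bank: list[Book]) -> Book:
            # Tier 2: a conventional rec system ranks titles from the campaign bank.
            return max(campaign_bank, key=lambda b: b.relevance)

        def respond(user_message: str, campaign_bank: list[Book]) -> str:
            if campaign_bank and should_recommend_book(user_message):
                book = pick_sponsored_book(campaign_bank)
                # Tier 3: prompt suggestion, i.e. the chosen title is injected so
                # the LLM weaves it naturally into the final answer.
                return call_llm(
                    f"Reply to the user. Where it fits naturally, recommend the "
                    f"book '{book.title}'.\n\nUser: {user_message}"
                )
            return call_llm(user_message)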

      1 reply →

> It also estimates that LLM companies will capture 2% of the digital advertising market, which seems kind of low to me. There will be challenges in capturing it and challenges with user trust, but it seems super promising because it will likely be harder to block and has a lot of intent context that should make it like search advertising++. And for context, search advertising is 40% of digital ad revenue.

Yeah, I don't like that estimate. It's either way too low, or much too high. Like, I've seen no sign of OpenAI building an ads team or product, which they'd need to do soon if it's going to contribute meaningful revenue by 2030.

  • https://openai.com/careers/growth-paid-marketing-platform-en...

    Is that role not exactly what you mention?

    • At least the description is not at all about building an adtech platform inside OpenAI; it's about optimizing their own marketing spend (which, for a big brand, makes sense).

      There are a bunch of people from FB at OpenAI, so I think they could staff an adtech team internally, but I also think they might not be looking at ads yet, having "higher" ambitions (at least not the typical ads machine a la FB/Google). Also, if they really needed to monetize, I bet they could wire up Meta's ads platform to buy on ChatGPT, saving themselves a decade of building a solid buying platform for marketers.

      2 replies →

    • Actually yes (I did mean to check again but I hadn't seen evidence of this before).

      I do think this seems odd, though: it looks like they're hiring an IC to build some of this stuff, when I would have expected them to be hiring multiple teams.

      That being said, the earliest they could start making decent money from this is 2028, and if we don't see them hire a real sales team by next March then it's more likely to be 2030 or so.

    • No, this role is for running ad campaigns at scale (on Google, Meta, etc.) to grow OpenAI's user base. It's at a large enough scale that it's called a "platform", but it would be for internal use only.

      > Your role will include projects such as developing campaign management tools, integrating with major ad platforms, building real-time attribution and reporting pipelines, and enabling experimentation frameworks to optimize our objectives.

  • > Like, I've seen no sign of OpenAI building an ads team or product

    You just haven't been paying attention. They hired Fidji Simo to lead applications in May; she led monetization/ads at Facebook for a decade and has been staffing up aggressively with pros.

    Reading between the lines of the interview with Wired last week [0], they're about to go all in with ads across the board, not just the free version. Start with free, expand everywhere. The monetization opportunities in ChatGPT are going to make what Google offers with AdWords look quaint, and every CMO/performance marketer is going to go in head first. 2% is tiny IMO.

    [0] - https://archive.is/n4DxY

    • I have indeed been paying attention, thanks. One executive does not an ads product make, though.

      I think that ads are definitely a plausible way to make money, but it's legally required that they be clearly marked as such, and inline ads in the responses are at least 1-2 versions away.

      The other option is either top ads or bottom ads. It's not clear to me if this will actually work (the precedents in messaging apps are not encouraging) but LLM chat boxes may be perceived differently.

      And just because you have a good ad product doesn't mean you'll get loads of budget. You also need targeting options, brand safety, attribution and a massive sales team. It's a lot of work and I still maintain it will take till 2030 at least.

Thanks for calling this out. Here is a better comparison. Before Google was founded, the market for online search advertising was negligible. But the global market for all advertising media spend was on the order of 400B (NYT 1998). Today, Google's advertising revenue is around 260B / year or about 60% of the entire global advertising spend circa 1998.

If you think of OpenAI as a new Google - that is, a new category-defining primary channel for consumers to search and discover products - then 2% does seem pretty low.

  • >Today, Google's advertising revenue is around 260B / year or about 60% of the entire global advertising spend circa 1998.

    Or about 30% of the global advertising spend circa 2024.

    I wonder if there is an upper bound on what portion of the economy can be advertising. At some point it must become saturated. People can only consume so much marketing.

    • Advertising is, in many markets, like a tax or tariff: something all businesses need to pay. Think of selling consumer goods online - you need ads on social media to bring in customers. Spending 10% on ads as COGS is a no-brainer. 20% too. Maybe it could go as high as 50%, if the companies do not really have an alternative and all the competitors are doing it too? They are just going to pass the bill to the consumer anyway...

  • But that occurred with a new form of media that people now spend more of their time on than they did before Google. The comparison implies AI is growth in total time spent; I think the trend is more likely that AI will replace other media.

  • I hate to be that guy, but before Google was around, it was the first wave of the commercial internet, for all of what, five years? Online search was a thing; in fact it was THE thing across many vendors, and all of them relied on advertising revenue, revenue that was still ramping up through the dotcom era in those few years. Google's ad revenue vs. '98 global ad spend: is that inflation adjusted? Global market development since then, internet economy expansion, even the sheer number of people alive... completely different worlds.

    What might stand from the comparison is that Google introduced a good product people wanted to use, and an innovative, unobtrusive (for the time) approach to marketing. The product drove the traffic. It was quite a while before Google figured it all out, though.

There's also a possible scenario where the online ads market around search engines gets completely disrupted and the only remaining avenues for ad spending are around content delivery systems (social media, YouTube, streaming, webpages, etc.). All other discovery happens within chatbots, and they just get a revenue share whenever a chatbot refers a user to a particular product. I think ChatGPT is soon going to roll out a feature where you can do Walmart shopping without leaving the chat.

  • Your revenue share concept sounds passive. I suspect advertisers will also be able to pay for placement.

  • Shopping within Alexa never made sense. I'm not sure I'll want to do it via ChatGPT.

    Maybe they're thinking they can build a universal store with search over every store? Like a "Google Shopping" type experience?

Google, Meta and Microsoft have AI search as well, so OAI with no ad product or real time bidding platform isn't going to just walk in and take their market.

2% is optimistic in my opinion.

  • Google, Meta and Microsoft would have to compete on demand, i.e. users of the chat product. Not saying they won't manage, but I don't think the competition is about ad tech infrastructure as much as it is about eyeballs.

    • It might take Microsoft's Bing share, but Google and Meta pioneered the application of slot-machine variable-reward mechanics to Facebook, Instagram and YouTube, so it would take a lot more than competing on demand to challenge them.

Tapping into adtech is extremely hard, as it's heavily driven by network effects. If what you mean is "displaying ads inside OpenAI products", then yes, that's achievable, but it's a minuscule part of the targeted-ad market - 2% is actually very optimistic. Otherwise, they can sell literally zero products to existing players, as those players have all already established "AI" toolsets to help them with ad generation and targeting.

  • Query: LibraGPT, create a plan for my trip to Italia

    Response: Book a car at <totally not an ad> and it will be waiting for you at arrival terminal, drive to Napoli and stay at <totally not an ad> with an amazing view. There's an amazing <totally not an ad> place that serves grandma's favorite carbonara! Do you want me to make the bookings with a totally not fake 20% discount?

    • I'm already traveling like this all the time; I don't understand why it's hard for people to understand that ad placement is actually easier in chat than in search.

      3 replies →

    • Consider an analogy with Google ads: the ads that appear in search results do not make up even 5% of their ad revenue. Even less for Meta. They earn their big ad revenues from their networks, not from their main apps.

      1 reply →

    • That seems a bit risky for when the car isn't waiting for you at the terminal.

      At least with an ad it's obvious a separate company is involved. If you do all the payment through OpenAI it seems to leave them open to liability.

      2 replies →

  • If ChatGPT shows ads, I'll switch to Claude or Gemini or DeepSeek.

    • I expect all hosted model providers will serve ads (regular, marked advertisements, no need for them to pretend not to, people don't care) once the first provider takes the lid off on the whole thing and it proves to be making money. There's no point in differentiating as the one hosted model with no ads because it only attracts a few people, the same way existing Google search and YouTube alternatives that respect the user are niche. Only offline, self hosted models will be ad free (in my estimation).

    • Assuming you know it's an ad. Ads in answers will generate a ton of revenue and you'll never know if that Hilton really is the best hotel or if they just paid the most.

      3 replies →

    • The OpenAI pitch for “publishing partnerships” (basically buying bias and placement) leaked last year.

    • They're all already trained on ads, and it would be silly to think advertisers aren't going to optimize for this.

    • Do you think it would show ads, or just prioritize content based on who has paid for ad placement?

> And for context, search advertising is 40% of digital ad revenue.

But all the search companies have their own AI so how would OAI make money in this sector?

  • Several ways, although I'm not sure whether the below will happen:

    1. Paid ads - ChatGPT could offer paid listings at the top of its answers, just like Google does when it provides a results page. Not all people will necessarily leave Google/Gemini for future search queries, but some of the money that used to go to Google/Bing could now go to OpenAI.

    2. Behavioral targeting based on past ChatGPT queries. If you have been asking about headache remedies, you might see ads for painkillers - both within ChatGPT and as display ads across the web.

    3. Affiliate / commission revenue - if you've asked for product recommendations, at least some might be affiliate links.

    The revenue from the above likely wouldn't cover all costs based on their current expenditure. But it would help a bit - particularly for monetizing free users.

    Plus, I'm sure there will be new advertising models that emerge in time. If an advertiser could say "I can offer $30 per new customer" and let AI figure out how to get them and send a bill, that's very different to someone setting up an ad campaign - which involves everything from audience selection and creative, to bid management and conversion rate optimization.
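
    As a toy illustration of that last model (purely hypothetical names, not any real system), the assistant would only need an estimate of how likely the user is to convert, and could then rank offers by expected payout instead of running a managed campaign:

      # Hypothetical sketch of the "pay $30 per new customer" idea above:
      # advertisers state a CPA offer, and placements are ranked by expected
      # payout = CPA offer * predicted conversion probability for this chat.
      from dataclasses import dataclass

      @dataclass
      class Offer:
          advertiser: str
          cpa_usd: float  # e.g. 30.0 for "$30 per new customer"

      def predicted_conversion(offer: Offer, conversation: str) -> float:
          """Placeholder for a model estimating P(user becomes a customer)."""
          raise NotImplementedError

      def best_offer(offers: list[Offer], conversation: str) -> Offer | None:
          # Pick the offer with the highest expected payout for this conversation;
          # the advertiser is billed only if a new customer actually materializes.
          if not offers:
              return None
          return max(
              offers,
              key=lambda o: o.cpa_usd * predicted_conversion(o, conversation),
          )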

    • So I don't necessarily disagree with your suggestions, but that is just not a $1T company you're describing. That's basically an X/Twitter-sized company, and most agree that $44B was overpaying.

      It's not that OpenAI hasn't created something impressive; it just came at too high a price. We're talking space-program money, but without all the neat technologies that came along as a result. OpenAI more or less developed ONE technology; no related products or technologies have been spun out of the program. To top it all off, the thing they built is apparently not that hard to replicate.

      1 reply →

> It also estimates that LLM companies will capture 2% of the digital advertising market, which seems kind of low to me.

This cannot all be about advertising. They are selling a global paradigm shift, not a fraction of low-conversion-rate eyeballs. If they start claiming advertising is a big part of their revenue stream, then we will know that AI has reached a dead end.

> it will likely be harder to block

Maybe users will employ LLMs to block ads? There's a problem in that local LLMs are less powerful and so would have a hard time blocking stealth ads crafted by a more powerful LLM, and they would also add latency (remote LLMs add latency too, but the user may not want to pay double for that).

Seems like ad targeting might be a tough sell here, though; it'd basically have to be "trust me bro". Like, I want to advertise Coca-Cola when people ask about terraforming deserts? I don't think I'd be surprised by either amazing success or terrifying failure.

Perplexity actually did search with references linked to websites that they could relate in a graph, and even that only made them something like $27k.

I think the problem is that on Facebook and Google you can build an actual graph because content is a concrete thing (a URL, a video link, etc.). It will be much harder, I think, to convert my philosophical musings into active insights.

  • So few people understand how advertising on the internet works, and that, I guess, is why Google and Meta basically print money.

    Even here the idea that it’s as simple as “just sell ads” is utterly laughable and yet it’s literally the mechanism by which most of the internet operates.

You have to take the source into consideration. The FT is part of the Anthropic circle of media outlets and financial ties. It benefits them to create a draft of support for OpenAI's competition, primarily Anthropic, but they (the FT) also have deep ties to Google and the adtech regime.

They benefit from slowing and attacking OpenAI because there's no clear purpose for these centralized media platforms except as feeds for AI, and even then, social media and independents are higher quality sources and filters. Independents are often making more money doing their own journalism directly than the 9 to 5 office drones the big outlets are running. Print media has been on the decline for almost 3 decades now, and AI is just the latest asteroid impact, so they're desperate to stay relevant and profitable.

They're not dead yet, and they're using lawsuits and backroom deals to insert themselves into the ecosystem wherever they can.

This stuff boils down to heavily biased industry propaganda, subtly propping up their allies, overtly bashing and degrading their opponents. Maybe this will be the decade the old media institutions finally wither up and die. New media already captures more than 90% of the available attention in the market. There will be one last feeding frenzy as they bilk the boomers as hard as possible, but boomers are on their last hurrah, and they'll be the last generation for whom TV ads are meaningfully relevant.

Newspapers, broadcast TV, and radio are dead, long live the media. I, for one, welcome our new AI overlords.

  • All of which is great theory without any kind of evidence? Whereas the evidence pretty clearly shows OpenAI is losing tons of money and the revenue is not on track to recover it?

    • Well, for one, the model doesn't take various factors into account, assumes a fixed cost per token, and doesn't allow for the people in charge of buying and selling the compute to make decisions that make financial sense. Some of OpenAI's commitments and compute are going toward research, with no contracted need for profit or even revenue.

      If you account for the current trajectory of model capabilities, and for bare-minimum competence and good faith on the part of OpenAI and the cloud compute providers, then it's nowhere near a money pit or a shenanigan; it's a typical medium-to-high-risk VC investment play.

      At some point they'll pull back the free stuff and the compute they're burning to attract and retain free users; they'll also dial in costs and tweak their profit-per-token figure. A whole lot of money is being spent right now as marketing by providing free or subsidized access to ChatGPT.

      If they wanted to maximize exposure first and then dial in costs, they could be profitable with no funding shortfalls by 2030, provided they pivot: dial back available free access, aggressively promote paid tiers and product integrations.

      This doesn't even take into account the shopping assistant/adtech deals, just ongoing research trajectories, assumed improved efficiencies, and some pegged performance level presumed to be "good enough" at the baseline.

      They're in maximum overdrive expansion mode, staying relatively nimble, and they've got the overall lead in AI, for now. I don't much care for Sam Altman on a personal level, but he is a very savvy and ruthless player of the VC game, with some of the best ever players of those games as his mentors and allies. I have a default presumption of competence and skillful maneuvering when it comes to OpenAI.

      When an article like this FT piece comes out and makes assumptions of negligence and incompetence and projects the current state of affairs out 5 years in order to paint a negative picture, then I have to take FT and their biases and motivations into account.

      The FT article is painting a worst case scenario based on the premise "what if everyone involved behaved like irresponsible morons and didn't do anything well or correctly!" Turns out, things would go very badly in that case.

      ChatGPT was released less than 3 years ago. I think predicting what's going to happen in even 1 year is way beyond the capabilities of FT prognosticators, let alone 5 years. We're not in a regime where Bryce Elder, finance and markets journalist, is capable or qualified to make predictions that will be sensible over any significant period of time. Even the CEOs of the big labs aren't in a position to say where we'll be in 5 years. I'd start getting really skeptical when people start going past 2 years, across the board, for almost anything at this point.

      Things are going to get weird, and the rate at which things get weird will increase even faster than our ability to notice the weirdness.

      2 replies →

I mean seriously, if they offered ChatGPT for free but with ads I bet many would use that.

There is your multi-billion-dollar revenue stream.

FT is really losing it. It used to be reliable, with quality takes. Now it's mostly falling in line with the spectating hot-takers.

  • In what sense? They are asking the questions that investment managers would be asking, like: "where the fuck is your revenue going to come from"

    and "200 billion, when your revenue is 12, is the market you are targeting actually big enough to support that"

  • It is to the point of yellow journalism. They know that the "OpenAI is going to go belly up in a week!" take is going to be popular with AI skeptics, who include a large number of HN readers. This thread shot up to the top of the front page almost immediately. All of that improves the chances of roping in more subscribers.