Comment by cmiles8

5 hours ago

They need to be more worried about creating a viable economic model for the present AI craze. Right now there’s no clear path to making any of the present insanity a profitable endeavor. Yes, NVIDIA is killing it, but with money pumped in from highly upside-down sources.

Things will regulate themselves pretty quickly when the financial music stops.

Nvidia's biggest mistake is investing the money it makes selling shovels back into the prospecting firms. If not for that, they'd be fine.

Do you mean that they need to find better ways to create value by using AI, or that they need better ways to extract value from end-users of AI?

I'd argue that "value creation" is already at a decent position considering generative AI and the use case as "interactive search engine" alone.

Regarding "value extraction": Advertising should always be an option here, just like it was for radio, television and online content in general in the past.

Preventing smaller entities (or even private individuals) from just doing their own thing and making their own models seems like the biggest long-term difficulty to me (from the perspective of the "rent-seeking" tech giant).

  • > I'd argue that "value creation" is already at a decent position considering generative AI and the use case as "interactive search engine" alone.

    > Regarding "value extraction": Advertising should always be an option here, just like it was for radio, television and online content in general in the past.

    Not at the actual price it's going to cost though. The cost of an "interactive search" (LLM) vs a "traditional search" (Google) is many times higher. People tolerate ads to pay Google for the service, but imagine how many ads ChatGPT would need, or how much it would have to cost, to compensate for e.g. a 10x difference. Last time I read about this, a few months ago, OpenAI was losing money on ChatGPT's paid tier because the people paying for it were using it a lot.

    It's more likely that ChatGPT will just sprinkle sponsored ads into its responses (you ask for a headphone comparison, and it gives you the sponsored brand, from a sponsored vendor, with an affiliate link) and hope that's enough.

    • > Not at the actual price it's going to cost though.

      But we don't know that price point yet; current prices for all of this are inflated because of the gold-rush situation, and there are lots of ways to trim marginal costs. At worst, high long-term un-optimizable costs will decrease use/adoption a bit, but I don't even think that will happen.

      Just compare the situation with video hosting: it was not profitable at first, but hardware (and bandwidth) got predictably cheaper, the technology more optimized, and monetization more effective, and now it's a good chunk of Google's total revenue.

      You could have made the same arguments about video hosting in 2005 (too expensive, nobody pays for this, where's the revenue?), but in hindsight that reasoning would have led to extremely bad business decisions.


    • You might be thinking of old models like banner ads or keyword results at the top of the search page, not asking ChatGPT the best way to clean something up and having it suggest Dawn™ Dish Soap!

The music is just getting started. The way it is going, AI will be inevitable. Companies are CONVINCED it’s adopt AI or die, whether it is effective or not.

It's already starting to replace Google searching for many people. This is why Google (and other big tech firms) started investing in it immediately.

All they need to do is start adding in sponsored results (and the ability to purchase keywords), and AI becomes insanely profitable.

  • This is crazy to me, given how inaccurate Google’s AI summaries are. They’ve basically just added a chunk of lies to the top of every search page that I have to scroll past.

  • Not according to Google’s latest revenue and profit numbers, and even Apple has hinted it isn’t seeing less revenue from Google searches.

The race is to be the first to make a self-improving model (and have the infrastructure it will demand).

This is a winner-takes-all game that stands a real chance of being the last winner-takes-all game humans will ever play. Given that, the only two choices are to throw everything you can at becoming the winner, or to sit out and hope no one wins.

The labs know that substantial losses will be had; they aren't investing in this to get a return, they are investing in it to be the winner. The losers will all be financially obliterated (and whoever sat out will be irrelevant).

I doubt they are sweating too hard though, because it seems overwhelmingly likely that most people would pay >$75/mo for LLM inference (similar to cell phone costs), and at that rate, without going hard on training, the models are absolute money printers.

  • There is zero evidence that the current approach will ever lead to a self-improving model, or that current GPU/TPU infrastructure is even capable of running self-improving models.