
Comment by jstummbillig

9 days ago

People confuse themselves with the bubble metaphor. If an AI bubble exists and pops (we need not discuss either), the already existing demand, and the demand on its way, will not just disappear. Millions of today's users will not simply decide that they don't want to use Claude Code or ChatGPT anymore.

Instead, an increasing number of people are going to want AI stuff from here on out, forever, because it's proven to be good enough in the eyes of hundreds of millions and that will create continuous hardware demand (at least because of hardware churn, but also because there are a lot of people in the world who currently don't have great access to this technology yet).

I don't know how much optimization will drive down hardware per token, but given that most people would rather wait like 5 seconds instead of 15 minutes for answers to their coding problems, I think it's safe to assume that hardware is going to be in demand for a long time, even if, for whatever wild reason, absolutely nothing happens on top of what has already happened.

The "bubble popping" mostly means that investment will drastically fall, investors will start demanding profit, and costs will soar. This will cause a lot of tools currently built on top of LLMs to become too expensive. Free tools will likely become rare.

There's a significant number of users that will not pay for AI. There's likely also a significant number of users that will not accept higher subscription costs no matter how much they use AI tools today.

When this happens, the market will go back to "normal". Yes, there will still be a higher demand for computer parts than before ChatGPT was released, but the demand will still go down drastically from current levels. So only a moderate increase in production capacity will be needed.

  • AI is already easily profitable without further optimization. If at any point investors decided that this is the best the models are going to get, because further investment isn't worth it, then we will run inference on the existing hardware, forever. What will not happen:

    - The models going away. There is no future where people will start doing more coding without AI.
    - Everyone running all AI on their existing notebook or phone. We have absolutely no indication that the best models are getting smaller and cheaper to run. In fact, GPUs are getting bigger.

    This might hurt OpenAI, depending on how good the best available open models are at that point, but this will in no way diminish the continued increased demand for hardware.

    > When this happens

    I think all of this is highly unlikely, and would put an "if" in front of it. But we will see!

> Millions of todays users will not just decide that they don't want to use claude code or chatgpt anymore

Won’t they? For a great number of people, LLMs are in the “nice to have” basket. Execs and hucksters foam at the mouth over them; other people find utility, but the vast majority are not upending their lives in service of them.

I suspect if ChatGPT evaporated tomorrow, the chronically dependent would struggle, most people would shrug and go on with their lives, and anyone with actual use cases would probably spin up a local model and go back to whatever they’re doing.

I’m not claiming hardware demand will evaporate (it definitely won’t), but interrupt the cycle and “ehh, good enough” will probably go a very long way for a very large percentage of the user base.

  • I am not sure I understand. I agree: if AI were to disappear tomorrow, people would adjust (as they did when AI, the iPhone, or the internet appeared). That's what people do.

    But now there is user demand. Who or what would take away AI? What is the scenario?

    • Lots of companies right now have slapped AI features onto their products at no extra cost. Some are offering it for free (e.g. search engines). If LLM costs significantly increase, I would expect free AI features to disappear or become extra paid features (this has already started to happen in some SaaS), while any free LLMs become simpler and cheaper.
