
Comment by iberator

2 days ago

> That's out of touch for 90% of developers worldwide

Today. But what about in 5 years? Would you bet we will be paying hundreds of billions to OpenAI yearly or buying consumer GPUs? I know what I will be doing.

  • But the progress cuts both ways: in five years, you will still want to use whatever is running in the cloud data centers. Just as today you could run GPT-2 locally as a coding agent, we still want the 100x-as-powerful shiny thing.

    • That would be great if it were the case, but my understanding is that progress is plateauing. I don't know how much of this is Anthropic / Google / OpenAI holding themselves back to save money and how much is the state of the art genuinely slowing down, though. I can imagine there could be a 64 GB consumer GPU in five years, as absurd as it feels to type that today.

    • Not really; for many cases I'm happy using Qwen3-8B on my computer, and I'd be very happy if I could run Qwen3-Coder-30B-A3B. (Minimal local-serving sketch below.)
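      For what it's worth, here's a minimal sketch of talking to a locally served model like Qwen3-8B, assuming it's exposed through an OpenAI-compatible endpoint (e.g. llama.cpp's llama-server); the model file name, port, and served model name below are placeholders, not a prescribed setup:

      ```python
      # Minimal sketch: query a locally served Qwen3-8B through an
      # OpenAI-compatible endpoint. Assumes a server was started with
      # something like:
      #   llama-server -m Qwen3-8B-Q4_K_M.gguf --port 8080
      # (model file and port are placeholders for your own setup).
      from openai import OpenAI

      client = OpenAI(base_url="http://localhost:8080/v1", api_key="local")

      resp = client.chat.completions.create(
          model="qwen3-8b",  # served model name; depends on your server
          messages=[{"role": "user",
                     "content": "Reverse a string in Python, one line."}],
      )
      print(resp.choices[0].message.content)
      ```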

  • Paying for compute in the cloud. That’s what I am betting on. Multiple providers, different data center players. There may be healthy margins for them but I would bet it’s always going to be relatively cheaper for me to pay for the compute rather than manage it myself.

    • > There may be healthy margins for them but I would bet it’s always going to be relatively cheaper for me to pay for the compute rather than manage it myself.

      Depends almost completely on usage. No one is renting out hardware 24x7 and making a loss on it.

      If you only have sporadic use, renting is better. If you're running it almost all the time, purchasing it outright is better. (Rough break-even sketch below.)
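      To make that concrete, here's a rough break-even sketch; every number in it is a made-up placeholder, not a quote from any provider:

      ```python
      # Rough rent-vs-buy break-even sketch. All numbers are hypothetical
      # placeholders -- substitute real prices for your provider/hardware.
      RENT_PER_HOUR = 2.00      # cloud GPU rental, $/hour
      PURCHASE_PRICE = 8000.00  # buying the card outright, $
      POWER_KW = 0.4            # draw under load, kW
      POWER_PRICE = 0.15        # electricity, $/kWh
      LIFETIME_YEARS = 3        # useful life before the card is obsolete

      # Marginal cost per hour of running your own card (power only).
      own_cost_per_hour = POWER_KW * POWER_PRICE
      # Break-even: RENT_PER_HOUR * h == PURCHASE_PRICE + own_cost_per_hour * h
      break_even_hours = PURCHASE_PRICE / (RENT_PER_HOUR - own_cost_per_hour)

      total_hours = LIFETIME_YEARS * 365 * 24
      print(f"Break-even at {break_even_hours:,.0f} hours "
            f"({break_even_hours / total_hours:.0%} utilization "
            f"over {LIFETIME_YEARS} years)")
      ```

      With these placeholder numbers the crossover lands around 16% utilization: below that, renting wins; near 24x7 use, buying wins.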
