Comment by aurareturn

1 day ago

I'm not saying that data center buildouts can't overshoot demand, but AI compute is different from a fiber buildout. The more compute you have, the smarter the AI. You can use the compute to let the AI think longer (maybe hours, days, or weeks) on a solution. You can run multiple AI agents simultaneously and have them work together or check each other's work. You can train better models, and run inference on them, with more compute.

So there is always a use for more compute to solve problems.
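
To make the "let agents check each other / think longer" idea above concrete, here is a minimal sketch of trading extra compute for accuracy by sampling a model repeatedly and taking a majority vote (a crude self-consistency setup). `ask_model` is a hypothetical stand-in for an LLM call, simulated here as being right 60% of the time; every name and number is an assumption for illustration.

```python
# Toy version of "throw more compute at the same question": sample the model
# several times and take a majority vote over the answers (self-consistency).
import random
from collections import Counter

def ask_model(question: str) -> str:
    # Hypothetical stand-in for an LLM call: right 60% of the time,
    # otherwise a random wrong answer. Not a real API.
    return "42" if random.random() < 0.6 else str(random.randint(0, 99))

def answer_with_budget(question: str, n_samples: int) -> str:
    # More samples means more compute, and a more reliable majority answer.
    votes = Counter(ask_model(question) for _ in range(n_samples))
    return votes.most_common(1)[0][0]

if __name__ == "__main__":
    for budget in (1, 5, 25, 125):
        trials = 200
        hits = sum(answer_with_budget("q", budget) == "42" for _ in range(trials))
        print(f"{budget:>3} samples per question -> {hits / trials:.0%} correct")
```

In this toy, a handful of votes already pushes accuracy well past the single-sample 60%, which is the point: extra samples are one mechanical way to convert more compute into better answers on the same problem.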

Fiber installations can overshoot relatively easily. No matter how much fiber you install, that 4K movie isn't going to get any bigger, and the roughly three hours a day consumers spend watching isn't going to change.

Did you pay attention in computer science classes? There are problems you can't simply brute-force: you can throw all the computing power you want at them and they still won't terminate before the heat death of the universe. An LLM can only output a convolution of its training data. That's its plateau. It can't solve new problems; it can only reproduce an existing solution. More compute can make it faster at narrowing down to that existing solution, but it can't make the LLM smarter.
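
To put numbers on the brute-force point, here is a back-of-envelope for exhaustively solving the travelling salesman problem. The throughput figure is a deliberately generous assumption, not a benchmark.

```python
# Back-of-envelope for "some problems can't be brute-forced": checking every
# tour of an n-city travelling-salesman instance takes (n-1)!/2 evaluations.
from math import factorial

EVALS_PER_SECOND = 1e12    # deliberately generous assumed throughput
SECONDS_PER_YEAR = 3.15e7

for n in (10, 20, 30, 60):
    tours = factorial(n - 1) // 2
    years = tours / EVALS_PER_SECOND / SECONDS_PER_YEAR
    print(f"{n:>2} cities: {tours:.2e} tours, ~{years:.2e} years to enumerate")
```

At 30 cities the exhaustive search already needs on the order of 10^11 years, and at 60 cities the number stops meaning anything; extra compute only helps where the underlying search is tractable.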

  • Maybe LLMs can solve novel problems, maybe not. We don't know for sure, but the trend suggests they can.

    There are still plenty of problems that more tokens would let us solve, and solve faster and better. There is absolutely no way we've already met AI compute demand for the problems that LLMs can solve today.

    • There is zero evidence that LLMs can do anything novel without a human in the loop. At most, an LLM is a hammer: not useless by any stretch of the imagination, but yes, you need a human to swing it.

    • Every solution an AI has generated for a novel problem has ultimately been retracted. There is no trend, there is only hope.

  • LLMs are considered Turing complete.

    • Only if you instantiate it once.

      If you use it like an agent, stick it in a loop, and run it until it achieves a specific outcome, it's not.

  • Not really. You can leverage randomness (and LLMs absolutely do) to generate bespoke solutions and then use known methods to verify them; a sketch of that pattern follows this list. I'm not saying LLMs are great at this, since they're hampered by their inability to "save" what they learn, but we know that any kind of "new idea" is a function of random and deterministic processes mixed together in varying amounts.

    Everything is either random, deterministic, or some shade of the two. Human brain "magic" included.
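
Here is a minimal sketch of the "randomness plus verification" pattern from the comment above: the proposer is pure random guessing and the verifier is a cheap, deterministic check. The factoring task and all constants are illustrative assumptions.

```python
# Minimal sketch of "random proposals + deterministic verification".
# Toy task: find a nontrivial factor of N by guessing; producing the answer
# is left to randomness, checking it is a known, cheap method.
import random

N = 104_729 * 104_723        # product of two primes, chosen for illustration

def propose() -> int:
    # Random idea generator (stand-in for a stochastic model's output).
    return random.randrange(2, 200_000)

def verify(candidate: int) -> bool:
    # Deterministic, well-understood verification step.
    return N % candidate == 0

def search(budget: int):
    # Spend `budget` proposals and return the first one that verifies.
    for _ in range(budget):
        candidate = propose()
        if verify(candidate):
            return candidate
    return None

print(search(budget=500_000))    # a larger budget raises the odds of a hit
```

Swap in any problem whose answers are expensive to produce but cheap to check and the same loop applies, which is where extra compute on the proposal side keeps paying off.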

You can't really use more compute, because power is already the bottleneck. Datacenter buildouts are now being measured in GW, which tells you everything you need to know. Newer hardware will be a lot more power-efficient, but it will also be highly scarce for exactly that reason.

  • Energy is being scaled up too. But compute and fiber buildouts are still fundamentally different, in my opinion.
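
Putting rough numbers on the gigawatt framing above: a quick back-of-envelope converts one GW of datacenter power into accelerator counts. Every figure below (per-GPU power, host overhead, PUE) is an assumption chosen for illustration, not a vendor or operator number.

```python
# How many accelerators does one gigawatt actually power? All constants are
# illustrative assumptions, not vendor specs.
DATACENTER_POWER_W = 1e9     # a 1 GW campus
ACCELERATOR_TDP_W  = 700     # assumed GPU board power
HOST_OVERHEAD_W    = 300     # assumed CPU, memory, and networking per GPU
PUE                = 1.2     # assumed facility overhead (cooling, power conversion)

watts_per_gpu = (ACCELERATOR_TDP_W + HOST_OVERHEAD_W) * PUE
gpus = DATACENTER_POWER_W / watts_per_gpu
print(f"~{gpus:,.0f} accelerators per GW at {watts_per_gpu:.0f} W each")
```

Under these assumptions a single gigawatt campus works out to roughly 800,000 accelerators, which is why power, rather than silicon, sets the ceiling on how much compute can actually be used.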