
Comment by davnicwil

10 days ago

This is a good point, and it would be interesting to see the relative value of this building-and-housing 'plumbing' overhead vs the chips themselves.

I guess another example of the same thing is power generation capacity, although that comes online so much more slowly that I'm not sure the dynamics would work the same way.

The data centers built in 1998 don't have nearly enough power or cooling capacity to run today's infrastructure. I'd be surprised if very many of them are even still in use. Cheaper to build new than upgrade.

  • How come? I'd expect that efficiency gains would lower power and thus cooling demands - are we packing more servers into the same space now or losing those gains elsewhere?

    • Power limitations are a big deal. I haven't shopped for datacenter cages since the web 2.0 days, but even back then it was a significant issue. Lots of places couldn't give you more than a few kW per rack. State-of-the-art servers can be 2 kW each, so you quickly start pushing 60 kW per rack. Re-rigging a decades-old data center for that isn't trivial. Remember you need not just the raw power but cooling, backup generator capacity, enough battery to cover the transition, etc. (rough numbers are sketched at the end of this thread).

      It's hugely expensive, which is why the big cloud infrastructure companies have spent so much on optimizing every detail they can.

    • Yes - blade servers replacing what used to be 2 or 3 rack-mount servers. Both the air exchange and the power requirements are radically different if you want to fill that rack the way it was filled before.

    • It's just an educated guess, but I expect that power density has gone up quite a bit as a form of optimization. Efficiency gains permit both lower-power (mobile) parts and higher-compute (server) parts. How tightly you pack those server parts in is an entirely different matter. How many H100s can you fit on average per 1U of space?
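To put a rough number on that last question, here is a minimal sketch using approximate public figures for a DGX H100-class system (8 H100s in roughly an 8U chassis, ~10.2 kW max system power, ~700 W per SXM GPU). Treat every figure as a ballpark assumption rather than a spec.

```python
# Rough H100-per-rack-unit density, using ballpark figures (assumptions, not exact specs).
H100_SXM_TDP_KW = 0.7     # ~700 W per H100 SXM module (approx.)
GPUS_PER_SYSTEM = 8       # 8 GPUs in a DGX H100-class system
SYSTEM_HEIGHT_U = 8       # ~8U chassis
SYSTEM_POWER_KW = 10.2    # ~10.2 kW max system power

gpu_only_kw = GPUS_PER_SYSTEM * H100_SXM_TDP_KW     # ~5.6 kW of that is the GPUs alone
gpus_per_u = GPUS_PER_SYSTEM / SYSTEM_HEIGHT_U      # ~1 GPU per rack unit
kw_per_u = SYSTEM_POWER_KW / SYSTEM_HEIGHT_U        # ~1.3 kW per rack unit
packed_42u_rack_kw = kw_per_u * 42                  # hypothetical fully packed 42U rack

print(f"~{gpus_per_u:.1f} H100 per 1U, ~{kw_per_u:.2f} kW per 1U")
print(f"GPUs alone: ~{gpu_only_kw:.1f} kW of the {SYSTEM_POWER_KW} kW system budget")
print(f"hypothetically packed 42U rack: ~{packed_42u_rack_kw:.0f} kW")
```

So the answer is only about one GPU per rack unit on average, but filling a rack at that density already puts you at roughly ten times what an older facility could deliver and cool per rack, which is the point upthread.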
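And a similar back-of-envelope for the few-kW-per-rack vs 60 kW point upthread, with illustrative assumptions only (a 42U rack, 1U servers at 2 kW, a legacy cage budget of ~5 kW per rack):

```python
# Back-of-envelope rack power arithmetic; all figures are illustrative assumptions.
RACK_UNITS = 42           # assumed full-height rack
SERVER_POWER_KW = 2.0     # "state-of-the-art servers can be 2 kW each"
LEGACY_BUDGET_KW = 5.0    # assumed "a few kW per rack" in an older facility

full_rack_kw = RACK_UNITS * SERVER_POWER_KW     # ~84 kW if every U held a 2 kW server
dense_rack_kw = 30 * SERVER_POWER_KW            # ~30 such servers gives the ~60 kW figure above

print(f"fully packed rack: {full_rack_kw:.0f} kW")
print(f"dense but realistic rack: {dense_rack_kw:.0f} kW")
print(f"vs legacy budget: {dense_rack_kw / LEGACY_BUDGET_KW:.0f}x over")
```

That same multiple then has to be found again in cooling, generator capacity, and UPS sizing, which is where the retrofit cost comes from.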