Comment by simonw
10 days ago
That's true for the GPUs themselves, but the data centers with their electricity infrastructure and cooling and suchlike won't become obsolete nearly as quickly.
This is a good point, and it would be interesting to see the relative value of this building and housing 'plumbing' overhead vs. the chips themselves.
I guess another example of the same thing is power generation capacity, although that comes online so much more slowly that I'm not sure the dynamics would work the same way.
The data centers built in 1998 don't have nearly enough power or cooling capacity to run today's infrastructure. I'd be surprised if very many of them are even still in use. Cheaper to build new than upgrade.
How come? I'd expect that efficiency gains would lower power and thus cooling demands - are we packing more servers into the same space now or losing those gains elsewhere?
How much more centralized data center capacity do we actually need outside of AI? And how much more would we need if we spent slightly more time doing things efficiently?
This is true. It’s probably 2-3 times as long as a GPU chip’s depreciation timeline. But it’s still probably half or a quarter of the depreciation timeline of a carrier fiber line.
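To make that comparison concrete, here is a minimal straight-line depreciation sketch. The lifetimes are rough, illustrative assumptions on my part (GPUs ~4 years, data center shell plus power/cooling ~12 years, fiber ~30 years), not figures from the thread:

    # Illustrative straight-line depreciation; lifetimes are assumed, not sourced.
    assets = {
        "GPU fleet": 4,                            # ~4 year useful life (assumption)
        "Data center shell + power/cooling": 12,   # ~12 years (assumption)
        "Carrier fiber line": 30,                  # ~30 years (assumption)
    }
    for name, lifetime_years in assets.items():
        annual_pct = 100 / lifetime_years  # percent of capex written off per year
        print(f"{name:35s} ~{annual_pct:4.1f}% of capex per year")

On those assumed lifetimes the building depreciates roughly three times slower than the chips, and the fiber another 2-3 times slower again, which is the shape of the comparison above.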
Even if the building itself is condemnable, what it took to build it out is still valuable.
To give a different example, right now some of the most prized sites for renewable energy are former coal plant sites, because they already have big fat transmission lines ready to go. Yesterday's industrial parks are today's gentrifying urban districts, and so on.
That’s true. The permitting is already dealt with, and that’s substantial lead time on any data center build.
Even more so for carrier lines, of course. NIMBYism is a strong block on right-of-way needs (except the undersea ones, obviously).