Comment by manquer
12 days ago
While it is more complex to actually build out the data center, a lot of that is specific to the region you are doing it in.
They will vary by country, by state, or even by county; setting up a DC in the Bay Area versus one in, say, Ohio or Utah is a very different endeavor with different design considerations.
> They will vary by country, by state, or even by county; setting up a DC in the Bay Area versus one in, say, Ohio or Utah is a very different endeavor with different design considerations.
What point are you trying to make? It does not matter where you are in the world, what local laws exist, or what permits are required: racking up servers in a cage is much less difficult than physically building a data center (of which racking up servers is a part).
I meant that the learnings from doing actual build-outs aren't going to translate to other geographies and regulatory climates, not that the work is less difficult or less interesting and important.
Also, people doing DC build-outs aren't likely to be keen on talking publicly about permits and confidential industry agreements.
Yes, the title is clickbaity, but that is par for the course these days.
Sure, every business has confidential agreements which are usually kept secret, but there are, even on YouTube, a few people/companies who have given deep insights into the bits and bytes of building a data center from the ground up, across multiple hours of documentation. And the confidential business agreements in the data center world are, up to a certain level, the same as in any other business.
> They will vary by country, by state, or even by county; setting up a DC in the Bay Area versus one in, say, Ohio or Utah is a very different endeavor with different design considerations.
Regarding data centers that cost 9 figures and up:
For the largest players, there’s not a ton of variation. A combination of evaporative cooling towers and chillers is used to reject heat. This is a consequence of evaporative open-loop cooling being 2-3x more efficient than a closed-loop system.
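A rough way to see where a 2-3x figure like that can come from is to compare the electrical input needed per unit of heat rejected. The sketch below uses plant-level COP values that are my own illustrative assumptions (not numbers from this thread); the point is only that rejecting heat near the wet-bulb temperature via an evaporative tower buys a substantially better COP than a dry, closed-loop plant.

```python
# Back-of-envelope comparison of cooling plant efficiency.
# COP values below are illustrative assumptions, not measured data.

HEAT_LOAD_KW = 15_000  # IT heat to reject, e.g. a 15 MW hall

# Assumed plant-level COPs (kW of heat removed per kW of electricity):
COP_EVAPORATIVE = 6.0   # water-cooled chiller + cooling tower fans/pumps
COP_CLOSED_LOOP = 2.5   # air-cooled chiller / dry-cooler plant

evap_power_kw = HEAT_LOAD_KW / COP_EVAPORATIVE
closed_power_kw = HEAT_LOAD_KW / COP_CLOSED_LOOP

print(f"Evaporative plant draw: {evap_power_kw:,.0f} kW")
print(f"Closed-loop plant draw: {closed_power_kw:,.0f} kW")
print(f"Ratio: {closed_power_kw / evap_power_kw:.1f}x")
```

With those assumed COPs the closed-loop plant draws about 2.4x the electricity for the same heat load, which is in the ballpark of the claim above.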
There will be multiple medium-voltage electrical services, usually from different utilities or substations, with backup generators and UPSes and paralleling switchgear to handle failover between normal, emergency, and critical power sources.
There’s not a lot of variation since the two main needs of a data center are reliable electricity and the ability to remove heat from the space, and those are well-solved problems in mature engineering disciplines (ME and EE). The huge players are plopping these all across the country and repeatability/reliability is more important than tailoring the build to the local climate.
FWIW my employer has done billions of dollars of data center construction work for some of the largest tech companies (members of Mag7) and I’ve reviewed construction plans for multiple data centers.
You've got more experience there than me, and I've only seen the plans for a single center.
I'll point out that some of the key thermal and power stuff in those plans you saw may have come from the hyperscalers themselves - our experience a dozen years or so ago was that we couldn't just put it out to bid, as the typical big construction players knew how to build old data centers, not new ones, and we had to hire a (very small) engineering team to design it ourselves.
Heat removal is well-solved in theory. Heat removal from a large office building is well-solved in practice - lots of people know exactly what equipment is needed, how to size, install, and control it, what building features are needed for it, etc. Take some expert MEs without prior experience at this, toss them a few product catalogs, and ask them to design a solution from first principles using the systems available and it wouldn't be so easy.
There are people for whom data center heat removal is a solved problem in practice, although maybe not in the same way because the goalposts keep moving (e.g. watts per rack). Things may be different now, but a while back very few of those people were employed by companies who would be willing to work on datacenters they didn't own themselves.
Finally I'd add that "9 figures" seems excessive for building+power+cooling, unless you're talking crazy sizes (100MW?). If you're including the contents, then of course they're insanely expensive.
Issues in building your own physical data center (based on a 15MW location some people I know built):

1 - thermal. To get your PUE down below, say, 1.2, you need to do things like hot-aisle containment or, better yet, water cooling - the hotter your heat, the cheaper it is to get rid of.[*]

2 - power distribution. How much power do you waste getting it to your machines? Can you run them on 220V, so their power supplies are more efficient?

3 - power. You don't just call your utility company and ask them to run 10+MW from the street to your building.

4 - networking. You'll probably need redundant dark fiber running somewhere.
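To put a number on the PUE point in item 1: PUE is total facility power divided by IT power, so at this scale the difference between 1.2 and a sloppier figure is several megawatts of pure overhead. A minimal sketch, using the 15MW load and 1.2 target from above (the 1.5 comparison point is my own illustrative assumption):

```python
# PUE = total facility power / IT equipment power.
# 15 MW IT load and the 1.2 target come from the comment above;
# the 1.5 comparison point is an illustrative assumption.

IT_LOAD_MW = 15.0

def overhead_mw(pue: float, it_load_mw: float = IT_LOAD_MW) -> float:
    """Power spent on cooling, distribution losses, etc. at a given PUE."""
    return (pue - 1.0) * it_load_mw

for pue in (1.2, 1.5):
    print(f"PUE {pue}: total draw {pue * IT_LOAD_MW:.1f} MW, "
          f"overhead {overhead_mw(pue):.1f} MW")
```

That's 3 MW of overhead at PUE 1.2 versus 7.5 MW at 1.5 - a difference you pay for every hour the facility runs.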
1 and 2 are independent of regulatory domain. 3 involves utilities, not governments, and is probably a clusterf*ck anywhere; 4 isn't as bad (anywhere in the US; not sure elsewhere) because it's not a monopoly, and you can probably find someone to say "yes" for a high enough price.
There are people everywhere who are experts in site acquisition, permits, etc. Not so many who know how to build the thermals and power, and who aren't employed by hyperscalers who don't let them moonlight. And depending on your geographic location, getting those megawatts from your utility may be flat out impossible.
This assumes a new build. Retrofitting an existing building probably ranges from difficult to impossible, unless you're really lucky in your choice of building.
[*] hmm, the one geographic issue I can think of is water availability. If you can't get enough water to run evaporative coolers, that might be a problem - e.g. dumping 10MW into the air requires boiling off I think somewhere around 100K gallons of water a day.
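For anyone who wants to check that number: treating all 10MW as rejected by evaporating water at its latent heat of vaporization (~2.26 MJ/kg), you land right around 100K US gallons a day.

```python
# Sanity check on the ~100K gallons/day figure: assume all heat is
# rejected by evaporating water (latent heat of vaporization).

HEAT_MW = 10.0
LATENT_HEAT_J_PER_KG = 2.26e6     # water, approx., at typical tower temps
SECONDS_PER_DAY = 86_400
LITERS_PER_US_GALLON = 3.785

kg_per_second = HEAT_MW * 1e6 / LATENT_HEAT_J_PER_KG
liters_per_day = kg_per_second * SECONDS_PER_DAY   # 1 kg of water ~ 1 L
gallons_per_day = liters_per_day / LITERS_PER_US_GALLON

print(f"{kg_per_second:.1f} kg/s evaporated")
print(f"~{gallons_per_day:,.0f} US gallons/day")
```

That works out to roughly 4.4 kg/s, or about 100,000 US gallons per day, so the original estimate holds up. In practice not all heat leaves via evaporation, so treat this as an upper bound on consumptive water use for that portion of the load.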