Comment by tgtweak
5 hours ago
>San Diego has a mild climate and we opted for pure outside air cooling. This gives us less control of the temperature and humidity, but uses only a couple dozen kW. We have dual 48” intake fans and dual 48” exhaust fans to keep the air cool. To ensure low humidity (<45%) we use recirculating fans to mix hot exhaust air with the intake air. One server is connected to several sensors and runs a PID loop to control the fans to optimize the temperature and humidity.
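For readers curious what that control loop might look like in practice, here's a minimal sketch of a PID fan controller with a humidity override. This is not Comma's actual code: the sensor/actuator callbacks, gains, and setpoints are all illustrative assumptions.

```python
# Minimal sketch of the kind of PID fan control described above.
# Not Comma's actual code: names, gains and setpoints are assumptions.

class PID:
    def __init__(self, kp, ki, kd, out_min=0.0, out_max=1.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0
        self.prev_err = None

    def update(self, error, dt):
        self.integral += error * dt
        deriv = 0.0 if self.prev_err is None else (error - self.prev_err) / dt
        self.prev_err = error
        out = self.kp * error + self.ki * self.integral + self.kd * deriv
        return min(self.out_max, max(self.out_min, out))

TEMP_SETPOINT_C = 27.0   # target intake temperature (illustrative)
RH_LIMIT = 45.0          # keep relative humidity below this

temp_pid = PID(kp=0.08, ki=0.01, kd=0.0)

def control_step(read_temp_c, read_rh, set_exhaust_duty, set_recirc_duty, dt=5.0):
    """One iteration: more exhaust airflow when the room runs hot, more
    recirculated (hot, dry) exhaust air when intake RH creeps too high."""
    temp_error = read_temp_c() - TEMP_SETPOINT_C
    set_exhaust_duty(temp_pid.update(temp_error, dt))
    # Simple proportional override for humidity: mix in exhaust air
    # until RH drops back under the limit.
    rh_excess = max(0.0, read_rh() - RH_LIMIT)
    set_recirc_duty(min(1.0, rh_excess / 10.0))
```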
Oh man, this is bad advice. Airborne humidity and contaminants will KILL your servers on a very short horizon in most places, even San Diego. I highly suggest enthalpy wheel coolers (KyotoCooling is one vendor; Switch runs very similar units on their massive datacenters in the Nevada desert). They remove heat from the indoor air using outdoor air, and can boost slightly with an integrated refrigeration unit to hit target intake temps, without ever passing the air from one side to the other. This has huge benefits for air quality control and outdoor air tolerance, and a single 500 kW heat rejection unit uses only 25 kW of input power (when it needs to boost the AC unit's output). You can combine this with evaporative cooling on the exterior intakes to lower the temps even further, at the expense of some water consumption (typically far cheaper than the extra electricity to boost the cooling through an HVAC cycle).
Not knocking the achievement, just speaking from experience: taking outdoor air (even filtered and mixed) into a datacenter is a recipe for hardware failure, and the mean time to failure is highly dependent on your outdoor air conditions. I've run 3 MW facilities with passive air cooling, and taking outdoor air directly into servers requires a LOT more conditioning and consideration than is outlined in this article.
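To make that concrete, here's a rough back-of-the-envelope of what an air-to-air heat wheel buys you. The effectiveness, temperatures, and airflow below are my own illustrative assumptions, not vendor specs, but they show how a roughly 500 kW heat load can be rejected while the two airstreams never mix.

```python
# Rough illustration of the heat-wheel math (illustrative numbers, not vendor specs):
# a rotary heat exchanger cools return air toward the outdoor temperature
# without the indoor and outdoor airstreams mixing.

def wheel_supply_temp_c(t_return_c, t_outdoor_c, effectiveness=0.8):
    """Sensible effectiveness model: supply temp after the wheel."""
    return t_return_c - effectiveness * (t_return_c - t_outdoor_c)

def heat_rejected_kw(airflow_m3_s, t_return_c, t_supply_c):
    """Q = V_dot * rho * cp * dT, with air at ~1.2 kg/m^3 and cp ~1.005 kJ/(kg*K)."""
    return airflow_m3_s * 1.2 * 1.005 * (t_return_c - t_supply_c)

# Example: 35 C return air, 22 C outdoor air, ~40 m^3/s of airflow.
t_supply = wheel_supply_temp_c(35.0, 22.0)       # ~24.6 C
print(heat_rejected_kw(40.0, 35.0, t_supply))    # ~500 kW rejected
```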
Yes, it's easy to destroy the servers with a lot of dust and/or high humidity. But with filtering, and by ensuring humidity never exceeds 45%, we've had pretty good results.
I remember visiting a small data center (about half the size of the Comma one) where shoe covers were required. Apparently they were worried about people’s shoes bringing in dust and other contamination.
It's not a static number, because what matters is the dew point, which also depends on ambient air temperature: 45% RH at low temps can be far more dangerous than 65% RH at warm ambient.
Likewise, the impact on server longevity is not a hard boundary but rather an exposure-over-time gradient: if you exceed the "low risk" boundary (above a -12°C/10°F dew point or a 15°C/59°F dry bulb temp), MTBF drops below design. This is defined in ASHRAE TC 9.9, which server equipment manufacturers conform and build to. That means if you're running your servers above the high-risk curve for humidity and temperature, you're shortening their life considerably compared to the low-risk curve.
Generally, 15% RH is considered suboptimal and can be dangerous near freezing temperatures. In San Diego in January there were several 90%+ RH scenarios that would have been dangerous for servers even when mixed down with warm exhaust air. Furthermore, outdoor air at 76°F during that period means you have limited capacity to mix in warm exhaust air (which, btw, came from that same 99% RH input air) without getting into higher-than-ideal intake temps.
Any dew point above 62.5°F is considered high risk for servers, as is any intake temp exceeding 32°C/90°F. You want to be around the midpoint between those and 16°C/65°F temps and a -12°C/10°F dew point to have no impact on server longevity or MTBF rates.
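Since dew point keeps coming up, here's a small sketch of the check being described, using the standard Magnus approximation for dew point and the high-risk limits quoted in this thread (this is not an authoritative restatement of the ASHRAE TC 9.9 envelope).

```python
# Dew-point check against the high-risk limits quoted above.
# Limit values come from this thread, not from the ASHRAE spec itself.
import math

def dew_point_c(temp_c, rh_percent):
    """Magnus approximation: dew point from dry-bulb temp and relative humidity."""
    a, b = 17.62, 243.12
    gamma = (a * temp_c / (b + temp_c)) + math.log(rh_percent / 100.0)
    return b * gamma / (a - gamma)

HIGH_RISK_DEW_POINT_C = 17.0   # ~62.5 F
HIGH_RISK_DRY_BULB_C = 32.0    # ~90 F

def intake_high_risk(temp_c, rh_percent):
    dp = dew_point_c(temp_c, rh_percent)
    return dp > HIGH_RISK_DEW_POINT_C or temp_c > HIGH_RISK_DRY_BULB_C

# A humid San Diego morning: 76 F (24.4 C) at 90% RH.
print(round(dew_point_c(24.4, 90.0), 1))  # ~22.7 C dew point, well above the limit
print(intake_high_risk(24.4, 90.0))       # True
```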
Lastly, air contaminants, in the form of dust (which can be filtered out) and chemicals (which can't be without extensive scrubbing), are probably the most detrimental thing to server equipment if not properly managed, and they require very intentional and frequent filter changes (typically high-MERV pleated filters changed on a time or pressure-drop signal) to prevent server degradation and equipment risk.
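A hedged sketch of what "changed on a time or pressure-drop signal" can look like in practice; the baseline and thresholds are made-up illustrative values, not from any particular filter spec.

```python
# Replace filters when differential pressure across the filter bank rises
# a set amount above the clean-filter baseline, with a time-based backstop.
# All values are illustrative assumptions.
CLEAN_DP_PA = 60.0        # measured when the filters are new
CHANGE_DELTA_PA = 125.0   # replace once dP has risen this much
MAX_AGE_DAYS = 90         # time-based backstop even if dP looks fine

def filter_needs_change(dp_pa, age_days):
    return (dp_pa - CLEAN_DP_PA) >= CHANGE_DELTA_PA or age_days >= MAX_AGE_DAYS

print(filter_needs_change(dp_pa=95.0, age_days=30))    # False
print(filter_needs_change(dp_pa=200.0, age_days=30))   # True
```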
The last consideration is fire suppression. Permitted datacenters usually have to comply with a separate fire code, under which direct outdoor air exchange without active shutdown and dry suppression is not permitted; this is to prevent a scenario where your equipment catches fire and a constant supply of fresh, oxygen-rich outdoor air turns it into an inferno. Smoke detection systems also don't operate well with outdoor-mixed air or any level of airborne particulates.
So, for those reasons, among a few others, open-air datacenters are not recommended unless you're doing them at Google or Meta scale, and in those scenarios you typically have much more extensive systems and purpose-designed hardware in order to operate for the design life of the equipment without issues.
I didn't even know this was something you had to worry about. This is why I use the cloud: all the unknown unknowns.