Comment by stackghost

1 day ago

Even if you create a material with surface emissivity of 1.0:

- let's say 8× 800 W GPUs and neglect the CPU; that's 6400 W

- let's further assume the PSU is 100% efficient

- let's also assume that you allow the server hardware to run at 77 degrees C, or 350 K, which is already pretty hot for modern datacenter chips.

Your radiator would need to dissipate those 6400 W, which requires almost 8 square meters of radiating area. That's a lot of launch mass. Running the radiator 50 degrees hotter cuts the required area to about 4.4 square meters, with the consequence that chip temps also rise by 50 degrees, putting them at 127 degrees C.
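For concreteness, here's a back-of-the-envelope version of that sizing in Python, straight from the Stefan-Boltzmann law. Same idealized assumptions as above (one-sided radiator, emissivity 1.0, no absorbed sunlight or Earth IR); the function name is just mine for illustration.

```python
# Radiator area from the Stefan-Boltzmann law: P = eps * sigma * A * T^4
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiator_area(power_w: float, temp_k: float, emissivity: float = 1.0) -> float:
    """Area (m^2) needed to radiate power_w at surface temperature temp_k."""
    return power_w / (emissivity * SIGMA * temp_k ** 4)

power = 8 * 800  # 8 GPUs at 800 W each, PSU assumed 100% efficient

for t in (350, 400):  # 77 C and 127 C
    print(f"{t} K: {radiator_area(power, t):.1f} m^2")
# -> 350 K: 7.5 m^2   (almost 8)
# -> 400 K: 4.4 m^2
```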

No CPU I'm aware of can run at those temps for very long, and most modern chips will start to self-throttle above about 100 degrees C.

Hence the fancy air-conditioning pumps.

  • ... on satellites?

    • Yes, that’s what we’re talking about. Data centers in space.

      You put the cold side of the phase-change cycle on the internal cooling loop, step the external cooling loop up to as high a temperature as you can, and then circulate that through the radiators. You might even do this step-up more than once (see the sketch below).

      Imagine the data center as a box. You want it cold inside, and there's a compressor you use to transfer heat from inside to outside, so the outside gets hot and the inside stays cold. You then put a radiator on the back of the box and radiate the heat into the darkness of space.
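      To make that concrete, here's a rough sketch of one step-up with an ideal (Carnot) heat pump; real compressors do considerably worse, and the 450 K hot-side temperature and function name are my own illustrative assumptions:

      ```python
      # "Step up" the heat with an ideal heat pump, then radiate at the
      # higher temperature. Assumes Carnot COP, a one-sided emissivity-1
      # radiator, and that the pump's own work is rejected at the hot side.
      SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

      def pumped_radiator(q_cold_w, t_cold_k, t_hot_k):
          cop = t_cold_k / (t_hot_k - t_cold_k)  # ideal cooling COP
          work = q_cold_w / cop                  # compressor power needed
          q_hot = q_cold_w + work                # total heat to radiate
          area = q_hot / (SIGMA * t_hot_k ** 4)
          return work, q_hot, area

      w, q, a = pumped_radiator(6400, 350, 450)
      print(f"pump power {w:.0f} W, radiate {q:.0f} W, area {a:.1f} m^2")
      # -> pump power ~1829 W, radiate ~8229 W, area ~3.5 m^2
      ```

      So pumping the heat up to 450 K roughly halves the radiator area versus the passive ~7.5 m^2 at 350 K, but you pay roughly 29% extra electrical power, which also has to be generated and radiated.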

      This is all very dependent on the biggest and cheapest rockets in the world, but it's a tradeoff of convenience and serviceability for unlimited free energy.
