Comment by joshhart

10 hours ago

I thought this wasn't viable due to cooling requirements - how do you cool massive amounts of compute when the only option is to radiate it into space - nothing to convect it with?

Also, the incredible amount of grift here with the left hand paying the right is scarcely believable. Same story as Tesla buying Solarcity. Board of directors should be ashamed IMO.

Yes. It is very cold up there, but there is also no matter, or very little matter. So heat conduction and convection don't work; it's all radiation. When we learn to solve heat transfer problems in engineering school, we are generally taught to neglect radiation, because its effect on cooling the system is typically second or third order compared to the two "big C's" (conduction and convection).

  • It would take roughly 5,000 square meters of radiator area to reject a typical small data center's heat output (1 MW). Not great, not terrible.

    • Apparently, OpenAI plans to build 250 GW of computing capacity by 2033.

      To put that in space, based on your numbers, that's 1,250 square kilometers of cooling area, roughly the size of Los Angeles.

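The figures above can be sanity-checked with the Stefan-Boltzmann law. This is a minimal sketch, not from the thread: the radiator temperature (300 K) and emissivity (0.9) are assumed values, and it ignores absorbed sunlight and the ~3 K background of deep space, so it gives a somewhat smaller area than the 5,000 m²/MW figure quoted above (which presumably bakes in less optimistic assumptions):

```python
# Back-of-envelope radiator sizing via the Stefan-Boltzmann law.
# Assumptions (not from the thread): one-sided radiator at 300 K,
# emissivity 0.9, no absorbed sunlight, sink temperature ~0 K.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiator_area_m2(heat_w, temp_k=300.0, emissivity=0.9):
    """Radiator area needed to reject heat_w watts purely by radiation."""
    return heat_w / (emissivity * SIGMA * temp_k**4)

area_1mw = radiator_area_m2(1e6)                 # a few thousand m^2 per MW
area_250gw_km2 = radiator_area_m2(250e9) / 1e6   # hundreds of km^2 for 250 GW
```

Even under these generous assumptions, 250 GW of heat rejection works out to hundreds of square kilometers of radiator, which is the thrust of the comparison above.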

Cooling and maintenance (part swaps, etc.) are two of the many obvious reasons why this is bullshit.

Doesn't stop grifters, though.

  • in actual datacenters you often don't even bother swapping parts and just let things die in place until you replace whole racks

    • Not my experience at a hyperscaler, at least a while back. It definitely made financial sense to swap a small part to get a ~50-100k$ server's capacity back online.