Comment by coppsilgold

7 hours ago

I was actually curious about this myself back when everyone was chiming in about how it was physically impossible.

This is first and foremost an engineering problem: you need to design a system that both tolerates high heat and can pump even more heat out to the radiators. High operating temperature seems to be the primary design objective unless launch costs become absurdly low, since radiated power scales with the fourth power of temperature, so hotter radiators can be much smaller and lighter.
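
For a sense of why temperature dominates: the Stefan-Boltzmann law says radiated power goes as T^4, so the required radiator area shrinks rapidly as you run hotter. A minimal sketch, assuming an ideal radiator with emissivity 0.9 facing deep space, no solar or Earth heat load, and ~10 kW per H200-class server (all of these figures are my assumptions):

    SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W / (m^2 * K^4)

    def radiating_area_m2(heat_w, temp_k, emissivity=0.9):
        # Surface area needed to radiate heat_w into deep space at temp_k,
        # ignoring solar/albedo loads and the ~3 K background sink.
        return heat_w / (emissivity * SIGMA * temp_k**4)

    for temp_k in (300, 350, 400):
        print(f"{temp_k} K -> {radiating_area_m2(10_000, temp_k):.1f} m^2")

That prints roughly 24, 13, and 7.7 m^2: around 300 K you land near the ~20 m^2 per server figure below, and every extra 50 K of radiator temperature buys a large cut in area (and launch mass).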

It's not "impossible" but so hard, complex, and expensive that any "gains" you get from being in space are nothing compared to the costs you pay for being in space.

I.e. it's not worth it.

Think about the cost of launching 100K servers when each H200 server needs roughly 20 m^2 of radiator, and a GB200 rack needs roughly 250 m^2!

Ok, but these numbers are for a single server or a single rack. What about a standard cluster size of, say, 50k GPUs?

You would need (with optimal idealized efficiencies) roughly 64,000 m^2 of double-sided radiator panels to cool your space data-center: 50k GPUs at 8 per server is about 6,250 H200 servers, each needing ~20 m^2 of radiating surface, halved because each panel radiates from both faces. That's about a dozen American football fields of panels, for a single data-center. And realistically there would be inefficiencies and wastage, so it could easily end up at double that.
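
Sanity-checking that figure (my assumptions: 8 GPUs per H200 server, 20 m^2 of radiating surface per ~10 kW server, panels radiating from both faces, and a football field of ~5,350 m^2 including end zones):

    GPUS = 50_000
    GPUS_PER_SERVER = 8
    SURFACE_PER_SERVER_M2 = 20   # radiating surface per ~10 kW H200 server
    FIELD_M2 = 5_350             # American football field incl. end zones

    servers = GPUS / GPUS_PER_SERVER             # 6,250 servers
    surface = servers * SURFACE_PER_SERVER_M2    # 125,000 m^2 of surface
    panels = surface / 2                         # double-sided: 62,500 m^2
    print(f"{panels:,.0f} m^2 of panels ~= {panels / FIELD_M2:.0f} fields")

That prints 62,500 m^2 and about 12 fields, before any allowance for the inefficiencies above.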

How's that going to work?