
Comment by kevdev

2 months ago

As someone with a similar background to the writer of this post (I did avionics work for NASA before moving into more “traditional” software engineering), this post does a great job of summing up my thoughts on why space-based data centers won’t work. The SEU issues were my first thought, followed by the thermal concerns, and both are addressed fantastically here.

On the SEU issue I’ll add that even in LEO you can still get SEUs - the ISS is in LEO and gets them on occasion. There’s also the South Atlantic Anomaly, where spacecraft in LEO see a higher rate of SEUs.

As someone with only a basic knowledge of space technology, my first thought when I read the idea was "how the hell are they going to cool it?".

> On the SEU issue I’ll add in that even in LEO you can still get SEUs

As a sibling post noted, SEUs are possible all the way down to sea level. The recent Airbus mass intervention was essentially a fix for a badly handled SEU in a corner case.

Single event upsets are already commonplace at sea level well below data center scale.

The section of the article that talks about them isn’t great. At least for FPGAs, the state of the art is to run 2-3 copies of the logic, and detect output discrepancies before they can create side effects.

I guess you could build a GPU that way, but it’d have 1/3 the parallelism of a normal one for the same die size and power budget. The article says it’d be a 2-3 orders of magnitude loss.

It’s still a terrible idea, of course.
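
To make the voting idea above concrete, here is a minimal software sketch of 2-of-3 majority voting over redundant results. It's an illustration only: the function names are made up, and real designs implement the replicas and the voter in hardware logic rather than software.

```python
def majority_vote(a: int, b: int, c: int) -> int:
    """Bitwise 2-of-3 vote: each output bit takes whatever value at
    least two of the three redundant copies agree on."""
    return (a & b) | (a & c) | (b & c)

def run_in_triplicate(fn, *args):
    """Run fn three times and return the voted result plus a flag saying
    whether any copy disagreed (i.e. an upset was detected). Real hardware
    runs the three copies concurrently on independent logic."""
    results = [fn(*args) for _ in range(3)]
    return majority_vote(*results), len(set(results)) > 1

# Example: one copy took a single-bit upset; the other two outvote it.
copies = (0b1011_0010, 0b1011_0010, 0b1011_0110)
assert majority_vote(*copies) == 0b1011_0010
```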

  • It strikes me that neural network inference loads are probably pretty resilient to these kinds of problems (as we see the bits per activation steadily decreasing), and where they aren't, you can add bit flips as augmentations at training time and they will essentially act as regularization.

  • If you're using GPUs, you're running AI workloads. In which case: do you care?

    One of the funniest things about modern AI systems is just how many random bitflips they can tank before their performance begins to really suffer.
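
    A rough sketch of what that resilience looks like numerically, assuming an int8-quantized toy layer (the sizes, scale factor, and flip counts are all made up for illustration): flip random bits in the weights and measure how much the output moves.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # Toy int8-quantized linear layer (sizes and scale are arbitrary).
    w_q = rng.integers(-127, 128, size=(256, 256), dtype=np.int8)
    scale = np.float32(0.02)
    x = rng.standard_normal(256).astype(np.float32)

    def flip_random_bits(q, n_flips, rng):
        """Return a copy of int8 tensor q with n_flips random single-bit upsets."""
        flat = q.copy().ravel().view(np.uint8)
        idx = rng.integers(0, flat.size, size=n_flips)
        bits = rng.integers(0, 8, size=n_flips)
        flat[idx] ^= (1 << bits).astype(np.uint8)
        return flat.view(np.int8).reshape(q.shape)

    clean = (w_q.astype(np.float32) * scale) @ x
    for n in (1, 10, 100, 1000):
        noisy = (flip_random_bits(w_q, n, rng).astype(np.float32) * scale) @ x
        err = np.linalg.norm(noisy - clean) / np.linalg.norm(clean)
        print(f"{n:5d} bit flips -> relative output change {err:.4f}")
    ```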

The only advantage I can come up with is the background temperature being much colder than the Earth's surface. If you ignored the capex cost of getting this launched and running in orbit, could the cooling cost be smaller? Maybe that's the gimmick being used to sell the idea: "Yes, it costs more upfront, but then the 40% cooling bill goes away... breakeven in X years."

  • Strictly speaking, the thermosphere is actually much warmer than the atmosphere we experience--on the order of hundreds or even a thousand degrees Celsius, if you're measuring by temperature (the average kinetic energy of molecules). However, particle density is so low that the total heat content of the thermosphere is tiny. And because there are so few particles, conduction and convection are essentially nonexistent, which means cooling has to rely entirely on radiation, which is much less efficient at shedding heat than the other modes.

    In other words, a) the background temperature (to the extent it's even meaningful) is much warmer than Earth's surface, and b) cooling is much, much harder than on Earth.

    • Technically, radiative cooling is 100% efficient. And remarkably effective: you can cool an inert object down to the temperature of the CMBR (~2.7 K) without doing anything at all. However, it is rather slow and works best if there are no nearby planets or stars.

      Fun fact though: make your radiator hotter and you can dump just as much, if not more, energy than you typically would via convective cooling. At 1400C (just below the melting point of steel) you can shed about 450kW of heat per square meter; all you need is a really fancy heat pump!
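
      That 450kW figure checks out against the Stefan-Boltzmann law. A quick sanity check, assuming an ideal black-body radiator (emissivity 1) facing deep space; the helper name and the 60C comparison point are just for illustration:

      ```python
      SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

      def radiated_flux(t_surface_c, t_sink_c=-270.0, emissivity=1.0):
          """Net power radiated per square meter by a surface at t_surface_c
          toward a sink at t_sink_c (deep space by default)."""
          t_hot = t_surface_c + 273.15
          t_sink = t_sink_c + 273.15
          return emissivity * SIGMA * (t_hot**4 - t_sink**4)

      print(f"{radiated_flux(1400) / 1e3:.0f} kW/m^2")  # ~444 kW/m^2 at 1400 C
      print(f"{radiated_flux(60) / 1e3:.2f} kW/m^2")    # ~0.70 kW/m^2 at a tame 60 C
      ```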

      8 replies →

  • Is it an advantage though? One of the main objections in the article is exactly that.

    There's no atmosphere to help with heat loss through convection and nowhere to shed heat through conduction; all you have is radiation. It is a serious engineering challenge for spacecraft to get rid of the little heat they generate and to avoid being overheated by the sun.

    • I think it is an advantage; the question is just how big, assuming we look only at ongoing operating cost.

      - Earth temperatures are variable, and radiative cooling to the sky only works at night

      - The required radiator area is much smaller for the space installation

      - The engineering is simple: CPU -> cooler -> liquid -> pipe -> radiator. We're assuming no constraint on capex, so we can omit heat pumps.

      2 replies →

  • But the cooling cost wouldn’t be smaller. There’s no good way to dump the waste heat in space. It’s actually far, far harder to radiate the waste heat away in space than it would be to get rid of it on Earth.

    • Which is why vacuum flasks for hot/cold drinks are a thing and actually work. Empty space is a pretty good insulator, as it turns out.

      It’s a little worrying that so many people don’t know that.

    • I don't know about that. Look at where the power goes in a typical data center: for a 10MW DC you might spend 2MW just to blow air around. A radiating cooler in space would almost eliminate that. The problem is that the initial investment is probably impractical.
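
      Putting a rough number on that, assuming the 10MW is IT load and the 2MW is purely air handling (both figures are from the comment above; the rest is simplified):

      ```python
      it_load = 10e6     # W of compute (assumed to be IT load, not facility total)
      fan_power = 2e6    # W spent just moving air around
      pue_floor = (it_load + fan_power) / it_load
      print(f"Air handling alone pushes PUE to at least {pue_floor:.1f}")  # ~1.2
      ```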

      6 replies →

  • This question is thoroughly covered in the linked article.

    • Pardon, but the question of "could the operational cost be smaller in space" is barely touched on in the article. The article mostly argues that designing thermal management systems for space applications is hard and that the required radiators would be big, which speaks to the upfront investment cost, not ongoing opex.

      3 replies →

  • Things on Earth also have access to that coldness for about half of each day. How many data centers use radiative cooling into the night sky to supplement their regular cooling? The fact that the answer is “zero” should tell you all you need to know about how useful this is.

    • The atmosphere is in the way even at night and re-radiates energy back, so the effective background temperature is roughly the air temperature; and it would only work at night anyway. I think there would need to be something like 50-ish acres of radiators for a 50MW datacenter radiating at 60C to a 30C sky. The area would be a lot smaller in space due to the bigger temperature delta. Either way, opex would be much, much less than for an average Earth DC (PUE of almost 1 instead of a run-of-the-mill 1.5, or as low as 1.1 for hyperscalers). But yeah, the upfront cost would be immense.
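
      A back-of-the-envelope check of those acreage numbers, assuming ideal black-body panels at 60C and taking the 50MW load and the 30C sky from the comment above (real radiators see sun loading, emissivity below 1, and can radiate from both faces):

      ```python
      SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W / (m^2 K^4)
      ACRE = 4046.86           # m^2

      def net_flux(t_panel_c, t_sink_c):
          """Net radiated power per square meter from a panel to its surroundings."""
          th, ts = t_panel_c + 273.15, t_sink_c + 273.15
          return SIGMA * (th**4 - ts**4)

      waste_heat = 50e6                                      # 50MW datacenter
      earth_acres = waste_heat / net_flux(60, 30) / ACRE     # vs. a 30C night sky
      space_acres = waste_heat / net_flux(60, -270) / ACRE   # vs. deep space
      print(f"Earth: ~{earth_acres:.0f} acres, space: ~{space_acres:.0f} acres")  # ~56 vs ~18
      ```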

      5 replies →

    • Look up the Tech Ingredients episode on radiative paint.

      The fact that people aren’t using something isn’t evidence that it’s not possible, or even that it’s not a great idea; it could be that a practical application didn’t exist before, or that someone enterprising enough hasn’t come along yet.

      1 reply →

  • Breakeven in X years probably makes sense for storage (slow depreciation), not for GPUs (which depreciate in something like 4 years).

    • I think by far the most mass in this kind of setup would go into heat management, which could probably last a long time and be amortized separately from the electronics.

      1 reply →