
Comment by FloorEgg

12 hours ago

I'm not the best person to make that case as I can only speculate (land cost, permitting, latency, etc). /Shrug

In all the conversations I've seen play out on Hacker News about compute in space, what comes up every time is "it's unviable because cooling is so inefficient".

Which got me thinking, what if cooling needs dropped by orders of magnitude? Then I learned about photonic chips and spintronics.

If you're considering only viability, then yes, the obvious concern is cooling: increasingly large radiative cooling systems dominate launch costs because of all the liquid you need to boost into orbit. A single 100 MW installation would be roughly 500 times the largest solar power / radiative cooling system we've ever launched, which is the ISS. Get that down two orders of magnitude and you're within the realm of something we _know_ is possible to do instead of something we can _speculate_ is possible.
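For a sense of scale on the radiator problem, here's a minimal back-of-the-envelope sketch using the Stefan-Boltzmann law. The panel temperature (300 K), emissivity (0.9), and double-sided radiation are illustrative assumptions of mine, not figures from anything above.

```python
# Rough radiator sizing via the Stefan-Boltzmann law: P = eps * sigma * A * T^4,
# with a double-sided panel radiating from both faces. Assumed values:
# emissivity 0.9, radiator temperature 300 K, negligible sunlight absorbed
# by the radiator itself.

SIGMA = 5.670e-8    # Stefan-Boltzmann constant, W / (m^2 K^4)
EMISSIVITY = 0.9    # assumed panel emissivity
T_RADIATOR = 300.0  # assumed radiator temperature, K

def radiator_area_m2(heat_watts: float) -> float:
    """Panel area needed to reject `heat_watts`, counting both faces."""
    flux_per_m2 = 2 * EMISSIVITY * SIGMA * T_RADIATOR**4  # ~830 W per m^2 of panel
    return heat_watts / flux_per_m2

for power_mw in (1, 100):
    area = radiator_area_m2(power_mw * 1e6)
    print(f"{power_mw:>3} MW of waste heat -> ~{area:,.0f} m^2 of radiator panel")
```

At those assumptions, 100 MW of waste heat needs on the order of 10^5 m^2 of panel, so cutting the heat load by two orders of magnitude shrinks the radiator (and the coolant you have to boost) by the same factor.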

After that frankly society-destabilizing miracle of inventing competitive photonic processing, your goal of operating data centers in space becomes a tractable economic problem:

Pros:

- You get a continuous 1.37 kW/m^2 (the solar constant) instead of an intermittent ~1.0 kW/m^2 peak on the ground (rough comparison after this list)

- Any reasonable spatial volume is essentially zero-cost

Cons:

- Small latency disadvantage

- You have to launch all of your hardware into polar orbit

- On-site servicing becomes another economic problem
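To put rough numbers on the first pro, here's a minimal sketch of daily energy per square metre of collector. The ~22% capacity factor for the ground array is an illustrative assumption (night, weather, sun angle), not a figure from anything above.

```python
# Crude comparison of daily energy per square metre of collector:
# continuous sunlight in a full-sun orbit (solar constant ~1.37 kW/m^2)
# versus an intermittent ~1.0 kW/m^2 peak on the ground.

HOURS_PER_DAY = 24

orbit_kw_per_m2 = 1.37          # solar constant, continuous in a full-sun orbit
ground_peak_kw_per_m2 = 1.0     # rough peak terrestrial insolation
ground_capacity_factor = 0.22   # assumed: night, weather, sun angle

orbit_kwh = orbit_kw_per_m2 * HOURS_PER_DAY
ground_kwh = ground_peak_kw_per_m2 * HOURS_PER_DAY * ground_capacity_factor

print(f"orbit : {orbit_kwh:.1f} kWh/m^2/day")
print(f"ground: {ground_kwh:.1f} kWh/m^2/day (~{orbit_kwh / ground_kwh:.0f}x less)")
```

Under those assumptions a square metre in orbit collects roughly 6x the daily energy of the same square metre on the ground.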

So it's totally reasonable for the conversation to revolve around cooling: SpaceX can probably direct something like $1T of methane-into-delta-V at the problem to make the economics work, but the cooling issue is the difference between getting maybe one DC up for that kind of money and getting 100.

  • Do you mind expanding on "society-destabilizing"?

    • Well, the primary limit on computation today is heat dissipation (the "power wall"). You either have to limit power so your phone or laptop doesn't destroy itself, or pay more to remove the heat produced by the chips in your data center, and that heat removal has its own efficiency curve.

      If we suddenly lose 2 orders of magnitude of heat produced by our chips, we can fit 2 orders of magnitude more compute in the same volume. That is going to be destabilizing in some way: at the very least you get today's compute in 1% of today's data center square footage, or alternatively 100-900x the compute in today's data center footprint. That's like going from dial-up to fiber.