
Comment by gpt5

1 day ago

I'm confused about the level of conversation here. Can we actually run the math on heat dissipation and feasibility?

A Starlink satellite uses about 5 kW of solar power. It needs to dissipate around that amount (+ the sun power on it) just to operate. There are around 10,000 Starlink satellites already in orbit, which means that the Starlink constellation is already effectively equivalent to a 50 MW facility (in a rough, back-of-the-envelope feasibility sense).

Isn't 50MW already by itself equivalent to the energy consumption of a typical hyperscaler cloud?

Why is Starlink possible and other computations are not? Starlink is also already financially viable. Wouldn't it also become significantly cheaper as we improve our orbital launch vehicles?

Output from radiating heat scales with the area it can dissipate from. Lots of small satellites have a much higher surface-area-to-power ratio than fewer larger satellites. Cooling 10,000 separate objects is orders of magnitude easier than cooling 10 objects at 1,000x the power each, even if the total power output is the same.
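
That ratio argument is easy to sanity-check with the Stefan-Boltzmann law. A minimal Python sketch (the 300 K radiator temperature, 0.9 emissivity, and double-sided panels are assumed values, not from the comment):

```python
# Required radiator area per object via the Stefan-Boltzmann law,
# P = sides * emissivity * sigma * A * T^4, solved for A.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiator_area_m2(power_w, temp_k=300.0, emissivity=0.9, sides=2):
    """Area needed to reject power_w at radiator temperature temp_k."""
    return power_w / (sides * emissivity * SIGMA * temp_k**4)

# Same 50 MW total, split two ways:
small = radiator_area_m2(5_000)    # 10,000 sats at 5 kW each
big = radiator_area_m2(5_000_000)  # 10 stations at 5 MW each
print(f"~{small:.0f} m^2 per small sat vs ~{big:,.0f} m^2 per big station")
```

The totals are identical; the point is that ~6 m² per small satellite is roughly the body area it already has, while ~6,000 m² per station has to be deployed and plumbed.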

Distributing useful work over so many small objects is a very hard problem, and not even shown to be possible at useful scales for many of the things AI datacenters are doing today. And that's with direct cables - using wireless communication means even less bandwidth between nodes, more noise as the number of nodes grows, and significantly higher power use and complexity for the communication in the first place.

Building data centres in the middle of the Sahara Desert is still much better in pretty much every metric than in space, be it price, performance, maintenance, efficiency, ease of cooling, pollution/"trash" disposal, etc. Even things like communication network connectivity would be easier: for the amount of money this constellation mesh would cost, you could lay new fibre optic cables to build an entire new global network to anywhere on Earth and have new trunk connections to every major hub.

There are advantages to being in space - normally around increased visibility for wireless signals, allowing great distances to be covered at (relatively) low bandwidth. But that comes at an extreme cost. Paying that cost for a use case that simply doesn't get much advantage from those benefits is nonsense.

  • Why would they bother to build a space data center as such monolithic massive structures at all? Use direct cables between semi-independent units the size of a Starlink v2 satellite. That satellite size is large enough to encompass a typical 42U server rack even without much physical reconfiguration. It doesn't need to be "warehouse-sized building, but in space", and neither does it have to be countless objects kilometers apart from each other beaming data wirelessly. A few dozen wired as a cluster is much more than sufficient to avoid incurring any more bandwidth penalties on server-to-server communication with correlated workloads than we already have on Earth for most needs.

    Of course this doesn't solve the myriad problems, but it does put dissipation squarely in the category of "we've solved similar problems". I agree there's still no good reason to actually do this unless there's a use for all that compute out there in orbit, but that too is happening with immense growth and demand expected for increased pharmaceutical research and various manufacturing capabilities that require low/no gravity.

    • Not just a 42U rack, but a 42U rack that needs one hundred thousand watts of power, and it also needs to be able to remove one hundred thousand watts of heat out of the rack, and then it needs to dump that one hundred thousand watts of heat into space.

  • > using wireless communication means even less bandwidth between nodes, more noise as the number of nodes grows, and significantly higher power use

    Space changes this. Laser-based optical links offer bandwidths of 100-1,000 Gbps with much lower power consumption than radio links. They are more feasible in orbit due to the lack of interference and fogging.

    > Building data centres in the middle of the Sahara Desert is still much better in pretty much every metric

    This is not true for the power generation aspect (which is the main motivation for orbital TPUs). Desert solar is a hard problem due to the need for a water supply to keep the panels clear of dust. Also the cooling problem is greatly exacerbated.

    • You don’t need to do anything to keep panels with a significant angle clear of dust in deserts. The Sahara is near the equator but you can stow panels at night and let the wind do its thing.

      The lack of launch costs more than offsets the need for extra panels and batteries.

      2 replies →

    • Space doesn't really change it though because the effective bandwidth between nodes is reduced by the overall size of the network and how much data they need to relay between each other.

      1 reply →

  • Whatever sat datacenter they build, it would run better/easier/faster/cheaper sitting on the ground in Antarctica or floating on the ocean than it would in space, and without the launch costs. Space is useful for those activities that can only be done from space. For general computing? Not until all the empty parts of the globe are full.

    This is a pump-and-dump bid for investor money. They will line up to give it to him.

    • Yup - my example of the Sahara wasn't really a specific suggestion, so much as an example of "The Most Inconvenient Inhospitable part of the earth's surface is still much better than space for these use cases". This isn't star trek, the world doesn't match sci-fi.

      It's like his "Mars Colony" junk - and people lap it up, keeping him in the news (in a not explicitly negative light - unlike some recent stories....)

    • > Whatever sat datacenter they build, it will run better/easier/faster/cheaper sitting on the ground in Antarctica than it will in space

      That is clearly not true. How do you power the data center in Antarctica? May I remind you that it is dark there for half the year.

      7 replies →

Simply put, no: 50 MW is not the typical hyperscaler cloud size. It's not even the typical size of a single datacenter.

A single AI rack consumes 60 kW, and there is apparently a single DC that alone consumes 650 MW.

When Microsoft puts in a DC, the machines are deployed in units of a "stamp", i.e. a couple of racks together. These aren't scaled by dollars or square feet, but by the megawatt.

And on top of that: that's a bunch of satellites not even trying to crunch data at top speed. Nowhere near the right order of magnitude.

  • But the focus on building giant monolithic datacenters comes from the practicalities of ground based construction. There are huge overheads involved with obtaining permits, grid connections, leveling land, pouring concrete foundations, building roads and increasingly often now, building a power plant on site. So it makes sense to amortize these overheads by building massive facilities, which is why they get so big.

    That doesn't mean you need a gigawatt of power before achieving anything useful. For training, maybe, but not for inference which scales horizontally.

    With satellites you need an orbital slot and launch time, and I honestly don't know how hard those are to get, but space is pretty big and the only reason for denying them would be safety. Once those are obtained, you can make satellite inferencing cubes in a factory and just keep launching them on a cadence.

    I also strongly suspect, given some background reading, that radiator tech is very far from optimized. Most stuff we put into space so far just doesn't have big cooling needs, so there wasn't a market for advanced space radiator tech. If now there is, there's probably a lot of low hanging fruit (droplet radiators maybe).

    • But why would you?

      Space has some huge downsides:

      * Everything is being irradiated all the time. Things need to be radiation hardened or shielded.

      * Putting even 1kg into space takes vast amounts of energy. A Falcon 9 burns 260 MJ of fuel per kg into LEO. I imagine the embodied energy in the disposable rocket and liquid oxygen make the total number 2-3x that at least.

      * Cooling is a nightmare. The side of the satellite in the sun is very hot, while the side facing space is incredibly cold. No fans or heat sinks - all the heat has to be conducted from the electronics and radiated into space.

      * Orbit keeping requires continuous effort. You need some sort of hypergolic rocket, which has the nasty effect of coating all your stuff in horrible corrosive chemicals

      * You can't fix anything. Even a tiny failure means writing off the entire system.

      * Everything has to be able to operate in a vacuum. No electrolytic capacitors for you!

      So I guess the question is - why bother? The only benefit I can think of is very short "days" and "nights" - so you don't need as much solar or as big a battery to power the thing. But that benefit is surely outweighed by the fact you have to blast it all into space? Why not just overbuild the solar and batteries on earth?

      37 replies →
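
For scale, the 260 MJ/kg figure can be turned into an energy-payback estimate. A rough sketch, assuming the ~150 W/kg panel specific power quoted elsewhere in this thread and idealized continuous sunlight:

```python
# Energy payback of launching solar panels: Falcon 9 burns ~260 MJ of
# propellant per kg to LEO (figure above); panels deliver ~150 W/kg
# (figure quoted later in the thread), idealized as continuous sunlight.
LAUNCH_MJ_PER_KG = 260
PANEL_W_PER_KG = 150

launch_kwh = LAUNCH_MJ_PER_KG / 3.6            # ~72 kWh per launched kg
daily_kwh_per_kg = PANEL_W_PER_KG * 24 / 1000  # ~3.6 kWh/day per panel kg
payback_days = launch_kwh / daily_kwh_per_kg
print(f"~{launch_kwh:.0f} kWh to launch 1 kg; a panel kg repays it in ~{payback_days:.0f} days")
```

So the launch energy for the panel itself is repaid in weeks; the catch is that every other kilogram of rack, radiator, and structure never repays anything.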

    • > I also strongly suspect, given some background reading, that radiator tech is very far from optimized. Most stuff we put into space so far just doesn't have big cooling needs, so there wasn't a market for advanced space radiator tech. If now there is, there's probably a lot of low hanging fruit (droplet radiators maybe).

      You'd be wrong. There's a huge incentive to optimize radiator tech because of things like the International Space Station and Mir. Radiators are a huge part of those deployments, because life has pretty narrow thermal bands. The added cost to deploy that tech also incentivizes hyper-optimization.

      Making bigger structures doesn't make that problem easier.

      Fun fact, heat pipes were invented by NASA in the 60s to help address this very problem.

      12 replies →

    • There is a lot of hand-waving away of the orders of magnitude more manufacturing, more launches, and more satellites that have to navigate around each other.

      We still don’t have any plan I’ve heard of for avoiding a cascade of space debris when satellites collide and turn into lots of fast moving shrapnel. Yes, space is big, but low Earth orbit is a very tiny subset of all space.

      The amount of propellant satellites carry before they become unable to maneuver is relatively small, and the more satellite traffic there is, the faster each satellite will exhaust its propellant.

      3 replies →

    • All of those “huge overheads” you cite are nothing compared to the huge overhead of building and fueling rockets to launch the vibration- and radiation-hardened versions of the solar panels and GPUs and cooling equipment that you could use much cheaper versions of on Earth. How many permitted, regulated launches would it take to get around the one-time permitting and predictable regulation of a ground-based datacenter?

      Are Earth-based datacenters actually bound by some bottleneck that space-based datacenters would not be? Grid connections or on-site power plants take time to build, yes. How long does it take to build the rocket fleet required to launch a space “datacenter” in a reasonable time window?

      This is not a problem that needs to be solved. Certainly not worth investing billions in, and definitely not when run by the biggest scam artist of the 21st century.

  • New GPU-dense racks are going up to 300 kW, but I believe the norm at the moment for hyperscalers is somewhere around ~150 kW - can someone confirm?

    The energy demand of these DCs is monstrous, I seriously can't imagine something similar being deployed in orbit...

    • Most of the OEMs are past 300kW racks, planning on 600kW racks within a year or two, with realistic plans to hit a megawatt

    • Could this be about bypassing government regulation and taxation? Silk Road only needed a tiny server, not 150 kW.

      The Outer Space Treaty (1967) has a loophole. If you launch from international waters (planned by SpaceX) and the equipment is not owned by a US-company or other legal entity there is significant legal ambiguity. This is Dogecoin with AI. Exploiting this accountability gap and creating a Grok AI plus free-speech platform in space sounds like a typical Elon endeavour.

      7 replies →

  • How much of that power is radiated as the radio waves it sends?

    • Good point - the comms satellites are not even "keeping" some of the energy, while a DC would. I _am_ now curious about the connection between bandwidth and wattage, but I'm willing to bet that less than 1% of the total energy dissipation on one of these DC satellites would be in the form of satellite-to-earth broadcast (keeping in mind that s2s broadcast would presumably be something of a wash).

      4 replies →

It's like this. Everything about operating a datacenter in space is more difficult than it is to operate one on earth.

1. The capital costs are higher, you have to expend tons of energy to put it into orbit

2. The maintenance costs are higher because the lifetime of satellites is pretty low

3. Refurbishment is next to impossible

4. Networking is harder, either you are ok with a relatively small datacenter or you have to deal with radio or laser links between satellites

For Starlink this isn't as important. Starlink provides something that can't really be provided any other way. But even so, the US alone uses 176 terawatt-hours of electricity per year for data centers, so Starlink is about 1/400th of that, assuming your estimate is accurate (and I'm not sure it is - does it account for the night cycle?)
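
The 1/400 figure checks out, assuming the constellation draws its full 50 MW around the clock:

```python
# Checking the 1/400 figure: Starlink's ~50 MW constellation running
# year-round, vs. ~176 TWh/yr of US data-center electricity.
US_DC_TWH_PER_YEAR = 176
STARLINK_MW = 50
HOURS_PER_YEAR = 8760

starlink_twh = STARLINK_MW * 1e6 * HOURS_PER_YEAR / 1e12  # Wh -> TWh
ratio = US_DC_TWH_PER_YEAR / starlink_twh
print(f"Starlink ~{starlink_twh:.2f} TWh/yr, about 1/{ratio:.0f}th of US data centers")
```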

  • What about the sourcing and cost of energy? Solar panels are more efficient in space: no bad weather, and 100% time in sunlight (depending on orbit). Not that it makes up for the items you listed, but it may not be true that everything is more difficult in space.

    • Let's say with no atmosphere and no night cycle, a space solar panel is 5x better. Deploying 5x as many solar panels on the ground is still going to come in way under the budget of the space equivalent.

      7 replies →

    • Just take the cost of getting a kilogram into space and compare it to how much power a solar panel will generate.

      Current satellites get around 150 W/kg from solar panels. The cost of launching 1 kg to space is ~$2,000, so we're at ~$13.33/W. We need to double that because the same amount of heat needs to be dissipated, so let's round it to $27/W.

      One NVIDIA GB200 rack is ~120 kW. Just to power it, you need to send $3,240,000 worth of payload into space. Then you need to send an additional $3,106,000 worth of servers (a rack of them is 1,553 kg), plus some extra for piping.

      3 replies →
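
The arithmetic above can be reproduced directly; a sketch using only the figures quoted in the comment (real launch pricing and panel specific power vary):

```python
# Reproducing the launch-cost arithmetic above: panels at 150 W/kg,
# $2,000/kg to LEO, doubled to cover radiator mass for heat rejection.
PANEL_W_PER_KG = 150
LAUNCH_USD_PER_KG = 2_000

usd_per_watt = 2 * LAUNCH_USD_PER_KG / PANEL_W_PER_KG  # ~$26.67/W, "round to $27"
usd_per_watt = round(usd_per_watt)

RACK_W = 120_000  # one GB200 NVL72 rack
RACK_KG = 1_553   # quoted rack mass

power_payload_usd = RACK_W * usd_per_watt       # $3,240,000
rack_payload_usd = RACK_KG * LAUNCH_USD_PER_KG  # $3,106,000
print(f"${power_payload_usd:,} for power+cooling mass, ${rack_payload_usd:,} for the rack")
```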

    • Solar panels in space are more efficient, but on the ground we have dead dinosaurs we can burn. The efficiency gain is also more than offset by the fact that you can't replace a worn-out panel. A few years into the life of your satellite, its power production drops.

      5 replies →

  • The cost might be the draw (if there is one). Big tech isn't afraid of throwing money at problems, but the AI folk and financiers are afraid of waiting and uncertainty. A satellite is crazy expensive but throwing more money at it gets you more satellites.

    At the end of the day I don't really care either way. It ain't my money, and their money isn't going to get back into the economy by sitting in a brokerage portfolio. To get them to spend money this is as good a way as any other, I guess. At least it helps fund a little spaceflight and satellite R&D on the way.

  • >1. The capital costs are higher, you have to expend tons of energy to put it into orbit

    Putting 1 kW of solar on land costs ~$2K; putting it into orbit on Starship (using current ground-based heavy solar panels: 40 kg and 4 m² per 1 kW in space) costs anywhere between $400 and $4K. Add to that that costs on Earth will only grow, while costs in space will fall.

    Ultimately Starship's costs will come down to the bare cost of fuel + oxidizer - 20 kg of propellant per 1 kg to LEO, i.e. less than $10 - if they manage streamlined operations and high reuse. Yet even at $100/kg, it is still better in space than on the ground.

    And as for the cooling that people complain so much about without running the numbers: https://news.ycombinator.com/item?id=46878961

    >2. The maintenance costs are higher because the lifetime of satellites is pretty low

    It will live out the 3-5 years of the GPU lifecycle.

    • Current cost to LEO is ~$1,500 per kg.

      That would make your solar panel (40 kg) around $60K to put into space.

      Even being generous and assuming you could get it to $100 per kg, that's still $4,000.

      There's a lot of land in the middle of nowhere that is going to be cheaper than sending shit to space.

      7 replies →

    • > putting 1KW of solar on land - $2K, putting it into orbit on Starship (current ground-based heavy solar panels, 40kg for 4m2 of 1KW in space) - anywhere between $400 and $4K.

      What starship? The fantasy rocket Musk has been promising for 10 years or the real one that has thus far delivered only one banana worth of payload into orbit?

      4 replies →

    • 1 kW of solar panels is 150€ retail right now. You are probably at 80€ or less if you buy a few MW.

      (I'm ignoring installation costs etc. because actually creating the satellites is ignored here, too)

      1 reply →

    • > will come down to the bare cost of fuel + oxidizer

      And maintenance, and replacing parts, and managing flights, and... You're trying to yadda-yadda away so much opex here!

      7 replies →

    • The bean counters at NVidia recently upped the expected lifecycle from 5 years to 6. On paper, you are now expected to get 6 years out of a datacenter GPU, not 3-5.

    • My car costs far more per mile than the bare cost of the fuel. Why would starship not have similar costs?

  • > The maintenance costs are higher because the lifetime of satellites is pretty low

    Presumably they're planning on doing in-orbit propellant transfer to reboost the satellites so that they don't have to let their GPUs crash into the ocean...

    • Another significant factor is that radiation makes things worse.

      Ionizing radiation disrupts the crystalline structure of the semiconductor and makes performance worse over time.

      High energy protons randomly flip bits, can cause latchup, single event gate rupture, destroy hardware immediately, etc.

      1 reply →

    • > Presumably they're planning on doing in-orbit propellant transfer to reboost the satellites so that they don't have to let their GPUs crash into the ocean

      Hell, you're going to lose some fraction of chips to entropy every year. What if you could process those into reaction mass?

      6 replies →

    • And just like that, you've added another never-done-before (and definitely never-at-scale) problem to the mix.

      These are all things which add weight, complexity and cost.

      Propellant transfer to an orbital Starship hasn't even been demonstrated yet, and it's completely vital to its intended missions.

    • Or maybe they want to just use them hard and deorbit them after three years?

  • > Everything about operating a datacenter in space is more difficult than it is to operate one on earth

    Minus one big one: permitting. Every datacentre I know of going up right now is spending 90% of its bullshit budget on battling state and local governments.

    • But since building a datacenter almost anywhere on the planet is more convenient than outer space, surely you can find some suitable location/government. Or put it on a boat, which is still 100 times more sensible than outer space.

      6 replies →

    • I mean, you don't have zoning in space, but you have things like international agreements to avoid, you know, catastrophic human development situations like Kessler syndrome.

      All satellites launched into orbit these days are required to have de-orbiting capabilities to "clean up" after EOL.

      I dunno, two years ago I would have said municipal zoning probably ain't as hard to ignore as international treaties, but who the hell knows these days.

      1 reply →

    • What counts towards a bullshit budget? Permitting is a drop in the bucket compared to construction costs.

    • That may have been the case before, but it is not anymore. I live in Northern Virginia, the data center capital, and it is easier permit-wise to build one than a tree house. Also see the provisions in the OBBB.

    • This is a huge one. What Musk is looking for is freedom from land acquisition. Everything else is an engineering and physics problem that he will somehow solve. The land acquisition problem is out of his hands and he doesn't want to deal with politicians. He learned from building out the Memphis DC.

      21 replies →

Amazon's new campus in Indiana is expected to use 2.2 GW when complete. 50 MW is nothing, and that's ignoring the fact that most of that power wouldn't actually be used for compute.

> A Starlink satellite uses about 5K Watts of solar power. It needs to dissipate around that amount (+ the sun power on it) just to operate.

The "+ the sun power on it" part is the majority of the energy. Solar panel efficiency is only about 25-30% at beginning-of-life, and the panels absorb essentially all of the rest as heat. So your estimate is off by at least a factor of three.

Also, I'm not sure where you got 5 kW from. The area of the satellite is ~100 m², which means it is intercepting over 100 kW of bolometric solar power.
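
That estimate follows directly from the solar constant, assuming the full ~100 m² faces the sun:

```python
# Intercepted solar power: ~100 m^2 of satellite area at Earth's
# solar constant of ~1361 W/m^2.
SOLAR_CONSTANT_W_M2 = 1361
AREA_M2 = 100

intercepted_kw = SOLAR_CONSTANT_W_M2 * AREA_M2 / 1000
print(f"~{intercepted_kw:.0f} kW intercepted")  # well over 100 kW
```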

Starlink provides a service that couldn't exist without the satellite infrastructure.

Datacenters already exist. Putting datacenters in space does not offer any new capabilities.

  • This is the main point, I think. I am very much convinced that SpaceX is capable of putting a datacenter into space. I am not convinced they can do it cheaper than building a datacenter on Earth.

    • I would be a lot more convinced they had solved the unit economics if this were being used to secure billion-dollar deposits from other companies, rather than as the narrative for rolling a couple of Elon's loss-making companies into SpaceX and IPOing...

> Isn't 50MW already by itself equivalent to the energy consumption of a typical hyperscaler cloud?

xAI's first data center buildout was in the 300 MW range and their second is in the gigawatt range. There are planned buildouts from other companies even bigger than that.

So data center buildouts in the AI era need 1-2 orders of magnitude more power and cooling than your 50MW estimate.

Even a single NVL72 rack, just one rack, needs 120kW.

I ran the math the last time this topic came up.

The short answer is that ~100 m² of steel plate at 1400 °C (just below its melting point) will shed ~50 MW as blackbody radiation.

https://news.ycombinator.com/item?id=46087616#46093316

  • The temperature of space datacenters will be limited to about 100 degrees Celsius, because otherwise the electronic equipment will be destroyed.

    So your huge metal plate would radiate (1673/373)^4 ≈ 400 times less heat, i.e. only ~125 kW.

    In reality, it would radiate even less than that, even if made of copper or silver coated in Vantablack, because limited thermal conductivity will reduce the temperature of the parts distant from the heat source.
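
Both figures agree with a direct Stefan-Boltzmann evaluation (ideal blackbody, single-sided 100 m² plate, same assumptions as the comments above):

```python
# Checking both radiator figures with P = sigma * A * T^4
# (ideal blackbody, single-sided 100 m^2 plate).
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)
AREA = 100.0      # m^2

def radiated_w(temp_k):
    return SIGMA * AREA * temp_k ** 4

hot = radiated_w(1673)  # steel just below melting: ~44 MW ("~50 MW")
cool = radiated_w(373)  # 100 C, electronics-safe: ~110 kW
print(f"{hot/1e6:.0f} MW at 1673 K vs {cool/1e3:.0f} kW at 373 K ({hot/cool:.0f}x less)")
```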

  • Which GPU runs at 1400C?

    • One made of steel, presumably.

      I would assume such a setup would involve multiple stages of heat pumps from the GPU to the 1400 °C radiator. Obviously that's going to impact efficiency.

      Also, I'm not seriously suggesting that 1400 °C radiators are a reasonable approach to cooling a space data centre. It's just intended to demonstrate how infeasible the idea is.

      1 reply →

Starlink satellites also radiate a non-trivial amount of the energy they consume from their phased arrays

Not related to heat, but a comms satellite is built from extremely durable HW/SW that has been battle-tested to run flawlessly for years, with massive MTBF numbers.

A data center is nowhere near that and requires constant physical interventions. How do they suggest to address this?

50 MW is on the small side for an AI cluster - probably fewer than 50k GPUs.

If the current satellite model dissipates 5 kW, you can't just add a GPU (+1 kW). Maybe removing most of the downlink hardware lets you put in 2 GPUs? So if you had 10k of these, you'd have a pretty high-latency cluster of 20k GPUs.

I'm not saying I'd turn down free access to it, but it's also very cracked. you know, sort of Howard Hughesy.

Are Starlink satellites in sun-synchronous orbits? Doesn't constant solar heating change the energy balance quite a bit?

A Starlink satellite is mainly just receiving and sending data, the bare minimum of a data center-satellite's abilities; everything else comes on top and would be the real power drain.

> A Starlink satellite uses about 5K Watts of solar power. It needs to dissipate around that amount (+ the sun power on it) just to operate.

This isn't quite true. It's very possible that the majority of that power goes into the antennas/lasers, which technically means the energy is being dissipated, but it never became heat in the first place. Also, 5 kW of solar power likely means only ~3 kW of actual electrical consumption (you over-provision a bit, both for when you're behind the Earth and for safety margin).

> A Starlink satellite uses about 5K Watts of solar power

Is that 5 kW of electrical power input at the terminals, or 5 kW of irradiation onto the panels?

Because that sounds like kind of a lot, for something the size of a fridge.

Because 10K satellites have a FAR greater combined surface area than a single space-borne DC would. Stefan-Boltzmann law: the ability to radiate heat increases with the 4th power of surface area.

  • It's linear to surface area, but 4th power to temperature.

    • Also worth noting that if computing power p scales with volume, then surface area (and thus radiated power) scales like p^(2/3). In other words, for a fixed geometry, the required heat dissipation per unit area grows like p^(1/3). This is why smaller things can simply dissipate heat from their surface, whereas larger things require active cooling.

      I'm not a space engineer, but I'd imagine that smaller satellites can make do with a lot of passive cooling on the exterior of the housing, whereas a shopping-mall-sized computer in space will require a lot of extra plumbing.
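
The p^(2/3) and p^(1/3) exponents are just the square-cube law; a minimal sketch assuming fixed geometry:

```python
# Square-cube scaling sketch: if compute power p grows with volume (L^3)
# while radiating area grows with L^2, area scales like p^(2/3), so the
# heat flux each square meter must reject grows like p^(1/3).
def area_scale(p_ratio):
    return p_ratio ** (2 / 3)

def flux_scale(p_ratio):
    return p_ratio ** (1 / 3)

# 1000x the total power buys only ~100x the surface, so each m^2
# must shed ~10x as much heat -- hence the need for deployed radiators.
print(area_scale(1000), flux_scale(1000))
```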

    • Thanks for the correction. Last time I looked at it was in 2nd year Thermodynamics in 1985.

Why would anyone think the unit cost would be competitive with cheap power / land on earth? If that doesn't make sense how could anything else?

A typical desktop/tower PC will consume 400 watts, so 12 PCs equal one Starlink satellite.

A single server in a data center will consume 5-10 kW.

> Why is starlink possible and other computations are not?

Aside from the point others have made that 50 MW is small in the context of hyperscalers, if you want to do things like SOTA LLM training, you can't feasibly do it with large numbers of small devices.

Density is key because of latency - you need the nodes to be in close physical proximity to communicate with each other at very high speeds.

For training an LLM, you're ideally going to want individual satellites with power delivery on the order of at least about 20 MW, and that's just for training previous-generation SOTA models. That's nearly 5,000 times more power than a single current Starlink satellite, and nearly 300 times that of the ISS.

You'd need radiator areas in the range of tens of thousands of square meters to handle that. Is it theoretically possible? Sure. But it's a long-term project, the kind of thing Musk will say takes "5 years" but that will actually take many decades. And making it economically viable is another story - the OP article points out other issues there, such as handling hardware upgrades. Starlink's current model relies on many cheap satellites - the equation changes when each one is going to be very expensive, large, and difficult to deploy.
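
A rough sizing sketch supports the "tens of thousands of square meters" figure (the 300 K radiator temperature, 0.9 emissivity, and double-sided radiation are assumed values):

```python
# Radiator sizing for a hypothetical 20 MW training satellite, via
# P = sides * emissivity * sigma * A * T^4, solved for A.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiator_area_m2(power_w, temp_k=300.0, emissivity=0.9, sides=2):
    """Area needed to reject power_w at radiator temperature temp_k."""
    return power_w / (sides * emissivity * SIGMA * temp_k**4)

area = radiator_area_m2(20e6)
print(f"~{area:,.0f} m^2 of deployed radiator")  # roughly 24,000 m^2
```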

Sure, we can run the math on heat dissipation. The Stefan-Boltzmann law is free and open source, and its application is high-school-level physics. You talk about 50 MW: you are going to need a lot of surface area to radiate that away at anywhere close to reasonable temperatures.

  • > The law of Stefan-Boltzmann is free and open source...

    What do you mean by "open source"? Can we contribute changes to it?