Comment by shetaye

1 day ago

I assume the idea is to have the entire constellation be the data center in question. Laser backhaul transceiver bandwidth is on the same order of magnitude as rack-to-rack bandwidths [1][2]. I could see each sat being a rack and the entire mesh being a cluster.

[1] https://hackaday.com/2024/02/05/starlinks-inter-satellite-la... (and this is two years ago!) [2] https://resources.nvidia.com/en-us-accelerated-networking-re...
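
As a rough sanity check (the figures below are my own ballpark assumptions, not pulled from [1] or [2]): reported Starlink laser links carry on the order of 100-200 Gbps, while a single NDR InfiniBand port on a GPU rack is 400 Gbps, so the two are within about one order of magnitude of each other:

    # Rough bandwidth comparison; both figures are assumptions for illustration.
    laser_link_gbps = 100        # reported Starlink inter-satellite laser link, roughly 100-200 Gbps
    rack_port_gbps = 400         # single NDR InfiniBand port on a modern GPU rack

    ratio = rack_port_gbps / laser_link_gbps
    print(f"Rack port vs. laser link: {ratio:.0f}x")   # ~4x -> same order of magnitude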

This is how Starlink works; however, you would need orders of magnitude more compute than those router pucks, and orders of magnitude more power, unless you bolted a nuclear reactor onto it. It's just such a fever dream at this stage that he's really doing it to muddy accounting and consolidate debts from Grok failures.
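
To put rough numbers on the power gap (both figures are assumptions for illustration, not published specs): a current comms-satellite bus generates maybe a few kW from solar, while a modern GPU training rack draws on the order of 100 kW, before you even get to radiators for cooling in vacuum:

    # Ballpark power gap per "rack"; both figures are assumptions, not specs.
    sat_solar_kw = 5        # rough solar output of a current comms-satellite bus
    gpu_rack_kw = 100       # modern GPU training rack, on the order of ~100 kW

    print(f"Power shortfall: ~{gpu_rack_kw / sat_solar_kw:.0f}x per satellite-as-rack")  # ~20x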

For AI training, latency is one of the limiting factors and needs to be kept in the nanosecond range. And a light-nanosecond is famously almost exactly one foot.
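
The back-of-envelope version, assuming an illustrative ~1000 km spacing between neighboring satellites:

    c = 299_792_458                      # speed of light in vacuum, m/s
    ft_per_ns = c * 1e-9 / 0.3048
    print(f"1 light-nanosecond = {ft_per_ns:.3f} ft")           # ~0.98 ft

    sat_spacing_m = 1_000_000            # assumed ~1000 km between neighboring sats (illustrative)
    one_way_ns = sat_spacing_m / c * 1e9
    print(f"One-way hop: ~{one_way_ns / 1e6:.1f} ms (~{one_way_ns:,.0f} ns)")

That's millions of nanoseconds per hop before any switching or retransmission overhead.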

That's why Lumen/Starcloud's designs all assume it'll be a space station with all containers connected to one central networking spine.