Comment by mikkupikku
5 days ago
You shouldn't be worried about it; these satellites are in low Earth orbits that decay readily unless the satellites regularly reboost themselves with their electric thrusters. And performing collision-avoidance maneuvers is just part of how they're designed to work. Note that it's 300,000 avoidances, not collisions. These are more like ballerinas than careening billiard balls.
True, but at a scale of 10,000+ satellites, the chance of a collision due to malfunction is not zero.
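To make the "not zero at scale" point concrete: even a tiny per-satellite failure probability compounds across a large fleet. A quick sketch, where the per-satellite probability `p` is a made-up illustrative number, not a real Starlink figure:

```python
# Probability that at least one of n independent satellites suffers a
# collision-causing malfunction, given per-satellite probability p.
# p here is purely illustrative, not an actual reliability figure.
p = 1e-4      # hypothetical per-satellite probability over its lifetime
n = 10_000    # fleet size from the thread

p_any = 1 - (1 - p) ** n
print(round(p_any, 3))  # ~0.632: near-certain "not zero" once n is large
```

The point is only qualitative: `1 - (1 - p)**n` climbs toward 1 as `n` grows, so "the chance is not zero" is true for any nonzero `p`.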
Nobody says the chance of a collision is zero. That's why its being in LEO is relevant. Internet fools who just get scared by the big number, without considering the details of the situation, always get this wrong.
So, because the 10,000+ Starlinks launched so far (and the countless future satellites Bezos and others want to launch for their own constellations) are in LEO, nothing bad can happen (only good things can happen)?
That is, if you disregard the following quote from the article:
> Each re-entry deposits about 30 kg of aluminum oxide into the upper atmosphere--an uncontrolled chemistry experiment on a planetary scale.
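Back-of-envelope on the quoted figure: at 30 kg of aluminum oxide per re-entry, a full fleet turnover adds up quickly. Assuming (purely for illustration) that all ~10,000 satellites mentioned in the thread eventually re-enter:

```python
# Rough total alumina deposited if the whole fleet re-enters.
# 30 kg/re-entry is from the quoted article; the fleet size of
# 10,000 is the thread's figure, used here as an assumption.
fleet_size = 10_000
alumina_per_reentry_kg = 30

total_kg = fleet_size * alumina_per_reentry_kg
print(total_kg, "kg =", total_kg / 1000, "tonnes")  # 300000 kg = 300.0 tonnes
```

And that is per fleet generation; satellites with ~5-year lifetimes would repeat this deposition on each replacement cycle.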
2 replies →
And so what if they collide? This isn't Kessler-syndrome territory; the orbit is low enough that debris would re-enter and burn up rapidly. You'd lose the colliding satellites, and that's likely all.
Not that there has been a single Starlink collision, but y'know.
> Not that there has been a single Starlink collision
How sure are you that that would be made public?
Would it always be observed and caught outside of SpaceX?
If not, is that proof that any such collisions don't matter?
4 replies →
Wait until multiple, non-coordinated copy-cat constellations are sent up there ...
1 reply →
It's an LLM spambot, it is incapable of worrying. I'm much more worried about another instance of nobody noticing what they're replying to.
For example, gowinston.ai gives a 98% probability that the comment is human-written. LLM detectors aren't always correct, of course, but their detection performance on pure LLM text can generally be high (accuracy in the high 90s, percent-wise).
Do you have some specific techniques or strategies for LLM text detection? Have you validated them?
No, no, their profile says "software dev."
Software decentralized evolved version?
Can I ask how you're so certain? The first two sentences read human-typed to me.