Comment by topherhaddad

13 hours ago

Founder/CEO of Albedo here. We published a detailed write-up of our first VLEO satellite mission (Clarity-1) — including imagery, what worked, what broke, and learnings we're taking forward. Happy to answer questions.

https://albedo.com/post/clarity-1-what-worked-and-where-we-g...

How did you manage meaningful attitude control with only torque rods? They would need to be big (read: heavy) to be useful. Was this just stabilising in the inertial frame, or active pointing? Mag dipoles in the chassis and components tend to lock tumbling satellites into the Earth's magnetic field. Did you see this? Or did atmospheric drag dominate at this altitude?

  • I'm AyJay, Topher's co-founder and Albedo's CTO. We'll actually be publishing a paper in a few weeks detailing how we got 3-axis torque rod control, so you can get the real nitty-gritty details then.

    We got here after stacking quite a few capabilities we'd developed on top of one another and realizing that the behavior we were starting to see could be wrapped up into a viable control strategy.

    Traditional approaches to torque rod control rely on convergence over long time horizons spanning many orbits, which artificially restricts the control objectives that can be accomplished. Our momentum control method reduced convergence time by incorporating both current and predicted future magnetic field estimates into a purpose-built Lyapunov-based control law we'd been perfecting for VLEO. By the time the issue popped up, we already had most of the ingredients in hand and were able to get our algorithms to converge within an orbit or two of initialization, then stay coarsely stable for most inertial ECI attitudes, albeit with the wide pointing error bars stated in the article. For what we needed, though, it was perfect.
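
    For a flavor of the baseline before the paper: here is a minimal sketch of the classic Lyapunov-stable "cross-product" momentum-dumping law that magnetorquer controllers conventionally build on. It is emphatically not our flight law (ours also folds in the predicted future field, as described above), and every type, gain, and number below is illustrative.

    ```cpp
    // Classic cross-product magnetorquer momentum-dumping law (textbook
    // baseline, NOT Albedo's controller). Torque tau = m x B can only act
    // perpendicular to B at any instant; full 3-axis authority accrues as
    // the field direction rotates around the orbit.
    #include <array>
    #include <cstdio>

    using Vec3 = std::array<double, 3>;

    Vec3 cross(const Vec3& a, const Vec3& b) {
        return { a[1]*b[2] - a[2]*b[1],
                 a[2]*b[0] - a[0]*b[2],
                 a[0]*b[1] - a[1]*b[0] };
    }

    double dot(const Vec3& a, const Vec3& b) {
        return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
    }

    // Commanded dipole m (A*m^2) from momentum error h_err (N*m*s) and
    // measured field B (T): m = k * (h_err x B) / |B|^2. With k > 0 this
    // makes d/dt (0.5*|h_err|^2) <= 0, i.e. the law is Lyapunov-stable.
    Vec3 dipole_command(const Vec3& h_err, const Vec3& B, double k) {
        const double B2 = dot(B, B);
        if (B2 < 1e-18) return {0.0, 0.0, 0.0};  // guard bad mag readings
        Vec3 m = cross(h_err, B);
        for (auto& mi : m) mi *= k / B2;
        // A real system would also saturate m at the rods' max dipole.
        return m;
    }

    int main() {
        // Example: dump a momentum error of 0.01 N*m*s in a ~30 uT field.
        const Vec3 h_err{0.01, 0.0, 0.0}, B{0.0, 20e-6, 25e-6};
        const Vec3 m = dipole_command(h_err, B, 1e-3);
        std::printf("m = [%g, %g, %g] A*m^2\n", m[0], m[1], m[2]);
    }
    ```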

    • I'd love to read this paper! This was on my mind when I was GNC lead for an undergraduate project at Michigan Tech (Oculus-ASR - Nanosat-6 winner). We had a combined controller for reaction wheels and magtorque rods.

The diffraction limit (using 1.22·λ·h/D) of a 1 m optic at 250 km in visible light is about 17 cm. How can you achieve 10 cm resolution?
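
For concreteness, plugging in λ = 550 nm (an assumed mid-visible wavelength):

$$
\Delta x \approx 1.22\,\frac{\lambda h}{D} = 1.22 \times \frac{(550\times10^{-9}\,\mathrm{m})(250\times10^{3}\,\mathrm{m})}{1\,\mathrm{m}} \approx 0.17\,\mathrm{m}.
$$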

  • Clarity is designed for a GSD (ground sample distance) of 10 cm, and the industry generally uses resolution and GSD interchangeably. Agreed, that's not the true definition of resolution. But I'd argue the diffraction limit is an incomplete metric as well, since it ignores how spatial sampling is balanced against the other MTF contributors (e.g. jitter/smear). For complete metrics, we like 1) NIIRS or 2) % contrast for a given object size on the ground (i.e. system MTF translated to ground units, not image-space units).

    The main performance goal for us was NIIRS 7, and we decomposed the GSD/MTF/SNR contributors and optimized for affordability when we architected the system.
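
    To make the "product of MTF contributors" point concrete, here is a toy budget using textbook contributor models, evaluated at the Nyquist frequency implied by the GSD. Every number is an illustrative assumption, not Clarity's actual budget.

    ```cpp
    // Toy system-MTF budget: multiply independent contributors, evaluated
    // in ground units (cycles/m on the ground) rather than image space.
    // Textbook models only; none of the numbers are Albedo's.
    #include <cmath>
    #include <cstdio>

    constexpr double PI = 3.14159265358979323846;

    double sinc(double x) { return x == 0.0 ? 1.0 : std::sin(PI * x) / (PI * x); }

    // Diffraction MTF of an unobscured circular aperture; f in cycles/m on
    // the ground, with cutoff fc = D / (lambda * h) mapped to ground units.
    double mtf_diffraction(double f, double fc) {
        if (f >= fc) return 0.0;
        const double nu = f / fc;
        return (2.0 / PI) * (std::acos(nu) - nu * std::sqrt(1.0 - nu * nu));
    }

    // Detector sampling MTF for a ground sample distance gsd (m).
    double mtf_detector(double f, double gsd) { return std::fabs(sinc(f * gsd)); }

    // Gaussian jitter MTF with RMS ground-projected jitter sigma (m).
    double mtf_jitter(double f, double sigma) {
        return std::exp(-2.0 * PI * PI * sigma * sigma * f * f);
    }

    // Linear smear MTF for residual smear distance d (m) during integration.
    double mtf_smear(double f, double d) { return std::fabs(sinc(f * d)); }

    int main() {
        // Illustrative values: 10 cm GSD; diffraction cutoff from a 1 m
        // aperture at 250 km / 550 nm; 3 cm RMS jitter; 5 cm residual smear.
        const double gsd = 0.10, fc = 1.0 / (550e-9 * 250e3 / 1.0);
        const double f_nyq = 1.0 / (2.0 * gsd);  // 5 cycles/m on the ground
        const double sys = mtf_diffraction(f_nyq, fc) * mtf_detector(f_nyq, gsd)
                         * mtf_jitter(f_nyq, 0.03) * mtf_smear(f_nyq, 0.05);
        std::printf("system MTF at Nyquist: %.3f\n", sys);
    }
    ```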

    • How do you manage along-track smear? At those altitudes you're pushing close to 8 km/s. Traditionally you'd either need to keep the satellite rotating through the collect or somehow keep the integration time in the single-digit microseconds.
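
      For scale, a back-of-the-envelope bound (assuming ~7.8 km/s ground-relative speed, the 10 cm GSD, and at most one pixel of uncompensated smear; the speed and smear budget are my assumptions):

      $$
      t_{\mathrm{int}} \lesssim \frac{\mathrm{GSD}}{v_{\mathrm{grd}}} = \frac{0.1\,\mathrm{m}}{7.8\times10^{3}\,\mathrm{m/s}} \approx 13\,\mu\mathrm{s},
      $$

      so sub-pixel smear without motion compensation does indeed force integration times into the single-digit microseconds.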

> The drag coefficient was the headline: 12% better than our design target.

Is the drag much better than a regular cubesat's? It doesn't look tremendously aerodynamic. From the description I was kind of expecting a design that minimized frontal area.

> Additional surface treatments will improve drag coefficient further.

Is surface drag really that much of a contributor at orbital velocity?

  • Ultimately it's about the ballistic coefficient. You want high mass, low cross-sectional area, and a low drag coefficient (Cd). Even with propulsion for station-keeping, it's challenging to capture the VLEO benefits with a regular cubesat. That said, there are VLEO architectures different from Clarity that make sense for other mission areas.

    Yes, it's a big contributor. The atmosphere in VLEO behaves as free molecular flow rather than as a continuous fluid.
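
    To put rough numbers on why the ballistic coefficient β = m/(Cd·A) is the quantity to optimize, here is a back-of-the-envelope sketch. The density, speed, and both β values are illustrative assumptions, not Clarity's figures, and density at ~250 km swings roughly an order of magnitude with solar activity.

    ```cpp
    // Back-of-the-envelope: drag deceleration scales as 1/beta, so a
    // heavier, lower-Cd, smaller-frontal-area spacecraft needs far less
    // station-keeping delta-v. All numbers are illustrative assumptions.
    #include <cstdio>

    int main() {
        const double rho = 1e-10;   // kg/m^3, rough density at ~250 km
        const double v   = 7760.0;  // m/s, circular orbit speed at ~250 km
        const double day = 86400.0; // s

        // Drag deceleration: a = 0.5 * rho * v^2 * Cd * A / m
        //                      = 0.5 * rho * v^2 / beta
        auto drag_accel = [&](double beta) { return 0.5 * rho * v * v / beta; };

        // Two hypothetical spacecraft: a small, draggy cubesat vs. a
        // heavier, low-Cd, low-frontal-area design.
        for (double beta : {50.0, 300.0}) {  // kg/m^2
            const double a = drag_accel(beta);
            std::printf("beta = %5.0f kg/m^2 -> a = %.2e m/s^2, "
                        "station-keeping dv ~ %.2f m/s per day\n",
                        beta, a, a * day);
        }
    }
    ```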

    • Cue the ultimate low orbit satellite

      > It is undesirable to have a definition that will change with improving technology, so one might argue that the correct way to define space is to pick the lowest altitude at which any satellite can remain in orbit, and thus the lowest ballistic coefficient possible should be adopted - a ten-meter-diameter solid sphere of pure osmium, perhaps, which would have B of 8×10^−6 m^2/kg and an effective Karman line of z(-4) at the tropopause

      from https://arxiv.org/abs/1807.07894

Can you tell us some war stories about the software your group wrote for the satellite?

Stacks? Testing? Firmware Updates? Programming languages?

Thank you!

  • First: the team never wants to use someone else's software framework again (an early SW architect decided that would accelerate things, but we ended up re-writing almost all of it). It was all C++ on the satellite, and we ran Linux with PREEMPT_RT.

    We wrote everything from low-level drivers to the top-level application, plus the corresponding ground software for commanding and planning. Going forward, we're writing everything top to bottom, just to simplify and have total ownership, since we're basically there already.

    For testing we hit it at multiple levels: unit tests, hardware-in-the-loop, and a custom "flight software in test" harness we called "FIT" that executed several simulated mission scenarios; we also tried to hit as many fault cases as we could. It was pretty stressful for the team tbh, but they were super stoked to see how well it worked on orbit.

    A big one for us on a super-high-resolution mission like this is the timing determinism (low latency/low jitter) of the guidance, navigation, and control (GNC) thread. Basically it needs to execute on time, every cycle, for us to achieve the mission. Getting enough timing instrumentation out of the framework we had selected was tough, but we eventually got there; making sure the "hot loop" didn't miss deadlines was more a function of working around that framework than any limitation of Linux, which behaved well enough as an RTOS for us.
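
    For anyone curious what that looks like mechanically, here is the textbook PREEMPT_RT periodic-thread recipe: lock memory, run SCHED_FIFO, and sleep to absolute deadlines so jitter doesn't accumulate. This is a generic sketch, not our flight code; the 100 Hz rate and the priority are made-up values.

    ```cpp
    // Textbook deterministic periodic loop on Linux + PREEMPT_RT.
    #include <cstdio>
    #include <ctime>
    #include <pthread.h>
    #include <sched.h>
    #include <sys/mman.h>

    constexpr long PERIOD_NS = 10'000'000;  // 100 Hz cycle (assumed rate)

    static void add_ns(timespec& t, long ns) {
        t.tv_nsec += ns;
        while (t.tv_nsec >= 1'000'000'000) { t.tv_nsec -= 1'000'000'000; t.tv_sec++; }
    }

    int main() {
        // Avoid page faults in the hot loop.
        mlockall(MCL_CURRENT | MCL_FUTURE);

        // Real-time priority (needs root / CAP_SYS_NICE).
        sched_param sp{};
        sp.sched_priority = 80;
        pthread_setschedparam(pthread_self(), SCHED_FIFO, &sp);

        timespec next{};
        clock_gettime(CLOCK_MONOTONIC, &next);
        long missed = 0;

        for (int cycle = 0; cycle < 1000; ++cycle) {
            // gnc_step();  // sensor read -> estimate -> control -> actuate

            add_ns(next, PERIOD_NS);
            timespec now{};
            clock_gettime(CLOCK_MONOTONIC, &now);
            if (now.tv_sec > next.tv_sec ||
                (now.tv_sec == next.tv_sec && now.tv_nsec > next.tv_nsec))
                ++missed;  // overran the deadline: instrument, don't hide it

            // Absolute-time sleep keeps wakeups phase-locked to the schedule.
            clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, nullptr);
        }
        std::printf("deadline misses: %ld / 1000\n", missed);
    }
    ```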

  • Moving fast to make launch, we had missed a harness checkout step that would’ve caught a missing comms connection into an FPGA, and it was masked because our redundant comms channel made everything look nominal.

    On orbit, we fixed it by pushing an FPGA update and adding software-level switching between the channels to prove the update applied and to isolate the hardware path, which worked. Broader lesson: it is possible to design a sw stack capable of making updates to traditionally burned-in components.

    • > it was masked because our redundant comms channel made everything look nominal.

      Hah, this has bitten me often enough that I now check for it in test suites: OK, you've proven the system works and the backup works, but have you proven the primary works on its own? Another entry in the long list of ways you don't expect a system to bite you until it does…
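
      A minimal sketch of the check I mean, with the other path forced off in each direction (the Channel/CommsStack types here are hypothetical, just to show the shape of the test):

      ```cpp
      // Don't just prove "a path works" end-to-end; prove EACH path works
      // with the other one disabled, or redundancy will mask a dead link.
      #include <cassert>

      struct Channel {
          bool enabled = true;
          bool ping() const { return enabled; /* stand-in for a real link check */ }
      };

      struct CommsStack {
          Channel primary, backup;
          // Redundancy logic that can silently mask a dead primary:
          bool link_up() const { return primary.ping() || backup.ping(); }
      };

      int main() {
          CommsStack comms;

          // The classic (insufficient) test: passes even if primary is dead.
          assert(comms.link_up());

          // Isolate each path: disable the backup, then the primary.
          comms.backup.enabled = false;
          assert(comms.link_up() && "primary path verified on its own");

          comms.backup.enabled = true;
          comms.primary.enabled = false;
          assert(comms.link_up() && "backup path verified on its own");
      }
      ```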

    • > it is possible to design a sw stack capable of making updates to traditionally burned-in components.

      This is interesting: is the software stack essentially acting as a "light" translation layer or abstraction layer on top of the components?

s/learnings/lessons/g

  • From my perspective, the number one reason we had a well-functioning satellite out of the gate is my philosophy of testing "safe mode first". What that means is, in a graduated fashion, test that the hardware and software together can always get you into a safe mode, which is usually power-positive, attitude-stable, and communicative.

    So our software integration flows hit this mission thread over and over and over with each update. If we shipped a new software feature, make sure you can still get to safe mode. If we found a bug that prevented it, it's the first thing to triage. We built out our pipelines to simulate this as much as we could, then ran it again on the development hardware, and would eventually load a release onto flight once we were confident this was always solid. If you're going to develop for space, start here.
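
    As a toy illustration of the gate (SimVehicle, its fields, and the fault list are hypothetical stand-ins, not our pipeline):

    ```cpp
    // "Safe mode first": every build must prove it can still reach safe
    // mode, across a battery of injected faults, before feature testing.
    #include <cassert>

    struct SafeModeState {
        bool power_positive;    // solar input exceeds load
        bool attitude_stable;   // body rates damped below threshold
        bool communicative;     // beacon heard by (simulated) ground
    };

    struct SimVehicle {
        // Inject a fault, command safe mode, propagate the sim, read state.
        SafeModeState run_safe_mode_entry(const char* fault) {
            (void)fault;  // a real sim would trigger the named fault here
            return {true, true, true};
        }
    };

    // The gate: if any case fails, it's the first thing to triage.
    bool safe_mode_gate(SimVehicle& sim) {
        for (const char* fault : {"none", "wheel_failure", "mag_dropout",
                                  "gps_loss", "radio_brownout"}) {
            SafeModeState s = sim.run_safe_mode_entry(fault);
            if (!(s.power_positive && s.attitude_stable && s.communicative))
                return false;
        }
        return true;
    }

    int main() {
        SimVehicle sim;
        assert(safe_mode_gate(sim) && "build rejected: safe mode regressed");
    }
    ```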

  • At least it wasn't "learns", as in "we had five learns from the project". Same with, say, "ad spend": there's already a noun form of the verb, the gerund "ad spending".

    • As with genes, duplication creates opportunity for specialization, regardless of what drives the duplication and the early divergence.

      AIchat "compare and contrast the subtle implications of phrase/X with phrase/Y" suggests using "ad spend" for a number (like "budget"), and "ad spending" for activity and trend ("act of spending").

      "Learns" has implications of discovery, smaller size, iterative, informality, individual/team scale, messy, and more. For illustration, to my ear, "Don't be stupid" fits as a "lesson", but not as a "learn" or a "takeaway". Nor as a "lesson learned", with its implication of formality and reflection. "Software X is flaky" fits better "learn" than "lesson". And "unmonitored vendor excursions are biting us" more a "takeaway" (actionable; practical vs process).