Comment by deathanatos

3 days ago

> Why doesn't GitHub just enforce immutable versioning for actions?

I always wish these arguments came with a required response to "well, what about the other side of the coin?"; otherwise you've now forced me to ask: well?

The two sides of the coin: Security wants pinned versions, like you have, so that compromises aren't pulled in. Security does not want¹ pinned versions, so that security updates are pulled in.

The trick, of course, is some solution that lets the security updates in without letting the compromises in, and that doesn't just destroy dev productivity. And remember, …there is no evil bit.

(… I need to name this Law. "The Paradox of Pinning"?)

(¹it might not be so explicitly stated, but a desire to have constant updated-ness w/ security patches amounts to an argument against pinning.)

> it might not be so explicitly stated, but a desire to have constant updated-ness w/ security patches amounts to an argument against pinning

When you want to update, you update the hashes too. This isn’t an issue in any other packaging ecosystem, where locking (including hashing) is a baseline expectation. The main issue is developer ergonomics, which comes back to GitHub Actions providing very poor package management primitives out of the box.

(This is the key distinction between updating and passively being updated because you have mutable pointers to package state. The latter gets confused for the former, but you almost always want the former.)
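Concretely, "pinning by hash with a legible update path" looks something like this in a workflow file (the SHA below is an illustrative placeholder, not a real release commit):

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      # Pin to a full commit SHA so the resolved code can't change
      # under you; the trailing comment records the human-readable
      # version so reviewers and update tooling can see what the
      # hash is supposed to correspond to.
      - uses: actions/checkout@0123456789abcdef0123456789abcdef01234567  # v4
```

Updating is then an explicit diff to the SHA (and the comment), reviewable like any other change.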

  • This isn't a bad distinction that you've made; I just think even lockfiles (essentially what you're suggesting) still fall prey to the same paradox I'm describing.

    Yes, lockfiles prevent "inadvertent" upgrades, in the sense that you get the "pinned" version in the lockfile. So if we go with the lockfile, we're now on the "pinned" side of the paradoxical coin. Yes, we no longer get auto-pwned by supply chain, but security's problem becomes "why are we not keeping up to date with patches?", since the lockfile effectively blocks them.

    And then you see tooling get developed, like what GitHub has in the form of Dependabot, which will automatically update that lockfile. Now we're just back on the other side of the paradoxical coin, just with more steps.

    (This isn't to say we shouldn't do lockfiles. Lockfiles bring a lot of other benefits, and I am generally in favor of them. But I don't think they solve this problem.)

    • I don’t think this is a paradox, it’s just a process. You use lockfiles to establish consistent resolutions, and then you use dependency management tooling to update those lockfiles according to various constraints/policies like compatibility, release age, known vulnerabilities, etc.

      (Another framing is that you might want floating constraints for compatibility reasons, but when actually running software you basically never want dependencies changing implicitly beneath you, even if they fix things. Fixes should always be legible, whether they’re security relevant or not.)

  • Honestly what I really want is the latter (mutable references), but pointing to aliases that I own and update manually (the former).
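For Actions specifically, the "tooling that updates the lockfile" discussed above is a few lines of Dependabot config; a minimal sketch (schedule and directory are choices, not requirements):

```yaml
# .github/dependabot.yml
version: 2
updates:
  - package-ecosystem: "github-actions"
    directory: "/"        # location of the workflow files to scan
    schedule:
      interval: "weekly"  # how often to open update PRs
```

Dependabot then proposes updates as pull requests, so every version bump (including to SHA-pinned actions) lands as a legible, reviewable change rather than a silent swap.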

Their question isn't about pinned versions, it's about immutable versions. The question is why it is possible to change what commit "v5" refers to, not "why would you want to write v5".

You already don't get updates pulled in under this system unless someone swaps the version out from under you, which is not a normal way to deploy.

Version tags should obviously be immutable; if you want to be automatically updated you can select 1.0.*, and if you don't you just pin an exact version tag.

It amounts to an argument against pinning in an (IMO) weird world view where the package maintainer is responsible for the security of users' systems. That feels wrong. The user should be responsible for the security of their system, and for setting their own update policy. I don't want a volunteer making decisions about when I get updates on my machine, and I'm pretty security minded. Sure, make the update available, but I'll decide when to actually install it.

In a broader sense, I think computing needs to move away from these centralised models where a 'random person in Nebraska'[0] is silently doing a bunch of work for everyone, even with good intentions. Decisions should be deferred to the user as much as possible.

[0]: https://xkcd.com/2347/

Auto-upgrade to versions deemed OK by a security team. Basically, you need to pull in updates that patch exploits right away, and be more patient with feature upgrades.

  • So, in the context of me questioning "yes, but exactly how is this supposed to work", you're essentially punting the question into a black box that won't betray us.

    In the real world, though, we don't have a magic little black box: we have to actually implement that.

    The only answer I have seen from real world security teams is variations of "why wouldn't we be keeping up with updates?", and that's an unpinned dep.
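To make the "black box" concrete: one way such a policy could be implemented is a gate that auto-applies security fixes immediately but quarantines everything else for a while, so a compromised release has time to be caught. This is a hypothetical sketch of that policy, not a real security team's process; the names and the 14-day window are assumptions:

```python
from dataclasses import dataclass

# Assumed quarantine window for non-security updates.
MIN_AGE_DAYS = 14


@dataclass
class Candidate:
    """A candidate dependency update, with assumed metadata fields."""
    version: str
    fixes_known_cve: bool  # does this release patch a known vulnerability?
    age_days: int          # how long the release has been public


def should_auto_apply(c: Candidate) -> bool:
    # Security patches go out immediately; everything else waits out
    # the quarantine window so supply-chain compromises can surface first.
    if c.fixes_known_cve:
        return True
    return c.age_days >= MIN_AGE_DAYS
```

Under this policy an exploit patch published today is applied at once, while a feature release published today waits two weeks, which is roughly the "be more patient for feature upgrades" behaviour described above.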