
Comment by sluongng

4 days ago

The most concerning part of modern CI, to me, is that most of it runs on GitHub Actions, and that GitHub itself has been deprioritizing GitHub Actions maintenance and improvements in favor of AI features.

Seriously, take a look at their pinned repo: https://github.com/actions/starter-workflows

> Thank you for your interest in this GitHub repo, however, right now we are not taking contributions.

> We continue to focus our resources on strategic areas that help our customers be successful while making developers' lives easier. While GitHub Actions remains a key part of this vision, we are allocating resources towards other areas of Actions and are not taking contributions to this repository at this time.

The last time the company I worked for hosted code on GitHub, Actions did not exist yet, and for personal stuff copying some three-liners was fine; I'd hardly call that "using" it.

"Github Actions might be over, so not worth engaging" was not on my bingo card.

They are instead focusing on Agentic Workflows, which use natural language instead of YAML.

https://github.com/githubnext/gh-aw
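
As I understand it, the pitch is that instead of hand-writing YAML you write a markdown file whose body is a prose instruction for an agent, and tooling compiles that into a regular Actions workflow. A hypothetical sketch of such a file (say, .github/workflows/triage.md; the path and frontmatter fields below are my guesses to illustrate the idea, not taken from the gh-aw docs):

    ---
    on:
      issues:
        types: [opened]
    permissions:
      issues: write
    ---

    When a new issue is opened, read it, apply whichever existing labels
    fit, and leave a short comment summarizing the likely root cause.

The deterministic plumbing presumably doesn't go away; the prose just becomes the source from which the actual workflow gets generated.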

  • Know what I love in a good build system? Nondeterminism! Who needs coffee when you can get your thrills from stochastic processes. Why settle for just non-repeatable builds when you can have non-repeatable build failures!

    • Would a smart AI accept such foolishness? I doubt it. It'll still use something deterministic under the hood; it'll just have a conversational abstraction layer for talking to the Product person writing up requirements.

      We used to have to be able to communicate with other humans to build something. It seems to me that's what they're trying to take out of the loop by doing the things that humans do: talk to other humans and give them what they're asking for.

      I too am not a fan of the dystopias we're ending up in.


  • I personally find this pretty concerning: GitHub Actions already has a complex and opaque security model, and adding LLMs into the mix seems like a perfect way to keep up the recent streak of major compromises driven by vulnerable workflows and actions.

    I would hope that this comes with major changes to GHA’s permissions system, but I’m not holding my breath for that.
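
    For context, the least-privilege knob that exists today is the permissions block, which scopes down the GITHUB_TOKEN that Actions issues to a workflow or to an individual job. A minimal sketch (the job name and script are just illustrative):

      on: pull_request

      # Default the GITHUB_TOKEN for this workflow to read-only.
      permissions:
        contents: read

      jobs:
        comment:
          runs-on: ubuntu-latest
          # Only this job gets write access, and only to PR comments.
          permissions:
            contents: read
            pull-requests: write
          steps:
            - uses: actions/checkout@v4
            - run: ./scripts/post-comment.sh   # hypothetical script

    That limits what the token can touch, but it says nothing about what an agent chooses to do with whatever scopes it is granted.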