Comment by jcranmer
4 days ago
> I wish the author gave more concrete examples about what kinds of workflows they want to dynamically construct and remotely execute (and why a separate step of registering the workflow up front with the service before running it is such a dealbreaker), and what a sufficiently generic and unopinionated definition schema for workflows and tasks would look like as opposed to what a service like GitHub Actions defines.
Something that comes up for me a lot at my work: running custom slices of the test suite. The full test suite probably takes CPU-days to run, and if I'm only interested in the results of something that takes 5 CPU-minutes to run, then I shouldn't have to run all the tests.
We do this at work. It started off as a simple build graph that used git content hashes and some simple logic to link things together. The result is that for any given pair of commits you can calculate what changed, so you only run the affected tests/builds etc.
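A minimal sketch of that idea (the graph shape and target names are hypothetical, not how our tool actually models things): diff two commits with git, then walk reverse dependencies to a fixed point to find everything affected.

```python
import subprocess

def changed_paths(base: str, head: str) -> set[str]:
    """Paths whose content differs between two commits, per git."""
    out = subprocess.run(
        ["git", "diff", "--name-only", base, head],
        capture_output=True, text=True, check=True,
    )
    return set(out.stdout.splitlines())

def affected_targets(graph: dict[str, set[str]], changed: set[str]) -> set[str]:
    """A target is affected if any of its inputs changed,
    directly or transitively. Iterate to a fixed point."""
    affected = set(changed)
    done = False
    while not done:
        done = True
        for target, deps in graph.items():
            if target not in affected and deps & affected:
                affected.add(target)
                done = False
    return affected
```

Here `graph` maps each target to the paths and targets it consumes; anything not reachable from a changed path is skipped entirely.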
We’ve paired this with Buildkite, which allows uploading pipeline steps at any point during a run, so our CI pipeline is a single step that generates the rest of the pipeline and uploads it.
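As a rough sketch of the pattern (the labels and `make` commands are illustrative, not our real setup): a generator script emits the pipeline for the affected targets, and its output is piped to Buildkite's documented `buildkite-agent pipeline upload` command, which accepts JSON as well as YAML.

```python
import json

def generate_pipeline(affected: set[str]) -> str:
    """Emit a Buildkite pipeline (JSON form of its schema) with one
    command step per affected target. Commands here are placeholders."""
    steps = [
        {"label": target, "command": f"make {target}"}
        for target in sorted(affected)
    ]
    return json.dumps({"steps": steps}, indent=2)

if __name__ == "__main__":
    # In CI the single static step would run something like:
    #   python generate_pipeline.py | buildkite-agent pipeline upload
    print(generate_pipeline({"go-binary", "tf-module"}))
```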
I’m working on open sourcing this meta-build tool, as I think it fills a niche that has no current implementation and it’s not our core business.
It can build a dependency graph across many systems (terraform, go, python, nix) by parsing from each of those systems what they depend on, then smashing them all together. So you can have a terraform module that depends on a go binary that embeds some python, and if you change any of it, each part can have tasks that run (go test/build, tf plan, pytest, etc.).
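The cross-system part reduces to something simple once each system has a parser that yields edges (parser details and node names below are made up for illustration): union the per-system edge sets into one graph keyed by node name, so a terraform node can point at a go node and so on.

```python
def merge_graphs(*graphs: dict[str, set[str]]) -> dict[str, set[str]]:
    """Union dependency edges from per-system parsers
    (terraform, go, python, nix, ...) into one graph."""
    merged: dict[str, set[str]] = {}
    for g in graphs:
        for node, deps in g.items():
            merged.setdefault(node, set()).update(deps)
    return merged

# Hypothetical parser outputs, one graph per system:
tf_edges = {"tf-module": {"go-binary"}}
go_edges = {"go-binary": {"embedded.py"}}
combined = merge_graphs(tf_edges, go_edges)
```

From `combined`, a change to `embedded.py` reaches `go-binary` and then `tf-module`, so pytest, go build, and tf plan would all be scheduled.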
I suppose it's just a matter of perspective - I see that as a case for parameterization of a common test-run workflow, not for a one-off definition.