Comment by BobbyTables2
3 days ago
You’re not wrong, but your suggestion also throws away major benefits of CI. I agree jobs should be one-liners, but we still need more than one…
A single-job pipeline doesn’t tell you at a glance what failed, and it can’t run the unit and integration test suites in parallel while handling the combinatorial matrix of build type, target device, etc.
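To make that concrete, here’s a rough sketch of what a split pipeline looks like in GitLab CI (job names, make targets, and variables are all made up for illustration):

```yaml
# Hypothetical .gitlab-ci.yml sketch: two independent test jobs run in
# parallel, and the build fans out over a matrix of configurations.
unit-tests:
  stage: test
  script: make test-unit          # each job body stays a one-liner

integration-tests:
  stage: test
  script: make test-integration   # runs concurrently with unit-tests

build:
  stage: build
  script: make build
  parallel:
    matrix:                       # GitLab expands this into one job per combination
      - BUILD_TYPE: [debug, release]
        TARGET_DEVICE: [arm64, x86_64]
```

Each leaf job is still a one-liner; it’s the fan-out and the per-job pass/fail reporting that a single mega-job can’t give you.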
At some point, a few CI runners become more powerful than a developer’s workstation. Parallelization can really matter for reducing CI times.
I’d argue the root of the problem is that we’re stuck using “make” and ad-hoc scripts for local build automation.
We need something expressive enough to describe a meaningful CI pipeline, yet still runnable locally.
Sure, one can develop a bespoke solution, but reinventing the wheel each time gets tiring and eventually becomes a sizable time sink.
In principle, we should be able to execute pieces of .gitlab-ci.yml locally, but even that is non-trivial given all the nonstandard YAML behaviors GitLab layers on top, not to mention the varied executor types.
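For instance, GitLab-specific constructs like `extends` and the custom `!reference` tag mean a stock YAML parser can’t, by itself, reconstruct what a job actually runs (job names here are made up):

```yaml
# '.common' is a hidden template job; 'extends' merges it into 'test'
# using GitLab's own merge semantics, not anything in the YAML spec.
.common:
  script:
    - ./prepare-env.sh

test:
  extends: .common
  script:
    - !reference [.common, script]   # custom YAML tag only GitLab resolves
    - make test
```

To run `test` locally you’d have to reimplement the merge rules, the `!reference` resolution, the variable expansion, and then still pick the right executor semantics.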
Instead, we have a CI workflow and a local workflow and hope the two are kept in sync by hand.
In some sense, the current CI-only automation tools (GitLab CI, Jenkins, etc.) shouldn’t even need to exist: why didn’t we just use a cron job running “build.sh”?
I’d argue these tools should focus mainly on reporting and artifacts, with the pipeline-execution parts handled elsewhere (or run locally by a developer).
Shame on you, GitLab!
You are mistaking a build for a pipeline. I still believe in pipelines, and in configuring the right hosts/runners to produce your artifacts. The actual build on that host/runner should be a one-liner.