Comment by DrBazza
4 days ago
Your build should be this:
    build.bash <debug|release>
and that's it (and that can even trigger a container build).
I've spent far too much time debugging CI builds that work differently to a local build, and it's always because of extra nonsense added to the CI server somehow. I've yet to find a build in my industry that doesn't yield to this 'pattern'.
Your environment setup should work equally on a local machine or a CI/CD server, or your devops team has set it up identically on bare metal using Ansible or something.
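A rough sketch of what such an entry point might look like (the CMake backend and paths here are only an assumption):

    #!/usr/bin/env bash
    # build.bash -- the single entry point, identical on a dev box and in CI
    set -euo pipefail

    mode="${1:?usage: build.bash <debug|release>}"

    case "$mode" in
      debug)   cmake_type=Debug ;;
      release) cmake_type=Release ;;
      *) echo "usage: build.bash <debug|release>" >&2; exit 1 ;;
    esac

    # Everything the build needs lives here, under version control -- nothing hides in CI config.
    cmake -S . -B "build/$mode" -DCMAKE_BUILD_TYPE="$cmake_type"
    cmake --build "build/$mode" --parallel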
Agreed with this sentiment, but with one minor modification: use a Makefile instead. Recipes are still chunks of shell, and they don’t need to produce or consume any files if you want to keep it all task-based. You get tab-completion, parallelism, a DAG, and the ability to start anywhere on the task graph that you want.
It’s possible to do all of this with a pure shell script, but then you’re probably reimplementing some or all of the list above.
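For instance, a purely task-based Makefile can look something like this (target names are invented; recipe lines must start with a real tab):

    .PHONY: all build test lint

    # Tasks, not files: every target is phony, so make always runs the recipe.
    all: build test lint     # `make -j` runs independent tasks in parallel

    build:
    	./build.bash release

    test: build
    	./run_tests.sh

    lint:
    	./run_linters.sh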
Just be aware of the "Makefile effect"[1] which can easily devolve into the Makefile also being "over there", far from the application, just because it's actually a patchwork of copy-paste targets stitched together.
[1] https://news.ycombinator.com/item?id=42663231
> use a Makefile instead
I was making a general comment that your build should be a single 'command'. Personally, I don't care what the command is, only that it should be a) one command, and b) 100% runnable on a dev box or a server. If you use make, you'll soon end up writing... shell scripts, so just use a shell script.
In an ideal world, your topmost command would be a build tool.
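For example (the specific tool is only illustrative; the point is that the CI step and the local command are the same invocation):

    cmake --build build --config Release
    # ...or bazel build //..., or cargo build --release, and so on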
Unfortunately, the second you do that ^^^, someone edits your CI/CD to add a step before the build starts. It's what people do :(
All the cruft that ends up *in CI config* should be under version control and inside your single command, so you can debug it locally.
That's exactly why the "main" should be shell, not Make (see my sibling reply). So when someone needs to add that step, it goes into the script rather than into the CI config.
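Roughly like this (the script and step names are made up):

    #!/usr/bin/env bash
    # ci.sh -- the "main" that CI calls; extra steps are just more shell, versioned with the code
    set -euo pipefail

    generate_version_header() {    # the step someone wanted to bolt onto the CI config
      ./tools/gen_version.sh > src/version.h
    }

    generate_version_header
    ./build.bash release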
This is better because you can run the whole thing locally, and on different CI providers.
In general, a CI pipeline is not a DAG, and not completely parallel -- but it often contains DAGs.
Make is not a general-purpose parallel DAG engine. It works well enough for small C projects and similar, but for problems of even medium complexity, it falls down HARD.
Many years ago, I wrote 3 makefiles from scratch as an exploration of this (and I still use them). I described the issues here: https://lobste.rs/s/yd7mzj/developing_our_position_on_ai#c_s...
---
The better style is in a sibling reply -- invoke Make from shell, WHEN you have a problem that fits Make.
That is, the "main" should be shell, not Make. (And it's easy to write a dispatcher to different shell functions with "$@" -- this is sometimes called a "task file".)
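A minimal version of that dispatcher (task names here are invented):

    #!/usr/bin/env bash
    # run.sh -- a "task file": each function is a task, and "$@" dispatches to it
    set -euo pipefail

    build()  { ./build.bash "${1:-release}"; }
    check()  { build debug && ./run_tests.sh; }
    deploy() { build release && ./upload_artifacts.sh; }

    "$@"   # usage: ./run.sh build debug, ./run.sh check, ...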
In general, a project's CI does not fit entirely into Make. For example, the CI for https://oils.pub/ is 4K lines of shell, and minimal YAML (portable to Github Actions and sourcehut).
https://oils.pub/release/latest/pub/metrics.wwz/line-counts/...
It invokes Make in a couple places, but I plan to get rid of all the Make in favor of Python/Ninja.
Portability to other CI/CD systems is an underrated reason to use a single build command.
You invoke CMake/qmake/configure/whatever from the bash script.
I hate committing makefiles directly if it can be helped.
You can still call make in the script after generating the makefile, and even pass the make target as an argument to the bash script if you want. That being said, if you’re passing more than 2-3 arguments to the build.sh you’re probably doing it wrong.
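Something along these lines (paths, generator, and defaults are guesses):

    #!/usr/bin/env bash
    # build.sh -- generate the makefiles with CMake, then build the requested target
    set -euo pipefail

    target="${1:-all}"      # optional make target, defaults to everything

    cmake -S . -B build     # the generated makefiles never get committed
    make -C build -j"$(nproc)" "$target"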
Yes to calling CMake/etc. No to checking in generated Makefiles. But for your top-level “thing that calls CMake”, try writing a Makefile instead of a shell script. You’ll be surprised at how powerful it is. Make is a dark horse.
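A sketch of such a top-level Makefile (targets are illustrative; recipe lines need real tabs):

    .PHONY: configure build test

    configure:
    	cmake -S . -B build

    build: configure
    	cmake --build build --parallel

    test: build
    	ctest --test-dir build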
I have experienced horror build systems where the Makefile delegates to a shell script which then delegates to some sub-module Makefile, which then delegates to a shell script...
The problem is that shell commands are very painful to specify in a Makefile, with its weird syntactic rules. Especially when you need them to run in one shell -- a lot of horrible quoting is needed.
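For example, each recipe line normally runs in its own shell, so state does not carry across lines, and every shell dollar sign has to be doubled so make does not expand it first (sketch):

    # Each recipe line runs in a separate shell: the cd is forgotten on the next line,
    # and shell variables need $$ so make does not expand them itself.
    broken:
    	cd subdir
    	VERSION=$$(git describe); echo "version is $$VERSION"

    # .ONESHELL (GNU make 3.82+) makes every recipe in the file run in a single shell.
    .ONESHELL:
    better:
    	cd subdir
    	VERSION=$$(git describe)
    	echo "version is $$VERSION"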
There are various things that can be a reasonable candidate for the "top level" build entrypoint, including Nix, bazel, docker bake, and probably more I'm not thinking of. They all have an entrypoint that doesn't have a ton of flags or nonsense, and operate in a pretty self contained environment that they set up and manage themselves.
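The kind of entrypoint meant here, assuming the project defines the corresponding flake.nix, BUILD files, or docker-bake.hcl:

    nix build
    bazel build //...
    docker buildx bake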
Overall I'm not a fan of wrapping things; if there are flags or options on the top-level build tool, I'd rather my devs explore those and get used to what they are and can do, rather than being reliant on a project-specific script or make target to just magically do the thing.
Anyway, other than calling the build tool, CI config can have other steps in it, but it should be mostly consumed with CI-specific add-ons, like auth (OIDC handshake), capturing logs, uploading artifacts, sending a slack notification, whatever it is.
Fortunately most CI/CD systems expose an environment variable during the build so you can detect most of those situations and still write a script that runs locally on a developer box.
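For example, most providers (GitHub Actions, GitLab CI, CircleCI, ...) set CI=true, so the same script can branch on it:

    # somewhere inside build.bash
    if [ "${CI:-}" = "true" ]; then
      echo "running on a CI server -- enable the CI-only extras (artifact upload, etc.)"
    else
      echo "running locally -- skip the CI-only steps"
    fi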
Our wrapping is 'minimal', in that you can still run
or
and get the same build artefacts as running:
My current company is fanatical about read-only for just about every system we have (a bit like Nix, I suppose), and that includes CI/CD. Once the build is defined to run debug or release, rights are removed so the only things you can edit are the build scripts you have under your control in your repo. This works extremely well for us.
Interestingly, despite being pretty hard-nosed about a lot of things, Nix does not insist on a read-only source directory at build time: the source is pulled into a read-only store path, but from there it is copied into the build sandbox, not bind-mounted.
I expect this is largely a concession to the reality that most autotools projects still expect an in-source build, not to mention Python wanting to spray pyc files and build/dist directories all over the place.
I tried to drive this approach at a previous job, but nobody else on the team cared, so I ended up always having to mirror the latest build changes into my bash script.
The reason it didn't catch on? Everyone else was running local builds in a proprietary IDE, so to them the local build was never the same anyway.
I always use, no matter what I am using underneath, a bootstrap script, a configure script and a build step.
That keeps the CLI interface simple, predictable, and guessable.
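In practice the surface is always the same three commands, whatever they drive underneath (contents hypothetical):

    ./bootstrap.sh    # fetch toolchains, submodules, dependencies
    ./configure.sh    # inspect the environment, generate the build files
    ./build.sh        # compile, test, package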