Comment by Wilder7977
8 hours ago
For certain things, makefiles are great options. For others, though, they are a nightmare. From a security perspective, especially if you are trying to reach SLSA level 2+, you want all the build execution to be isolated and executed in a trusted, attestable, and disposable environment, following predefined steps. Having makefiles (or scripts) with logical steps within them makes it much, much harder to have properly attested outputs.
Using makefiles mixes execution contexts between the CI pipeline and the code within the repository (which ends up containing the logic for the build), instead of using centrally stored, external workflows that contain all the business logic for the build steps (e.g., compiler options, Docker build steps, etc.).
For example, how can you attest in the CI that your code is tested if the workflow only contains "make test"? You would need to double-check at runtime what the makefile did, but the makefile might have been modified by that time, so you need to build a chain of trust, etc. Instead, in a standardized workflow, you just need to establish the ground truth (e.g., the tools are installed and are at this path), and the execution cannot be modified by in-repo resources.
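Concretely, all the pipeline itself can attest to is a step like the one below (a minimal sketch of a GitHub Actions job, not any particular real setup):

    jobs:
      test:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          # The workflow only knows that "make test" was invoked; what
          # "test" actually does lives in the Makefile of the checkout,
          # outside the workflow's own context.
          - run: make test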
That doesn't make any sense. Nothing about SLSA precludes using make instead of some other build tool. Either inputs to a process are hermetic and attested or they're not. Makefiles are all about executing "predefined steps".
It doesn't matter whether you run "make test" or "npm test whatever": you're trusting the code you've checked out to verify its own correctness. It can lie to you either way. You're either verifying changes or you're not.
You haven't engaged with what I wrote; of course it doesn't make sense to you.
The easiest and most accessible way to attest what has been done is to have all the logic of what needs to be done in a single context, a single place. For example, a reusable workflow that is invoked by hash, runs in a trusted environment, and executes exactly those steps. In this case, step A does x, and step B attests that x has been done, because the logic lives immutably in a place that cannot be tampered with by whoever invokes that workflow.
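To sketch what I mean (the org and workflow names here are made up): the only thing that lives in the product repo is the call to the central workflow, pinned to a full commit SHA, so whoever triggers it cannot change what it does:

    jobs:
      build:
        # hypothetical central repo; <full-commit-sha> stands for a pinned
        # commit SHA rather than a movable tag or branch
        uses: my-org/central-workflows/.github/workflows/build-and-attest.yml@<full-commit-sha>
        permissions:
          contents: read
          id-token: write        # for signing the attestation
          attestations: write

The caller cannot rewrite the steps inside that workflow; it can only supply the inputs the central workflow chooses to expose.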
In the makefile case, the makefile (and therefore the steps to execute) will most often be a file in the repository, i.e., under partial control of anybody who can commit and under full control of those who can merge. If I run the CI and step A now says "make x", the semantics actually depend on what the makefile in the repo contains, so the contexts are mixed between the GHA workflow and the repository content. No step of the workflow can now attest directly that x happened, because the logic of x is not in its context.
Of course, you can do everything in the makefile, including the attestation steps, bringing them back into the same context, but then the security-relevant steps once again live in a potentially untrusted environment. I am thinking specifically of the case of an organization with hundreds of repositories that need to be brought under control. Even more, what I am saying makes sense if you want to use the objectively convenient GH attestation service (probably one of the only good features they pushed in the last 5 years).
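For illustration, inside the centrally stored workflow the build and the attestation sit next to each other, in the same trusted context (sketch only; the build command and artifact path are made-up examples):

    # inside the central build-and-attest workflow
    - name: Build
      run: go build -trimpath -o dist/app ./...   # compiler options live here, not in the repo's Makefile
    - name: Attest provenance
      uses: actions/attest-build-provenance@v1
      with:
        subject-path: dist/app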
Usually, the people writing the Makefile are the same ones who could also be writing this stuff out in a YAML (lol) file as the CI instructions, often located in the same repository anyway. The irony in that is striking. And then we have people who can change environment variables for the CI workflows: usually also developers, often the same people who can commit changes to the Makefile.
I don't think it changes much, aside from adding security theater. If changes are not properly reviewed, then all the fancy titles will not help. If anything, using Make allows for a less flaky CI experience that doesn't break the next time the Git hoster changes something about their CI language and doesn't suffer from YAMLitis.
2 replies →