Comment by westurner
6 hours ago
Blaze instead of make, ant, maven. But now there's CMake and Ninja. gn generates Ninja files, and CMake can also emit Ninja files, fwiu.
Blaze is/was integrated with Google's Omega cluster scheduler, which is not open source.
Bazel is open source.
By the time Bazel was open sourced, Twitter had pantsbuild and Facebook had buck.
OpenWRT's Makefiles are sufficient to build OpenWRT and the kernel for it. (GNU Make is still sufficient to build the Linux kernel today, in 2026.)
Make decides whether to rebuild a target that already exists by comparing file modification times (mtime) of the target and its prerequisites, unless the target name is listed under `.PHONY:` in the Makefile. Target names also may not contain spaces.
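A minimal sketch of that behavior (file names here are illustrative):

```make
# out.txt is rebuilt only when its mtime is older than in.txt's
out.txt: in.txt
	cp in.txt out.txt

# clean is declared phony, so it runs even if a file named "clean" exists
.PHONY: clean
clean:
	rm -f out.txt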
`docker build` (and so also BuildKit) archives the build chroot after each build step that modifies the filesystem (RUN, ADD, COPY) as a cacheable layer, identified by a hash of its content.
Other Dockerfile instructions add metadata: CMD, ENTRYPOINT, LABEL, ENV, ARG, WORKDIR, USER, EXPOSE <port/tcp>, VOLUME <path>.
The FROM instruction creates a build stage, either from `scratch` or from another image.
Dockerfile added support for multi-stage builds with multiple `FROM` instructions in 2017 (Docker 17.05, 17.06 CE).
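A minimal multi-stage sketch (image names and paths are illustrative):

```dockerfile
# stage 1: build with a full toolchain
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN go build -o /out/app .

# stage 2: copy only the artifact into a minimal final image
FROM scratch
COPY --from=build /out/app /app
ENTRYPOINT ["/app"]
```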
`docker build` is now part of Moby, and there is also BuildKit? `podman buildx` (a compatibility alias for `podman build`) seems to work.
nerdctl supports a number of features that have not been merged back to docker or to podman.
> it obviously was designed for a system where most dependencies are vendored, and worked better for languages that google used like c++, java, and python.
Those were the primary languages at Google at the time. And what did people use to build software then? Make, shell scripts, Python; a Makefile that calls git, which calls perl, so perl has to be installed; etc.
Also gtest and gflags.
"Compiler Options Hardening Guide for C and C++" https://news.ycombinator.com/item?id=43551959 :
>> There are default gcc and/or clang compiler flags in distros' default build tools; e.g. `make` specifies additional default compiler flags (that e.g. cmake, ninja, gn, or bazel/buck/pants may not also specify for you).
Which CPU microarchitectures and flags are supported?
`ld.so --help | grep supported` (newer glibc lists microarchitecture levels like x86-64-v3 here)
`grep -E '^(flags|bugs)' /proc/cpuinfo`
AVX2 is in x86-64-v3; AVX-512 is in x86-64-v4. By utilizing features like AVX2 or AVX-512, we would save money compared to targeting only the x86-64-v1 baseline (which even Pentium 4-era processors support).
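A rough sketch of mapping `/proc/cpuinfo` flags to the psABI microarchitecture levels — note the flag sets below are abbreviated, not the full psABI requirements:

```python
# Abbreviated required-flag sets per x86-64 psABI level (not exhaustive).
LEVELS = [
    (4, {"avx512f", "avx512bw", "avx512cd", "avx512dq", "avx512vl"}),
    (3, {"avx", "avx2", "bmi1", "bmi2", "fma", "movbe"}),
    (2, {"popcnt", "sse4_1", "sse4_2", "ssse3", "cx16"}),
]

def x86_64_level(flags):
    """Return the highest x86-64-vN level whose (abbreviated) flag set is present."""
    for level, required in LEVELS:
        if required <= flags:
            return level
    return 1  # baseline x86-64-v1

def cpuinfo_flags(path="/proc/cpuinfo"):
    """Collect the 'flags' line from /proc/cpuinfo as a set (Linux only)."""
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()
```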
How to add an `-march=x86-64-v3` argument to every build?
How to add build flags to everything for something like x86-64-v4?
Which distros support consistent build parametrization, so that global compiler flags can be added for multiple compilers?
- Gentoo USE flags
- rebuild a distro: commit to building the core, updates, testing, and rawhide with your own compiler flags and package signatures, and to hosting mirrored package repos
- Intel Clear Linux was cancelled.
- CachyOS (x86-64-v3, x86-64-v4, Zen4)
- conda-forge?
Gentoo:
- ChromiumOS was built on gentoo and ebuild IIRC
- emerge app-portage/cpuid2cpuflags and set CPU_FLAGS_X86=; specify -march=native for C/C++ (CFLAGS/CXXFLAGS) and target-cpu=native for Rust (RUSTFLAGS) in /etc/portage/make.conf
- "Gentoo x86-64-v3 binary packages available" (2024) https://news.ycombinator.com/item?id=39250609
Google, Facebook, and Twitter have a monorepo to build packages from.
Google had a monorepo at the time that blaze was written.
Twitter ("X") is moving from pantsbuild to blaze BUILD files.
TIL there is a buck2. How does facebook/buck2 compare to google/bazel (and to what is known about blaze)?
Should I build containers (chroot fs archives) with ansible? Then there is no buildkit.
FWIW `podman-kube-play` can run some kubernetes yaml.
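e.g. (image and names illustrative):

```yaml
# pod.yaml -- run with: podman kube play pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: nginx
      image: docker.io/library/nginx:alpine
      ports:
        - containerPort: 80
          hostPort: 8080
```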
The ansible-in-containers thing is very much an unsolved problem. Basically right now you have three choices:
- install ansible in-band and run it against localhost (sucks because your playbook is in a final image layer; you might not want Python at all in the container)
- use packer with ansible as your provisioner and a docker container export, see: https://alex.dzyoba.com/blog/packer-for-docker/
- copy a previous stage's root into a subdirectory, run ansible against it as a chroot, then copy the result back into a scratch stage's root.
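A sketch of that third option as a multi-stage Dockerfile (stage names, playbook, and paths are all hypothetical; ansible's chroot connection plugin does the work, with the inventory pointing at /target):

```dockerfile
FROM debian:stable AS base

# builder stage has python/ansible; the target rootfs is copied under /target
FROM python:3.12 AS builder
RUN pip install ansible
COPY --from=base / /target
COPY playbook.yml inventory.ini ./
# inventory.ini sets ansible_connection=chroot for /target
RUN ansible-playbook -i inventory.ini playbook.yml

# final stage: only the configured rootfs, no python or ansible toolchain
FROM scratch
COPY --from=builder /target /
```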
All of these options fall down when you're doing anything long-running, though, because they can't work incrementally. As soon as you call ansible (or any other tool), the whole run is a single step from Docker's point of view. This is really unfortunate, because a Dockerfile is basically just shell invocations, and ansible gives a more structured, declarative-ish way to do shell-type things.
I have wondered if a system like Dagger might be able to do a better job with this, basically break up the playbook programmatically into single task sub-playbooks and call each one in its own Dagger task/layer. This would allow ansible to retain most of its benefits while not being as hamstrung by the semantics of the caller. And it would be particularly nice for the case where the container is ultimately being exported to a machine image because then if you've defined everything in ansible you have a built-in story for freshening that deployed system later as the playbook evolves.
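The splitting itself is simple in principle; a sketch, with plain dicts standing in for parsed YAML (real plays have handlers, blocks, and vars that complicate this):

```python
def split_playbook(playbook):
    """Split each play into single-task sub-playbooks so a caller
    (e.g. a Dagger pipeline) can run and cache each task separately."""
    for play in playbook:
        # keep the play's metadata (hosts, vars, ...) on every sub-play
        meta = {k: v for k, v in play.items() if k != "tasks"}
        for task in play.get("tasks", []):
            yield [{**meta, "tasks": [task]}]

playbook = [{
    "hosts": "all",
    "tasks": [
        {"name": "install pkg", "apt": {"name": "curl"}},
        {"name": "copy conf", "copy": {"src": "a", "dest": "/etc/a"}},
    ],
}]
subs = list(split_playbook(playbook))  # two one-task sub-playbooks
```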