My experience asking this question is that effectively no one understands how cross-compilation works (as is also seen here in the response involving nested virtualization)... which is really disappointing, given that it causes even more chaos when people fail to understand that even deploying to the same architecture on Linux should be set up more like a cross-compiled build (to avoid any properties of the build system bleeding into the resulting binary). As far as I can tell, people just think that compilers can only target the system they are on, and that if they want to target other architectures, other operating systems, or even merely older systems, they have to run their build on a machine equivalent to their eventual deployment target.
What do you expect, when many don't even understand how linkers work, and just #include source files, scripting-style, to avoid learning about them?
Installing a compiler toolchain that targets another platform is next level.
I work for a healthcare company, and one of the things that we have to be able to do is reproduce our software for investigations. As a result, we build static cross-compilers pointing at a small system root extracted from the distribution we're building for, but targeting the same architecture we're building on. That way we can ensure that host system dependencies are not embedded in the built result, which means we can pull our compiler and system root out of archive and run them on practically any Linux system.
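Not the commenter's actual setup, but a minimal sketch of the sysroot idea as a small Go build driver, assuming a hypothetical pinned gcc under /opt/toolchains and a sysroot extracted to /opt/sysroots (both paths are illustrative):

    // sysroot_build.go: invoke a pinned compiler against an extracted sysroot,
    // so headers and libraries come from the target distribution rather than
    // from whatever happens to be installed on the build host.
    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        toolchain := "/opt/toolchains/x86_64-gcc/bin/gcc" // pinned compiler pulled from archive (hypothetical path)
        sysroot := "/opt/sysroots/target-distro"          // system root extracted from the target distribution (hypothetical path)

        cmd := exec.Command(toolchain,
            "--sysroot="+sysroot, // resolve includes and libraries inside the sysroot only
            "-o", "app", "main.c",
        )
        cmd.Env = []string{"PATH=/usr/bin:/bin"} // minimal environment, so nothing host-specific leaks in
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            log.Fatal(err)
        }
    }

The same shape works for a genuinely foreign target; the only thing that changes is which compiler and system root get pulled out of the archive.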
We usually keep archives of the software releases (even ones that are really, REALLY old and, for the most part, no longer out in service except for refurbs of old product), but being able to rebuild them, and more importantly to build a fixed version targeting the OS a release originally targeted, is really nice.
Nix would handle this trivially.
Somewhat tangential: cross-compilation seems to have been frowned upon in Unix historically. A lot of things out there just assume HOST==TARGET.
Our workload took nearly 18 minutes to cross-compile on their AMD64 runners. It builds on the AArch64 runners in 4 minutes. (The whole container, I mean.)
That's probably not a cross-compile then, it's an emulated compile. Cross-compiling is basically the same speed.
Sure, you know what I meant. It's an emulated compiler compiling natively. But the point is that building AArch64 containers under emulation sucks, and it doesn't suck under a native build.
It is very slow
I'm probably wrong, but I think this kind of cross-compilation requires nested virtualization, and GHA hosted runners don't support it.
Why would it need virtualization at all? The point of cross-compiling is that you build binaries for a different arch/platform, e.g. running gcc as an x86_64 binary on an x86_64 host, turning C into aarch64 binaries.
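As a concrete illustration (using the Go toolchain rather than gcc, since it makes the target an explicit knob), the build below runs on whatever host executes it and still emits a linux/arm64 binary; no emulation or virtualization is involved. The output name and package path are placeholders:

    // cross_build.go: run the host's Go toolchain but ask it for a linux/arm64
    // binary. The compiler itself runs natively on the host architecture.
    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("go", "build", "-o", "app-linux-arm64", ".")
        // The target is just configuration; duplicate keys in Env are resolved
        // in favor of the last value, so these override anything inherited.
        cmd.Env = append(os.Environ(), "GOOS=linux", "GOARCH=arm64")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            os.Exit(1)
        }
    }

The gcc case is the same shape: an x86_64-hosted, aarch64-targeting compiler (e.g. aarch64-linux-gnu-gcc) plus the right sysroot, and no VM anywhere.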
GHA can do nested virt via KVM. Here is an action that runs a test that boots up a VM running NixOS: https://github.com/aksiksi/compose2nix/blob/main/.github/wor...
You can also run QEMU if you want to build for ARM (although this announcement makes that unnecessary): https://github.com/aksiksi/ncdmv/blob/aa108a1c1e2c14a13dfbc0...
They've supported it for a while.
My OSS Go project runs tests in 18 different OS/architecture combinations.
Some native, some using QEMU binfmt (user mode emulation on Linux), others launching a VM. In particular, that's how I test the BSDs and Solaris.
https://github.com/ncruces/go-sqlite3/wiki/Support-matrix
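Not from the linked repo, but a minimal example of the kind of sanity check that helps in such a matrix: a test that reports which OS/arch actually executed, so an emulated run on the wrong target doesn't go unnoticed:

    // platform_test.go: log which platform the test binary actually ran on.
    // Handy when cross-built test binaries are executed under QEMU binfmt,
    // to confirm the intended target really was exercised.
    package platform_test

    import (
        "runtime"
        "testing"
    )

    func TestReportPlatform(t *testing.T) {
        t.Logf("running on %s/%s", runtime.GOOS, runtime.GOARCH)
    }

A test binary built with something like GOOS=linux GOARCH=arm64 go test -c will then run transparently on an x86_64 host once qemu-user binfmt handlers are registered (for example via docker/setup-qemu-action).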
You can do cross-compilation in GitHub Actions, and testing on QEMU is straightforward. I have a repo that builds for and tests half a dozen emulated targets.