Comment by le-mark
6 days ago
> I had originally configured the server phoenix with only 12GB swap. I then had to restart ./build_all_fast_glibc.sh a few times because the Fil-C compilation ran out of memory. Switching to 36GB swap made everything work with no restarts; monitoring showed that almost 19GB swap (plus 12GB RAM) was used at one point. A larger server, 128 cores with 512GB RAM, took 8 minutes for Fil-C plus 6 minutes for musl, with no restarts needed.
Yikes, that's a lot of memory! Fil-C is apparently doing a lot of static analysis.
I think that's the build of LLVM+Clang itself.
Yes, linking LLVM takes up a lot of memory. The documented guidance is to allow one link job per 15 GB of RAM [1].
[1] https://llvm.org/docs/CMake.html#frequently-used-llvm-relate...
And, fairly uniquely, LLVM has an LLVM_PARALLEL_LINK_JOBS setting that is distinct from the number of parallel jobs for everything else. I think I was using that 15 years ago.
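For anyone who hasn't used it, a minimal configure sketch (the job count of 2 is illustrative; pick roughly one link job per 15 GB of RAM per the guidance above; note the setting only takes effect with the Ninja generator, since it's implemented as a Ninja job pool):

    # Compile with full parallelism, but cap concurrent link
    # jobs at 2 to bound peak memory during linking.
    cmake -G Ninja ../llvm \
        -DCMAKE_BUILD_TYPE=Release \
        -DLLVM_PARALLEL_LINK_JOBS=2
    ninja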
I wish GCC had it. I have a quad core machine with 16 GB RAM that OOMs on building recent GCC -- 15 and HEAD for sure, can't remember whether 14 is affected. Enabling even 1 GB of swap makes it work. The culprit is four parallel link jobs needing ~4 GB each.
There are only four of them, so a -j8 build (e.g., with HT) is no worse.
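If anyone else hits this, the swap workaround is quick to set up (a sketch; the 1G size and /swapfile path are just example values, and fallocate may not work on every filesystem):

    # Create and enable a temporary 1 GB swap file
    sudo fallocate -l 1G /swapfile
    sudo chmod 600 /swapfile
    sudo mkswap /swapfile
    sudo swapon /swapfile
    # Remove it once the build is done:
    #   sudo swapoff /swapfile && sudo rm /swapfile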
Is that why the Rust toolchain can't be compiled on a 32-bit system?