Comment by CrendKing

3 days ago

Why can't Cargo have a system like PyPI, where the library author uploads a compiled binary (even with their specific flags) for each Rust version/platform combination, and if that binary is missing for a given combination, it falls back to a local compile? Imagine `cargo publish` handling the compile+upload task, and crates.io being changed to also host binaries.

> Why can't Cargo have a system like PyPI where library author uploads compiled binary

Unless you have perfect reproducible builds, this is a security nightmare. Source code can be reviewed (and there are even projects to share databases of already reviewed Rust crates; IIRC, both Mozilla and Google have public repositories with their lists), but it's much harder to review a binary, unless you can reproducibly recreate it from the corresponding source code.

  • Yet other ecosystems handle it just fine, regardless of security concerns, by having signed artifacts and configurable hosting as an option.

  • > Unless you have perfect reproducible builds

    Or a trusted build server doing the builds. There is a build-bot building almost every Rust crate already for docs.rs.

    • docs.rs is just barely viable because it only has to build crates once (for one set of features, one target platform etc.).

      What you propose would have to build each crate for at least the 8 Tier 1 targets, if not also the 91 Tier 2 targets. That would already be either 8 or 99 binaries.

      Then consider that it's difficult to anticipate which feature combinations a user will need. For example, the tokio crate has 14 features [1]. 14 independent features give 2^14 = 16384 possible configurations that would all need to be built. Now, to be fair, these feature choices are not completely independent, e.g. the "full" feature selects a bunch of other features (see the sketch at the end of this comment). Taking those overlaps out, I'm guessing we end up with (ballpark) 5000 reasonable configurations. Multiply that by the number of build targets, and we need to build either 40000 (Tier 1 only) or 495000 binaries for just this one crate.

      On top of that, the interface of dependency crates can change between versions, so the tokio crate would either have to pin exact dependency versions (which would be DLL hell, and is why exact version locking is not common for Rust libraries), or we would need to build the tokio crate separately for every dependency version change that is ABI-incompatible somewhere. But even without that, storing tens of thousands of compiled variants is very clearly untenable.

      Rust has very clearly chosen the path of "pay only for what you use", which is why all these library features exist in the first place. But because they do, offering prebuilt artifacts is not viable at scale.

      [1] https://github.com/tokio-rs/tokio/blob/master/tokio/Cargo.to...
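
      To make the combinatorics concrete, here is a minimal, purely hypothetical manifest sketch (illustrative names, not tokio's actual feature list). With n independent features a crate has 2^n possible feature sets per target; umbrella features like "full" reduce that count, but nowhere near enough.

          # hypothetical Cargo.toml excerpt -- feature names are made up
          [features]
          default = []
          net = []
          fs = []
          time = []
          # umbrella feature: overlaps like this are why the real count is
          # below 2^n, but it is still in the thousands when n = 14
          full = ["net", "fs", "time"]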

  • I don’t think it’s that much of a security nightmare: the basic trust assumption that people make about the packaging ecosystem (that they trust their upstreams) remains the same whether they pull source or binaries.

    I think the bigger issues are probably stability and size: no stable ABI combined with Rust’s current release cadence means that every package would essentially need to be rebuilt every six weeks. That’s a lot of churn and a lot of extra index space.

    • If you have reproducible builds, it's no different. Without them, binaries are a nightmare in that you can't easily link a given binary back to a given source snapshot. Deciding to trust my upstream is all well and good, but if it's literally impossible to audit them, that's not a good situation to be in.


    • > remains the same whether they pull source or binaries.

      I don't think that's exactly true: it's definitely _easier_ to sneak something into a binary without people noticing than into Rust source. But there hasn't been an underhanded Rust competition for a while, so I guess it's hard to be objective about that.


It runs counter to Cargo's current model, where the top-level workspace has complete control over compilation, including dependencies and compiler flags. I've been floating an idea of "opaque dependencies" that are like Python depending on C libraries, or a C++ library depending on a dynamic library.
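
For concreteness, here is a sketch of what such an "opaque dependency" might look like in a manifest. This is purely hypothetical syntax; no `opaque` key exists in Cargo today, and the crate name is made up.

    # hypothetical Cargo.toml -- the `opaque` key is not real Cargo syntax
    [dependencies]
    # the workspace would consume this like a prebuilt C library: it takes the
    # artifact as-is and gives up control over its compiler flags and features
    codec-sys = { version = "1.2", opaque = true }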

That would work for debug builds (and that's something I would appreciate), but not for release, as most of the time you want to compile for the exact CPU you're targeting, not just for, say, “x86 Linux”, to make sure your code is properly optimized with SIMD instructions.
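
For reference, a minimal sketch of how a project can opt into CPU-specific codegen today. The `[build]` rustflags key in `.cargo/config.toml` and the `-C target-cpu` flag are standard, though `native` is only appropriate when the build machine matches the deployment machine.

    # .cargo/config.toml -- minimal sketch
    [build]
    # let rustc use every instruction-set extension of the build machine's CPU
    rustflags = ["-C", "target-cpu=native"]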

A trustworthy distributed cache would also work very well for this in practice. Cargo works with sccache, and using Bazel + RBE can work even better.
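
For anyone who wants to try the sccache route, a minimal sketch of the usual wiring; the `rustc-wrapper` key is a real Cargo option, and this assumes sccache is already installed and configured.

    # .cargo/config.toml -- minimal sketch, assumes sccache is on PATH
    [build]
    # route every rustc invocation through sccache
    rustc-wrapper = "sccache"
    # a shared backend (S3, GCS, Redis, ...) is configured on the sccache side,
    # so multiple machines can reuse each other's compiled artifacts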