Comment by duped

1 day ago

I think this is mostly a myth. If you look at Rust compiler benchmarks, typechecking isn't _free_, but it's also not the bottleneck.

A big reason that amalgamation builds of C and C++ can absolutely fly is that they aren't reparsing headers and they emit exactly one object file, so the linker has almost no work to do.

Once you add static linking to the toolchain (in all of its forms) things get really fucking slow.

Codegen is also a problem. Rust tends to generate a lot more code than C or C++, so even once the compiler is done with most of its typechecking work, the backend and the assembler still have a lot to churn through.
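
To make the codegen point concrete, here is a minimal sketch (a made-up function, not from any benchmark): one generic definition, three concrete element types, three separate copies of machine code for the backend to grind through.

  // A generic function: rustc emits a separate, fully specialized copy
  // (a "monomorphization") for every concrete T it is used with.
  fn largest<T: PartialOrd + Copy>(items: &[T]) -> T {
      let mut best = items[0];
      for &item in items {
          if item > best {
              best = item;
          }
      }
      best
  }

  fn main() {
      // Three instantiations -> three distinct function bodies reach the
      // LLVM backend, even though the source was written once.
      println!("{}", largest(&[1u32, 5, 3]));
      println!("{}", largest(&[1.5f64, 0.2, 9.9]));
      println!("{}", largest(&[1i64, -4, 7]));
  }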

The meme that static linking is slow or produces anything other than the best executables is demonstrably false and the result of surprisingly sinister agendas. Get out readelf and nm and ps sometime and do the arithmetic: most programs don't link much of glibc (and its static linking is broken by design; musl is better at just about everything). Matt Godbolt has a great talk about how dynamic linking actually works that should give anyone pause.

DLLs got their start when early windowing systems didn't quite fit on the workstations of the era in the late 80s / early 90s.

In about 4 minutes both Microsoft and GNU were like, "let me get this straight, it will never work on another system and I can silently change it whenever I want?" Debian went along because it gives distro maintainers degrees of freedom they like and don't bear the costs of.

Fast forward 30 years and Docker is too profitable a problem to fix by the simple expedient of calling a stable kernel ABI on anything, and don't even get me started on how penetrated everything but libressl and libsodium is. Protip: TLS is popular with the establishment because even Wireshark requires special settings and privileges for a user to see their own traffic, security patches my ass. eBPF is easier.

Dynamic linking moves control from users to vendors and governments at ruinous cost in performance, props up bloated industries like the cloud compute and Docker industrial complex, and should die in a fire.

Don't take my word for it, swing by cat-v.org sometimes and see what the authors of Unix have to say about it.

I'll save the rant about how rustc somehow manages to be slower than clang++ and clang-tidy combined for another day.

  • …surprisingly sinister agendas.

    Dynamic linking moves control from users to vendors and governments at ruinous cost in performance, props up bloated industries...

    This is ridiculous. Not everything is a conspiracy!

    • That's an even more reasonable fear than trusting trust, and people seem to take that seriously.

    • I didn't say anything was a conspiracy, let alone everything. I said inferior software is promoted by vendors on Linux as well as on MacOS and Windows with unpleasant consequences for users in a way that serves those vendors and the even more powerful institutions to which they are beholden. Sinister intentions are everywhere in this business (go read the opinions of the people who run YC), that's not even remotely controversial.

      In fact, if there were anything remotely controversial about the extremely specific, extremely falsifiable claims I made, one imagines your rebuttal would have mentioned at least one.

      I said inflammatory things (Docker is both arsonist and fireman at ruinous cost), but they're fucking true. That Alpine in the Docker jank? It links musl!

  • I think you're confused about my comment and this thread - I'm talking about build times.

    • You said something false and important and I took the opportunity to educate anyone reading about why this aspect of their computing experience is a mess. All of that is germane to how we ended up in a situation where someone is calling rustc with a Dockerfile and this is considered normal.

Not only does it generate more code, the initially generated code before optimizations is also often worse. For example, heavy use of iterators means a ton of generics being instantiated and a ton of call code for setting up and tearing down call frames. This gets heavily inlined and flattened out, so in the end it's extremely well-optimized, but it's a lot of work for the compiler. Writing it all out classically with for loops and ifs is possible, but it's harder to read.
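
For example, here's a rough sketch of the two styles (made-up functions, just to show the shape): the iterator version instantiates `Iter`, `Filter`, `Map` and two closure types that all have to be type checked, codegenned, and then inlined away, while the classic version hands the backend a plain loop from the start.

  // Iterator style: several generic adapters and closures get instantiated,
  // then inlined and flattened by the optimizer into a simple loop.
  fn sum_of_even_squares(v: &[i64]) -> i64 {
      v.iter()
          .filter(|&&x| x % 2 == 0)
          .map(|&x| x * x)
          .sum()
  }

  // Classic style: nothing generic to instantiate, but arguably harder to
  // read once the pipeline grows.
  fn sum_of_even_squares_loop(v: &[i64]) -> i64 {
      let mut total = 0;
      for &x in v {
          if x % 2 == 0 {
              total += x * x;
          }
      }
      total
  }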

  • For loops are sugar around an Iterator instantiation:

      for i in 0..10 {}
    

    translates to roughly

      use std::ops::Range; // the `0..10` literal is just this struct

      let mut iter = Range { start: 0, end: 10 }.into_iter();
      while let Some(i) = iter.next() {}

> Once you add static linking to the toolchain (in all of its forms) things get really fucking slow.

Could you expand on that, please? Every time you run a dynamically linked program, it is linked at runtime (unless it explicitly avoids linking unnecessary stuff by dlopening things lazily, which pretty much never happens). If it is fine to link on every program launch, linking at build time should not be a problem at all.

If you want to have link time optimization, that's another story. But you absolutely don't have to do that if you care about build speed.

The Swift compiler is definitely bottlenecked by type checking. For example, as a language requirement, generic types are left more or less intact after compilation; they are type checked independently of how they are used. This is unlike C++ templates, which effectively copy-paste the template body with the concrete types substituted for every instantiation.

This has tradeoffs: increased ABI stability at the cost of longer compile times.
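
A rough Rust analogy, if it helps (only an approximation of what Swift actually does): Swift's separately-compiled generics behave more like the type-erased `dyn` version below, one shared body with the concrete behaviour reached through a table, while C++ templates (and Rust generics by default) behave like the monomorphized version, a fresh copy per concrete type.

  use std::fmt::Display;

  // Monomorphized: a separate specialized copy is compiled for every T,
  // roughly how C++ templates (and Rust generics) work.
  fn describe_mono<T: Display>(value: T) -> String {
      format!("value = {value}")
  }

  // Type-erased: one shared body, with the concrete behaviour reached
  // through a vtable at runtime (closer in spirit to Swift's
  // separately-compiled generics and their witness tables).
  fn describe_dyn(value: &dyn Display) -> String {
      format!("value = {value}")
  }

  fn main() {
      println!("{}", describe_mono(42));      // instantiates describe_mono::<i32>
      println!("{}", describe_mono("hi"));    // instantiates describe_mono::<&str>
      println!("{}", describe_dyn(&3.14));    // one shared body, no new copy
  }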

  • > This has tradeoffs: increased ABI stability at the cost of longer compile times.

    Nah. Slow type checking in Swift is primarily caused by the fact that functions and operators can be overloaded on type.

    Separately-compiled generics don't introduce any algorithmic complexity and are actually good for compile time, because the generic is type checked once rather than once per template expansion.

  • A lot can be done by the programmer to mitigate slow builds in Swift. Breaking up long expressions into smaller ones and using explicit types where type inference is expensive, for example.

    I’d like to see tooling for this to pinpoint bottlenecks - it’s not always obvious what’s making builds slow.

    • >I’d like to see tooling for this to pinpoint bottlenecks - it’s not always obvious what’s making builds slow.

      I second this enthusiastically.

    • > Breaking up long expressions into smaller ones

      If it improves compile time, that sounds like a bug in the compiler or the design of the language itself.

  • >This is unlike C++ templates, which effectively copy-paste the template body with the concrete types substituted for every instantiation.

    Even this can lead to unworkable compile times, to the point that code is rewritten.

>Codegen is also a problem. Rust tends to generate a lot more code than C or C++

Wouldn't you say a lot of that comes from the macros and (by way of monomorphisation) the type system?
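
For the macro half, a minimal sketch of what I mean (a made-up struct, nothing from a real crate): each derive below expands into a full trait impl that still has to be type checked and codegenned, and generics then multiply that again per instantiation.

  // Each derive is a macro that expands into an entire trait impl before
  // type checking even starts, so this small struct already hands the
  // compiler four derived trait impls to check and compile.
  #[derive(Debug, Clone, PartialEq, Default)]
  struct Point {
      x: f64,
      y: f64,
  }

  fn main() {
      let a = Point::default();                   // derived Default impl
      let b = a.clone();                          // derived Clone impl
      println!("{:?} == {:?}: {}", a, b, a == b); // derived Debug + PartialEq
  }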

  • Modern C++ in particular does a lot of similar, albeit not identical, codegen due to its extensive metaprogramming facilities. (C is, of course, dead simple.) I've never looked into it too much but anecdotally Rust does seem to generate significantly more code than C++ in cases where I would intuitively expect the codegen to be similar. For whatever reason, the "in theory" doesn't translate to "in practice" reliably.

    I suspect this leaks into both compile-time and run-time costs.