Comment by dhosek
21 hours ago
Because Rust and Swift are doing much more work than a C compiler would? The analysis necessary for the borrow checker is not free, likewise with a lot of other compile-time checks in both languages. C can be fast because it effectively does no compile-time checking beyond basic syntax, so you can pass an int to foo(char) and do other unholy things.
The borrow checker is usually a blip on the overall graph of compilation time.
The overall principle is sound, though: doing some work does take longer than doing no work. But the borrow checker and other safety checks are not the root cause of Rust's compile times.
While the borrow checker is one big difference, it's certainly not the only thing the Rust compiler does on top of C that takes more work.
Stuff like inserting bounds checks puts more work on the optimization passes and the codegen backend, since they simply have to deal with more instructions. That in turn means more symbols and larger sections in the input to the linker, slowing it down too. Even if the frontend "proves" a check is unnecessary, that proof isn't free. Many of those features exist because of the language's safety goals. I doubt the syntax itself makes much of a difference, since the parser isn't normally high on the profiled times either.
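To make the "more instructions" point concrete, here's a minimal sketch (the function names are invented, not from the article) of what plain slice indexing hands to the backend compared with an iterator:

```rust
// Each `xs[i]` compiles to a length comparison plus a panic branch before
// the load. LLVM has to analyze those branches and remove or merge them
// where it can - and that analysis is part of the compile time.
fn sum_first_three(xs: &[u32]) -> u32 {
    xs[0] + xs[1] + xs[2]
}

// The iterator version hands the backend fewer explicit index checks to
// reason about in the first place.
fn sum_first_three_iter(xs: &[u32]) -> u32 {
    xs.iter().take(3).sum()
}
```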
Generally it provides stricter checks that are normally punted to a linter tool in the C/C++ world - and nobody has accused clang-tidy of being fast :P
It truly is not about bounds checks. Index lookups are rare in practical Rust code, and the amount of code generated from them is minuscule.
But it _is_ about the sheer volume of stuff passed to LLVM, as you say, which comes from a couple of places, mostly related to monomorphization (generics), but also many calls to tiny inlined functions. Incidentally, this is also what makes many "modern" C++ projects slow to compile.
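As a rough sketch of where that volume comes from (invented example, nothing from the thread): one generic definition becomes one specialized copy per concrete type it's used with, and each copy is lowered to LLVM IR and optimized separately.

```rust
// One generic definition in the source...
fn largest<T: PartialOrd + Copy>(items: &[T]) -> T {
    let mut best = items[0];
    for &item in &items[1..] {
        if item > best {
            best = item;
        }
    }
    best
}

fn main() {
    // ...but roughly three separate copies in the LLVM IR, one per
    // instantiation: largest::<i32>, largest::<f64>, largest::<u8>.
    println!("{}", largest(&[3_i32, 7, 2]));
    println!("{}", largest(&[1.5_f64, 0.25]));
    println!("{}", largest(&[b'a', b'z']));
}
```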
In my experience, similarly sized Rust and C++ projects seem to see similar compilation times. Sometimes C++ wins due to better parallelization (translation units in Rust are crates, not source files).
These languages do more at compile time, yes. However, I learned from Ryan's Discord server that he did a unity build in a C++ codebase and got similar results (just a few seconds slower than the C code). Also, you could see in the article that most of the time was being spent in LLVM and linking. With a unity build, you nearly cut out the link step entirely. Rust and Swift do some sophisticated things (Hindley-Milner inference, generics, etc.), but I have my doubts that those things cause the most slowdown.
That’s not a good example. Passing an int to foo(char) is analyzed by the compiler and a type conversion is inserted. The language spec might be bad, but this isn’t letting the compiler cut corners.
If you'd like the Rust compiler to operate quickly:
* Make no nested types - these slow compile times a lot
* Include no crates, or ones that emphasize compiler speed
C is still v. fast though. That's why I love it (and Rust).
>Make no nested types
I wouldn't like it that much
This explanation gets repeated over and over again in discussions about the speed of the Rust compiler, but apart from rare pathological cases, the majority of time in a release build is not spent doing compile-time checks, but in LLVM. Rust has zero-cost abstractions, but the zero cost refers to runtime; sadly there's a lot of junk generated at compile time that LLVM has to work to remove. Which it does, very well, but at the cost of slower compilation.
Is it possible to generate less junk? It sounds like the compiler developers took shortcuts that could be improved over time.
Well, zero-cost abstractions are still abstractions. It’s not junk per se, but things that will be optimized out if the IR has enough information to safely do so - basically lots of extra metadata to actually prove to LLVM that these things are safe.
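For example (a made-up function, purely illustrative): this chain costs nothing at runtime, but before optimization it expands into a pile of small generic adapter calls that LLVM has to inline and collapse back into the equivalent of a plain loop.

```rust
// "Zero-cost" at runtime, but not zero-cost to compile: rustc emits the
// iterator adapter types and their generic `next` methods, and LLVM does
// the work of inlining them all away.
fn sum_of_even_squares(xs: &[u64]) -> u64 {
    xs.iter()
        .map(|&x| x * x)
        .filter(|x| x % 2 == 0)
        .sum()
}
```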
You can address the junk problem manually by having generic functions delegate as much of their work as possible to non-generic or "less" generic functions (where a "less" generic function is one that depends only on a known subset of type traits, such as size or alignment). Delegating this way can help the compiler generate fewer redundant copies of your code, even if it can't avoid monomorphization altogether.
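A minimal sketch of that delegation pattern (the names are made up; the standard library uses the same trick in places, e.g. in std::fs):

```rust
use std::io;
use std::path::Path;

// Thin generic shim: only this conversion gets duplicated per concrete
// caller type (&str, String, PathBuf, ...).
pub fn load_config<P: AsRef<Path>>(path: P) -> io::Result<String> {
    load_config_inner(path.as_ref())
}

// Non-generic body: compiled once, so there's a single copy in the LLVM
// IR and in the final binary no matter how many types call the wrapper.
fn load_config_inner(path: &Path) -> io::Result<String> {
    std::fs::read_to_string(path)
}
```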
Probably, but it's the kind of thing that needs a lot of fairly significant overhauls in the compiler architecture to really move the needle on, as far as I understand.