
Comment by uecker

1 day ago

If you say the header model makes it slower than it could be, you need to compare it to something. I do not see how it causes significant slowdowns in C projects (in contrast to C++). And yes, I have written compilers and (incomplete) preprocessors. I do not understand what you mean by your second point. What separation of interface and implementation allows you to do is updating the implementation without having to recompile other TUs. You can achieve this in other ways as well, but in C this is how it works.
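A minimal sketch of that pattern, with purely illustrative file and function names: the interface lives in a header, the implementation in its own translation unit, and other TUs depend only on the header.

```c
/* counter.h -- the interface that other TUs see */
#ifndef COUNTER_H
#define COUNTER_H
int counter_next(void);   /* declaration only */
#endif

/* counter.c -- the implementation, free to change */
#include "counter.h"
static int value;
int counter_next(void) { return ++value; }

/* main.c -- depends only on the interface */
#include <stdio.h>
#include "counter.h"
int main(void) { printf("%d\n", counter_next()); return 0; }
```

As long as counter.h stays the same, editing counter.c means recompiling counter.c and relinking; main.c does not need to be rebuilt.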

I am not sure how it works in Rust, as you need to monomorphize a lot of things that come from other crates. It seems this would inevitably entangle the compilations.

> I do not see how it causes significant slowdowns in C projects

It's that textual inclusion is just a terrible model. You end up reprocessing the same thing over and over again, everywhere it is used. If you #include <foo.h> 100 times, the compiler has to reparse those contents 100 times, and nested headers amplify the effect. It also works at file-level granularity: if you change a header, every single .c that includes it must be recompiled, even if it did not use the thing that was changed. These issues are widely known.
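As a rough illustration of the re-expansion (hypothetical header names):

```c
/* log.h -- guarded, but still textually re-expanded in every TU that includes it */
#ifndef LOG_H
#define LOG_H
#include <stdio.h>
void log_msg(const char *msg);
#endif

/* a.h */
#include "log.h"
void a_run(void);

/* b.h */
#include "log.h"
void b_run(void);

/* main.c -- the guard prevents double expansion inside this one TU,
 * but every other .c in the project that includes a.h or b.h reads
 * and parses the same text all over again */
#include "a.h"
#include "b.h"
int main(void) { return 0; }
```

Running only the preprocessor, for example with `cc -E main.c | wc -l`, gives a rough sense of how much text a couple of #include directives pull in, and that expansion is repeated for every translation unit that uses them.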

> I do not understand what you mean by your second point. What separation of interface and implementation allows you to do is updating the implementation without having to recompile other TUs.

Sure, but you don't need header files to do this. Due to issues like the above, they cause more things to be recompiled than necessary, not fewer.

> You can achieve this in other ways as well, but in C this is how it works.

Right; my point is that those other ways are better.

> I am not sure how it works in Rust, as you need to monomorphize a lot of things that come from other crates. It seems this would inevitably entangle the compilations.

There are "other crates" precisely because Rust supports separate compilation: each crate is compiled independently, on its own.

The rlib contains the information that the compiler can use for monomorphization when you link two crates together. And it's true that monomorphization can cause a lot of rebuilding.

But to be clear, I am not arguing that Rust compilation is fast. I'm arguing that C could be even faster if it didn't have the preprocessor.

  • > It's that textual inclusion is just a terrible model. You end up reprocessing the same thing over and over again, everywhere it is used.

    One could certainly store the interfaces in some binary format, but is it really worth it? This would also work with headers by using a cache, but nobody does it for C because there is not much to gain. Parsing is fast anyhow, and compilers are smart enough not to look at headers multiple times when protected by include guards. According to some quick measurements, you could save a couple of percent at most.
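
    For reference, a sketch of the guard pattern being referred to (file name is illustrative); GCC and Clang recognize a header whose entire contents are wrapped in such a guard and skip re-reading it on later #includes within the same translation unit:

    ```c
    /* util.h -- classic include guard */
    #ifndef UTIL_H
    #define UTIL_H
    int util_max(int a, int b);
    #endif /* UTIL_H */

    /* some.c -- util.h may be pulled in directly and again via other
     * headers; after the first time, the guard (plus the compiler's
     * multiple-include optimization) keeps it from being parsed again
     * in this translation unit */
    #include "util.h"
    #include "util.h"   /* effectively free the second time */
    ```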

    The advantage of headers is that they are simple, transparent, discoverable, and work with outside tools in a modular way. This goes against the trend of building frameworks that tie everything together in a tightly integrated way, but I prefer the former. I do not think it is a terrible model; quite the opposite, I think it is a much better and nicer model.