Comment by WhereIsTheTruth

3 days ago

We have solved the "better C" issue, but nobody seems keen on solving the "better compiler" issue.

Why do we still have to recompile the whole program every time we make a change? The only project I am aware of that wants to tackle this is Zig, with binary patching, and that's imo where we should focus our efforts.

C3 does look interesting though; the idea of ABI compatibility with C is pretty ingenious, since you get to tap into C's ecosystem for free.

> Why do we still have to recompile the whole program every time we make a change

That problem was solved decades ago via object files and linkers. Zig needs a different approach because its language features depend on compiling the entire source code as a single compilation unit, but I don't think that C3 has that same "restriction" (not sure though).
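
For illustration, here is a minimal sketch of that object-file workflow in plain C (the file names and the add function are invented for the example). Each .c file compiles to its own .o independently, so changing one file means recompiling just that file and relinking:

    /* math_utils.h -- the shared interface (hypothetical example) */
    int add(int a, int b);

    /* math_utils.c -- one translation unit
     * cc -c math_utils.c -o math_utils.o   (rebuilt only when this file changes) */
    int add(int a, int b) { return a + b; }

    /* main.c -- another translation unit
     * cc -c main.c -o main.o */
    #include <stdio.h>
    #include "math_utils.h"

    int main(void) {
        printf("%d\n", add(2, 3));  /* symbol resolved by the linker, not the compiler */
        return 0;
    }

    /* Relink the existing .o files without recompiling anything:
     * cc main.o math_utils.o -o app */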

To a large extent, this problem is due to slow compilation. It is possible to write a direct-to-machine-code compiler that compiles at greater than one million lines per second. That is more code than I am likely to write in my lifetime. A fast compiler with no need for incremental compilation is a superior default, and it can always be adapted to add incrementalism when truly needed.

C3 doesn't have a recompile-everything model; in fact, it's pretty much designed around supporting separate compilation and dynamic linking (unlike Zig and Odin). It even supports Objective-C-style dynamic calls.
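
As a rough illustration of what that buys you, here is a minimal dlopen sketch in plain C (not C3; "plugin.so" and plugin_entry are hypothetical names). The shared object can be rebuilt on its own and reloaded without recompiling the program that calls it:

    /* loader.c -- build with: cc loader.c -o loader -ldl
     * and the plugin with:    cc -shared -fPIC plugin.c -o plugin.so */
    #include <stdio.h>
    #include <dlfcn.h>

    int main(void) {
        void *handle = dlopen("./plugin.so", RTLD_NOW);
        if (!handle) {
            fprintf(stderr, "dlopen failed: %s\n", dlerror());
            return 1;
        }
        /* Resolve a symbol by name at runtime; the caller only knows
         * the plugin's interface, never its implementation. */
        int (*plugin_entry)(void) = (int (*)(void))dlsym(handle, "plugin_entry");
        if (plugin_entry)
            printf("plugin returned %d\n", plugin_entry());
        dlclose(handle);
        return 0;
    }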

Separate compilation is one solution to the problem of slow compilation.

Binary patching is another one. It feels a bit messy, and I am sceptical that it can be maintained, assuming it works at all.

I think a much better approach would be to make the compilers faster. Why does compiling 1M LOC take more than 1s in unoptimized mode for any language? My guess is that part of the blame lies with bloated backends and metaprogramming (including compile-time evaluation, templates, etc.).

  • Ha, I did not see your post before making mine. You are correct in your assessment of the blame.

    Moreover, I view optimization as an anti-pattern in general, especially for a low level language. It is better to directly write the optimal solution and not be dependent on the compiler. If there is a real hotspot that you have identified through profiling and you don't know how to optimize it, then you can run the hotspot through an optimizing compiler and copy what it does.
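
    As a concrete sketch of that workflow in C (a hypothetical hotspot, invented for the example): write the obvious loop first, and only after profiling flags it, replace it with what an optimizing compiler would have emitted. Here, a bit-counting loop collapses into a single builtin:

        #include <stdint.h>

        /* The straightforward version you write first. */
        unsigned popcount_naive(uint32_t x) {
            unsigned n = 0;
            while (x) {
                n += x & 1u;
                x >>= 1;
            }
            return n;
        }

        /* After profiling shows this is hot, inspect the optimizer's
         * output and hand-write the equivalent: GCC and Clang expose
         * it as a builtin that maps to one instruction on most CPUs. */
        unsigned popcount_fast(uint32_t x) {
            return (unsigned)__builtin_popcount(x);
        }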

> Why do we still have to recompile the whole program everytime we make a change

Are you talking about compiling, or linking, or both?

GNU ld has supported incremental linking for ages, and make-style build systems only recompile things based on file-level dependencies.

I guess recompilation could perhaps be smarter based on what changed or was added to or deleted from a module definition (e.g. a C header file), but this would seem difficult to get right. Maybe you just add a new function to a module, so there's no need to recompile other modules that use it, right? Except what if there is now a name clash, so they would fail if recompiled?
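
To make that name-clash worry concrete, a small hypothetical C example (util.h and old_code.c are invented for illustration). A "smart" rebuilder that skips dependents because the change was only an addition would leave a latent error behind:

    /* util.h -- suppose this declaration was just added to the header. */
    int clamp(int x, int lo, int hi);   /* the newly added function */

    /* old_code.c -- an addition-aware rebuilder might skip this file,
     * reasoning that a new function cannot break existing users... */
    #include "util.h"

    /* ...but this pre-existing local helper now clashes with the new
     * declaration ("static declaration follows non-static declaration"),
     * and the error only surfaces whenever the file is finally rebuilt. */
    static int clamp(int x) {
        return x < 0 ? 0 : x;
    }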

> Why do we still have to recompile the whole program every time we make a change? The only project I am aware of that wants to tackle this is Zig

Lisp solved that problem 60 years ago.

A meta answer to your question, I guess.