Tiny C Compiler

9 hours ago (bellard.org)

The code of TCC (0.9.26) is kind of hard to compile, I have discovered in the past year while developing a minimal C compiler to compile the TCC sources [1]. For that reason, I have concluded that TCC is its own test set. It uses the constant 0x80000000, which is an edge case if you want to print it as a signed integer using only 32-bit operations. There is a switch statement with a post-increment operator in the switch expression. There are also switch statements with fall-throughs and with goto statements in the cases. It uses the ## operator where the result is the name of a macro. Just to name a few.

[1] https://github.com/FransFaase/MES-replacement

  • You have simply made the one tiny step that the guys who used AI and $25,000 to write a C compiler in Rust could not make:

    You are using the compiler to compile itself.

    "TCC is its own test set." Absolutely brilliant.

    • Back in the 90s gcc did a three-stage build to isolate the result from weaknesses in the vendor's native compiler (so: the vendor compiler builds gcc0, gcc0 builds gcc1, gcc1 builds gcc2 - and you compare gcc2 to gcc1 to look for problems). It was popularly considered a "self test suite" until someone did some actual profiling and concluded that gcc only needed about 20% of gcc to compile itself :-)

  • To be honest, these all seem like pretty basic features.

    Goto is easier to implement than an if statement. Postincrement behaves no differently in a switch statement than elsewhere.

One of the coolest tricks is using tcc to compile "on demand." This allows you to use a compiled language like Nim for scripting, with almost no noticeable performance difference compared to interpreted languages.

  #!/usr/bin/env -S nim r --cc:tcc -d:useMalloc --verbosity:0 --hints:off --tlsEmulation:on --passL:-lm
  echo "Hello from Nim via TCC!"
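
The same trick works for plain C: tcc skips a leading `#!` line, so a C source file can be made directly executable (a sketch assuming tcc is installed at the path shown):

```c
#!/usr/bin/tcc -run
#include <stdio.h>

int main(void)
{
    printf("Hello from C via tcc -run!\n");
    return 0;
}
```

Mark the file executable and run it like any other script; tcc compiles it to memory and runs it on the spot.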

Here's a comparison (bash script at [1]) of a minimal binary compiled this way against various interpreters. The first line is the measurement noise. Measured with tim [2], written by @cb321.

    1.151 +- 0.028 ms     (AlreadySubtracted)Overhead
    1.219 +- 0.037 ms     bash -c exit
    2.498 +- 0.040 ms     fish --no-config --private -c exit
    1.682 +- 0.058 ms     perl -e 'exit 0'
    1.621 +- 0.043 ms     gawk 'BEGIN{exit 0}'
   15.8   +- 2.2   ms     python3 -c 'exit(0)'
   20.0   +- 5.7   ms     node -e 'process.exit(0)'
   -2.384 +- 0.041 ms     tcc -run x.c
  153.2   +- 4.6   ms     nim r --cc:tcc x.nim
  164.5   +- 1.2   ms     nim r --cc:tcc -d:release x.nim

Measured on a laptop without any care taken to clean the environment, except switching to the performance governor. Even with `-d:release`, compiling the Nim code is comparable.

The fact that the tcc compile-and-run cycle measures negative here is a nice punchline.

[1]: https://gist.github.com/ZoomRmc/58743a34d3bb222aa5ec02a5e2b6...

[2]: https://github.com/c-blake/bu/blob/main/doc/tim.md

  • It's worth pointing out that Nim is going to cache all of the compilation up to the linking step. If you want to include the full compilation time, you'd need to add --forceBuild to the Nim compiler.

    (Since a lot of the stuff you'd use this for doesn't change often, I don't think this invalidates the "point", since it makes "nim r" run very quickly, but still)

    There's also the Nim interpreter built into the compiler, "NimScript", which can be invoked like:

      #!/usr/bin/env -S nim e --hints:off
      echo "Hello from Nim!"
    

    The cool thing is that, without --forceBuild, Nim + TCC (as a linker) has a faster startup time than NimScript. But if you include compile time, NimScript wins.

    • Yep, I always forget about '--forceBuild'. You can see in the script above that the nimcache directory was overridden to tmpfs for the measurement, though. Caching will be helpful in real use cases, of course.

      NimScript is cool but very limited, not being able to use the parts of the stdlib that depend on C. Hope this will change with Nimony/Nim 3.

The unofficial repo continuing tcc has geoblocked the UK.

https://repo.or.cz/tinycc.git

There is an actively maintained fork with RISC-V support and such:

https://repo.or.cz/w/tinycc.git

https://github.com/TinyCC/tinycc

  • I've never seen another repo with public commit access like that. I guess the project is niche enough that you don't get spammed with bad or malicious commits.

    • Yeah it's basically anarchy (to some extent)

      https://repo.or.cz/h/mob.html

      >The idea is to provide unmoderated side channel for random contributors to work on a project, with similar rationale as e.g. Wikipedia - that given enough interested people, the quality will grow rapidly and occassional "vandalism" will get fixed quickly. Of course this may not work nearly as well for software, but here we are, to give it a try.

    • When pugs (a perl6 implementation in Haskell) was a thing, you gained commit access by asking and it was immediately granted to everyone. It was insane and awesome.

  • I would be interested in contributing to this but the UK is geoblocked.

    • Are you sure you are geoblocked, and that it's not just the updated SSH host key change from 2022?

      Actually, geoblocks can be confounding, of course. After Brexit I've personally thought of blocking UK phone numbers from calling me, though... so it could just as well be intentional.

  • It is also interesting to note that while the repository is quite active, there has not been any release for _8 years_, and the website is the same one at the top of this conversation, i.e. the one where the old maintainer says he quit and the benchmarks are from 20 years ago.

    A small and minimalistic C compiler is actually a very important foundational project for the software world IMNSHO.

    I'm definitely reminded of: https://xkcd.com/2347/

Does anyone use libtcc for a scripting language backend? Smaller and faster than llvm. You'd have to transpile to a C ast I imagine.

  • Years ago I built a scripting language that transpiled to C and used TCC to compile it to machine code in memory. It produced human-readable C code, so it was very easy to get going: when debugging the compiler I could just look at the generated C without having to learn any special infrastructure/ecosystem/syntax, etc. Plus basically zero-overhead interop with C out of the box => immediate access to a lot of existing libraries (although a few differences in calling conventions between TCC and GCC did bite me once). Another feature I had was "inline C" if you wanted to go low level; it was super trivial to add, too. It was pretty fast, maybe two times slower than GCC output, IIRC, but more than enough for a scripting language.

  • libtcc doesn't give you much control AST wise, you basically just feed it strings. I'm using it for the purpose you mentioned though--scripting language backend--since for my current "scripting-language" project I can emit C89, and it's plenty fast enough for a REPL!

        /* add a file (either a C file, dll, an object, a library or an ld script). Return -1 if error. */
        int tcc_add_file(TCCState *s, const char *filename);
    
        /* compile a string containing a C source. Return non zero if error. */
        int tcc_compile_string(TCCState *s, const char *buf);
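
The string-feeding workflow mentioned above can be sketched like this, following the shape of the `libtcc_test.c` example shipped with TCC (this targets the 0.9.2x-era API; newer trees changed `tcc_relocate`'s signature):

```c
#include <stdio.h>
#include <libtcc.h>

int main(void)
{
    TCCState *s = tcc_new();
    if (!s) return 1;
    tcc_set_output_type(s, TCC_OUTPUT_MEMORY);

    /* feed it a string -- there is no AST-level entry point */
    if (tcc_compile_string(s, "int add(int a, int b) { return a + b; }") == -1)
        return 1;

    /* resolve relocations into executable memory */
    if (tcc_relocate(s, TCC_RELOCATE_AUTO) < 0)
        return 1;

    /* look up the freshly compiled function and call it */
    int (*add)(int, int) = tcc_get_symbol(s, "add");
    if (!add) return 1;
    printf("add(2, 3) = %d\n", add(2, 3));

    tcc_delete(s);
    return 0;
}
```

Build with `-ltcc`; the whole compile-to-memory round trip is fast enough that a REPL can recompile on every input line.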

What's the quality of the generated code like? Does it use explicit stack frames and all local variables live there? Does it move loop-invariant operations out of a loop? Does it store variables in registers?

Anyone know a good resource for getting started writing a compiler? I'm not trying to write a new LLVM, but being a "software engineer" writing web-based APIs for a living is leaving me wanting more.

I recall there were similar items back in the late 70s and early 80s.

Tiny C and Small C are names I seem to recall, but it's very fuzzy - not sure if they were compilers; they may have been interpreters....

TCC is my go-to for keeping builds lean. On Windows specifically, you get a functional C compiler in a few hundred KB, whereas the standard alternatives require gigabytes of disk space (that I don't have to spare) and complex environment setups.

This was the compiler I was required to use for my courses in university. GCC was forbidden. The professor just really liked tcc for some reason.

  • > The professor just really liked tcc for some reason.

    Perhaps, or maybe they just got tired of students coming in and claiming that their program worked perfectly on such-and-such compiler.[1] It looks like tcc would run on most systems from the time of its introduction, and perhaps some that are a great deal older. When I took a few computer science courses, they were much more restrictive. All code had to be compiled with a particular compiler on their computers, and tested on their computers. They said it was to prevent cheating but, given how trivial it would have been to cheat with their setup, I suspect it had more to do with shutting down arguments with students who came in to argue over grades.

    [1] I was a TA in the physical sciences for a few years. Some students would try to argue anything for a grade, and would persist if you let them.

    • The prof could have just said "Use GCC <version>" then, which would run on even more systems than TCC. Professor probably just really liked TCC.

  • Seems like a good way to get students to write C rather than GNU C.

    • The professor could have just insisted on `-std=c99` or a similar GCC flag which disallows GNU extensions.

      When I taught programming (I started teaching 22 years ago), the course still had students either use GCC with their university shell accounts or, if they were Windows people, use Borland C++, which we could provide under some kind of fair-use arrangement IIANM, and which worked within a command shell on Windows.


Currently striving towards my own TypeScript to native x86_64 physical compiler quine bootstrapped off of TCC and QuickJS. Bytecode and AST are there!

  • This sounds like a really cool project. What challenges have you encountered so far?

    • Thanks. The hardest part has been slogging through the segfaults and documenting all the unprincipled things I've had to add. Post-bootstrap, I have to undo it all, because my IR is a semantically rich JSON format that is Turing-incomplete by design. I'm building a substrate for rich applications over bounded computation, like eBPF but for applications and inference.

What a blast from the past TCC!

Sad but not surprised to see it's no longer maintained (8 years ago!).

Even in the era of terabyte NVMe drives my eyes water when I install MSVC (and that's usually just for the linker!)

  • That is, I believe, one of the many points of contact between AI and Open Source. Something like TCC, with a good coding agent and a developer who cares about the project and knows enough about it, can turn into a project that can be maintained without the otherwise large effort whose absence led to it being abandoned. I'm resurrecting many projects of mine I no longer had the time to handle, like dump1090, linenoise, ...

  • Still maintained. You have the mob repo in another comment.

    Debian, Fedora, Arch and others pull their package from the mob repo. They're pretty good at pulling in CVE fixes almost immediately.

    Thomas Preud'homme is the new lead maintainer, though development follows the mob approach.

TCC is fantastic! I use it a lot to do fast native-code generation for language projects, and it works really really well.

Man I can't wait for tcc to be reposted for the 4th time this week with the license scrubbed and the comment of "The Latest AI just zero-shotted an entire C compiler in 5 minutes!"