Comment by littlestymaar
2 days ago
You're conflating language design and compiler architecture. It's hard to iterate on a compiler to get massive performance improvements, and a rearchitecture can help, but you don't necessarily need to change anything in the language itself to get there.
Roslyn (C#) is the best example of that.
It's a massive endeavor and would need significant funding to happen, though.
Language design can have massive impact on compiler architecture. A language with strict define-before-use and DAG modules has the potential to blow every major compiler out of the water in terms of compile times. ASTs, type checking, code generation, optimization passes, IR design, linking can all be significantly impacted by this language design choice.
No, language design decisions absolutely have a massive impact on the performance envelope of compilers. Think about things like tokenization rules (Zig is designed such that every line can be tokenized independently, for example), ambiguous grammars (the most vexing parse, the lexer hack, etc.), symbol resolution (e.g. explicit imports as in Python, Java, or Rust versus "just dump eeet" imports as in C#, and also whether symbols can be defined after being referenced), and that's before we get to the really big one: type solving.
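To illustrate the grammar point with a concrete example of my own (not from the comment): Rust avoids a C++-style parsing ambiguity around `<` by requiring the "turbofish" syntax in expression position, so the parser never has to guess whether `<` opens a generic argument list or is a comparison operator:

```rust
fn main() {
    // The `::<>` "turbofish" tells the parser unambiguously that `<`
    // opens a generic argument list here.
    let v = Vec::<i32>::new();

    // Without the extra `::`, an expression like `Vec<i32>::new()`
    // could also be read as the comparison chain `(Vec < i32) > ...`,
    // which is why Rust rejects the bare form in expressions.
    assert!(v.is_empty());
}
```

This is a case of a grammar deliberately designed so the parser needs no type information, the opposite of the lexer hack, where a C parser must consult the symbol table to know whether `(A)*b` is a cast or a multiplication.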
The lexer hack is a C thing, and I've rarely heard anyone complain about C compiler performance. That seems more like an argument that the grammar doesn't have as much of an impact on compiler performance as other things do.
Yeah. It's exactly backwards, because good language design doesn't make anything except parsing faster. The problem is that some languages have hideously awful grammars that make things slower than they ought to be.
The preprocessor approach also generates a lot of source code that then needs to be parsed over and over again. The solution to that isn't language redesign, it's to stop using preprocessors.
1 reply →
This kind of comment is funny because it reveals how uninformed people can be while having a strong opinion on a topic.
Yes, grammar can affect how theoretically fast a compiler can be, and yes, the type system adds more or less work depending on how it's designed, but none of these are what makes the Rust compiler slow. Parsing and lexing are a negligible fraction of compile time, and type checking isn't particularly heavy in most cases (with the exception of niche crates that abuse the Turing completeness of the trait system). You're not going to make big gains by changing these.
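For what it's worth, the "Turing completeness of the trait system" can be sketched with type-level Peano numbers — a toy example of mine, not taken from any particular crate — where the trait solver recurses at compile time:

```rust
// Type-level Peano numbers: a minimal sketch of the kind of
// type-level computation the trait system permits. The solver
// unwinds the Succ<...> nesting recursively during compilation.
use std::marker::PhantomData;

#[allow(dead_code)]
struct Zero;
#[allow(dead_code)]
struct Succ<N>(PhantomData<N>);

trait Nat {
    const VALUE: u32;
}

impl Nat for Zero {
    const VALUE: u32 = 0;
}

impl<N: Nat> Nat for Succ<N> {
    const VALUE: u32 = N::VALUE + 1;
}

fn main() {
    // Resolved entirely by the trait solver at compile time.
    assert_eq!(<Succ<Succ<Succ<Zero>>> as Nat>::VALUE, 3);
}
```

Deeply nested or recursive trait obligations like this are where type checking can get expensive, but as the comment says, that's a niche case, not the typical crate.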
The massive gains are to be made later in the pipeline (or earlier, by having a way to avoid re-compiling proc macros and their dependencies before the actual compilation can even start).
Hard agree. Practically all the bottlenecks we run into with Rust compilation have to do with the LLVM passes. The frontend doesn't even come close. (e.g. https://www.feldera.com/blog/cutting-down-rust-compile-times...)
5 replies →