
Comment by panstromek

2 days ago

There are many more mundane examples of language design choices in Rust that are problematic for compile times. Polymorphization (which has big potential to speed up compilation) has been blocked on pretty obscure problems with TypeId. Procedural macros require double parsing. The ability to define items in function bodies prevents the compiler from skipping the parsing of bodies. These things are not essential; they could pretty easily be tweaked to be less problematic for compile times without compromising anything.

This is an oversimplification. Automatic polymorphization is blocked on several concerns, e.g. dyn safety (and redesigning the language to make it possible to paper over the difference between dyn-safe and non-dyn-safe traits imposes costs on the static use case), and/or obscure LLVM implementation deficiencies (which was the blocker the last time I proposed a Swift-style ABI to address this).

Procedural macros don't require double parsing; many people do use syn to parse the token stream, but:

1) parsing isn't a performance bottleneck;

2) providing a parsed AST rather than a token stream freezes the AST, which is something the Rust authors deliberately wanted to avoid, rather than being some kind of accident of design;

3) at any point in the future the Rust devs could decide to stabilize the AST and provide a parsed representation, so this isn't anything unfixable that would cause any sort of trauma in the community;

4) proc macro expansions are trivially cacheable if you know you're not doing arbitrary I/O, which is easy to achieve manually today and should absolutely be built into the compiler (if for no other reason than having a sandboxed dev environment), but once again this is easy to tack on in future versions.

As for allowing item definitions in function bodies, I want to reiterate that parsing is not a bottleneck.
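
For reference, the `syn` flow mentioned above, as a minimal sketch (the `HelloWorld` trait and the proc-macro crate setup are assumed): the compiler lexes the item into tokens, the macro re-parses those tokens into `syn`'s own AST, and the emitted tokens are then parsed once more by the compiler.

```rust
use proc_macro::TokenStream;
use quote::quote;
use syn::{parse_macro_input, DeriveInput};

#[proc_macro_derive(HelloWorld)]
pub fn derive_hello_world(input: TokenStream) -> TokenStream {
    // Second parse: syn rebuilds an AST from the raw token stream,
    // independently of the compiler's own (unstable, unexposed) AST.
    let ast = parse_macro_input!(input as DeriveInput);
    let name = &ast.ident;
    // Emit new tokens; the compiler parses these once more.
    quote! {
        impl HelloWorld for #name {
            fn hello() {
                println!("Hello from {}!", stringify!(#name));
            }
        }
    }
    .into()
}
```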

  • AIUI, "Swift-style" ABI mechanisms are heavily dependent on alloca (dynamically-sized allocations on the stack), for which the Rust devs have just proposed backing out of an RFC (i.e. giving up on it as an approved feature in upcoming versions of Rust) because it's too complex to implement, even with existing LLVM support for it.

    • Indeed, an alloca-heavy ABI was what I proposed, and I'm aware that the Rust devs have backed away from unsized locals, but these are unrelated. I was never totally clear on the specific LLVM-related problem with the former (it can't be totally insurmountable, because Swift exists), but people more knowledgeable about LLVM than I am seemed uneasy about the prospect. As for the latter, it's because the precise semantics of unsized locals are undetermined, and it's not clear how to specify them soundly (an assessment that carries a lot of weight coming from Ralf Jung).
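
      For concreteness, a nightly-only sketch of the unsized locals feature in question (feature gates as they stood around the relevant RFC; details may have shifted since):

      ```rust
      #![feature(unsized_locals, unsized_fn_params)]

      // Takes a trait object *by value*: the callee needs a
      // dynamically-sized stack slot (an alloca) to hold it.
      fn call_boxed(f: dyn FnOnce()) {
          f()
      }

      fn main() {
          let f: Box<dyn FnOnce()> = Box::new(|| println!("hi"));
          call_boxed(*f); // moves the unsized value out of the Box
      }
      ```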


  • Just to clarify - polymorphization is not automatic dyn dispatch or anything related to it. I'm talking about a compile-time optimization that avoids duplicating generic functions.

    • Yes, I bring up dyn because the most straightforward way to implement polymorphization would be to conceptually replace usages of `T: Trait` with `dyn Trait`.
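
      A sketch of the idea (illustrative, not how rustc actually implements it):

      ```rust
      use std::fmt::Display;

      // Monomorphization: one copy of this function is compiled for
      // every concrete T it's instantiated with.
      fn show_generic<T: Display>(x: &T) {
          println!("{x}");
      }

      // The conceptual "dyn rewrite": a single shared body that
      // dispatches through a vtable instead of being duplicated.
      fn show_dyn(x: &dyn Display) {
          println!("{x}");
      }

      fn main() {
          show_generic(&1u32); // instantiates show_generic::<u32>
          show_generic(&"hi"); // instantiates show_generic::<&str>
          show_dyn(&1u32);     // one body serves every call
          show_dyn(&"hi");
      }
      ```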

Macros themselves are a terrible hack to work around the lack of proper reflection.

The entire Rust ecosystem would be reshaped in such fascinating ways if we had support for reflection. I'd love to see this happen one day.

  • > Macros themselves are a terrible hack to work around the lack of proper reflection.

    No, I'm not sure where you got this idea. Macros are a disjoint feature from reflection. Macros exist to let you implement DSLs and abstract over syntax.

    • If you look at how macros are mostly used, though, a lot of that stuff could be replaced directly with reflection. Most derive macros, for example, aren't really interested in the syntax of the type they're deriving for, they're interested in its shape, and the syntax is being used as a proxy for that. Similarly, a lot of macros get used to express relationships between types that cannot be expressed at the type system level, and are therefore expressed at a syntactic level - stuff like "this trait is derived for all tuples based on a simple pattern".
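
      As a concrete example of the tuple case (a sketch with a made-up `Describe` trait):

      ```rust
      trait Describe {
          fn describe() -> String;
      }

      // The type system can't abstract over tuple arity, so the
      // relationship is expressed syntactically: one impl is stamped
      // out per listed pattern.
      macro_rules! impl_describe_for_tuples {
          ($( ($($name:ident),+) )+) => {
              $(
                  impl<$($name: Describe),+> Describe for ($($name,)+) {
                      fn describe() -> String {
                          let parts = vec![$(<$name>::describe()),+];
                          format!("({})", parts.join(", "))
                      }
                  }
              )+
          };
      }

      impl_describe_for_tuples! {
          (A)
          (A, B)
          (A, B, C)
      }
      ```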

      There are also proc macros just for creating DSLs, but Rust is already mostly expressive enough that you don't really need this. There are some exceptions, like sqlx, that really do embed a full, existing DSL, but these are much rarer and - I suspect - more of a novelty than a deeply foundational feature of Rust.


    • They are disjoint, but the things you can use them for overlap a lot. In particular, I'd dare say a majority of existing `#[derive]`-style macros might be easier to implement in a hypothetical reflection layer instead.

      Instead of taking a raw token stream for a struct, parsing it with `syn` (duplicating work the compiler does again later), generating the proper methods, and carefully emitting trait checks for the compiler to verify in a later phase, a derive macro could just iterate over the fields directly and check `field.type.implements_trait(Eq)` itself. (Today, for example, `#[derive(Eq)] struct S(u16)` creates an invisible, never-called method whose only job is `let _: ::core::cmp::AssertParamIsEq<u16>;`, so that the compiler can show an error 20s after the incorrectly used macro has finished.)
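
      For reference, the relevant part of the expansion looks roughly like this (simplified from `cargo expand` output; `AssertParamIsEq` is a compiler-internal helper, so this is illustrative rather than code to write by hand):

      ```rust
      struct S(u16);

      impl ::core::cmp::Eq for S {
          // Never called; exists only so the compiler later checks that
          // every field type is itself Eq, surfacing the error long
          // after the macro has run.
          #[doc(hidden)]
          fn assert_receiver_is_total_eq(&self) {
              let _: ::core::cmp::AssertParamIsEq<u16>;
          }
      }
      ```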

      That said, that's just wishful thinking - given how complex trait solving is, supporting the injection of custom code in the middle of it (checking existing traits and adding new trait impls) might make compile times even worse, assuming it's even possible at all. It's also not a clear perf win if a reflection function had to run on each instantiation of a generic type.

  • You've got it the other way around. Macros can implement (nearly) anything, including some form of opt-in type introspection via derive macros.

    They weren't a hack to get reflection. They are a way to codegen stuff easily.