Comment by pizlonator

3 days ago

This is a good write up and I agree with pretty much all of it.

Two comments:

- LLVM IR is actually remarkably stable these days. I was able to rebase Fil-C from LLVM 17 to 20 in a single day of work. In other projects I’ve maintained an LLVM pass that worked across multiple LLVM versions, and it was straightforward to do.

- LICM register pressure is a big issue, especially when the source isn’t C or C++. I don’t think the problem here is necessarily LICM, though. It might be that regalloc needs to be taught to rematerialize.

> It might be that regalloc needs to be taught to rematerialize

It knows how to rematerialize, and has for a long time, but the backend is generally more local/has less visibility than the optimizer. This causes it to struggle to consistently undo bad decisions LICM may have made.
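
To make the tension concrete, here is a minimal hand-written sketch (hypothetical names, not compiler output) of the post-LICM shape being discussed: the loop-invariant multiply sits in the preheader and is live across the entire loop. With many values like it and few registers, the allocator has to either spill it or rematerialize it, i.e. re-emit the cheap multiply next to its use instead of reloading it from the stack.

    ; Hypothetical sketch; assumes %n > 0.
    define i64 @sum_shifted(ptr %a, i64 %n, i64 %k) {
    entry:
      %inv = mul i64 %k, 8        ; loop-invariant: after LICM it lives here for the whole loop
      br label %loop

    loop:
      %i = phi i64 [ 0, %entry ], [ %i.next, %loop ]
      %s = phi i64 [ 0, %entry ], [ %s.next, %loop ]
      %p = getelementptr inbounds i64, ptr %a, i64 %i
      %v = load i64, ptr %p
      %t = add i64 %v, %inv       ; sole use of %inv; cheap to recompute here
      %s.next = add i64 %s, %t
      %i.next = add i64 %i, 1
      %done = icmp eq i64 %i.next, %n
      br i1 %done, label %exit, label %loop

    exit:
      ret i64 %s.next
    }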

  • > It knows how to rematerialize

    That's very cool, I didn't realize that.

    > but the backend is generally more local/has less visibility than the optimizer

    I don't really buy that. It's operating on SSA, so it has exactly the same view as LICM in practice (to my knowledge LICM doesn't cross function boundaries).

    LICM can't possibly know the cost of hoisting, whereas regalloc does have decent visibility into cost. That's why this feels like a regalloc remat problem to me.

    • > to my knowledge LICM doesn't cross function boundary

      LICM is invoked per loop via runOnLoop(), but it runs after function inlining. Inlining enlarges functions, possibly revealing more loop invariants.

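      A hand-written sketch (hypothetical names, not compiler output) of the shape after inlining: what used to be a call on every iteration is now a plain load from memory the loop never writes, so LICM can hoist it into the preheader.

          ; Before inlining:  %len = call i64 @get_len(ptr %c) on every iteration,
          ; which LICM cannot hoist without knowing the callee is side-effect free.
          ; After inlining it is a plain load, and nothing in the loop stores to %c,
          ; so LICM can hoist it.
          define i64 @sum_upto_len(ptr %c, ptr %a) {
          entry:
            br label %header

          header:
            %i = phi i64 [ 0, %entry ], [ %i.next, %body ]
            %s = phi i64 [ 0, %entry ], [ %s.next, %body ]
            %len = load i64, ptr %c            ; inlined body of @get_len
            %more = icmp slt i64 %i, %len
            br i1 %more, label %body, label %exit

          body:
            %p = getelementptr inbounds i64, ptr %a, i64 %i
            %v = load i64, ptr %p
            %s.next = add i64 %s, %v
            %i.next = add i64 %i, 1
            br label %header

          exit:
            ret i64 %s
          }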

"LLVM IR is actually remarkably stable these days."

I'm by no means an LLVM expert, but my takeaway from when I played with it a couple of years ago was that it is more like the union of different languages. Every tool and component in the LLVM universe had its own set of rules and requirements for the LLVM IR that it understands. The IR is more like a common vocabulary than a common language.

My bewilderment about LLVM IR not being stable between versions had given way to understanding that this freedom was necessary.

Do you think I misunderstood?

  • > like the union of different languages

    No. Here are two good ways to think about it:

    1. It's the C programming language represented as SSA form and with some of the UB in the C spec given a strict definition.

    2. It's a low-level representation. It's suitable for lowering other languages to. Theoretically, you could lower anything to it since it's Turing-complete. Practically, it's only suitable for lowering sufficiently statically typed languages to it.
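
    To make point 1 concrete, here is a rough hand-written sketch (not actual clang output) of a one-line C function as LLVM IR: SSA values, C-like integer types, and flags such as nsw pinning down what the C spec leaves undefined (with nsw, signed overflow yields a poison value rather than outright UB).

        ; int add1(int x) { return x + 1; }
        define i32 @add1(i32 %x) {
        entry:
          %r = add nsw i32 %x, 1
          ret i32 %r
        }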

    > Every tool and component in the LLVM universe had its own set of rules and requirements for the LLVM IR that it understands.

    Definitely not. All of those tools have a shared understanding of what happens when LLVM IR executes on a particular target and data layout.

    The only flexibility is that you're allowed to alter some of the semantics on a per-target and per-datalayout basis. Targets have limited power to change semantics (for example, they cannot change what "add" means). Data layout is its own IR, and that IR has its own semantics - and everything that deals with LLVM IR has to deal with the data layout "IR" and has to understand it the same way.
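
    For a sense of what that data layout "IR" looks like, here is a simplified example string (hypothetical, and shorter than any real target's); every pass has to read these fields the same way:

        target datalayout = "e-p:64:64-i64:64-n32:64"
        ; "e"        little-endian
        ; "p:64:64"  pointers are 64 bits wide with 64-bit alignment
        ; "i64:64"   i64 has 64-bit ABI alignment
        ; "n32:64"   native integer widths are 32 and 64 bits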

    > My bewilderment about LLVM IR not being stable between versions had given way to understanding that this freedom was necessary.

    Not parsing this statement very well, but bottom line: LLVM IR is remarkably stable because of Hyrum's law within the LLVM project's repository. There's a TON of code in LLVM that deals with LLVM IR. So, it's super hard to change even the smallest things about how LLVM IR works or what it means, because any such change would surely break at least one of the many things in the LLVM project's repo.

    • > 1. It's the C programming language represented as SSA form and with some of the UB in the C spec given a strict definition.

      This is becoming steadily less true over time, as LLVM IR is growing somewhat more divorced from C/C++, but that's probably a good way to start thinking about it if you're comfortable with C's corner case semantics.

      (In terms of frontends, I've seen "Rust needs/wants this" as much as Clang these days, and Flang and Julia are also pretty relevant for some things.)

      There's currently a working group in LLVM on building better, LLVM-based semantics, and the current topic du jour of that WG is a byte type proposal.

    • Thanks for your detailed answer. You encouraged me to give it another try and have a closer look this time.

  • This take makes sense in the context of MLIR's creation; MLIR introduces dialects, which are namespaces within the IR. Given that it was created by Chris Lattner, I would guess he saw these problems with LLVM as well.

There is a rematerialization pass; there is no real reason to couple it with register allocation. LLVM's regalloc is already somewhat subpar.

What would be neat is to expose all the right knobs and levers so that frontend writers can benchmark a number of possibilities and choose the right values.

I can understand this is easier said than done of course.

  • > There is a rematerialize pass, there is no real reason to couple it with register allocation

    The reason to couple it to regalloc is that you only want to remat if it saves you a spill.

    • Remat can produce a performance boost even when everything has a register.

      Admittedly, this comes up more often in non-CPU backends.
