Comment by jcalvinowens

3 days ago

> On the one hand, compilers would like to exploit the strong guarantees of the type system—particularly those pertaining to aliasing of pointers—in order to unlock powerful intraprocedural optimizations.

How true is this really?

Torvalds has argued for a long time that strict aliasing rules in C are more trouble than they're worth, and I find his arguments compelling. Here's one of many examples: https://lore.kernel.org/all/CAHk-=wgq1DvgNVoodk7JKc6BuU1m9Un... (the entire thread is worth reading if you find this sort of thing interesting)

Is Rust somehow fundamentally different? Based on limited experience, it seems not (at least, when unsafe is involved...).

I would agree that C's strict aliasing rules are terrible. The rules we are proposing for Rust are very different. They are both more useful for compilers and, in my opinion, less onerous for programmers. We also have an actual in-language opt-out: use raw pointers. And finally, we have a tool you can use to check your code.

But in the end, it's a trade-off, like everything in language design. (In life, really. ;) We think that in Rust we may have found a new sweet spot for this kind of optimization. Time will tell whether we are right.

  • As someone who has been writing a lot of unsafe Rust (mostly in an embedded context), I'm thrilled about and thankful for the work that you, your students, and the opsem team are doing.

    When you're working with anything below the application level, C's confusing and underspecified rules about UB are almost impossible to keep track of, especially when it comes to aliasing and volatile/MMIO. The spec is so difficult to read and full of complicated cross-references that to actually get a practical answer you have to look for a random Stack Overflow post that may or may not have a correct interpretation of the spec, and may or may not address your specific problem.

    Rust right now feels a lot harder to work with, because the spec isn't done. When you have a concrete question about a piece of code, like "is this conversion from an &mut to a *mut and back sound", and you try to look for documentation on it, you get either "nobody knows, Rust's aliasing model isn't defined yet"; a hand-wavy explanation that is not rigorous or specific; or a model like Stacked Borrows or Tree Borrows that's defined a little too formally for easy digestion :)

    But when I really started digging, I realized just how much cleaner Rust's semantics are. References aren't actually hard: Tree Borrows basically boils down to "while an &mut reference is live, you can only access the value through pointers or references derived from that reference". Pointer operations have straightforward semantics, there's no confusing notion of typed memory, and no UB "just because" for random things like integer overflow. It's just so much less complicated to understand than C's abstract machine.
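
    That one-line rule can be seen in a tiny sketch (my own toy illustration, not from the thread; the bad variant is what Miri flags under `-Zmiri-tree-borrows`):

```rust
// Sketch of the rule "while an &mut is live, access the value only
// through pointers/references derived from that reference".
fn main() {
    let mut x = 0i32;
    let p = &mut x as *mut i32; // parent raw pointer
    let r = unsafe { &mut *p }; // &mut derived from p

    *r = 1; // fine: access through r itself
    // unsafe { *p = 2; } // writing through the parent p here and
    //                    // then using r again below would violate
    //                    // the rule (UB under Tree Borrows)
    *r = 3; // r is still live up to this point

    assert_eq!(x, 3); // reading x directly ends r's lifetime first
}
```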

    I'm really looking forward to things like MiniRust, and to an aliasing model making it into the Reference / other documentation, because at that point I feel like unsafe Rust will be way easier to write confidently and correctly than C.

    Congrats on the publication, and thanks again for the work you all have put into this.

  • Thanks for replying Ralf. I'm barely qualified to have an opinion about these things, if I am at all :)

    > The rules we are proposing for Rust are very different. They are both more useful for compilers and, in my opinion, less onerous for programmers.

    My question was too vague; what I meant to ask was: what aliasing optimizations will be possible in Rust that aren't possible in C?

    Example 18 in the paper is one, if I'm understanding it. But that specific example, with a pointer passed to a function, seems analogous to what is possible with 'restrict' in C, and I'm struggling to come up with more general examples that don't involve gratuitous globals.

    It seems to me like having to allow for the possibility of unsafe constrains the ability to do "heroic" optimizations such as what jcranmer described elsewhere in the thread. If that is true to some extent, is there a future where Rust might optimize more aggressively if the programmer promises not to use unsafe anywhere in the program? That's something I've always been curious about.

Rust's aliasing rules are very different from C's.

In C you have the nuclear `restrict`, which in my experience only does anything when applied to function arguments on clang & gcc, and type-based aliasing, which is both not generally usable (you don't have infinite different copies of the int64_t type, and probably wouldn't want such either) and annoying (it forces using memcpy if you want to reinterpret memory as a different type).

Whereas with Rust references you have finely-bounded lifetimes, spans, and mutability, and the model doesn't actually care about the "physical" types. So it is possible to reinterpret the same memory as `&mut i32`/`&i32` and as `&mut i64`/`&i64`, switching between the two, writing and reading halves of (or multiple) values with the most bog-standard safe Rust reads & writes, as long as the unsafe abstraction never gives you overlapping `&mut` references at the same time (splitting a `&mut` into multiple non-overlapping `&mut`s is fine).
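
A hedged sketch of that reinterpretation (my own toy example): the same eight bytes viewed first as `i64` and then as `i32`, which C's type-based rules would forbid but Rust allows because the borrows never overlap in time:

```rust
// The same 8 bytes reinterpreted as i64 and then i32 via raw pointers.
// Sound in Rust because aliasing is tracked per borrow, not per type;
// we just never hold two overlapping &mut at the same time.
fn main() {
    let mut buf: u64 = 0;
    let p = &mut buf as *mut u64;

    // View the memory as &mut i64 and write through it.
    let as_i64: &mut i64 = unsafe { &mut *(p as *mut i64) };
    *as_i64 = -1;
    // as_i64 is never used again, so we may reborrow from p.

    // Now view the first 4 bytes as &mut i32 and overwrite them.
    let as_i32: &mut i32 = unsafe { &mut *(p as *mut i32) };
    *as_i32 = 7;

    // Read back through a fresh shared reference to the same bytes.
    let lo: &i32 = unsafe { &*(p as *const i32) };
    assert_eq!(*lo, 7);
}
```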

  • > don't have infinite different copies of the int64_t type

    You can make some, though!

    Basically, the idea is to define a class template NoAlias<T, Tag> that contains a single value of type T. Implement operators etc. to make the type useful in practice for working with the wrapped value. Type-based alias rules mean that an access to the value wrapped in NoAlias<int64_t, Tag1> can never alias a value wrapped in NoAlias<int64_t, Tag2>. (Tag1 and Tag2 can just be forward-declared structs that are never defined.)

    One time, I even encountered a situation where this was mildly useful.

Take anything Linus says about compilers with a grain of salt--he writes OS kernels, not compilers, and those are pretty different domains.

Alias analysis is extremely important for getting good performance these days--but it should also be remembered that the biggest benefits accrue from the simplest heuristics (like two loads that use the same SSA value as the pointer must alias each other). In LLVM terms, that's BasicAA: a collection of very simple heuristics that primarily amounts to "if we can track down the allocation sites of objects, we can definitively resolve these alias queries; otherwise, we don't know."

The real question that you're trying to ask, though, is what is the value of alias analyses that go beyond the most basic, obvious tests. At the point where the alias queries are no longer trivial to solve, then it's generally the case that what you can do as a result of those queries also shrinks dramatically, pretty much to looking for code motion hazards, and the benefits you get from that are much reduced. One of the experiments I would like to do is measure the total speedup you'd get from a theoretically perfect alias analysis, and my guess is that it's somewhere in the 20% range even on non-HPC code like the Linux kernel [1].

[1] This doesn't account for the heroic optimizations, such as data-layout transformations, that you wouldn't attempt to write without a very high-quality alias analysis. But since we already know that a perfect alias analysis doesn't exist in practice, we're not going to attempt those optimizations anyway, so it's not worth including such stuff in prospective speed gains.

  • > Take anything Linus says about compilers with a grain of salt

    I think he's making an argument about CPU behavior here more than about compilers: if we call loads and stores which aliasing optimizations might remove "redundant", he's saying that modern machines with big caches and store buffers make those "redundant" operations so cheap they don't matter in practice for most workloads.

    Of course, it's admittedly an oversimplification to reduce aliasing optimizations to simply eliminating loads and stores, you described things which go beyond that.

    However, I just ran a silly little test where I added `-fno-strict-aliasing` to CFLAGS and rebuilt the world on one of my gentoo build machines; it only got 1.5% slower at compiling an allmodconfig linux kernel (gcc-14):

        Before: 14m39.101s 14m42.462s 14m41.497s 14m44.540s
        After:  14m54.354s 14m54.415s 14m55.580s 14m55.793s
    

    That's on a shiny new znver5.

  • Here's another data point: https://lobste.rs/s/yubalv/pointers_are_complicated_ii_we_ne...

    > I spoke to Apple folks when their compiler team switched the default to strict aliasing. They reported that it made key workloads 5-10% faster and the fixes were much easier to do and upstream than I would have expected. My view of -fstrict-aliasing at the time was that it was a flag that let you generate incorrect code that ran slightly faster. They had actual data that convinced me otherwise.

    • OTOH we have https://web.ist.utl.pt/nuno.lopes/pubs/ub-pldi25.pdf which shows that on a range of benchmarks, disabling all provenance-based reasoning (no strict aliasing, no "derived from" reasoning, only straightforward "I know the addresses must be actually different") has a much smaller perf impact than I would ever have expected.

    • I don't understand how they convinced him that the code remained 'correct' with that limited set of aliasing patches. Did they do 'crash ratio' monitoring? Disclaimer: I had to do that in gamedev on 'post-launch' for a couple of projects. Pretty nasty work.

    • The counterpoint to this is that if you're willing to accept patching upstream as an option to make "key workloads" (read: partial benchmarks) perform better, what's stopping you from adding the necessary annotations instead?

      The answer is basically that you were never going to, you're just externalizing the cost of making it look like your compiler generates faster code.

  • Maybe Linus, writing kernels rather than compilers, has even more to say, because his use case is much more practical than anything compiler designers could imagine.

    • That's kind of a rude thing to say, suggesting that compiler developers aren't real programmers. Nevertheless, it is the case that most optimizations in the compiler are motivated by real issues with somebody's code, as opposed to compiler engineers thinking up optimizations in their heads.

    • Compilers are practical too, software tools used all the time. They're just as fundamental to building software as an operating system kernel is to running software. I don't know what point you're trying to make.

> How true is this really?

I’d be interested to see a more thorough analysis, but there is a simple way to gauge this - rip out all the parts of the compiler where aliasing information is propagated to LLVM, and see what happens to performance.

I found a claim that noalias contributes about 5% performance improvement in terms of runtimes[0], though the data is obviously very old.

https://github.com/rust-lang/rust/issues/54878#issuecomment-...

  • > I found a claim that noalias contributes about 5% performance improvement

    Note that that comment is implying a range, from 0% improvement on some benchmarks to 5% improvement on others. It suggests that 5% is generally in the ballpark of the upper bound of what you should expect from putting noalias on mutable references, but that some specific cases could see better results.

While both involve aliasing, C's strict aliasing and Rust's aliasing are two different things. Rust pretty explicitly did not adopt the C style.

C's aliasing is based on type alone, hence its other name "type based alias analysis" or TBAA.

  • And TBAA is much easier for programmers to wrangle than the aliasing of Rust, right? The corresponding aliasing feature for C would be _restrict_, which is rarely used.

    Though Linus and Linux turn off even strict aliasing/TBAA.

    • I don't think TBAA is easier to wrangle at all, or rather, it's almost never relevant except for allowing the compiler to make obviously, trivially true assumptions. Without it (all pointers could alias all the time), any C function taking more than one pointer argument would compile to a very surprising series of instructions, reloading from memory every single time any pointer is dereferenced.

      Rust's rules are very simple and easy to get right - not least because breaking them is a compiler error.

While I can't name the product I work on, we also use -fno-strict-aliasing. The problem with these optimisations is that they can only be done safely if you can prove aliasing never happens, which is equivalent to solving the halting problem in C++. In Rust I suspect the stronger type system can actually prove that aliasing doesn't happen in select cases. In any case, I can always manually do the optimisations enabled by strict aliasing in hot code, but I can never undo a customer losing data due to miscompilation.

  • > actually prove that aliasing doesn't happen in select cases

    In the safe subset of Rust it's guaranteed in all cases. Even across libraries. Even in multi-threaded code.

    • To elaborate on that some more, safe Rust can guarantee that mutable aliasing never happens, without solving the halting problem, because it forbids some programs that could've been considered legal. Here's an example of a function that's allowed:

          fn foo() {
              let mut x = 42;
              let mut mutable_references = Vec::new();
              let test: bool = rand::random();
              if test {
                  mutable_references.push(&mut x);
              } else {
                  mutable_references.push(&mut x);
              }
          }
      

      Because only one if/else branch is ever allowed to execute, the compiler can see "lexically" that only one mutable reference to `x` is created, and `foo` compiles. But this other function that's "obviously" equivalent doesn't compile:

          fn bar() {
              let mut x = 42;
              let mut mutable_references = Vec::new();
              let test: bool = rand::random();
              if test {
                  mutable_references.push(&mut x);
              }
              if !test {
                  mutable_references.push(&mut x); // error: cannot borrow `x` as mutable more than once at a time
              }
          }
      

      The Rust compiler doesn't do the analysis necessary to see that only one of those branches can execute, so it conservatively assumes that both of them can, and it refuses to compile `bar`. To do things like `bar`, you have to either refactor them to look more like `foo`, or else you have to use `unsafe` code.

  • > which is equivalent to solving the halting problem

    Most worthwhile undefined behaviour is. If the compiler could tell whether it was happening or not, it would be defined as an error. Surely detecting whether a tree borrow violation happens is also equivalent to solving the halting problem?

When unsafe is involved, sure, but that's a pretty small fraction of Rust code as a whole. As far as I can tell, a lot of his argument is against needing language changes in C to take advantage of strict aliasing, changes that come at the cost of expanding the circumstances where UB can occur. I don't really see how that applies to safe Rust, which already has a perfectly safe way to express a reference guaranteed not to have any alias, i.e. `&mut T`. If the compiler authors came up with better ways to generate optimized code with unaliased pointers, I don't see why Rust would need to make any language changes in order to take advantage of them.

That doesn't necessarily mean that there is any significant untapped potential for these sorts of optimizations, of course, or that the effort to identify and implement them would be worthwhile for a toolchain like LLVM that is used for far more than just Rust. But it's not clear to me why the arguments he gives would be relevant to Rust.
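
A small sketch of what `&mut T`'s no-alias guarantee licenses (my own toy example; rustc does emit LLVM `noalias` for `&mut` arguments, so LLVM is allowed to fold the reload):

```rust
// Because `a` and `b` are both &mut, the compiler may assume they
// never point at the same i32, so the final read of *a can be
// folded to the constant 1 without reloading after the store to *b.
// In C, two plain int* arguments could alias, blocking this.
fn set_and_read(a: &mut i32, b: &mut i32) -> i32 {
    *a = 1;
    *b = 2;
    *a
}

fn main() {
    let (mut x, mut y) = (0, 0);
    assert_eq!(set_and_read(&mut x, &mut y), 1);
}
```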

Personally, I would like compilers to better exploit vectorization, which can get you 2x to 10x faster on random things within typical workloads, rather than worry about dubious optimizations that have performance improvements that may or may not be caused by changing the alignment of code and data blocks.

I would like to see more effort dedicated to basic one-liners that show up in real code, like counting how many of a given character are in a string, e.g. `for (str) |e| count += e == '%'` (Zig). For this, LLVM spits out a loop that wants to do a horizontal addition every iteration, at least on x86-64 targets with vectors. Let's focus on issues that can easily net a 2x performance gain before going after the 1-2% that people think pointer aliasing gets you.
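
For comparison, the safe-Rust spelling of that one-liner (`count_percent` is my hypothetical name for it); how well LLVM vectorizes this reduction is exactly the kind of thing the comment is asking about:

```rust
// Count occurrences of a byte in a string: the Rust analogue of
// the Zig one-liner above, expressed as an iterator reduction.
fn count_percent(s: &str) -> usize {
    s.bytes().filter(|&b| b == b'%').count()
}

fn main() {
    assert_eq!(count_percent("100% of 50%"), 2);
}
```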

  • Pointer aliasing information is often crucial for vectorization. So in that sense Tree Borrows is a big boost for vectorization.

    Also, note that the paper discussed here is about the model / language specification that defines the envelope of what the compiler is allowed to optimize. Actually getting the compiler to optimize concrete real-world cases is an entirely separate line of work, and needs completely different expertise -- expertise my group and I do not have. We can only lay the framework for compiler wizards to have fun with. :)

  • Pointer aliasing information is necessary for auto-vectorization, because you can't perform SIMD if the data you're modifying overlaps with the data you're reading. Since the compiler is only allowed to modify the code in a way that is legal for all inputs, it will be conservative and refuse to vectorize your code rather than break it when pointers might alias.

    Maybe that was too convoluted a way of saying this:

    Loading something from main memory into a register creates a locally cached copy. This mini-cache needs to be invalidated whenever a pointer could write to the location in main memory that the copy is caching. In other words, you need cache synchronization down to the register level, including microarchitectural registers that are implementation details of the processor in question. But if you can prove that you are the exclusive owner of a region of memory, you know your copy is the most up-to-date version, and you know when it gets updated. This means you are free to copy the main memory into your vector register and do anything you want, including scalar pointer writes to main memory, since you know they are unrelated and will not invalidate your vector registers.
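
    A minimal sketch of that exclusivity argument in Rust (my own example): because `&mut [f32]` and `&[f32]` cannot overlap, the loop below needs no runtime overlap check before vectorizing:

```rust
// dst and src are guaranteed disjoint by the borrow rules, so the
// loop can be vectorized without an aliasing check: values of src
// can sit in vector registers while the stores go to dst.
fn add_one(dst: &mut [f32], src: &[f32]) {
    for (d, s) in dst.iter_mut().zip(src.iter()) {
        *d = s + 1.0;
    }
}

fn main() {
    let src = [1.0f32, 2.0, 3.0];
    let mut dst = [0.0f32; 3];
    add_one(&mut dst, &src);
    assert_eq!(dst, [2.0, 3.0, 4.0]);
}
```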

Strict aliasing rules are useful conditional on them being sufficiently expressive and sensible, otherwise they just create pointless headaches that require kludgy workarounds or they are just disabled altogether. I don't think there is much disagreement that C strict aliasing rules are pretty broken. There is no reason a language like Rust can't be designed with much more sensible strict aliasing rules. Even C++ has invested in providing paths to more flexibility around strict aliasing than C provides.

But like Linus, I've noticed it doesn't seem to make much difference outside of obvious narrow cases.

  • >There is no reason a language like Rust can't be designed with much more sensible strict aliasing rules.

    The aliasing rules of Rust for mutable references are different and more difficult than strict aliasing in C and C++.

    Strict aliasing in C and C++ is also called TBAA, since it is based on compatible types. If types are compatible, pointers can alias. This is different in Rust, where mutable references absolutely may never alias, not even if the types are the same.

    Rust aliasing is more similar to C _restrict_.

    The Linux kernel goes in the other direction and has strict aliasing optimization disabled.

    • > The aliasing rules of Rust for mutable references are different and more difficult than strict aliasing in C and C++.

      "more difficult" is a subjective statement. Can you substantiate that claim?

      I think there are indications that it is less difficult to write aliasing-correct Rust code than C code. Many major C codebases entirely give up on even trying, and just set "-fno-strict-aliasing" instead. It is correct that some types are compatible, but in practice that doesn't actually help very much since very few types are compatible -- a lot of patterns people would like to write now need extra copies via memcpy to avoid strict aliasing violations, costing performance.

      In contrast, Rust provides raw pointers that always let you opt-out of aliasing requirements (if you use them consistently); you will never have to add extra copies. Miri also provides evidence that getting aliasing right is not harder than dealing with other forms of UB such as data races and uninitialized memory (with Tree Borrows, those all occur about the same amount).

      I would love to see someone write a strict-aliasing sanitizer and run it on popular C codebases. I would expect a large fraction to be found to have UB.
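
      The raw-pointer opt-out mentioned above, as a small sketch (my own toy example, not from the thread):

```rust
// Raw pointers carry no noalias obligations, unlike &mut.
// Passing the same pointer twice is perfectly fine here.
unsafe fn bump_both(p: *mut i32, q: *mut i32) {
    *p += 1;
    *q += 1; // even if q == p, this is not an aliasing violation
}

fn main() {
    let mut x = 0;
    let p = &mut x as *mut i32;
    unsafe { bump_both(p, p) }; // deliberate aliasing, no UB
    assert_eq!(x, 2);
}
```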

> Compiler people think aliasing matters. It very seldom does. And VLIW will never become practical for entirely unrelated reasons (read: OoO is fundamentally superior to VLIW in general purpose computing).

I love how long Linus has been right about this.

  • Linus worked for Transmeta in 2003. That company built a VLIW CPU.

    I don’t think you were a visionary if you stated VLIW is not practical for general purpose computing after that time.

    Also “VLIW will never become practical” is a stronger claim than “VLIW will never become practical for general purpose computing”, and may be up for debate. https://en.wikipedia.org/wiki/Very_long_instruction_word says

    “Since the number of transistors on a chip has grown, the perceived disadvantages of the VLIW have diminished in importance. VLIW architectures are growing in popularity, especially in the embedded system market, where it is possible to customize a processor for an application in a system-on-a-chip.”

    That may be untrue or no longer true, but clicking around looking at the Wikipedia pages for several of the mentioned VLIW designs, I couldn't find conclusive evidence for that (but, as with that VLIW page, the pages I looked at may be out of date, weren't clear as to whether chips are still in production, whether newer generations actually still are VLIW, etc.)

It is mostly useful on arrays/numeric code, and probably next to useless otherwise. Numerics people were the ones who sponsored much of the compiler/optimization work in the first place; that's how strict aliasing came to be.

  • I don't think the usefulness is that skewed towards numerics?

    Both clang/llvm and gcc can do alias checking at runtime if they can't at compile-time, which makes loops vectorizable without alias info, at the cost of a bit of constant overhead for checking aliasing. (there's the exception of gather loads though, where compile-time aliasing info is basically required)

    And on the other hand there's good potential for benefit to normal code (esp. code with layers of abstractions) - if you have a `&i32`, or any other immutable reference, it's pretty useful for compiler to be able to deduplicate/CSE loads/computations from it from across the whole function regardless of what intermediate writes to potentially-other things there are.
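
    For instance (my own sketch): with `x: &i32` and no interior mutability, nothing else may write to `*x` while the borrow is live, so the second load can be merged with the first even across an opaque call:

```rust
// The shared borrow guarantees *x is not mutated while it lives
// (no UnsafeCell involved), so the compiler may CSE the two loads
// even though f() may write to arbitrary other memory.
fn sum_twice(x: &i32, f: impl Fn()) -> i32 {
    let a = *x;
    f(); // opaque: may touch lots of memory, but not *x
    a + *x // can be folded to a + a
}

fn main() {
    assert_eq!(sum_twice(&21, || ()), 42);
}
```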

    • > pretty useful for compiler to be able to deduplicate/CSE loads/computations

      Yes, but is the performance improvement significant enough? L1 load latency is only a few cycles. Is the gain from eliminating that worth the trouble it brings to the application programmer?
