
Comment by JonChesterfield

2 days ago

Pointer provenance was certainly not here in the 80s. That's a more modern creation, seeking to extract better performance from some applications at the cost of making others broken or unimplementable.

It's not something that exists in the hardware. It's also not a good idea, though trying to steer people away from it proved beyond my politics.

Pointer provenance probably dates back to the 70s, although not under that name.

The essential idea of pointer provenance is that it is somehow possible to enumerate all of the uses of a memory location (in a potentially very limited scope). By the time you need to introduce something like "volatile" to tell the compiler that a variable has uses it cannot see, you have conceded that the compiler needs to be able to track all of the known uses within the code it compiles, and that process of figuring out known uses is pointer provenance.
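
A minimal sketch of that "known uses" point (my own example, not from the comment): if the compiler can see every use of a variable it may cache it, and "volatile" is the escape hatch that says it can't.

  static int done = 0;  /* imagine a signal handler or another thread sets this */

  /* If every use of done is visible and none writes it here, the compiler
   * may hoist the load out of the loop, compiling this as if it were
   * "if (!done) for (;;);". Declaring it volatile says there are uses the
   * compiler cannot see, so it must reload done on every iteration. */
  void spin_until_done(void) {
      while (!done) {
          /* spin */
      }
  }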

As for optimizations, the primary optimization impacted by pointer provenance is... moving variables from stack memory to registers. It's basically a prerequisite for doing any optimization.
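
As a rough illustration (my sketch, not the commenter's): a local whose address never escapes can live in a register, while one whose address is handed to unknown code cannot.

  void observe(int *p);  /* hypothetical: unknown code defined elsewhere */

  int no_escape(void) {
      int x = 42;   /* address never taken: x can live in a register */
      return x + 1;
  }

  int escaped(void) {
      int x = 42;
      observe(&x);  /* address escapes: x must exist in memory here, and the
                       compiler must reload it, since observe() may write it */
      return x + 1;
  }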

The thing is that traditionally, the pointer provenance model of compilers has been a hand-wavy "trace the dataflow back to the source of the object's address", which breaks down because optimizers haven't maintained source-level data dependencies for a few decades now. This hasn't been much of a problem in practice: breaking a data dependency largely requires two pointers with the same address, and outside of contrived examples you rarely have two objects at the same address while also juggling pointers to those objects in a way that might cause the compiler to break the dependency.
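
The standard demonstration of that breakdown (adapted from the provenance literature; the observed behavior depends on compiler and flags) is the one-past-the-end case:

  #include <stdio.h>

  int main(void) {
      int x = 1, y = 2;
      int *p = &x + 1;  /* one past the end of x: a valid pointer value */
      int *q = &y;
      if (p == q) {          /* the two addresses may well compare equal */
          *p = 11;           /* UB under provenance: p derives from x, not y */
          printf("%d\n", y); /* some compilers still print 2 here */
      }
      return 0;
  }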

  • My grievance isn't with aliasing or dataflow, it's with a pointer provenance model which makes assumptions that are inconsistent with reality, optimises based on them, then justifies the resulting nonsense with UB.

    When the hardware behaviour and the pointer provenance model disagree, one should change the model, not the behaviour of the program.

    • Give me an example of a program that violates pointer provenance (and only pointer provenance) that you think should be allowed under a reasonable programming model.

      4 replies →

> It's not something that exists in the hardware

This is sort of, on the one hand, not a meaningful claim, and on the other hand not even really true if you squint anyway?

Firstly, the hardware does not have pointers. It has addresses, and those really are integers. Rust's addr() method on pointers gets you just the address, for whatever that's worth to you; you could write it to a log, maybe, if you like.

But Arm's Morello hardware demonstrates CHERI, a capability architecture in which a pointer carries some associated information that's not the address: a sort of hardware provenance.

I'm not a compiler writer, but I don't know how you would be able to implement any optimization at all while allowing arbitrary pointer forging, short of whole-program analysis.

  • It's an interesting question.

    Say you're working with assembly as your medium, on a von Neumann machine. Writing to parts of the code section is expected behaviour. What can you optimise in such a world? Whatever cannot be observed. That might mean replacing instructions with sequences of the same length, or it might mean you can't work out anything at all.

    C is much more restricted. The "function code" isn't there: forging pointers into the middle of a function is not a thing, nor is writing through one to change the function. Thus the dataflow is much easier; be a little careful with the addresses of the starts of functions and you're good.

    Likewise the stack pointer is hidden - you can't index into the caller's frame - so the compiler is free to choose where to put things. You can't even index into your own frame, so any variable whose address is not taken can go into a register with no further thought.

    That's the point of higher level languages, broadly. You rule out forms of introspection, which allows more stuff to change.

    C++ has taken this too far with the object model in my opinion but the committee disagrees.

  • Why? What specific optimization do you have in mind that prevents me from doing an aligned 16/32/64-byte vector load that covers the address pointed to by a valid char*?

    • Casting a char pointer to a vector pointer and doing vector loads doesn't violate provenance, although it might violate TBAA.
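
      For concreteness, a sketch of that cast (assuming an x86 target with SSE2; note GCC and Clang define __m128i with the may_alias attribute, so the intrinsic route sidesteps TBAA in practice):

        #include <emmintrin.h>

        /* The vector pointer still derives from buf, so provenance is
         * intact; whether the access is legal under strict aliasing is
         * the separate TBAA question. */
        __m128i load16(const char *buf) {
            return _mm_loadu_si128((const __m128i *)buf);
        }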

      Regarding provenance, consider this:

        #include <stdlib.h>

        void bar();
        int foo() {
          int *ptr = malloc(sizeof(int));
          *ptr = 10;
          bar();
          int result = *ptr;
          free(ptr);
          return result;
        }
      

      If the compiler can track the lifetime of the dynamically allocated int, it can remove the allocation and convert this function to simply

        int foo() { 
            bar();
            return 10;
        }
      

      It can't if arbitrary code (for example inside bar()) can forge pointers to that memory location. The code may look silly, but you can end up with something similar after inlining.
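
      To make "forge" concrete, here is a purely hypothetical sketch (the address is invented; no portable program can do this) of what bar() would need to be able to do for the rewrite above to be unsound:

        #include <stdint.h>

        void bar(void) {
            /* Conjure a pointer from an integer obtained out of band.
             * Provenance rules let the compiler assume this store cannot
             * alias foo()'s allocation, which is what makes removing the
             * allocation legal. */
            int *forged = (int *)(uintptr_t)0x7ffc1234;  /* made-up address */
            *forged = 99;
        }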

      6 replies →

    • Can't reply to the sibling comment, for some reason.

      If you don't know the extents of the object pointed to by the char*, using an aligned vector load can reach outside the bounds of the object. Keeping provenance makes that undefined behavior.

      Using integer arithmetic and pointer-to-integer/integer-to-pointer conversions would make this implementation-defined, and well defined on all of the hardware platforms where an aligned vector load can never fail.

      So you can't do some optimizations to functions where this happens? Great. Do it. What else?

      As for why you'd want to do this: C makes strings null-terminated, and you can't know their extents without calling strlen first. So how do you implement strlen itself? Similarly with your example: it seems great until you're the one implementing malloc.
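
      For instance, a word-at-a-time strlen along the usual libc lines (a sketch, not any particular libc's code) has to read past the end of the string's object:

        #include <stddef.h>
        #include <stdint.h>

        #define ONES  ((uintptr_t)-1 / 0xFF)  /* 0x0101...01 */
        #define HIGHS (ONES * 0x80)           /* 0x8080...80 */
        #define HAS_ZERO(w) (((w) - ONES) & ~(w) & HIGHS)

        size_t my_strlen(const char *s) {
            const char *p = s;
            /* Byte reads up to alignment: fine under any model. */
            for (; (uintptr_t)p % sizeof(uintptr_t) != 0; p++)
                if (*p == '\0')
                    return (size_t)(p - s);
            /* Aligned word loads: these may read bytes beyond the end of
             * the string's object, but never cross a page boundary, so
             * they cannot fault on typical hardware. A strict provenance
             * or bounds model makes them UB anyway. */
            const uintptr_t *w = (const uintptr_t *)p;
            while (!HAS_ZERO(*w))
                w++;
            for (p = (const char *)w; *p != '\0'; p++)
                ;
            return (size_t)(p - s);
        }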

      But I'm sure "let's create undefined behavior for a libc implemented in C" is a fine goal.

      3 replies →

It very much is something that exists in hardware. One of the major reasons people finally discovered the provenance UB lurking in the standard is the CHERI architecture.

  • People keep forgetting that SPARC ADI did it first with hardware memory tagging for C.

  • So it's something that exists in some hardware. Are you claiming that it exists in all hardware, and we only realized that because of CHERI? Or are you claiming that it exists in CHERI hardware, but not in others?

    If it only exists in some hardware, how should the standard deal with that?

    • > If it only exists in some hardware, how should the standard deal with that?

      It generally seems to me the C standard makes things like that UB. Signed integer overflow, for example: implemented as wrapping two's-complement on modern architectures, defined as such in many modern languages, but UB in C due to ongoing support for niche architectures.
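
      The textbook example of what that UB licenses (the observed behavior depends on compiler and flags; -fwrapv turns it off in GCC and Clang):

        /* Because signed overflow is UB, a compiler may fold this to
         * "return 1;" even though x + 1 wraps to INT_MIN for x == INT_MAX
         * on two's-complement hardware. */
        int always_true(int x) {
            return x + 1 > x;
        }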

      The issues around pointer provenance are inherent to the C abstract machine. It's a much more immediate show-stopper on architectures that don't have a flat address space, and the C abstract machine doesn't assume a flat address space because it supports architectures where that's not true. My understanding is that this historically meant some oddball architectures that aren't relevant anymore; nowadays it includes CHERI.

      1 reply →