Comment by ncruces

2 days ago

The current standard still says integer-to-pointer conversions are implementation defined (not undefined) and furthermore "intended to be consistent with the addressing structure of the execution environment" (that's a direct quote).

I have an execution environment, Wasm, where doing this is pretty well defined, in fact. So if I want to read the memory at address 12345, which is within bounds of the linear memory (and there's a builtin to make sure), why should it be undefined behavior?
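
For concreteness, a minimal sketch of the access being described, assuming clang targeting wasm32, where __builtin_wasm_memory_size(0) reports the current size of linear memory 0 in 64 KiB pages (the kind of "builtin to make sure" mentioned above):

```c
#include <stdint.h>
#include <string.h>

/* Sketch only: read a uint64_t at a caller-supplied linear-memory address,
 * after checking that the address is within the current bounds of linear
 * memory. At the Wasm level this is just a bounds check plus an i64.load. */
uint64_t read_at(uintptr_t addr) {
    uintptr_t limit = (uintptr_t)__builtin_wasm_memory_size(0) * 65536;
    if (limit < sizeof(uint64_t) || addr > limit - sizeof(uint64_t))
        return 0;                              /* outside linear memory */
    uint64_t v;
    memcpy(&v, (const void *)addr, sizeof v);  /* the forged-pointer load */
    return v;
}
```

Whether the C abstract machine blesses that load through a pointer forged from addr is exactly what's being argued here.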

And regarding pointer provenance, why should going through a pointer-to-integer and integer-to-pointer conversions try to preserve provenance at all, and be undefined behavior in situations where that provenance is ambiguous?

The reason I'm using integer (rather than pointer) arithmetic is precisely so I don't have to be bound by pointer arithmetic rules. What good purpose does it serve for this to be undefined (rather than implementation defined), beyond preventing certain programs from being meaningfully written at all?

I'm genuinely curious.

I fully agree with your analysis, but compiler writers did think they could bend the rules, hence it was necessary to clarify that pointer-to-integer casts do work as intended. This is still not in ISO C23, btw, because some compiler vendors argued against it. But it is a TS now. If you are affected, please file bugs against your compilers.

  • Do you fully agree? I finally went and read n3005.pdf. The important item there is that a cast to integer exposes the pointer, and the compiler must then be conservative and assume that the pointed-to object might be changed via non-trackable pointers (a minimal sketch follows this subthread). This seems like quite a reasonable compromise to make existing code work without affecting the vast majority of objects whose address is never cast to an integer. But ncruces wants defined semantics for arbitrary forged pointers.

    • You are right, I wasn't thinking straight. I do not fully agree. Creating arbitrary pointers cannot work. Forging pointers to an implementation-defined memory region would be ok, though.

      6 replies →
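
To make the n3005 point above concrete, a minimal sketch of the exposure rule described there, under my reading (the variable names are mine):

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    int x = 1;
    uintptr_t i = (uintptr_t)&x;  /* pointer-to-integer: x's address is "exposed"    */
    int *p = (int *)i;            /* integer-to-pointer: may legitimately refer to x */
    *p = 2;                       /* so the compiler must assume x can change here   */
    printf("%d\n", x);            /* prints 2; x cannot be constant-folded to 1      */
    return 0;
}
```

Objects whose addresses are never cast to an integer are unaffected and can still be optimized freely, which is the compromise being described.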

> I have an execution environment, Wasm, where doing this is pretty well defined, in fact. So if I want to read the memory at address 12345, which is within bounds of the linear memory (and there's a builtin to make sure), why should it be undefined behavior?

How would you define it? Especially in a way that is consistent with the rest of the language and allows common optimizations (remember that C supports variables, which may or may not be stored in memory)?

  • Just read whatever is at address 12345 of the linear memory. It doesn't matter what that is: an object, something malloc'ed, the "C stack", a "global".

    It's the only way to interpret *(uint64_t*)(12345) when the standard says that an integer-to-pointer conversion is "intended to be consistent with the addressing structure of the execution environment".

    There exists an instruction to do that load in Wasm, there's a builtin to check that 12345 points to addressable memory, and the load is valid at the assembly level. The standard says the implementation should define this to be consistent with the addressing structure of the execution environment, so why the heck are we playing games and allowing the compiler to say, "nope, that's not valid, so your entire program is invalid, and we can do whatever we want, no diagnostic required"?

    • If a newer version of that value is also stored in a register and not yet flushed to memory, should the compiler know to insert that flush for you, or is reading a stale value ok? (A sketch of this situation follows this subthread.)

      For what it's worth, there's a reason you're supposed to do this kind of access through memcpy, not by dereferencing made-up pointers.

      > There exists an instruction to do that load in Wasm, there's a builtin to check that 12345 points to addressable memory, and the load is valid at the assembly level. The standard says the implementation should define this to be consistent with the addressing structure of the execution environment, so why the heck are we playing games and allowing the compiler to say, "nope, that's not valid, so your entire program is invalid, and we can do whatever we want, no diagnostic required"?

      Because the language standard is defined in terms of a virtual machine, not any given implementation. That virtual machine is then implemented on various platforms, but the capabilities of the underlying system aren't directly accessible; they are only there to implement the C virtual machine. That's why C can target so many different machines.

      2 replies →
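
A hedged sketch of the register-versus-memory question above; guess is a hypothetical integer address the caller believes names x's storage:

```c
#include <stdint.h>
#include <stdio.h>

/* Because x's address is never taken or exposed, the compiler is free to
 * keep x in a register (or fold it away entirely), so the load through the
 * forged pointer below is not required to observe 42 -- it may see a stale
 * value, garbage, or nothing meaningful at all. */
int observe(uintptr_t guess) {
    int x = 41;
    x += 1;                                /* may never be written to memory */
    int forged = *(volatile int *)guess;   /* not guaranteed to see 42       */
    printf("x=%d forged=%d\n", x, forged);
    return forged;
}
```

This is the flush question in the comment above: nothing obliges the compiler to spill x just because some integer happens to equal the address of its stack slot.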

It is important to understand why undefined behaviour has proliferated over the past ~25 years. Compiler developers are (like the rest of us) under pressure to improve metrics like the performance of compiled code. Often enough that's because a CPU vendor is the one paying for the work and has a particular target they need to reach at time of product launch, or there's a new optimization being implemented that has to be justified as showing a benefit on existing code.

The performance of compilers is frequently measured using the SPEC series of CPU benchmarks, and one of the main constraints of the SPEC series of tests is that the source code of the benchmarks cannot be changed. It is static.

As a result, compiler authors have to find increasingly convoluted ways to make it possible for various new compiler optimizations to be applied to the legacy code used in SPEC. Take 403.gcc: it's based on gcc version 3.2 which was released on August 14th 2002 -- nearly 23 years ago.

By making certain code patterns undefined behaviour, compiler developers are able to relax the constraints and allow various optimizations to be applied to legacy code in places where that would not otherwise be possible. I believe the gcc optimization to eliminate NULL pointer checks when the pointer has already been dereferenced was motivated by such a scenario.
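
The usual shape of that null-check case, as a hedged sketch (the struct and function names here are made up for illustration):

```c
struct node { int len; struct node *next; };

/* Because p is dereferenced before the check, the compiler may infer that
 * p cannot be NULL (a NULL dereference would already be undefined
 * behaviour), and then delete the check below as dead code -- no
 * diagnostic required. */
int length_or_zero(const struct node *p) {
    int n = p->len;       /* dereference first                  */
    if (p == NULL)        /* this test may be optimized away    */
        return 0;
    return n;
}
```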

In the real world, code tends to get updated when compilers are updated or when performance optimizations are made, so there is no need for excessive compiler "heroics" that weasel optimizations into applying via undefined behaviour. So long as SPEC is used to measure compiler performance on static, unchanging legacy code, we will continue to see compiler developers committing undefined-behaviour madness.

The only way around this is for people outside compiler development to push language standards to prevent compilers from using undefined behaviour to perform code transformations that normal software developers consider utterly insane.

  • Language standards have much less power than people think, and compiler vendors are of course present in the standards working groups. Ultimately, the users need to put pressure on the compiler vendors. Please file bugs - even if this often has no effect, it takes away the argument "this is what our users want". Also, please support compilers based on how they deal with UB and not on the latest benchmark posted somewhere.

    • Language standards have plenty of power over compiler vendors; however, very few people who are not involved in writing compilers participate in the standards process. Standards bodies bend to the will of those participating.

      1 reply →

In a compiler, you essentially need the ability to trace all the uses of an address, at least in the easy cases. Converting a pointer to an integer (or vice versa) isn't really a deal-breaker; it's essentially the same thing as passing (or receiving) a pointer to an unknown external function: the pointer escapes, whelp, nothing more we can do in that case for the most part.

But converting an integer to a pointer creates a problem if you allow that pointer to point to anything--it breaks all of the optimizations that assumed they could trace all of the uses of an address. So you need something like provenance to say that certain back-conversions are illegal. The most permissive model is a no-address-taken model (you can't forge a pointer to a variable whose address was never taken). But most compilers opt instead for a data-dependency-based model: essentially, even integer-based arithmetic on addresses isn't allowed to produce out-of-bounds accesses at the point of dereference. Or at least, they claim to--the documentation for both gcc and llvm makes this claim, but both have miscompilation bugs because they don't actually allow this.
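
A rough sketch of that escape/no-address-taken reasoning; opaque stands in for any function whose body the compiler cannot see:

```c
void opaque(int *p);   /* hypothetical external function: p escapes here */

int escaped(void) {
    int a = 1;
    opaque(&a);        /* a's address escapes, so a may be modified behind */
    return a;          /* the compiler's back; this read must reload a     */
}

int not_escaped(void) {
    int b = 1;         /* b's address is never taken, so no forged pointer  */
    return b;          /* can legally reach it; this can fold to `return 1` */
}
```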

The proposal for pointer provenance in C essentially looks at how compilers generally implement things and suggests a model that's closer to their actual implementation: pointer-to-integer exposes the address such that any integer-to-pointer can point to it. Note this is more permissive than the claimed models of compilers today--you're explicitly able to violate out-of-bounds rules here, so long as both objects have had their addresses exposed. There's some resistance to this because adhering to this model also breaks other optimizations (for example, (void*)(uintptr_t)x is not the same as x).
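
A small sketch of the round-trip in question, under my reading of the exposure model:

```c
#include <stdint.h>

/* Under the exposure-based model this is not interchangeable with
 * `return x;`: the pointer-to-integer cast exposes x's object, and the
 * pointer rebuilt from the integer may legitimately be used to reach any
 * exposed object at that address, not just the one x pointed into. Folding
 * the cast chain away would therefore change which programs are defined. */
void *round_trip(void *x) {
    return (void *)(uintptr_t)x;
}
```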

As a practical matter, pointer provenance isn't that big of a deal. It's not hard to come up with examples that illustrate behaviors that cause miscompilation or are undefined specifically because of pointer provenance. But I'm not aware of any application code that was actually miscompiled because the compiler implemented its provenance model incorrectly. The issue gets trickier as you move into systems code that exists somewhat outside the C object model, but even then, most of the relevant code can ignore the fact that it lives outside the object model, since the resulting miscompiles are prevented by inherent optimization barriers anyway (note that to get a miscompile, you generally have to simultaneously forge the object's address, have the object's address be known to the compiler already, and have the compiler think the object's address wasn't exposed by other means).