Comment by pizlonator
6 hours ago
I don’t buy the “it’s because of optimization” argument.
And I especially don’t buy that UB is there for register allocation.
First of all, that argument only explains UB of OOB memory accesses at best.
Second, you could define the meaning of OOB by just saying “pointers are integers” and then further stating that nonescaping locals don’t get addresses. There are many ways you could specify that, if you cared enough. My favorite way to do it involves saying that pointers to locals are lazy thunks that create addresses on demand.
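Concretely, the escape distinction I have in mind looks like this (just my sketch; `observe` is a hypothetical function the optimizer can’t see through):

```cpp
// My own sketch of the distinction; observe() is a made-up stand-in for
// code the compiler can't see through.

void observe(int *p);            // opaque to the optimizer

int sum_nonescaping(int n) {
    int acc = 0;                 // &acc never leaves this function, so the
    for (int i = 0; i < n; ++i)  // compiler may keep acc in a register and
        acc += i;                // never materialize an address for it
    return acc;
}

int sum_escaping(int n) {
    int acc = 0;
    observe(&acc);               // the address escapes: under a "pointers
    for (int i = 0; i < n; ++i)  // are integers" semantics, acc now needs
        acc += i;                // a real, stable address in memory
    return acc;
}
```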
No, it's absolutely because of optimization. For instance, C++20 defined signed integers as two's complement, but signed integer overflow is still undefined behaviour. The reason is that if you compile with flags that make it defined, you lose a few percentage points of performance (primarily from preventing loop unrolling and auto-vectorization).
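The textbook case is a loop like this (my sketch, assuming a GCC/Clang-style optimizer):

```cpp
// With signed-overflow UB, the compiler may assume ++i never wraps, so this
// loop runs exactly n+1 times and can be unrolled and vectorized on that
// basis. Under -fwrapv, n == INT_MAX would make the loop infinite (i wraps
// to INT_MIN, which is still <= n), so the trip count no longer has a
// simple closed form.
void add_k(int *a, int n, int k) {
    for (int i = 0; i <= n; ++i)
        a[i] += k;
}
```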
Same thing with e.g. strict aliasing or the various UB that exists in the standard library. For instance, it's UB to pass a null pointer to strlen. Of course, you can make that perfectly defined by adding an `if` to strlen that just returns 0. But then you're adding a branch to every strlen, and C is simply not willing to do that for performance reasons, so they say "this is UB" instead.
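Roughly this trade-off, in code (`strlen_defined` is my made-up name, not a real libc function):

```cpp
#include <cstddef>

// Sketch of the trade-off: a strlen variant that defines the null-pointer
// case instead of leaving it UB. The extra branch is paid on every call,
// which is exactly what the standard refuses to mandate.
std::size_t strlen_defined(const char *s) {
    if (s == nullptr)           // the added branch that makes NULL well-defined
        return 0;
    std::size_t n = 0;
    while (s[n] != '\0')
        ++n;
    return n;
}
```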
Pretty much every instance of UB in standard C or C++ is there because making it defined would either hamper the optimizer or make standard library functions slower. They don't just make things UB for fun.
This isn’t the reason why the UB is in the spec in the first place. The spec left stuff undefined to begin with because of a lack of consensus over what it should do.
For example, the reason why two’s complement took so long to standardize is that there was still some machine running C that used one’s complement.
> The reason is that if you compile with flags that make it defined, you lose a few percentage points of performance (primarily from preventing loop unrolling and auto-vectorization).
I certainly don’t lose any perf on any workload of mine if I set -fwrapv
If your claim is that implementers use optimization as the excuse for wanting UB, then I can agree with that.
I don’t agree that it’s a valid argument though. The performance wins from UB are unconvincing, except maybe on BS benchmarks that C compilers overtune for marketing reasons.
I wish there was a way to opt into undefined behavior for unsigned overflow. It's rare that wraparound is actually what you want, and in many cases overflow is still a bug. Sucks to have to either miss out on potential optimizations or miss out on the guarantee that the value can't be negative.
I recently filed a bug for this: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=116193
You do need some way to overflow properly, because sometimes that is what you want. A common example would be PRNGs, which frequently rely on overflow (the classic LCG, for instance). You could argue that should just be a library function or something (e.g. `add_with_overflow`), though that's more C++ than C.
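For instance, the classic LCG in code (my sketch; `__builtin_add_overflow` is the existing GCC/Clang builtin, while `checked_add` is just my made-up wrapper around it):

```cpp
#include <cstdint>

// The classic LCG depends on unsigned wraparound for its mod-2^32
// arithmetic (constants are the well-known Numerical Recipes parameters).
std::uint32_t lcg_next(std::uint32_t &state) {
    state = state * 1664525u + 1013904223u;  // wraps mod 2^32 by definition
    return state;
}

// If plain unsigned overflow became UB, the same intent could go through
// an explicit operation instead. GCC and Clang already expose a builtin.
bool checked_add(std::uint32_t a, std::uint32_t b, std::uint32_t *out) {
    return !__builtin_add_overflow(a, b, out);  // true iff no overflow
}
```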
You are absolutely, 100% correct though: I've never seen a case where accidental overflow doesn't start causing bugs anyway. Like, the Pac-Man kill screen is caused by a byte overflowing (it happens on level 256), and the game goes insane. Pac-Man was written in assembly, where overflow is defined behavior, but that doesn't matter at all: the game is still broken. If signed overflow is essentially always a bug anyway, why not make it UB and optimize around it? Especially since it's so valuable for unrolling loops.
People always bring up signed integer overflow as an argument for why UB is scary, and it always seemed like such a bad argument to me. Like, I can understand why people think UB has gone too far in C/C++, but signed overflow is such a bad example. It's one of the most sensible bits of UB in the entire standard, IMHO.
-fwrapv
> First of all, that argument only explains UB of OOB memory accesses at best.
It explains loop-unrolling and integer-overflow UB as well.
> nonescaping locals don’t get addresses
Inlining, interprocedural optimizations.
For example, something as trivial as an accessor member function would be hard to optimize.
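Something like this (my own sketch of the accessor case):

```cpp
// Without inlining, every iteration pays a call and the optimizer must
// assume get() could read or write anything reachable; once get() is
// inlined it collapses to a plain field load that can live in a register.
struct Vec {
    int x;
    int get() const { return x; }  // trivial accessor
};

int sum(const Vec &v, int n) {
    int total = 0;
    for (int i = 0; i < n; ++i)
        total += v.get();          // after inlining: total += v.x
    return total;
}
```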
Inlining doesn’t require UB
Safer languages manage similar optimizations without having to rely on UB.
Well, yes, safer languages prevent pointer forging statically, so provenance is trivially enforced.
And I believe that provenance is an issue in unsafe Rust.
> Second, you could define the meaning of OOB by just saying “pointers are integers"
This means losing a lot of optimisations, so in fact when you say you "don't buy" this argument you only mean that you don't care about optimisation. Which is fine, but this does mean the "improved" C isn't very useful in a lot of applications; might as well choose Java.
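For a concrete instance, here is my sketch of the classic provenance example, where the "pointers are integers" model forbids the fold:

```cpp
#include <cstdint>
#include <cstdio>

// Even when the integer addresses compare equal, a pointer derived from x
// is not allowed to access y under provenance rules, and that assumption
// is what lets the compiler treat y as untouched.
int x = 1, y = 2;

int main() {
    int *p = &x + 1;                       // one-past-the-end of x: valid to form
    int *q = &y;
    if ((std::uintptr_t)p == (std::uintptr_t)q) {
        *p = 11;                           // UB under provenance rules even if
    }                                      // the addresses happen to match
    std::printf("%d\n", y);                // foldable to "2" via provenance;
    return 0;                              // "pointers are integers" keeps the load
}
```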
> This means losing a lot of optimisations
You won’t lose “a lot” of optimizations, and you certainly won’t lose enough for it to make a noticeable difference in any workload that isn’t SPEC.