Comment by adwn
1 day ago
> I for one am glad that compilers can assume that things that can't happen according to the language do in fact not happen and don't bloat my programs with code to handle them.
Yes, unthinkable happenstances like addition on fixed-width integers overflowing! According to the language, signed integers can't overflow, so code like the following:
    int new_offset = current_offset + 16;
    if (new_offset < current_offset)
        return -1; // Addition overflowed, something's wrong
can be optimized to the much leaner
    int new_offset = current_offset + 16;
Well, I sure am glad the compiler helpfully reduced the bloat in my program!
Garbage in, garbage out. Stop blaming the compiler for your bad code.
You're objectively wrong. This code isn't bad: it's concise and fast (even without the compiler pattern-matching it to whatever overflow-detecting machine instructions happen to be available), and it would be valid and idiomatic for unsigned int. Stop blaming the code for your bad language spec.
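For what it's worth, the check can also be written so that it doesn't rely on signed overflow at all. A minimal sketch, with made-up function names for illustration; __builtin_add_overflow is a non-standard GCC/Clang builtin:

    #include <climits>

    // Portable pre-check: reject inputs that would overflow before adding.
    int advance_offset(int current_offset) {
        if (current_offset > INT_MAX - 16)
            return -1;                  // addition would overflow
        return current_offset + 16;     // now well-defined
    }

    // GCC/Clang alternative: __builtin_add_overflow reports overflow without
    // UB and usually compiles to an add plus a jump-on-overflow.
    int advance_offset_builtin(int current_offset) {
        int new_offset;
        if (__builtin_add_overflow(current_offset, 16, &new_offset))
            return -1;
        return new_offset;
    }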
The language spec isn't bad just because it doesn't allow you to do what you want. Are you also upset that you need to add memory barriers where the memory model of the underlying platform doesn't need them?
Again, this isn't undefined behavior designed to fuck you over, and compilers don't use it for optimizations because they hate you. It's because it makes a real difference for performance, which is the primary reason low-level languages are used.
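To make the performance point concrete, here's a hedged illustration (the function is just an example, not from the original comment): because signed overflow is undefined, the compiler may assume a plain int loop counter never wraps, which lets it widen the counter to a 64-bit index and vectorize without preserving wrap-around semantics. With an unsigned counter, wrapping is defined and has to be honoured, which can block some of these transformations in more involved loops.

    // The optimizer may assume i never wraps past INT_MAX, so it can promote
    // i to a 64-bit index and vectorize freely. If i were unsigned, defined
    // wrap-around would have to be preserved.
    void scale(float *dst, const float *src, int n) {
        for (int i = 0; i < n; ++i)
            dst[i] = src[i] * 2.0f;
    }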
If you for some reason want less efficient C++, compilers even provide flags to make this particular operation defined. There is no reason the rest of us have to suffer worse performance because of you.
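Presumably the flags meant here are GCC/Clang's -fwrapv (signed overflow wraps, two's-complement style) and -ftrapv or -fsanitize=signed-integer-overflow (trap or diagnose it instead). Under -fwrapv the original check works as written; file and function names below are placeholders:

    // g++ -O2 -fwrapv example.cpp
    //     signed overflow wraps, so the "new_offset < current_offset" test survives
    // g++ -O2 -fsanitize=signed-integer-overflow example.cpp
    //     overflow is reported at runtime instead of being optimized around
    int checked_add_16(int current_offset) {
        int new_offset = current_offset + 16;
        if (new_offset < current_offset)
            return -1;                  // meaningful again under -fwrapv
        return new_offset;
    }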
Personally, I would prefer it if unsigned ints had the same undefined behavior by default, with explicit functions for wrapping overflow. That would make developer intent much clearer and give tools a chance to diagnose unwanted overflow.
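As a sketch of what "explicit functions for wrapping overflow" could look like in C++ (the helper name is hypothetical; the round trip through unsigned is the conforming way to get two's-complement wrapping, and since C++20 the conversion back to a signed type is defined to wrap modulo 2^32):

    #include <cstdint>

    // Wrapping addition with explicit intent: unsigned arithmetic is defined
    // to wrap, and the conversion back to int32_t wraps modulo 2^32 (C++20).
    inline int32_t wrapping_add(int32_t a, int32_t b) {
        return static_cast<int32_t>(static_cast<uint32_t>(a) +
                                    static_cast<uint32_t>(b));
    }

    // Callers who actually want wrap-around now say so; any other overflow
    // can be flagged as a likely bug by the compiler or a sanitizer.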