Comment by oconnor663
2 days ago
Here's another data point: https://lobste.rs/s/yubalv/pointers_are_complicated_ii_we_ne...
> I spoke to Apple folks when their compiler team switched the default to strict aliasing. They reported that it made key workloads 5-10% faster and the fixes were much easier to do and upstream than I would have expected. My view of -fstrict-aliasing at the time was that it was a flag that let you generate incorrect code that ran slightly faster. They had actual data that convinced me otherwise.
OTOH we have https://web.ist.utl.pt/nuno.lopes/pubs/ub-pldi25.pdf which shows that on a range of benchmarks, disabling all provenance-based reasoning (no strict aliasing, no "derived from" reasoning, only straightforward "I know the addresses must actually be different") has a much smaller perf impact than I would ever have expected.
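(For concreteness, here's a minimal sketch of the kind of transformation strict aliasing licenses; the function is hypothetical, not from either source:)

    /* Under -fstrict-aliasing the compiler may assume an int* and a
       float* never alias, so it can fold the final load to a constant.
       Under -fno-strict-aliasing it must reload *i after the store
       through f, in case the two pointers overlap. */
    int read_after_store(int *i, float *f) {
        *i = 1;
        *f = 2.0f;  /* cannot legally modify *i under strict aliasing */
        return *i;  /* strict aliasing: folds to "return 1" */
    }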
I don't understand how they convinced him that code with a limited set of aliasing patches remained 'correct'. Did they do 'crash ratio' monitoring? Disclaimer: I had to do that in gamedev post-launch for a couple of projects. Pretty nasty work.
The counterpoint to this is that if you're willing to accept patching upstream as an option to make "key workloads" (read: partial benchmarks) perform better, what's stopping you from adding the necessary annotations instead?
The answer is basically that you were never going to; you're just externalizing the cost of making it look like your compiler generates faster code.
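(The annotations in question would presumably be something like C's restrict qualifier; a hypothetical sketch, not from the thread:)

    #include <stddef.h>

    /* restrict asserts that dst and src don't overlap, which hands the
       optimizer the no-alias fact directly, even under
       -fno-strict-aliasing. Without the qualifier the compiler must
       assume stores to dst can change src and emit a more
       conservative loop. */
    void scale(float *restrict dst, const float *restrict src, size_t n) {
        for (size_t i = 0; i < n; i++)
            dst[i] = src[i] * 2.0f;
    }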
Because the annotations would require changes across thousands or millions of lines of code.
That's really interesting, thanks.