Comment by physicsguy
4 hours ago
> every combination of compiler, optimization setting and platform I intend to support, disassemble all the resulting binaries, and analyze the disassembly to try to figure out if it did autovectorization in the way I expect
I used to just fire up VTune and inspect the hot loops... in my experience, if you care about this level of performance you're typically targeting hardware with the latest instruction sets anyway. It's only when working on low-level libraries that I'd bother scattering intrinsics all over the place.
For most consumer software you want to be able to fall back to some lowest-common-denominator hardware anyway, otherwise users run into issues - it's the same reason Debian, Conda, etc. only target really old instruction sets.
I work on games sometimes, where the goal is: "run as fast as possible on everyone's computer, whether it's a 15-year-old netbook with an Intel Atom or a freshly built beast of a gaming desktop." As a result, the best approach is to detect supported instructions at runtime and dispatch to a function that uses them (perhaps by populating a global table of vector function pointers at launch), as sketched below. The next best option is to assume some baseline of vector support (say, the original AVX on x86, Neon on ARM) and use it unconditionally. Targeting only the latest instruction sets is a complete non-starter.
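Here's a minimal sketch of that runtime-dispatch pattern on x86, assuming GCC or Clang (which provide `__builtin_cpu_supports` and the `target` function attribute). The kernel and its names are hypothetical, chosen for illustration; a real game would add more tiers (SSE4.2, AVX-512, ...) plus an ARM/Neon path behind an `#ifdef`:

```c
#include <stddef.h>
#include <stdio.h>

/* Hypothetical kernel: sum an array of floats. With the "avx2" target
   attribute, the compiler is free to autovectorize this loop with AVX2. */
__attribute__((target("avx2")))
static float sum_avx2(const float *x, size_t n) {
    float s = 0.0f;
    for (size_t i = 0; i < n; i++) s += x[i];
    return s;
}

/* Scalar fallback, safe on any x86-64 CPU. */
static float sum_scalar(const float *x, size_t n) {
    float s = 0.0f;
    for (size_t i = 0; i < n; i++) s += x[i];
    return s;
}

/* The "global vector function table": one pointer per kernel,
   filled in exactly once at launch. */
static float (*sum_impl)(const float *, size_t);

static void init_dispatch(void) {
    __builtin_cpu_init();  /* must run before __builtin_cpu_supports on GCC */
    sum_impl = __builtin_cpu_supports("avx2") ? sum_avx2 : sum_scalar;
}

int main(void) {
    init_dispatch();
    float data[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    printf("%f\n", sum_impl(data, 8));  /* dispatches to the best kernel */
    return 0;
}
```

Doing the detection once at startup keeps the per-call cost to a single indirect call, which is why the global-table approach scales to many kernels without repeating CPUID checks in hot loops.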