Comment by grishka
1 day ago
This is more a problem with the C/C++ standard: it allows uninitialized variables but doesn't give them defined values, treating any read of an uninitialized variable as "undefined behavior". Java, for example, doesn't have this particular problem: fields get specified default values, and the compiler rejects reads of locals that might be unassigned.
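To make the contrast concrete, a minimal sketch (C++ value-initialization stands in for the Java-style defined default):

    #include <cstdio>

    int main() {
        int uninit;   // indeterminate value; any read of it is undefined behaviour
        int zeroed{}; // value-initialized to 0, like a Java field's default
        std::printf("%d\n", zeroed);     // fine: prints 0
        // std::printf("%d\n", uninit); // UB if uncommented
        return 0;
    }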
But it's this behaviour, and many other features like it, that make C/C++ faster than Java. C/C++ developers really don't want to "pay" at runtime for safety.
Though, I really like the _mm_undefined_ps() intrinsic for SSE that makes it clear you're purposefully not initialising a variable. Something like that for ints and floats would be pretty sweet.
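A sketch of what that might look like; _mm_undefined_ps() is the real intrinsic, while undefined_t and lazy_float are invented names purely for illustration:

    #include <immintrin.h>

    // Hypothetical marker for "deliberately uninitialized"; not a real library type.
    struct undefined_t {};
    inline constexpr undefined_t undefined{};

    struct lazy_float {
        float value;
        explicit lazy_float(undefined_t) {}        // leaves `value` indeterminate on purpose
        explicit lazy_float(float v) : value(v) {}
    };

    void demo() {
        __m128 v = _mm_undefined_ps(); // real intrinsic: contents unspecified
        v = _mm_xor_ps(v, v);          // must be fully written before real use
        lazy_float f{undefined};       // intent is explicit, unlike a bare `float f;`
        f.value = 1.0f;                // write before read
    }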
It is definitely not the case that safer is magically slower. IMO the attitude from WG21 (the C++ language committee) has too often been "some fast things are unsafe, therefore if we make our language more unsafe it will go faster", which... that's not how implication works.
As a very high level example, take sorting. Rust's standard library provides you both a stable and unstable sort, as does your C++ standard library.
The C++ standard promises these sorts have O(n log n) performance. It's unclear in modern C++ whether providing a nonsensical ordering† is Undefined Behaviour (as it was in older versions) or outright IFNDR (ill-formed, no diagnostic required, which is much worse than UB), but the real-world effect is similar either way.
Rust promises that these sorts work as expected. If you provide a nonsensical ordering, obviously it can't very well "sort" things the way you asked, but it doesn't need to kill your neighbour's cats and wipe the hard disk either; it will either give you back the same elements in some order, or report the fatal error in your software.
The Rust option here is clearly much safer, right? So how much performance is that costing? Actually, it's faster. So C++ is choosing slower and worse. What's the upside?
† For example, what if I insist that Red < Green, but also Green < Red, and furthermore Red == Green is true, but so is Red != Green, while neither Green == Red nor Green != Red is true!
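For concreteness, a sketch of what feeding a nonsensical ordering to the C++ sort looks like; the comparator below claims every element is less than every other, so it violates the required strict weak ordering:

    #include <algorithm>
    #include <vector>

    // Violates strict weak ordering: for any a, b, both a < b and b < a hold.
    bool nonsense(int, int) { return true; }

    int main() {
        std::vector<int> v{3, 1, 2};
        // Undefined behaviour: std::sort requires a strict weak ordering.
        // In practice this can read out of bounds or loop forever; Rust's
        // sorts instead stay memory-safe and at worst return the elements
        // in some order or panic.
        std::sort(v.begin(), v.end(), nonsense);
        return 0;
    }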
Statically proving that variables get initialized wouldn't change the performance, except by making you check the return value of sscanf, or by turning a refusal to check into a couple of register wipes. Either way, that's a negligible addition to a hefty function call. It wouldn't require default-initializing variables in all circumstances.
When I think of the "no runtime cost" mentality of C/C++, I don't think it normally extends to ignoring errors from I/O functions.
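A minimal sketch of the pattern such a check would enforce; the point is that the only "cost" is the comparison against sscanf's return value:

    #include <cstdio>

    int main() {
        int value; // not yet initialized; a checker must see a write before any read
        if (std::sscanf("42", "%d", &value) == 1) {
            // sscanf reported one successful conversion, so `value` is
            // provably initialized on this path; reading it is fine.
            std::printf("parsed %d\n", value);
        } else {
            // Here `value` is still indeterminate; a static checker would
            // reject any read on this path rather than inject a default.
            std::puts("parse failed");
        }
        return 0;
    }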
And yet, there is a good chance that C++ will start doing exactly this [1]. Because [2]:
> The performance impact is negligible (less than 0.5% regression) to slightly positive (that is, some code gets faster by up to 1%). The code size impact is negligible (smaller than 0.5%). Compile-time regressions are negligible. Were overheads to matter for particular coding patterns, compilers would be able to obviate most of them.
> The only significant performance/code regressions are when code has very large automatic storage duration objects. We provide an attribute to opt-out of zero-initialization of objects of automatic storage duration. We then expect that programmers can audit their code for this attribute, and ensure that the unsafe subset of C++ is used in a safe manner.
> This change was not possible 30 years ago because optimizations simply were not as good as they are today, and the costs were too high. The costs are now negligible.
[1] https://github.com/cplusplus/papers/issues/1401
[2] https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p27...
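For a feel of the opt-out the paper describes, a sketch using Clang's existing, analogous machinery (the -ftrivial-auto-var-init=zero flag plus the [[clang::uninitialized]] attribute); the standardized attribute's spelling may differ:

    #include <cstddef>

    void fill(char* buf, std::size_t n);

    void hot_path() {
        // Zeroed automatically when built with -ftrivial-auto-var-init=zero.
        char small[64];
        fill(small, sizeof small);

        // Large buffer opted out of zeroing; exactly the case the paper
        // says programmers should audit.
        [[clang::uninitialized]] char big[1 << 20];
        fill(big, sizeof big);
    }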
Thanks for the references - that was interesting reading, particularly that initialisation can be good for instruction pipelining.
A trick we were using with SSE was something like
    __m128 zero = _mm_undefined_ps();
    zero = _mm_xor_ps(zero, zero);
Now, we were really careful about treating our ops as data dependencies to reason about pipelining efficiency. But our profiling tools were not measuring this.
We did avoid _mm_set1_ps(0.0f), which was actually showing up as cache misses.
I wonder if we actually ended up slower, simply because cache misses are something we can measure?!
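For comparison, a sketch of the three ways of materialising a zero vector discussed above; which one loads from memory is compiler- and era-dependent, and modern compilers may fold all three into the same xorps:

    #include <immintrin.h>

    __m128 zeros() {
        __m128 a = _mm_setzero_ps();  // canonical: emits xorps xmm, xmm, no load
        __m128 b = _mm_set1_ps(0.0f); // may load a 16-byte constant from memory
                                      // on older compilers: the cache misses above
        __m128 c = _mm_undefined_ps();
        c = _mm_xor_ps(c, c);         // the trick: no load, and the xor-zero idiom
                                      // avoids a dependency on c's previous value
        return _mm_add_ps(_mm_add_ps(a, b), c);
    }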