
Comment by nullc

2 days ago

meh, the compiler can almost always eliminate the spurious default initialization because it can prove that the first use of the variable is the real initialization. The only time the redundant initialization will be emitted by an optimizing compiler is when it can't prove it's redundant.

I think the better reason not to default-initialize as part of the language semantics is that it hides bugs.

If the developer's intent is that the correct initial state is 0, they should just explicitly initialize to zero. If they haven't, then they must intend that the correct initial state is the dynamic one in their code. The compiler silently slipping in a 0 in cases the programmer overlooked is a missed opportunity to detect a bug caused by the programmer under-specifying the program.

That only works for simple variables, where initialisation to 0 is counterproductive because you lose a useful compiler warning (about use of an uninitialised variable).

The main case is arrays. There it's often impossible to prove whether some part of the array is read before initialisation, so there is no warning. It becomes a tradeoff: potentially costly initialisation (arrays can be very big) versus potentially reading random values rather than 0.

  • Fair point, though compilers could presumably do much better warning there on arrays -- at least treating the whole array like a single variable and warning when it knows you've read it without ever writing to it.

    • C has pointers. It's often very difficult or impossible to deduce whether an array was written to or not. It's possible in some cases (a local array and no pointers of the same type in scope), though, so yeah, a warning would be useful in those cases.

In recent years I've come to rely on this non-initialization idiom. Both because as code paths change the compiler can warn for simple cases, and because running tests under Valgrind catches it.