Undefined Behavior in C and C++ (2024)

6 months ago (russellw.github.io)

One has to add that of the 218 instances of UB in ISO C23, 87 are in the core language. Of those we have already removed 26 and are in the process of removing many others. You can find my latest update here (there has been some further progress since then): https://www.open-std.org/jtc1/sc22/wg14/www/docs/n3529.pdf

  • A lot of that work is basically fixing documentation bugs, labelled "ghosts" in your text. Places where the ISO document is so bad as a description of C that you would think there's Undefined Behaviour but it's actually just poorly written.

    Fixing the document is worthwhile, and certainly a reminder that WG21's equivalent effort needs to make such a list before it can even begin that process on its even longer document. But practical C programmers don't read the document, and since this UB was a "ghost" they were never tripped up by it. Removing items from the list this way does not translate into the meaningful safety improvement you might imagine.

    There's not a whole lot of movement there towards actually fixing the problem. Maybe it will come later?

    • > practical C programmers don't read the document and since this UB was a "ghost" they weren't tripped by it

      I would strongly suspect that C compiler implementers very much do read the document, though. Which, as far as I can see, means "ghosts" could easily become actual UB (and worse, sneaky UB that you wouldn't expect.)

      13 replies →

    • Fixing the actual problems is work-in-progress (as my document also indicates), but naturally it is harder.

      But the original article also complains about the sheer number of trivial instances of UB.

  • And yet, I see P1434R0 seemingly trying to introduce new undefined behavior, around integer-to-pointer conversions, where previously you had reasonably sensible implementation-defined behavior (the conversions "are intended to be consistent with the addressing structure of the execution environment").

    https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2019/p14...

    • Pointer provenance already existed before, but the standards were contradictory and incomplete. This is an effort to more rigorously nail down the semantics.

      I.e., the UB already existed, but it was not explicit: it had to be inferred from the whole text, and the boundaries were fuzzy. Remember that anything not explicitly defined by the standard is implicitly undefined.

      Also remember, just because you can legally construct a pointer it doesn't mean it is safe to dereference.

      51 replies →

Undefined behavior only means that ISO C doesn't give requirements, not that nobody gives requirements. Many useful extensions are instances where undefined behavior is documented by an implementation.

Including a header that is not in the program, and not in ISO C, is undefined behavior. So is calling a function that is not in ISO C and not in the program. (If the function is not anywhere, the program won't link. But if it is somewhere, then ISO C has nothing to say about its behavior.)

Correct, portable POSIX C programs have undefined behavior in ISO C; only if we interpret them via IEEE 1003 are they defined by that document.

If you invent a new platform with a C compiler, you can have it such that #include <windows.h> reformats all the attached storage devices. ISO C allows this because it doesn't specify what happens if #include <windows.h> successfully resolves to a file and includes its contents. Those contents could be anything, including some compile-time instruction to do harm.

Even if a compiler's documentation doesn't grant that a certain instance of undefined behavior is a documented extension, the existence of a de facto extension can be inferred empirically through numerous experiments: compiling test code and reverse engineering the object code.

Moreover, the source code for a compiler may be available; the behavior of something can be inferred from studying the code. The code could change in the next version. But so could the documentation; documentation can take away a documented extension the same way as a compiler code change can take away a de facto extension.

Speaking of object code: if you follow a programming paradigm of verifying the object code, then undefined behavior becomes moot, to an extent. You don't trust the compiler anyway. If the machine code has the behavior which implements the requirements that your project expects of the source code, then the necessary thing has been somehow obtained.

  • > Undefined behavior only means that ISO C doesn't give requirements, not that nobody gives requirements. Many useful extensions are instances where undefined behavior is documented by an implementation.

    True, most compilers have sane defaults in many cases for things that are technically undefined (like taking sizeof(void) or doing pointer arithmetic on something other than char). But not all of these cases can be saved by sane defaults.

    Undefined behavior means the compiler can replace the code with whatever. So if you e.g. compile optimizing for size, the compiler will rip out the offending code, as replacing it with nothing yields the greatest size optimization.

    See also John Regehr's collection of UB-Canaries: https://github.com/regehr/ub-canaries

    Snippets of software exhibiting undefined behavior that end up executing, e.g., both the true and the false branch of an if-statement, or neither, etc. UB should not be taken lightly, IMO...
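
    A minimal sketch of the kind of thing those canaries catch (the function and names here are illustrative, not taken from the repo): because dereferencing a null pointer is UB, an optimizer is entitled to conclude the pointer cannot be null and drop the check that follows the dereference.

      int first_or_zero(const int *p) {
          int first = *p;      /* UB if p is NULL */
          if (p == 0)          /* the compiler may now treat this test as always false */
              return 0;
          return first;
      }

    Whether a given compiler actually does this varies with version and flags, which is exactly why the canaries exist.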

    • > [...] undefined behavior, executing e.g. both the true and the false branch of an if-statement or none etc.

      Or replacing all you mp3s with a Rick Roll. Technically legal.

      (Some old version of GHC had a hilarious bug where it would delete any source code with a compiler error in it. Something like this would technically be legal for most compiler errors a C compiler could spot.)

  • Unfortunately it also means that when the programmer fails to understand what undefined behaviour their code exposes, the compiler is free to take advantage of it for the ultimate performance optimizations, as a means to beat compiler benchmarks.

    The code change might come in something as innocent as a bug fix to the compiler.

    • Ah yes, the good old "compiler writers only care about benchmarks and are out to hurt everyone else" nonsense.

      I for one am glad that compilers can assume that things that can't happen according to the language do in fact not happen and don't bloat my programs with code to handle them.

      11 replies →

  • > Including a header that is not in the program, and not in ISO C, is undefined behavior.

    What is this supposed to mean? I can't think of any interpretation that makes sense.

    I think ISO C defines the executable program to be something like the compiled translation units linked together. But header files do not have to have any particular correspondence to translation units. For example, a header might declare functions whose definitions are spread across multiple translation units, or define things that don't need any definitions in particular translation units (e.g. enum or struct definitions). It could even play macro tricks which means it declares or defines different things each time you include it.

    Maybe you mean it's undefined behaviour to include a header file that declares functions that are not defined in any translation unit. I'm not sure even that is true, so long as you don't use those functions. It's definitely not true in C++, where it's only a problem (not sure if it's undefined exactly) if you odr-use a function that has been declared but not defined anywhere. (Examples of odr-use are calling the function or taking its address, but not, for example, using sizeof on an expression that includes it.)

    • > I can't think of any interpretation that makes sense

      Start with a concrete example. A header that is not in our program, or described in ISO C. How about:

        #include <winkle.h>
      

      Defined behavior or not? How can an implementation respond to this #include while remaining conforming? What are the limits on that response?

      > But header files do not have to have any particular correspondence to translation units.

      A header inclusion is just a mechanism that brings preprocessor tokens into a translation unit. So, what does the standard tell us about the tokens coming from #include <winkle.h> into whatever translation unit we put it into?

      Say we have a single file program and we made that the first line. Without that include, it's a standard-conforming Hello World.

      14 replies →

  • You are basically trying to explain the difference between a conforming program and a strictly conforming one.

A couple of solutions in development (but already usable) that more effectively address UB:

i) "Fil-C is a fanatically compatible memory-safe implementation of C and C++. Lots of software compiles and runs with Fil-C with zero or minimal changes. All memory safety errors are caught as Fil-C panics." "Fil-C only works on Linux/X86_64."

ii) "scpptool is a command line tool to help enforce a memory and data race safe subset of C++. It's designed to work with the SaferCPlusPlus library. It analyzes the specified C++ file(s) and reports places in the code that it cannot verify to be safe. By design, the tool and the library should be able to fully ensure "lifetime", bounds and data race safety." "This tool also has some ability to convert C source files to the memory safe subset of C++ it enforces"

  • Fil-C is interesting because, as you'd expect, it takes a significant performance penalty to deliver this property. If it's broadly adopted, that would suggest that, at least in this regard, C programmers genuinely do prioritise their simpler language over mundane concerns like platform support or performance.

    The resulting language doesn't make sense for commercial purposes but there's no reason it couldn't be popular with hobbyists.

    • Well, you could also treat Fil-C as a sanitiser, like memory-san or ub-san:

      Run your test suite and some other workloads under Fil-C for a while, fix any problems it reports, and once it stops reporting problems, compile the whole thing with GCC for your release version.

      3 replies →

>Uninitialized data

They at least fixed this in C++26. No longer UB, but "erroneous behavior". Still some random garbage value (so an uninitialized pointer will likely lead to disastrous results still), but the compiler isn't allowed to fuck up your code; it has to generate code as if it had some value.

  • It won't be a "random garbage value" but is instead a value the compiler chose.

    In effect, if you don't opt out, your value will always be initialized, but not to a useful value you chose. You can think of this as similar to the (current, defanged and deprecated, as well as unsafe) Rust std::mem::uninitialized().

    There were earlier attempts to make this value zero, or rather, as many 0x00 bytes as needed, because on most platforms that's markedly cheaper to do, but unfortunately some C++ would actually have worse bugs if the "forgot to initialize" case was reliably zero instead.

  • C also fixed it in its way.

    Access to an uninitialized object defined in automatic storage, whose address is not taken, is UB.

    Access to any uninitialized object whose bit pattern is a non-value, likewise.

    Otherwise, it's good: the value implied by the bit pattern is obtained and computation goes on its merry way.

Rust here, Rust there. We are just talking about C, not Rust. Why do we have to use Rust? If we're talking about memory safety, why does no one recommend the Ada language instead of Rust?

We have Zig, Hare, Odin, and V too.

  • > Ada language instead of rust

    Because it never achieved mainstream success?

    And Zig, for example, is very much not memory safe, which a cursory search for "segfault" in the Bun repo quickly tells you.

    https://github.com/oven-sh/bun/issues?q=is%3Aissue%20state%3...

    • More accurately, Zig helps with spatial memory safety (e.g. out-of-bounds access) but not with temporal memory safety (e.g. use-after-free), which is where Rust excels.

      2 replies →

    • > Because it never achieved mainstream success?

      And with this attitude it never will. With Rust's hype, it would.

  • None of them solve use after free, for example.

    Ada would be a rather nice choice, but most hackers love their curly brackets.

  • Even within the Rust OSS community it's irritating. They will try to cancel people for writing libs that use `unsafe`, make APIs difficult to use by wrapping things in multiple layers of traits, and then claim that other patterns are unsafe/unsound/UB. They make claims that things like DMA are "advanced topics" and that "we haven't figured it out yet / found a good solution yet". Love Rust, hate the Safety Inquisition. Or they say things like "Why use Rust if you don't use all the safety features and traits?"... which belittles Rust as a one-trick language!

A small nit: the development of Unix began on the PDP-7 in assembly, not the PDP-11.

(The B language was implemented for the PDP-7 before the PDP-11, which are rather different machines. It’s sometimes suggested that the increment and decrement operators in C, which were inherited from B, are due to the instruction set architecture of the PDP-11, but this could not have been the case. Per Dennis Ritchie:¹

> Thompson went a step further by inventing the ++ and -- operators, which increment or decrement; their prefix or postfix position determines whether the alteration occurs before or after noting the value of the operand. They were not in the earliest versions of B, but appeared along the way. People often guess that they were created to use the auto-increment and auto-decrement address modes provided by the DEC PDP-11 on which C and Unix first became popular. This is historically impossible, since there was no PDP-11 when B was developed. The PDP-7, however, did have a few “auto-increment” memory cells, with the property that an indirect memory reference through them incremented the cell. This feature probably suggested such operators to Thompson; the generalization to make them both prefix and postfix was his own.

Another person puts it this way:²

> It's a myth to suggest C’s design is based on the PDP-11. People often quote, for example, the increment and decrement operators because they have an analogue in the PDP-11 instruction set. This is, however, a coincidence. Those operators were invented before the language [i.e. B] was ported to the PDP-11.

In any case, the PDP-11 usually gets all the love, but I want to make sure the other PDPs get some too!)

[1] https://www.bell-labs.com/usr/dmr/www/chist.html

[2] https://retrocomputing.stackexchange.com/questions/8869

We switched to Rust. Generally, are there specific domains or applications where C/C++ remain preferable? Many exist, but are there tasks Rust fundamentally cannot handle, or for which it is a weak choice?

  • Yes, all the industries where C and C++ are the industry standard: Khronos APIs, POSIX, CUDA, DirectX, Metal, console devkits, the LLVM and GCC implementations, ...

    Not only are you faced with creating your own wrappers, if no one else has done it already.

    The tooling for IDEs and graphical debuggers also assumes either C or C++, so it won't be there for Rust.

    Ideally the day will come when those ecosystems also embrace Rust, but that is maybe still decades away.

  • Advantages of C are short compilation time, portability, long-term stability, widely available expertise and training materials, less complexity.

    IMHO you can deal with UB just fine in C today by following best practices, and the reasons typically given for not following them would also rule out the use of most other, safer languages.
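
    For example (assuming GCC or Clang), a common baseline practice is to build and run the tests with warnings and the sanitizers turned on, something like:

      cc -O2 -Wall -Wextra -g -fsanitize=address,undefined test.c

    That catches a large share of the UB discussed here at runtime, though of course only on the paths your tests actually exercise.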

    • This is a pet peeve, so forgive me: C is not portable in practice. Almost every C program and library that does anything interesting has to be manually ported to every platform.

      C is portable in the least interesting way, namely that compilers exist for all architectures. But that's where it stops.

      6 replies →

    • > short compilation time

      > IMHO you can today deal with UB just fine in C if you want to by following best practices

      In other words, the short compilation time has been traded off against wetware brainwashing... well, adjustment time, which makes the supposed advantage much less desirable. It is still an advantage, I reckon, though.

      7 replies →

  • Rust encourages a rather different "high-level" programming style that doesn't suit the domains where C excels. Pattern matching, traits, annotations, generics and functional idioms make the language verbose and semantically complex. When you follow its best practices, the code ends up more complex than it really needs to be.

    C is a different kind of animal that encourages terseness and economy of expression. When you know what you are doing with C pointers, the compiler just doesn't get in the way.

    • Pattern matching should make the language less verbose, not more. (Similar for many of the other things you mentioned.)

      > When you know what you are doing with C pointers, the compiler just doesn't get in the way.

      Alas, it doesn't get in the way of you shooting your own foot off, too.

      Rust allows unsafe and other shenanigans, if you want that.

      2 replies →

  • Yes, based on a few attempts chronicled in articles from different sources, Rust is a weak choice for game development, because it's too time-consuming to refactor.

  • Rust forces you to code in the Rust way, while C or C++ let you do whatever you want.

    • > C or C++ let you do whatever you want.

      C and C++ force you to code in the C and C++ ways. It may be that that's what you want, but they certainly don't let me code how I want to code!

      7 replies →

  • If you wanted to develop a cross-platform native desktop / mobile app in one framework without bundling / using a web browser, only Qt comes to mind, which is C++. I think there are some bindings, though.

  • An application domain where C++ is notably better is when the ownership and lifetimes of objects are not knowable at compile-time, only being resolvable at runtime. High-performance database kernels are a canonical example of code where this tends to be common.

    Beyond that, recent C++ versions have much more expressive metaprogramming capability. The ability to do extensive codegen and code verification within C++ at compile-time reduces lines of code and increases safety in a significant way.

  • I haven't used Rust extensively so I can't make any criticism besides that I find compilation times to be slower than C

    • I find with C/C++ I have to compile to find warnings and errors, while with Rust I get more information automatically due to the modern type and linking systems. As a result I compile Rust significantly fewer times, which is a massive speed increase.

      Rust's tooling is hands-down better than C/C++'s, which makes for a more streamlined and efficient development experience.

      8 replies →

    • The popular C compilers are seriously slow, too: orders of magnitude slower than the C compilers of yesteryear.

  • Embedded hardware, any processor Rust doesn't support (there are many), and any place where code size is critical. Rust has a BIG base size for an application, uselessly so at this time. I'd also love to see whether it offers anything of use in those spaces, especially where no memory allocation takes place at all. C (and to a lesser extent C++) are both very good in those spaces.

    • You can absolutely make small Rust programs; you just have to actually configure things the right way. Additionally, the Rust language doesn't have allocation at all: it's purely a library concern. If you don't want heap allocations, then don't include them. It works well.

      The smallest binary rustc has produced is like ~145 bytes.

      2 replies →

  • > Generally, are there specific domains or applications where C/C++ remain preferable?

    Well, anything where your people have more experience in the other language or where the libraries are a lot better.

  • Rust can do inline ASM, so finding a task Rust "fundamentally cannot handle" is almost impossible.

    • That's almost as vacuous as saying that Rust can implement universal Turing machines, or that Rust can do FFI.

In C, using uninitialized data is undefined behavior only if:

- it is an automatic variable whose address has not been taken; or

- the uninitialized object's bits are such that it takes on a non-value representation.
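
A minimal sketch of the distinction, with illustrative names, assuming an implementation where int has no non-value ("trap") representations:

  int f(void)
  {
      int a;          /* automatic, address never taken */
      int b;
      int *pb = &b;   /* address taken */

      /* return a;       would be UB under the first bullet */
      return *pb;     /* not UB under these rules: int has no non-value
                         representation here, so we simply get whatever
                         (unspecified) value b's bits happen to encode */
  }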

I don't buy the "it's because of optimization" argument.

And I especially don’t buy that UB is there for register allocation.

First of all, that argument only explains UB of OOB memory accesses at best.

Second, you could define the meaning of OOB by just saying "pointers are integers" and then further state that non-escaping locals don't get addresses. There are many ways you could specify that, if you wanted it badly enough. My favorite way to do it involves saying that pointers to locals are lazy thunks that create addresses on demand.

  • No, it's absolutely because of optimization. For instance, C++20 defined signed integer representation as having two's complement, but signed integer overflow is still undefined behaviour. The reason is that if you compile with flags that make it defined, you lose a few percentage points of performance (primarily from preventing loop unrolling and auto-vectorization).

    Same thing with e.g. strict aliasing or the various UB that exists in the standard library. For instance, it's UB to pass a null pointer to strlen. Of course, you can make that perfectly defined by adding an `if` to strlen that just returns 0. But then you're adding a branch to every strlen, and C is simply not willing to do that for performance reasons, so they say "this is UB" instead.

    Pretty much every instance of UB in standard C or C++ is there because making it defined would either hamper the optimizer or make standard library functions slower. They don't just make things UB for fun.
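
    Roughly the textbook case for the signed-overflow point (an illustrative reconstruction, not a quote from anywhere): because i + 1 is assumed never to wrap, the compiler can treat the loop below as running exactly n + 1 times, which lets it unroll or vectorize. Under -fwrapv the loop would legitimately be infinite when n == INT_MAX, so that trip count can no longer be assumed.

      void add(float *a, const float *b, int n) {
          for (int i = 0; i <= n; i++)
              a[i] += b[i];
      }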

    • This isn’t the reason why the UB is in the spec in the first place. The spec left stuff undefined to begin with because of lack of consensus over what it should do.

      For example, the reason two's complement took so long to mandate is that some machine that ran C and used ones' complement still existed.

      > The reason is that if you compile with flags that make it defined, you lose a few percentage points of performance (primarily from preventing loop unrolling and auto-vectorization).

      I certainly don’t lose any perf on any workload of mine if I set -fwrapv

      If your claim is that implementers use optimization as the excuse for wanting UB, then I can agree with that.

      I don’t agree that it’s a valid argument though. The performance wins from UB are unconvincing, except maybe on BS benchmarks that C compilers overtune for marketing reasons.

      2 replies →

    • I wish there were a way to opt into undefined behavior for unsigned overflow. It's rare that wraparound is actually what you want, and in many cases overflow is still a bug. It sucks to have to either miss out on potential optimizations or miss out on the guarantee that the value can't be negative.
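
      For instance (illustrative): this guard has to survive with an unsigned type because wraparound is defined, while the signed version can legally be folded to a constant precisely because overflow is UB.

        int will_wrap(unsigned x) { return x + 1 < x; }  /* must be kept as written */
        int will_over(int x)      { return x + 1 < x; }  /* may be optimized to 0   */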

      3 replies →

  • > First of all, that argument only explains UB of OOB memory accesses at best.

    It explains many loop-unrolling and integer-overflow cases as well.

  • > Second, you could define the meaning of OOB by just saying "pointers are integers"

    This means losing a lot of optimisations, so in fact when you say you "don't buy" this argument you only mean that you don't care about optimisation. Which is fine, but it does mean the "improved" C isn't very useful in a lot of applications; you might as well choose Java.

    • > This means losing a lot of optimisations

      You won’t lose “a lot” of optimizations and you certainly won’t lose enough for it to make a noticeable difference in any workload that isn’t SPEC

This asserts that UB was deliberately created for optimisation purposes, not to handle implementation differences. It doesn't provide any evidence, though, and that seems unlikely to me.

The spec even says:

> behavior, upon use of a nonportable or erroneous program construct or of erroneous data, for which this International Standard imposes no requirements

No motivation is given that I could find, so the actual difference between undefined and implementation-defined behaviour seems to come down to whether the behaviour needs to be documented.

  • I'd say the original intent of UB was not the sort of "optimizer exploits" we see today, but to allow wiggle room for supporting vastly different CPUs without having to compromise runtime performance or increase compiler complexity to balance performance against correctness. Basically an escape hatch for compilers. The difference from implementation-defined behaviour has also always been quite fuzzy.

    Also the C spec has always been a pragmatic afterthought, created and maintained to establish at least a minimal common feature set expected of C compilers.

    The really interesting stuff still only exists outside the spec in vendor language extensions.

I, once again, disagree with the premise that UB is a necessary precondition for optimisation, or that it exists to allow for optimisation. You do not need UB to unroll a loop, inline a function, lift an object or computation out of a loop, etc. Moreover, _most_ UB does not assist in optimisation at all.

The two instances where UB allows for optimisation are as follows:

1. The 'signed overflow' UB allows for faster array indexing. By ignoring potential overflow, the compiler can generate code that doesn't check for accidental overflow (which would otherwise require masking the array index and recomputing the address on each loop iteration). I believe the better solution here would be to introduce a specific type for iterating over arrays that will never overflow (size_t would do fine), and to make signed overflow at least implementation-defined, if not outright fully defined, after a suitable period during which compilers warn if you use a too-small type for array indexing.

2. The 'aliasing' UB does away with the need to read/write values to/from memory each time they're used, and is extremely important to performance optimisation.
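
A small sketch of point 2, assuming the usual type-based ("strict") aliasing rules, with illustrative names:

  int set_and_read(int *x, float *y)
  {
      *x = 1;
      *y = 2.0f;   /* an int and a float are assumed not to alias ... */
      return *x;   /* ... so this may be folded to the constant 1, with no reload */
  }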

But the rest? Most of it does precisely nothing for performance. At 'best', the compiler uses detected UB to silently eliminate code branches, but that's something to be feared, not celebrated. It isn't an optimisation if it removes vital program logic, because the compiler could 'demonstrate' that it could not possibly take the removed branch, on account of it containing UB.

The claim in the linked article ("what every C programmer should know") that use of uninitialized variables allows for additional optimisation is incorrect. What it does instead is this: if the compiler sees you declare a variable and then read from it before writing to it, it has detected UB, and since the rule is that "the compiler is allowed to assume UB does not occur", it uses that as 'evidence' that the code branch can never be taken and can be eliminated. It does not make things go faster; it makes them go _wrong_.
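
A sketch of that pattern, with illustrative names:

  int scaled(int have_input, int input)
  {
      int x;              /* deliberately left uninitialized */
      if (have_input)
          x = input;
      return x * 2;       /* UB when have_input == 0, so the compiler may emit code
                             as if have_input were always nonzero and silently drop
                             any handling of the "no input" case */
  }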

Undefined behaviour, ultimately, exists for many reasons: because the standards committee forgot a case, because the underlying platforms differ too wildly, because you cannot predict in advance what the result of a bug may be, to grandfather in broken old compilers, etc. It does not, in any way, shape, or form, exist _in order to_ enable optimisation. It _allows_ it in some cases, but that is not, and never was, the goal.

Moreover, the phrasing of "the compiler is allowed to assume that UB does not occur" was originally only meant to indicate that the compiler was allowed to emit code as if all was well, without introducing additional tests (for example, to see if overflow occurred or if a pointer was valid) - clearly that would be very expensive or downright infeasible. Unfortunately, over time this has enabled a toxic attitude to grow that turns minor bugs into major disasters, all in the name of 'performance'.

The two bullet points towards the end of the article are both true: the compiler SHOULD NOT behave like an adversary, and the compiler DOES NEED license to optimize. The mistake is thinking that UB is a necessary component of such license. If that were true, a language with more UB would automatically be faster than one with less. In reality, C++ and Rust are roughly identical in performance.

Worst languages ever.

  • Jack Sparrow: “… but you have heard of them.”

    The dustbin of programming languages is jam packed with elegant, technically terrific, languages that never went anywhere.

  • C and C++ are the languages that brought us UNIX, the Linux kernel, macOS and Windows, and the interpreters of virtually every other language in the world; they power virtually all software in the world as well as the vast majority of embedded devices.

    Chill the fuck out.