
Comment by the_why_of_y

7 years ago

> The volatile keyword was devised to prevent compiler optimizations that might render code incorrect in the presence of certain asynchronous events.

This is simply wrong.

Hardware interrupts and UNIX signals are the asynchronous events in question, and C's volatile is still useful in those contexts, where there is only a single thread of execution.

Volatile still doesn’t protect you there, whereas C++11 atomics do. If the object you mark volatile is not read and written atomically at the CPU and cache level, you will see torn values. I’ve been there and am certain of it. And pre-C++11 there was no portable way to find out which operations are atomic on a given architecture, so it was impossible to write such code portably. C++11 fixed all that, and there’s no longer any reason to use volatile for this: use atomics, possibly with fine-grained barriers if needed and understood.

Here’s a compiler vendor showing how that use of volatile fails on some systems:

http://www.keil.com/support/docs/2801.htm

  • You're right that volatile alone isn't enough; for portability you'd typically declare the variable volatile sig_atomic_t, which has made the necessary guarantee since C89 and so predates C++11. (It guarantees nothing about access from multiple threads, of course.)

    The problem with std::atomic<T> is that it may be implemented with a mutex, in which case it can deadlock in a signal handler. But as you say, you can check for that with is_lock_free.

    • Yep. And this thread illustrates why threading is hard, especially in C++ :)

      Oh, and sig_atomic_t is not guaranteed thread-safe, only signal-safe. The difference is that code which worked on a single-CPU system can break when you move it to a dual-CPU system. I ran across this a while ago when porting code to an ESP32.

      Atomics have so far worked best across the chips I’m poking at.