An in-depth understanding of thread safety and singletons

  • 2020-04-02 01:33:35
  • OfStack

Thread safety is a tricky issue. Even when locks are used correctly, they may not have the desired effect.
Let's look at some plausible-looking code:


x = 0;

Thread 1        Thread 2
lock();         lock();
x++;            x++;
unlock();       unlock();

You would expect x to equal 2 after both threads finish, right? After all, each x++ is protected by lock()/unlock(), so the two increments cannot interleave. How, then, can the ++ operation still go wrong under multithreading? The answer lies in what x++ compiles down to. As an optimization, the compiler may keep x in a register (because registers are fast). Thread 1 then increments only its register copy; before that value is written back, Thread 2 reads x from memory, which is still 0, increments it, and writes 1 back. When Thread 1 finally writes its register copy to memory, x is once again 1, not 2.
It's the compiler!

Let's look at another example:


x = y = 0;

Thread 1        Thread 2
y = 1;          x = 1;
r1 = x;         r2 = y;

When you thump your chest and assure everyone that at least one of r1 or r2 must end up as 1, the compiler once again proves you wrong.

The reason is that compilers have used this kind of instruction-reordering optimization for efficiency for decades. So the code above may actually execute like this:


x = y = 0;

Thread 1        Thread 2
r1 = x;         r2 = y;
y = 1;          x = 1;

Annoying, yes, but we do have volatile, which:
1. Prevents the compiler from caching the variable in a register instead of writing it back to memory.
2. Prevents the compiler from reordering operations on the variable.

Unfortunately, as the saying goes, while the priest climbs a foot, the devil climbs ten. CPUs perform dynamic, out-of-order scheduling, so the CPU itself may also reorder instructions, and volatile cannot help with that. Fortunately we have the memory barrier (a CPU instruction) to stop the CPU from reordering across it.
With that, it might seem the thread-safety problem is solved.

The double-checked locking idiom in the Singleton pattern suffers from a famous reordering problem. The code looks something like this:


volatile Singleton* Singleton::_instance = 0;

static Singleton& Instance() {
    if (0 == _instance) {
        Lock lock(_mutex);
        if (0 == _instance) {
            _instance = new Singleton();
            atexit(Destroy);
        }
    }
    return *_instance;
}

In short, the compiler may reorder instructions for efficiency (compiler-based reorderings).
Look at this line of code:
_instance = new Singleton();

Without compiler optimization, the order is as follows:
1. operator new allocates suitable memory;
2. A Singleton object is constructed in the allocated memory;
3. The memory address is assigned to _instance.

But with compiler optimization, the execution order may become:
1. operator new allocates suitable memory;
2. The memory address is assigned to _instance;
3. A Singleton object is constructed in the allocated memory.

Under the optimized ordering, suppose Thread 1 is suspended right after step 2. Thread 2 then starts, finds that 0 == _instance is false, and simply returns *_instance, even though the Singleton object has not been constructed yet. The caller ends up using an unconstructed object, which is undefined behavior.

