Implementing a spin lock in Go

  • 2020-06-23 00:36:44
  • OfStack

CAS Algorithm (compare and swap)

CAS is a well-known lock-free algorithm. Lock-free programming means synchronizing a variable between multiple threads without using locks, that is, without any thread being blocked; this is also called non-blocking synchronization. The CAS operation involves three operands:

  • V: the memory value to be read and written
  • A: the expected value to compare against
  • B: the new value to write

If and only if the value of V equals A, CAS atomically updates V to the new value B; otherwise nothing is done. The comparison and the replacement together form one atomic operation. CAS is typically used inside a spin loop, that is, the operation is retried repeatedly until it succeeds.

spinlocks

A spin lock works as follows: when a thread tries to acquire a lock that is already held by another thread, it waits in a loop, checking repeatedly, until the lock becomes available.

While spinning, the thread stays active but performs no useful work, so using this kind of lock results in busy-waiting.

A spin lock is a locking mechanism for protecting shared resources. Spin locks are in fact similar to mutexes: both resolve exclusive use of a resource, and with either one there is at most one holder at any given time, that is, at most one execution unit can hold the lock. The scheduling mechanism differs, however. With a mutex, if the resource is already occupied, the applicant goes to sleep. A spin lock does not put the caller to sleep: if the lock is held by another execution unit, the caller loops, checking whether the holder has released it yet, hence the name "spin".

A spin lock in Go


package main

import (
  "runtime"
  "sync"
  "sync/atomic"
)

type spinLock uint32

// Lock spins with CAS until the state changes from 0 (free) to 1 (held).
func (sl *spinLock) Lock() {
  for !atomic.CompareAndSwapUint32((*uint32)(sl), 0, 1) {
    runtime.Gosched() // yield the processor so other goroutines can run
  }
}

// Unlock atomically resets the state to 0 (free).
func (sl *spinLock) Unlock() {
  atomic.StoreUint32((*uint32)(sl), 0)
}

func NewSpinLock() sync.Locker {
  var lock spinLock
  return &lock
}

Reentrant spin locks and non-reentrant spin locks

A careful look at the code above shows that it is not reentrant: if a goroutine that already holds the lock tries to acquire it again before releasing it, the second acquisition fails. Since the CAS condition is not satisfied, the second attempt enters the spin loop and waits; with a reentrant lock, the second acquisition would succeed.

Moreover, even if the second acquisition were allowed to succeed naively, the first Unlock would release the lock entirely while the outer critical section still holds it, which is also wrong.

To make the lock reentrant, we introduce a counter that records how many times the owning goroutine has acquired the lock:


type spinLock struct {
  state uint32 // 0 = free, 1 = held; the CAS operates on this field
  owner int    // goroutine id of the current holder
  count int    // number of extra (re-entrant) acquisitions by the holder
}

func (sl *spinLock) Lock() {
  me := GetGoroutineId()
  if sl.owner == me { // the current goroutine already holds the lock:
    sl.count++        // increase the count by 1 and return
    return
  }
  // The lock is not held by us, so spin on CAS until it is acquired
  for !atomic.CompareAndSwapUint32(&sl.state, 0, 1) {
    runtime.Gosched()
  }
  sl.owner = me
}

func (sl *spinLock) Unlock() {
  if sl.owner != GetGoroutineId() {
    panic("illegalMonitorStateError")
  }
  if sl.count > 0 { // greater than 0 means the current goroutine acquired
    sl.count--      // the lock multiple times; releasing just decrements count
  } else { // count == 0: acquisitions and releases are balanced,
    // so the lock can really be released
    sl.owner = 0
    atomic.StoreUint32(&sl.state, 0)
  }
}

func GetGoroutineId() int {
  defer func() {
    if err := recover(); err != nil {
      fmt.Printf("panic recover: panic info: %v\n", err)
    }
  }()

  // Parse the goroutine id from the first line of the stack trace,
  // which looks like "goroutine 18 [running]:".
  var buf [64]byte
  n := runtime.Stack(buf[:], false)
  idField := strings.Fields(strings.TrimPrefix(string(buf[:n]), "goroutine "))[0]
  id, err := strconv.Atoi(idField)
  if err != nil {
    panic(fmt.Sprintf("cannot get goroutine id: %v", err))
  }
  return id
}


func NewSpinLock() sync.Locker {
  var lock spinLock
  return &lock
}

Other variations of the spin lock

1. TicketLock

TicketLock mainly addresses the issue of fairness.

Idea: each time a thread wants to acquire the lock, it is handed an incrementing id, which we call its queue number. The lock also maintains a service number; each time the lock is released, the service number is incremented. A thread acquires the lock when the service number equals its queue number. Because queue numbers increase monotonically, the thread that requested the lock first is guaranteed to acquire it first, which gives fairness.

Imagine taking a number while queueing at a bank: every waiting customer represents a thread that needs the lock, and the service window represents the lock. Whenever the window finishes serving a customer, the service number is incremented, and among all waiting customers only the one whose queue number matches the service number gets served.

2. CLHLock

The CLH lock is a scalable, high-performance, fair spin lock based on a linked list. Each applying thread spins only on a local variable, continuously polling the state of its predecessor node.

3. MCSLock

With MCSLock, each thread spins on its own local node rather than on its predecessor's.

4. CLHLock and MCSLock

Both are based on linked lists. The difference is that CLHLock uses an implicit linked list, with no real successor-node attribute, while MCSLock uses an explicit linked list, with an attribute pointing to the successor node.

Both solve TicketLock's multi-processor cache-synchronization problem by keeping each thread's lock state in its own node, one node per thread.

Spinlocks and mutexes

Both spin locks and mutexes are mechanisms for protecting shared resources, and with either one there is at most one holder at any given time. A thread that requests a mutex goes to sleep if the lock is already occupied; a thread that requests a spin lock does not sleep but waits in a loop until the lock is released.

Conclusion:

  • Spin lock: when a thread tries to acquire the lock and it is held by another thread, the current thread loops until it gets the lock. The thread's state does not change while it spin-waits: it stays in user mode and active. A spin lock held for too long causes the other threads waiting for it to exhaust the CPU. The spin lock itself guarantees neither fairness nor reentrancy, but locks with fairness and reentrancy can be built on top of it.
  • TicketLock: uses a scheme like queueing at a bank to make the spin lock fair. However, because every waiter continuously reads serviceNum, each read and write must be synchronized across the processor caches, which causes heavy system-bus and memory traffic and greatly reduces overall system performance.
  • CLHLock and MCSLock: use linked lists to reduce processor cache synchronization and greatly improve performance. The difference is that CLHLock polls the state of its predecessor node, while MCS checks the lock state of its own node.
  • CLHLock has problems under the NUMA architecture. On a NUMA system without caches, CLHLock is not the optimal spin lock, because it spins on its predecessor's node, and a processor accesses its local memory faster than memory on other nodes reached over the interconnect.
