A detailed introduction to lock classification in Java

  • 2020-10-23 20:59:07
  • OfStack

In articles about concurrency you will often read about various kinds of locks, such as fair locks and optimistic locks. This article introduces the common classifications of locks:

• Fair locks / unfair locks
• Reentrant locks
• Exclusive locks / shared locks
• Mutexes / read-write locks
• Optimistic locks / pessimistic locks
• Segmented locks
• Biased locks / lightweight locks / heavyweight locks
• Spin locks

These categories do not all refer to the state of a lock: some describe a characteristic of the lock, and some describe a design idea. Below is a summary of what each term means.

Fair locks/unfair locks

Fair locking means that multiple threads acquire the lock in the order in which they requested it.

Unfair locking means that threads do not necessarily acquire the lock in the order in which they requested it: a thread that requested the lock later may acquire it before a thread that requested it earlier. This can cause priority inversion or starvation.

In Java, ReentrantLock lets you specify through its constructor whether the lock is fair; the default is unfair. The advantage of an unfair lock is that its throughput is higher than that of a fair lock.

synchronized is also an unfair lock. Because, unlike ReentrantLock, it does not implement thread scheduling through AQS (AbstractQueuedSynchronizer), there is no way to make it a fair lock.
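As a minimal sketch of the constructor choice (the class name FairnessDemo is made up for illustration), ReentrantLock exposes the fairness setting through its boolean constructor argument and the isFair() method:

```java
import java.util.concurrent.locks.ReentrantLock;

public class FairnessDemo {
    // true -> fair lock: waiting threads acquire it in FIFO order
    private static final ReentrantLock fairLock = new ReentrantLock(true);
    // the no-arg constructor defaults to an unfair lock (higher throughput)
    private static final ReentrantLock unfairLock = new ReentrantLock();

    public static void main(String[] args) {
        System.out.println("fairLock.isFair()   = " + fairLock.isFair());   // true
        System.out.println("unfairLock.isFair() = " + unfairLock.isFair()); // false
    }
}
```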

Reentrant lock

A reentrant lock, also known as a recursive lock, is one that a thread which already holds the lock in an outer method automatically acquires again when it enters an inner method that takes the same lock. This is a little abstract, so here is a code example.

In Java, ReentrantLock is, as its name (Re-entrant Lock) says, a reentrant lock.

synchronized is also a reentrant lock. One advantage of reentrant locking is that it avoids deadlock to a certain degree.


public class ReentrantDemo {
  // setA holds the lock on `this` and then calls setB, which needs
  // the same lock; reentrancy lets the nested call proceed.
  synchronized void setA() throws Exception {
    Thread.sleep(1000);
    setB();
  }

  synchronized void setB() throws Exception {
    Thread.sleep(1000);
  }
}

The code above demonstrates the effect of a reentrant lock. If synchronized were not reentrant, setB could not be entered by the current thread while it holds the lock in setA, and the thread would deadlock against itself.

Exclusive locks/Shared locks

An exclusive lock means the lock can be held by only one thread at a time.

A shared lock means the lock can be held by multiple threads at the same time.

In Java, ReentrantLock is an exclusive lock. But ReentrantReadWriteLock, which implements the ReadWriteLock interface, has a read lock that is a shared lock and a write lock that is an exclusive lock.

The shared read lock makes concurrent reads very efficient, while read-write, write-read, and write-write access are all mutually exclusive.

Exclusive and shared locks are also implemented on top of AQS, which provides different methods for the exclusive and shared acquisition modes.

synchronized is, of course, an exclusive lock.
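A minimal sketch of the shared/exclusive split in ReentrantReadWriteLock (the class name ReadWriteDemo and the field are made up for illustration): many threads may hold the read lock at once, but the write lock excludes everyone else.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReadWriteDemo {
    private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();
    private int value;

    // Shared read lock: any number of readers may hold it concurrently.
    public int get() {
        rwLock.readLock().lock();
        try {
            return value;
        } finally {
            rwLock.readLock().unlock();
        }
    }

    // Exclusive write lock: blocks all readers and other writers.
    public void set(int v) {
        rwLock.writeLock().lock();
        try {
            value = v;
        } finally {
            rwLock.writeLock().unlock();
        }
    }
}
```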

Mutex/read-write lock

The exclusive/shared locks mentioned above are broad terms; mutexes and read-write locks are concrete implementations of them.

The concrete implementation of a mutex in Java is ReentrantLock.

The concrete implementation of a read-write lock in Java is ReentrantReadWriteLock.

Optimistic lock/pessimistic lock

Optimistic and pessimistic locking do not refer to specific types of lock, but to attitudes toward concurrent synchronization.

A pessimistic lock assumes that concurrent operations on the same data will certainly modify it; even if an operation does not modify the data, it is treated as if it might. Therefore pessimistic locking takes the form of locking for every concurrent operation on the same data. The pessimistic view is that unlocked concurrent operations are bound to cause problems.

An optimistic lock assumes that concurrent operations on the same data will usually not conflict. When updating, it simply attempts the update and, if another thread changed the data in the meantime, retries until the update succeeds. The optimistic view is that unlocked concurrent operations are fine most of the time.

From the descriptions above, pessimistic locking is suitable for write-heavy workloads, while optimistic locking is suitable for read-heavy workloads, where skipping the lock gives a significant performance improvement.

Pessimistic locking in Java is done with the various locks described in this article.

Optimistic locking in Java is lock-free programming, often using the CAS (compare-and-swap) algorithm. A typical example is the atomic classes, which update values atomically by spinning on CAS.
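A minimal sketch of the optimistic pattern with AtomicInteger: read the current value, compute the new one, and retry with compareAndSet until no other thread has modified it in between.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    public static void main(String[] args) {
        AtomicInteger counter = new AtomicInteger(0);

        // Optimistic update loop: the CAS only succeeds if the value
        // is still the one we read, otherwise we re-read and retry.
        int oldValue;
        do {
            oldValue = counter.get();
        } while (!counter.compareAndSet(oldValue, oldValue * 2 + 1));

        System.out.println(counter.get()); // 1
    }
}
```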

Segmented lock

Segmented locking is a lock design, not a specific lock. ConcurrentHashMap achieves efficient concurrent operations by means of segmented locks.

Take ConcurrentHashMap as an example to explain the meaning and design idea of segmented locking. In the JDK 7 implementation, the segment in ConcurrentHashMap is called Segment. Its structure is similar to that of HashMap: it contains an Entry array, and each element of the array is the head of a linked list. At the same time, each Segment extends ReentrantLock, so a segment is itself a lock. (In JDK 8 the Segment design was dropped; ConcurrentHashMap now uses CAS plus synchronized on individual bins.)

When an element is put, instead of locking the whole map, the hash code first determines which segment the element belongs to, and only that segment is locked. So when multiple threads put elements, as long as they do not fall into the same segment, the insertions proceed truly in parallel.

However, when global information about the map is needed, such as its size, all the segment locks must be acquired before counting.

Segmented locking is designed to refine the granularity of the lock: when an operation does not need to update the entire array, it locks only the one part of the array it touches.
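The idea can be sketched with a hypothetical striped counter (the class StripedCounter and its segment count are made up for illustration, not taken from ConcurrentHashMap): each key hashes to a segment with its own lock, so updates to different segments run in parallel, while a global read must visit every segment.

```java
import java.util.concurrent.locks.ReentrantLock;

public class StripedCounter {
    private static final int SEGMENTS = 16;
    private final ReentrantLock[] locks = new ReentrantLock[SEGMENTS];
    private final long[] counts = new long[SEGMENTS];

    public StripedCounter() {
        for (int i = 0; i < SEGMENTS; i++) {
            locks[i] = new ReentrantLock();
        }
    }

    // Only the segment the key hashes to is locked, so increments on
    // different segments proceed in parallel.
    public void increment(Object key) {
        int i = (key.hashCode() & 0x7fffffff) % SEGMENTS;
        locks[i].lock();
        try {
            counts[i]++;
        } finally {
            locks[i].unlock();
        }
    }

    // A global operation (like size()) must take every segment lock,
    // which is why such operations are comparatively expensive.
    public long total() {
        long sum = 0;
        for (int i = 0; i < SEGMENTS; i++) {
            locks[i].lock();
            try {
                sum += counts[i];
            } finally {
                locks[i].unlock();
            }
        }
        return sum;
    }
}
```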

Biased locks/lightweight locks/heavyweight locks

These three terms refer to states of a lock, and apply specifically to synchronized. Java 6 made synchronized more efficient by introducing a lock upgrade mechanism. Which of the three states a lock is in is indicated by fields in the object header of the monitor object.

Biased locking means that when a section of synchronized code is always accessed by a single thread, that thread automatically acquires the lock, reducing the cost of acquiring it.

A lightweight lock means that when a biased lock is accessed by a second thread, the biased lock is upgraded to a lightweight lock; the other thread then tries to acquire it by spinning instead of blocking, which improves performance.

A heavyweight lock means that when a lock is in the lightweight state and another thread is spinning on it, the spin does not continue forever: after a certain number of spins without acquiring the lock, the thread blocks, and the lock inflates into a heavyweight lock. A heavyweight lock makes other waiting threads block, which degrades performance.

Spin locks

In Java, a spin lock means that a thread trying to acquire the lock does not block immediately, but instead tries to acquire the lock in a loop. The advantage is that this avoids the cost of a thread context switch; the disadvantage is that the loop consumes CPU.

A spin lock is implemented by having the current thread run continuously in a loop; it enters the critical section only when the loop condition is changed by another thread. For example:


import java.util.concurrent.atomic.AtomicReference;

public class SpinLock {

 // Holds the thread that currently owns the lock; null means unlocked.
 private final AtomicReference<Thread> sign = new AtomicReference<>();

 public void lock() {
  Thread current = Thread.currentThread();
  // Spin until the CAS from null to the current thread succeeds.
  while (!sign.compareAndSet(null, current)) {
  }
 }

 public void unlock() {
  Thread current = Thread.currentThread();
  // Only the owner can release: CAS from the current thread back to null.
  sign.compareAndSet(current, null);
 }
}

Using the CAS atomic operation, lock sets sign to the current thread, expecting the previous value to be null; unlock sets sign back to null, expecting the previous value to be the current thread.

When a second thread calls lock, sign is not null, so it keeps looping until the first thread calls unlock and sets sign to null; only then can the second thread enter the critical section.

Because a spin lock simply keeps the current thread running through the loop body without changing the thread's state, it responds quickly. But as the number of threads grows, performance degrades sharply, because every spinning thread occupies CPU time. Spin locks are suitable when lock contention is mild and locks are held only briefly.

Note: this example is an unfair lock; the order in which threads obtain the lock is not the order in which they called lock.
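A small usage demo of the SpinLock class above (repeated here as a nested class so the snippet compiles on its own; the names SpinLockDemo and run are made up): two threads each increment a plain counter 10,000 times under the spin lock, so no increments are lost.

```java
import java.util.concurrent.atomic.AtomicReference;

public class SpinLockDemo {

    static class SpinLock {
        private final AtomicReference<Thread> sign = new AtomicReference<>();

        public void lock() {
            Thread current = Thread.currentThread();
            while (!sign.compareAndSet(null, current)) {
            }
        }

        public void unlock() {
            sign.compareAndSet(Thread.currentThread(), null);
        }
    }

    static int run() throws InterruptedException {
        SpinLock lock = new SpinLock();
        int[] counter = {0};

        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                lock.lock();
                try {
                    counter[0]++; // protected by the spin lock
                } finally {
                    lock.unlock();
                }
            }
        };

        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        return counter[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run()); // 20000
    }
}
```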

