MySQL insert-check scheme in a concurrent environment

  • 2020-12-09 01:04:34
  • OfStack

Business Background:
The basic business scenario: a request carrying vehicle VIN information comes into the interface, which must then check that VIN's status in the database. If the VIN is not in the table, or its status bit is "1" (completed), the interface performs some checks, inserts a new row with the status set to "0", and calls a manual-processing interface that returns a result in roughly ten minutes, at which point the status is set to "1". If the status bit is "0" (processing), the request is rejected and a prompt message is returned.
In a single-threaded environment this business logic is fine, but when the interface is accessed concurrently, two identical requests A and B can arrive at the same time. Normally A should be inserted and B rejected, but under concurrency B runs its status check before A has been inserted, so both A and B end up in the database and the data is wrong.
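
To make the race concrete, here is a minimal sketch of the naive check-then-insert flow; the VinDao interface and its method names are illustrative assumptions, not code from the original system.


interface VinDao {
  String findLatestStatus(String vin); // returns the latest status for this vin, or null if absent
  void insertVin(String vin, String status);
}

class VinHandler {
  private final VinDao vinDao;

  VinHandler(VinDao vinDao) { this.vinDao = vinDao; }

  // The gap between the status check and the insert is the race window:
  // two identical requests can both pass the check before either one has inserted.
  String handle(String vin) {
    String status = vinDao.findLatestStatus(vin);
    if (status == null || "1".equals(status)) { // absent, or already completed
      // ... pre-insert checks ...
      vinDao.insertVin(vin, "0"); // new record starts in "processing"
      return "accepted";          // manual processing flips the status to "1" later
    }
    return "vin is still being processed"; // status "0": reject with a prompt
  }
}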

Solution 1:
The first thought is to handle it at the SQL level: add a unique index on the corresponding column so the database itself guarantees uniqueness, and if duplicate data is inserted, catch the resulting exception and return a prompt.


ALTER TABLE tableName ADD UNIQUE [indexName] (tableColumns(length))
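
With such an index in place, a duplicate insert surfaces as a driver exception. A minimal JDBC sketch, assuming an open java.sql.Connection named con and an illustrative vin_check table that is not from the original article:


String sql = "INSERT INTO vin_check (vin, status) VALUES (?, '0')"; // illustrative table and columns
try (PreparedStatement ps = con.prepareStatement(sql)) {
  ps.setString(1, vin);
  ps.executeUpdate();
} catch (SQLIntegrityConstraintViolationException e) {
  // the unique index rejected the duplicate row: report that the vin is already being processed
}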

However, business rules allow the same VIN to appear in the table more than once (queries with duplicates simply take the most recent record), so a unique index cannot be added on the vin column.

Solution 2:
Use a MySQL transaction: treat the existence check and the insert as a single transaction, and skip the insert when the check fails. After searching around online, the general idea looks like this:


public static void StartTransaction(Connection con, String[] sqls) throws Exception {
  Statement sm = null;
  try {
    // Start the transaction: disable auto-commit, so nothing done on this
    // connection is written to the database until commit() is called
    con.setAutoCommit(false);
    sm = con.createStatement(); // create the Statement object

    // Execute the incoming SQL statements one by one
    for (int i = 0; i < sqls.length; i++) {
      sm.execute(sqls[i]);
    }
    con.commit(); // commit to the database; the transaction ends here

  // Catch any exception thrown while executing the statement group
  } catch (SQLException e) {
    try {
      System.out.println("Transaction failed, rolling back!\n");
      con.rollback(); // if any statement failed, undo all of the previous operations
    } catch (SQLException e1) {
      e1.printStackTrace();
    }
  } finally {
    if (sm != null) {
      sm.close();
    }
  }
}
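
A call site might look like the following; the JDBC URL, credentials, table, and values are placeholders for illustration only:


Connection con = DriverManager.getConnection(
    "jdbc:mysql://localhost:3306/testdb", "user", "password"); // placeholder connection details
String[] sqls = {
    "INSERT INTO vin_check (vin, status) VALUES ('VIN123', '0')", // illustrative statements
    "UPDATE vin_check SET status = '0' WHERE vin = 'VIN123'"
};
StartTransaction(con, sqls);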

But this doesn't really solve the concurrency problem. It only turns the two operations into one atomic group of SQL statements, which is useful when two inter-dependent rows must be inserted together, but it does not stop two concurrent requests from both passing the check, so it does not fit this requirement.

Since the problem could not be solved at the SQL level, the next step was to approach it from the Java concurrency side.
Solution 3:
When tackling concurrency in Java, the first things that come to mind are the built-in lock and the reentrant lock. The basic usage is as follows:
· Built-in lock:
Because the request is handled in a Servlet, the business code is wrapped directly in synchronized(this), so that under concurrency only one thread at a time can run it:


synchronized(this){
  // todo1: check whether the vin exists
  // todo2: if it does not exist, insert the vin
}

· Reentrant lock:
A reentrant lock is essentially a more flexible version of the built-in lock; here it is used in exactly the same way as the built-in lock:


import java.util.concurrent.locks.ReentrantLock;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class DashengCallBack extends HttpServlet {
  private static ReentrantLock lock = new ReentrantLock();

  protected void doGet(HttpServletRequest request, HttpServletResponse response) {
    lock.lock(); // only one thread at a time may run the business code
    try {
      // todo1: check whether the vin exists
      // todo2: if it does not exist, insert the vin
    } finally {
      lock.unlock(); // always release the lock, even if the business code throws
    }
  }
}

Testing showed that this scheme works. The reason it was not adopted in the end is that the lock is taken directly around the whole business block, so too much code runs while holding the lock and efficiency suffers.

Solution 4:
Maintain a lookup Map: put the vin into it before the insert and remove it after the insert completes. The code is as follows:


// the map must be shared across requests (e.g. a static field), not created per request
private static ConcurrentHashMap<String, String> vinMap = new ConcurrentHashMap<String, String>();

if (vinMap.containsKey(vin)) {
  // todo1: when the request for this vin completes, it is removed from the map (see the remove below)
  // todo2: return a "query in progress" message
  return;
}
vinMap.put(vin, "");
// todo3: insert the vin into the database
vinMap.remove(vin);

This scheme basically meets the business requirements. The only remaining issue is that updates to the interface must be staggered away from business hours: redeploying the service empties the in-memory vinMap, which can let duplicate requests through and leave bad data in the database.
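
One detail worth tightening: containsKey followed by put is itself a check-then-act sequence, so two threads could still slip through in a very narrow window. ConcurrentHashMap's putIfAbsent makes that step atomic. Below is a minimal sketch of that variant, with a hypothetical insertVin() placeholder standing in for the real database insert and a finally block so the vin is always released:


import java.util.concurrent.ConcurrentHashMap;

public class VinGuard {
  private static final ConcurrentHashMap<String, String> vinMap =
      new ConcurrentHashMap<String, String>();

  public static String handle(String vin) {
    // putIfAbsent is atomic: only the first thread for a given vin gets null back
    if (vinMap.putIfAbsent(vin, "") != null) {
      return "query in progress";
    }
    try {
      insertVin(vin); // hypothetical helper: insert the vin with status "0"
      return "accepted";
    } finally {
      vinMap.remove(vin); // always release the vin, even if the insert throws
    }
  }

  private static void insertVin(String vin) {
    // placeholder for the real database insert
  }
}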

That is the entire content of this article; I hope it helps you in your study.

