Causes and Solutions of MySQL Database Locks

  • 2020-12-16 06:09:15
  • OfStack

Like an operating system, a database is a shared resource used by many users. When multiple users access data concurrently, several transactions may operate on the same data at the same time. If these concurrent operations are not controlled, incorrect data may be read or written and the integrity of the database may be destroyed. Locking is the key technique databases use to control concurrency. In practice, lock-related problems are common: when two transactions each need locks that the other already holds, neither can proceed and a deadlock occurs, which can severely affect the normal execution of an application.

There are two basic lock types in databases: exclusive locks (Exclusive Locks, or X locks) and shared locks (Shared Locks, or S locks). When a transaction places an exclusive lock on a data object, no other transaction can read or modify that object. A data object holding a shared lock can be read by other transactions, but cannot be modified. Databases use these two basic lock types to control the concurrency of transactions.
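The compatibility rules between the two lock types can be summarized in a small matrix: a shared lock is compatible only with other shared locks, while an exclusive lock is compatible with nothing. A minimal sketch (plain Python, no database involved; the names are illustrative):

```python
# Lock compatibility matrix for shared (S) and exclusive (X) locks.
# An entry is True when a transaction may acquire the requested lock
# while another transaction already holds the existing one.
COMPATIBLE = {
    ("S", "S"): True,   # two readers can coexist
    ("S", "X"): False,  # a writer must wait for readers
    ("X", "S"): False,  # a reader must wait for a writer
    ("X", "X"): False,  # writers exclude each other
}

def can_grant(held: str, requested: str) -> bool:
    """Return True if `requested` can be granted while `held` is held."""
    return COMPATIBLE[(held, requested)]
```

Every deadlock scenario below ultimately reduces to two transactions each waiting on an incompatible entry in this matrix.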

The first case of deadlock

User A accesses table A (locking table A) and then accesses table B. Meanwhile, user B accesses table B (locking table B) and then attempts to access table A. Now user A must wait for user B to release table B before it can continue, while user B must wait for user A to release table A, and a deadlock results.

Solutions:

Deadlocks of this kind are common and are caused by bugs in the program; the only remedy is to adjust the program logic. Examine the logic of every multi-table database operation carefully: process tables in the same order wherever possible, and try not to lock two resources at once. For example, when operating on tables A and B, always process A before B. If two resources must be locked at the same time, make sure they are always locked in the same order, at every point in the code.
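The "same order" rule can be illustrated with ordinary in-process locks (plain Python `threading`, used here only as a stand-in for table locks; the names are illustrative):

```python
import threading

# Two locks standing in for "table A" and "table B".
lock_a = threading.Lock()
lock_b = threading.Lock()

def worker(results, name):
    # Every worker acquires the locks in the SAME fixed order: A, then B.
    # If one worker took B first while another took A first, each could
    # wait forever for the lock the other holds -- a deadlock.
    with lock_a:
        with lock_b:
            results.append(name)

results = []
threads = [threading.Thread(target=worker, args=(results, i)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))  # → [0, 1, 2, 3]
```

Because every thread follows the same A-then-B order, the wait-for graph can never contain a cycle, so all four workers finish.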

The second case of deadlock

User A queries a record and then modifies it. At the same moment, user B also modifies the same record. Inside user A's transaction, the shared lock acquired by the query must be upgraded to an exclusive lock for the modification. But user B also holds a shared lock on the record, so A's upgrade must wait for B to release that shared lock; meanwhile B's own upgrade to an exclusive lock is blocked by A's shared lock. Neither shared lock can be released, and a deadlock occurs. Deadlocks of this kind are more subtle, but they often appear in larger projects. For example, in one project a button on a page was not disabled immediately after being clicked, so a user could click it several times in quick succession; the same piece of code then operated on the same database record several times concurrently, and this kind of deadlock appeared very easily.

Solutions:

1. For buttons and similar controls, disable the control as soon as it is clicked, so the user cannot click it again and trigger concurrent operations on the same record.
2. Use optimistic locking. Optimistic locking is usually based on a version-number (Version) mechanism: a "version" column is added to the database table, the version number is read together with the data, and it is incremented by one when the update is submitted. At that point, the version carried by the submitted data is compared with the current version of the corresponding row in the table; if the submitted version is greater than the current version, the update is applied, otherwise the data is considered stale and the update is rejected. The optimistic locking mechanism avoids the locking overhead of long transactions (neither user A nor user B holds a database lock while working on the data), which greatly improves the overall performance of the system under heavy concurrency. Hibernate has an optimistic-lock implementation built into its data access engine. Note, however, that because the optimistic locking mechanism is implemented inside our own system, updates coming from external systems are not subject to it, so dirty data may still be written to the database.
3. Use pessimistic locking. Pessimistic locking mostly relies on the database's own locking mechanism, such as Oracle's SELECT ... FOR UPDATE statement, to guarantee the maximum exclusivity of an operation. But this comes with considerable overhead for database performance, especially for long transactions, where the cost can be prohibitive. Consider a financial system in which an operator reads a user's data and then modifies it (for example, changes the account balance). With pessimistic locking, the database row stays locked for the entire process: from the moment the operator reads the data, through the modification, until the result is submitted, and even while the operator steps away to make coffee. Facing hundreds or thousands of concurrent users, this would have disastrous consequences. Pessimistic locking must therefore be used with great care.
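The version check behind optimistic locking can be sketched in a few lines. This is plain Python with an in-memory record, shown with the common "version unchanged" variant of the check; in MySQL the same test is typically a single `UPDATE ... WHERE id = ? AND version = ?` statement. All names here are illustrative:

```python
# In-memory stand-in for a database row with a "version" column.
record = {"id": 1, "balance": 100, "version": 1}

def update_balance(row, new_balance, read_version):
    """Apply the update only if nobody changed the row since we read it."""
    if row["version"] != read_version:
        return False  # stale data: another update happened first
    row["balance"] = new_balance
    row["version"] += 1  # bump the version so later stale writes are rejected
    return True

# Users A and B both read the row at version 1.
v_a = record["version"]
v_b = record["version"]

update_balance(record, 150, v_a)   # A wins: balance=150, version=2
update_balance(record, 120, v_b)   # B is rejected as stale
print(record)  # → {'id': 1, 'balance': 150, 'version': 2}
```

Neither user ever holds a database lock; the loser simply re-reads the row and retries, which is why this scheme scales well under heavy concurrency.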

The third case of deadlock

If a transaction executes an update statement whose conditions cannot use an index, a full table scan is performed and the row-level locks are escalated to a table-level lock. When several such transactions run concurrently, blocking and deadlock occur easily. A similar situation arises when a table holds too much data and has too few, or inappropriate, indexes: full table scans result, the application becomes slower and slower, and eventually it blocks or deadlocks.

Solutions:

Do not use overly complex multi-table join queries in SQL statements. Use the execution plan to analyze SQL statements, and for those that perform full table scans, build the appropriate indexes.
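Checking whether a statement scans the whole table is easy to do from code. A minimal sketch using SQLite (which ships with Python) purely for illustration; MySQL's `EXPLAIN` plays the same role, and the table and index names here are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")

def plan(sql):
    # EXPLAIN QUERY PLAN reports whether SQLite will scan or use an index;
    # the last column of each row holds the human-readable detail string.
    rows = conn.execute("EXPLAIN QUERY PLAN " + sql).fetchall()
    return " ".join(r[-1] for r in rows)

query = "SELECT * FROM users WHERE name = 'alice'"
before = plan(query)   # no index on name yet: a full table SCAN

conn.execute("CREATE INDEX idx_users_name ON users (name)")
after = plan(query)    # now an index SEARCH instead of a SCAN

print(before)  # e.g. "SCAN users"
print(after)   # e.g. "SEARCH users USING INDEX idx_users_name (name=?)"
```

The same before/after comparison with MySQL's `EXPLAIN` (watching the `type` column change from `ALL` to `ref`) is a quick way to confirm that a newly added index actually removes a full table scan.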
Summary

In general, memory leaks and table locks are the result of poorly written code, so improving code quality is the most fundamental solution. The idea that it is acceptable to write buggy code first and fix it during testing is wrong. Just as the quality of a product is determined during manufacturing rather than during inspection, the quality of software is determined during design and coding. Testing only verifies the software's quality; it cannot find every bug in the software.