Solution to the read/write lock contention problem of MySQL's MyISAM

  • 2020-05-12 06:20:18
  • OfStack

Since there was no way to add read servers in the short term, we instead adjusted some MySQL configuration options, sacrificing data freshness in exchange for the survival of all the servers. The specific adjustments, and the thinking behind them, are as follows:

MyISAM is efficient when read operations dominate; under heavy read/write concurrency, its efficiency plummets compared with InnoDB. The two engines also store data very differently. In MyISAM, new rows are normally appended to the end of the data file, but after frequent UPDATE and DELETE operations the file is no longer sequential; to put it simply, the data file becomes riddled with holes. When a new row is inserted, the default behavior is to first check whether any hole is large enough to hold it: if so, the new row is stored in the hole; otherwise it is appended to the end of the data file. The point of this is to keep the data file small and reduce file fragmentation.

InnoDB is different: because the primary key is clustered, the data file is always kept sorted by primary key, so if an auto-increment ID is used as the primary key, new rows always land at the end of the data file.
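A quick way to see how many holes a MyISAM table has accumulated is the Data_free column of SHOW TABLE STATUS, which reports the allocated but unused bytes in the data file (the table name below is just an example):

-- Data_free > 0 means UPDATE/DELETE operations have left holes behind
SHOW TABLE STATUS LIKE 'orders';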

With these basics in mind, here are a few configuration options for MyISAM that are easy to overlook:

concurrent_insert:

In general, read and write operations in MyISAM are serialized. However, when queries and inserts target the same table, MyISAM can process them in parallel, depending on the setting of concurrent_insert, in order to reduce the frequency of lock contention:

Concurrent insertion is not allowed when concurrent_insert=0.

When concurrent_insert=1, concurrent insertion is allowed for tables with no holes, and the new data is at the end of the data file (the default).

When concurrent_insert=2, concurrent insertion at the end of the data file is allowed regardless of whether the table has holes or not.

In our case, setting concurrent_insert to 2 is very cost-effective, and the resulting file fragmentation can be cleaned up periodically with the OPTIMIZE TABLE statement.
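As a minimal sketch (the table name is again just an example), concurrent_insert can be changed at runtime, and OPTIMIZE TABLE can then be run on a schedule to reclaim the holes:

-- allow concurrent inserts at the end of the data file even when holes exist
SET GLOBAL concurrent_insert = 2;

-- run periodically (e.g. from a cron job) to defragment the data file
OPTIMIZE TABLE orders;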

max_write_lock_count:

By default, write operations have a higher priority than read operations: even if the read requests arrive first and the write requests arrive later, the writes are processed first. This creates a problem: once a burst of write requests comes in, all read requests are blocked until every write has been processed. At this point, consider max_write_lock_count:

max_write_lock_count=1

With this setting, after the system grants one write lock it pauses further writes and gives pending read operations a chance to execute.
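max_write_lock_count is a dynamic global variable, so it can also be changed without restarting the server:

-- after each granted write lock, let any pending read locks through
SET GLOBAL max_write_lock_count = 1;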

low-priority-updates:

We can also take a more decisive approach: simply lower the priority of write operations and give reads the higher priority.

low-priority-updates=1
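The same switch is exposed as the low_priority_updates system variable, so it can also be set at runtime, either globally or per session; individual INSERT/UPDATE/DELETE statements can opt in with the LOW_PRIORITY keyword as well:

-- default for all new connections
SET GLOBAL low_priority_updates = 1;

-- or only for the current connection
SET SESSION low_priority_updates = 1;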

In general, concurrent_insert=2 is always worth recommending. Whether to use max_write_lock_count=1 or low-priority-updates=1 depends on the situation: if the priority of write operations can be lowered, use low-priority-updates=1; otherwise use max_write_lock_count=1.
