Analysis of MySQL sharding query usage

  • 2020-12-20 03:49:45
  • OfStack

This article illustrates how sharded (split-up) queries work in MySQL, shared here for your reference. The details are as follows:

For large queries, it is sometimes necessary to 'divide and conquer': split one large query into many small ones. Each small query is functionally identical to the original, but it completes only a small part of the work and returns only a small part of the result set.

Deleting old data is a good example. When cleaning up old data on a regular schedule, a single SQL statement that touches a large amount of data may lock many rows (or entire tables) at once, consuming a lot of system resources while blocking many other small but important queries. By splitting a large DELETE statement into smaller queries, you minimize the impact on MySQL's performance and reduce the replication lag it can cause.

For example, consider a statement that runs once a month to purge messages older than three months:

mysql> DELETE FROM messages WHERE dt<DATE_SUB(NOW(), INTERVAL 3 MONTH);

You can split it up like this instead:


rows_affected = 0
do {
    rows_affected = do_query("DELETE FROM messages WHERE dt < DATE_SUB(NOW(), INTERVAL 3 MONTH) LIMIT 10000")
} while rows_affected > 0

Deleting 10,000 rows at a time is large enough to be efficient yet small enough to keep each statement short, making it far less disruptive to the server. In addition, if you pause briefly between deletes, the load that would otherwise hit the server all at once is spread over a longer period, which also shortens the time locks are held while rows are being deleted.
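As a concrete illustration, here is a minimal sketch of the loop above in Python. It assumes the MySQL Connector/Python driver (mysql.connector), the messages table and dt column from the example, and placeholder connection settings; the batch size and pause length are illustrative values you would tune for your own workload.

import time

import mysql.connector  # MySQL Connector/Python driver (assumed available)

BATCH_SIZE = 10000      # rows deleted per statement, as in the example above
PAUSE_SECONDS = 1       # brief pause between batches to spread out the load

# Placeholder connection settings -- replace with your own.
conn = mysql.connector.connect(
    host="localhost", user="app", password="secret", database="app_db"
)
cursor = conn.cursor()

try:
    while True:
        # Each statement deletes at most BATCH_SIZE old rows.
        cursor.execute(
            "DELETE FROM messages "
            "WHERE dt < DATE_SUB(NOW(), INTERVAL 3 MONTH) "
            "LIMIT %s",
            (BATCH_SIZE,),
        )
        conn.commit()                 # commit each batch so locks are released quickly
        if cursor.rowcount == 0:      # nothing left to delete -> done
            break
        time.sleep(PAUSE_SECONDS)     # give other queries room to run
finally:
    cursor.close()
    conn.close()

Committing after every batch keeps each transaction small, so locks are released quickly and replication can keep up between batches.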


I hope this article has been helpful for your work with MySQL databases.

