MySQL LIMIT Performance Optimization and Paging Performance Optimization Explained

  • 2021-11-24 03:09:13
  • OfStack

MySQL's LIMIT clause lets you query database data in segments, and it is mainly used for paging. Even though the sites we build today may hold only thousands of rows, where a small optimization plays a small role, development should still be pushed to the extreme in pursuit of the best possible performance. Below are several LIMIT performance optimization techniques.

LIMIT syntax:


SELECT * FROM table LIMIT [offset,] rows | rows OFFSET offset

The LIMIT clause can be used to force a SELECT statement to return a specified number of records. LIMIT accepts one or two numeric arguments, which must be non-negative integer constants.

When two arguments are given, the first specifies the offset of the first row to return, and the second specifies the maximum number of rows to return. The offset of the initial row is 0 (not 1).

Examples of the LIMIT offset, rows syntax:


mysql> SELECT * FROM table LIMIT 5,10; -- Retrieve rows 6-15
-- To retrieve all rows from a given offset to the end of the result set,
-- use some very large number as the second parameter:
mysql> SELECT * FROM table LIMIT 95,18446744073709551615; -- Retrieve rows 96 through the last
-- If only one argument is given, it specifies the maximum number of rows to return;
-- in other words, LIMIT n is equivalent to LIMIT 0,n:
mysql> SELECT * FROM table LIMIT 5; -- Retrieve the first 5 rows
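
MySQL also accepts the rows OFFSET offset form shown in the syntax above, and the two spellings are equivalent. As a small sketch (backticks are added here because table, the placeholder name used throughout these examples, is a reserved word):


SELECT * FROM `table` LIMIT 10 OFFSET 5; -- same result as LIMIT 5,10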

LIMIT n, m means selecting m records starting from offset n. Most developers like to use this kind of statement to solve the classic Web paging problem. For small data sets it is not a big problem; but for forums and other applications that may hold very large data sets, LIMIT n, m is very inefficient, because the rows must be located from the start every time. Selecting the first five records is quick and easy; selecting 5 records starting at row 800,000 of a million-record table, however, requires scanning all the way to that position.

That is to say, LIMIT 10000, 20 scans the first 10,020 rows that satisfy the condition, throws away the first 10,000 and returns the last 20; that is the problem. LIMIT 100000, 100 has to scan 100,100 rows. In a highly concurrent application, where every query scans more than 100,000 rows, performance is bound to suffer.
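
To observe this scan-and-discard cost yourself, a quick check with the session handler counters works; this is a sketch using the placeholder table name from the examples above:


FLUSH STATUS; -- reset the session status counters
SELECT * FROM `table` LIMIT 10000, 20;
-- Handler_read_rnd_next will be roughly 10,020:
-- every discarded row was still read from the table.
SHOW SESSION STATUS LIKE 'Handler_read%';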

A comparison of read efficiency at different data volumes:

1. When the offset is relatively small:


SELECT * FROM table LIMIT 5,10;

Run multiple times, the time holds steady between 0.0004 and 0.0005 seconds.


SELECT * FROM table WHERE id >= (
    SELECT id FROM table ORDER BY id LIMIT 10,1
) LIMIT 10;

Run multiple times, the time holds steady between 0.0005 and 0.0006 seconds. So when the offset is small, using LIMIT directly is more efficient!

2. When the offset is relatively large:


SELECT * FROM table LIMIT 10000,10;

Run multiple times, the time holds steady at about 0.0187 seconds.


SELECT * FROM table WHERE id >= (
    SELECT id FROM table ORDER BY id LIMIT 10000,1
) LIMIT 10;

Run multiple times, the time holds steady at about 0.0061 seconds, roughly 1/3 of the former. So when the offset is large, the latter is clearly more efficient! This is the result of id being used as an index.

If id is used as the primary key of the data table:


SELECT id FROM table LIMIT 10000,10;

This query takes roughly 0.04 seconds, because selecting only id lets the query run against the id primary key index.

Limit Performance Optimization:


SELECT * FROM cyclopedia WHERE ID >= (
    SELECT MAX(ID) FROM (
        SELECT ID FROM cyclopedia ORDER BY ID LIMIT 90001
    ) AS tmp
) LIMIT 100;

SELECT * FROM cyclopedia WHERE ID >= (
    SELECT MAX(ID) FROM (
        SELECT ID FROM cyclopedia ORDER BY ID LIMIT 90000,1
    ) AS tmp
) LIMIT 100;

Both statements fetch the 100 records that follow the first 90,000, and the second one is faster. The first takes the first 90,001 records and uses the largest ID among them as the starting marker, which is then used to quickly locate the next 100 rows; the second fetches only the single record at offset 90,000 and uses its ID as the starting marker to locate the same 100 records. The second statement can be further abbreviated as:


SELECT * FROM cyclopedia WHERE ID >= (
    SELECT ID FROM (
        SELECT ID FROM cyclopedia ORDER BY ID LIMIT 90000,1
    ) AS tmp
) LIMIT 100;

The MAX() operation is omitted, which is possible because the IDs are monotonically increasing.
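
The same trick is often written as a "deferred join" instead; the following is an equivalent sketch of that standard variant, which pages through the slim ID index and joins back only for the final 100 rows:


SELECT c.* FROM cyclopedia AS c
JOIN (
    SELECT ID FROM cyclopedia ORDER BY ID LIMIT 90000,100
) AS tmp ON c.ID = tmp.ID;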

Paging data performance optimization:

1. For tables with a large volume of data, you can build a separate index table containing just the primary key and the indexed fields; query the index table for the matching primary keys first, then query the large table by primary key.

2. If you have a WHERE condition and also want LIMIT to use an index, you must design a composite index that puts the WHERE column first and the primary key used by LIMIT second, and SELECT only the primary key! This improves read speed (see the sketch after this list).

3. Using IN: first obtain the matching primary key values through the WHERE condition, then use those primary key values to query the needed field values (also shown in the sketch below).
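
A minimal sketch of points 2 and 3, assuming a hypothetical posts table with columns (id, category_id, title, body):


-- Point 2: a composite index with the WHERE column first and the
-- primary key second, so the paged lookup is satisfied from the index:
CREATE INDEX idx_category_id ON posts (category_id, id);

-- Select only the primary key while paging:
SELECT id FROM posts WHERE category_id = 7 ORDER BY id LIMIT 10000,10;

-- Point 3: feed the returned primary keys back through IN
-- to fetch the full rows:
SELECT id, title, body FROM posts WHERE id IN (10011, 10012, 10013);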

Paging with a cursor:

To get the best query performance out of MySQL, I changed the paged query to cursor-style querying:


SELECT * FROM table WHERE id > last_id ORDER BY id ASC LIMIT 20;

Here last_id is the id of the last record on the current page; this implements the "next page" query, and a "previous page" query can be implemented in the same way, as sketched below.
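
A sketch of that matching "previous page" query, assuming first_id is the id of the first record on the current page:


SELECT * FROM `table` WHERE id < first_id ORDER BY id DESC LIMIT 20;
-- Reverse these 20 rows in application code to display them in ascending order.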

Cursor paging is only suitable for sequentially ordered data and does not support jumping to an arbitrary page. To use it, give the data table an auto-increment ID or another ordered field. For projects with large data volumes, jumping to a specific page is of little use anyway, so filter conditions can serve to locate the data instead, as in the sketch below.
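
As a sketch of that last point, a filter condition combines naturally with cursor paging; status is a hypothetical column here, and a composite index on (status, id) would keep the query fast:


SELECT * FROM `table`
WHERE status = 'published' AND id > last_id
ORDER BY id ASC
LIMIT 20;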

Summary

When the offset is small, a plain LIMIT is the fastest option. When the offset grows large, first locate the starting primary key through an index-only subquery, or switch to cursor-style paging, so that MySQL no longer has to scan and throw away tens of thousands of rows on every request.

