MySQL server slow query analysis and solutions: a summary

  • 2020-05-13 03:36:49
  • OfStack

We often find developers running statements with no index or no LIMIT clause, which can have a severe impact on the database: for example, scanning a large table with tens of millions of rows, or doing filesort over and over, both of which put heavy I/O pressure on the database and the server. That is the situation on the mirror library.

On the online library, besides statements with no index and statements with no LIMIT, there is one more case: too many MySQL connections. With that said, let's take a look at some of our monitoring practices:
1. Deploy an open source distributed monitoring system such as Zabbix to collect the database's I/O, CPU, and connection count every day
2. Deploy weekly performance statistics, covering data growth, iostat, vmstat, and data size
3. Collect the MySQL slow log and list the top 10 slow queries

I thought this monitoring setup was complete, but now that I have deployed per-node MySQL process monitoring, I find it has many shortcomings.
Drawback of the first approach: Zabbix is too heavyweight, and it does not monitor from inside MySQL, so much of the data is not very accurate; nowadays it is generally only used to look up historical data.
Drawback of the second approach: because it only runs once a week, many problems cannot be detected and alerted on in time.
Drawback of the third approach: when there are too many slow-log entries per node, the top 10 becomes meaningless, and much of the time it just lists scheduled-task statements that always run, so its reference value is small.
So how can we detect and solve these problems?

For troubleshooting performance bottlenecks, the easiest problems to find and fix are MySQL's slow queries and queries that do not use indexes.
OK, let's start looking for the SQL statements in MySQL that don't execute "comfortably".
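Besides the slow-query threshold described below, MySQL can also log statements that use no index at all, regardless of how fast they run. A minimal sketch, assuming a server version that supports this variable and an account with the privilege to set global variables:

```sql
-- Log statements that use no index, even if they finish quickly
SET GLOBAL log_queries_not_using_indexes = ON;

-- Verify the setting took effect
SHOW VARIABLES LIKE 'log_queries_not_using_indexes';
```

These entries land in the same slow log file, so the analysis methods below apply to them as well.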
=========================================================
Method 1: This is the method I use, ha ha — I prefer its immediacy.
 
MySQL 5.0 and later can log SQL statements that execute too slowly.
mysql> show variables like 'long%';  Note: long_query_time defines how many seconds a statement must exceed to count as a "slow query".
+-----------------+-----------+
| Variable_name   | Value     |
+-----------------+-----------+
| long_query_time | 10.000000 |
+-----------------+-----------+
1 row in set (0.00 sec)
mysql> set long_query_time=1;  Note: I set it to 1, i.e. any statement that takes more than 1 second counts as a slow query.
Query OK, 0 rows affected (0.00 sec) 
mysql> show variables like 'slow%'; 
+---------------------+---------------+
| Variable_name       | Value         |
+---------------------+---------------+
| slow_launch_time    | 2             |
| slow_query_log      | ON            |  Note: whether slow query logging is enabled
| slow_query_log_file | /tmp/slow.log |  Note: where the slow log is written
+---------------------+---------------+
3 rows in set (0.00 sec)
mysql> set global slow_query_log='ON';  Note: enable slow query logging
Once the slow_query_log variable is set to ON, MySQL starts logging immediately.
In /etc/my.cnf you can set the initial values of the global variables above:
long_query_time=1
slow_query_log_file=/tmp/slow.log
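To confirm the slow log is actually capturing statements, you can watch the Slow_queries status counter. A quick check, assuming the 1-second threshold set above:

```sql
SHOW GLOBAL STATUS LIKE 'Slow_queries';  -- note the current count
SELECT SLEEP(2);                         -- deliberately exceed long_query_time
SHOW GLOBAL STATUS LIKE 'Slow_queries';  -- the counter should have increased
```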

Method 2: the mysqldumpslow command
 
/path/mysqldumpslow -s c -t 10 /tmp/slow-log 
This outputs the 10 SQL statements that appear most often in the log, where:
-s specifies the sort order: c, t, l, r sort by count, query time, lock time, and rows returned respectively; the a-prefixed forms (at, al, ar) sort by the corresponding averages;
-t means top n, i.e. how many entries to return;
-g takes a regular-expression pattern to match against, case-insensitive.
For example:
/path/mysqldumpslow -s r -t 10 /tmp/slow-log
returns the 10 queries that returned the largest row sets.
/path/mysqldumpslow -s t -t 10 -g "left join" /tmp/slow-log
returns the top 10 queries by time that contain a left join.
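mysqldumpslow aggregates per statement, but sometimes a quick overall picture is enough. A small sketch using only grep and awk on the raw slow log (the file path is the one configured above; the `# Query_time:` header line is the standard MySQL slow log format):

```shell
# Summarize the slow log: how many slow queries, and their average duration.
# $1 is the slow log file, e.g. /tmp/slow.log
slowlog_summary() {
  grep '^# Query_time:' "$1" |
    awk '{ sum += $3; n++ } END { printf "%d queries, avg %.2fs\n", n, sum / n }'
}
```

Usage: `slowlog_summary /tmp/slow.log`. This is a rough aggregate, not a per-statement ranking; use mysqldumpslow for the latter.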

Finally, a summary of the benefits of node monitoring:
1. Lightweight monitoring that works in real time and can be customized and modified to fit the actual situation
2. A filter can be set up to screen out statements that are scheduled to run regularly
3. Handling slow statements can be time-consuming, but it is worth it to avoid database failures
4. When there are too many connections to the database, the program automatically saves the current database's processlist, a sharp tool when the DBA is hunting for causes
5. When analyzing with mysqlbinlog, you can clearly identify the period during which the database state was abnormal
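The processlist-saving idea above can be sketched as a small cron script. Everything here is an assumption for illustration — the threshold, file paths, and the plain `mysql -e` calls would need to match your own environment:

```shell
#!/bin/sh
# Sketch: save SHOW FULL PROCESSLIST when connections exceed a threshold.
MAX_CONN=500   # assumed threshold; tune to your server's max_connections

# Decide whether the current connection count warrants a dump.
# $1 = current Threads_connected value
check_conn() {
  if [ "$1" -gt "$MAX_CONN" ]; then
    echo "dump"
  else
    echo "ok"
  fi
}

# In real use (requires a configured mysql client):
#   now=$(mysql -N -e "SHOW GLOBAL STATUS LIKE 'Threads_connected'" | awk '{print $2}')
#   if [ "$(check_conn "$now")" = "dump" ]; then
#     mysql -e "SHOW FULL PROCESSLIST" > /tmp/processlist.$(date +%s)
#   fi
```

The saved processlist file gives the DBA a snapshot of exactly what was running when the connection count spiked.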
Some people will say: why not just use MySQL's profiling settings?

Some other status variables turned up while tuning tmp_table_size:
Qcache_queries_in_cache: number of queries registered in the cache
Qcache_inserts: number of queries added to the cache
Qcache_hits: number of cache hits
Qcache_lowmem_prunes: number of queries removed from the cache due to lack of memory
Qcache_not_cached: number of queries not cached (not cacheable, or not cached because of query_cache_type)
Qcache_free_memory: total amount of free memory in the query cache
Qcache_free_blocks: number of free memory blocks in the query cache
Qcache_total_blocks: total number of blocks in the query cache
The query cache keeps common queries in memory; if a frequently used SQL statement is cached, it is served from memory, which speeds up database access.
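A common sanity check on these counters is the query cache hit rate, usually computed as Qcache_hits / (Qcache_hits + Com_select). A small sketch; the two counter values would come from SHOW GLOBAL STATUS:

```shell
# Query cache hit rate from two counters.
# $1 = Qcache_hits, $2 = Com_select (both should be non-negative; avoid 0/0)
qcache_hit_rate() {
  awk -v h="$1" -v s="$2" 'BEGIN { printf "%.1f%%\n", 100 * h / (h + s) }'
}
```

A low hit rate combined with a high Qcache_lowmem_prunes count suggests the cache is too small or the workload is not cache-friendly.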
