MySQL Basic Optimization Tutorial for the Discuz Forum Program

  • 2020-12-07 04:32:56
  • OfStack

After all this time, Discuz forum problems still plague many users, and judging from the issues reported on forums, one critical point stands out: the data table engine was never switched to InnoDB. Discuz performs very badly in even moderately concurrent environments, producing a large number of lock waits; switching the table engine to InnoDB makes things much better. So this time I am writing a primer.

1. Enable the InnoDB engine and configure the relevant parameters


# Comment out skip-innodb so that the InnoDB engine is enabled
#skip-innodb


innodb_additional_mem_pool_size = 16M  # 16M is generally enough; adjust as needed
innodb_buffer_pool_size = 6G           # on a dedicated DB server, typically about 80% of total memory
innodb_data_file_path = ibdata1:1024M:autoextend
innodb_file_io_threads = 4
innodb_thread_concurrency = 20
innodb_flush_log_at_trx_commit = 1
innodb_log_buffer_size = 16M
innodb_log_file_size = 256M
innodb_log_files_in_group = 3
innodb_max_dirty_pages_pct = 50
innodb_lock_wait_timeout = 120
innodb_file_per_table
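
After restarting MySQL, it is worth confirming that InnoDB is actually available before converting any tables; on MySQL versions of this era a quick check is:

mysql> SHOW VARIABLES LIKE 'have_innodb';  -- should report YES once skip-innodb is commented out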

Change the table engine to InnoDB:


mysql> alter table cdb_access engine = innodb;

Do the same for the other tables, changing only the table name.
Switching the storage engine to InnoDB not only avoids a lot of lock waits (InnoDB locks rows rather than whole tables, as MyISAM does), but also improves query efficiency, since InnoDB caches both data and indexes in its buffer pool.
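
If there are many tables, the ALTER statements can also be generated in one pass. A minimal sketch, assuming MySQL 5.0+ (so information_schema is available) and the database name Forum that appears in the EXPLAIN output further below:

SELECT CONCAT('ALTER TABLE ', table_name, ' ENGINE=InnoDB;')
FROM information_schema.tables
WHERE table_schema = 'Forum'  -- assumption: the name of your Discuz database
  AND engine = 'MyISAM';

Feed the statements this prints back into mysql to convert every remaining MyISAM table.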

2. Cache optimization
Add/modify the following options in my.cnf:


# Disable external (file system) locking
skip-locking
# Skip reverse DNS lookups on connections; note the effect on host-based grants
skip-name-resolve
# MyISAM index buffer, sized according to available memory; on a dedicated DB server it can be set up to about 80% of total memory
key_buffer = 512M
# Size of the connection backlog queue
back_log = 200
max_allowed_packet = 2M
# Number of open tables to cache, avoiding the overhead of repeatedly opening tables
table_cache = 512
# Sort buffer allocated per thread
sort_buffer_size = 4M
# Per-thread buffer for sequential table scans
read_buffer_size = 4M
# Buffer used when MyISAM indexes are rebuilt (e.g. REPAIR TABLE, ALTER TABLE)
myisam_sort_buffer_size = 64M
# Number of threads to cache for reuse
thread_cache = 128
# Query result cache
query_cache_size = 128M
# Shorten the idle timeout to avoid long-lived idle connections
set-variable = wait_timeout=60
# Maximum number of concurrently running threads; roughly number of CPUs * 2
thread_concurrency = 4
# Log slow queries, then optimize them one by one
log-slow-queries = slow.log
long_query_time = 1
# Disable unneeded storage engines; remove this line if you actually use BDB
skip-bdb

The parameters above should be adjusted to each server's own configuration and are for reference only.
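
After a restart, the values actually in effect can be double-checked from the client, for example:

mysql> SHOW VARIABLES LIKE 'key_buffer_size';
mysql> SHOW VARIABLES LIKE 'table_cache';
mysql> SHOW STATUS LIKE 'Qcache%';  -- query cache usage statistics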

3. Index optimization
As mentioned above, slow query logging is now enabled, so it is time to optimize the slow queries one by one.

The query SQL for the search is as follows:


SELECT t.* FROM cdb_posts p, cdb_threads t
WHERE t.fid IN ('37', '45', '4', '6', '17', '41', '28', '32', '31', '1', '42')
  AND p.tid = t.tid AND p.author LIKE 'JoansWin'
GROUP BY t.tid ORDER BY lastpost DESC LIMIT 0, 80;

The results of EXPLAIN analysis are as follows:


mysql> EXPLAIN SELECT t.* FROM cdb_posts p, cdb_threads t
    -> WHERE t.fid IN ('37', '45', '4', '6', '17', '41', '28', '32', '31', '1', '42')
    -> AND p.tid = t.tid AND p.author LIKE 'JoansWin'
    -> GROUP BY t.tid ORDER BY lastpost DESC LIMIT 0, 80;

+----+-------------+-------+-------+---------------+-----+---------+-------------+-------+----------------------------------------------+
| id | select_type | table | type  | possible_keys | key | key_len | ref         | rows  | Extra                                        |
+----+-------------+-------+-------+---------------+-----+---------+-------------+-------+----------------------------------------------+
|  1 | SIMPLE      | t     | range | PRIMARY,fid   | fid | 2       | NULL        | 66160 | Using where; Using temporary; Using filesort |
|  1 | SIMPLE      | p     | ref   | tid           | tid | 3       | Forum.t.tid |    10 | Using where                                  |
+----+-------------+-------+-------+---------------+-----+---------+-------------+-------+----------------------------------------------+

Only t.fid and p.tid are used; p.author has no usable index, so roughly 66160 * 10 = 661,600 index entries have to be examined.
Next, look at the indexes on cdb_posts and cdb_threads:


mysql> SHOW INDEX FROM cdb_posts;

+-----------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
| Table     | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
+-----------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
| cdb_posts |          0 | PRIMARY  |            1 | pid         | A         |      680114 |     NULL | NULL   |      | BTREE      |         |
| cdb_posts |          1 | fid      |            1 | fid         | A         |          10 |     NULL | NULL   |      | BTREE      |         |
| cdb_posts |          1 | tid      |            1 | tid         | A         |       68011 |     NULL | NULL   |      | BTREE      |         |
| cdb_posts |          1 | tid      |            2 | dateline    | A         |      680114 |     NULL | NULL   |      | BTREE      |         |
| cdb_posts |          1 | dateline |            1 | dateline    | A         |      680114 |     NULL | NULL   |      | BTREE      |         |
+-----------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+

And likewise for cdb_threads:


mysql> SHOW INDEX FROM cdb_threads;


The cardinality of its fid and enablehot indexes is so small that those indexes hardly seem necessary at all; still, if fid has a large cardinality on your system, you may want to keep that index.
The modifications are as follows:


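Going by the analysis above, the key change is to add a prefix index on p.author, the column that had no index available. A minimal sketch (the index name is an assumption; dropping the low-cardinality fid/enablehot indexes is the optional other half):

mysql> ALTER TABLE cdb_posts ADD INDEX author (author(10));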

The prefix index length of 10 that I set on the p.author field is the result of analysis on this system; the appropriate length differs from system to system, so it is best to start from an average value and then adjust.
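
For reference, a common way to pick a prefix length is to compare the selectivity of a candidate prefix against that of the full column; a sketch:

SELECT COUNT(DISTINCT LEFT(author, 10)) / COUNT(*) AS sel_prefix10,
       COUNT(DISTINCT author) / COUNT(*)           AS sel_full
FROM cdb_posts;

When sel_prefix10 is close to sel_full, a 10-character prefix is discriminating enough.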
Now run the slow query above again: the time drops from 6s to 0.19s, roughly a 30-fold improvement.

