Optimizing poorly written PHP code

  • 2020-03-31 21:00:58
  • OfStack

The company has several websites hosted on a virtual host in the United States. Every day, at some point, the MySQL service on the server would suddenly go down, then recover a while later. I suspected the process was being killed automatically for exceeding a CPU usage limit, but traffic on the server was actually very low. So I contacted the hosting provider's customer service to ask whether another user on the shared host was dragging everyone down with them. After looking into it, the support staff swore up and down that it wasn't a problem on their end, and the matter went unresolved. Fortunately, we could access the information_schema database. A look at user_statistics showed that one of our MySQL users was abnormally high on busy_time, cpu_time, and related metrics. So I hurried to inspect the code. This site wasn't built by me, but I knew the architecture had plenty of problems: the pages were a mess, with PHP logic mingled into the HTML (it's at moments like this that you appreciate how wonderful MVC is). Ordinarily it ran well enough to get by, since there was hardly any traffic anyway.

Since the heavy burden is on MySQL, let's start there. I set up a local mirror of the site and edited my.ini to add:

 
[mysqld] 
log="d:/temp/mysql.log" 
log_slow_queries="d:/temp/mysql_slow.log" 
long_query_time=1 


The d:/temp directory must already exist. Restart the MySQL service and logging begins.
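Note that `log` and `log_slow_queries` are the pre-5.1 option names. On MySQL 5.1 and later, a roughly equivalent my.ini section (same illustrative Windows paths as above) would be:

```ini
[mysqld]
general_log=1
general_log_file="d:/temp/mysql.log"
slow_query_log=1
slow_query_log_file="d:/temp/mysql_slow.log"
long_query_time=1
```

On newer servers the same logging can also be toggled at runtime with `SET GLOBAL general_log = 'ON';` and `SET GLOBAL slow_query_log = 'ON';`, without a restart.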

Looking at the SQL log was a shock: the query counts were staggering. A single page was issuing dozens of SQL queries, sometimes more than a thousand!

Take a forum as a comparison: a page there runs about 10 database queries, and caching can bring that lower still. At this site's rate, the database is carrying dozens of times the normal load; no wonder it falls over.
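As an illustration of how caching lowers query counts, here is a sketch of per-request memoization around a name-lookup function (modeled on the id2name function shown later in this article; the FakeDb class and its query_one method are my stand-ins for the site's real DB wrapper, which the article does not show):

```php
<?php
// Sketch of per-request memoization: repeated lookups for the same id
// hit the in-memory array instead of MySQL.
function id2name_cached($db, $aid)
{
    static $cache = [];
    $aid = (int) $aid;               // also blocks SQL injection via the id
    if (!array_key_exists($aid, $cache)) {
        $row = $db->query_one("SELECT ename FROM pz_area_a WHERE aid_a=$aid");
        $cache[$aid] = $row ? $row->ename : null;
    }
    return $cache[$aid];
}

// FakeDb is a hypothetical stand-in that counts how often MySQL is hit.
class FakeDb
{
    public $calls = 0;
    public function query_one($sql)
    {
        $this->calls++;
        return (object) ['ename' => 'North'];
    }
}

$db = new FakeDb();
echo id2name_cached($db, 5), "\n"; // North
echo id2name_cached($db, 5), "\n"; // North -- served from the cache
echo $db->calls, "\n";             // 1
```

This only helps when the same id recurs within a request; as the article shows below, folding the lookup into one joined query is the stronger fix.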

Nobody has the patience to hand-write hundreds of queries, so these had to be queries inside a loop, and the logged SQL statements confirmed it. Once the cause is known, the fix is to find the offending pages and get rid of the looped queries. For example, one page displays every regional category along with the number of articles under each. Leaving aside database-structure optimization, the original program looked roughly like this:
 
$sql1 = "SELECT aid, count(*) as cc FROM pz_content WHERE uid=$uid group by aid"; 
$rs1 = $db->query($sql1); 
if (is_array($rs1)) { 
    foreach ($rs1 as $r1) { 
        // ... output ... 
        echo id2name($r1->aid);  // one extra query per region 
    } 
} 
// ............ 
function id2name($aid) 
{ 
    // note: the original concatenated an undefined $id here; it must be $aid 
    $sql = "select ename from pz_area_a where aid_a=" . $aid; 
    $result = mysql_query($sql); 
    $row = mysql_fetch_object($result); 
    return $row->ename; 
} 

Fault tolerance aside, the implementation is clear: first fetch the user's articles grouped and counted by region ID, then, for each region, query the region name for output. So if there were 10,000 regions, this would run 10,000 extra queries. I added some timing code and measured: roughly 6 MB of memory, 16 seconds of execution time, and 1001 queries in total.
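The article doesn't show its timing code; a minimal sketch of such a harness (variable names are my own) could look like this, wrapped around the page logic under test:

```php
<?php
// Minimal profiling sketch: measure wall time, peak memory, and query count.
// $queryCount would be incremented inside the site's DB wrapper on every query;
// here nothing increments it, so it stays 0.
$t0 = microtime(true);
$queryCount = 0;

// ... page code under test runs here ...
usleep(1000); // stand-in for real work (1 ms)

$elapsedMs = (microtime(true) - $t0) * 1000;
$memMb = memory_get_peak_usage() / 1048576;
printf("time: %.0f ms, peak memory: %.2f MB, queries: %d\n",
       $elapsedMs, $memMb, $queryCount);
```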

In fact, a single SQL statement can do the whole job here; no loop is needed.

 
$sql1 = "SELECT pz_area.aid, pz_area.ename, tb1.cc 
    FROM pz_area 
    RIGHT JOIN (SELECT aid, count(*) as cc FROM pz_content WHERE uid=$uid group by aid) as tb1 
    ON pz_area.aid = tb1.aid"; 
$rs1 = $db->query($sql1); 
if (is_array($rs1)) { 
    foreach ($rs1 as $r1) { 
        // ... output ... 
        echo $r1->ename;  // the name comes from the join -- no per-row query 
    } 
} 

Problem solved. Re-running the page: memory consumption is about the same, there is now exactly 1 query, and CPU time is only 647 milliseconds, nearly 25 times faster than the original! Looking further, the pz_content table holds a fairly large number of records and is often queried and grouped by region, yet it had no index there. Adding an index on aid cut execution time to 432 milliseconds.
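The article doesn't show the index statement; using its table and column names, the DDL would be something like the following (the index name is my own invention):

```sql
-- Hypothetical DDL matching the article's names; idx_aid is made up.
ALTER TABLE pz_content ADD INDEX idx_aid (aid);

-- Verify the grouped subquery now uses the index:
EXPLAIN SELECT aid, count(*) AS cc FROM pz_content WHERE uid=1 GROUP BY aid;
```

Since the query filters on uid and groups by aid, a composite index on (uid, aid) would likely serve it even better than an index on aid alone.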

Okay, that's it for this page; we'll continue with the rest next time.
