PHP: comparing the efficiency of reading and writing files versus a database

  • 2020-10-07 18:36:31
  • OfStack

This question occurred to me recently: which is faster, reading from a file or reading from a database, and by how much? A quick search turned up no answers, perhaps because the question seems too simple, so I decided to measure it myself. In the interest of time (I don't have VC installed at the moment), I tested with PHP first; when I have time I will supplement this article with C/C++ tests. Since PHP's interpreter is itself implemented in C, the results should be in roughly the same ballpark either way. A small question, but a big harvest. Let's look at the test procedure and the results.

The test procedure is as follows:

Note 1: Because the database read goes through two simple wrapper function calls, the file read is likewise wrapped in two nested function calls to keep the comparison fair. The table contains a single record with log_ID = 1, and there is only one index.
Note 2: One run was tested with about 4 KB of data, and one run with a small integer value.


set_time_limit(0);

// Two-level wrapper so the file read, like the database read,
// goes through two function calls (see Note 1).
function fnGet($filename)
{
    $content = file_get_contents($filename);
    return $content;
}

function fnGetContent($filename)
{
    $content = fnGet($filename);
    return $content;
}

$times = 100000;

echo 'Database query results:<br/>';
//---------------------------------
$begin = fnGetMicroTime();
for ($i = 0; $i < $times; $i++)
{
    $res = $dbcon->mydb_query("SELECT log_Content FROM blog WHERE log_ID='1'");
    $row = $dbcon->mydb_fetch_row($res);
    $content = $row[0];
}
echo 'fetch_row '.$times.' times took: <font color=red>'.(fnGetMicroTime()-$begin).'</font> seconds<br/>';
//---------------------------------
$begin = fnGetMicroTime();
for ($i = 0; $i < $times; $i++)
{
    $res = $dbcon->mydb_query("SELECT log_Content FROM blog WHERE log_ID='1'");
    $row = $dbcon->mydb_fetch_array($res);
    $content = $row['log_Content'];
}
echo 'fetch_array '.$times.' times took: <font color=red>'.(fnGetMicroTime()-$begin).'</font> seconds<br/>';
//---------------------------------
$begin = fnGetMicroTime();
for ($i = 0; $i < $times; $i++)
{
    $res = $dbcon->mydb_query("SELECT log_Content FROM blog WHERE log_ID='1'");
    $row = $dbcon->mydb_fetch_object($res);
    $content = $row->log_Content;
}
echo 'fetch_object '.$times.' times took: <font color=red>'.(fnGetMicroTime()-$begin).'</font> seconds<br/>';
//---------------------------------
$dbcon->mydb_free_results();
$dbcon->mydb_disconnect();

// Dump the queried content to a file so both tests read identical data.
fnWriteCache('test.txt', $content);

echo 'Direct file-read results:<br/>';
//---------------------------------
$begin = fnGetMicroTime();
for ($i = 0; $i < $times; $i++)
{
    $content = fnGetContent('test.txt');
}
echo 'file_get_contents direct read '.$times.' times took: <font color=red>'.(fnGetMicroTime()-$begin).'</font> seconds<br/>';
//---------------------------------
$begin = fnGetMicroTime();
for ($i = 0; $i < $times; $i++)
{
    $fname = 'test.txt';
    if (file_exists($fname))
    {
        $fp = fopen($fname, "r");                  // flock($fp, LOCK_EX);
        $file_data = fread($fp, filesize($fname)); // rewind($fp);
        fclose($fp);
    }
    $content = fnGetContent('test.txt');           // second read, per Note 1
}
echo 'fopen direct read '.$times.' times took: <font color=red>'.(fnGetMicroTime()-$begin).'</font> seconds<br/>';
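The listing above calls two helpers whose definitions the article does not show: `fnGetMicroTime` and `fnWriteCache` (the `$dbcon` object is a site-specific MySQL wrapper and is omitted here). The following is only a plausible sketch of what those helpers might look like, not the author's actual code:

```php
<?php
// Assumed helper: high-resolution timestamp in seconds as a float.
// On PHP 5+ this is equivalent to microtime(true).
function fnGetMicroTime()
{
    list($usec, $sec) = explode(' ', microtime());
    return (float)$usec + (float)$sec;
}

// Assumed helper: write $content to $filename, creating or
// truncating the file, with an exclusive lock against concurrent writers.
function fnWriteCache($filename, $content)
{
    $fp = fopen($filename, 'w');
    if ($fp === false) {
        return false;
    }
    flock($fp, LOCK_EX);
    $bytes = fwrite($fp, $content);
    flock($fp, LOCK_UN);
    fclose($fp);
    return $bytes;
}
```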

Query results for the 4 KB data:
fetch_row 100,000 times: 16.737720012665 seconds
fetch_array 100,000 times: 16.661195993423 seconds
fetch_object 100,000 times: 16.775065898895 seconds
Direct file-read results:
file_get_contents, 100,000 reads: 5.4631857872009 seconds
fopen, 100,000 reads: 11.463611125946 seconds

Query results for the integer (ID) data:
fetch_row 100,000 times: 12.812072038651 seconds
fetch_array 100,000 times: 12.667390108109 seconds
fetch_object 100,000 times: 12.988099098206 seconds
Direct file-read results:
file_get_contents, 100,000 reads: 5.6616430282593 seconds
fopen, 100,000 reads: 11.542816877365 seconds
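Readers who want to reproduce the file-read half of the benchmark without the author's database wrapper can use a self-contained version like the one below (the function name and temp-file handling are my own, not from the article):

```php
<?php
// Time $times repeated reads of a ~4 KB file with file_get_contents
// versus an fopen/fread/fclose sequence, returning both durations.
function benchFileReads($filename, $times)
{
    file_put_contents($filename, str_repeat('x', 4096)); // ~4 KB payload

    $begin = microtime(true);
    for ($i = 0; $i < $times; $i++) {
        $content = file_get_contents($filename);
    }
    $fgcTime = microtime(true) - $begin;

    $begin = microtime(true);
    for ($i = 0; $i < $times; $i++) {
        $fp = fopen($filename, 'r');
        $content = fread($fp, filesize($filename));
        fclose($fp);
    }
    $fopenTime = microtime(true) - $begin;

    return array('file_get_contents' => $fgcTime, 'fopen' => $fopenTime);
}
```

On the author's machine the single-call `file_get_contents` path came out roughly twice as fast as the three-call `fopen` path; absolute numbers will of course vary with hardware and OS caching.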

Test conclusion:

1. Reading a file directly is more efficient than querying the database, and that is without even counting the database's connect and disconnect time.
2. The larger the content read, the more pronounced the advantage of direct file reading (file read time grows only slowly with size, subject to storage contiguity, cluster size, and so on). This ran contrary to my expectation and suggests that MySQL performs extra work when reading larger values (its time grew by nearly 30% between the integer and 4 KB runs); if it were merely assignment and type conversion, the difference should be small.
3. I did little testing of writes, but comparing file writes with INSERT, it is reasonable to infer that the database fares even worse there.
4. Small configuration data that does not need database features is better kept in standalone files; there is no need for a dedicated table or record. Large files such as images and music are likewise more conveniently stored on disk; it is more sensible to put only the path (or thumbnail/index information) in the database.
5. In PHP, if you only need to read a file, file_get_contents is more efficient than the fopen/fread/fclose sequence; excluding the file_exists() check would save roughly another 3 seconds over 100,000 iterations.
6. fetch_row and fetch_object are presumably derived from fetch_array. I have not read the PHP source, but judging purely from these measurements, fetch_array is slightly more efficient, which seems to contradict the claims commonly seen online.
In fact, I had a rough idea from experience before the experiment, and the test still taught me something. Assuming the program and its key steps are fixed and no caching layer is involved, no operation on any type of data is faster than reading and writing a file directly: however MySQL proceeds internally, it ultimately reads its records from "files" on disk (or their equivalent). The premise, of course, is that only the content is fetched, with no sorting or searching involved.
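The practical upshot of conclusions 1 through 4 is a file-backed cache in front of the database: serve content from a flat file when one exists, and fall back to the database (repopulating the file) only on a miss. Below is a minimal sketch of that pattern; it uses PDO rather than the article's custom `mydb_*` wrapper, whose source is not shown, and the function and directory names are illustrative:

```php
<?php
// Return blog content for $logId, preferring the file cache.
function getLogContent(PDO $pdo, $logId, $cacheDir = '/tmp/blogcache')
{
    $cacheFile = $cacheDir . '/log_' . (int)$logId . '.txt';

    // Cache hit: a plain file read, which the benchmarks above
    // measured at roughly 2-3x faster than a MySQL point query.
    if (is_readable($cacheFile)) {
        return file_get_contents($cacheFile);
    }

    // Cache miss: query the database, then repopulate the cache file.
    $stmt = $pdo->prepare('SELECT log_Content FROM blog WHERE log_ID = ?');
    $stmt->execute(array($logId));
    $content = $stmt->fetchColumn();
    if ($content !== false) {
        if (!is_dir($cacheDir)) {
            mkdir($cacheDir, 0775, true);
        }
        file_put_contents($cacheFile, $content, LOCK_EX);
    }
    return $content;
}
```

Remember to unlink the cache file whenever the underlying row is updated, or readers will see stale content.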
