How a single web server can maximize website performance

  • 2020-05-10 23:13:56
  • OfStack

I think the first step is to select a suitable environment. For most PHP sites, running on LNMP (Linux + Nginx + MySQL + PHP) is an ideal choice.

First, I won't dwell on the advantages of Linux over Windows.

Second, Nginx's advantages can be summarized as load balancing, high concurrency, and excellent performance.

Here, PHP connects to Nginx via FastCGI, and PHP-FPM, which has shipped with PHP since 5.3, already works very well. Compared with Apache's PHP module, it handles PHP requests directly, and the number of worker processes can be tuned to optimize concurrency.
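As an illustration, process tuning happens in the PHP-FPM pool configuration. The directive names below are real PHP-FPM settings, but the numbers are placeholder values, not recommendations from this article; they should be sized to the server's RAM and per-process memory usage.

```ini
; Sketch of a PHP-FPM pool config (e.g. /etc/php-fpm.d/www.conf).
; Values are illustrative only.
pm = dynamic              ; spawn workers on demand
pm.max_children = 50      ; hard cap on concurrent PHP workers
pm.start_servers = 10     ; workers started at boot
pm.min_spare_servers = 5  ; keep at least this many idle
pm.max_spare_servers = 20 ; kill idle workers above this
pm.max_requests = 500     ; recycle a worker after N requests (limits leaks)
```

Raising `pm.max_children` increases concurrency until memory runs out; the usual rule of thumb is available RAM divided by the memory footprint of one PHP process.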

That covers the basic environment. I have an old quad-core Q6600 Linux server with 4 GB of RAM; it hosted a few dozen sites and held up even at a peak of 90 Mbps of bandwidth.

Let's talk about program optimization.

We know that static pages are faster than dynamic ones, especially under Nginx, which serves static files directly from disk. Dynamic pages go through FastCGI, which is also fast, but still a step behind Nginx serving static content itself.
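The split described above looks roughly like this in an Nginx server block. The directives are standard Nginx; the paths and the PHP-FPM socket address are assumptions for the sketch, not values from the article.

```nginx
server {
    listen 80;
    root  /var/www/example;
    index index.php index.html;

    # Static assets: served straight from disk by Nginx,
    # with long client-side caching.
    location ~* \.(css|js|png|jpg|gif|ico)$ {
        expires 30d;
        access_log off;
    }

    # Dynamic requests: forwarded to PHP-FPM over FastCGI.
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/run/php-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
```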

Two useful tools come in here: Nginx's built-in caching features, proxy_cache and fastcgi_cache, plus the third-party Nginx module ngx_cache_purge, which purges the cache entry for a specified URL.

On a single server we mainly use fastcgi_cache. It caches the output of a PHP script to disk (and memory), keyed by URL, with a configurable expiration time. On subsequent requests, Nginx serves the cached file directly, which is effectively a static page and therefore very efficient.
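A minimal fastcgi_cache setup might look like the sketch below. The directives are real Nginx directives (and `fastcgi_cache_purge` comes from the ngx_cache_purge module, which must be compiled in); the paths, zone name, sizes, and times are illustrative assumptions.

```nginx
# In the http{} block: where cache files live, plus a shared-memory
# key zone ("phpcache") for cache metadata.
fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=phpcache:64m
                   inactive=30m max_size=1g;

server {
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/run/php-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;

        fastcgi_cache       phpcache;                 # use the zone above
        fastcgi_cache_key   $scheme$host$request_uri; # one entry per URL
        fastcgi_cache_valid 200 10m;                  # cache 200s for 10 min
    }

    # With ngx_cache_purge compiled in, this endpoint evicts a single
    # URL from the cache, e.g. GET /purge/some/page.php
    location ~ /purge(/.*) {
        fastcgi_cache_purge phpcache $scheme$host$1;
    }
}
```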

For some pages, we don't need to cache the entire output, only small pieces of data: arrays, access records, and other short-lived content. Traditional PHP uses plain file caching, like the data/cache directory in DedeCMS, which stores many cache files mainly to avoid frequent database queries. In ordinary cases a file cache is enough, but if you are chasing performance, especially under high concurrency, it is worth trying memcached. It stores strings as key-value pairs in server memory with a specified expiration time; subsequent reads come straight from memory with no disk I/O, which puts it on a different level of speed, though the principle is the same as a file cache.
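Memcached itself requires a running daemon, but the key-value-with-expiry principle the paragraph describes can be sketched in a few lines of Python. This toy in-process cache stands in for memcached to show the idea; it is not memcached's actual API or a substitute for it.

```python
import time

class TTLCache:
    """Toy in-memory key-value cache with per-key expiry.

    Illustrates the memcached principle (values live in memory and
    expire after a TTL); this is NOT the memcached client API.
    """

    def __init__(self):
        self._store = {}  # key -> (value, expires_at)

    def set(self, key, value, ttl):
        # Store the value alongside its absolute expiry timestamp.
        self._store[key] = (value, time.time() + ttl)

    def get(self, key):
        item = self._store.get(key)
        if item is None:
            return None
        value, expires_at = item
        if time.time() > expires_at:  # expired: behave like a cache miss
            del self._store[key]
            return None
        return value

cache = TTLCache()
cache.set("recent_visitors", ["alice", "bob"], ttl=60)
print(cache.get("recent_visitors"))  # served from memory, no disk I/O
```

A real site would put the result of an expensive database query under a key like this and only hit the database again after the TTL lapses.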

That's enough for now. The main idea is a single one: caching, just implemented in different ways. Choosing the appropriate caching strategy is what we developers need to think about.
