Choosing a better cache server among varnish, squid, apache and nginx

  • 2020-05-09 19:47:45
  • OfStack

1. Differences between varnish, squid, apache and nginx

1. In terms of functionality, varnish and squid are dedicated cache servers, while apache and nginx implement caching through third-party modules.

2. If we want to run a dedicated cache service, we should choose purpose-built software, so squid and varnish are the first choices.

Technically, varnish has the edge over squid: it adopts the "Visual Page Cache" approach, keeping cached pages in virtual memory, so it makes better use of memory than Squid. Its drawback is that varnish does not cache to the local hard disk.

Varnish also provides a powerful management port through which you can use regular expressions to quickly purge parts of the cache in bulk.
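For example, with Varnish 3 or later the management CLI (reachable through varnishadm) can ban cached objects by regular expression; the pattern below is only an illustration.

# Invalidate every cached object whose URL ends in .gif (Varnish 3+ "ban"
# command; older releases used a purge-style command instead)
varnishadm ban req.url '~' '\.gif$'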

The advantage of squid lies in its complete and extensive caching documentation and the large number of production deployments (probably because squid appeared much earlier).

3. As for nginx, it does caching through the third-party module ncache, and its caching performance is basically on par with varnish; in most architectures, however, nginx is used as a reverse proxy (nginx now serves static files in a great many deployments and can handle 20,000+ concurrent connections). In a static-content architecture, if the front end faces a CDN directly or already has layer-4 load balancing in front of it, nginx's cache is sufficient.
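As a point of reference, here is a sketch of nginx acting as a caching reverse proxy using the built-in proxy_cache module rather than ncache; the paths, zone name, sizes and backend address are assumptions for illustration only.

# relevant portion of nginx.conf
http {
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=static_cache:64m max_size=1g inactive=60m;

    server {
        listen 10000;

        location / {
            # apache backend assumed on port 80
            proxy_pass http://127.0.0.1:80;
            proxy_cache static_cache;
            # cache successful responses for 10 minutes
            proxy_cache_valid 200 301 302 10m;
        }
    }
}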

4. In my view, if the goal is simply to improve the performance of the apache service itself, doing some local caching is perfectly fine; but using apache as the cache layer of a system architecture is a mismatch.
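For reference, such local caching in Apache might look like the sketch below, assuming Apache 2.4 with mod_cache and mod_cache_disk; the module paths, cache directory and size limit are illustrative only.

LoadModule cache_module modules/mod_cache.so
LoadModule cache_disk_module modules/mod_cache_disk.so

# cache everything under / on disk
CacheEnable disk "/"
CacheRoot "/var/cache/apache"
CacheDirLevels 2
CacheDirLength 1
# largest response body to cache, in bytes
CacheMaxFileSize 1000000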

2. Test varnish, squid, apache, nginx

1. Test environment

1. The hardware is a Pentium dual-core machine bought three years ago; the operating system is Arch Linux.

2. When testing varnish and squid, apache provides the backend web service.

3. When testing apache, 5 processes were started; the number of processes grows as the load increases.

4. When testing nginx, 10 nginx processes and 20 php-cgi processes were started.

5. varnish, squid and nginx were tested as reverse proxies, i.e. requests for images go through the cache tool first.

2. Tests

1. varnish


[root@BlackGhost bin]# /usr/local/bin/webbench -c 100 -t 20 http://127.0.0.1:8080/00/01/RwGowEtWvcQAAAAAAAAWHH0Rklg81.gif
Webbench - Simple Web Benchmark 1.5
Copyright (c) Radim Kolar 1997-2004, GPL Open Source Software.
Benchmarking: GET http://127.0.0.1:8080/00/01/RwGowEtWvcQAAAAAAAAWHH0Rklg81.gif
100 clients, running 20 sec.
Speed=476508 pages/min, 47258114 bytes/sec.
Requests: 158836 susceed, 0 failed.

varnish's cache hit ratio is really high.
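One way to verify this is to look at Varnish's hit and miss counters; the counter names below assume Varnish 4 or later (older versions expose them without the MAIN. prefix).

# print the hit/miss counters once and exit
varnishstat -1 -f MAIN.cache_hit -f MAIN.cache_miss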

2. squid


[root@BlackGhost bin]# /usr/local/bin/webbench -c 100 -t 20 http://localhost:9000/00/01/RwGowEtWvcQAAAAAAAAWHH0Rklg81.gif
Webbench - Simple Web Benchmark 1.5
Copyright (c) Radim Kolar 1997-2004, GPL Open Source Software.
Benchmarking: GET http://localhost:9000/00/01/RwGowEtWvcQAAAAAAAAWHH0Rklg81.gif
100 clients, running 20 sec.
Speed=133794 pages/min, 7475018 bytes/sec.
Requests: 44598 susceed, 0 failed.

Judging from the test results, squid really disappointed me. Before the test I expected varnish to be the best cache, followed by squid, then nginx, and finally apache; as it turned out, squid was the worst. Later I looked at the log file and found that the ratio of cached to uncached requests was not even 1:2 under normal conditions, and under higher load the share of cache hits was even smaller.
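For reference, a rough hit ratio can be computed from squid's access log; this assumes the default native log format and the usual log path, both of which may differ on your system.

# count TCP_HIT / TCP_MEM_HIT lines against the total number of requests
awk '{ total++ } /TCP_(MEM_)?HIT/ { hit++ } END { printf "hit ratio: %.1f%%\n", 100*hit/total }' /var/log/squid/access.log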

3. apache


[root@BlackGhost conf]# /usr/local/bin/webbench -c 100 -t 20 http://localhost/00/01/RwGowEtWvcQAAAAAAAAWHH0Rklg81.gif
Webbench - Simple Web Benchmark 1.5
Copyright (c) Radim Kolar 1997-2004, GPL Open Source Software.
Benchmarking: GET http://localhost/00/01/RwGowEtWvcQAAAAAAAAWHH0Rklg81.gif
100 clients, running 20 sec.
Speed=160890 pages/min, 15856005 bytes/sec.
Requests: 53630 susceed, 0 failed.

4. nginx


[root@BlackGhost conf]# /usr/local/bin/webbench -c 100 -t 20 http://localhost:10000/00/01/RwGowEtWvcQAAAAAAAAWHH0Rklg81.gif
Webbench - Simple Web Benchmark 1.5
Copyright (c) Radim Kolar 1997-2004, GPL Open Source Software.
Benchmarking: GET http://localhost:10000/00/01/RwGowEtWvcQAAAAAAAAWHH0Rklg81.gif
100 clients, running 20 sec.
Speed=304053 pages/min, 30121517 bytes/sec.
Requests: 101351 susceed, 0 failed.

From the above test results we can see that varnish > nginx > apache > squid. This differs somewhat from what I expected: squid is the traditional file-caching tool, so how could it do so badly? Its hit rate was simply low. Looking around on the Internet, I found many people reporting the same thing, so it is probably down to my configuration; perhaps a real expert could get squid to perform at its full potential.
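For what it's worth, here is a sketch of the squid.conf settings that usually influence the hit rate when squid fronts a web server in accelerator (reverse proxy) mode; squid 3.x syntax is assumed, and the addresses, sizes and refresh times are illustrative only.

http_port 9000 accel defaultsite=127.0.0.1
cache_peer 127.0.0.1 parent 80 0 no-query originserver name=apache
# in-memory cache size and the largest object kept in RAM
cache_mem 256 MB
maximum_object_size_in_memory 1024 KB
# on-disk cache: 1024 MB with 16 first-level and 256 second-level directories
cache_dir ufs /var/spool/squid 1024 16 256
# cache image files aggressively even when the backend sends no Expires header
refresh_pattern -i \.(gif|jpg|png)$ 1440 90% 43200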

Varnish is a high-performance open-source HTTP accelerator, generally used together with Nginx or Apache to build an efficient web server. One thread in Varnish is dedicated to accepting new HTTP connections: it waits for clients, and whenever a new HTTP connection arrives it accepts it and wakes up one of the waiting worker threads.

The worker thread reads the URI of the HTTP request and looks it up among the existing objects; on a hit it replies to the user straight away. On a miss it fetches the requested content from the back-end server, stores it in the cache, and then replies. Varnish allocates a cache file of the appropriate size for the object it reads in.

According to the official description, Varnish is a caching HTTP reverse proxy. Varnish creates cache entries as requests come in, and a dedicated expiry (timeout) thread checks the lifetime of every object in the cache, removing those that have timed out.
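The flow described above can be sketched in a minimal VCL file; Varnish 4+ syntax is assumed, and the backend address and TTL are illustrative only.

vcl 4.0;

# where a worker thread fetches content on a cache miss
backend default {
    .host = "127.0.0.1";
    .port = "80";
}

sub vcl_backend_response {
    # how long the stored object lives before the expiry thread removes it
    set beresp.ttl = 10m;
}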

According to various sources, Varnish handles web requests better than squid, Apache and the rest; using Varnish can greatly speed up your website and reduce the load on the web server.

