An overview of the scheduling algorithms for Nginx layer-7 load balancing

  • 2020-05-17 07:53:06
  • OfStack

This article introduces the main scheduling algorithms used in Nginx layer-7 load balancing, illustrating each one in detail with example configuration. It should serve as a useful reference for study or work for anyone who needs it.

Nginx is a lightweight, high-performance web server that also works very well as a load balancer and reverse proxy. Thanks to its powerful regular-expression matching rules, support for separating static from dynamic content, URL rewriting, simple installation and configuration, and low sensitivity to network instability, it is often deployed as a layer-7 load balancer. On ordinary hardware it can stably handle tens of thousands of concurrent connections; with good enough hardware, plus tuned kernel parameters and Nginx configuration, it can even exceed 100,000 concurrent connections.

The following are the common scheduling algorithms available when Nginx is used as a layer-7 load balancer, along with the business scenarios each one suits.

1. Round robin (the default scheduling algorithm)

Features: requests are distributed to the back-end servers one at a time, in the order they arrive.
Applicable business scenario: back-end servers with identical hardware configurations and no special business requirements.


upstream backendserver {
    server 192.168.0.14:80 max_fails=2 fail_timeout=10s;
    server 192.168.0.15:80 max_fails=2 fail_timeout=10s;
}
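An upstream block by itself does not route any traffic; a server block must proxy requests to it. A minimal sketch of how the backendserver group above could be used, assuming a hypothetical site listening on port 80 (the domain name is an illustration, not from the original article):

```nginx
server {
    listen 80;
    server_name www.example.com;   # hypothetical domain

    location / {
        # forward requests to the upstream group defined above
        proxy_pass http://backendserver;
        # pass the original host header and client address to the back-ends
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

The same server/location block works unchanged with every upstream example in this article; only the contents of the upstream block determine the scheduling algorithm.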

2. Weighted round robin

Features: round robin with a specified probability; the weight value is proportional to the share of requests a server receives, so user requests are distributed according to the weight ratio.
Applicable business scenario: back-end servers with uneven hardware processing power. In the example below, the server with weight=10 receives twice as many requests as the one with weight=5.


upstream backendserver {
    server 192.168.0.14:80 weight=5 max_fails=2 fail_timeout=10s;
    server 192.168.0.15:80 weight=10 max_fails=2 fail_timeout=10s;
}

3. ip_hash

Features: requests are distributed according to a hash of the client IP address, so each visitor consistently reaches the same back-end server. This solves the session-persistence problem.
Applicable business scenario: systems that require account login or need to maintain session state across requests.


upstream backendserver {
    ip_hash;
    server 192.168.0.14:80 max_fails=2 fail_timeout=10s;
    server 192.168.0.15:80 max_fails=2 fail_timeout=10s;
}

4. Least connections (least_conn)

Features: requests are distributed according to the number of active connections between the Nginx reverse proxy and each back-end server; the server with the fewest connections is preferred.

Applicable business scenario: services where clients hold long-lived connections to the back-end servers.


upstream backendserver {
    least_conn;
    server 192.168.0.14:80 max_fails=2 fail_timeout=10s;
    server 192.168.0.15:80 max_fails=2 fail_timeout=10s;
}

5. fair (requires compiling in the third-party ngx_http_upstream_fair_module)

Features: requests are distributed according to back-end server response time; servers with shorter response times are preferred.
Applicable business scenario: services with strict requirements on response speed.


upstream backendserver {
    fair;
    server 192.168.0.14:80 max_fails=2 fail_timeout=10s;
    server 192.168.0.15:80 max_fails=2 fail_timeout=10s;
}

6. url_hash (requires the third-party ngx_http_upstream_hash_module on older versions; since Nginx 1.7.2 the hash directive is built into the standard ngx_http_upstream_module)

Features: requests are distributed according to a hash of the requested URL, so that requests for the same URL always go to the same back-end server.

Applicable business scenario: most effective when the back-end servers are cache servers, since it raises the cache hit rate.


upstream backendserver {
    hash $request_uri;
    server 192.168.0.14:80 max_fails=2 fail_timeout=10s;
    server 192.168.0.15:80 max_fails=2 fail_timeout=10s;
}
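On Nginx 1.7.2 and later, where hash is part of the standard ngx_http_upstream_module, a consistent parameter can be added. With consistent (ketama) hashing, adding or removing a cache server remaps only a small fraction of URLs instead of reshuffling all of them, which preserves most of the cache. A sketch of the same example with this parameter:

```nginx
upstream backendserver {
    # ketama consistent hashing: minimizes the number of URLs
    # remapped when the set of back-end servers changes
    hash $request_uri consistent;
    server 192.168.0.14:80 max_fails=2 fail_timeout=10s;
    server 192.168.0.15:80 max_fails=2 fail_timeout=10s;
}
```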
