Nginx load balancing is described in detail

  • 2020-05-12 06:51:17
  • OfStack

If a site runs on a single server and that server goes down, it is a disaster for the site. Load balancing solves this: it automatically removes servers that are down from rotation.

The following is a brief introduction to my experience of using Nginx as a load balancer.

Downloading and installing Nginx is not covered here; the previous article covered that.

The Nginx load-balancing configuration is the same on Windows and Linux, so the two are not introduced separately.

Some basics of Nginx load balancing:

nginx's upstream currently supports five allocation methods:
1) round-robin (the default)
Each request is assigned to a different backend server in turn; if a backend server goes down, it is removed automatically.
2) weight
Specifies the polling probability; the weight is proportional to the share of requests a server receives, used when backend server performance is uneven.
3) ip_hash
Each request is allocated according to the hash of the client IP, so each visitor always reaches the same backend server. This solves the session problem.
4) fair (third party)
Requests are allocated according to backend response time; servers with shorter response times are preferred.
5) url_hash (third party)
Requests are allocated according to the hash of the requested URL, so the same URL always reaches the same backend; this is useful when the backend is a cache.
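As a quick sketch of how two of these methods look in configuration (the backend addresses here are hypothetical, and the hash directive comes from the third-party upstream_hash module, which must be compiled in):

```nginx
# round-robin with weights (built in)
upstream weighted_backend {
    server 127.0.0.1:8080 weight=2;  # receives twice as many requests
    server 127.0.0.1:9090;           # weight defaults to 1
}

# url_hash via the third-party upstream_hash module:
# the same URL always reaches the same backend, useful for caches
upstream url_hashed_backend {
    server 127.0.0.1:8080;
    server 127.0.0.1:9090;
    hash $request_uri;
}
```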

Configuration:

Add the following to the http node:

# define the IPs and states of the load-balanced servers
upstream myServer {

server 127.0.0.1:9090 down;
server 127.0.0.1:8080 weight=2;
server 127.0.0.1:6060;
server 127.0.0.1:7070 backup;
}

Add the following under the server node that needs load balancing:

proxy_pass http://myServer;
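Putting the two pieces together, a minimal server block might look like this (the listen port and server_name are placeholders):

```nginx
server {
    listen 80;
    server_name www.example.com;  # placeholder domain

    location / {
        # forward all requests to the upstream group defined above
        proxy_pass http://myServer;
        # pass the original host and client IP on to the backends
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}
```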

States and parameters of each server in an upstream:

down: the server marked this way temporarily does not participate in load balancing.
weight: defaults to 1; the larger the weight, the larger the share of requests the server receives.
max_fails: the number of failed requests allowed, 1 by default. When the maximum is exceeded, the error defined by the proxy_next_upstream directive is returned.
fail_timeout: the time to pause the server after max_fails failures.
backup: requests are sent to backup machines only when all non-backup machines are down or busy, so these machines carry the least load.
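For example, to take a backend out of rotation for 30 seconds after 3 failed requests (the values here are chosen only for illustration):

```nginx
upstream myServer {
    server 127.0.0.1:8080 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:6060;
    server 127.0.0.1:7070 backup;  # used only when the others are unavailable
}
```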

Nginx also supports multiple load-balancing groups: several upstream blocks can be configured to serve different server blocks.
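A sketch of multiple upstream groups serving different server blocks (the group names, ports, and domains are hypothetical):

```nginx
upstream app_servers {
    server 127.0.0.1:8080;
    server 127.0.0.1:8081;
}

upstream static_servers {
    server 127.0.0.1:9090;
    server 127.0.0.1:9091;
}

server {
    listen 80;
    server_name app.example.com;
    location / { proxy_pass http://app_servers; }
}

server {
    listen 80;
    server_name static.example.com;
    location / { proxy_pass http://static_servers; }
}
```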

Configuring load balancing is easy, but a key question remains: how to share sessions among multiple servers.

There are several ways to do this (the following is collected from the web; I have not tried the fourth method myself).

1) instead of using session, use cookie

If you can replace sessions with cookies, you avoid some of the drawbacks of sessions. A J2EE book I read earlier also pointed out that sessions should not be used in clustered systems, otherwise the problems become hard to handle. If the system is not complex, removing sessions should be the first choice. If changing away from sessions is too troublesome, use one of the following methods.

2) the application server realizes sharing by itself

asp.net can save sessions in a database or in memcached, building a session cluster within asp.net itself. This keeps sessions stable: even if a node fails, the session is not lost. However, its efficiency is not high, so it is unsuitable for performance-critical workloads.

Neither of the above methods has anything to do with nginx. The following methods do:

3) ip_hash

The ip_hash technique in nginx can direct requests from one IP to the same backend, so a client at that IP and that backend can establish a stable session. ip_hash is defined in the upstream configuration:

upstream backend {
ip_hash;
server 127.0.0.1:8080;
server 127.0.0.1:9090;
}

ip_hash is easy to understand, but because the only factor it can use to assign a backend is the IP, it has drawbacks and cannot be used in some cases:

1/ When nginx is not the front-most server. ip_hash requires nginx to be the front-most server; otherwise, if the IP it sees is not the real client IP, it cannot hash correctly. For example, if squid is used as the front-most server, nginx only sees the squid server's IP, and splitting traffic by that address is certainly wrong.

2/ When nginx's backend has another layer of load balancing. If the backend distributes requests in a different way, a given client's requests cannot be pinned to the same session application server. In that case the nginx backend can only point directly to the application servers, or another squid layer can point to the application servers. The best solution is to split traffic with location: route the part of the requests that needs sessions through ip_hash, and send the rest to the other backends.
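The location-based split described above might be sketched like this: session-bound paths go through an ip_hash upstream, everything else through a plain round-robin upstream (the paths and ports are hypothetical):

```nginx
upstream session_backend {
    ip_hash;                 # pin each client IP to one backend
    server 127.0.0.1:8080;
    server 127.0.0.1:8081;
}

upstream plain_backend {
    server 127.0.0.1:9090;   # round-robin, no session affinity needed
    server 127.0.0.1:9091;
}

server {
    listen 80;
    # requests that need a session are pinned by client IP
    location /app/ { proxy_pass http://session_backend; }
    # the rest can go to any backend
    location / { proxy_pass http://plain_backend; }
}
```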

4) upstream_hash

To solve some of the problems with ip_hash, you can use the third-party upstream_hash module. It is mostly used as url_hash, but nothing prevents using it for session sharing:

If the front end is squid, it adds the client IP to the X-Forwarded-For HTTP header; upstream_hash can use this header as the hash factor to direct requests to a specific backend:

See this document: http://www.sudone.com/nginx/nginx_url_hash.html

In that document, $request_uri is used as the hash factor; change it slightly to:

hash $http_x_forwarded_for;

This changes the hash factor to the X-Forwarded-For header. In newer versions of nginx, cookie values can also be read, so it can likewise be changed to:

hash $cookie_jsessionid;
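Put into a full upstream block, hashing on the JSESSIONID cookie might look like this (a sketch, assuming the third-party upstream_hash module is compiled in and the ports are placeholders):

```nginx
upstream jvm_backend {
    server 127.0.0.1:8080;
    server 127.0.0.1:8081;
    # requests carrying the same JSESSIONID cookie reach the same backend
    hash $cookie_jsessionid;
}
```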

If the session configured in PHP is not in cookie mode, nginx's own userid module can be used in combination to generate a cookie. See the English documentation of the userid module:
http://wiki.nginx.org/NginxHttpUserIdModule
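A sketch of combining the userid module with upstream_hash: userid issues a cookie (named uid here), and the upstream hashes on the cookie value nginx received. The directive names come from the userid module documentation, but this particular combination is an untested assumption (note that a client's very first request carries no cookie yet):

```nginx
upstream php_backend {
    server 127.0.0.1:9000;
    server 127.0.0.1:9001;
    hash $uid_got;   # $uid_got holds the uid cookie value received from the client
}

server {
    listen 80;
    userid on;           # enable the userid module
    userid_name uid;     # name of the generated cookie
    userid_expires 365d; # cookie lifetime
    location / { proxy_pass http://php_backend; }
}
```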
Another usable module is upstream_jvm_route, written by Yao Weibin: http://code.google.com/p/nginx-upstream-jvm-route/

PS: I am still asking for help: why do page styles display incorrectly on pages deployed behind the Nginx server?

