Four Nginx load balancing configuration scenarios with examples


1. Round Robin

Round robin (also called polling) distributes client Web requests to the back-end servers in the order they are listed in the Nginx configuration file.
An example of configuration is as follows:


http {
  upstream sampleapp {
    server <<dns entry or IP address (port optional)>>;
    server <<another dns entry or IP address (port optional)>>;
  }
  ....
  server {
    listen 80;
    ...
    location / {
      proxy_pass http://sampleapp;
    }
  }
}

The upstream block defines a server group named sampleapp; each server directive inside it holds one DNS entry or IP address, and the same group name is referenced again in the proxy_pass directive.
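
For reference, here is a minimal concrete sketch of the same round-robin setup; the back-end addresses 192.168.0.101 and 192.168.0.102 are hypothetical placeholders, not values from a real deployment:

http {
  upstream sampleapp {
    server 192.168.0.101:8080;   # hypothetical back-end 1
    server 192.168.0.102:8080;   # hypothetical back-end 2; requests alternate between the two
  }
  server {
    listen 80;
    location / {
      proxy_pass http://sampleapp;   # forward to the upstream group by its name
    }
  }
}

After editing, the configuration can be checked with nginx -t and applied with nginx -s reload.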

2. Least Connections

Each Web request is forwarded to the back-end server with the fewest active connections.
An example of configuration is as follows:


http {
  upstream sampleapp {
    least_conn;    # send each request to the server with the fewest active connections
    server <<dns entry or IP address (port optional)>>;
    server <<another dns entry or IP address (port optional)>>;
  }
  ....
  server {
    listen 80;
    ...
    location / {
      proxy_pass http://sampleapp;
    }
  }
}

The above example simply adds the least_conn directive to the upstream block; everything else is the same as the round-robin configuration.

3. IP Hash

In the two load-balancing scenarios described above, consecutive Web requests from the same client may be dispatched to different back-end servers. If sessions are involved, this makes session handling complicated; a common workaround is database-backed session persistence. To avoid this, you can use load balancing based on an IP address hash, in which consecutive Web requests from the same client are dispatched to the same back-end server.
An example of configuration is as follows:


http {
  upstream sampleapp {
    ip_hash;    # hash the client IP so the same client always reaches the same server
    server <<dns entry or IP address (port optional)>>;
    server <<another dns entry or IP address (port optional)>>;
  }
  ....
  server {
    listen 80;
    ...
    location / {
      proxy_pass http://sampleapp;
    }
  }
}

The above example simply adds the ip_hash directive to the upstream block; everything else is the same as the round-robin configuration.

4. Load balancing based on weight

Weighted load balancing lets you configure Nginx to send more requests to higher-capacity back-end servers and fewer requests to lower-capacity ones.
An example of configuration is as follows:


http {
  upstream sampleapp {
    server <<dns entry or IP address (port optional)>> weight=2;   # receives twice as many requests
    server <<another dns entry or IP address (port optional)>>;    # default weight=1
  }
  ....
  server {
    listen 80;
    ...
    location / {
      proxy_pass http://sampleapp;
    }
  }
}

In the above example, weight=2 is added after the first server's address; as a result, out of every 3 requests received, 2 are sent to the first server and 1 to the second server. The rest of the configuration is the same as the round-robin configuration.

Note also that weight-based load balancing can be combined with load balancing based on the IP address hash.
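
As a sketch of such a combination (the addresses and weights below are hypothetical, chosen only for illustration), weight parameters can be added to the server lines inside an ip_hash upstream block:

http {
  upstream sampleapp {
    ip_hash;                               # same client IP always maps to the same server
    server 192.168.0.101:8080 weight=2;    # hypothetical higher-capacity back-end
    server 192.168.0.102:8080;             # hypothetical lower-capacity back-end, default weight=1
  }
  server {
    listen 80;
    location / {
      proxy_pass http://sampleapp;
    }
  }
}

With ip_hash in place, the weights shift how client addresses are distributed across the servers rather than how individual requests alternate.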

