What is Nginx load balancing and how to configure it

  • 2021-10-13 09:06:40
  • OfStack

What is Load Balancing

Load balancing can be implemented with dedicated hardware or in software. Hardware load balancers are effective, efficient and stable, but relatively expensive. Software load balancing depends mainly on the choice of balancing algorithm and the robustness of the implementation. There are many load balancing algorithms, falling into two broad families: static and dynamic. Static algorithms are simple to implement and work well in typical network environments; they include plain round-robin, weighted round-robin based on a fixed ratio, and weighted round-robin based on priority. Dynamic algorithms adapt better and perform better in complex network environments; they include least-connections scheduling based on current load, fastest-response-first scheduling based on measured performance, prediction algorithms, and dynamic performance allocation.
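As a rough illustration of the dynamic least-connections policy mentioned above, here is a minimal Python sketch: each new request goes to the backend currently handling the fewest active connections. The function name and the connection counts are invented for illustration; this is not how nginx implements the policy internally.

```python
def pick_least_connections(active):
    """Return the backend currently handling the fewest active connections.

    `active` maps backend address -> current connection count.
    """
    return min(active, key=active.get)

# Invented snapshot of per-backend connection counts:
active = {"192.168.1.2:80": 3, "192.168.1.3:80": 1, "192.168.1.4:80": 2}
chosen = pick_least_connections(active)  # -> "192.168.1.3:80"
```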

The general principle of network load balancing is to spread the network load across the nodes of a cluster according to a distribution strategy, so that a single heavy task can be processed in parallel on multiple nodes, or a large volume of concurrent requests or traffic can be divided among multiple nodes, thereby reducing the time users wait for a response.

Nginx Server Load Balancing Configuration

The Nginx server implements a static weighted round-robin algorithm, and the main configuration tools are the proxy_pass and upstream directives, both of which are easy to understand in isolation. The key point lies in Nginx's flexible and diverse configuration: how to sensibly integrate other features alongside load balancing and form a configuration scheme that meets actual needs.

Several basic example fragments follow. They cannot cover every configuration scenario, but I hope they serve as a useful starting point; beyond that, we need to summarize and accumulate experience in practical use. Notes on the configuration are added in the form of comments.

Configuration Example 1: Round-Robin Load Balancing for All Requests

In the following example fragment, every server in the backend server group is left at the default priority, weight=1, so they receive request tasks in turn under the default round-robin policy. This is the simplest configuration for realizing Nginx server load balancing: all requests to www.myweb.name are balanced across the backend server group. The example code is as follows:


...
 
upstream backend                    # Configure the back-end server group
{
    server 192.168.1.2:80;
    server 192.168.1.3:80;
    server 192.168.1.4:80;          # Default weight=1
}
server
{
    listen 80;
    server_name www.myweb.name;
    index index.html index.htm;
    location / {
        proxy_pass http://backend;
        proxy_set_header Host $host;
    }
    ...
}   
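The effect of this default round-robin policy can be sketched in Python. This simulates only the dispatch order, using the upstream addresses from the example above; it is not nginx code.

```python
from itertools import cycle

# The three backends from the example, each at the default weight=1.
backends = ["192.168.1.2:80", "192.168.1.3:80", "192.168.1.4:80"]

# Round-robin: hand requests to the backends in turn.
dispatcher = cycle(backends)
order = [next(dispatcher) for _ in range(6)]
# Six requests visit each backend exactly twice, in rotation.
```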

Configuration Example 2: Weighted Round-Robin Load Balancing for All Requests

In this example fragment, the servers in the backend server group are assigned different priorities than in Configuration Example 1; the value of the weight parameter is the "weight" in the round-robin policy. Here, 192.168.1.2:80 has the highest weight and therefore receives and processes the largest share of client requests; 192.168.1.4:80, with the lowest weight, receives and processes the fewest; and 192.168.1.3:80 falls somewhere in between. All requests to www.myweb.name are balanced across the backend server group according to these weights. The example code is as follows:


...
 
upstream backend                    # Configure the back-end server group
{
    server 192.168.1.2:80 weight=5;
    server 192.168.1.3:80 weight=2;
    server 192.168.1.4:80;          # Default weight=1
}
server
{
    listen 80;
    server_name www.myweb.name;
    index index.html index.htm;
    location / {
        proxy_pass http://backend;
        proxy_set_header Host $host;
    }
    ...
}
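With these weights, out of every 5+2+1=8 requests, roughly 5 go to 192.168.1.2:80, 2 to 192.168.1.3:80, and 1 to 192.168.1.4:80. A naive Python simulation of that proportion is below; note it simply repeats each backend by its weight, whereas nginx actually uses a smooth weighted round-robin that interleaves the servers, so the real order differs even though the proportions match.

```python
from collections import Counter
from itertools import cycle

# Backends and weights from the example above.
weights = {"192.168.1.2:80": 5, "192.168.1.3:80": 2, "192.168.1.4:80": 1}

# Naive weighted round-robin: repeat each backend `weight` times.
slots = [addr for addr, w in weights.items() for _ in range(w)]
dispatcher = cycle(slots)

counts = Counter(next(dispatcher) for _ in range(8))
# Over 8 requests, the shares follow the weights: 5, 2 and 1.
```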

Configuration Example 3: Load Balancing for Specific Resources

In this example fragment, we set up two proxy server groups: one, named "videobackend", balances client requests for video resources, and the other balances client requests for file resources. All requests for "http://www.myweb.name/video/*" are balanced across the videobackend server group, and all requests for "http://www.myweb.name/file/*" across the filebackend server group. This example shows plain round-robin load balancing; for weighted load balancing, please refer to "Configuration Example 2".

In the location /file/ {......} block, we pass the client's real information to the back end in the "Host", "X-Real-IP" and "X-Forwarded-For" request header fields, so that the backend server group receives requests carrying the client's real information rather than the Nginx server's. The example code is as follows:


...
 
upstream videobackend                    # Configure back-end server group 1
{
    server 192.168.1.2:80;
    server 192.168.1.3:80;
    server 192.168.1.4:80;
}
upstream filebackend                    # Configure back-end server group 2
{
    server 192.168.1.5:80;
    server 192.168.1.6:80;
    server 192.168.1.7:80;
}
server
{
    listen 80;
    server_name www.myweb.name;
    index index.html index.htm;
    location /video/ {
        proxy_pass http://videobackend; # Use back-end server group 1
        proxy_set_header Host $host;
        ...
    }
    location /file/ {
        proxy_pass http://filebackend;  # Use back-end server group 2
                                        # Retain the client's real information
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        ...
    }
}    
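The $proxy_add_x_forwarded_for variable used above appends the connecting client's address to any X-Forwarded-For header the request already carries, or starts a new value if the request carried none. Its behaviour can be sketched in Python; this models the variable's documented semantics with invented addresses, it is not nginx source.

```python
def proxy_add_x_forwarded_for(existing, client_ip):
    """Mimic nginx's $proxy_add_x_forwarded_for variable:
    append the client address to the incoming X-Forwarded-For
    header, or start a new one if the request carried none."""
    if existing:
        return existing + ", " + client_ip
    return client_ip

# A request arriving directly from the client:
direct = proxy_add_x_forwarded_for("", "203.0.113.7")
# A request that already passed through another proxy:
chained = proxy_add_x_forwarded_for("198.51.100.1", "203.0.113.7")
```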

Configuration Example 4: Load Balancing for Different Domain Names

In this example, we set up two virtual servers and two backend proxy server groups to receive requests for different domain names and load balance them. If the client requests the domain "home.myweb.name", server1 receives the request and forwards it to the homebackend server group for load balancing; if the client requests "bbs.myweb.name", server2 receives it and forwards it to the bbsbackend server group. In this way, load balancing across different domain names is realized.

Note that the server 192.168.1.4:80 is shared by the two backend server groups. All resources under both domain names need to be deployed on this server so that client requests routed to it do not fail. The example code is as follows:


...
upstream bbsbackend                    # Configure back-end server group 1
{
    server 192.168.1.2:80 weight=2;
    server 192.168.1.3:80 weight=2;
    server 192.168.1.4:80;
}
upstream homebackend                    # Configure back-end server group 2
{
    server 192.168.1.4:80;
    server 192.168.1.5:80;
    server 192.168.1.6:80;
}
                                        # Configure virtual server 1
server
{
    listen 80;
    server_name home.myweb.name;
    index index.html index.htm;
    location / {
        proxy_pass http://homebackend;
        proxy_set_header Host $host;
        ...
    }
    ...
}
                                        # Configure virtual server 2
server
{
    listen 80;
    server_name bbs.myweb.name;
    index index.html index.htm;
    location / {
        proxy_pass http://bbsbackend;
        proxy_set_header Host $host;
        ...
    }
    ...
}
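The routing performed by the two server blocks, choosing an upstream group by the requested domain name, can be sketched in Python. This is a simplified model of nginx's exact server_name matching, using the groups from the example; the function name is invented.

```python
# Domain -> upstream group, mirroring the two server blocks above.
UPSTREAMS = {
    "home.myweb.name": ["192.168.1.4:80", "192.168.1.5:80", "192.168.1.6:80"],
    "bbs.myweb.name":  ["192.168.1.2:80", "192.168.1.3:80", "192.168.1.4:80"],
}

def select_group(host):
    """Pick the backend group for a request's Host header,
    the way the two server_name directives do."""
    return UPSTREAMS.get(host)

group = select_group("bbs.myweb.name")
# 192.168.1.4:80 appears in both groups, which is why both
# domains' resources must be deployed on it.
```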

Configuration Example 5: Implementing Load Balancing with URL Rewrite

First, let's look at the specific source code, which is a modification based on Example 1:


...
upstream backend                    # Configure the back-end server group
{
    server 192.168.1.2:80;
    server 192.168.1.3:80;
    server 192.168.1.4:80;          # Default weight=1
}
server
{
    listen 80;
    server_name www.myweb.name;
    index index.html index.htm;
     
    location /file/ {
        rewrite ^(/file/.*)/media/(.*)\..*$ $1/mp3/$2.mp3 last;
    }
     
    location / {
        proxy_pass http://backend;
        proxy_set_header Host $host;
    }
    ...
}

Compared with "Configuration Example 1", this fragment adds URL rewriting for requests whose URI contains "/file/". For example, when the client requests the URL "http://www.myweb.name/file/download/media/1.mp3", the virtual server first rewrites the URI in the location /file/ {......} block and then forwards the request to the backend server group for load balancing. In this way, load balancing with URL rewriting is easily realized. For this scheme to behave as expected, you must understand the difference between the last and break flags of the rewrite directive: last re-evaluates the rewritten URI against the location blocks, while break stops rewriting and continues processing within the current location.
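The rewrite pattern `^(/file/.*)/media/(.*)\..*$` replaces the "/media/" path segment with "/mp3/" and forces an ".mp3" extension. Its effect on the example URI can be checked with Python's re module, since nginx rewrite rules use the same regular-expression style:

```python
import re

# The rewrite rule from the example: swap the /media/ segment
# for /mp3/ and force the .mp3 extension.
pattern = r"^(/file/.*)/media/(.*)\..*$"
replacement = r"\1/mp3/\2.mp3"

rewritten = re.sub(pattern, replacement, "/file/download/media/1.mp3")
# -> "/file/download/mp3/1.mp3"
```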

The five configuration examples above show the basic ways an Nginx server can be configured for load balancing in different situations. Because Nginx's features compose incrementally, we can keep adding functionality on top of these configurations, such as web caching, Gzip compression, authentication and access control. Likewise, when configuring server groups with the upstream directive, we can make full use of each directive's parameters to build Nginx servers that meet real needs and are efficient, stable and feature-rich.

These are the details of what Nginx load balancing is and how to configure it.

