Nginx + Windows load balancing configuration

  • 2020-05-06 12:18:24
  • OfStack

1. Download Nginx from http://nginx.org/download/nginx-1.2.5.zip and unzip it to the C:\nginx directory.
2. Build one website on each server:
S1: 192.168.16.35:8054
S2: 192.168.16.16:8089
3. Open C:\nginx\conf\nginx.conf and edit the configuration as follows:

 
# User and group to run as; not used on Windows
#user nobody; 
# Number of worker processes (usually the number of CPU cores, or twice that;
# e.g. two quad-core CPUs would give 8)
worker_processes 1; 
# Error log path; the log level can be one of [debug|info|notice|warn|error|crit]
#error_log logs/error.log; 
#error_log logs/error.log notice; 
error_log logs/error.log info; 
# Path of the pid file
#pid logs/nginx.pid; 

# Working mode and connection limits
events { 
# Network I/O model: epoll is recommended on Linux and kqueue on FreeBSD; not used on Windows
#use epoll; 
# Maximum number of connections per worker
worker_connections 1024; 
} 

# The http server; its reverse proxy feature provides the load balancing
http { 
# MIME type settings
include mime.types; 
default_type application/octet-stream; 
# Log formats
#log_format main '$remote_addr - $remote_user [$time_local] "$request" ' 
# '$status $body_bytes_sent "$http_referer" ' 
# '"$http_user_agent" "$http_x_forwarded_for"'; 

#access_log logs/access.log main; 
log_format main '$remote_addr - $remote_user [$time_local] ' 
'"$request" $status $bytes_sent ' 
'"$http_referer" "$http_user_agent" "$http_x_forwarded_for" ' 
'"$gzip_ratio"'; 
log_format download '$remote_addr - $remote_user [$time_local] ' 
'"$request" $status $bytes_sent ' 
'"$http_referer" "$http_user_agent" ' 
'"$http_range" "$sent_http_content_range"'; 

# Set request buffer  
client_header_buffer_size 1k; 
large_client_header_buffers 4 4k; 

# set access log 
access_log logs/access.log main; 
client_header_timeout 3m; 
client_body_timeout 3m; 
send_timeout 3m; 

sendfile on; 
tcp_nopush on; 
tcp_nodelay on; 
#keepalive_timeout 0; 
keepalive_timeout 65; 

# Enable the gzip module
gzip on; 
gzip_min_length 1100; 
gzip_buffers 4 8k; 
gzip_types text/plain application/x-javascript text/css application/xml; 

output_buffers 1 32k; 
postpone_output 1460; 

server_names_hash_bucket_size 128; 
client_max_body_size 8m; 

fastcgi_connect_timeout 300; 
fastcgi_send_timeout 300; 
fastcgi_read_timeout 300; 
fastcgi_buffer_size 64k; 
fastcgi_buffers 4 64k; 
fastcgi_busy_buffers_size 128k; 
fastcgi_temp_file_write_size 128k; 
gzip_http_version 1.1; 
gzip_comp_level 2; 
gzip_vary on; 

# List of load-balanced backend servers
upstream localhost { 
# Distribute requests by a hash of the client IP, so each client always reaches the
# same backend Tomcat; this avoids session-sharing problems
ip_hash; 
# Note: the same machine may present different IPs if it is multi-homed or its route changes
# The weight parameter sets a server's weight: the higher the weight, the more requests it receives
#server localhost:8080 weight=1; 
#server localhost:9080 weight=1; 
server 192.168.16.35:8054 max_fails=2 fail_timeout=600s; 
server 192.168.16.16:8089 max_fails=2 fail_timeout=600s; 
} 

# Virtual host
server { 
listen 80; 
server_name 192.168.16.16; 

#charset koi8-r; 
charset UTF-8; 
# Access log for this virtual host
access_log logs/host.access.log main; 
# Serve /img/*, /js/*, /css/* requests from local files instead of going through squid
# Not recommended when there are many such files, since going through squid's cache performs much better
#location ~ ^/(img|js|css)/ { 
# root /data3/Html; 
# expires 24h; 
# } 
# Enable load balancing for "/"
location / { 
root html; 
index index.html index.htm index.aspx; 

proxy_redirect off; 
# Preserve the real client information
proxy_set_header Host $host; 
proxy_set_header X-Real-IP $remote_addr; 
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; 
# Maximum request body size accepted from a client
client_max_body_size 10m; 
# Maximum number of bytes of the client request body to buffer;
# think of it as saved locally before being passed on to the backend
client_body_buffer_size 128k; 
# Timeout for connecting to the backend server (handshake and waiting for a response)
proxy_connect_timeout 12; 
# Timeout waiting for the backend server to respond after the connection succeeds
# (the request has already entered the backend's queue for processing)
proxy_read_timeout 90; 
# Timeout for transmitting the request to the backend server; all data must be sent within this time
proxy_send_timeout 90; 
# Buffer for the first part of the backend response, which contains the headers;
# it normally only needs to be large enough to hold them
proxy_buffer_size 4k; 
# As above: how many buffers of what size nginx may use per connection
proxy_buffers 4 32k; 
# On a very busy system this can be raised; the official recommendation is proxy_buffers * 2
proxy_busy_buffers_size 64k; 
# Size of the temporary files the proxy may write while buffering
proxy_temp_file_write_size 64k; 
proxy_next_upstream error timeout invalid_header http_500 http_503 http_404; 
proxy_max_temp_file_size 128m; 

proxy_pass http://localhost; 
} 

#error_page 404 /404.html; 

# redirect server error pages to the static page /50x.html 
# 
error_page 500 502 503 504 /50x.html; 
location = /50x.html { 
root html; 
} 

# proxy the PHP scripts to Apache listening on 127.0.0.1:80 
# 
#location ~ \.php$ { 
# proxy_pass http://127.0.0.1; 
#} 

# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 
# 
#location ~ \.php$ { 
# root html; 
# fastcgi_pass 127.0.0.1:9000; 
# fastcgi_index index.php; 
# fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name; 
# include fastcgi_params; 
#} 

# deny access to .htaccess files, if Apache's document root 
# concurs with nginx's one 
# 
#location ~ /\.ht { 
# deny all; 
#} 
} 


# another virtual host using mix of IP-, name-, and port-based configuration 
# 
#server { 
# listen 8000; 
# listen somename:8080; 
# server_name somename alias another.alias; 

# location / { 
# root html; 
# index index.html index.htm; 
# } 
#} 


# HTTPS server 
# 
#server { 
# listen 443; 
# server_name localhost; 

# ssl on; 
# ssl_certificate cert.pem; 
# ssl_certificate_key cert.key; 

# ssl_session_timeout 5m; 

# ssl_protocols SSLv2 SSLv3 TLSv1; 
# ssl_ciphers HIGH:!aNULL:!MD5; 
# ssl_prefer_server_ciphers on; 

# location / { 
# root html; 
# index index.html index.htm; 
# } 
#} 

} 
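As an aside, the stickiness that ip_hash provides can be sketched in a few lines of Python. This is only an illustration of the idea, not nginx's exact algorithm (for IPv4, nginx hashes the first three octets of the address, so clients from the same /24 network share a backend):

```python
# Illustrative sketch of ip_hash-style stickiness; not nginx's exact hash.
BACKENDS = ["192.168.16.35:8054", "192.168.16.16:8089"]  # from the upstream block

def pick_backend(client_ip: str) -> str:
    # Hash only the first three octets, as nginx's ip_hash does for IPv4,
    # so clients on the same /24 network land on the same backend.
    key = ".".join(client_ip.split(".")[:3])
    return BACKENDS[hash(key) % len(BACKENDS)]

# The same client (indeed, the same /24) always reaches the same backend,
# which is what keeps its session alive on one server.
print(pick_backend("10.0.0.7") == pick_backend("10.0.0.99"))  # True
```

One consequence: many clients behind a single NAT or on one network all land on the same backend, so ip_hash can distribute load quite unevenly.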

4. Double-click C:\nginx\nginx.exe to start nginx.
Open a browser and go to http://192.168.16.16.
To test the failover: stop the website on S1 and refresh the browser; then stop the site on S2, start the site on S1 again, and refresh the browser. The page should keep loading as long as at least one server is up.
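The failover exercised by this test comes from the max_fails=2 fail_timeout=600s parameters on each server line: after two failed attempts within the window, the server is skipped for fail_timeout seconds. A minimal Python sketch of that bookkeeping (simplified; real nginx tracks more state):

```python
class Backend:
    """Simplified model of nginx's max_fails/fail_timeout accounting:
    after max_fails errors, the server leaves rotation for fail_timeout seconds."""
    def __init__(self, addr, max_fails=2, fail_timeout=600):
        self.addr = addr
        self.max_fails = max_fails
        self.fail_timeout = fail_timeout
        self.fails = 0
        self.down_until = 0.0

    def record_failure(self, now):
        self.fails += 1
        if self.fails >= self.max_fails:
            self.down_until = now + self.fail_timeout  # take it out of rotation
            self.fails = 0

    def available(self, now):
        return now >= self.down_until

s1 = Backend("192.168.16.35:8054")
s1.record_failure(now=0)            # first failed request
print(s1.available(now=1))          # True: still in rotation
s1.record_failure(now=2)            # second failure within the window
print(s1.available(now=3))          # False: skipped for 600 s
print(s1.available(now=2 + 600))    # True: back in rotation
```

With fail_timeout=600s as configured above, a downed server stays out of rotation for ten minutes, which is why the browser test should be done by actually stopping the sites rather than waiting for recovery.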

Core code 1: add the following inside http {}:

# List of load-balanced backend servers
upstream localhost { 
# Distribute requests by a hash of the client IP, so each client always reaches the
# same backend Tomcat; this avoids session-sharing problems
ip_hash; 
# Note: the same machine may present different IPs if it is multi-homed or its route changes
# The weight parameter sets a server's weight: the higher the weight, the more requests it receives
#server localhost:8080 weight=1; 
#server localhost:9080 weight=1; 
server 192.168.1.98:8081 max_fails=2 fail_timeout=600s; 
server 192.168.1.98:8082 max_fails=2 fail_timeout=600s; 
} 
Core code 2: add the following inside server {}:

# Enable load balancing for "/"
location / { 
root html; 
index index.html index.htm index.aspx; 

proxy_redirect off; 
# Preserve the real client information
proxy_set_header Host $host; 
proxy_set_header X-Real-IP $remote_addr; 
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; 
# Maximum request body size accepted from a client
client_max_body_size 10m; 
# Maximum number of bytes of the client request body to buffer;
# think of it as saved locally before being passed on to the backend
client_body_buffer_size 128k; 
# Timeout for connecting to the backend server (handshake and waiting for a response)
proxy_connect_timeout 12; 
# Timeout waiting for the backend server to respond after the connection succeeds
# (the request has already entered the backend's queue for processing)
proxy_read_timeout 90; 
# Timeout for transmitting the request to the backend server; all data must be sent within this time
proxy_send_timeout 90; 
# Buffer for the first part of the backend response, which contains the headers;
# it normally only needs to be large enough to hold them
proxy_buffer_size 4k; 
# As above: how many buffers of what size nginx may use per connection
proxy_buffers 4 32k; 
# On a very busy system this can be raised; the official recommendation is proxy_buffers * 2
proxy_busy_buffers_size 64k; 
# Size of the temporary files the proxy may write while buffering
proxy_temp_file_write_size 64k; 
proxy_next_upstream error timeout invalid_header http_500 http_503 http_404; 
proxy_max_temp_file_size 128m; 
proxy_pass http://localhost; 
} 
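Among the headers set above, $proxy_add_x_forwarded_for deserves a note: it appends the connecting client's address to any X-Forwarded-For header already present, so a backend behind several proxies can still see the whole chain. A small Python sketch of that behavior:

```python
def proxy_add_x_forwarded_for(headers: dict, client_ip: str) -> str:
    """Mimics nginx's $proxy_add_x_forwarded_for variable: append the
    connecting client's address to any existing X-Forwarded-For value."""
    existing = headers.get("X-Forwarded-For")
    return f"{existing}, {client_ip}" if existing else client_ip

# First proxy hop: no header yet, so the value is just the client address.
print(proxy_add_x_forwarded_for({}, "203.0.113.9"))
# 203.0.113.9
# Second hop: the earlier value is preserved and the new address appended.
print(proxy_add_x_forwarded_for({"X-Forwarded-For": "203.0.113.9"}, "192.168.16.1"))
# 203.0.113.9, 192.168.16.1
```

The backend then reads the first address in the list as the original client (trusting it only if all the proxies in between are trusted).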


Here are some additional notes:

Nginx load balancing is a powerful technique that many people never quite master, so here is a more detailed introduction. I tried Nginx load balancing today and it worked great. First of all, what is Nginx?

Nginx ("engine x") is a high-performance HTTP and reverse proxy server, as well as an IMAP/POP3/SMTP proxy server. Nginx was developed by Igor Sysoev for Rambler.ru, Russia's second-most-visited site, where it ran in production for more than two and a half years. Igor publishes the source code under a BSD-like license. Although still in beta at the time, Nginx was already known for its stability, rich feature set, simple configuration, and low system resource consumption.

The first thing to note is that the configuration is simple yet powerful. Without further ado, let's look at a minimal configuration file.

 
worker_processes 1; 
events { 
worker_connections 1024; 
} 
http { 
upstream myproject { 
# List the origin servers as ip:port; port 80 may be omitted
server 192.168.43.158:80; 
server 192.168.41.167; 
} 
server { 
listen 8080; 
location / { 
proxy_pass http://myproject; 
} 
} 
} 
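With no ip_hash or weight in the upstream block, nginx falls back to round-robin: requests are handed to the listed servers in turn. Sketched in Python:

```python
from itertools import cycle

# The two servers from the myproject upstream above (port 80 made explicit).
servers = ["192.168.43.158:80", "192.168.41.167:80"]
rotation = cycle(servers)  # round-robin: each request goes to the next server

first_four = [next(rotation) for _ in range(4)]
print(first_four)
# ['192.168.43.158:80', '192.168.41.167:80', '192.168.43.158:80', '192.168.41.167:80']
```

Round-robin spreads load evenly but gives no session stickiness, which is exactly why the Windows configuration earlier adds ip_hash.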


What does Nginx load balancing actually do?

If one of the backend servers fails, Nginx detects it automatically and stops sending requests to it. Better still, the backends can be balanced unevenly: if server B should take three times the traffic of server A (say, because it responds three times faster), Nginx can be made to send B three times as many requests, via the weight parameter or a response-time-aware balancing module. That is real load balancing. The installation went fine until make failed, complaining about the HTTP rewrite module, so I configured without it:

./configure --without-http_rewrite_module

and then ran make and make install.

Once installed, create a new configuration file, copy in the contents of the file above (changing the IP addresses to your own), save it as load_balance.conf, and launch:

/usr/local/Nginx/sbin/Nginx -c load_balance.conf

Since the author of Nginx is Russian, the English documentation is not that complete. For me, the biggest advantages of Nginx are its simple configuration and powerful features. I used to use Apache with mod_jk, which is really cumbersome: it is complex, and it can only load-balance Tomcat.

Nginx has no such restriction; which servers sit behind it is completely transparent to it. One regret at the time was that Nginx itself did not yet run under Windows (newer releases do, as the first half of this article shows). Please point out anything I got wrong.
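The "three times as many requests" behavior mentioned above is easiest to see with the weight parameter. A naive Python sketch of weighted round-robin (nginx actually uses a smoother interleaving, but the per-cycle proportions are the same; servers B and A here are hypothetical):

```python
def expand_weighted(servers):
    # A server with weight w appears w times per cycle, so weight=3 receives
    # three times the traffic of weight=1 over each full rotation.
    schedule = []
    for addr, weight in servers:
        schedule.extend([addr] * weight)
    return schedule

schedule = expand_weighted([("B", 3), ("A", 1)])
print(schedule)  # ['B', 'B', 'B', 'A']
```

In nginx this corresponds to `server B weight=3;` and `server A weight=1;` inside the upstream block.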
