nginx load balancing and dynamic-static separation


This is an nginx configuration on Windows, provided for your reference; the details are as follows.

The following is a configuration file for my project


#user nobody;
worker_processes 4; # Number of worker processes; typically set to the number of CPU cores

#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;

#pid logs/nginx.pid;


events {
 worker_connections 1024; # Maximum number of connections per worker process
}


http {
 include mime.types;
 default_type application/octet-stream;

 #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
 #   '$status $body_bytes_sent "$http_referer" '
 #   '"$http_user_agent" "$http_x_forwarded_for"';

 #access_log logs/access.log main;

 sendfile on;
 #tcp_nopush on;

 #keepalive_timeout 0;
 keepalive_timeout 65;
 proxy_connect_timeout 15s; 
 proxy_send_timeout 15s; 
 proxy_read_timeout 15s;
 fastcgi_buffers 8 128k;

 gzip on;
 client_max_body_size 30m;
 gzip_min_length 1k;
 gzip_buffers 16 64k;
 gzip_http_version 1.1;
 gzip_comp_level 6;
 gzip_types text/plain application/x-javascript text/css application/xml application/javascript image/jpeg image/gif image/png image/webp;
 gzip_vary on;

 # Cluster 1
 upstream xdx.com {
 server 119.10.52.28:8081 weight=100;
 server 119.10.52.28:8082 weight=100;
 }

 # Cluster 2, used for image uploads
 upstream xdxfile.com {
 server 119.10.52.28:8081; # File-upload requests go to this cluster
 }

 # Cluster 3
 upstream xdx8082.com {
 server 119.10.52.28:8082; # 8082
 }

 # Cluster 4
 upstream xdxali.com {
 server 139.196.235.228:8082; # Aliyun
 }

 # Cluster 5
 upstream xdxaliws.com {
 server 139.196.235.228:8886; # Aliyun websocket
 }
# Proxy server 1: listens on port 80 for the domains www.wonyen.com and wonyen.com
 server {
 listen 80; # Listening port
 server_name www.wonyen.com wonyen.com; # Domain names to match

 #charset koi8-r;

 #access_log logs/host.access.log main;


 # A location block matches a request path. The commented-out block below would serve
 # requests for the site root (wonyen.com or www.wonyen.com) from index.html or index.htm
 # under the html directory; index.html can then redirect visitors to a specific page.

 # Alternatively, the root path could be proxied to a cluster
 # location / {
  # root html;
  # index index.html index.htm;
 #}
 # All static requests are handled by nginx itself, served from webapps/ROOT, with a 30-day expiry
  location ~ \.(css|js|gif|jpg|jpeg|png|bmp|swf|eot|svg|ttf|woff|mp3|mp4|wav|wmv|flv|f4v|json)$ { 
  root apache-tomcat-8.0.9-windows-x86-yipin-8081/apache-tomcat-8.0.9/webapps/ROOT; 
  expires 30d; 
 }

 # Requests whose paths end in Att are handled by the http://xdxfile.com cluster
 location ~ ^/\w+Att{
  proxy_pass http://xdxfile.com;
 }

 # Requests whose paths end in Fill are handled by the http://xdxfile.com cluster
 location ~ ^/\w+Fill{
  proxy_pass http://xdxfile.com;
 }

 # Exact match: a request named /crowdFundSave goes to http://xdxfile.com
 location = /crowdFundSave{
  proxy_pass http://xdxfile.com;
 }

 # Exact match, as above
 location = /crowdFundRewardSave{
  proxy_pass http://xdxfile.com;
 }

 # Exact match, as above
 location = /garbageCategorySave{
  proxy_pass http://xdxfile.com;
 }

 # Exact match, routed to the 8082 cluster
 location = /mailTestAjax{
  proxy_pass http://xdx8082.com;
 }

 # Exact match, as above
 location = /mailSendAjax{
  proxy_pass http://xdx8082.com;
 }

 # Exact match, as above
 location = /mailOldAjax{
  proxy_pass http://xdx8082.com;
 }

 # Exact match, as above (currently disabled)
 #location = /wechatAuthority{
  #proxy_pass http://xdxali.com;
 #}
 location ~ ^/ueditor1_4_3{
  proxy_pass http://xdxfile.com;
 }
 # All other requests go to the http://xdx.com cluster
  location ~ .*$ {
  index index;
  proxy_pass http://xdx.com;
  }
 # 404 errors are served by the /Error404.jsp location
 error_page 404  /Error404.jsp;

 # 500-series errors (500/502/503/504) also go to /Error404.jsp
 error_page 500 502 503 504 /Error404.jsp;

 # Requests for /Error404.jsp are proxied to the http://xdxfile.com cluster
 location = /Error404.jsp {
  proxy_pass http://xdxfile.com;
 }

 # proxy the PHP scripts to Apache listening on 127.0.0.1:80
 #
 #location ~ \.php$ {
 # proxy_pass http://127.0.0.1;
 #}

 # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
 #
 #location ~ \.php$ {
 # root  html;
 # fastcgi_pass 127.0.0.1:9000;
 # fastcgi_index index.php;
 # fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
 # include fastcgi_params;
 #}

 # deny access to .htaccess files, if Apache's document root
 # concurs with nginx's one
 #
 #location ~ /\.ht {
 # deny all;
 #}
 }


 # another virtual host using mix of IP-, name-, and port-based configuration
 # Proxy server 2: listens on port 8886 for www.wonyen.com and wonyen.com
 server {
 listen 8886;
 server_name www.wonyen.com wonyen.com;
# Requests to the root of wonyen.com:8886 are proxied to the http://xdxaliws.com cluster, where the websocket server runs
 location / {
  proxy_pass http://xdxaliws.com;
  proxy_http_version 1.1;
  proxy_set_header Upgrade $http_upgrade;
  proxy_set_header Connection "upgrade";
 }
 }


 # HTTPS server
 #
 #server {
 # listen 443 ssl;
 # server_name localhost;

 # ssl_certificate cert.pem;
 # ssl_certificate_key cert.key;

 # ssl_session_cache shared:SSL:1m;
 # ssl_session_timeout 5m;

 # ssl_ciphers HIGH:!aNULL:!MD5;
 # ssl_prefer_server_ciphers on;

 # location / {
 # root html;
 # index index.html index.htm;
 # }
 #}

}

That is one of my configurations. Basically everything worth noting is annotated in the configuration file itself; let me single out a few of the most important points.

1. Cluster configuration. In the configuration above I defined multiple clusters; a cluster is literally just a group of servers.


upstream xdx.com {
 server 119.10.52.28:8081 weight=100;
 server 119.10.52.28:8082 weight=100;
}

With this configuration, the cluster contains two branches. Ideally the same project is deployed on two separate servers (in my example it is deployed twice on one server, under different ports, because my servers are limited); when a request arrives, nginx distributes it between them, by default in weighted round-robin fashion, and the weight parameter tunes each server's share of the traffic. That is the essence of load balancing: deploy the same project on several servers and let nginx forward requests among them, so that no single server bears the whole load, and when one server fails nginx assigns the work to another, keeping the service from stopping.
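As a minimal sketch (the addresses and tuning values below are placeholders, not from my configuration above), nginx's standard weight, max_fails, fail_timeout, and backup parameters make this distribution and failover behavior explicit:

# Hypothetical upstream; placeholder addresses, not from the configuration above.
upstream example_cluster {
 server 10.0.0.1:8081 weight=3 max_fails=2 fail_timeout=30s; # ~3 of every 4 requests
 server 10.0.0.2:8081 weight=1 max_fails=2 fail_timeout=30s; # ~1 of every 4 requests
 server 10.0.0.3:8081 backup; # used only when the servers above are down
}

After max_fails failed attempts within fail_timeout, nginx marks a backend as unavailable for the fail_timeout period and routes traffic to the remaining servers.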

2. Each server block defines a proxy (virtual) server. In the file above I configured two of them, listening on port 80 and port 8886 for the domains wonyen.com and www.wonyen.com. All requests to wonyen.com:80 are forwarded according to the rules in the first server block, while all requests to wonyen.com:8886 follow the rules in the second; a stripped-down skeleton is sketched below.
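Skeleton of the port-based split (illustrative only; most directives are omitted, see the full configuration above):

# Skeleton only; not a complete configuration.
server {
 listen 80; # ordinary HTTP traffic
 server_name www.wonyen.com wonyen.com;
 location / { proxy_pass http://xdx.com; }
}
server {
 listen 8886; # websocket traffic for the same domains
 server_name www.wonyen.com wonyen.com;
 location / { proxy_pass http://xdxaliws.com; }
}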

3. nginx can also serve multiple domain names, as in the following example. Here I configured two sets of rules, one for an IIS site and one for a Tomcat site, mainly to work around the fact that port 80 can be bound by only one application: if IIS takes port 80, Tomcat cannot use it, and vice versa. So I moved both IIS and Tomcat to other ports, gave port 80 to nginx, and let nginx dispatch requests to the appropriate site by domain name.



#user nobody;
worker_processes 1;

#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;

#pid logs/nginx.pid;


events {
 worker_connections 1024;
}


http {
 include mime.types;
 default_type application/octet-stream;

 #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
 #   '$status $body_bytes_sent "$http_referer" '
 #   '"$http_user_agent" "$http_x_forwarded_for"';

 #access_log logs/access.log main;

 sendfile on;
 #tcp_nopush on;

 #keepalive_timeout 0;
 keepalive_timeout 65;

 gzip on;
 client_max_body_size 30m;
 gzip_min_length 1k;
 gzip_buffers 16 64k;
 gzip_http_version 1.1;
 gzip_comp_level 6;
 gzip_types text/plain application/x-javascript text/css application/xml application/javascript image/jpeg image/gif image/png image/webp;
 gzip_vary on;
 upstream achina.com {
 server 120.76.129.218:81;
 }
 upstream qgrani.com {
 server 120.76.129.218:8080;
 }

 server {
 listen 80;
 server_name www.achinastone.com achinastone.com;

 #charset koi8-r;

 #access_log logs/host.access.log main;

 location / {
  root html;
  index index.html index.htm;
 }
  # Other requests  
  location ~ .*$ {
  index index;
  proxy_pass http://achina.com;
  }

 #error_page 404  /404.html;

 # redirect server error pages to the static page /50x.html
 #
 error_page 500 502 503 504 /50x.html;
 location = /50x.html {
  root html;
 }

 # proxy the PHP scripts to Apache listening on 127.0.0.1:80
 #
 #location ~ \.php$ {
 # proxy_pass http://127.0.0.1;
 #}

 # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
 #
 #location ~ \.php$ {
 # root  html;
 # fastcgi_pass 127.0.0.1:9000;
 # fastcgi_index index.php;
 # fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
 # include fastcgi_params;
 #}

 # deny access to .htaccess files, if Apache's document root
 # concurs with nginx's one
 #
 #location ~ /\.ht {
 # deny all;
 #}
 }


 # another virtual host using mix of IP-, name-, and port-based configuration
 #
 server {
 listen 80;
 server_name www.qgranite.com qgranite.com;

 location / {
  root html;
  index index.html index.htm;
 }
  # All static requests are handled by nginx itself, served from the webapps directory
  location ~ \.(css|js|gif|jpg|jpeg|png|bmp|swf|eot|svg|ttf|woff|mp3|mp4|wav|wmv|flv|f4v)$ { 
  root apache-tomcat-8.0.9\webapps\ROOT; 
  expires 30d; 
 }
  # Other requests  
  location ~ .*$ {
  index index;
  proxy_pass http://qgrani.com;
  }
 }


 # HTTPS server
 #
 #server {
 # listen 443 ssl;
 # server_name localhost;

 # ssl_certificate cert.pem;
 # ssl_certificate_key cert.key;

 # ssl_session_cache shared:SSL:1m;
 # ssl_session_timeout 5m;

 # ssl_ciphers HIGH:!aNULL:!MD5;
 # ssl_prefer_server_ciphers on;

 # location / {
 # root html;
 # index index.html index.htm;
 # }
 #}

}

4. Then there is dynamic-static separation. Put plainly, it means splitting requests for data (dynamic) from requests for images and other files (static). Without this separation, Tomcat treats a request for an image like any other dynamic request, and handling a dynamic request costs more performance (as for why, I don't know the details). With nginx configuration we can achieve the separation.

My method is to place one of the Tomcat projects under the nginx root directory. With the configuration below, requests for static resources such as images, JS, and CSS are served from that one specified directory. Besides the performance savings, a further benefit is that these static resources no longer need to be kept in sync across all the load-balanced servers; they only need to exist in one place. The configuration is as follows.


# All static requests are handled by nginx itself, served from webapps/ROOT, with a 30-day expiry
  location ~ \.(css|js|gif|jpg|jpeg|png|bmp|swf|eot|svg|ttf|woff|mp3|mp4|wav|wmv|flv|f4v|json)$ { 
  root apache-tomcat-8.0.9-windows-x86-yipin-8081/apache-tomcat-8.0.9/webapps/ROOT; 
  expires 30d; 
 }

5. Since static resources are now read from this one directory, we have to consider how they get stored there, especially under load balancing. An image-upload request may be handled by any branch of the cluster: suppose the cluster has two servers, A and B. If A handles the upload, the image ends up on A; if B handles it, the image ends up on B. The static images on A and B thus fall out of sync, and (assuming we had not done dynamic-static separation) a later request for an image might fail to find it. Having done the separation in the previous step, the remaining problem is how to get images uploaded through A or B into the folder we serve static files from. Synchronizing them manually or programmatically is very troublesome, so my approach is to designate the Tomcat project on one server (the server where nginx is installed, i.e. the Tomcat project deployed under the nginx root directory) as solely responsible for uploads. All images are uploaded through that one project, which guarantees that the static library holds the complete set of images. I configured a dedicated cluster for this, as follows.


# Cluster 2, used for image uploads
 upstream xdxfile.com {
 server 119.10.52.28:8081; # File-upload requests go to this cluster
 }

Then, in the location blocks, I configure it like this:

 # Requests whose paths end in Att are handled by the http://xdxfile.com cluster
 location ~ ^/\w+Att{
  proxy_pass http://xdxfile.com;
 }

 # Requests whose paths end in Fill are handled by the http://xdxfile.com cluster
 location ~ ^/\w+Fill{
  proxy_pass http://xdxfile.com;
 }

Since I append an Att or Fill suffix to the name of every attachment-upload request, whenever nginx sees one of these suffixes it hands the request to the http://xdxfile.com cluster, i.e. 119.10.52.28:8081. For example, a hypothetical request path such as /productAtt matches the pattern ^/\w+Att.

6. After setting up load balancing, one problem we have to face is synchronizing in-memory data. Programs often keep some data in memory, the classic example being the session. So how do we share session data across the branches of the cluster? Here we need something new: Redis. I'll cover it in detail in the next article.
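Until then, one stopgap worth knowing (not used in my configuration above) is nginx's built-in ip_hash directive. It hashes the client address so each visitor keeps landing on the same backend and therefore keeps finding its own in-memory session:

# Hypothetical sketch: session stickiness via ip_hash; this pins clients to a backend, it does not share data.
upstream sticky_cluster {
 ip_hash;
 server 119.10.52.28:8081;
 server 119.10.52.28:8082;
}

This only ties each client to one branch; truly sharing session data across branches still requires an external store such as Redis.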

