Using Nginx as a reverse proxy for Node.js

  • 2020-05-15 03:30:51
  • OfStack

preface

One of the company's projects uses Node.js to render the front end on the server and return the rendered HTML to the browser, which solves the SEO problem of a single-page application. When the project is deployed, Nginx is used as a reverse proxy in front of Node.js. The specific steps are as follows:

(The installation and basic configuration of Nginx and Node.js are skipped here.)
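
For context, the Node.js service that Nginx proxies to in the examples below listens on 127.0.0.1:3000. A minimal sketch of such a service follows, assuming a plain http server in an app.js file (the real project does server-side rendering, which is omitted here; the port simply matches the proxy_pass target used later):


// app.js - a minimal Node.js service listening on the port Nginx proxies to.
// (Port 3000 matches the proxy_pass target below; everything else is illustrative.)
const http = require('http');

const server = http.createServer((req, res) => {
  // In the real project this is where server-side rendering happens;
  // here a placeholder page is returned instead.
  res.writeHead(200, { 'Content-Type': 'text/html; charset=utf-8' });
  res.end('<h1>Hello from Node.js behind Nginx</h1>');
});

// Bind to 127.0.0.1 so the service is only reachable through the Nginx proxy.
server.listen(3000, '127.0.0.1', () => {
  console.log('Node.js service listening on http://127.0.0.1:3000');
});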

First, enable the following configuration in the http block of the nginx.conf file:


http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;

    include      /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Load the per-domain configuration files
    include /etc/nginx/conf.d/*.conf;
}

Then place each domain's configuration file in the /etc/nginx/conf.d/ directory; the file name must end with the .conf extension.

1. The first way, the simple one, without load balancing:


server {
    listen 80;
    server_name localhost;
    root /xxx/xxx/hxxydexx/;

    #set $my_server_name $scheme://$server_name;

    #if ( $my_server_name != https://$server_name ) {
    #    rewrite ^ https://$server_name$request_uri? permanent;
    #}

    error_log /var/log/nginx/hyde_error.log error;
    access_log /var/log/nginx/hyde_access.log main;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-Nginx-Proxy true;
        proxy_http_version 1.1;
        proxy_set_header Connection "";

        # If load balancing is not needed, there is no need to define an upstream block;
        # proxy directly to the Node.js service.
        proxy_pass http://127.0.0.1:3000;
    }

    error_page 404 /404.html;
    location = /xxx/xxx/40x.html {
    }

    error_page 500 502 503 504 /50x.html;
    location = /xxx/xxx/50x.html {
    }
}
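
With this configuration all requests reach Node.js from the proxy, so req.socket.remoteAddress is always 127.0.0.1; the real client address has to be read from the X-Real-IP or X-Forwarded-For headers set above. A minimal sketch of doing that in a plain Node.js http handler (the handler itself is illustrative):


// Reading the client IP behind the Nginx reverse proxy.
// x-real-ip and x-forwarded-for are set by the proxy_set_header directives above.
const http = require('http');

http.createServer((req, res) => {
  // Prefer X-Real-IP, fall back to the first entry of X-Forwarded-For,
  // and finally to the socket address (which is the proxy itself).
  const clientIp =
    req.headers['x-real-ip'] ||
    (req.headers['x-forwarded-for'] || '').split(',')[0].trim() ||
    req.socket.remoteAddress;

  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Client IP: ' + clientIp);
}).listen(3000, '127.0.0.1');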

2. The second way, taking load balancing into account:


upstream node {
    server 127.0.0.1:3000;
}
server {
    listen 80;
    server_name localhost;
    root /xxx/xxx/hxxydexx/;

    #set $my_server_name $scheme://$server_name;

    #if ( $my_server_name != https://$server_name ) {
    #    rewrite ^ https://$server_name$request_uri? permanent;
    #}

    error_log /var/log/nginx/hyde_error.log error;
    access_log /var/log/nginx/hyde_access.log main;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-Nginx-Proxy true;
        proxy_http_version 1.1;
        proxy_set_header Connection "";

        # Proxy to the upstream group defined above
        proxy_pass http://node;
    }

    error_page 404 /404.html;
    location = /xxx/xxx/40x.html {
    }

    error_page 500 502 503 504 /50x.html;
    location = /xxx/xxx/50x.html {
    }
}

You can then check the configuration and restart Nginx, or reload the configuration without restarting. The commands are as follows:


# Check whether the syntax of the nginx configuration file is correct
nginx -t

# Restart nginx
service nginx restart

# Reload the configuration file without restarting
nginx -s reload

Notes:

The following error may appear during the steps above:


events.js:72
 throw er; // Unhandled 'error' event
   ^
Error: listen EADDRINUSE
 at errnoException (net.js:884:11)
 at Server._listen2 (net.js:1022:14)
 at listen (net.js:1044:10)
 at Server.listen (net.js:1110:5)
 at Object.<anonymous> (folderName/app.js:33:24)
 at Module._compile (module.js:456:26)
 at Object.Module._extensions..js (module.js:474:10)
 at Module.load (module.js:356:32)
 at Function.Module._load (module.js:312:12)
 at Function.Module.runMain (module.js:497:10)

This error is caused by the Node.js service being started more than once, so the port it listens on is already occupied. If this happens, you can manage the service with the Node.js process manager pm2, or use netstat -anop to find out which process occupies the port, then kill it and restart the service.
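
If you prefer the service to report this situation cleanly instead of crashing with an unhandled 'error' event, you can handle the error explicitly. A minimal sketch, assuming the same service on port 3000:


// Handle EADDRINUSE explicitly instead of letting the 'error' event go unhandled.
const http = require('http');

const server = http.createServer((req, res) => res.end('ok'));

server.on('error', (err) => {
  if (err.code === 'EADDRINUSE') {
    // Another process (often a previous instance of this service) already
    // occupies the port; print a hint and exit instead of throwing.
    console.error('Port 3000 is already in use. Kill the old process or change the port.');
    process.exit(1);
  }
  throw err;
});

server.listen(3000, '127.0.0.1');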

Attached below are the load balancing policies of Nginx:

Round robin (the default)

Each request is assigned to a different backend server one by one in chronological order; if a backend server goes down, it is automatically removed from the rotation.


upstream backserver { 
 server 192.168.0.14; 
 server 192.168.0.15; 
} 

Weighted round robin (weight)

Specifies the round-robin probability: the weight is proportional to the access ratio. This is used when the performance of the backend servers is uneven.


upstream backserver {
 server 192.168.0.14 weight=3;
 server 192.168.0.15 weight=7;
}

IP binding ip_hash

Each request is assigned according to the hash of the client IP, so that each visitor always reaches the same backend server, which solves the session persistence problem.


upstream backserver { 
 ip_hash; 
 server 192.168.0.14:88; 
 server 192.168.0.15:80; 
} 

fair (third party)

Requests are assigned according to the response time of the backend servers, with priority given to those with shorter response times.


upstream backserver { 
 server 192.168.0.14:88; 
 server 192.168.0.15:80;
 fair; 
} 

url_hash (third party)

Requests are assigned according to the hash of the requested URL, so that each URL is directed to the same backend server, which is more efficient when the backend servers use caching.


upstream backserver {
 server squid1:3128;
 server squid2:3128;
 hash $request_uri;
 hash_method crc32;
}

conclusion

That covers putting Nginx in front of a server-rendered Node.js application as a reverse proxy, both with and without an upstream block, together with the common Nginx load balancing policies.

