Configuring an Nginx server as a reverse proxy for intranet URL forwarding

  • 2020-05-10 23:29:55
  • OfStack

Scenario
The HTTP services of several servers on the company intranet need to be exposed through the company's single static public IP. With plain router port mapping, only one intranet server can be mapped to port 80 of the public address; every other server has to be mapped to a non-standard port, so visitors must append the port number to the domain name, which is inconvenient. On top of that, the office router supports at most 20 port mappings, which will not be enough as more services are added.
Brother K therefore suggested building an nginx reverse proxy server: map port 80 of the reverse proxy to port 80 of the public IP, point every domain name at the company's public IP, and let all inbound HTTP requests land on the reverse proxy, which then forwards each request to a different intranet host and port according to the requested domain name. The effect is "forward to the right server and port based on the domain name", whereas the router's port mapping can only do "forward to the right server and port based on the destination port".
Knowledge involved: compiling and installing nginx, basic nginx reverse proxy configuration, router port mapping, and general knowledge of public domain names.
The goal of this experiment: entering xxx123.tk in a browser reaches port 3000 on the intranet machine 192.168.10.38, and entering xxx456.tk reaches port 80 on the intranet machine 192.168.10.40.
Configuration steps
Server: Ubuntu 12.04


# Update the package repositories

apt-get update -y
apt-get install wget -y
# Download nginx and related packages

pcre is needed to build the rewrite module, and zlib provides gzip support. The nginx version used here is a bit old because I also want to experiment with upgrading nginx later; feel free to install a newer release.


cd /usr/local/src
 wget ftp://ftp.csx.cam.ac.uk/pub/software/programming/pcre/pcre-8.33.tar.gz
 wget http://zlib.net/zlib-1.2.8.tar.gz
 wget http://nginx.org/download/nginx-1.4.2.tar.gz
 tar xf pcre-8.33.tar.gz
 tar xf zlib-1.2.8.tar.gz
 tar xf nginx-1.4.2.tar.gz
# Install build environment 

 apt-get install build-essential libtool -y
# Create the nginx user

That is, an unprivileged user with no login shell and no home directory:


useradd -s /bin/false -r -M -d /nonexistent www
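A quick way to confirm the account was created as intended (no login shell, no home directory) is to look at its passwd entry; the UID/GID will vary per system:

grep '^www:' /etc/passwd
# expected output along the lines of: www:x:998:998::/nonexistent:/bin/false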
# Start compiling and installing 

 cd nginx-1.4.2
 ./configure --with-pcre=/usr/local/src/pcre-8.33 --with-zlib=/usr/local/src/zlib-1.2.8 --user=www --group=www \
  --with-http_stub_status_module --with-http_ssl_module --with-http_realip_module
 make
 make install
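To confirm the build picked up the expected modules (stub_status, ssl, realip), you can ask the freshly installed binary for its version and configure arguments:

/usr/local/nginx/sbin/nginx -V  # prints the nginx version plus the ./configure arguments used for this build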
# Give the www user ownership of the install directory

chown -R www:www /usr/local/nginx
# Modify the configuration file 
vim /usr/local/nginx/conf/nginx.conf


user www www;
worker_processes 1;
error_log logs/error.log;
pid logs/nginx.pid;
worker_rlimit_nofile 65535;
events {
  use epoll;
  worker_connections 65535;
}
http {
  include mime.types;
  default_type application/octet-stream;
  include /usr/local/nginx/conf/reverse-proxy.conf;
  sendfile on;
  keepalive_timeout 65;
  gzip on;
  client_max_body_size 50m; # largest request body nginx will accept from a client
  client_body_buffer_size 256k;
  client_header_timeout 3m;
  client_body_timeout 3m;
  send_timeout 3m;
  proxy_connect_timeout 300s; # timeout for establishing a connection to the back-end server
  proxy_read_timeout 300s; # how long to wait for the back end to send a response once connected
  proxy_send_timeout 300s; # how long to wait while transmitting the request to the back end
  proxy_buffer_size 64k; # buffer for the first part of the back-end response (the response headers)
  proxy_buffers 4 32k; # buffers for the rest of the response; size them to roughly an average page
  proxy_busy_buffers_size 64k; # buffers that may be busy sending to the client (usually twice proxy_buffer_size)
  proxy_temp_file_write_size 64k; # amount written to a temporary file at a time when a response is buffered to disk
  proxy_ignore_client_abort on; # keep the request to the back end going even if the client closes the connection early
  server {
    listen 80;
    server_name localhost;
    location / {
      root html;
      index index.html index.htm;
    }
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
      root html;
    }
  }
}

Edit the reverse proxy server configuration file:


vim /usr/local/nginx/conf/reverse-proxy.conf

server
{
  listen 80;
  server_name xxx123.tk;
  location / {
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://192.168.10.38:3000;
  }
  access_log logs/xxx123.tk_access.log;
}
 
server
{
  listen 80;
  server_name xxx456.tk;
  location / {
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://192.168.10.40:80;
  }
  access_log logs/xxx456.tk_access.log;
}
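Before pointing any DNS at the box, it is worth syntax-checking both configuration files and then starting (or later reloading) nginx. A minimal sequence, assuming the default install prefix /usr/local/nginx used above:

/usr/local/nginx/sbin/nginx -t        # test nginx.conf together with the included reverse-proxy.conf
/usr/local/nginx/sbin/nginx           # first start
/usr/local/nginx/sbin/nginx -s reload # pick up later configuration changes without dropping connections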

Then point the xxx123.tk and xxx456.tk domains at the company's static public IP. After that, typing xxx123.tk into a browser reaches port 3000 on intranet machine 192.168.10.38, and typing xxx456.tk reaches port 80 on intranet machine 192.168.10.40.
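If the public DNS records have not propagated yet (or you want to test from inside the LAN), you can exercise the name-based forwarding directly by overriding the Host header with curl; replace <proxy-ip> with the reverse proxy's actual address, which is not specified here:

curl -H "Host: xxx123.tk" http://<proxy-ip>/  # should return the app on 192.168.10.38:3000
curl -H "Host: xxx456.tk" http://<proxy-ip>/  # should return the site on 192.168.10.40:80
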
If you also want to load-balance the back-end machines, a configuration like the following distributes requests for nagios.xxx123.tk across the intranet machines 192.168.0.131 and 192.168.0.132.


upstream monitor_server {
  server 192.168.0.131:80;
  server 192.168.0.132:80;
}
 
server
{
  listen 80;
  server_name nagios.xxx123.tk;
  location / {
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; 
    proxy_pass http://monitor_server;
  }
  access_log logs/nagios.xxx123.tk_access.log;
}
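For finer control over how traffic is spread, the servers in an upstream block accept extra parameters. The following is only an illustrative sketch (the third address is made up for the example); weight, max_fails/fail_timeout and backup are standard nginx upstream options:

upstream monitor_server {
  server 192.168.0.131:80 weight=2 max_fails=3 fail_timeout=30s; # receives twice the traffic; marked down after 3 failed attempts
  server 192.168.0.132:80 weight=1;
  server 192.168.0.133:80 backup;                                # hypothetical spare, used only when the others are down
}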

I won't go into load balancing or caching here; this is just the simple "forward by domain name" setup.
Also, because the HTTP requests ultimately reach the back-end machines through the reverse proxy, the access logs on those back-end machines record the reverse proxy's IP instead of the visitor's real IP.
To log the real client IP we need to change the log format on the back-end machines. Here we assume the back end is also running nginx.
Add this to the back-end configuration file:


log_format access '$http_x_real_ip - $remote_user [$time_local] "$request" '
                  '$status $body_bytes_sent "$http_referer" '
                  '"$http_user_agent" "$http_x_forwarded_for"';
 
access_log logs/access.log access;

Take a look at the original log format:


#log_format main '$remote_addr - $remote_user [$time_local] "$request" '
# '$status $body_bytes_sent "$http_referer" '
# '"$http_user_agent" "$http_x_forwarded_for"';
 
#access_log logs/access.log main;

Compare the two: $remote_addr (which now always holds the proxy's address) is replaced by $http_x_real_ip, the header set by the proxy that carries the visitor's real IP.
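The build above included --with-http_realip_module; if the back-end nginx is built with that module too, an alternative is to let the back end rewrite $remote_addr itself and keep the stock log format. A sketch, assuming the reverse proxy sits somewhere in 192.168.10.0/24 (adjust to the proxy's real address):

# in the back-end nginx, inside the http{} or server{} block
set_real_ip_from 192.168.10.0/24; # trust the X-Real-IP header only when the request comes from the proxy
real_ip_header X-Real-IP;         # use that header's value as $remote_addr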
 
Problems encountered
 
The block below was not configured at first, and an occasional "504 Gateway Time-out" appeared during visits. Because it only happened now and then, it was hard to troubleshoot.


  proxy_connect_timeout 300s;
  proxy_read_timeout 300s;
  proxy_send_timeout 300s;
  proxy_buffer_size 64k;
  proxy_buffers 4 32k;
  proxy_busy_buffers_size 64k;
  proxy_temp_file_write_size 64k;
  proxy_ignore_client_abort on;

Error log:

. upstream timed out (110: Connection timed out) while reading response header from upstream, client:... (omitted later)
The log points to a connection timeout. After some searching online, my guess was that the back-end server was taking too long to respond. Following the principle of "bold hypothesis, careful verification", I tried to reproduce the error on purpose: I set the proxy timeout thresholds very low and watched for 504s. Sure enough, with proxy_read_timeout set to 1ms, every request returned 504. Raising the value again and adding the configuration above solved the problem.
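When chasing intermittent 504s like this, it helps to watch the proxy's error log while reproducing the problem; with the install prefix used here the log lives under /usr/local/nginx/logs:

tail -f /usr/local/nginx/logs/error.log | grep 'upstream timed out'  # watch for back-end timeouts as they happen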

PS: about domain name forwarding

So-called domain name URL forwarding means that visitors to your domain are sent on to another network address that you specify, via settings on the server. Address redirection (also called "URL forwarding") points one domain name at another existing site; it is typically used when the target site's original domain name or URL is long or hard to remember.
For a successfully registered domain, setting up or cancelling URL forwarding for the first time normally takes effect within 24-48 hours. For a domain whose URL forwarding is already working, changing the forwarding target normally takes effect within 1-2 hours.
URL forwarding without path hiding: for example, http://b.com/ points to http://a.com/xxx/ (any directory). When you type http://b.com/ into the browser's address bar and press Enter, the address shown changes from the http://b.com/ you typed to the real target address http://a.com/xxx/.
URL forwarding with path hiding: same example as above, but the address shown in the browser's address bar stays http://b.com/ while the content actually served comes from http://a.com/xxx/.
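In nginx terms, forwarding without path hiding is just an HTTP redirect: the server tells the browser the new address, so the address bar changes. A minimal sketch for the b.com to a.com/xxx/ example above (forwarding with path hiding is what the proxy_pass configurations earlier in this article do, keeping the original domain in the address bar):

server {
  listen 80;
  server_name b.com;
  return 301 http://a.com/xxx$request_uri; # visible redirect: the browser's address bar switches to a.com
}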

