Setting up a TCP proxy server with nginx

  • 2020-05-14 05:32:24
  • OfStack

Not only can nginx act as an HTTP proxy server, it can also easily be set up as a TCP proxy server.

First, let's take a look at the latest development build.

1. Install


> wget http://nginx.org/download/nginx-1.9.0.tar.gz
> tar zxvf nginx-1.9.0.tar.gz

Version 1.9.0+
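
The stream module is not compiled in by default, so when building from the source downloaded above it has to be enabled at configure time. A minimal build sketch (default install prefix assumed):


> cd nginx-1.9.0
> ./configure --with-stream
> make
> make install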

2. Configuration


worker_processes auto;
error_log /var/log/nginx/error.log info;
stream {
  upstream backend {
    # consistent hashing on the client address
    hash $remote_addr consistent;
    server backend1.example.com:12345 weight=5;
    server 127.0.0.1:12345      max_fails=3 fail_timeout=30s;
    server unix:/tmp/backend3;
  }
  server {
    listen 12345;
    proxy_connect_timeout 1s;
    proxy_timeout 3s;
    proxy_pass backend;
  }
  server {
    # IPv6 loopback listener forwarding to a unix socket
    listen [::1]:12345;
    proxy_pass unix:/tmp/stream.socket;
  }
}
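
A quick way to check the configuration and exercise the proxy, assuming the upstream servers above have been replaced with a reachable back end (the port is the one from the example):


> nginx -t              # verify the configuration syntax
> nginx -s reload       # apply the new configuration
> nc 127.0.0.1 12345    # open a TCP connection through the proxy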

3. Notes
At the time of writing, nginx 1.9 is a development release: the stream module is not yet available in the stable branch, but it will be integrated into the next stable release. If you only use the HTTP proxy as a simple forwarder, it is worth considering a switch to the TCP proxy once it is available, since a plain TCP proxy performs better for that case.

2. How to do it with older nginx versions

With older nginx versions, TCP proxying is provided by the third-party nginx_tcp_proxy_module, which also monitors the status of the back-end hosts. The module consists of: ngx_tcp_module, ngx_tcp_core_module, ngx_tcp_upstream_module, ngx_tcp_proxy_module and ngx_tcp_upstream_hash_module.
1. Install


# wget http://nginx.org/download/nginx-1.4.4.tar.gz
# tar zxvf nginx-1.4.4.tar.gz
# cd nginx-1.4.4
# ./configure --add-module=/path/to/nginx_tcp_proxy_module
# make
# make install
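
The configure line above assumes the module sources have already been downloaded to /path/to/nginx_tcp_proxy_module. A rough sketch of obtaining them; the clone URL and the tcp.patch step follow the module's README, so check it for the exact instructions for your nginx version:


# git clone https://github.com/yaoweibin/nginx_tcp_proxy_module.git
# cd nginx-1.4.4
# patch -p1 < ../nginx_tcp_proxy_module/tcp.patch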

2. Configuration


http {
  server {
    listen 80;
    location /status {
      # status page for the back-end health checks
      check_status;
    }
  }
}
tcp {
  upstream cluster_www_ttlsa_com {
    # simple round-robin
    server 127.0.0.1:1234;
    check interval=3000 rise=2 fall=5 timeout=1000;
    #check interval=3000 rise=2 fall=5 timeout=1000 type=ssl_hello;
    #check interval=3000 rise=2 fall=5 timeout=1000 type=http;
    #check_http_send "GET / HTTP/1.0\r\n\r\n";
    #check_http_expect_alive http_2xx http_3xx;
  }
  server {
    listen 8888;
    proxy_pass cluster_www_ttlsa_com;
  }
}
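
After reloading nginx, the back-end health page exposed by check_status in the http block can be queried; host and port come from the example config above:


# curl http://127.0.0.1/status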

One problem with this setup is that TCP connections get dropped silently: when the back-end server closes a connection, the client does not notice immediately. Only when nginx runs its check rule and decides the back-end connection is dead does it close the connection to the client.

3. Configuration to keep connections alive


http {
  server {
    listen 80;
    location /status {
      check_status;
    }
  }
}
tcp {
  timeout 1d;
  proxy_read_timeout 10d;
  proxy_send_timeout 10d;
  proxy_connect_timeout 30;
  upstream cluster_www_ttlsa_com {
    # simple round-robin
    server 127.0.0.1:1234;
    check interval=3000 rise=2 fall=5 timeout=1000;
    #check interval=3000 rise=2 fall=5 timeout=1000 type=ssl_hello;
    #check interval=3000 rise=2 fall=5 timeout=1000 type=http;
    #check_http_send "GET / HTTP/1.0\r\n\r\n";
    #check_http_expect_alive http_2xx http_3xx;
  }
  server {
    listen 8888;
    proxy_pass cluster_www_ttlsa_com;
    so_keepalive on;
    tcp_nodelay on;
  }
}
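
Note that so_keepalive only enables TCP keepalive (SO_KEEPALIVE) on the sockets; how quickly a dead peer is detected is governed by the kernel's keepalive settings. On Linux the defaults are quite long, and they can be lowered via sysctl if needed; the values below are only an illustration:


# sysctl -w net.ipv4.tcp_keepalive_time=600
# sysctl -w net.ipv4.tcp_keepalive_intvl=30
# sysctl -w net.ipv4.tcp_keepalive_probes=5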

For details on the nginx_tcp_proxy_module module, see its documentation: http://yaoweibin.github.io/nginx_tcp_proxy_module/README.html

