How to use Nginx to prevent a domain name from being maliciously resolved to your IP address

  • 2021-08-17 01:31:11
  • OfStack

Purpose of using Nginx

I use an Alibaba Cloud ECS server; first, some background on why I turned to Nginx.

After an ECS instance is initialized, it is assigned a public IP address. By default, HTTP requests to that IP go to port 80, so any service listening on port 80 can be reached directly via the IP address.

If a domain name is resolved to this IP, the service on port 80 can likewise be accessed directly through that domain name.

This raises a problem: anyone can point their own domain name at your IP address, and thereby serve your ECS-hosted content under their domain. As for the motive, this kind of attack is quite brazen. Presumably the goal is to park domain names on a working site and later resell them (just a guess; feel free to share other theories).

There are several ways to block this kind of attack. Judging from answers found online, configuring Nginx is the most convenient and quickest.

The general idea is as follows: the web service listens on a non-80 port (so it cannot be reached directly via the IP address), and Nginx sits in front as a reverse proxy on port 80, forwarding requests for the legitimate domain name to the internal port.

Result: after DNS resolution, the site is reachable through your own domain name, which Nginx forwards to IP + port under the hood. Requests arriving under any other domain name match no configured server block, so they are intercepted.
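The idea can be sketched with two server blocks (a minimal illustration only; `127.0.0.1:8080` is an assumed backend address, and the author's full nginx.conf is given further down):

```nginx
# Catch-all: any Host header that matches no other server block lands here
server {
    listen      80 default_server;
    server_name _;
    return      444;   # nginx-specific: close the connection without a response
}

# The legitimate domain is reverse-proxied to the real service on a non-80 port
server {
    listen      80;
    server_name www.zkrun.top;
    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}
```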

Nginx has many other use cases, such as reverse proxying and load balancing; preventing malicious resolution is only one of them.

More Nginx-related notes may follow in the future. But code is only a tool; technology produces value only when it solves a real problem, otherwise it is mere armchair strategy.

I once read an article about two developers discussing technology choices. One of them chose the unpopular Lua; the other could not understand why she would not pick a popular technology with better performance and a better development experience. Her answer was simply: it solves our problem.

That gave me pause. I rode the microservices wave in 2019 and picked up many new technologies and buzzwords, which felt very rewarding at the time, yet applying them to real project development proved difficult. Are high concurrency and microservices tools for solving practical problems, or just something to show off for job hunting? There is nothing wrong with learning, but before learning I now ask myself: will I use the technology, or be bound by it?

After all that rambling, here are the common Nginx commands in a Linux environment and my configuration file (nginx.conf).

List of commonly used commands


# Install nginx (CentOS)
yum install nginx

# Enable / disable start on boot
systemctl enable nginx
systemctl disable nginx

# Check nginx status
systemctl status nginx

# Start, stop, restart
systemctl start nginx
systemctl stop nginx
systemctl restart nginx

# Test the configuration, then reload it without downtime
nginx -t
systemctl reload nginx

# Default location of the configuration files
# /etc/nginx — main configuration file: nginx.conf

Configuration to prevent malicious resolution


 server {
      listen    80 default_server;   # catch-all for any Host not matched by another server block
      server_name _;
      access_log  off;
      return    444;                 # nginx-specific code: close the connection without responding
    }

# For more information on configuration, see:
#  * Official English Documentation: http://nginx.org/en/docs/
#  * Official Russian Documentation: http://nginx.org/ru/docs/

user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/doc/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
  worker_connections 1024;
}

http {
  log_format main '$remote_addr - $remote_user [$time_local] "$request" '
           '$status $body_bytes_sent "$http_referer" '
           '"$http_user_agent" "$http_x_forwarded_for"';

  access_log /var/log/nginx/access.log main;

  sendfile      on;
  tcp_nopush     on;
  tcp_nodelay     on;
  keepalive_timeout  65;
  types_hash_max_size 2048;

  include       /etc/nginx/mime.types;
  default_type    application/octet-stream;

  # Load modular configuration files from the /etc/nginx/conf.d directory.
  # See http://nginx.org/en/docs/ngx_core_module.html#include
  # for more information.
  include /etc/nginx/conf.d/*.conf;
    server {
      listen    80 default_server;
      server_name _;
      access_log  off;
      return    444;
    }
  server {
    listen    80;
    server_name www.zkrun.top;

    location / {
        proxy_pass http://www.zkrun.top:8080;
    }

    error_page 404 /404.html;
      location = /404.html {
    }

    error_page 500 502 503 504 /50x.html;
      location = /50x.html {
    }
  }

# Settings for a TLS enabled server.
#
#  server {
#    listen    443 ssl http2 default_server;
#    listen    [::]:443 ssl http2 default_server;
#    server_name _;
#    root     /usr/share/nginx/html;
#
#    ssl_certificate "/etc/pki/nginx/server.crt";
#    ssl_certificate_key "/etc/pki/nginx/private/server.key";
#    ssl_session_cache shared:SSL:1m;
#    ssl_session_timeout 10m;
#    ssl_ciphers HIGH:!aNULL:!MD5;
#    ssl_prefer_server_ciphers on;
#
#    # Load configuration files for the default server block.
#    include /etc/nginx/default.d/*.conf;
#
#    location / {
#    }
#
#    error_page 404 /404.html;
#      location = /40x.html {
#    }
#
#    error_page 500 502 503 504 /50x.html;
#      location = /50x.html {
#    }
#  }
}
