How to use NGINX to mitigate DDoS attacks

  • 2020-05-15 03:30:13
  • OfStack

Preface

It is well known that DDoS attacks are so common that they are even considered an entry-level skill in hacker circles; they are also fierce enough to overwhelm an entire network.

DDoS attacks are distributed by nature and target bandwidth and services: layer-4 traffic attacks and layer-7 application attacks. The corresponding defense bottlenecks are bandwidth at layer 4 and the throughput of the architecture at layer 7. Against layer-7 application attacks we can still configure some defenses. For example, when Nginx sits at the front end, the http_limit_conn and http_limit_req modules are the main tools.

What is distributed denial of service?

DDoS (Distributed Denial of Service) means a distributed denial-of-service attack, in which the attacker uses a large number of "zombie" machines (a botnet) to send massive numbers of normal or abnormal requests to the target, exhausting the target host's resources or network resources so that the target cannot serve legitimate users. Typically, an attacker will try to saturate a system with so many connections and requests that it stops accepting new traffic or becomes too slow to use.

In other words: Lao Zhang's restaurant (the target) can seat 100 customers at a time. Lao Wang (the attacker) next door hires 200 people (the zombies) to occupy the tables without eating or drinking anything (abnormal requests). The restaurant fills up (resource exhaustion), the customers who actually want to eat cannot get in, and the restaurant cannot do business (a successful DDoS attack). So the question is: what should Lao Zhang do?

Kick them out, of course!


Application layer DDoS attack features

Application-layer (layer 7 / HTTP) DDoS attacks are carried out by software programs (bots) that can be tailored to exploit a particular system's vulnerabilities. For example, against a system that does not handle large numbers of concurrent connections well, simply opening many connections and keeping them active by periodically sending a small amount of traffic may exhaust the system's capacity for new connections. Other attacks take the form of sending a large number of requests or very large requests. Because these attacks are carried out by bots rather than actual users, an attacker can easily open a large number of connections and send a huge number of requests very quickly.

The following DDoS attack characteristics can be used to help mitigate these attacks (this is not an exhaustive list):

- Traffic usually comes from a fixed set of IP addresses belonging to the machines used to execute the attack. As a result, each IP address accounts for far more connections and requests than you would expect from a real user.

Note: do not assume that this traffic pattern always indicates a DDoS attack. A forward proxy can also create this pattern, because the proxy server's IP address is used as the client address for all requests from the real clients it serves. However, the number of connections and requests from a forward proxy is usually much lower than in a DDoS attack.

- Since the traffic is generated by bots and is meant to overwhelm the server, the traffic rate is much higher than a human user could generate.

- The User-Agent header is sometimes set to a non-standard value.

- The Referer header is sometimes set to a value that can be associated with the attack.

Use NGINX and NGINX Plus to defend against DDoS attacks

NGINX and NGINX Plus have many capabilities that, combined with the DDoS attack characteristics described above, make them an important part of a DDoS mitigation solution. These capabilities address a DDoS attack both by regulating incoming traffic and by controlling the traffic proxied to back-end servers.

Intrinsic protection of the NGINX event-driven architecture

NGINX is designed to be a "shock absorber" for your website or application. It has a non-blocking event-driven architecture that can handle a large number of requests without significantly increasing resource utilization.

New requests from the network do not interrupt the processing of ongoing requests by NGINX, which means that NGINX can protect your site or application from attacks using the techniques described below.

For more information about the underlying architecture, see Inside NGINX: how we design for performance and scale.

Limit request rate

You can limit the rate at which NGINX and NGINX Plus accept incoming requests to a value typical of real users. For example, you might decide that a real user accessing the login page can make only one request every 2 seconds. You can configure NGINX and NGINX Plus to allow a single client IP address to attempt a login only once every 2 seconds (equivalent to 30 requests per minute):


limit_req_zone $binary_remote_addr zone=one:10m rate=30r/m;

server {
    # ...
    location /login.html {
        limit_req zone=one;
        # ...
    }
}

The limit_req_zone directive configures a shared memory zone called one to store the request state for the specified key, in this case the client IP address ($binary_remote_addr). The limit_req directive in the location /login.html block references that shared memory zone.
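As an aside (this extension is my addition, not part of the original example), limit_req also accepts a burst parameter, which queues a short burst of excess requests instead of rejecting them immediately:

```nginx
# Permit bursts of up to 5 requests above the 30r/m rate;
# excess requests within the burst are queued, not rejected.
location /login.html {
    limit_req zone=one burst=5;
    # ...
}
```

Without burst, any request arriving faster than the configured rate is rejected with an error, which can be too strict for legitimate clients that occasionally send requests close together.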

For a detailed discussion of rate limiting, see Rate Limiting with NGINX and NGINX Plus on the blog.

Limit the number of connections

You can limit the number of connections that a single client IP address can open to a value appropriate for real users. For example, you can allow each client IP address to open no more than 10 connections to the /store area of your site:


limit_conn_zone $binary_remote_addr zone=addr:10m;

server {
    # ...
    location /store/ {
        limit_conn addr 10;
        # ...
    }
}

The limit_conn_zone directive configures a shared memory zone named addr to store the state for the specified key, which (as in the previous example) is the client IP address, $binary_remote_addr. The limit_conn directive in the location /store/ block references that shared memory zone and sets a maximum of 10 connections from each client IP address.

Close slow connections

You can close connections that write data too infrequently, which can indicate an attempt to keep connections open as long as possible (thereby reducing the server's ability to accept new connections). Slowloris is an example of this kind of attack. The client_body_timeout directive controls how long NGINX waits between writes of the client body, and the client_header_timeout directive controls how long NGINX waits between writes of client headers. The default for both directives is 60 seconds. This example configures NGINX to wait no more than 5 seconds between writes from the client, for either headers or body:


server {
    client_body_timeout 5s;
    client_header_timeout 5s;
    # ...
}
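Related timeout directives (not part of the original example; the values below are illustrative) can further shorten how long slow or idle clients hold server resources:

```nginx
server {
    send_timeout 10s;                 # max wait between two writes of the response to the client
    keepalive_timeout 15s;            # how long an idle keep-alive connection stays open
    reset_timedout_connection on;     # free resources immediately when a connection times out
    # ...
}
```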

Blacklist IP addresses

If you can identify the client IP addresses used in the attack, you can blacklist them with the deny directive so that NGINX and NGINX Plus do not accept their connections or requests. For example, if you determine that the attack comes from the address range 123.123.123.0 through 123.123.123.15:


location / {
    deny 123.123.123.0/28;
    # ...
}

Or, if you determine that the attack comes from the client IP addresses 123.123.123.3, 123.123.123.5, and 123.123.123.7:


location / { 
 deny 123.123.123.3; 
 deny 123.123.123.5; 
 deny 123.123.123.7; 
 # ... 
} 

Whitelist IP addresses

If you allow access to your site or application only from one or more specific sets or ranges of client IP addresses, you can use the allow and deny directives together so that only those addresses can access the site or application. For example, you can restrict access to addresses in a particular local network:


location / { 
 allow 192.168.1.0/24; 
 deny all; 
 # ... 
} 

Here, the deny all directive blocks all client IP addresses that are not within the range specified by the allow directive.

Use caching to smooth traffic spikes

You can configure NGINX and NGINX Plus to absorb much of the traffic spike caused by an attack by enabling caching and setting certain caching parameters to offload requests from the back end. Some useful settings are:

The updating parameter of the proxy_cache_use_stale directive tells NGINX that when it needs to fetch an update for a stale cached object, it should send only one update request, and should continue serving the stale object to clients that request it while the update is being received from the back-end server. This significantly reduces the number of requests to the back-end server when repeated requests for a file are part of the attack. The cache key defined by the proxy_cache_key directive usually consists of embedded variables (the default key, $scheme$proxy_host$request_uri, uses three variables). If the value includes the $query_string variable, an attack that sends random query strings can cause excessive caching. We recommend that you do not include the $query_string variable in the cache key unless you have a particular reason to.
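A minimal caching configuration along these lines might look like the following (the cache path, zone name, and backend upstream name are illustrative assumptions, not from the original article):

```nginx
# Cache storage location and zone name are assumptions for this sketch
proxy_cache_path /var/cache/nginx keys_zone=mycache:10m;

server {
    location / {
        proxy_cache mycache;
        # Serve stale content while a single request refreshes it
        proxy_cache_use_stale updating;
        # Deliberately omit $query_string so random query strings
        # from an attacker cannot inflate the cache
        proxy_cache_key $scheme$proxy_host$uri;
        proxy_pass http://backend;
    }
}
```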

Block requests

You can configure NGINX or NGINX Plus to block several kinds of requests:

- Requests to a specific URL that appears to be targeted by the attack
- Requests whose User-Agent header is set to a value that does not correspond to normal client traffic
- Requests whose Referer header is set to a value that can be associated with the attack
- Requests whose other headers have values that can be associated with the attack

For example, if you determine that the target of a DDoS attack is the URL /foo.php, you can block all requests for that page:


location /foo.php {
    deny all;
}

Alternatively, if you find that the DDoS attack requests have User-Agent header values containing foo or bar, you can block those requests:


location / {
    if ($http_user_agent ~* foo|bar) {
        return 403;
    }
    # ...
}

The $http_name variable references a request header; in the example above it is the User-Agent header ($http_user_agent). A similar approach can be used with other headers whose values help identify an attack.

Restrict connections to back-end servers

An NGINX or NGINX Plus instance can usually handle far more concurrent connections than the load-balanced back-end servers can. With NGINX Plus, you can limit the number of connections to each back-end server. For example, you can limit NGINX Plus to opening no more than 200 connections to each of the two back-end servers in the website upstream group:


upstream website { 
 server 192.168.100.1:80 max_conns=200; 
 server 192.168.100.2:80 max_conns=200; 
 queue 10 timeout=30s; 
} 

The max_conns parameter applied to each server specifies the maximum number of connections that NGINX Plus opens to it. The queue directive limits the number of requests that are queued when all servers in the upstream group have reached their connection limit, and the timeout parameter specifies how long a request is held in the queue.

Handle range-based attacks

One attack technique is to send requests whose Range header has a very large value, which can cause a buffer overflow. For a discussion of how to use NGINX and NGINX Plus to mitigate such attacks, see Using NGINX and NGINX Plus to Protect Against CVE-2015-1635.

Handling high load

DDoS attacks usually result in a high traffic load. For tips on tuning NGINX or NGINX Plus and the operating system so the system can handle higher loads, see Tuning NGINX for Performance.
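A few of the commonly tuned directives are sketched below (the values are illustrative assumptions; the right numbers depend on your hardware and workload):

```nginx
# Illustrative tuning values, not recommendations from the original article
worker_processes auto;          # spawn one worker process per CPU core
worker_rlimit_nofile 65535;     # raise the per-worker file-descriptor limit

events {
    worker_connections 10240;   # max simultaneous connections per worker
}
```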

Identify a DDoS attack

So far we have focused on how NGINX and NGINX Plus can help mitigate the effects of a DDoS attack. But how can NGINX or NGINX Plus help you spot one? The NGINX Plus Status module provides detailed metrics about the load-balanced traffic sent to your back-end servers, which you can use to find unusual traffic patterns. NGINX Plus comes with a status dashboard page that graphically presents the current state of your NGINX Plus system (see the example at demo.nginx.com). The same metrics are also available through an API, which you can use to feed a custom or third-party monitoring system, where you can perform historical trend analysis to detect abnormal patterns and trigger alerts.
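With open-source NGINX (this note is my addition; the article discusses the NGINX Plus Status module), basic connection and request counters are available from the stub_status module, which can serve as a crude attack signal when scraped by a monitoring system:

```nginx
# Expose basic counters (active connections, accepted connections, requests)
# to localhost only; requires NGINX built with --with-http_stub_status_module
location /nginx_status {
    stub_status;
    allow 127.0.0.1;
    deny all;
}
```

A sudden jump in active connections or requests per second in this output, relative to your baseline, is one of the abnormal traffic patterns described above.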

Reference

https://www.nginx.com/blog/mitigating-ddos-attacks-with-nginx-and-nginx-plus
