Nginx static file serving configuration and optimization in detail

  • 2020-05-17 07:45:13
  • OfStack

The root directory and index files

The root directive specifies the root directory that will be used to search for a file. To obtain the path of a requested file, NGINX appends the request URI to the path specified by the root directive. The directive can be placed at any level within the http {}, server {}, or location {} context. In the example below, the root directive is defined for a virtual server. It applies to all location {} blocks that do not include a root directive to explicitly redefine the root:


server {
  root /www/data;

  location / {
  }

  location /images/ {
  }

  location ~ \.(mp3|mp4) {
    root /www/media;
  }
}

Here, for URIs that start with /images/, NGINX searches for files in the /www/data/images/ directory on the file system. If a URI ends with the .mp3 or .mp4 extension, NGINX instead searches for the file in the /www/media/ directory, because that root is defined in the matching location block.

If a request ends with /, NGINX treats it as a request for a directory and tries to find an index file in that directory. The index directive defines the name of the index file (the default value is index.html). To continue the example, if the request URI is /images/some/path/, NGINX returns the file /www/data/images/some/path/index.html if it exists. If it does not, NGINX returns an HTTP 404 (Not Found) error by default. To configure NGINX to return an automatically generated directory listing instead, include the on parameter in the autoindex directive:


location /images/ {
  autoindex on;
}

You can list multiple file names in the index directive. NGINX searches for the files in the specified order and returns the first one it finds.


location / {
  index index.$geo.html index.htm index.html;
}

The $geo variable used here is a custom variable set by the geo directive. The value of the variable depends on the client's IP address.
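The geo block itself is not shown in the original article; the following minimal sketch shows one way such a variable could be defined (the address ranges and suffix values here are illustrative assumptions). Note that the geo directive belongs in the http {} context:


# Assumed example: map client address ranges to a suffix used in the index file name
geo $geo {
  default         generic;  # other clients are first tried against index.generic.html
  192.168.1.0/24  uk;       # clients in this range are first tried against index.uk.html
  10.1.0.0/16     de;       # clients in this range are first tried against index.de.html
}

With this definition, a request from an address in 10.1.0.0/16 is first checked against index.de.html and then falls back to index.htm and index.html, in the order given by the index directive.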

To return the index file, NGINX checks for its existence and then makes an internal redirect to the new URI obtained by appending the name of the index file to the base URI. The internal redirect results in a new search for a matching location and can end up in a different location, as in the following example:


location / {
  root /data;
  index index.html index.php;
}

location ~ \.php {
  fastcgi_pass localhost:8000;
  #...

}

Here, if the URI in a request is /path/, and /data/path/index.html does not exist but /data/path/index.php does, the internal redirect to /path/index.php is matched by the second location. As a result, the request is proxied.

Try several options

The try_files directive can be used to check whether a specified file or directory exists; if it does, NGINX makes an internal redirect, and if not, it returns a specified status code. For example, to check whether the file corresponding to the request URI exists, use the try_files directive with the $uri variable, as shown below:


server {
  root /www/data;

  location /images/ {
    try_files $uri /images/default.gif;
  }
}

The file is specified as a URI, which is processed with the root or alias directive set in the context of the current location or virtual server. In this case, if the file corresponding to the original URI does not exist, NGINX makes an internal redirect to the URI specified by the last parameter and returns /www/data/images/default.gif.
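As an aside, the alias directive mentioned above differs from root in that the location prefix is replaced rather than appended. A minimal sketch with an assumed directory, for illustration only:


# Assumed example: a request for /images/logo.png is served from /www/media/pictures/logo.png,
# because alias replaces the /images/ prefix instead of appending the full URI to the path
location /images/ {
  alias /www/media/pictures/;
}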

The last parameter can also be a status code (preceded by an equals sign) or the name of a location. In the following example, a 404 error is returned if none of the parameters of the try_files directive resolves to an existing file or directory:


location / {
  try_files $uri $uri/ $uri.html =404;
}

In the next example, if neither the original URI nor the URI with an appended trailing slash resolves to an existing file or directory, the request is redirected to the named location, which passes it to a proxied server:


location / {
  try_files $uri $uri/ @backend;
}

location @backend {
  proxy_pass http://backend.example.com;
}

For more information, watch the content caching webinar to learn how to significantly improve web site performance and learn more about the caching capabilities of NGINX.

Optimize the performance of serving content

Loading speed is a crucial factor in serving any content. Minor optimizations to the NGINX configuration can boost efficiency and help achieve optimal performance.

Enable sendfile

By default, NGINX handles the file transfer itself and copies the file into a buffer before sending it. Enabling the sendfile directive eliminates the step of copying the data into the buffer and allows data to be copied directly from one file descriptor to another. In addition, to prevent a single fast connection from completely occupying the worker process, you can use the sendfile_max_chunk directive to limit the amount of data transferred in a single sendfile() call (in this example, 1 MB):


location /mp3 {
  sendfile      on;
  sendfile_max_chunk 1m;
  #...

}

Enable tcp_nopush

Use the tcp_nopush directive together with the sendfile on; directive. This enables NGINX to send HTTP response headers in one packet right after sendfile() obtains a chunk of data:


location /mp3 {
  sendfile  on;
  tcp_nopush on;
  #...

}

Enable tcp_nodelay

The tcp_nodelay directive allows overriding Nagle's algorithm, which was originally designed to solve the problem of small packets on slow networks. The algorithm consolidates many small packets into a larger one and sends it with a delay of 200 milliseconds. Today, when serving large static files, the data can be sent immediately regardless of packet size. The delay also affects interactive applications (ssh, online games, online trading, and so on). By default, the tcp_nodelay directive is set to on, which means that Nagle's algorithm is disabled. Use this directive only for keepalive connections:


location /mp3 {
  tcp_nodelay    on;
  keepalive_timeout 65;
  #...
  
}

Optimize the backlog queue

One important factor is how quickly NGINX can handle incoming connections. The general rule is that when a connection is established, it is placed in the "listen" queue of the listening socket. Under normal load the queue is small or non-existent, but under high load it can grow dramatically, resulting in uneven performance, dropped connections, and increased latency.

Displaying the backlog queue: use the command netstat -Lan to show the current listen queue. The output might look like the following, which shows 10 unaccepted connections in the listen queue on port 80 against a configured maximum of 128 queued connections. This is normal.


Current listen queue sizes (qlen/incqlen/maxqlen)
Listen          Local Address
10/0/128        *.80

In contrast, in the following output the number of unaccepted connections (192) exceeds the limit of 128. This is common when a web site experiences heavy traffic. To achieve optimal performance, you need to increase the maximum number of connections that can be queued for acceptance by NGINX in both your operating system and the NGINX configuration.


Current listen queue sizes (qlen/incqlen/maxqlen)
Listen          Local Address
192/0/128       *.80

Adjust the operating system

Increase the value of the net.core.somaxconn kernel parameter from its default (128) to a value high enough to cope with a large burst of traffic. In this example, it is increased to 4096.

On FreeBSD, run sudo sysctl kern.ipc.somaxconn=4096. On Linux: 1. Run sudo sysctl -w net.core.somaxconn=4096. 2. Add net.core.somaxconn = 4096 to the /etc/sysctl.conf file so the setting persists across reboots.

Adjust NGINX

If you set the somaxconn kernel parameter to a value greater than 512, add the backlog parameter to the NGINX listen directive to match the new value:


server {
  listen 80 backlog=4096;
  # The rest of the server configuration
}

© This article is translated from Nginx Serving Static Content, with some semantic adjustments.

