Nginx configuration file (nginx.conf) details (summary)

  • 2020-05-12 06:59:58
  • OfStack

I often get basic questions from new users, so I recently sorted out the Nginx configuration file, nginx.conf. The configuration details are as follows:


user nginx nginx ;

The user and group that the Nginx worker processes run as. Leave it unspecified on Windows.


worker_processes 8;

Number of worker processes. Adjust according to the hardware; it is usually equal to the number of CPUs or twice the number of CPUs.


error_log logs/error.log; 

error_log logs/error.log notice; 

error_log logs/error.log info; 

Error log: storage path, optionally followed by a log level such as notice or info.


pid logs/nginx.pid;

pid (process identifier): storage path.


worker_rlimit_nofile 204800;

Specifies the maximum number of file descriptors a process can open:

This directive sets the maximum number of file descriptors an nginx process may open. The theoretical value would be the maximum number of open files (ulimit -n) divided by the number of nginx processes, but nginx does not distribute requests evenly, so it is best to keep the value consistent with ulimit -n.

With the number of open files raised to 65535 on a Linux 2.6 kernel, worker_rlimit_nofile should accordingly be set to 65535.

This is because nginx does not schedule requests to processes in a perfectly balanced way; if you set 10240, then once total concurrency reaches 30,000-40,000 some process may exceed 10240 connections, and a 502 error is returned.


events

{

use epoll;

Use the epoll I/O model. On Linux epoll is suggested, on FreeBSD kqueue; on Windows, leave it unspecified.

Supplementary notes:

Like Apache, nginx has different event models for different operating systems.

A) standard event model

select and poll belong to the standard event model. If the current system has no more efficient method, nginx chooses select or poll.

B) efficient event model

kqueue: used on FreeBSD 4.1+, OpenBSD 2.9+, NetBSD 2.0 and Mac OS X. Using kqueue on dual-processor Mac OS X systems may cause a kernel crash.

epoll: used on systems with Linux kernel 2.6 and later.

/dev/poll: Solaris 7 11/99+, HP/UX 11.22+ (eventport), IRIX 6.5.15+ and Tru64 UNIX 5.1A+.

eventport: used on Solaris 10. To prevent kernel crashes, it is necessary to install security patches.


worker_connections 204800;

Maximum number of connections per worker process. Adjust it according to the hardware, in step with the worker-process count above; make it as large as possible without driving the CPU to 100%. The theoretical maximum number of connections an nginx server can handle is worker_processes * worker_connections.


keepalive_timeout 60;

Keep-alive timeout, in seconds.


client_header_buffer_size 4k;

Buffer size for client request headers. It can be set according to your system's page size: a typical request header does not exceed 1k, but since a typical system page is larger than 1k, this is set to the page size.

The page size can be obtained with the command getconf PAGESIZE.


[root@web001 ~]# getconf PAGESIZE

4096

There are cases where client_header_buffer_size needs to exceed 4k, but it must always be set to an integral multiple of the system page size.

open_file_cache max=102400 inactive=20s;

This enables the open-file cache, which is off by default; max specifies the number of cached entries (it is recommended to keep it consistent with the number of open files), and inactive specifies how long a file may go unrequested before its cache entry is removed.


open_file_cache_valid 30s;

This specifies how often to check the cached entries for valid information.


open_file_cache_min_uses 1;

The minimum number of times a file must be used within the inactive time of the open_file_cache directive for its descriptor to stay open in the cache. In the example above, if a file is not used even once within the inactive time, it is removed.

}

Set up the HTTP server and use its reverse-proxy function to provide load-balancing support.


http

{

include mime.types;

Set the MIME types, which are defined by the mime.types file.


log_format access '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" $http_x_forwarded_for';

Log format definition.

$remote_addr and $http_x_forwarded_for: record the client's IP address;

$remote_user: records the client user name;

$time_local: records the access time and time zone;

$request: records the request's URL and HTTP protocol;

$status: records the request status; success is 200;

$body_bytes_sent: records the size of the file body content sent to the client;

$http_referer: records which page link the visit came from;

$http_user_agent: records information about the client's browser;

The web server usually sits behind a reverse proxy, so the client's IP address cannot be obtained directly: the address obtained through $remote_addr is the reverse proxy server's IP. The reverse proxy server can, however, add an X-Forwarded-For entry to the forwarded request's HTTP headers to record the original client's IP address and the server address the original client requested.
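To pass the real client address along as described, the reverse proxy can set the headers explicitly. A minimal sketch (the location and the upstream name bakend are taken from examples later in this article; treat the block as illustrative):

```nginx
location / {
    proxy_pass http://bakend;
    # real client address, visible to the back end as X-Real-IP
    proxy_set_header X-Real-IP $remote_addr;
    # append the client address to any existing X-Forwarded-For chain
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    # preserve the originally requested host name
    proxy_set_header Host $host;
}
```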


access_log logs/access.log access;

After defining the log format with the log_format directive, use the access_log directive to specify the log file's storage path.


server_names_hash_bucket_size 128;

The hash tables holding the server names are controlled by the directives server_names_hash_max_size and server_names_hash_bucket_size. The parameter hash bucket size is always equal to the size of one hash-table bucket and is a multiple of the processor cache-line size. This makes it possible to speed up key lookups in the hash table by reducing the number of memory accesses: if hash bucket size equals one cache-line size, the worst-case number of memory lookups while searching for a key is two, the first to determine the address of the storage unit and the second to look up the key within it. Therefore, if nginx reports that hash max size or hash bucket size needs to be increased, the first thing to do is increase the first parameter.


client_header_buffer_size 4k;

Buffer size for client request headers. It can be set according to your system's page size: a typical request header does not exceed 1k, but since a typical system page is larger than 1k, this is set to the page size. The page size can be obtained with the command getconf PAGESIZE.


large_client_header_buffers 8 128k;

Buffer size for large client request headers. By default nginx uses the client_header_buffer_size buffer to read the header; if the header is too large, it is read using large_client_header_buffers.


open_file_cache max=102400 inactive=20s;

This directive specifies whether caching is enabled.

Example:


open_file_cache max=1000 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 2;
open_file_cache_errors on;

open_file_cache_errors

Syntax: open_file_cache_errors on | off; default: open_file_cache_errors off; context: http, server, location. This directive specifies whether to record errors encountered while searching for a file in the cache.


open_file_cache_min_uses

Syntax: open_file_cache_min_uses number; default: open_file_cache_min_uses 1; context: http, server, location. This directive specifies the minimum number of times a file must be used within the inactive time of the open_file_cache directive; with a larger value, file descriptors remain open in the cache.


open_file_cache_valid

Syntax: open_file_cache_valid time; default: open_file_cache_valid 60s; context: http, server, location. This directive specifies when to check the validity of the cached items in open_file_cache.


client_max_body_size 300m;

Set the maximum size of a client request body (e.g. a file upload) that nginx accepts.


sendfile on;

The sendfile directive specifies whether nginx calls the sendfile function (zero-copy mode) to output files. For normal applications it must be set to on. For heavy disk-I/O applications such as downloads, it can be set to off to balance disk and network I/O processing speed and lower the system load.


tcp_nopush on;

This option enables or disables the TCP_CORK socket option; it is used only when sendfile is enabled.


proxy_connect_timeout 90; 

Timeout for connecting to the back-end server: how long to wait for a response after initiating the handshake.


proxy_read_timeout 180;

Timeout for a response after a successful connection: how long to wait while the request sits in the back-end queue for processing (i.e. the time the back-end server takes to handle the request).


proxy_send_timeout 180;

Timeout for transmitting the request to the back-end server: nginx must finish sending the request to the back end within the specified time.


proxy_buffer_size 256k;

Set the buffer size for the first part of the response read from the proxied server. This part normally contains just a small response header. By default this value equals the size of one buffer set in the proxy_buffers directive, but it can be set smaller.


proxy_buffers 4 256k;

Set the number and size of buffers used for reading the response from the proxied server. The default size equals one page, 4k or 8k depending on the operating system.


proxy_busy_buffers_size 256k;

Limit on the buffers that may be busy sending the response to the client while it has not yet been fully read from the back end.

proxy_temp_file_write_size 256k;

Set the size of data written to proxy_temp_path at a time, to prevent one worker process from blocking too long while spooling a file.


proxy_temp_path /data0/proxy_temp_dir;

The paths specified by proxy_temp_path and proxy_cache_path must be on the same partition.


proxy_cache_path /data0/proxy_cache_dir levels=1:2 keys_zone=cache_one:200m inactive=1d max_size=30g;

Set the in-memory cache zone to 200MB; content not accessed for 1 day is automatically purged; the on-disk cache may grow to 30GB.
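A location can then reference the cache_one zone declared above. A minimal sketch (the /img/ path and validity times are illustrative assumptions, not from the original):

```nginx
location /img/ {
    proxy_cache cache_one;            # use the zone declared by proxy_cache_path
    proxy_cache_valid 200 304 12h;    # how long to cache successful responses
    proxy_pass http://bakend;         # upstream defined elsewhere in this config
}
```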


keepalive_timeout 120;

Keep-alive timeout, in seconds.


tcp_nodelay on;

client_body_buffer_size 512k;

Buffer size for the client request body. If you set it to a fairly large value, for example 256k, then submitting any image smaller than 256k from either Firefox or IE works normally. The problem arises if you comment the directive out and use the default setting, which is twice the OS page size, 8k or 16k: whether with Firefox 4.0 or IE 8.0, submitting a larger image of around 200k returns a 500 Internal Server Error.


proxy_intercept_errors on;

Enables nginx to intercept responses with an HTTP status code of 400 or higher.
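With proxy_intercept_errors on, intercepted back-end errors can be given a local error page via error_page. A minimal sketch (the page path is illustrative):

```nginx
error_page 404 500 502 503 504 /50x.html;
location = /50x.html {
    root html;    # serve the static error page from the default html directory
}
```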


upstream bakend {

server 127.0.0.1:8027;

server 127.0.0.1:8028;

server 127.0.0.1:8029;

hash $request_uri;

}

nginx's upstream currently supports five distribution methods:

1. Round robin (default)

Each request is assigned to a different back-end server in turn, in chronological order; if a back-end server goes down, it is removed automatically.

2. weight

Specifies the polling probability; weight is proportional to the access ratio and is used when back-end server performance is uneven.

Such as:


upstream bakend {
server 192.168.0.14 weight=10;
server 192.168.0.15 weight=10;
}

3. ip_hash

Each request is assigned according to the hash of the client's IP, so each visitor consistently reaches the same back-end server; this solves the session problem.

Such as:


upstream bakend {
ip_hash;
server 192.168.0.14:88;
server 192.168.0.15:80;
}

4. fair (third party)

Requests are allocated according to the response time of the back-end server, and priority is given to those with short response times.


upstream backend {
server server1;
server server2;
fair;
}

5. url_hash (third party)

Requests are assigned according to the hash of the requested url, so each url is directed to the same back-end server; this is more effective when the back-end servers do caching.

Example: add a hash statement to the upstream block; the server statements must not carry weight or other parameters; hash_method specifies the hash algorithm used.


upstream backend {
server squid1:3128;
server squid2:3128;
hash $request_uri;
hash_method crc32;
}

tips:


upstream bakend {
ip_hash;
server 127.0.0.1:9090 down;
server 127.0.0.1:8080 weight=2;
server 127.0.0.1:6060;
server 127.0.0.1:7070 backup;
}

In the server block that needs load balancing, add:

proxy_pass http://bakend/;

The state of each device can be set as follows:

1. down: the server marked with it temporarily does not take part in the load.

2. weight: the larger the weight, the larger the share of the load.

3. max_fails: the number of failed requests allowed, 1 by default. When the maximum number is exceeded, the error defined by the proxy_next_upstream directive is returned.

4. fail_timeout: the time to pause after max_fails failures.

5. backup: requests go to the backup machine only when all the other non-backup machines are down or busy, so this machine carries the lightest load.

nginx supports configuring several groups of load balancing at the same time, for use by different servers.
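Declaring several groups side by side is just a matter of multiple upstream blocks, each referenced by a different server or location. A sketch with hypothetical names and addresses:

```nginx
upstream bakend_web {
    server 192.168.0.14:80;
    server 192.168.0.15:80;
}

upstream bakend_img {
    ip_hash;
    server 192.168.0.16:80;
    server 192.168.0.17:80;
}
```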

client_body_in_file_only set to on: the data the client POSTs is recorded to a file, which is useful for debugging.
client_body_temp_path sets the directory for those record files; up to three levels of subdirectories can be configured.
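A sketch of the two debugging directives together (the path and the two-level directory layout are illustrative):

```nginx
client_body_in_file_only on;                       # record each POST body to a file
client_body_temp_path /data0/client_body_temp 1 2; # two levels of subdirectories
```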

location matches against the URL; it can redirect or apply new proxy load balancing.

Configure the virtual host


server
{

Configure the listening port


listen 80;

Configure access domain name


server_name image.***.com;

Load-balance addresses ending in mp3 or exe


location ~* \.(mp3|exe)$ {

Sets the port or socket of the proxy server, as well as URL


proxy_pass http://bakend$request_uri;

The purpose of the following three lines is to pass the user information received by the proxy server on to the real server.


proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}


Note: variables

The ngx_http_core_module module supports built-in variables whose names are consistent with Apache's built-in variables.

First there are the lines of the client request header, such as $http_user_agent, $http_cookie, and so on.

In addition there are:

$args: the parameters in the request line

$content_length: the value of the "Content-Length" request header

$content_type: the value of the "Content-Type" request header

$document_root: the value of the root directive for the current request

$document_uri: the same as $uri

$host: the value of the "Host" request header, or the name of the server the request arrived at if there is no Host header

$limit_rate: the connection rate limit

$request_method: the request method, usually "GET" or "POST"

$remote_addr: the client IP

$remote_port: the client port

$remote_user: the user name, authenticated by ngx_http_auth_basic_module

$request_filename: the path name of the file currently requested, composed from the root or alias directive and the request URI

$request_body_file: the temporary file holding the client request body

$request_uri: the complete initial URI including parameters

$query_string: the same as $args

$scheme: the HTTP scheme (http, https), evaluated as required, for example:

rewrite ^(.+)$ $scheme://example.com$1 redirect;

$server_protocol: the request protocol, usually "HTTP/1.0" or "HTTP/1.1"

$server_addr: the server IP the request arrived at. Obtaining the value of this variable generally requires a system call; to avoid it, specify the IP in the listen directive and use the bind parameter.

$server_name: the name of the server the request arrived at

$server_port: the port number of the server the request arrived at

$uri: the URI of the current request. It may differ from the initial value, for example after an internal redirect or when using index
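As a quick illustration of some of these variables in use (the location, log format name and response text are illustrative):

```nginx
log_format vars '$remote_addr "$request" $status $body_bytes_sent';

location /debug {
    access_log logs/vars.log vars;
    # echo back the scheme, host and full request URI
    return 200 "$scheme://$host$request_uri\n";
}
```

Note that log_format itself belongs in the http block, while access_log and return are valid inside a location.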

