Linux kernel optimization configuration for high-concurrency Nginx servers, explained

  • 2020-05-17 07:38:05
  • OfStack

The default Linux kernel parameters are chosen for the most common, general-purpose scenarios, which clearly does not suit a web server that must support highly concurrent access. The kernel parameters therefore need to be modified so that Nginx can achieve higher performance.

A great deal can be tuned in the kernel, but adjustments are usually made according to the characteristics of the business. When Nginx serves static web content, acts as a reverse proxy, or provides compression, the kernel parameters to adjust are different. Here we make a simple, general-purpose configuration of the TCP network parameters so that Nginx can support more concurrent requests.

The following Linux kernel optimization configuration has been tested on a live business system; servers handling roughly 100,000 concurrent connections run well with it. It took some time to sort out, and I am sharing it here.
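These settings normally go into /etc/sysctl.conf (or a drop-in file under /etc/sysctl.d/) and are then loaded with the sysctl tool. A minimal sketch of how to apply and spot-check them, assuming root privileges; the backup step is my own habit, not part of the original article:

cp /etc/sysctl.conf /etc/sysctl.conf.bak    # keep a backup before editing
vi /etc/sysctl.conf                         # append the parameters listed below
sysctl -p                                   # reload /etc/sysctl.conf and apply the values
sysctl net.ipv4.tcp_syncookies              # spot-check that a value took effect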


# Controls the use of TCP SYN cookies.
# Enables SYN cookies: when the SYN waiting queue overflows, cookies are used to handle the connections, which protects against small-scale SYN flood attacks. The default is 0 (disabled).
net.ipv4.tcp_syncookies = 1

# A Boolean flag that controls kernel behavior when a listening service receives many connection requests. If enabled, the kernel actively sends RST packets when the service is overloaded.
net.ipv4.tcp_abort_on_overflow = 1

# The maximum number of TIME_WAIT sockets the system keeps at the same time. If this number is exceeded, TIME_WAIT sockets are cleared immediately and a warning message is printed.
# The default is 180000; here it is lowered to 6000. For servers such as Apache and Nginx, this parameter caps the number of TIME_WAIT sockets and keeps the server from being dragged down by them.
net.ipv4.tcp_max_tw_buckets = 6000
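Before and after lowering this limit, it is worth checking how many TIME_WAIT sockets the server actually accumulates; a quick sketch using standard tools (the counts will of course differ per system):

ss -tan state time-wait | wc -l    # number of TIME_WAIT sockets (minus one header line)
cat /proc/net/sockstat             # the "tw" field on the TCP line reports the same figure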

# Enables selective acknowledgements (SACK).
net.ipv4.tcp_sack = 1

# Controls whether the TCP/IP sliding-window size of a session can scale. The value is Boolean: 1 means scalable, 0 means fixed. Without window scaling the TCP/IP window is limited to 65535 bytes.
# For high-speed networks this value may be too small; with window scaling enabled, the sliding window can grow by several orders of magnitude, improving data-transfer capability.
net.ipv4.tcp_window_scaling = 1

# TCP receive buffer (min, default, max, in bytes)
net.ipv4.tcp_rmem = 4096    87380  4194304

# TCP send buffer (min, default, max, in bytes)
net.ipv4.tcp_wmem = 4096    66384  4194304

# Overall TCP memory thresholds, in pages (low, pressure, high); running past the high threshold leads to "out of socket memory" errors.
net.ipv4.tcp_mem = 94500000 915000000 927000000
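The three numbers on each of these lines are the minimum, default, and maximum values; tcp_rmem and tcp_wmem are in bytes, while tcp_mem is counted in memory pages. The effective settings can be inspected at runtime, for example:

cat /proc/sys/net/ipv4/tcp_rmem    # min, default, max receive buffer, in bytes
cat /proc/sys/net/ipv4/tcp_wmem    # min, default, max send buffer, in bytes
cat /proc/sys/net/ipv4/tcp_mem     # low, pressure, high thresholds, in pages
getconf PAGE_SIZE                  # page size, usually 4096 bytes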

# The maximum ancillary (option) buffer size allowed per socket, in bytes.
net.core.optmem_max = 81920

# This file specifies the default value (in bytes) for the size of the send socket buffer. 
net.core.wmem_default = 8388608

# Specifies the maximum size (in bytes) of the send socket buffer. 
net.core.wmem_max = 16777216

# Specifies the default value (in bytes) for the size of the receive socket buffer. 
net.core.rmem_default = 8388608

# Specifies the maximum size (in bytes) of the receive socket buffer. 
net.core.rmem_max = 16777216

# The length of the SYN queue. The default is 1024; increasing it to 1020000 lets the system hold more connections that are waiting to be established.
net.ipv4.tcp_max_syn_backlog = 1020000
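To judge whether such a large SYN queue is really needed, the number of half-open connections and any backlog-related drops can be checked directly; a sketch with standard tools:

ss -tan state syn-recv | wc -l           # half-open connections currently in the SYN queue (minus one header line)
netstat -s | grep -i "SYNs to LISTEN"    # reported drops when the listen queue overflowed, if any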

# The maximum number of packets allowed to queue when a network interface receives packets faster than the kernel can process them.
net.core.netdev_max_backlog = 862144

# The backlog passed to the listen() call in a web application is capped by the kernel parameter net.core.somaxconn, which defaults to 128, while nginx's NGX_LISTEN_BACKLOG defaults to 511, so this value needs to be raised.
net.core.somaxconn = 262144
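Raising net.core.somaxconn alone is not enough for nginx, because nginx only asks for the backlog given in the backlog= parameter of its listen directive (511 by default, matching NGX_LISTEN_BACKLOG above); the directive and the kernel limit have to be raised together. A sketch for verifying the effective backlog of a listening socket with iproute2's ss:

ss -ltn                      # for listening sockets, the Send-Q column shows the effective backlog
sysctl net.core.somaxconn    # the kernel cap applied to every listen() backlog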

# The maximum number of TCP sockets in the system that are not attached to any user file handle. If this number is exceeded, orphaned connections are reset immediately and a warning message is printed.
# This limit exists only to guard against simple DoS attacks; do not rely on it too much or lower it artificially. If anything, this value should be increased.
net.ipv4.tcp_max_orphans = 327680
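The current number of orphaned sockets can be read from /proc, which shows how close the system is to this limit; a sketch:

cat /proc/net/sockstat             # the "orphan" field on the TCP line is the current count
sysctl net.ipv4.tcp_max_orphans    # the limit configured above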

# TCP timestamps help avoid sequence-number wraparound. A 1 Gbps link is bound to re-encounter previously used sequence numbers, and timestamps let the kernel accept such "abnormal" packets. Here they are turned off.
net.ipv4.tcp_timestamps = 0

# To open a connection to the remote peer, the kernel sends a SYN together with an ACK that answers the peer's earlier SYN, i.e. the second step of the three-way handshake. This setting determines how many SYN+ACK packets the kernel sends before giving up on the connection.
net.ipv4.tcp_synack_retries = 1

# The number of SYN packets the kernel sends before it gives up on establishing a connection.
net.ipv4.tcp_syn_retries = 1

# Enables fast recycling of TIME-WAIT sockets for TCP connections. The default is 0 (disabled).
net.ipv4.tcp_tw_recycle = 1

# Enables reuse, allowing TIME-WAIT sockets to be used for new TCP connections. The default is 0 (disabled).
net.ipv4.tcp_tw_reuse = 1

# Modifies the system's default TIMEOUT: how long a socket stays in the FIN-WAIT-2 state after the local end has closed it.
net.ipv4.tcp_fin_timeout = 15

# How often, in seconds, TCP sends keepalive messages when keepalive is in use. The default is 2 hours; the common recommendation is to shorten it to around 20 minutes, and this configuration goes further, down to 30 seconds.
net.ipv4.tcp_keepalive_time = 30
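tcp_keepalive_time only controls how long a connection must sit idle before the first keepalive probe is sent; the related knobs tcp_keepalive_intvl and tcp_keepalive_probes (which this article does not change) control the interval between probes and how many unanswered probes close the connection. They can be inspected together, for example:

sysctl net.ipv4.tcp_keepalive_time      # idle seconds before the first keepalive probe
sysctl net.ipv4.tcp_keepalive_intvl     # seconds between probes
sysctl net.ipv4.tcp_keepalive_probes    # unanswered probes before the connection is dropped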

# The range of local ports used for outbound connections. The default range is small (32768 to 61000) and is widened here. (Note: do not set the minimum too low, or you may end up taking ports used by regular services!)
net.ipv4.ip_local_port_range = 1024  65000
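A wider port range mainly matters for hosts that open many outbound connections themselves, for example a reverse proxy talking to upstream servers. Current usage can be estimated with standard tools; a sketch:

cat /proc/sys/net/ipv4/ip_local_port_range    # the range configured above
ss -s                                         # socket summary, including established and TIME_WAIT counts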

# The following parameters may require the ip_conntrack module to be loaded ( modprobe ip_conntrack ); some documentation notes that loading the module fails when the firewall is enabled. A quick way to check and monitor connection tracking is sketched after these parameters.

# Shortens the conntrack timeout for established connections.
net.netfilter.nf_conntrack_tcp_timeout_established = 180

# CONNTRACK_MAX: the maximum number of connection-tracking entries allowed, i.e. how many connection-tracking "tasks" netfilter can handle simultaneously in kernel memory.
net.netfilter.nf_conntrack_max = 1048576
net.nf_conntrack_max = 1048576
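When connection tracking is in use, a full conntrack table silently drops new connections, so the current entry count should be watched against the limit; a sketch (the dmesg text is the usual kernel wording):

lsmod | grep conntrack                            # confirm the conntrack module is loaded
cat /proc/sys/net/netfilter/nf_conntrack_count    # entries currently being tracked
cat /proc/sys/net/netfilter/nf_conntrack_max      # the limit configured above
dmesg | grep "nf_conntrack: table full"           # logged when packets are dropped due to a full table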
