How to reduce excessive TIME_WAIT connections on a Linux server

  • 2020-05-06 12:10:53
  • OfStack

What the TIME_WAIT state means:
After a TCP/IP connection between a client and a server is established and the socket is then closed, the side that performed the active close (here, the server) keeps its end of the connection in the TIME_WAIT state.
Does every socket that performs an active close go through TIME_WAIT? Is there any case where an actively closed socket moves directly to CLOSED?
No. After sending the last ACK, the actively closing side enters TIME_WAIT and stays there for 2MSL (twice the maximum segment lifetime). This is fundamental to TCP/IP and cannot simply be "solved": it is how the protocol's designers intended it to work.

TIME_WAIT exists for two main reasons:
1. It prevents stray packets from the previous connection from reappearing and corrupting a new connection on the same address and port pair (after 2MSL, every duplicate packet from the old connection has expired).
2. It allows the TCP connection to be closed reliably. If the last ACK sent by the actively closing side is lost, the passive side retransmits its FIN; a host already in the CLOSED state would answer that FIN with an RST rather than an ACK. So the active closer must wait in TIME_WAIT instead of going straight to CLOSED.
TIME_WAIT sockets do not consume many resources unless the server is under attack.
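On a modern Linux system you can watch the 2MSL timer counting down directly with ss from iproute2. A quick illustration (the sample output below is made up, and the exact columns vary slightly between ss versions):

ss -tan -o state time-wait

Recv-Q Send-Q Local Address:Port    Peer Address:Port
0      0      10.0.0.5:80           203.0.113.9:52114    timer:(timewait,54sec,0)

Each socket shows roughly a minute remaining, because Linux fixes the TIME_WAIT interval at 60 seconds (TCP_TIMEWAIT_LEN in the kernel source) rather than deriving it from a configurable MSL.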
On a Squid server, you can run the following command:
# netstat -n | awk '/^tcp/ {++S[$NF]} END {for(a in S) print a, S[a]}'
LAST_ACK 14
SYN_RECV 348
ESTABLISHED 70
FIN_WAIT1 229
FIN_WAIT2 30
CLOSING 33
TIME_WAIT 18122
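On newer distributions where netstat has been replaced by ss, the same summary can be produced with this sketch (ss prints the state in the first column, and the first line is a header to skip; note that ss abbreviates state names, e.g. ESTAB and TIME-WAIT):

ss -ant | awk 'NR > 1 {++S[$1]} END {for (a in S) print a, S[a]}'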
The states mean the following:
CLOSED: no connection is active or in progress
LISTEN: the server is waiting for an incoming call
SYN_RECV: a connection request has arrived; waiting for confirmation
SYN_SENT: the application has started opening a connection
ESTABLISHED: the normal data-transfer state
FIN_WAIT1: the application has said it is finished
FIN_WAIT2: the other side has agreed to release the connection
TIME_WAIT: waiting for all packets from the old connection to die off
CLOSING: both sides are trying to close at the same time
CLOSE_WAIT: the other side has initiated a release
LAST_ACK: waiting for all packets to die off
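To keep an eye on one of these states over time, for instance while load testing, something like the following works (illustrative only; watch re-runs the command every second):

watch -n 1 'netstat -n | grep -c TIME_WAIT'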
In other words, this command summarizes the current TCP connection states on the Linux server.
Here is how it works:
A simple pipe connects the netstat and awk commands.
First, netstat:
netstat -n
Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 123.123.123.123:80 234.234.234.234:12345 TIME_WAIT
When you actually execute this command, you'll probably get thousands of records like this, but we'll just take one of them.
Then awk:
/^tcp/
Matches only records beginning with tcp, filtering out unrelated lines such as udp and unix sockets.
S[]
An associative array named S; awk arrays need no declaration, and it is used here to count connections per state.
NF
The number of fields in the record; in the line shown above, NF is 6.
$NF
The value of the last field; here $NF is the same as $6, i.e. TIME_WAIT.
S[$NF]
The array element keyed by the state; for example, S["TIME_WAIT"] holds the number of connections seen in the TIME_WAIT state.
++S[$NF]
Increments the counter for the current record's state by one.
END
Introduces the block that runs after all input has been processed.
for(a in S)
Iterates over the keys of the array.
print a, S[a]
Prints each state and its count, separated by awk's output field separator (a space by default; write print a "\t" S[a] to separate them with a TAB).
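For readability, the same counting logic can live in a standalone awk script instead of a one-liner. A minimal sketch (the file name tcp_states.awk is just an example):

#!/usr/bin/awk -f
# tcp_states.awk -- count TCP connections by state.
# Usage: netstat -n | awk -f tcp_states.awk
/^tcp/ { ++S[$NF] }                  # the last field of a tcp record is its state
END    { for (a in S) print a, S[a] }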
If the system shows a large number of TIME_WAIT connections, the remedy is to adjust kernel parameters. Edit the configuration file:
vim /etc/sysctl.conf
and add the following lines:
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_fin_timeout = 30
Then run /sbin/sysctl -p for the parameters to take effect.
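To confirm the kernel accepted the new values, read them back; both forms below work on a standard sysctl setup:

sysctl net.ipv4.tcp_fin_timeout
cat /proc/sys/net/ipv4/tcp_tw_reuse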
On a high-concurrency Squid server under Linux, the number of TCP TIME_WAIT sockets often reaches 20,000 to 30,000, which can easily drag the server down. Modifying the Linux kernel parameters reduces the number of TIME_WAIT sockets on the Squid server.
vi /etc/sysctl.conf
Add the following lines:
net.ipv4.tcp_fin_timeout = 30
net.ipv4.tcp_keepalive_time = 1200
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.ip_local_port_range = 1024 65000
net.ipv4.tcp_max_syn_backlog = 8192
net.ipv4.tcp_max_tw_buckets = 5000
Description:
net.ipv4.tcp_syncookies = 1 enables SYN cookies: when the SYN backlog queue overflows, cookies are used to handle the excess requests, which protects against small-scale SYN flood attacks. The default is 0 (disabled).
net.ipv4.tcp_tw_reuse = 1 enables reuse, allowing TIME-WAIT sockets to be used again for new TCP connections.
net.ipv4.tcp_tw_recycle = 1 enables fast recycling of TIME-WAIT sockets. (Note: tcp_tw_recycle is known to break clients behind NAT and was removed entirely in Linux 4.12; on modern kernels prefer tcp_tw_reuse.)
net.ipv4.tcp_fin_timeout = 30 determines how long a socket closed by the local side remains in the FIN-WAIT-2 state.
net.ipv4.tcp_keepalive_time = 1200 sets how often TCP sends keepalive probes when keepalive is enabled. The default is 2 hours; 1200 seconds lowers it to 20 minutes.
net.ipv4.ip_local_port_range = 1024 65000 sets the local port range for outgoing connections. The default range of 32768 to 61000 is small; this widens it to 1024 through 65000.
net.ipv4.tcp_max_syn_backlog = 8192 sets the length of the SYN queue; the default is 1024.
net.ipv4.tcp_max_tw_buckets = 5000 caps the number of TIME_WAIT sockets the system maintains at one time; the default is 180000. For servers such as Apache and Nginx, the parameters in the lines above already do a good job of reducing TIME_WAIT sockets, but for Squid their effect is limited. This cap is what keeps a Squid server from being overwhelmed by a flood of TIME_WAIT sockets.
Execute the following command to enable the configuration:
/sbin/sysctl -p
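If you want to experiment before committing values to /etc/sysctl.conf, sysctl -w applies a setting immediately but does not survive a reboot. A sketch using the same values as above:

/sbin/sysctl -w net.ipv4.tcp_max_tw_buckets=5000
/sbin/sysctl -w net.ipv4.tcp_fin_timeout=30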

