Optimizing the maximum number of socket connections for high concurrency on Linux

  • 2020-05-17 07:35:48
  • OfStack

First, we can use the ulimit -a command to view the resource limits currently in effect on the system:


# ulimit -a
core file size   (blocks, -c) 1024
data seg size   (kbytes, -d) unlimited
scheduling priority    (-e) 0
file size    (blocks, -f) unlimited
pending signals     (-i) 127422
max locked memory  (kbytes, -l) 64
max memory size   (kbytes, -m) unlimited
open files      (-n) 20480
pipe size   (512 bytes, -p) 8
POSIX message queues  (bytes, -q) 819200
real-time priority    (-r) 0
stack size    (kbytes, -s) unlimited
cpu time    (seconds, -t) unlimited
max user processes    (-u) 81920
virtual memory   (kbytes, -v) unlimited
file locks      (-x) unlimited

The two items of interest here are open files and max user processes: the maximum number of files a single process may open at the same time, and the maximum number of processes the user may create.

1. View and modify the open-file limit (effective only for the current session):


# ulimit -n
20480
# ulimit -n 20480

2. View and modify the process limit (effective only for the current session):


# ulimit -u
81920
# ulimit -u 81920

3. Permanently set the file and process limits:

You can edit /etc/security/limits.conf and specify the limits there, or set them in /etc/profile.
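For reference, the same two limits can also be read from inside a program with getrlimit(); the following minimal C sketch (an illustration, not part of the original steps) prints them. RLIMIT_NOFILE corresponds to open files and RLIMIT_NPROC to max user processes.


/* Illustrative sketch: read the per-process limits programmatically with getrlimit().
 * RLIMIT_NOFILE corresponds to "open files", RLIMIT_NPROC to "max user processes". */
#include <stdio.h>
#include <sys/resource.h>

static void show(const char *name, int resource) {
    struct rlimit rl;
    if (getrlimit(resource, &rl) == 0)
        printf("%s: soft=%llu hard=%llu\n", name,
               (unsigned long long)rl.rlim_cur,
               (unsigned long long)rl.rlim_max);
}

int main(void) {
    show("open files (RLIMIT_NOFILE)", RLIMIT_NOFILE);
    show("max user processes (RLIMIT_NPROC)", RLIMIT_NPROC);
    return 0;
}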

1. Maximum number of processes:

I recently ran into the following exception while deploying my application on a Linux server:


Caused by: java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:640)

At first glance this looks like an out-of-memory condition, but that message is misleading. The real cause of this error is that the Linux operating system will not let the user create any more processes (native threads count against this limit), so the JVM cannot start a new thread. The fix is therefore to allow the user to create more processes.
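As a hedged illustration of this failure mode, the small C sketch below (not from the original article) keeps creating idle threads until pthread_create() fails; when the per-user process/thread limit is the bottleneck it fails with EAGAIN, the same condition the JVM reports as the OutOfMemoryError above. On some systems other limits (available memory, vm.max_map_count) may trigger first.


/* Illustrative only: create idle threads until the limit is hit.
 * When RLIMIT_NPROC ("max user processes") is the bottleneck, pthread_create()
 * fails with EAGAIN -- the condition the JVM reports as
 * "java.lang.OutOfMemoryError: unable to create new native thread".
 * Note: memory or vm.max_map_count may become the limit first on some systems. */
#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static void *idle_thread(void *arg) { (void)arg; pause(); return NULL; }

int main(void) {
    pthread_attr_t attr;
    pthread_attr_init(&attr);
    pthread_attr_setstacksize(&attr, 64 * 1024);  /* small stacks keep memory out of the way */

    for (long n = 0; ; n++) {
        pthread_t t;
        int rc = pthread_create(&t, &attr, idle_thread, NULL);
        if (rc != 0) {
            printf("created %ld threads, then pthread_create failed: %s\n", n, strerror(rc));
            return 0;
        }
    }
}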

1. Temporary Settings:

We can run ulimit -u 81920 to raise max user processes, but this takes effect only in the current session; after logging in again the system default applies.

2. Permanent Settings:

1) Edit /etc/security/limits.conf (for example with vim).

Add the following to the file:

* soft nproc 81920
* hard nproc 81920

Note: * stands for all users, and soft and hard denote the soft and hard limits (soft limit <= hard limit).

2) Or add the following to /etc/profile:


ulimit -u 81920

This sets the maximum number of processes every time the user logs in.

2. Maximum number of open files:

When writing a client or server application on Linux that must handle a large number of concurrent TCP connections, the highest achievable concurrency is bounded by the system's limit on how many files a single user process may open at the same time (the system creates a socket handle for every TCP connection, and every socket handle is also a file handle).

1. View the maximum number of open files:


$ ulimit -n
1024

This means that each process of the current user may open at most 1024 files at the same time. From that 1024 you still have to subtract the files every process necessarily has open: standard input, standard output, standard error, the server's listening socket, Unix-domain sockets used for inter-process communication, and so on, leaving roughly 1024 - 10 = 1014 files for client socket connections. So, by default, a Linux-based communication application can support at most about 1014 concurrent TCP connections.

For a communication program that needs to support more concurrent TCP connections, you must modify the soft limit and the hard limit that Linux places on the number of files the current user's processes may open at the same time. Specifically:

The soft limit is the limit Linux actually enforces, within the bounds of the current system, on the number of files a user may open at the same time; the hard limit is the maximum number of files the system can afford to have open simultaneously, calculated from the hardware resources available (mainly memory). The soft limit is always less than or equal to the hard limit.
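As an aside, a process may raise its own soft limit up to (but not beyond) the hard limit with setrlimit(); raising the hard limit itself requires root privileges (CAP_SYS_RESOURCE). A minimal C sketch, purely illustrative:


/* Illustrative sketch: raise this process's soft open-file limit up to its hard limit.
 * An unprivileged process cannot exceed the hard limit; only root (CAP_SYS_RESOURCE)
 * may raise the hard limit itself. */
#include <stdio.h>
#include <sys/resource.h>

int main(void) {
    struct rlimit rl;
    if (getrlimit(RLIMIT_NOFILE, &rl) != 0) { perror("getrlimit"); return 1; }
    printf("before: soft=%llu hard=%llu\n",
           (unsigned long long)rl.rlim_cur, (unsigned long long)rl.rlim_max);

    rl.rlim_cur = rl.rlim_max;                    /* soft limit may rise only up to the hard limit */
    if (setrlimit(RLIMIT_NOFILE, &rl) != 0) { perror("setrlimit"); return 1; }

    if (getrlimit(RLIMIT_NOFILE, &rl) == 0)
        printf("after:  soft=%llu hard=%llu\n",
               (unsigned long long)rl.rlim_cur, (unsigned long long)rl.rlim_max);
    return 0;
}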

2. Modify the maximum number of open files:


[speng@as4 ~]$ ulimit -n 10240

The command above temporarily sets (for the current session only) the maximum number of files a single process may open. If the system responds with something like "Operation not permitted", the change failed because the requested value exceeds the user's soft or hard limit on the number of open files. In that case you must first raise the soft and hard limits the system imposes on the user.

1) First, edit the /etc/security/limits.conf file and add the following lines:


speng soft nofile 10240
speng hard nofile 10240

Here speng names the user whose open-file limit is being changed; the '*' character may be used instead to change the limit for all users. soft or hard selects whether the soft or hard limit is modified, and 10240 is the new limit, i.e. the maximum number of open files (note that the soft limit must be less than or equal to the hard limit). Save the file after modifying it.

2) Second, edit the /etc/pam.d/login file and add the following line:


session required /lib/security/pam_limits.so

This tells Linux that, after a user logs in, the pam_limits.so module should be invoked to apply that user's resource limits (including the limit on the number of files the user may open), and pam_limits.so reads those limits from /etc/security/limits.conf. Save the file after modifying it.

3) Third, view the system-level maximum open-file limit (the kernel's hard limit) with the following command:


[speng@as4 ~]$ cat /proc/sys/fs/file-max
12158

This shows that the system allows at most 12,158 files to be open simultaneously (counting the open files of all users together); this is the system-level hard limit, and the sum of all user-level open-file limits should not exceed it. It is normally an optimal value computed by the kernel at boot time from the available hardware resources and should not be changed unless you really need to set user-level limits above it. To raise it, edit the /etc/rc.local script and add a line like the following:


echo 22158 > /proc/sys/fs/file-max

This forces Linux to set the system-level open-file hard limit to 22158 at startup. Save the file after modifying it.

After completing the steps above and rebooting, the maximum number of files that a single process of the specified user may open at the same time should be set to the chosen value. If ulimit -n still reports a lower limit after the reboot, the likely cause is a ulimit -n command in the login script /etc/profile that caps the number of files the user may open. Because a change made with ulimit -n can only set a value less than or equal to the one previously set with ulimit -n, that command cannot be used to raise the limit later in the session. So if this problem occurs, open /etc/profile, check whether it calls ulimit -n to limit the number of files the user may open, and if so delete the command or change its value to something appropriate; then save the file and log out and back in.

With the above steps, the system's restrictions on the number of open files are removed for communication programs that handle highly concurrent TCP connections.

3. Kernel network limits on TCP connections:

1. Modify the kernel's limit on the local port range for TCP connections:

When writing a client program on Linux that supports highly concurrent TCP connections, you may find that even after the limits on the number of open files have been lifted, new TCP connections can no longer be established once the number of concurrent connections reaches a certain level. There are several possible causes. The first is that the Linux kernel restricts the range of local port numbers. The first step is then to work out why the connection cannot be established: you will find that connect() returns a failure and the system error message is "Can't assign requested address". At the same time, if you watch the network with tcpdump, you will see no traffic at all of the client sending SYN packets for new connections. Together these point to a restriction inside the local Linux kernel: the kernel's TCP/IP implementation limits the range of local port numbers available to client TCP connections (for example, to between 1024 and 32768).

When there are too many TCP client connections on the system at once, each connection occupies one unique local port (taken from the limited local port range). Once the existing connections have used up all the local ports, no port can be allocated to a new client connection, so connect() fails and the error message is set to "Can't assign requested address". The default port range chosen when the kernel was built may simply be too small, so it needs to be enlarged.
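A rough, illustrative C sketch of how this failure appears to the client (127.0.0.1:8080 is an assumed test server, not something from the article): connect() keeps succeeding until the local ports are exhausted, then fails with errno EADDRNOTAVAIL, the error behind "Can't assign requested address".


/* Illustrative sketch: open client connections to an assumed local test server
 * (127.0.0.1:8080) until the local port range is exhausted; connect() then fails
 * with EADDRNOTAVAIL ("Cannot assign requested address"). */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(8080);
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

    for (long n = 0; ; n++) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }   /* EMFILE here means the nofile limit is too low */
        if (connect(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
            if (errno == EADDRNOTAVAIL)
                printf("local ports exhausted after %ld connections\n", n);
            else
                perror("connect");
            close(fd);
            return 0;
        }
        /* fd is intentionally kept open so the connection keeps holding its local port */
    }
}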

1) Step 1: edit the /etc/sysctl.conf file and add the following line:


net.ipv4.ip_local_port_range = 1024 65000

This sets the local port range to 1024 through 65000. Note that the lower bound must be at least 1024 and the upper bound no more than 65535. Save the file after modifying it.

2) Step 2: run the sysctl command:


# sysctl -p

If no error is reported, the new local port range is in effect. With this range a single process can in theory establish up to about 60,000 TCP client connections at the same time (65000 - 1024 gives roughly 64,000 usable ports).

2. Modify the kernel's iptables connection-tracking limit on the maximum number of tracked TCP connections:

Even with the open-file limits raised, it can still happen that new TCP connections cannot be established once the number of concurrent connections reaches a certain level. A second possible cause is that the iptables connection-tracking code in the Linux kernel limits the maximum number of TCP connections it will track. As before, tcpdump shows no traffic of the client sending SYN packets. The kernel tracks the state of every TCP connection and stores that information in the conntrack database in kernel memory; the database has a limited size, and once there are too many connections it cannot create tracking entries for new ones, so connect() fails or appears to block. In that case you must raise the kernel's limit on the maximum number of tracked TCP connections, in much the same way as the local port range:

1) Step 1: edit the /etc/sysctl.conf file and add the following line:


net.ipv4.ip_conntrack_max = 10240

This sets the maximum number of tracked TCP connections to 10240. Note that this value should be kept as small as practical to save kernel memory.

2) Step 2: run the sysctl command:


# sysctl -p

If no error is reported, the new limit on tracked TCP connections is in effect. With this parameter set, a single process can in theory establish more than 10,000 TCP client connections at the same time.

[Supplementary] Optimized kernel parameters in /etc/sysctl.conf:

/etc/sysctl.conf is the configuration file used to control Linux networking parameters, and it is very important for network-dependent applications such as web servers and cache servers. RHEL ships with reasonable defaults. Recommended configuration (clear the existing /etc/sysctl.conf and copy in something like the following):


# Commonly tuned parameters for high-concurrency TCP workloads
# (representative values; adjust to your hardware and workload)
fs.file-max = 999999
net.ipv4.ip_local_port_range = 1024 65000
net.ipv4.tcp_fin_timeout = 30
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_keepalive_time = 1200
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 8192
net.ipv4.tcp_max_tw_buckets = 5000
net.core.somaxconn = 4096
net.core.netdev_max_backlog = 8192
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216

After making the changes, run /sbin/sysctl -p for them to take effect.

4. Use programming techniques that support highly concurrent network I/O:

When writing a highly concurrent TCP connection application on Linux, you must use the appropriate network I/O technology and I/O event dispatch mechanism.

The available I/O models are blocking synchronous I/O, non-blocking synchronous I/O (also called reactive I/O), and asynchronous I/O. Under high TCP concurrency, blocking synchronous I/O will seriously stall the program unless a thread is created for every TCP connection, and that many threads impose enormous scheduling overhead on the system. Synchronous blocking I/O is therefore not advisable at high concurrency; consider non-blocking synchronous I/O or asynchronous I/O instead. Non-blocking synchronous I/O relies on mechanisms such as select(), poll(), and epoll; asynchronous I/O relies on AIO.

From the point of view of the I/O event-dispatch mechanism, select() is unsuitable because it supports only a limited number of concurrent connections (usually no more than 1024). If performance matters, poll() is also unsuitable: although it can handle a large number of concurrent connections, its polling mechanism can distribute I/O events unevenly at high concurrency, effectively starving I/O on some TCP connections. epoll and AIO do not have this problem. (Early Linux kernel AIO was implemented by creating a kernel thread per I/O request, which itself performed poorly under highly concurrent TCP connections, but the AIO implementation has since been improved in newer kernels.)
In summary, applications that must support highly concurrent TCP connections on Linux should use epoll or AIO to drive I/O on those connections; this provides an effective I/O foundation for handling large numbers of concurrent connections.
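For concreteness, here is a minimal, hedged sketch (not from the original article) of an epoll-based echo server in C; port 8080 is arbitrary and error handling is trimmed for brevity. It illustrates the event-loop pattern recommended above: one non-blocking listening socket plus epoll_wait() dispatching events for many client sockets from a single thread.


/* Minimal, illustrative epoll echo server: one thread, non-blocking sockets,
 * port 8080 chosen arbitrarily, error handling trimmed for brevity. */
#include <arpa/inet.h>
#include <fcntl.h>
#include <netinet/in.h>
#include <sys/epoll.h>
#include <sys/socket.h>
#include <unistd.h>

#define MAX_EVENTS 1024

static void set_nonblocking(int fd) {
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);
}

int main(void) {
    int listen_fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);
    bind(listen_fd, (struct sockaddr *)&addr, sizeof addr);
    listen(listen_fd, SOMAXCONN);
    set_nonblocking(listen_fd);

    int epfd = epoll_create1(0);
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = listen_fd };
    epoll_ctl(epfd, EPOLL_CTL_ADD, listen_fd, &ev);

    struct epoll_event events[MAX_EVENTS];
    char buf[4096];
    for (;;) {
        int n = epoll_wait(epfd, events, MAX_EVENTS, -1);   /* wait for ready sockets */
        for (int i = 0; i < n; i++) {
            int fd = events[i].data.fd;
            if (fd == listen_fd) {                          /* accept all pending connections */
                int conn;
                while ((conn = accept(listen_fd, NULL, NULL)) >= 0) {
                    set_nonblocking(conn);
                    struct epoll_event cev = { .events = EPOLLIN, .data.fd = conn };
                    epoll_ctl(epfd, EPOLL_CTL_ADD, conn, &cev);
                }
            } else {                                        /* data or close on a client socket */
                ssize_t r = read(fd, buf, sizeof buf);
                if (r <= 0) close(fd);                      /* closing also removes it from epoll */
                else        write(fd, buf, (size_t)r);      /* echo back */
            }
        }
    }
}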
