The maximum number of threads, the maximum number of processes, and the number of files a process can open under Linux, discussed in depth

  • 2020-04-02 00:44:37
  • OfStack

===== Maximum number of threads =====
The number of threads per process on a Linux system has an upper limit, PTHREAD_THREADS_MAX.
This limit can be viewed in /usr/include/bits/local_lim.h.
For linuxthreads the value is typically 1024; for NPTL there is no hard-coded limit, only the limit imposed by system resources.
The main resource involved is the memory occupied by each thread's stack. With ulimit -s you can check the default thread stack size, which is normally 8 MB.
You can write a simple piece of code to verify how many threads can actually be created:

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

/* Thread body: block forever so each thread keeps its stack mapped. */
static void *foo(void *arg)
{
    pause();
    return NULL;
}

int main(void)
{
    int i = 0;
    pthread_t thread;
    while (1) {
        if (pthread_create(&thread, NULL, foo, NULL) != 0)
            return 0;
        i++;
        printf("i = %d\n", i);
    }
}
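Compile it with the pthread library linked in (the file names here are arbitrary):

gcc -o thread_test thread_test.c -lpthread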

Testing shows that up to 381 threads can be created under linuxthreads, after which pthread_create returns EAGAIN.
Under NPTL a maximum of 382 threads can be created, after which ENOMEM is returned.
These values match the theory exactly: a process's user address space on 32-bit Linux is 3 GB, i.e. 3072 MB. Dividing 3072 MB by the 8 MB default stack gives 384, but the code segment, data segment, and so on still take up some space, so this rounds down to 383; subtracting the main thread leaves 382.
So why is there one less thread under linuxthreads? Because linuxthreads also requires a manager thread.
There are two ways to get around the memory limit:
1) reduce the default stack size with ulimit -s 1024;
2) set a smaller stack size with pthread_attr_setstacksize when calling pthread_create (see the sketch below).
Note that even this cannot break the 1024-thread hard limit of linuxthreads unless you recompile the C library. (This point is worth discussing: on Ubuntu 7.04 with 3 GB of memory, I was able to create 3054 threads using ulimit -s 1024.)
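Here is a minimal sketch of option 2 (the 64 KB figure is arbitrary; whatever size you pick must be at least PTHREAD_STACK_MIN):

#include <pthread.h>
#include <stdio.h>
#include <string.h>

static void *foo(void *arg)
{
    return NULL;
}

int main(void)
{
    pthread_attr_t attr;
    pthread_t tid;
    int ret;

    pthread_attr_init(&attr);
    /* Request a 64 KB stack instead of the 8 MB default. */
    pthread_attr_setstacksize(&attr, 64 * 1024);

    ret = pthread_create(&tid, &attr, foo, NULL);
    if (ret != 0) {
        fprintf(stderr, "pthread_create: %s\n", strerror(ret));
        return 1;
    }
    pthread_join(tid, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}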
===== Maximum number of processes =====
The theoretical maximum number of processes in Linux:
Each process's local descriptor table (LDT) exists as a separate segment, and the global descriptor table (GDT) contains an entry that points to the starting address of that segment and specifies its length and other parameters. Besides the LDT, each process also has a TSS (task state segment) structure, so every process occupies two entries in the GDT. How large is the GDT, then? The bit field used as the GDT index in a segment register is 13 bits wide, so the GDT can hold 8192 descriptors. After subtracting the entries reserved by the system (entry 0 is always null, entries 2 and 3 hold the kernel code and data segments, entries 4 and 5 hold the user code and data segments of the current process, and so on), 8180 entries remain available, so in theory the maximum number of processes in the system is 4090.
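Putting the numbers together:

2^13 = 8192 GDT entries
8192 entries - 12 reserved system entries = 8180 usable entries
8180 usable entries / 2 entries per process = 4090 processes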

===== Recompiling the kernel to change the maximum number of files a process can open and to modify the listen queue =====
Use "ulimit -a" to see these limits, such as:

[root@HQtest root]# ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 2047
virtual memory (kbytes, -v) unlimited

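Incidentally, the ulimit builtin is a front end for the getrlimit/setrlimit system calls, so a process can also inspect and raise its own limits programmatically. A minimal sketch (raising only the soft limit, which needs no special privileges):

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    /* Read the current soft/hard limits on open files. */
    if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    printf("open files: soft=%llu hard=%llu\n",
           (unsigned long long)rl.rlim_cur,
           (unsigned long long)rl.rlim_max);

    /* Raise the soft limit up to the hard limit. */
    rl.rlim_cur = rl.rlim_max;
    if (setrlimit(RLIMIT_NOFILE, &rl) != 0)
        perror("setrlimit");

    return 0;
}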
Using ulimit -n, I changed the number of open files to 10240.
Although ulimit -a then showed the value as 10240, during a stress test the service still went down once there were more than 1024 users.
In the end the only fix was to recompile the kernel; after recompiling, everything was OK.
The procedure is as follows.
Different versions of the Linux kernel are tuned in different ways.
In Linux kernel 2.2.x, you can modify the limits with the following commands:

# echo '8192' > /proc/sys/fs/file-max
# echo '32768' > /proc/sys/fs/inode-max 

Add the above commands to the /etc/rc.d/rc.local file so that these values are set each time the system boots.
In Linux kernel 2.4.x, you need to modify the source code and recompile the kernel for the change to take effect. Edit include/linux/fs.h in the kernel source tree, changing NR_FILE from 8192 to 65536 and NR_RESERVED_FILES from 10 to 128; then edit fs/inode.c, changing MAX_INODE from 16384 to 262144.
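For illustration only, the edited definitions would look something like this (a sketch based on the values above; the exact surrounding code differs between 2.4.x releases):

/* include/linux/fs.h */
#define NR_FILE           65536   /* raised from 8192 */
#define NR_RESERVED_FILES 128     /* raised from 10 */

/* fs/inode.c */
#define MAX_INODE         262144  /* raised from 16384 */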
As a rule of thumb, it is reasonable to allow 256 open files per 4 MB of physical memory: for example, 16384 for a machine with 256 MB of RAM.
The maximum number of inodes should be three to four times the maximum number of open files.
