A detailed analysis of Docker CPU limits

  • 2020-06-03 08:50:30
  • OfStack

This article tests several configuration parameters that restrict a Docker container's use of CPU resources. The top and dstat commands are used to analyze resource consumption.


package main

import (
  "flag"
  "fmt"
  "runtime"
)

func main() {
  cpunum := flag.Int("cpunum", 0, "number of CPU cores to saturate")
  flag.Parse()
  fmt.Println("cpunum:", *cpunum)
  runtime.GOMAXPROCS(*cpunum)
  // Spawn cpunum-1 busy-looping goroutines; the main goroutine's
  // loop below is the last one, so cpunum cores are kept busy in total.
  for i := 0; i < *cpunum-1; i++ {
    go func() {
      for {
      }
    }()
  }
  for {
  }
}

From this I built a test image that keeps the CPU busy; by default it occupies one core.


FROM busybox
COPY ./full_cpu /full_cpu
RUN chmod +x /full_cpu
ENTRYPOINT ["/full_cpu", "-cpunum"]
CMD ["1"]

docker build -t fangfenghua/cpuuseset .
docker push fangfenghua/cpuuseset


docker info
...
Default Runtime: runc
Security Options: seccomp
Kernel Version: 3.10.0-229.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 993.3 MiB
Name: localhost.localdomain
ID: TU6M:E6WM:PZDN:ULJX:EWKS: 
  ...

docker run -it --rm=true fangfenghua/cpuuseset 
[root@localhost src]# top

top - 07:23:52 up 1:23, 2 users, load average: 0.61, 1.12, 1.04
Tasks: 154 total,  3 running, 145 sleeping,  6 stopped,  0 zombie
%Cpu(s): 18.0 us, 0.1 sy, 0.0 ni, 81.8 id, 0.0 wa, 0.0 hi, 0.1 si, 0.0 st
KiB Mem : 1017144 total,  422120 free,  171676 used,  423348 buff/cache
KiB Swap: 1040380 total, 1040284 free,    96 used.  688188 avail Mem 

 PID USER   PR NI  VIRT  RES  SHR S %CPU %MEM   TIME+ COMMAND                                         
20196 root   20  0  3048  720  460 R 101.7 0.1  0:37.56 full_cpu                                         
  1 root   20  0  41536  4028  2380 S  0.0 0.4  0:02.60 systemd                                         
  2 root   20  0    0   0   0 S  0.0 0.0  0:00.04 kthreadd                                         
  3 root   20  0    0   0   0 S  0.0 0.0  0:00.48 ksoftirqd/0                                       
  5 root    0 -20    0   0   0 S  0.0 0.0  0:00.00 kworker/0:0H                                       
  7 root   rt  0    0   0   0 S  0.0 0.0  0:00.69 migration/0  

docker run -it --rm=true fangfenghua/cpuuseset 4
top - 07:27:17 up 1:27, 2 users, load average: 2.41, 1.47, 1.18
Tasks: 159 total,  3 running, 145 sleeping, 11 stopped,  0 zombie
%Cpu(s): 99.6 us, 0.2 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.3 si, 0.0 st
KiB Mem : 1017144 total,  402508 free,  190908 used,  423728 buff/cache
KiB Swap: 1040380 total, 1040284 free,    96 used.  668608 avail Mem 

 PID USER   PR NI  VIRT  RES  SHR S %CPU %MEM   TIME+ COMMAND                                         
20935 root   20  0  3048  720  452 R 400.0 0.1  0:55.80 full_cpu                                         
  1 root   20  0  41620  4088  2380 S  0.0 0.4  0:02.88 systemd                                         
  2 root   20  0    0   0   0 S  0.0 0.0  0:00.04 kthreadd 

On Linux systems, the parameters available to limit a Docker container's CPU usage are:

--cpu-period int      Limit CPU CFS (Completely Fair Scheduler) period
--cpu-quota int       Limit CPU CFS (Completely Fair Scheduler) quota
-c, --cpu-shares int  CPU shares (relative weight)
--cpuset-cpus string  CPUs in which to allow execution (0-3, 0,1)

Docker provides the --cpu-period and --cpu-quota parameters to control the CPU time the container may be allocated. --cpu-period specifies the length of the scheduling cycle over which the container's CPU allocation is recalculated, while --cpu-quota specifies the maximum CPU time the container may run within each cycle. Unlike the --cpu-shares configuration, these specify an absolute value with no elasticity: the container's CPU usage will never exceed the configured value.

Both --cpu-period and --cpu-quota are expressed in microseconds (μs). The minimum value of --cpu-period is 1000 μs, the maximum is 1 second (10^6 μs), and the default is 0.1 second (100,000 μs). The default value of --cpu-quota is -1, meaning no limit.

For example, if the container process needs to use 0.2 seconds of a single CPU every 1 second, set --cpu-period to 1,000,000 (1 second) and --cpu-quota to 200,000 (0.2 seconds). In the multi-core case, if the container process should be allowed to fully occupy two CPUs, set --cpu-period to 100,000 (0.1 second) and --cpu-quota to 200,000 (0.2 seconds).
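The arithmetic behind these two flags reduces to one ratio: quota divided by period is the number of cores the container can keep busy. A minimal sketch in Go (an illustrative helper, not Docker API code):

```go
package main

import "fmt"

// effectiveCores returns how many CPU cores a container can use at most,
// given CFS period and quota in microseconds. A quota of -1 means no limit.
func effectiveCores(periodUs, quotaUs int64) float64 {
	if quotaUs < 0 {
		return -1 // unlimited
	}
	return float64(quotaUs) / float64(periodUs)
}

func main() {
	// --cpu-period=1000000 --cpu-quota=200000 -> 0.2 of one core
	fmt.Println(effectiveCores(1000000, 200000))
	// --cpu-period=100000 --cpu-quota=200000 -> 2 full cores
	fmt.Println(effectiveCores(100000, 200000))
}
```

Note that both settings above grant 200,000 μs of CPU time, but the shorter period spreads it across more concurrent cores.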

Let's use the container image built in this article to test the --cpu-period and --cpu-quota parameters.

On the 4-core system used in this article, if you want the test container to fill two cores, how should it be configured? From the analysis above: with --cpu-period set to 100000, --cpu-quota should be set to 4 * 100000 to occupy all 4 cores, and to 2 * 100000 to occupy 2 cores. Here's the test:


docker run --name cpuuse -d --cpu-period=100000 --cpu-quota=200000 fangfenghua/cpuuseset 4
top - 07:46:31 up 1:46, 2 users, load average: 0.16, 0.21, 0.51
Tasks: 168 total,  2 running, 142 sleeping, 24 stopped,  0 zombie
%Cpu(s): 47.8 us, 0.1 sy, 0.0 ni, 51.9 id, 0.1 wa, 0.0 hi, 0.1 si, 0.0 st
KiB Mem : 1017144 total,  364724 free,  227816 used,  424604 buff/cache
KiB Swap: 1040380 total, 1040284 free,    96 used.  631052 avail Mem 

 PID USER   PR NI  VIRT  RES  SHR S %CPU %MEM   TIME+ COMMAND                                         
21766 root   20  0  3048  724  464 R 193.3 0.1  1:00.37 full_cpu                                         
  1 root   20  0  41620  4088  2380 S  0.0 0.4  0:03.13 systemd                                         
  2 root   20  0    0   0   0 S  0.0 0.0  0:00.05 kthreadd                                         
  3 root   20  0    0   0   0 S  0.0 0.0  0:00.52 ksoftir


docker run --name cpuuse -d --cpu-period=100000 --cpu-quota=400000 fangfenghua/cpuuseset 4

top - 07:47:17 up 1:47, 2 users, load average: 0.47, 0.26, 0.51
Tasks: 172 total,  3 running, 144 sleeping, 25 stopped,  0 zombie
%Cpu(s): 99.6 us, 0.1 sy, 0.0 ni, 0.3 id, 0.0 wa, 0.0 hi, 0.1 si, 0.0 st
KiB Mem : 1017144 total,  358760 free,  233292 used,  425092 buff/cache
KiB Swap: 1040380 total, 1040284 free,    96 used.  625180 avail Mem 

 PID USER   PR NI  VIRT  RES  SHR S %CPU %MEM   TIME+ COMMAND                                         
21976 root   20  0  3048  724  456 R 398.3 0.1  0:16.81 full_cpu                                         
21297 root   20  0    0   0   0 S  0.3 0.0  0:00.08 kworker/0:2                                       
  1 root   20  0  41620  4088  2380 S  0.0 0.4  0:03.19 systemd                                         
  2 root   20  0    0   0   0 S  0.0 0.0  0:00.05 kthreadd 

Using the above two parameters, you can control CPU usage precisely. There is also a relative parameter, --cpu-shares. Suppose you set container A's --cpu-shares to 1536 and container B's to 512. What is container A's CPU occupancy before container B is started?


top - 07:56:10 up 1:56, 2 users, load average: 0.75, 0.36, 0.50
Tasks: 153 total,  3 running, 140 sleeping, 10 stopped,  0 zombie
%Cpu(s): 99.7 us, 0.1 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.3 si, 0.0 st
KiB Mem : 1017144 total,  436300 free,  155616 used,  425228 buff/cache
KiB Swap: 1040380 total, 1040284 free,    96 used.  703544 avail Mem 

 PID USER   PR NI  VIRT  RES  SHR S %CPU %MEM   TIME+ COMMAND                                         
22216 root   20  0  3048  720  456 R 399.3 0.1  0:55.03 full_cpu                                         
  1 root   20  0  41620  4088  2380 S  0.0 0.4  0:03.29 systemd                                         
  2 root   20  0    0   0   0 S  0.0 0.0  0:00.05 kthreadd                                         
  3 root   20  0    0   0   0 S  0.0 0.0  0:00.54 ksoftirqd/0 

Start container B:


top - 07:57:09 up 1:57, 2 users, load average: 3.55, 1.16, 0.76
Tasks: 162 total,  4 running, 148 sleeping, 10 stopped,  0 zombie
%Cpu(s): 99.6 us, 0.2 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.3 si, 0.0 st
KiB Mem : 1017144 total,  428772 free,  158304 used,  430068 buff/cache
KiB Swap: 1040380 total, 1040284 free,    96 used.  700444 avail Mem 

 PID USER   PR NI  VIRT  RES  SHR S %CPU %MEM   TIME+ COMMAND                                         
22216 root   20  0  3048  720  456 R 305.7 0.1  4:40.78 full_cpu                                         
22336 root   20  0  3048  720  460 R 95.3 0.1  0:09.02 full_cpu                                         
  1 root   20  0  41620  4088  2380 S  0.0 0.4  0:03.31 systemd 

It is not hard to see from the above test results: with relative weights, container A still saturates the CPU before container B starts, while after B starts, container A gets 3/4 of the CPU and container B gets 1/4 (1536 : 512 = 3 : 1).
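The 3/4 vs. 1/4 split follows directly from the weights: when all containers are CPU-bound, each gets its share divided by the sum of all shares. A minimal sketch (illustrative helper names, not Docker code):

```go
package main

import "fmt"

// shareFraction returns the CPU fraction each container receives when
// every container is busy, based on relative --cpu-shares weights.
// Note: shares only take effect under contention; an idle host lets
// any container use as much CPU as it wants.
func shareFraction(shares []int) []float64 {
	total := 0
	for _, s := range shares {
		total += s
	}
	out := make([]float64, len(shares))
	for i, s := range shares {
		out[i] = float64(s) / float64(total)
	}
	return out
}

func main() {
	// A=1536, B=512 -> A gets 0.75 of the CPU, B gets 0.25
	fmt.Println(shareFraction([]int{1536, 512})) // [0.75 0.25]
}
```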

There is one more parameter, --cpuset-cpus, which pins the container to specific cores. Testing with the image above, pin the container to cores 0 and 3:


docker run --name cpuuse -d --cpuset-cpus=0,3 fangfenghua/cpuuseset 4

Occupancy of cores 0 and 3:


[root@localhost src]# dstat -c -C 0,3
-------cpu0-usage--------------cpu3-usage------
usr sys idl wai hiq siq:usr sys idl wai hiq siq
 25  9 66  0  0  0: 12  1 87  0  0  0
100  0  0  0  0  0:100  0  0  0  0  0
 99  0  0  0  0  1:100  0  0  0  0  0
 99  1  0  0  0  0: 99  1  0  0  0  0
100  0  0  0  0  0:100  0  0  0  0  0
100  0  0  0  0  0:100  0  0  0  0  0

Occupancy of cores 1 and 2:


[root@localhost src]# dstat -c -C 1,2
-------cpu1-usage--------------cpu2-usage------
usr sys idl wai hiq siq:usr sys idl wai hiq siq
 21  8 71  0  0  0: 10  1 89  0  0  0
 0  0 100  0  0  0: 0  0 100  0  0  0
 0  0 100  0  0  0: 0  0 100  0  0  0
 0  0 100  0  0  0: 0  0 100  0  0  0
 0  0 100  0  0  0: 0  0 100  0  0  0
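As the dstat output shows, only the pinned cores run hot. The --cpuset-cpus value accepts comma-separated core IDs and ranges (e.g. 0-3 or 0,1). A small sketch of how such a list expands into individual core IDs (an illustrative parser, not the one Docker itself uses):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseCPUSet expands a cpuset list such as "0,3" or "0-2,4" into
// individual core IDs, mirroring the syntax accepted by --cpuset-cpus.
func parseCPUSet(s string) ([]int, error) {
	var cores []int
	for _, part := range strings.Split(s, ",") {
		if lo, hi, ok := strings.Cut(part, "-"); ok {
			// A range like "0-2": expand every core in it.
			a, err := strconv.Atoi(lo)
			if err != nil {
				return nil, err
			}
			b, err := strconv.Atoi(hi)
			if err != nil {
				return nil, err
			}
			for c := a; c <= b; c++ {
				cores = append(cores, c)
			}
		} else {
			// A single core ID like "4".
			c, err := strconv.Atoi(part)
			if err != nil {
				return nil, err
			}
			cores = append(cores, c)
		}
	}
	return cores, nil
}

func main() {
	fmt.Println(parseCPUSet("0,3"))   // [0 3] <nil>
	fmt.Println(parseCPUSet("0-2,4")) // [0 1 2 4] <nil>
}
```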
