Summary of Cache Cleaning Methods in Linux System

  • 2021-07-18 09:39:25
  • OfStack

1) Introduction to caching mechanism

On a Linux system, the kernel improves file-system performance by setting aside part of physical memory as a cache for system operations and file data. When the kernel receives a read or write request, it first checks whether the requested data is already in the cache; if so, it is returned directly, and if not, the kernel goes to the disk through the driver.
Advantages of the caching mechanism: fewer system calls, fewer CPU context switches, and fewer disk accesses.

CPU context switching: the CPU gives each process a fixed slice of service time. When the time slice is used up, the kernel takes the processor back from the running process, saves that process's current state, and then loads the next task. This procedure is called a context switch; essentially it is the switch between the process being suspended and the process about to run.
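As a quick way to see context switching in action, the kernel exposes a cumulative counter in /proc/stat (just an illustration; the `ctxt` field is standard on Linux):

```shell
# Print the total number of context switches since boot
grep '^ctxt' /proc/stat
```

Running it twice a second apart shows how quickly the counter grows on a busy system.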

2) View cache and memory usage


[root@localhost ~]# free -m
             total       used       free     shared    buffers     cached
Mem:          7866       7725        141         19         74       6897
-/+ buffers/cache:        752       7113
Swap:        16382         32      16350

From the output above, the total memory is 7866M, 7725M is used, and 141M remains free, which is how many people read it.
In fact, that is not the real utilization. Because of the caching mechanism, the actual figures are calculated as follows:

Free Memory = free (141) + buffers (74) + cached (6897)

Used Memory = total (7866) - Free Memory

From this, the free memory works out to 7112M and the used memory to 754M, which is the true utilization. The -/+ buffers/cache line reports the same corrected view of memory usage.
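The same arithmetic can be done directly from /proc/meminfo, which free itself reads (a sketch; MemFree, Buffers, and Cached are the standard field names, reported in kB):

```shell
# Actually-free memory = MemFree + Buffers + Cached, converted from kB to MB
awk '/^(MemFree|Buffers|Cached):/ {sum += $2}
     END {printf "actually free: %d MB\n", sum / 1024}' /proc/meminfo
```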

3) The cache distinguishes between buffers and cached

The kernel sizes these caches dynamically, within limits that still leave the system enough physical memory to operate and to read and write data normally.

buffers caches block-device data and file-system metadata; it can be thought of as the system cache. For example, the metadata touched when vi opens a file lands here.

cached is the page cache for file contents; it can be thought of as the data-block cache. For example, dd if=/dev/zero of=/tmp/test bs=1G count=1 writes a test file whose pages stay in the cache, so accessing /tmp/test again afterwards is noticeably faster.
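The effect on cached is easy to observe: write a small file and compare the Cached line of /proc/meminfo before and after (a sketch using a 10 MB file under /tmp; the path and size are arbitrary):

```shell
grep '^Cached:' /proc/meminfo                       # Cached before the write
dd if=/dev/zero of=/tmp/test bs=1M count=10 2>/dev/null
grep '^Cached:' /proc/meminfo                       # Cached afterwards: usually larger
rm -f /tmp/test                                     # clean up the test file
```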

4) Use of Swap

Swap is the swap partition: what we usually call virtual memory, a partition carved out of the disk. When physical memory runs short, the kernel moves memory pages that have not been used for a long time out to the disk, temporarily parking them in Swap. In other words, Swap only comes into play once physical memory, even after reclaiming cache (buffers/cache), is insufficient.

swap Cleanup:


swapoff -a && swapon -a

Note: there is a precondition for this cleanup: the free physical memory must be larger than the swap space already in use, otherwise the swapped-out pages cannot be moved back into RAM and swapoff -a will fail.
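That precondition can be checked from /proc/meminfo before cycling swap. This sketch only prints the decision; the actual swapoff -a && swapon -a must be run as root:

```shell
# Guard sketch: only cycle swap if free RAM can absorb what is currently swapped out
free_kb=$(awk '/^MemFree:/ {print $2}' /proc/meminfo)
swap_used_kb=$(awk '/^SwapTotal:/ {t=$2} /^SwapFree:/ {f=$2} END {print t-f}' /proc/meminfo)
if [ "$free_kb" -gt "$swap_used_kb" ]; then
    echo "safe: run swapoff -a && swapon -a (as root)"
else
    echo "unsafe: free memory ($free_kb kB) <= used swap ($swap_used_kb kB)"
fi
```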

5) Method of freeing buffer memory

a) Clean up pagecache (page cache)


# echo 1 > /proc/sys/vm/drop_caches    Or  # sysctl -w vm.drop_caches=1

b) Clean up dentries (directory cache) and inodes


# echo 2 > /proc/sys/vm/drop_caches    Or  # sysctl -w vm.drop_caches=2

c) Clean up pagecache, dentries, and inodes


# echo 3 > /proc/sys/vm/drop_caches    Or  # sysctl -w vm.drop_caches=3

All three methods take effect immediately, but they are one-shot operations: writing to drop_caches frees clean caches once and does not stop them from building up again. Adding vm.drop_caches=1/2/3 to /etc/sysctl.conf and running sysctl -p merely repeats that one-shot drop when the file is applied; drop_caches is a trigger, not a persistent tunable.

In addition, the sync command should be used first: it flushes the file-system caches by writing dirty data to disk, so nothing unwritten is lost when the caches are dropped


# sync

In most cases, the above operations will not cause damage to the system, but will only help to free up unused memory.

But if these operations are run while data is being actively written, you evict cached data the kernel would otherwise serve from memory, forcing everything back to the disk and hurting performance badly. How can that be avoided?

This is where the file /proc/sys/vm/vfs_cache_pressure comes in: it tells the kernel what priority to use when reclaiming the inode/dentry cache.

vfs_cache_pressure=100 is the default: the kernel reclaims dentries and inodes at a "fair" rate relative to pagecache and swapcache reclaim.
Decreasing vfs_cache_pressure makes the kernel prefer to retain the dentry and inode caches.
Increasing vfs_cache_pressure (above 100) makes the kernel prefer to reclaim dentries and inodes.

In summary, for the value of vfs_cache_pressure:
a value below 100 does not lead to a significant reduction of the cache;
a value above 100 tells the kernel to reclaim the cache with higher priority.

In practice, whatever the value, the kernel reclaims the cache at a fairly moderate pace; setting it very high (say 10000) simply brings the cache down to a "reasonable" level sooner.
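The current value can be inspected like any other /proc/sys tunable; reading needs no privileges, while changing it (echo or sysctl -w) requires root:

```shell
# Show the current reclaim pressure for the dentry/inode caches (default: 100)
cat /proc/sys/vm/vfs_cache_pressure
```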

Before releasing memory, use the sync command to synchronize and ensure file-system integrity: it writes all unwritten system buffers to disk, including modified inodes, delayed block I/O, and read-write mapped files. Otherwise, unsaved data may be lost while the cache is being released.
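A minimal safe sequence is therefore sync first, then the drop. This sketch only performs the drop when it actually has permission to write the file, so it degrades gracefully when not run as root:

```shell
# Flush dirty pages to disk before touching the caches
sync
# Drop pagecache, dentries, and inodes (echo 3) -- root only, so check first
if [ -w /proc/sys/vm/drop_caches ]; then
    echo 3 > /proc/sys/vm/drop_caches && echo "caches dropped"
else
    echo "skipped: need root to write /proc/sys/vm/drop_caches"
fi
```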

/proc is a virtual file system that can be read and written as a channel for communicating with the kernel: modifying files under /proc changes the kernel's current behavior. That is why writing to /proc/sys/vm/drop_caches frees up memory.

The value of drop_caches can be a number between 0 and 3, representing different meanings:

0: Do not release (system default)
1: Release the page cache
2: Release dentries and inodes
3: Release all caches

That covers the main points of clearing caches on a Linux system. Thank you for reading and for supporting this site.

