A detailed look at manually releasing the cache on Linux

  • 2020-10-07 18:58:42
  • OfStack

Linux commands to free memory:


sync
echo 1 > /proc/sys/vm/drop_caches

The value written to drop_caches can be 0 to 3, with the following meanings:
0: Do not release anything (the system default)
1: Release the page cache
2: Release reclaimable slab objects (dentries and inodes)
3: Release both the page cache and the dentries/inodes
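
Note that writing to /proc/sys/vm/drop_caches requires root, and "sudo echo 1 > /proc/sys/vm/drop_caches" fails because the redirection is performed by the unprivileged shell. Two common workarounds, shown here as a small sketch (assuming sudo and sysctl are available):


echo 1 | sudo tee /proc/sys/vm/drop_caches   # tee performs the write as root
sudo sysctl vm.drop_caches=1                 # same kernel parameter through the sysctl interface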

After the cache has been released, write the default value of 0 back so that the system manages the memory automatically again (on newer kernels this write may be rejected as invalid, in which case it can simply be skipped):


echo 0 > /proc/sys/vm/drop_caches

free -m # check whether the memory has actually been freed

If we need to release all the caches, type the following command:


echo 3 > /proc/sys/vm/drop_caches

Background on freeing memory in Linux

With Linux, you generally do not need to free memory manually because the system already manages it well. There are exceptions, however. Sometimes so much memory is occupied by the cache that the system starts using SWAP space, which hurts performance. For example, if you access files frequently on Linux, physical memory fills up quickly, and when the program finishes, that memory is not returned as free memory but is kept as cache. In such cases it is worth freeing memory (clearing the cache) manually.

The caching mechanism of the Linux kernel is quite sophisticated: it caches dentries (used by the VFS to speed up the translation of file path names to inodes), disk blocks in the Buffer Cache, and file contents in the Page Cache. After heavy file activity these caches can consume most of the memory, even though the file operations are already finished and that part of the cache is no longer needed. Do we just have to watch the cache occupy the memory? No, we can free it manually. /proc is a virtual file system whose read and write operations serve as a communication channel with the kernel, which means you can change current kernel behaviour by modifying files under /proc. In particular, memory can be freed by adjusting /proc/sys/vm/drop_caches. To do so, we first need to understand this key configuration file: it records the cache release parameter, and its default value is 0, which means the cache is not released.
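
Before changing the value you can check the current setting, either by reading the file directly or through the sysctl interface (a small sketch; on most kernels the file simply reports the last value written, so it reads 0 until something has been written):


cat /proc/sys/vm/drop_caches   # 0 by default
sysctl vm.drop_caches          # prints "vm.drop_caches = 0"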

After copying a large file you will notice that the available memory drops, with most of it counted as cache. This is how Linux improves the efficiency of file access: to speed up disk access, besides the dentry cache (used by the VFS to accelerate the translation of file path names to inodes), the kernel uses two main caches, the Buffer Cache for reads and writes of disk blocks and the Page Cache for reads and writes of file data by inode. These caches effectively shorten the time spent in I/O system calls such as read, write, and getdents.
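
You can watch the Page Cache grow by reading a large file and comparing the Cached value in /proc/meminfo before and after. A minimal sketch, assuming /tmp has roughly 1 GB of free space (the file name is arbitrary):


grep -E '^(Buffers|Cached)' /proc/meminfo          # note the values before
dd if=/dev/zero of=/tmp/bigfile bs=1M count=1024   # create a 1 GB test file
cat /tmp/bigfile > /dev/null                       # read it back through the page cache
grep -E '^(Buffers|Cached)' /proc/meminfo          # Cached is now noticeably larger
rm /tmp/bigfile                                    # clean up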

Before freeing memory, run the sync command to preserve file system integrity: it writes all unwritten system buffers to disk, including modified i-nodes, delayed block I/O, and read-write mapped files. Otherwise, unsaved data could be lost while the cache is being dropped.
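
Putting the two steps together, the complete release sequence (run as root) looks like this:


sync                                # flush dirty buffers to disk first
echo 3 > /proc/sys/vm/drop_caches   # drop the page cache, dentries and inodes
free -m                             # confirm that the cached memory has been released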


[root@fcbu.com ~]# free -m
             total    used    free   shared  buffers   cached
Mem:          7979    7897      82        0       30     3918
-/+ buffers/cache:    3948    4031
Swap:         4996     438    4558

Line 1 describes the memory used by the system from a global perspective:

total: total amount of memory

used: amount of memory already in use; this value is usually large because it includes the memory used by the cache as well as by applications

free: amount of free memory

shared: total amount of memory shared by multiple processes

buffers: buffer cache, mainly holding directory and inode metadata (running ls on a large directory makes this value grow; see the sketch after this list)

cached: page cache used for open files
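
A quick way to see the buffers value move is to walk a large directory tree and compare the output of free before and after; a small sketch (how much it grows depends on the filesystem and on what is already cached):


free -m | head -2          # note the buffers column
ls -lR /usr > /dev/null    # read a lot of directory and inode metadata
free -m | head -2          # buffers (and cached) should have grown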

Line 2 describes memory usage from the applications' point of view:
-buffers/cache = used - buffers - cached
+buffers/cache = free + buffers + cached
The first value is the memory actually used by applications (used minus the buffers and cached values).
The second value is the memory actually available to applications (free plus the buffers and cached values).
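
Using the sample free -m output above, the two values on line 2 work out as follows (the 1 MB differences are just rounding in the -m display):


used - buffers - cached = 7897 - 30 - 3918 = 3949 ≈ 3948  (memory used by applications)
free + buffers + cached =   82 + 30 + 3918 = 4030 ≈ 4031  (memory available to applications)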

Line 3 indicates the use of swap:
used: amount of swap space in use
free: amount of swap space not in use

Available memory = free + buffers + cached

Why is free so small? Is memory not released after an application closes?
In fact, this is simply because Linux manages memory differently from Windows. A small free value does not mean the system is running out of memory; what you should look at is line 2 of the free output, -/+ buffers/cache: 3948 4031, whose second value is the amount of memory actually available to the system.

Experience from real projects tells us that problems such as memory leaks or overflow in an application can be spotted quickly and easily from swap usage, whereas the free value is much harder to interpret. In my view, since the kernel can clear the buffers and cache quickly whenever it needs to but chooses not to by default (drop_caches is 0), we should not override that behaviour casually.
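
One simple way to keep an eye on swap is to watch the si/so columns reported by vmstat; sustained non-zero values point to real memory pressure rather than harmless caching. A small sketch using standard tools:


vmstat 5 5              # si/so columns show pages swapped in/out per second
free -m | grep -i swap  # total / used / free swap in megabytes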

In general, once the applications on a system are running stably, the free value also settles at a stable level, even though it may look small. When memory is genuinely insufficient, the available memory is smaller than what applications need, or OOM errors appear, you should analyse the application side instead, for example too many users driving memory use up or a memory leak in the application. Otherwise, clearing the buffers to force a larger free value only hides the problem temporarily, which is why, under normal circumstances, there is no need to release memory manually on Linux.

