How to Clean Up Docker Disk Space

  • 2021-11-01 05:24:41
  • OfStack

Some time ago, I ran into a problem where Docker had too little disk space left to write data. The cause was that I was running several MySQL containers locally (on a Mac Pro) and importing part of the production data, and the available space ran out before the import finished.

I had initially allocated 80 GB of disk space to Docker, and only 0.6 GB was left when the write failed.

You can check how much disk space Docker containers and images are using with the following command:


docker system df

You will see output similar to the following, covering images (Images), containers (Containers), data volumes (Local Volumes), and the build cache (Build Cache):


TYPE            TOTAL     ACTIVE    SIZE      RECLAIMABLE
Images          5         5         5.158GB   0B (0%)
Containers      6         6         7.601MB   0B (0%)
Local Volumes   4         3         46.64GB   207MB (0%)
Build Cache     34        0         1.609MB   1.609MB

As you can see, of these four types, Local Volumes takes up by far the most disk space. For a more detailed report, use the following command:


docker system df -v

This prints a lot of output; the Local Volumes section looks like this:


VOLUME NAME                                                        LINKS     SIZE
641d4976908910dca270a2bf5edf33408daf7474a0f27c850b6580b5936b6dd0   1         40.1GB
ovpn-data                                                          1         33.51kB
267b52c5eab8c6b8e0f0d1b02f8c68bdaffba5ea80a334a6d20e67d22759ef48   1         6.325GB
f4a3866ef41e3972e087883f8fa460ad947b787f2eafb6545c759a822fb6e30d   0         207MB
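
If you want to know which container one of these anonymous volumes belongs to, you can filter the container list by volume name and inspect the volume itself (assuming a reasonably recent Docker CLI; the long ID below is just the first entry from my output, substitute your own):


docker ps -a --filter volume=641d4976908910dca270a2bf5edf33408daf7474a0f27c850b6580b5936b6dd0
docker volume inspect 641d4976908910dca270a2bf5edf33408daf7474a0f27c850b6580b5936b6dd0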

To free up space, the first quick-and-dirty approach that comes to mind is to delete all stopped containers, along with unused images, networks, and the build cache. The command is as follows.


docker system prune -a 

However, be careful with this command: start every container you still need before running it, otherwise any container that is not running will be deleted. For safety, it does not by default delete data volumes that are not referenced by any container; if you want to delete those as well, you have to pass --volumes explicitly.

So if you want to remove unused containers, networks, images, and data volumes in one shot, without a confirmation prompt, you can use the following command.


docker system prune --all --force --volumes
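
If you would rather not wipe everything in one go, each resource type can also be pruned separately. These subcommands exist in any reasonably recent Docker version, though the exact behaviour of docker volume prune for named volumes has changed over time, so check docker volume prune --help on your install first:


docker container prune   # remove all stopped containers
docker image prune -a    # remove images not used by any container
docker volume prune      # remove unused volumes
docker builder prune     # remove the build cache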

The second method is to move the place where Docker stores its data to another location with more free disk space. If you are a Mac user, you can change the Disk image location setting in Docker Desktop's graphical preferences.
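
This article is about Docker Desktop on a Mac, but for completeness: if you run the Docker engine directly on Linux, the rough equivalent (a sketch, assuming a systemd-based install; the /mnt/external-ssd/docker path is just a placeholder) is to set data-root in /etc/docker/daemon.json:


{
  "data-root": "/mnt/external-ssd/docker"
}

After editing the file, restart the daemon, for example with sudo systemctl restart docker.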

I tried the second method, changing Disk image location to an external SSD and letting Docker Desktop copy the existing data over first. Later I hit a big problem: importing data into the MySQL container became very slow, probably because writes from inside the container are bottlenecked by the external SSD.

If you only need to run a few containers and are not storing database data locally, moving Docker's data to an external SSD is a perfectly reasonable choice.
