Summary of common commands and tips for Docker

  • 2021-07-03 01:01:38
  • OfStack

Installation script

Ubuntu / CentOS

Installation on Debian seems to be problematic; the package sources need to be fixed first.


curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh --mirror Aliyun  # or: --mirror AzureChinaCloud

If your server is with an overseas cloud provider such as AWS or GCP, the --mirror flag is of course unnecessary.

On CentOS, after installation you need to run sudo systemctl start docker.service manually, otherwise Docker commands will fail with an error saying the daemon is not running.
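The start/enable steps can be sketched as follows (this assumes a systemd-based distribution such as CentOS; it requires root and a Docker installation, so treat it as a command fragment rather than something to paste blindly):

```shell
# Start the Docker daemon now (CentOS does not start it automatically after install)
sudo systemctl start docker.service

# Also enable it so the daemon comes back after a reboot
sudo systemctl enable docker.service

# Or do both in one step:
# sudo systemctl enable --now docker.service
```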

Logs

Grep for a string

The right way: docker logs nginx 2>&1 | grep "127."

docker logs writes to both stdout and stderr, so redirect stderr with 2>&1 before piping to grep.

For example, to find the token of a Jupyter Notebook: docker logs notebook 2>&1 | grep "token"

Other supported options

-f: follow the log output, similar to tail -f

--since: show logs since a timestamp, e.g. 2013-01-02T13:23:37; relative times such as 42m are also supported

--until: same as above, but in the other direction

-t, --timestamps: show timestamps

--tail N (default all): show only the last N lines
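The options above can be combined; a sketch (the container name nginx and the grep pattern are placeholders, and a running Docker daemon is assumed):

```shell
# Follow the last 100 lines from the past 42 minutes, with timestamps;
# redirect stderr because docker logs writes to both streams
docker logs -f -t --since 42m --tail 100 nginx 2>&1 | grep "GET"
```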

Mount tips

Images such as Grafana ship with files inside the image. If you mount a host directory directly over such a path and the host directory is empty, the directory inside the container is overwritten by the empty mount. How do you deal with this?

Quick-and-dirty method 1 (idea only):

Run the container once, then copy the files out with the docker cp command.

Then delete that container, copy the files into the corresponding host directory, and start again with the mount.
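The steps of method 1 can be sketched like this, using Grafana as an example (the container name tmp-grafana and the host path are hypothetical; a running Docker daemon is assumed):

```shell
# 1. Run once without any mount, so the image's built-in files exist in the container
docker run -d --name tmp-grafana grafana/grafana

# 2. Copy the built-in data directory out to the host
docker cp tmp-grafana:/var/lib/grafana /path-to-data/grafana-data

# 3. Throw away the temporary container
docker rm -f tmp-grafana

# 4. Start again, this time mounting the now-populated host directory
docker run -d --name grafana \
  -v /path-to-data/grafana-data:/var/lib/grafana \
  grafana/grafana
```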

A more elegant method 2:

Take starting ClickHouse as an example


# Step 1.1: create a docker volume (purpose: expose the CH server's configuration)
docker volume create --driver local \
  --opt type=none \
  --opt device=/home/centos/workspace/clickhouse/configs \
  --opt o=bind \
  ch-server-configs

# Step 1.2: create a volume to mount the database data
docker volume create --driver local \
  --opt type=none \
  --opt device=/home/centos/workspace/clickhouse/data \
  --opt o=bind \
  ch-server-data

# Step 2: start (note: when there is a lot of stored data, step 2 takes a long
# time to initialize; connecting before initialization finishes will fail)
sudo docker run -d --name mkt-ch-server \
  -v ch-server-configs:/etc/clickhouse-server \
  -v ch-server-data:/var/lib/clickhouse \
  --restart always \
  -p 9000:9000 -p 8123:8123 \
  --ulimit nofile=262144:262144 yandex/clickhouse-server

This way, the configuration files that ship with the docker image are not wiped out on the first mount.

Scheduled tasks

For example, MySQL needs regular data backups. This is best done with the host's crond:


0 1 * * * docker exec mysqldump xxxx
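A fuller crontab line might look like the sketch below, assuming a container named some-mysql (as in the MySQL section that follows) and an example output path; this is a config fragment for the host's crontab:

```shell
# 01:00 every day: dump all databases from the some-mysql container to the host
# (container name and output path are examples)
0 1 * * * docker exec some-mysql sh -c 'exec mysqldump --all-databases -uroot -p"$MYSQL_ROOT_PASSWORD"' > /backup/all-databases.sql
```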

Common Docker images and their installation commands

MySQL

Installation


docker run --name some-mysql --restart always \
  -v /my/own/datadir:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:tag

Dump data

Mode 1: you already have a MySQL docker container locally

The following command targets the MySQL running inside Docker; you can also pass host parameters to dump a remote MySQL directly.


docker exec some-mysql sh -c 'exec mysqldump --all-databases -uroot -p"$MYSQL_ROOT_PASSWORD"' > /path-to-data/all-databases.sql

Mode 2: no MySQL docker container is available locally


# --rm removes the container after use; the password is prompted for on the command line
docker run -i --rm mysql:5.7 mysqldump --all-databases \
  -h 172.17.0.1 -uroot -p | gzip -9 > /home/centos/workspace/mysql-data/backup.sql.gz


Restore data

Refer to the dump commands above; just change the command-line tool from mysqldump to mysql.
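A sketch of the restore, mirroring the dump command above (the container name and file path are examples; this assumes a running some-mysql container):

```shell
# Feed the dump back into the mysql client inside the container via stdin;
# -i keeps stdin open so the redirected file is piped through
docker exec -i some-mysql sh -c 'exec mysql -uroot -p"$MYSQL_ROOT_PASSWORD"' < /path-to-data/all-databases.sql
```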

Python Proxy

Sooner or later everyone writes a crawler or two. Make full use of the cloud server's IP as a crawler proxy. So far the simplest way I have found to set one up:


docker run --name py-proxy -d --restart always -p 8899:8899 abhinavsingh/proxy.py

Notes:

The Python script in this docker image is fairly old and does not support basic auth; if you need basic auth, update the Python file and rebuild with docker build. GitHub address: https://github.com/abhinavsingh/proxy.py

In actual production it seems to stop accepting connections under heavy use; that may also have been a problem with the target website. It can also be used as a browser proxy (e.g. with SwitchySharp), but in that case adding https + basic auth is recommended; see the official documentation for details.
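A quick way to check that the proxy is working, assuming it was started on port 8899 as above and the machine has outbound network access (the target URL is just an example):

```shell
# Send a request through the proxy and print only the HTTP status code
curl -x http://127.0.0.1:8899 -s -o /dev/null -w "%{http_code}\n" http://example.com
```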

Jupyter Notebook

After trying a few options, I find the Notebook bundled with the tensorflow image the simplest, because there are no strange permission problems when mounting a host directory. The bash script is as follows:


sudo docker run --name notebook -d --restart always \
 -p 127.0.0.1:8888:8888 \
 -v /path-to-workspace/notebooks:/tf \
 tensorflow/tensorflow:latest-py3-jupyter

If you also need to connect to Apache Spark and the like, refer to the following script:


sudo docker run --name pyspark-notebook -d \
 --net host --pid host -e TINI_SUBREAPER=true -p 8888:8888 \
 -v /path-to-workspace/notebooks:/tf \
 tensorflow/tensorflow:latest-py3-jupyter

Grafana


ID=$(id -u)

docker run \
  -d --restart always \
  -p 3000:3000 \
  --name=grafana \
  --user $ID -v /path-to-data/grafana-data:/var/lib/grafana \
  -e "GF_INSTALL_PLUGINS=grafana-clock-panel,grafana-simple-json-datasource" \
  -e "GF_SECURITY_ADMIN_PASSWORD=aaabbbccc" \
  grafana/grafana

Some brief explanations:

--user $ID must be set, otherwise permission issues appear inside the container.

GF_INSTALL_PLUGINS: installs plugins that are not bundled with Grafana.

GF_SECURITY_ADMIN_PASSWORD: the login becomes admin/aaabbbccc.
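To verify the container came up correctly, Grafana's HTTP API exposes a health endpoint (this assumes the container was started on port 3000 as above):

```shell
# Returns a small JSON document with the database status and version when healthy
curl -s http://127.0.0.1:3000/api/health
```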
