An in-depth understanding of Docker's four networking modes


Here's a look at the four networking modes that Docker offers. In the examples below, the host IP is 186.100.8.117 and the container network is 172.17.0.0/16.

bridge mode (default)

Create a container (bridge is the default, so --net="bridge" does not need to be specified; note that an eth0 interface is created inside the container):


[root@localhost ~]# docker run -i -t mysql:latest /bin/bash
root@e2187aa35875:/usr/local/mysql# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
  link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
  inet 127.0.0.1/8 scope host lo
    valid_lft forever preferred_lft forever
  inet6 ::1/128 scope host
    valid_lft forever preferred_lft forever
75: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
  link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
  inet 172.17.0.2/16 scope global eth0
    valid_lft forever preferred_lft forever
  inet6 fe80::42:acff:fe11:2/64 scope link
    valid_lft forever preferred_lft forever

The container can reach the host network:


root@e2187aa35875:/usr/local/mysql# ping 186.100.8.117
PING 186.100.8.117 (186.100.8.117): 48 data bytes
56 bytes from 186.100.8.117: icmp_seq=0 ttl=64 time=0.124 ms

eth0 is actually one end of a veth pair; the other end (vethb689485) is attached to the docker0 bridge:


[root@localhost ~]# ethtool -S vethb689485
NIC statistics:
   peer_ifindex: 75
[root@localhost ~]# brctl show
bridge name   bridge id        STP enabled   interfaces
docker0     8000.56847afe9799    no       vethb689485

Access from the container to the external network is handled by iptables:


[root@localhost ~]# iptables-save |grep 172.17.0.*
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A FORWARD -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 5000 -j ACCEPT
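
The FORWARD rule for port 5000 above comes from a published container port. As a rough sketch (the image name and the 5000:5000 mapping are placeholders, not taken from the output above), publishing a port with -p makes Docker add a DNAT rule on the host that forwards incoming traffic to the container:


docker run -d -p 5000:5000 some-image
# Docker then installs a nat-table rule roughly like this one,
# DNAT-ing traffic that reaches the host's port 5000 to the container:
iptables -t nat -A DOCKER ! -i docker0 -p tcp -m tcp --dport 5000 -j DNAT --to-destination 172.17.0.2:5000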

none mode

Specified with: --net="none"

As you can see, a container created this way has no network connectivity at all (only the loopback interface):


[root@localhost ~]# docker run -i -t --net="none" mysql:latest /bin/bash
root@061364719a22:/usr/local/mysql# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
  link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
  inet 127.0.0.1/8 scope host lo
    valid_lft forever preferred_lft forever
  inet6 ::1/128 scope host
    valid_lft forever preferred_lft forever
root@061364719a22:/usr/local/mysql# ping 186.100.8.117
PING 186.100.8.117 (186.100.8.117): 48 data bytes
ping: sending packet: Network is unreachable

So what is this mode good for?

In fact, none mode puts the responsibility for creating the container's network entirely in the user's hands, which makes more flexible and complex network setups possible (a manual example is sketched below).

In addition, the container can still communicate with other containers via container links. (More on that later)
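
Here is a minimal sketch of wiring a --net="none" container to the docker0 bridge by hand. The interface names, container_id and the 172.17.0.100 address are illustrative, and 172.17.0.1 assumes the default docker0 gateway; adjust them to your environment.


# Find the container's PID and expose its network namespace to ip netns
pid=$(docker inspect -f '{{.State.Pid}}' container_id)
mkdir -p /var/run/netns
ln -sf /proc/$pid/ns/net /var/run/netns/container_id

# Create a veth pair and attach the host-side end to docker0
ip link add veth_h type veth peer name veth_c
brctl addif docker0 veth_h
ip link set veth_h up

# Move the other end into the container and configure address and gateway
ip link set veth_c netns container_id
ip netns exec container_id ip addr add 172.17.0.100/16 dev veth_c
ip netns exec container_id ip link set veth_c up
ip netns exec container_id ip route add default via 172.17.0.1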

host mode

Specified with: --net="host"

A container created this way sees all the network devices on the host.

Inside the container you have full access to those devices, as well as to host services such as D-Bus, which is why Docker warns that this mode is not secure.

If you use it in a well-isolated environment (for example, inside a tenant's virtual machine), this is not a problem.
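
A minimal illustration, reusing mysql:latest from the earlier examples (output omitted, prompts illustrative): with --net="host", ip addr inside the container lists the host's own interfaces (its eth0, docker0, the veth* ends) instead of a private container eth0.


[root@localhost ~]# docker run -i -t --net="host" mysql:latest /bin/bash
root@localhost:/usr/local/mysql# ip addr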

container mode (reusing another container's network stack)

--net="container:name or id"

As the following example shows, the two containers end up with exactly the same network stack:


[root@localhost ~]# docker run -i -t  mysql:latest /bin/bash
root@02aac28b9234:/usr/local/mysql# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
  link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
  inet 127.0.0.1/8 scope host lo
    valid_lft forever preferred_lft forever
  inet6 ::1/128 scope host
    valid_lft forever preferred_lft forever
77: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
  link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff
  inet 172.17.0.3/16 scope global eth0
    valid_lft forever preferred_lft forever
  inet6 fe80::42:acff:fe11:3/64 scope link
    valid_lft forever preferred_lft forever
[root@localhost ~]# docker run -i -t --net="container:02aac28b9234" mysql:latest /bin/bash
root@02aac28b9234:/usr/local/mysql# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
  link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
  inet 127.0.0.1/8 scope host lo
    valid_lft forever preferred_lft forever
  inet6 ::1/128 scope host
    valid_lft forever preferred_lft forever
77: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
  link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff
  inet 172.17.0.3/16 scope global eth0
    valid_lft forever preferred_lft forever
  inet6 fe80::42:acff:fe11:3/64 scope link
    valid_lft forever preferred_lft forever
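
Because the two containers share a single network namespace, anything listening in the first container is reachable from the second over the loopback interface. A quick illustrative check, assuming mysqld is running on its default port 3306 in the first container:


# Run inside the second container: it reaches the mysqld of the first one via localhost
mysql -h 127.0.0.1 -P 3306 -u root -p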

Example: the network implementation in OpenStack nova-docker

The nova-docker plugin for OpenStack lets Nova manage containers in the same way it manages virtual machines.

It sets up the container network as follows: the container is first created with --net="none", and its network is then configured with the procedure below (OVS is used here as an example; a Linux bridge would also work).


# Create a veth pair
ip link add name veth00 type veth peer name veth01
# Attach one end of the veth pair to the OVS bridge br-int
ovs-vsctl -- --if-exists del-port veth00 -- add-port br-int veth00 -- set Interface veth00 external-ids:iface-id=iface_id external-ids:iface-status=active external-ids:attached-mac=00:ff:00:aa:bb:cc external-ids:vm-uuid=instance_id
# Bring up the host-side end (the new OVS port)
ip link set veth00 up

# Expose the container's network namespace to ip netns
mkdir -p /var/run/netns
ln -sf /proc/container_pid/ns/net /var/run/netns/container_id

# Move the other end of the veth pair into the container's namespace
ip link set veth01 netns container_id
# Configure the MAC address, IP address and default gateway inside the container
ip netns exec container_id ip link set veth01 address mac_address
ip netns exec container_id ifconfig veth01 ip
ip netns exec container_id ip route replace default via gateway dev veth01

At this point, the container is connected to the virtual network on the host. From there, br-int is connected to the other OVS bridges (such as br-tun or br-ex), which finally links the container to the business network.
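
A few sanity checks for the procedure above (the placeholders container_id, veth00, veth01 and gateway are the same ones used in the script):


# The host-side end should now show up as a port on br-int
ovs-vsctl list-ports br-int | grep veth00
# The other end should be visible inside the container's namespace
ip netns exec container_id ip addr show veth01
# And the gateway should be reachable from inside the container
ip netns exec container_id ping -c 1 gateway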

