Binding dual network cards to a single IP with bond on Linux (example configuration)


To provide high network availability, we may need to bind multiple network cards into one virtual network card that serves traffic to the outside, so that the connection is not interrupted even if one of the physical network cards fails.
Under Linux this technique is called bonding; on IBM it is called EtherChannel and on Broadcom it is called teaming. Whatever the name, the effect is the same: two or more network cards are used as a single card, which increases bandwidth and adds redundancy at the same time.

There are two ways to bind two network cards: bond and team.
This article describes the bond method first.

Modes supported by bond

bond supports seven modes in total, bond[0-6]; three of them are commonly used:

mode=0: the default; a load-balancing (round-robin) mode with redundancy, but the switch must be configured to match.
mode=1: active-backup mode; if the active link goes down, another slave automatically takes over, and no switch configuration is required.
mode=6: adaptive load-balancing mode with automatic backup; no switch configuration is required.

The other modes are explained as follows:

mode=2: (balance-xor) the slave used for transmission is chosen as (source MAC address XOR destination MAC address) modulo the number of slave network cards; other transmit hash strategies can be selected through the xmit_hash_policy option.
mode=3: (broadcast) every packet is transmitted on all slave network cards.
mode=4: (802.3ad dynamic link aggregation) an aggregation group is created at startup, and all slaves in the group share the same speed and duplex settings.
mode=4 has two prerequisites, however:

1. ethtool must be able to report the speed and duplex of each slave network card;
2. the switch must support IEEE 802.3ad dynamic link aggregation.

mode=5: (balance-tlb) outgoing traffic is distributed according to the current speed of each slave network card. Requirement: ethtool must be able to report the speed of each slave network card.
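
Once a bond exists, the bonding driver reports the mode it is actually running through sysfs, which is a handy way to double-check that the configuration took effect. A minimal check, assuming the bond created later in this article, named bond1, is already up:

# Show the active bonding mode (name and number); for mode 1 this prints "active-backup 1"
[root@web01 ~]# cat /sys/class/net/bond1/bonding/mode
# The xmit_hash_policy mentioned for mode 2 can be read the same way
[root@web01 ~]# cat /sys/class/net/bond1/bonding/xmit_hash_policy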

Configuring bond

NIC members       bond1 IP             bond mode
ens33, ens36      192.168.171.111      mode 1

Note: the IP address is configured on bond1; the physical network cards do not need IP addresses of their own.


# Load the bonding module and confirm that it is loaded
[root@web01 ~]# modprobe --first-time bonding
[root@web01 ~]# lsmod | grep bonding
bonding        141566 0 
# Create the bond1 configuration file
[root@web01 ~]# cat > /etc/sysconfig/network-scripts/ifcfg-bond1 << EOF
> DEVICE=bond1
> TYPE=Bond
> IPADDR=192.168.171.111
> NETMASK=255.255.255.0
> GATEWAY=192.168.171.2
> DNS1=114.114.114.114
> DNS2=8.8.8.8
> USERCTL=no
> BOOTPROTO=none
> ONBOOT=yes
> EOF
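# ( Optional alternative, not used in this article: on RHEL/CentOS the bonding
#   parameters can instead live directly in ifcfg-bond1 via a BONDING_OPTS line,
#   which removes the need for the modprobe configuration further below, e.g.: )
# [root@web01 ~]# echo 'BONDING_OPTS="mode=1 miimon=200"' >> /etc/sysconfig/network-scripts/ifcfg-bond1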
# Rewrite the ens33 configuration file
[root@web01 ~]# cat > /etc/sysconfig/network-scripts/ifcfg-ens33 << EOF
> DEVICE=ens33
> TYPE=Ethernet
> ONBOOT=yes
> BOOTPROTO=none
> DEFROUTE=yes
> IPV4_FAILURE_FATAL=no
> NAME=ens33
> MASTER=bond1        # must match the DEVICE value in the ifcfg-bond1 file above
> SLAVE=yes
> EOF
# Rewrite the ens36 configuration file
[root@web01 ~]# cat > /etc/sysconfig/network-scripts/ifcfg-ens36 << EOF
> DEVICE=ens36
> TYPE=Ethernet
> ONBOOT=yes
> BOOTPROTO=none
> DEFROUTE=yes
> IPV4_FAILURE_FATAL=no
> NAME=ens36
> MASTER=bond1
> SLAVE=yes
> EOF
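# Sanity check before restarting the network: the two slave files should show
# different DEVICE/NAME values but the same MASTER=bond1 (easy to get wrong when
# the ens36 file is copied from the ens33 one)
[root@web01 ~]# grep -E '^(DEVICE|NAME|MASTER|SLAVE)' /etc/sysconfig/network-scripts/ifcfg-ens3{3,6}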

# Configure the bonding module options (mode 1, link checked every 200 ms)
[root@web01 ~]# cat >> /etc/modprobe.d/bonding.conf << EOF
> alias bond1 bonding
> options bonding mode=1 miimon=200      # the virtual interface exposed to the outside is bond1
> EOF

# Restart the network service so the configuration takes effect
[root@web01 ~]# systemctl restart network

Note: If the network service fails to restart after the configuration and the log shows no error, try stopping NetworkManager and then restarting the network service again.
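One way to do that (acceptable for this lab setup; on a host that depends on NetworkManager you may prefer to keep it running and set NM_CONTROLLED=no on the interfaces instead):

# Stop NetworkManager so it does not manage the slave interfaces, then retry
[root@web01 ~]# systemctl stop NetworkManager
[root@web01 ~]# systemctl disable NetworkManager
[root@web01 ~]# systemctl restart network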

View the information of each network card after restarting the network


[root@web01 ~]# ip a show ens33
2: ens33: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond1 state UP group default qlen 1000
  link/ether 00:0c:29:9f:33:9f brd ff:ff:ff:ff:ff:ff
[root@web01 ~]# ip a show ens36
3: ens36: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond1 state UP group default qlen 1000
  link/ether 00:0c:29:9f:33:9f brd ff:ff:ff:ff:ff:ff
[root@web01 ~]# ip a show bond1
7: bond1: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
  link/ether 00:0c:29:9f:33:9f brd ff:ff:ff:ff:ff:ff
  inet 192.168.171.111/24 brd 192.168.171.255 scope global noprefixroute bond1
    valid_lft forever preferred_lft forever
  inet6 fe80::20c:29ff:fe9f:339f/64 scope link 
    valid_lft forever preferred_lft forever
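
The bonding driver also lists the enslaved interfaces through sysfs, which is a quick way to confirm that both NICs joined the bond (a small check, assuming the interface names used above):

# List the interfaces currently enslaved to bond1; both ens33 and ens36 should appear
[root@web01 ~]# cat /sys/class/net/bond1/bonding/slaves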

View information about bond1


# View the interface state of bond1
[root@web01 ~]# cat /proc/net/bonding/bond1        
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: load balancing (round-robin)      #  Binding mode 
MII Status: up      #  Interface status 
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: ens33       # first slave interface: ens33
MII Status: up        # interface status
Speed: 1000 Mbps         # link speed
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:29:9f:33:9f       # permanent MAC address of the interface
Slave queue ID: 0

Slave Interface: ens36      # second slave interface: ens36
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:29:9f:33:a9
Slave queue ID: 0

Once this is in place, taking down either ens33 or ens36 will not interrupt communication.
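
One way to verify the failover on a physical server (see the note below before trying this in a VMware virtual machine) is to keep pinging the bond's address from another host while one slave is taken down, and then confirm in /proc/net/bonding/bond1 that the bond stayed up. A rough sketch:

# From another host on the same network: keep pinging the bond's IP
ping 192.168.171.111

# On web01: take one slave down, check the remaining slave, then bring it back up
[root@web01 ~]# ip link set ens33 down
[root@web01 ~]# grep -A 1 'Slave Interface' /proc/net/bonding/bond1
[root@web01 ~]# ip link set ens33 up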

Note: If you are testing in a VMware Workstation virtual machine, do not test by running ifdown ens33 or ifdown ens36 directly; instead, simulate a link failure by disconnecting the corresponding network adapter in the virtual machine settings.

