Testing a highly available load-balancing cluster built with LVS+Keepalived


1. Starting the LVS high-availability cluster service

First, start the service on each real server node:
[root@localhost ~]# /etc/init.d/lvsrs start
start LVS of REALServer
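The lvsrs script itself is not reproduced in this article. For an LVS-DR cluster, a real-server script of this kind typically binds the VIP to a loopback alias and suppresses ARP replies for it; below is a minimal sketch (the VIP 192.168.12.135 comes from the configuration used in this article, everything else is illustrative and not necessarily identical to the author's script):

#!/bin/bash
# Minimal sketch of an LVS-DR real-server script (illustrative only).
VIP=192.168.12.135        # the cluster's virtual IP, from this article
case "$1" in
start)
        # Bind the VIP to a loopback alias so this node accepts packets
        # addressed to the VIP without answering ARP requests for it.
        /sbin/ifconfig lo:0 $VIP netmask 255.255.255.255 broadcast $VIP up
        /sbin/route add -host $VIP dev lo:0
        echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore
        echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce
        echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
        echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce
        echo "start LVS of REALServer"
        ;;
stop)
        /sbin/ifconfig lo:0 down
        echo "0" > /proc/sys/net/ipv4/conf/lo/arp_ignore
        echo "0" > /proc/sys/net/ipv4/conf/lo/arp_announce
        echo "0" > /proc/sys/net/ipv4/conf/all/arp_ignore
        echo "0" > /proc/sys/net/ipv4/conf/all/arp_announce
        echo "stop LVS of REALServer"
        ;;
*)
        echo "Usage: $0 {start|stop}"
        exit 1
        ;;
esac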
Then, start the Keepalived service on the master and standby Director Servers respectively:
[root@DR1 ~]# /etc/init.d/Keepalived start
After startup, check the LVS forwarding table on the master:
[root@DR1 ~]# ipvsadm -L
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP bogon:http rr
-> real-server1:http Route 1 1 0
-> real-server2:http Route 1 1 0
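Note that bogon, real-server1, and real-server2 in the listing are reverse-DNS names; adding -n makes ipvsadm print numeric addresses and ports instead, which is usually clearer. The expected output, reconstructed from the addresses that appear in the logs below:

[root@DR1 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 192.168.12.135:80 rr
-> 192.168.12.246:80 Route 1 1 0
-> 192.168.12.237:80 Route 1 1 0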
At this point, view the system log information for the Keepalived service as follows:
[root@localhost ~]# tail -f /var/log/messages
Feb 28 10:01:56 localhost Keepalived: Starting Keepalived v1.1.19 (02/27,2011)
Feb 28 10:01:56 localhost Keepalived_healthcheckers: Netlink reflector reports IP 192.168.12.25 added
Feb 28 10:01:56 localhost Keepalived_healthcheckers: Opening file '/etc/keepalived/keepalived.conf'.
Feb 28 10:01:56 localhost Keepalived_healthcheckers: Configuration is using : 12063 Bytes
Feb 28 10:01:56 localhost Keepalived: Starting Healthcheck child process, pid=4623
Feb 28 10:01:56 localhost Keepalived_vrrp: Netlink reflector reports IP 192.168.12.25 added
Feb 28 10:01:56 localhost Keepalived: Starting VRRP child process, pid=4624
Feb 28 10:01:56 localhost Keepalived_healthcheckers: Activating healtchecker for service [192.168.12.246:80]
Feb 28 10:01:56 localhost Keepalived_vrrp: Opening file '/etc/keepalived/keepalived.conf'.
Feb 28 10:01:56 localhost Keepalived_healthcheckers: Activating healtchecker for service [192.168.12.237:80]
Feb 28 10:01:57 localhost Keepalived_vrrp: VRRP_Instance(VI_1) Transition to MASTER STATE
Feb 28 10:01:58 localhost Keepalived_vrrp: VRRP_Instance(VI_1) Entering MASTER STATE
Feb 28 10:01:58 localhost Keepalived_vrrp: VRRP_Instance(VI_1) setting protocol VIPs.
Feb 28 10:01:58 localhost Keepalived_healthcheckers: Netlink reflector reports IP 192.168.12.135 added
Feb 28 10:01:58 localhost avahi-daemon[2778]: Registering new address record for 192.168.12.135 on eth0.
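For reference, a keepalived.conf on the master Director Server consistent with the output above would look roughly like the sketch below. The VIP, the two real-server addresses, the rr scheduler, DR forwarding, and the SMTP alert host 192.168.12.1 all come from this article's command output and logs; the interface, VRRP ID, priorities, authentication, and health-check timings are assumptions:

# Sketch of /etc/keepalived/keepalived.conf on the master Director Server.
# Addresses taken from this article's logs; other values are assumed.
global_defs {
   notification_email {
      admin@example.com            # assumed alert recipient
   }
   notification_email_from keepalived@localhost
   smtp_server 192.168.12.1        # SMTP alert host seen in the logs
   smtp_connect_timeout 30
   router_id LVS_DR1
}

vrrp_instance VI_1 {
    state MASTER                   # BACKUP on the standby Director Server
    interface eth0
    virtual_router_id 51           # assumed; must match on both Directors
    priority 100                   # assumed; the standby uses a lower value
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111             # assumed shared secret
    }
    virtual_ipaddress {
        192.168.12.135
    }
}

virtual_server 192.168.12.135 80 {
    delay_loop 6
    lb_algo rr                     # round-robin, as shown by ipvsadm
    lb_kind DR                     # "Route" forwarding in the ipvsadm output
    protocol TCP

    real_server 192.168.12.246 80 {
        weight 1
        TCP_CHECK {                # matches the "TCP connection ..." log lines
            connect_timeout 3      # assumed health-check timing
        }
    }
    real_server 192.168.12.237 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
        }
    }
}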

2. High availability test

High availability is provided by the two LVS Director Servers. To simulate a failure, we first stop the Keepalived service on the master Director Server and then observe the Keepalived log on the standby Director Server, which shows the following:
Feb 28 10:08:52 lvs-backup Keepalived_vrrp: VRRP_Instance(VI_1) Transition to MASTER STATE
Feb 28 10:08:54 lvs-backup Keepalived_vrrp: VRRP_Instance(VI_1) Entering MASTER STATE
Feb 28 10:08:54 lvs-backup Keepalived_vrrp: VRRP_Instance(VI_1) setting protocol VIPs.
Feb 28 10:08:54 lvs-backup Keepalived_vrrp: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.12.135
Feb 28 10:08:54 lvs-backup Keepalived_vrrp: Netlink reflector reports IP 192.168.12.135 added
Feb 28 10:08:54 lvs-backup Keepalived_healthcheckers: Netlink reflector reports IP 192.168.12.135 added
Feb 28 10:08:54 lvs-backup avahi-daemon[3349]: Registering new address record for 192.168.12.135 on eth0.
Feb 28 10:08:59 lvs-backup Keepalived_vrrp: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.12.135
As the log shows, the standby machine detected the master's failure almost immediately: it transitioned to the MASTER role, took over the master's virtual IP resources, and bound the virtual IP to its eth0 device.
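The takeover can also be verified directly on the standby machine, since Keepalived adds the VIP as a secondary address on eth0 (a hypothetical session; Keepalived uses a /32 mask by default):

[root@lvs-backup ~]# ip addr show eth0 | grep 192.168.12.135
    inet 192.168.12.135/32 scope global eth0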
Next, restart the Keepalived service on the master Director Server and continue to observe the log on the standby Director Server:
Feb 28 10:12:11 lvs-backup Keepalived_vrrp: VRRP_Instance(VI_1) Received higher prio advert
Feb 28 10:12:11 lvs-backup Keepalived_vrrp: VRRP_Instance(VI_1) Entering BACKUP STATE
Feb 28 10:12:11 lvs-backup Keepalived_vrrp: VRRP_Instance(VI_1) removing protocol VIPs.
Feb 28 10:12:11 lvs-backup Keepalived_vrrp: Netlink reflector reports IP 192.168.12.135 removed
Feb 28 10:12:11 lvs-backup Keepalived_healthcheckers: Netlink reflector reports IP 192.168.12.135 removed
Feb 28 10:12:11 lvs-backup avahi-daemon[3349]: Withdrawing address record for 192.168.12.135 on eth0.
As the log shows, once the standby machine detects that the master is back online, it returns to the BACKUP role and releases the virtual IP resource.
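The "Received higher prio advert" line is the mechanism at work here: the master advertises a higher VRRP priority, so it preempts the standby as soon as it returns. In the configuration sketch shown earlier, that corresponds to fragments like these (values assumed):

    priority 100    # on the master Director Server
    priority 90     # on the standby Director Server; lower, so the master preempts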

3. Load balancing test

Here, assume that the document root of the www service on both real server nodes is the /webdata/www directory, then do the following.
On real server1, execute:
echo "This is real server1" > /webdata/www/index.html
On real server2, execute:
echo "This is real server2" > /webdata/www/index.html
Then open a browser, visit http://192.168.12.135, and refresh the page repeatedly. If the page alternates between "This is real server1" and "This is real server2", LVS load balancing is working.
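The same check can be scripted from a client machine instead of refreshing a browser (the real-server addresses come from the logs above; assumes curl is installed):

# Check each node directly, then hit the VIP a few times; with rr
# scheduling the VIP responses should alternate between the two pages.
curl http://192.168.12.246/        # expect: This is real server1
curl http://192.168.12.237/        # expect: This is real server2
for i in 1 2 3 4; do curl -s http://192.168.12.135/; done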

4. Failover test

The failover test checks whether the Keepalived health-check module detects a node failure in time, removes the failed node from the cluster, and directs traffic to the remaining healthy node.
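Assuming the www service on the real servers is Apache httpd managed by SysV init (an assumption; the article does not name the web server), a node failure can be simulated like this:

[root@real-server1 ~]# /etc/init.d/httpd stop    # simulate a node failure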
With the service on the real server1 node stopped (simulating a node failure), the Keepalived logs on the master and standby Director Servers show the following:
Feb 28 10:14:12 localhost Keepalived_healthcheckers: TCP connection to [192.168.12.246:80] failed !!!
Feb 28 10:14:12 localhost Keepalived_healthcheckers: Removing service [192.168.12.246:80] from VS [192.168.12.135:80]
Feb 28 10:14:12 localhost Keepalived_healthcheckers: Remote SMTP server [192.168.12.1:25] connected.
Feb 28 10:14:12 localhost Keepalived_healthcheckers: SMTP alert successfully sent.
As the log shows, the Keepalived health-check module detected the failure of host 192.168.12.246 and removed the node from the cluster.
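The removal can be confirmed on the Director Server: the failed node no longer appears in the forwarding table (expected output, reconstructed from the earlier listing):

[root@DR1 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 192.168.12.135:80 rr
-> 192.168.12.237:80 Route 1 1 0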
If you visit http://192.168.12.135 now, you should only see "This is real server2", because node 1 has failed and the Keepalived health-check module has removed it from the cluster.
Now restart the service on the real server1 node and watch the Keepalived log again:
Feb 28 10:15:48 localhost Keepalived_healthcheckers: TCP connection to [192.168.12.246:80] success.
Feb 28 10:15:48 localhost Keepalived_healthcheckers: Adding service [192.168.12.246:80] to VS [192.168.12.135:80]
Feb 28 10:15:48 localhost Keepalived_healthcheckers: Remote SMTP server [192.168.12.1:25] connected.
Feb 28 10:15:48 localhost Keepalived_healthcheckers: SMTP alert successfully sent.
As the log shows, after the Keepalived health-check module detects that host 192.168.12.246 is back to normal, it adds the node back into the cluster.
Visit http://192.168.12.135 again and keep refreshing the page; you should once more see "This is real server1" and "This is real server2" alternately, indicating that Keepalived added the real server1 node back into the cluster after it recovered.
