LVS (DR) + Keepalived High-Availability Cluster: Dual-Machine Hot Standby

Date: 2020-09-24

  • The shortcomings of traditional LVS
  • Introduction to Keepalived
    • 1. A health-check tool designed specifically for LVS and HA
    • 2. How Keepalived works
    • 3. VRRP overview
    • 4. Practical applications of Keepalived
  • Installing and starting Keepalived
    • 1. Environment preparation
    • 2. Configuring the Keepalived master server
      • 2.1 Common configuration options
    • 3. Configuring the Keepalived slave server
  • Introduction to the LVS + Keepalived cluster
    • 1. Main advantages
    • 2. Testing the cluster
  • Hands-on case study
    • Lab topology
    • Lab procedure
      • I. Configure the master scheduler
        • 1. Adjust /proc kernel parameters
        • 2. Install ipvsadm and keepalived
        • 3. Clear existing load-balancing rules
        • 4. Edit the keepalived configuration
        • 5. Start the keepalived service
        • 6. Check the load-balancing rules
      • II. Configure the backup scheduler
        • 1. Adjust /proc kernel parameters
        • 2. Install ipvsadm and keepalived
        • 3. Clear existing load-balancing rules
        • 4. Edit the keepalived configuration
        • 5. Start the keepalived service
        • 6. Check the load-balancing rules
      • III. Set up shared storage
      • IV. Configure the web1 server
        • 1. Add the VIP to a lo:0 virtual interface
        • 2. Adjust /proc kernel parameters
        • 3. Add a local route
        • 4. Mount the NFS share
        • 5. Verify the mount
      • V. Configure the web2 server
        • 1. Add the VIP to a lo:0 virtual interface
        • 2. Adjust /proc kernel parameters
        • 3. Add a local route
        • 4. Mount the NFS share
        • 5. Verify the mount
      • VI. Cluster testing
        • 1. Test LVS round-robin scheduling: visit the VIP twice and confirm requests alternate between the two web servers
        • 2. Test keepalived failover

The shortcomings of traditional LVS

  • In enterprise applications, running a service on a single server creates the risk of a single point of failure
  • Once that single point fails, the service is interrupted, which can cause serious damage

Introduction to Keepalived

1. A health-check tool designed specifically for LVS and HA

  • Supports automatic failover (Failover)
  • Supports node health checking (Health Checking)
  • Official website: http://www.keepalived.org/
  • Versions 2.0 and later are the most widely used today

2. How Keepalived works

  • Keepalived uses the VRRP hot-standby protocol
  • It implements multi-machine hot standby for Linux servers

3. VRRP overview

  • VRRP (Virtual Router Redundancy Protocol) is a backup solution designed for routers
  • Several routers form a hot-standby group and provide service through a shared virtual IP address
  • Only one master router in each hot-standby group is active at any time; the others remain in a redundant state
  • If the active router fails, another router takes over the virtual IP address according to the configured priorities and continues to provide service

4. Practical applications of Keepalived

  1. Keepalived supports multi-machine hot standby; each hot-standby group can contain multiple servers

  2. In a dual-machine hot-standby setup, failover is achieved by floating the virtual IP address, so it works with all kinds of application servers

  3. Example: dual-machine hot standby for a Web service

  • Floating address (VIP): 192.168.10.72
  • Master and backup servers: 192.168.10.73, 192.168.10.74
  • Application service provided: Web
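
For this example, the master's keepalived.conf could look roughly like the minimal sketch below; only the VRRP part is shown, and the interface name, router_id, virtual_router_id and password are placeholder assumptions (the backup host would use state BACKUP and a lower priority):

global_defs {
   router_id WEB_HA_R1                 # placeholder name for this server
}
vrrp_instance VI_1 {
   state MASTER                        # this node starts as the master
   interface ens33                     # assumed physical interface carrying the VIP
   virtual_router_id 51                # must match on master and backup
   priority 100                        # the backup would use a lower value, e.g. 99
   advert_int 1                        # advertisement (heartbeat) interval in seconds
   authentication {
      auth_type PASS
      auth_pass 123456                 # assumed shared password
   }
   virtual_ipaddress {
      192.168.10.72                    # the floating VIP from the example above
   }
}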

Installing and starting Keepalived

1. Environment preparation

  1. When Keepalived is used in an LVS cluster, the ipvsadm management tool is also needed
  2. Install Keepalived with YUM
  3. Enable the Keepalived service (see the commands below)
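
On CentOS 7 this comes down to the following commands (the same ones used in the case study later on):

[root@localhost ~]# yum -y install ipvsadm keepalived    // install the LVS admin tool and keepalived
[root@localhost ~]# systemctl start keepalived           // start the service
[root@localhost ~]# systemctl enable keepalived          // make it start at boot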

2. Configuring the Keepalived master server

The Keepalived configuration directory is /etc/keepalived/
keepalived.conf is the main configuration file

  • The global_defs {…} section defines global parameters
  • A vrrp_instance <instance name> {…} section defines the VRRP hot-standby parameters
  • Comment lines start with the "!" character
  • A samples directory provides many example configurations for reference

2.1 Common configuration options

  1. router_id HA_TEST_R1: the name of this router (server)
  2. vrrp_instance VI_1: defines a VRRP hot-standby instance
  3. state MASTER: hot-standby state; MASTER means the master server
  4. interface ens33: the physical interface that carries the VIP address
  5. virtual_router_id 1: the virtual router ID; must be the same on every member of the hot-standby group
  6. priority 100: priority; the higher the value, the higher the priority
  7. advert_int 1: advertisement interval in seconds (heartbeat frequency)
  8. auth_type PASS: authentication type
  9. auth_pass 123456: password string
  10. virtual_ipaddress { vip }: the floating (VIP) addresses; more than one may be listed

3. Configuring the Keepalived slave server

The configuration of a Keepalived backup server differs from the master in only three options (see the snippet below):

  1. router_id: set to this server's own name
  2. state: set to BACKUP
  3. priority: set to a value lower than the master's
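
On the backup server those three lines end up looking like this (the values are the ones used in the case study later on):

router_id HA_TEST_R2      # this backup server's own name
state BACKUP              # this node is a backup
priority 99               # lower than the master's priority of 100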

Introduction to the LVS + Keepalived cluster

  • Keepalived was designed to build highly available LVS load-balancing clusters: it calls the ipvsadm tool to create virtual servers and manage the server pool, so it is far more than a dual-machine hot-standby tool
  • Building an LVS cluster with Keepalived is simpler and easier to use

1. Main advantages

  1. Provides hot-standby failover for the LVS load scheduler, improving availability
  2. Performs health checks on the nodes in the server pool, automatically removes failed nodes, and re-adds them after they recover

2. Testing the cluster

  • The failover process can be traced through the /var/log/messages log files on the master and backup schedulers
  • Commands such as "ipvsadm -ln" and "ipvsadm -lnc" show how the load is being distributed (see the example below)
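
A convenient way to watch both at once during a failover test is to run the following in two terminals on the scheduler that currently holds the VIP (a sketch):

[root@localhost ~]# watch -n 1 ipvsadm -lnc        // refresh the connection table every second
[root@localhost ~]# tail -f /var/log/messages      // follow keepalived's state transitions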

Hands-on case study

Lab topology
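
The addresses used throughout the walkthrough below:

  • Master scheduler: 192.168.30.10
  • Backup scheduler: 192.168.30.11
  • Floating address (VIP): 192.168.30.100
  • Web servers (real servers): 192.168.30.22 and 192.168.30.33
  • NFS shared storage: 192.168.30.44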

Lab procedure

I. Configure the master scheduler

1. Adjust /proc kernel parameters

[root@localhost ~]# vi /etc/sysctl.conf    // disable ICMP redirects so the DR-mode director does not redirect clients to the real servers
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.ens33.send_redirects = 0  
[root@localhost ~]# sysctl -p // apply the tuned parameters
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.ens33.send_redirects = 0

2. Install ipvsadm and keepalived

[root@localhost ~]# yum -y install ipvsadm keepalived

3. Clear existing load-balancing rules

[root@localhost ~]# ipvsadm -C

4. Edit the keepalived configuration

[root@localhost keepalived]# cd /etc/keepalived/
[root@localhost keepalived]# cp keepalived.conf keepalived.confbak
[root@localhost keepalived]# vim keepalived.conf
global_defs { 
   router_id HA_TEST_R1
}
vrrp_instance VI_1 { 
   state MASTER
   interface ens33
   virtual_router_id 1
   priority 100
   advert_int 1
   authentication { 
      auth_type PASS
      auth_pass 123456
   }
   virtual_ipaddress { 
      192.168.30.100 
   }
}

virtual_server 192.168.30.100 80 { 
    delay_loop 15
    lb_algo rr 
    lb_kind DR 
    persistence_timeout 60
    protocol TCP

    real_server 192.168.30.22 80 { 
        weight 1
        TCP_CHECK { 
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 4
        }
    }
    real_server 192.168.30.33 80 { 
        weight 1
        TCP_CHECK { 
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 4
        }
    }
}

The configuration above, explained line by line:
global_defs {
router_id HA_TEST_R1 #### name of this router (server): HA_TEST_R1
}
vrrp_instance VI_1 { #### defines the VRRP hot-standby instance
state MASTER #### hot-standby state; MASTER means the master server
interface ens33 #### the physical interface that carries the VIP address
virtual_router_id 1 #### virtual router ID; must be the same in every member of the hot-standby group
priority 100 #### priority; the higher the value, the higher the priority
advert_int 1 #### advertisement interval in seconds (heartbeat frequency)
authentication { #### authentication settings; must be the same in every member of the hot-standby group
auth_type PASS #### authentication type
auth_pass 123456 #### authentication password
}
virtual_ipaddress { #### floating (VIP) addresses; more than one may be listed
192.168.30.100
}
}
virtual_server 192.168.30.100 80 { #### virtual server address (VIP) and port
delay_loop 15 #### health-check interval in seconds
lb_algo rr #### round-robin scheduling algorithm
lb_kind DR #### direct-routing (DR) cluster mode
persistence_timeout 60 #### connection persistence time in seconds; prefix with ! to disable
protocol TCP #### the application service uses TCP
real_server 192.168.30.22 80 { #### address and port of the first web node
weight 1 #### node weight
TCP_CHECK { #### health-check method
connect_port 80 #### port to check
connect_timeout 3 #### connection timeout in seconds
nb_get_retry 3 #### number of retries
delay_before_retry 4 #### delay between retries in seconds
}
}
#### the second real_server block (192.168.30.33) is configured the same way
}

5. Start the keepalived service

[root@localhost keepalived]# systemctl start keepalived 
[root@localhost keepalived]# systemctl enable keepalived
Created symlink from /etc/systemd/system/multi-user.target.wants/keepalived.service to /usr/lib/systemd/system/keepalived.service.
[root@localhost keepalived]# ip addr show dev ens33 // check ens33: the VIP is added automatically once keepalived starts, no manual configuration is needed
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:2e:3b:31 brd ff:ff:ff:ff:ff:ff
    inet 192.168.30.10/24 brd 192.168.30.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.30.100/32 scope global ens33    ## the VIP address is now visible here
      …… (remaining output omitted)

6. Check the load-balancing rules

[root@localhost ~]# ipvsadm -ln // the rules are added automatically by keepalived
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.30.100:80 rr
  -> 192.168.30.22:80             Route   1      0          0         
  -> 192.168.30.33:80             Route   1      0          0         

II. Configure the backup scheduler

1. Adjust /proc kernel parameters

[root@localhost ~]# vi /etc/sysctl.conf
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.ens33.send_redirects = 0  
[root@localhost ~]# sysctl -p // apply the tuned parameters
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.ens33.send_redirects = 0

2. Install ipvsadm and keepalived

[root@localhost ~]# yum -y install ipvsadm keepalived

3. Clear existing load-balancing rules

[root@localhost ~]# ipvsadm -C

4. Edit the keepalived configuration

[root@localhost keepalived]# cd /etc/keepalived/
[root@localhost keepalived]# cp keepalived.conf keepalived.confbak
[root@localhost keepalived]# vim keepalived.conf
global_defs { 
   router_id HA_TEST_R2
}
vrrp_instance VI_1 { 
   state BACKUP
   interface ens33
   virtual_router_id 1
   priority 99
   advert_int 1
   authentication { 
      auth_type PASS
      auth_pass 123456
   }
   virtual_ipaddress { 
      192.168.30.100
   }
}

virtual_server 192.168.30.100 80 { 
    delay_loop 15
    lb_algo rr
    lb_kind DR
    persistence_timeout 60
    protocol TCP

    real_server 192.168.30.22 80 { 
        weight 1
        TCP_CHECK { 
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 4
        }
    }
    real_server 192.168.30.33 80 { 
        weight 1
        TCP_CHECK { 
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 4
        }
    }
}

The configuration above, explained (only the vrrp_instance section, which is where it differs from the master):
global_defs {
router_id HA_TEST_R2 #### name of this router (server): HA_TEST_R2
}
vrrp_instance VI_1 { #### defines the VRRP hot-standby instance
state BACKUP #### hot-standby state; BACKUP means the backup server
interface ens33 #### the physical interface that carries the VIP address
virtual_router_id 1 #### virtual router ID; must be the same in every member of the hot-standby group
priority 99 #### priority; lower than the master's 100
advert_int 1 #### advertisement interval in seconds (heartbeat frequency)
authentication { #### authentication settings; must be the same in every member of the hot-standby group
auth_type PASS #### authentication type
auth_pass 123456 #### authentication password
}
virtual_ipaddress { #### floating (VIP) addresses; more than one may be listed
192.168.30.100
}
}

5. Start the keepalived service

[root@localhost keepalived]# systemctl start keepalived 
[root@localhost keepalived]# systemctl enable keepalived
Created symlink from /etc/systemd/system/multi-user.target.wants/keepalived.service to /usr/lib/systemd/system/keepalived.service.
[root@localhost keepalived]# ip addr show dev ens33 // the VIP is not visible here yet, because this node is the backup
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:e5:5e:bb brd ff:ff:ff:ff:ff:ff
    inet 192.168.30.11/24 brd 192.168.30.255 scope global noprefixroute ens33
     …… (remaining output omitted)

6. Check the load-balancing rules

[root@localhost ~]# ipvsadm -ln // the rules are added automatically by keepalived
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.30.100:80 rr
  -> 192.168.30.22:80             Route   1      0          0         
  -> 192.168.30.33:80             Route   1      0          0         
[root@localhost ~]# tail -f /var/log/messages // follow the log to observe scheduling and failover events

III. Set up shared storage

[root@localhost ~]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.30.44  netmask 255.255.255.0  broadcast 192.168.30.255
        inet6 fe80::a52a:406e:6512:1c66  prefixlen 64  scopeid 0x20<link>
[root@localhost ~]# route -n // check the routing table and the gateway
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.30.0    0.0.0.0         255.255.255.0   U     100    0        0 ens33
192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0 virbr0
[root@localhost ~]# rpm -q nfs-utils // check whether nfs-utils is installed
nfs-utils-1.3.0-0.61.el7.x86_64
[root@localhost ~]# rpm -q rpcbind // check whether rpcbind is installed
rpcbind-0.2.0-47.el7.x86_64
[root@localhost ~]# yum -y install nfs-utils // already installed, as the output confirms
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
Package 1:nfs-utils-1.3.0-0.61.el7.x86_64 already installed and latest version
Nothing to do
[root@localhost ~]# yum -y install rpcbind // install the RPC (remote procedure call) service
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
Package rpcbind-0.2.0-47.el7.x86_64 already installed and latest version
Nothing to do
[root@localhost ~]# systemctl start nfs // start NFS
[root@localhost ~]# systemctl enable nfs // make NFS start at boot
Created symlink from /etc/systemd/system/multi-user.target.wants/nfs-server.service to /usr/lib/systemd/system/nfs-server.service.
[root@localhost ~]# systemctl start rpcbind
[root@localhost ~]# systemctl enable rpcbind
[root@localhost ~]# vi /etc/exports // define the export list
/opt/web1 192.168.30.0/24(rw,sync)
/opt/web2 192.168.30.0/24(rw,sync)
[root@localhost ~]# systemctl restart nfs
[root@localhost ~]# systemctl restart rpcbind
[root@localhost ~]# showmount -e // list the exported directories
Export list for localhost.localdomain:
/opt/web2 192.168.30.0/24
/opt/web1 192.168.30.0/24
[root@localhost web2]# exportfs -vr
exporting 192.168.30.0/24:/opt/web2
exporting 192.168.30.0/24:/opt/web1
[root@localhost ~]# mkdir /opt/web1/ /opt/web2/ // create the two shared directories
[root@localhost ~]# vi /opt/web1/index.html // create web1's page
<html>
<title>I'm Web1</title> <body><h1>I'm Web1</h1></body>
<img src="web1.jpg" />
</html>
[root@localhost ~]# vi /opt/web2/index.html // create web2's page
<html>
<title>I'm Web2</title> <body><h1>I'm Web2</h1></body>
<img src="web2.png" />
</html>

IV. Configure the web1 server

1. Add the VIP to a lo:0 virtual interface

[root@localhost ~]# cd /etc/sysconfig/network-scripts/
[root@localhost network-scripts]# cp ifcfg-lo ifcfg-lo:0
[root@localhost network-scripts]# vi ifcfg-lo:0
DEVICE=lo:0
IPADDR=192.168.30.100
NETMASK=255.255.255.255
ONBOOT=yes
[root@localhost network-scripts]# ifup lo:0 // bring up the lo:0 interface
[root@localhost network-scripts]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.30.33  netmask 255.255.255.0  broadcast 192.168.30.255
…… (output omitted)
lo:0: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 192.168.30.100  netmask 255.255.255.255

2. Adjust /proc kernel parameters

[root@localhost network-scripts]# vi /etc/sysctl.conf
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.default.arp_ignore = 1
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
[root@mysql2 network-scripts]# sysctl -p // apply the parameters
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.default.arp_ignore = 1
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2

3. Add a local route

[root@localhost network-scripts]# vi /etc/rc.local // make the route persistent across reboots
/sbin/route add -host 192.168.30.100 dev lo:0  // add a host route for the VIP via lo:0
[root@localhost network-scripts]# route add -host 192.168.30.100 dev lo:0
[root@mysql2 network-scripts]# route -n // check the routing table: the VIP route has been added
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.30.11   0.0.0.0         UG    100    0        0 ens33
192.168.30.0    0.0.0.0         255.255.255.0   U     100    0        0 ens33
192.168.30.100  0.0.0.0         255.255.255.255 UH    0      0        0 lo
192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0 virbr0

4. Mount the NFS share

[root@localhost ~]# yum -y install nfs-utils
[root@localhost ~]# showmount -e 192.168.30.44 // if nothing shows up, the exports may not have been published; run exportfs -rv on the NFS server again
Export list for 192.168.30.44:
/opt/web2 192.168.30.0/24
/opt/web1 192.168.30.0/24
[root@mysql2 ~]# yum -y install httpd
[root@mysql2 ~]# systemctl start httpd
[root@mysql2 ~]# systemctl enable httpd
[root@localhost html]# vi /etc/fstab
192.168.30.44:/opt/web1 /var/www/html nfs defaults,_netdev 0 0
[root@localhost html]# mount 192.168.30.44:/opt/web1 /var/www/html/

5. Verify the mount
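
A quick check that the share is mounted and being served (a sketch):

[root@localhost ~]# df -hT /var/www/html          // the filesystem type should show as nfs (or nfs4)
[root@localhost ~]# curl http://127.0.0.1/        // should return the "I'm Web1" page from the share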

V. Configure the web2 server

1. Add the VIP to a lo:0 virtual interface

[root@localhost html]# cd /etc/sysconfig/network-scripts/
[root@localhost network-scripts]# cp ifcfg-lo ifcfg-lo:0
[root@localhost network-scripts]# vi ifcfg-lo:0
DEVICE=lo:0
IPADDR=192.168.30.100
NETMASK=255.255.255.255
ONBOOT=yes 
[root@localhost network-scripts]# systemctl restart network
[root@localhost network-scripts]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.30.22  netmask 255.255.255.0  broadcast 192.168.30.255
…… (output omitted)
lo:0: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 192.168.30.100  netmask 255.255.255.255

2. Adjust /proc kernel parameters

[root@localhost network-scripts]# vi /etc/sysctl.conf
######## add the parameters below so the real server does not answer ARP requests for the VIP
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.default.arp_ignore = 1
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
[root@mysql2 network-scripts]# sysctl -p // apply the parameters
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.default.arp_ignore = 1
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2

3. Add a local route

[root@localhost network-scripts]# vi /etc/rc.local
/sbin/route add -host 192.168.30.100 dev lo:0   // add a host route for the VIP via lo:0
[root@localhost network-scripts]# route add -host 192.168.30.100 dev lo:0
[root@mysql2 network-scripts]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.30.11   0.0.0.0         UG    100    0        0 ens33
192.168.30.0    0.0.0.0         255.255.255.0   U     100    0        0 ens33
192.168.30.100  0.0.0.0         255.255.255.255 UH    0      0        0 lo
192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0 virbr0

4. Mount the NFS share

[root@localhost ~]# yum -y install nfs-utils
[root@localhost ~]# showmount -e 192.168.30.44 // if nothing shows up, the exports may not have been published; run exportfs -rv on the NFS server again
Export list for 192.168.30.44:
/opt/web2 192.168.30.0/24
/opt/web1 192.168.30.0/24
[root@mysql2 ~]# yum -y install httpd
[root@mysql2 ~]# systemctl start httpd
[root@mysql2 ~]# systemctl enable httpd
[root@localhost html]# vi /etc/fstab
192.168.30.44:/opt/web2 /var/www/html nfs defaults,_netdev 0 0
[root@localhost html]# mount 192.168.30.44:/opt/web2 /var/www/html/

5. Verify the mount
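
The same check as on web1, this time expecting the Web2 page (a sketch):

[root@localhost ~]# df -hT /var/www/html          // should show the 192.168.30.44:/opt/web2 NFS mount
[root@localhost ~]# curl http://127.0.0.1/        // should return the "I'm Web2" page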

VI. Cluster testing

1. Test LVS round-robin scheduling: visit the VIP twice and confirm requests alternate between the two web servers
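
From a client this can be checked with curl (a sketch; note that with persistence_timeout 60 enabled, requests from the same client stick to one real server for 60 seconds, so either test from two different clients or comment out the persistence line to see the alternation):

[root@client ~]# curl http://192.168.30.100/      // returns one page, e.g. "I'm Web1"
[root@client ~]# curl http://192.168.30.100/      // from another client (or after the persistence timeout) returns "I'm Web2"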


2. Test keepalived failover

2.1 Open the web page and capture packets. With both schedulers online, the VRRP advertisements are sent by the master server.

Ping the VIP address and check the corresponding MAC address in the client's ARP table; at this point it is the master's MAC address.
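
The capture and the ARP check could look like this (a sketch: run tcpdump on a scheduler and the ping/arp commands on a client):

[root@localhost ~]# tcpdump -i ens33 vrrp -nn      // the advertisements come from the master's address
[root@client ~]# ping -c 2 192.168.30.100          // reach the VIP from a client
[root@client ~]# arp -n | grep 192.168.30.100      // the MAC should be the master's ens33 MAC (00:0c:29:2e:3b:31)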



2.2 Stop keepalived on the master and test again; the VRRP advertisements are now sent by the backup server.
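
A sketch of the failover test: stop the service on the master, then confirm on the backup that it has taken over the VIP:

[root@localhost ~]# systemctl stop keepalived      // run on the master scheduler
[root@localhost ~]# ip addr show dev ens33         // run on the backup: 192.168.30.100/32 should now appear here
[root@localhost ~]# ipvsadm -ln                    // run on the backup: the virtual server rules are still in place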


Ping the VIP again and check the ARP table; the MAC address has now changed to the backup server's.

 