DR1 and DR2 run keepalived and LVS in a master/backup or master/master architecture; RS1 and RS2 run nginx to serve the web site.
3.3.2 LVS
vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
notification_email {
qingean@163.com
}
notification_email_from admin@test.com
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id LVS_MASTER
}
vrrp_instance VI_1 {
state MASTER # change to BACKUP on the BACKUP node
interface ens4
virtual_router_id 51
priority 100 # change to 90 on the BACKUP node
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
193.168.140.80
}
}
virtual_server 193.168.140.80 80 {
delay_loop 6
lb_algo rr
lb_kind DR
nat_mask 255.255.255.255
protocol TCP
real_server 193.168.140.152 80 {
weight 10
TCP_CHECK {
connect_timeout 10
nb_get_retry 3
delay_before_retry 3
connect_port 80
}
}
real_server 193.168.140.224 80 {
weight 10
TCP_CHECK {
connect_timeout 10
nb_get_retry 3
delay_before_retry 3
connect_port 80
}
}
}
Note: the clocks on all nodes must be synchronized (ntpdate ntp1.aliyun.com); firewalld must be disabled (systemctl stop firewalld.service, systemctl disable firewalld.service); SELinux must be set to permissive (setenforce 0); and every NIC must support MULTICAST (multicast) communication.
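These prerequisites can be applied on every node with commands such as the following (a minimal sketch; the NIC name ens4 is taken from the LVS configuration above, adjust to your environment):
ntpdate ntp1.aliyun.com
systemctl stop firewalld.service
systemctl disable firewalld.service
setenforce 0
ip link show ens4 | grep MULTICAST    # the flags line should contain MULTICAST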
3.3.3 RS
Modify sysctl.conf on all RS nodes:
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.ip_forward = 1
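After editing, the settings can be loaded without a reboot (a minimal sketch):
sysctl -p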
Run /sbin/ifconfig lo:0 193.168.140.80 broadcast 193.168.140.80 netmask 255.255.255.255
Use route -n to check whether it succeeded:
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 193.168.1.1 0.0.0.0 UG 100 0 0 ens4
193.168.0.0 0.0.0.0 255.255.0.0 U 100 0 0 ens4
193.168.140.80 0.0.0.0 255.255.255.255 UH 0 0 0 lo
If it did not succeed, run /sbin/route add -host 193.168.140.80 dev lo:0
3.4 Verification
3.4.1 Disable the firewall on all machines:
systemctl stop firewalld
3.4.2 Write a test page on every RS and start the httpd service
RS1: echo "RS1" > /var/www/html/index.html
RS2: echo "RS2" > /var/www/html/index.html
systemctl start httpd
3.4.3 Start the keepalived service on the master and backup LVS
systemctl start keepalived
3.4.4 Access
Access the VIP from a browser.
Refreshing the page alternates between RS1 and RS2.
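The same check can be scripted from a test machine (a minimal sketch using the VIP from this section):
for i in {1..10}; do curl http://193.168.140.80; done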
3.4.5 Check which server the test machine's requests are forwarded to
ipvsadm -lcn
IPVS connection entries
pro expire state source virtual destination
TCP 01:54 FIN_WAIT 10.167.225.60:53882 193.168.140.80:80
192.168.102.163:80
TCP 00:37 NONE 10.167.225.60:0 193.168.140.80:80 192.168.102.163:80
3.4.6 Test
Simulate taking down the master LVS: the service keeps working. Then take down Web1; at that point only Web2 is shown. This demonstrates IP load balancing and a highly available cluster. When the master LVS recovers, service switches back to it; and once the Keepalived health-check module detects that a failed web node has recovered, the node is re-added to the cluster.
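For reference, a sketch of the commands behind this test (host roles follow this section's naming; adjust to your environment):
systemctl stop keepalived    # on the master LVS: the VIP floats to the backup
systemctl stop httpd         # on RS1 (Web1): only RS2's page is returned
ip addr show ens4            # on the backup LVS: confirm it now holds 193.168.140.80
systemctl start keepalived   # after recovery, the master takes the VIP back
systemctl start httpd        # keepalived detects the recovered RS and re-adds it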
3.3 DR Mode Configuration
3.3.1 Environment Overview
OS       Load-balancing mode   VIP
RHEL7.4  DR                    193.168.140.80
Keepalived master/master architecture
Note:
LVS (Linux Virtual Server): Linux virtual server; here keepalived acts as the load balancer.
RS (Real Server): a real backend server.
VRRP (Virtual Router Redundancy Protocol): virtual router redundancy protocol, a routing protocol that removes the single point of failure of a statically configured gateway on a LAN.
Modify RS1 and RS2 to add the new VIP:
[root@RS1 ~]# cp RS.sh RS_bak.sh
[root@RS1 ~]# vim RS_bak.sh # add the new VIP
#!/bin/bash
#
vip=192.168.4.121
mask=255.255.255.255
case $1 in
start)
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
ifconfig lo:1 $vip netmask $mask broadcast $vip up
route add -host $vip dev lo:1
;;
stop)
ifconfig lo:1 down
echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
;;
*)
echo "Usage $(basename $0) start|stop"
exit 1
;;
esac
[root@RS1 ~]# bash RS_bak.sh start
[root@RS1 ~]# ifconfig
...
lo:0: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 192.168.4.120 netmask 255.255.255.255
loop txqueuelen 0 (Local Loopback)
lo:1: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 192.168.4.121 netmask 255.255.255.255
loop txqueuelen 0 (Local Loopback)
[root@RS1 ~]# scp RS_bak.sh root@192.168.4.119:~
root@192.168.4.119's password:
RS_bak.sh 100% 693 0.7KB/s 00:00
[root@RS2 ~]# bash RS_bak.sh start # run the script to add the new VIP
[root@RS2 ~]# ifconfig
...
lo:0: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 192.168.4.120 netmask 255.255.255.255
loop txqueuelen 0 (Local Loopback)
lo:1: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 192.168.4.121 netmask 255.255.255.255
loop txqueuelen 0 (Local Loopback)
1 What is Keepalived and what does it do?
1.1 What Keepalived is
Keepalived is a high-availability solution for LVS based on the VRRP protocol.
1.2 What Keepalived does
1.2.1 High availability through IP failover
The master and backup LVS share one virtual IP; at any moment only one LVS holds the VIP and serves traffic. If that LVS becomes unavailable, the VIP floats to the other LVS, which takes over serving traffic.
1.2.2 Health monitoring of the RS cluster
If an RS becomes unavailable, keepalived removes it from the cluster; when the RS recovers, keepalived adds it back to the cluster.
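Both behaviours can be observed from the LVS nodes, for example (a minimal sketch; the interface name ens4 matches the DR configuration in this document):
ip addr show ens4    # shows whether this LVS currently holds the VIP
ipvsadm -ln          # a failed RS disappears from the real-server list and reappears after it recovers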
2 What modes does Keepalived/LVS support, and how do they compare?
2.1 The modes
Keepalived (LVS) supports 3 forwarding modes: NAT (network address translation), DR (direct routing), and TUN (IP tunneling).
2.2 Overview of each mode
2.2.1 NAT
Advantages: the RS nodes in the cluster can run any operating system that supports TCP/IP, and they can use reserved private addresses; only the LVS needs a public IP address.
Disadvantages: limited scalability. When the number of RS nodes grows to around 20 or more, the LVS becomes the bottleneck of the whole system, because every request packet and every reply packet must be rewritten by the LVS.
2.2.2 TUN
For many Internet services (such as web servers), request packets are small while reply packets are usually large.
Advantages: the LVS is only responsible for dispatching requests to the RS nodes, and the RS nodes send replies directly to the clients. The LVS can therefore handle a very large request volume; in this mode a single load balancer can serve more than 100 RS nodes, and the LVS is no longer the system bottleneck.
Disadvantages: this approach requires all servers to support the "IP Tunneling" (IP encapsulation) protocol, which so far has only been implemented on Linux.
2.2.3 DR
Advantages: like TUN, the LVS only dispatches requests, and reply packets return to the client over a separate routing path. Compared with TUN, DR does not require tunnel support, so most operating systems can be used as RS.
Disadvantages: the LVS NIC and the RS NICs must be on the same network segment.
3 How are the different modes configured and verified?
3.1 Basic environment requirements
Two LVS nodes and n (n >= 2) RS nodes are required.
3.1.1 LVS
Install ipvsadm (the LVS management tool) and keepalived;
Enable IP forwarding:
vim /etc/sysctl.conf
net.ipv4.ip_forward = 1
Verify:
sysctl -p
net.ipv4.ip_forward = 1
3.1.2 RS
Install httpd (used for the final test).
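A minimal sketch of that preparation on RHEL 7 (the stock httpd package is assumed):
yum -y install httpd
systemctl start httpd
echo "RS1" > /var/www/html/index.html    # give each RS a page that identifies it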
3.2 NAT Mode Configuration
3.2.1 Environment Overview
OS       Load-balancing mode   VIP              NVIP
RHEL7.4  NAT                   193.168.140.80   192.168.102.165
Keepalived master/backup architecture
LVS1: ens3 192.168.102.161, ens4 193.168.140.79
LVS2: ens3 192.168.102.162, ens4 193.168.140.83
RS1:  ens3 192.168.102.163, gateway 192.168.102.165
RS2:  ens3 192.168.102.164, gateway 192.168.102.165
Set up DR1:
[root@DR1 ~]# yum -y install ipvsadm keepalived # install ipvsadm and keepalived
[root@DR1 ~]# vim /etc/keepalived/keepalived.conf # edit the keepalived.conf configuration file
global_defs {
notification_email {
root@localhost
}
notification_email_from keepalived@localhost
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id 192.168.4.116
vrrp_skip_check_adv_addr
vrrp_mcast_group4 224.0.0.10
}
vrrp_instance VIP_1 {
state MASTER
interface eno16777736
virtual_router_id 1
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass %&hhjj99
}
virtual_ipaddress {
192.168.4.120/24 dev eno16777736 label eno16777736:0
}
}
virtual_server 192.168.4.120 80 {
delay_loop 6
lb_algo rr
lb_kind DR
protocol TCP
real_server 192.168.4.118 80 {
weight 1
HTTP_GET {
url {
path /index.html
status_code 200
}
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
real_server 192.168.4.119 80 {
weight 1
HTTP_GET {
url {
path /index.html
status_code 200
}
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
}
[root@DR1 ~]# systemctl start keepalived
[root@DR1 ~]# ifconfig
eno16777736: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.4.116 netmask 255.255.255.0 broadcast 192.168.4.255
inet6 fe80::20c:29ff:fe93:270f prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:93:27:0f txqueuelen 1000 (Ethernet)
RX packets 14604 bytes 1376647 (1.3 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 6722 bytes 653961 (638.6 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eno16777736:0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.4.120 netmask 255.255.255.0 broadcast 0.0.0.0
ether 00:0c:29:93:27:0f txqueuelen 1000 (Ethernet)
[root@DR1 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 192.168.4.120:80 rr
-> 192.168.4.118:80 Route 1 0 0
-> 192.168.4.119:80 Route 1 0 0
Test from the client:
[root@client ~]# for i in {1..20};do curl http://192.168.4.120;done # the client accesses the VIP normally
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
[root@DR1 ~]# systemctl stop keepalived.service # stop the keepalived service on DR1
[root@DR2 ~]# systemctl status keepalived.service # check DR2: it has taken over the MASTER state
● keepalived.service - LVS and VRRP High Availability Monitor
Loaded: loaded (/usr/lib/systemd/system/keepalived.service; disabled; vendor preset: disabled)
Active: active (running) since Tue 2018-09-04 11:33:04 CST; 7min ago
Process: 12983 ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS (code=exited, status=0/SUCCESS)
Main PID: 12985 (keepalived)
CGroup: /system.slice/keepalived.service
├─12985 /usr/sbin/keepalived -D
├─12988 /usr/sbin/keepalived -D
└─12989 /usr/sbin/keepalived -D
Sep 04 11:37:41 happiness Keepalived_healthcheckers[12988]: SMTP alert successfully sent.
Sep 04 11:40:22 happiness Keepalived_vrrp[12989]: VRRP_Instance(VIP_1) Transition to MASTER STATE
Sep 04 11:40:23 happiness Keepalived_vrrp[12989]: VRRP_Instance(VIP_1) Entering MASTER STATE
Sep 04 11:40:23 happiness Keepalived_vrrp[12989]: VRRP_Instance(VIP_1) setting protocol VIPs.
Sep 04 11:40:23 happiness Keepalived_vrrp[12989]: Sending gratuitous ARP on eno16777736 for 192.168.4.120
Sep 04 11:40:23 happiness Keepalived_vrrp[12989]: VRRP_Instance(VIP_1) Sending/queueing gratuitous ARPs on eno16777736 for 192.168.4.120
Sep 04 11:40:23 happiness Keepalived_vrrp[12989]: Sending gratuitous ARP on eno16777736 for 192.168.4.120
Sep 04 11:40:23 happiness Keepalived_vrrp[12989]: Sending gratuitous ARP on eno16777736 for 192.168.4.120
Sep 04 11:40:23 happiness Keepalived_vrrp[12989]: Sending gratuitous ARP on eno16777736 for 192.168.4.120
Sep 04 11:40:23 happiness Keepalived_vrrp[12989]: Sending gratuitous ARP on eno16777736 for 192.168.4.120
[root@client ~]# for i in {1..20};do curl http://192.168.4.120;done # the client still accesses the VIP normally
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
LVS1: ens4 193.168.140.79
LVS2: ens4 193.168.140.83
RS1:  ens4 193.168.140.152
RS2:  ens4 193.168.140.224
Client test:
[root@client ~]# for i in {1..20};do curl http://192.168.4.120;done
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
[root@client ~]# for i in {1..20};do curl http://192.168.4.121;done
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
3.2.2 LVS
vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
notification_email {
qingean@163.com # recipient of failure notification emails
}
notification_email_from admin@test.com # sender address for notification emails
smtp_server 127.0.0.1 # send mail via the local SMTP server
smtp_connect_timeout 30
router_id LVS_MASTER # change to LVS_BACKUP on the BACKUP node
}
vrrp_instance VI_1 {
state MASTER # change to BACKUP on the BACKUP node
interface ens4
virtual_router_id 51 # virtual router ID, identical on master and backup
priority 100 # change to 90 on the BACKUP node
advert_int 1
authentication {
auth_type PASS
auth_pass 1111 # authentication password, must be identical on master and backup
}
virtual_ipaddress {
193.168.140.80 # virtual IP (VIP)
}
}
vrrp_instance LAN_GATEWAY { # define the gateway instance
state MASTER # change to BACKUP on the BACKUP node
interface ens3
virtual_router_id 62 # virtual router ID, identical on master and backup
priority 100 # change to 90 on the BACKUP node
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress { # virtual gateway IP on ens3
192.168.102.165
}
}
virtual_server 192.168.102.165 80 { # define the internal gateway virtual IP and port
delay_loop 6 # interval between RS health checks, in seconds
lb_algo rr
# scheduling algorithm: round robin (rr), weighted round robin (wrr), least connection (lc), weighted least connection (wlc), locality-based least connection (lblc), locality-based least connection with replication (lblcr), destination hashing (dh), source hashing (sh)
lb_kind NAT # use the LVS NAT forwarding mode
persistence_timeout 50
# connections from the same IP are sent to the same real server for 50 seconds (set to 0 when testing)
protocol TCP # use TCP to check RS health
real_server 192.168.102.161 80 { # first gateway node
weight 3 # node weight
TCP_CHECK { # health-check method
connect_timeout 3 # connection timeout
nb_get_retry 3 # number of retries
delay_before_retry 3 # retry interval in seconds
}
}
real_server 192.168.102.162 80 { # second gateway node
weight 3
TCP_CHECK {
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
}
virtual_server 193.168.140.80 80 { # define the service virtual IP and port
delay_loop 6
lb_algo rr
lb_kind NAT
persistence_timeout 50
protocol TCP
real_server 192.168.102.163 80 { # first RS
weight 3
TCP_CHECK {
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
connect_port 80
}
}
real_server 192.168.102.164 80 { # second RS
weight 3
TCP_CHECK {
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
connect_port 80
}
}
}
3.2.3 RS
Set the gateway of all RS nodes to 192.168.102.165:
vim /etc/sysconfig/network-scripts/ifcfg-ens3
GATEWAY=192.168.102.165
Restart the network service; use route -n to check whether it succeeded.
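For example (a sketch; restarting the network service is usually enough, a full reboot is not required):
systemctl restart network
route -n    # the 0.0.0.0 (default) row should now show gateway 192.168.102.165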
Example topology diagram (figure not included).
Set up RS2 by following the same configuration as RS1.
Set up RS1:
[root@RS1 ~]# yum -y install nginx # install nginx
[root@RS1 ~]# vim /usr/share/nginx/html/index.html # edit the home page
<h1> 192.168.4.118 RS1 server </h1>
[root@RS1 ~]# systemctl start nginx.service # start the nginx service
[root@RS1 ~]# vim RS.sh # script to configure lvs-dr on the RS
#!/bin/bash
#
vip=192.168.4.120
mask=255.255.255.255
case $1 in
start)
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
ifconfig lo:0 $vip netmask $mask broadcast $vip up
route add -host $vip dev lo:0
;;
stop)
ifconfig lo:0 down
echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
;;
*)
echo "Usage $(basename $0) start|stop"
exit 1
;;
esac
[root@RS1 ~]# bash RS.sh start
Use the ifconfig command to check whether MULTICAST is enabled:
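For example (a sketch, assuming the interface name eno16777736 used in this section):
ifconfig eno16777736 | grep MULTICAST    # the flags line should contain MULTICAST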
Modify DR1 and DR2:
[root@DR1 ~]# vim /etc/keepalived/keepalived.conf # edit DR1's configuration: add a new VRRP instance and configure the server group
...
vrrp_instance VIP_2 {
state BACKUP
interface eno16777736
virtual_router_id 2
priority 90
advert_int 1
authentication {
auth_type PASS
auth_pass UU**99^^
}
virtual_ipaddress {
192.168.4.121/24 dev eno16777736 label eno16777736:1
}
}
virtual_server_group ngxsrvs {
192.168.4.120 80
192.168.4.121 80
}
virtual_server group ngxsrvs {
...
}
[root@DR1 ~]# systemctl restart keepalived.service # restart the service
[root@DR1 ~]# ifconfig # eno16777736:1 is visible here because DR2 has not been configured yet, so DR1 holds both VIPs
eno16777736: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.4.116 netmask 255.255.255.0 broadcast 192.168.4.255
inet6 fe80::20c:29ff:fe93:270f prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:93:27:0f txqueuelen 1000 (Ethernet)
RX packets 54318 bytes 5480463 (5.2 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 38301 bytes 3274990 (3.1 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eno16777736:0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.4.120 netmask 255.255.255.0 broadcast 0.0.0.0
ether 00:0c:29:93:27:0f txqueuelen 1000 (Ethernet)
eno16777736:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.4.121 netmask 255.255.255.0 broadcast 0.0.0.0
ether 00:0c:29:93:27:0f txqueuelen 1000 (Ethernet)
[root@DR1 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 192.168.4.120:80 rr
-> 192.168.4.118:80 Route 1 0 0
-> 192.168.4.119:80 Route 1 0 0
TCP 192.168.4.121:80 rr
-> 192.168.4.118:80 Route 1 0 0
-> 192.168.4.119:80 Route 1 0 0
[root@DR2 ~]# vim /etc/keepalived/keepalived.conf # edit DR2's configuration file: add the instance and configure the server group
...
vrrp_instance VIP_2 {
state MASTER
interface eno16777736
virtual_router_id 2
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass UU**99^^
}
virtual_ipaddress {
192.168.4.121/24 dev eno16777736 label eno16777736:1
}
}
virtual_server_group ngxsrvs {
192.168.4.120 80
192.168.4.121 80
}
virtual_server group ngxsrvs {
...
}
[root@DR2 ~]# systemctl restart keepalived.service # restart the service
[root@DR2 ~]# ifconfig
eno16777736: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.4.117 netmask 255.255.255.0 broadcast 192.168.4.255
inet6 fe80::20c:29ff:fe3d:a31b prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:3d:a3:1b txqueuelen 1000 (Ethernet)
RX packets 67943 bytes 6314537 (6.0 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 23250 bytes 2153847 (2.0 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eno16777736:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.4.121 netmask 255.255.255.0 broadcast 0.0.0.0
ether 00:0c:29:3d:a3:1b txqueuelen 1000 (Ethernet)
[root@DR2 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 192.168.4.120:80 rr
-> 192.168.4.118:80 Route 1 0 0
-> 192.168.4.119:80 Route 1 0 0
TCP 192.168.4.121:80 rr
-> 192.168.4.118:80 Route 1 0 0
-> 192.168.4.119:80 Route 1 0 0