Table of Contents
- 1. Introduction to keepalived
- 1.1 What is keepalived?
- 1.2 A brief overview of Keepalived
- 1.3 Deployment plan
- 1.4 Installing nginx on the master and backup
- 2. Installing keepalived
- Keepalived configuration file explained
- The default keepalived configuration file
- 2.1 keepalived configuration
- 2.2 Configuring the master keepalived
- 2.3 Configuring the backup keepalived
- 2.4 Having keepalived monitor nginx
- 2.5 Adding the monitoring script to the keepalived configuration
- 2.6 Configuring the backup keepalived
- 3. High-availability test 1
- 3.1 High-availability test 2
- 4. Split brain
- 4.1 Causes of split brain
- 4.2 Common solutions for split brain
- 4.3 Monitoring for split brain
1. Introduction to keepalived
1.1 What is keepalived?
Keepalived was originally written for the LVS load-balancing software, to manage and monitor the state of each service node in an LVS cluster; VRRP-based high-availability functionality was added later. As a result, besides managing LVS, Keepalived can also serve as a high-availability solution for other services such as Nginx, HAProxy and MySQL.
Keepalived implements high availability mainly through the VRRP protocol. VRRP stands for Virtual Router Redundancy Protocol; it was created to eliminate the single point of failure of static routing and ensures that the network keeps running without interruption even when individual nodes go down.
So Keepalived can, on the one hand, configure and manage LVS and health-check the nodes behind it, and on the other hand, provide high availability for general network services.
Keepalived official website
1.2 A brief overview of Keepalived
Keepalived is a high-performance high-availability (hot-standby) solution for servers. It prevents single points of failure, and combined with Nginx it provides high availability for web front-end services.
Keepalived is built on the VRRP protocol, which it uses to implement high availability (HA). VRRP (Virtual Router Redundancy Protocol) is a protocol for router redundancy: it groups two or more routers into one virtual device and exposes one or more virtual router IPs to the outside. Inside the group, the router that actually owns the external IP and is working normally is the MASTER (or the MASTER is chosen by election). The MASTER performs all network functions for the virtual router IP, such as answering ARP requests, handling ICMP and forwarding traffic. The other devices do not own the virtual IP; they are in the BACKUP state and, apart from receiving the MASTER's VRRP advertisements, perform no external network functions. When the MASTER fails, a BACKUP takes over its network functions. VRRP transports its data in multicast packets, sent from a special virtual source MAC address rather than the NIC's own MAC address. While running, only the MASTER periodically sends VRRP advertisements, announcing that it is alive and which virtual router IP(s) it owns; the BACKUPs only receive and never send. If no advertisement arrives from the MASTER within a certain time, each BACKUP declares itself MASTER, starts sending advertisements, and a new MASTER election takes place.
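If you want to see these advertisements on the wire, a quick sanity check is to capture VRRP traffic (IP protocol 112, multicast group 224.0.0.18) on the interface keepalived binds to. This is just a sketch, assuming tcpdump is installed; ens192 is the master's NIC in the lab below and may differ on your machine:
[root@master ~]# tcpdump -i ens192 -nn 'ip proto 112'
Only the current MASTER should be sending these packets; if both nodes are sending, the cluster is already split-brained (see section 4).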
1.3 Deployment plan
Goal: make both web servers highly available.
VIP | IP | Hostname | Nginx port | Default role |
---|---|---|---|---|
192.168.70.250 | 192.168.70.134 | master | 80 | MASTER |
192.168.70.250 | 192.168.70.138 | backup | 80 | BACKUP |
The virtual IP (VIP) for this setup is tentatively set to 192.168.70.250.
In production, a public IP is usually used.
1.4 Installing nginx on the master and backup
Install nginx on master
[root@master ~]# dnf -y module install nginx:1.20
Install nginx on backup
[root@backup ~]# dnf -y module install nginx:1.20
Add a test page on master
[root@master testpage]# ls
index.html
[root@master testpage]# pwd
/usr/share/testpage
[root@master testpage]# echo 'master page' > index.html
[root@master testpage]# cat index.html
master page
[root@master testpage]# systemctl start nginx
[root@master testpage]# ss -antl
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 0.0.0.0:111 0.0.0.0:*
LISTEN 0 128 0.0.0.0:80 // nginx port 0.0.0.0:*
LISTEN 0 32 192.168.122.1:53 0.0.0.0:*
LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
LISTEN 0 5 127.0.0.1:631 0.0.0.0:*
LISTEN 0 128 [::]:111 [::]:*
LISTEN 0 128 [::]:80 // nginx port [::]:*
LISTEN 0 128 [::]:22 [::]:*
LISTEN 0 5 [::1]:631 [::]:*
// disable the firewall and SELinux
[root@master testpage]# systemctl disable --now firewalld
Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@master testpage]# setenforce 0
[root@master testpage]# vim /etc/selinux/config
// access the page in a browser
Add a test page on backup
[root@backup ~]# echo 'backup page' > /usr/share/testpage/index.html
[root@backup ~]# cat /usr/share/testpage/index.html
backup page
[root@backup ~]# systemctl start nginx
[root@backup ~]# ss -antl
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 0.0.0.0:111 0.0.0.0:*
LISTEN 0 128 0.0.0.0:80 // nginx port 0.0.0.0:*
LISTEN 0 32 192.168.122.1:53 0.0.0.0:*
LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
LISTEN 0 5 127.0.0.1:631 0.0.0.0:*
LISTEN 0 128 [::]:111 [::]:*
LISTEN 0 128 [::]:80 // nginx port [::]:*
LISTEN 0 128 [::]:22 [::]:*
LISTEN 0 5 [::1]:631 [::]:*
// disable the firewall and SELinux
[root@backup ~]# systemctl disable --now firewalld
Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@backup ~]# setenforce 0
[root@backup ~]# vim /etc/selinux/config
// access the page in a browser
2. Installing keepalived
keepalived must be installed on both the master and backup servers
[root@master ~]# dnf -y install keepalived
[root@backup ~]# dnf -y install keepalived
Keepalived configuration file explained
The default keepalived configuration file
The main keepalived configuration file is /etc/keepalived/keepalived.conf. Its contents are as follows:
[root@master ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs { // global configuration
notification_email { // recipients of alert emails
acassen@firewall.loc
failover@firewall.loc
sysadmin@firewall.loc
}
notification_email_from Alexandre.Cassen@firewall.loc // sender address for alert emails
smtp_server 192.168.200.1 // mail server address
smtp_connect_timeout 30 // mail server connection timeout
router_id LVS_DEVEL // router identifier, must be unique within the LAN
vrrp_skip_check_adv_addr
vrrp_strict
vrrp_garp_interval 0
vrrp_gna_interval 0
}
vrrp_instance VI_1 { // define an instance
state MASTER // initial state of this keepalived node, either MASTER or BACKUP
interface eth0 // NIC the VRRP instance binds to, used to send VRRP packets
virtual_router_id 51 // virtual router ID, must be identical across the cluster
priority 100 // priority, decides the master/backup role; the higher value wins
nopreempt // disable preemption
advert_int 1 // advertisement interval between master and backup
authentication { // authentication settings
auth_type PASS // authentication method, here a password
auth_pass 1111 // must be identical in every keepalived config of the same cluster; an 8-character random string is recommended
}
virtual_ipaddress { // VIP address(es) to use
192.168.200.16
}
}
virtual_server 192.168.200.16 1358 { // virtual server definition
delay_loop 6 // health-check interval
lb_algo rr // LVS scheduling algorithm
lb_kind NAT // LVS mode
persistence_timeout 50 // persistence timeout, in seconds
protocol TCP // layer-4 protocol
sorry_server 192.168.200.200 1358 // fallback server that answers clients when all real servers are down
real_server 192.168.200.2 1358 { // a real server that handles requests
weight 1 // server weight, defaults to 1
HTTP_GET {
url {
path /testurl/test.jsp // URL path to check
digest 640205b7b0fc66c1ea91c463fac6334d // digest of the expected response
}
url {
path /testurl2/test.jsp
digest 640205b7b0fc66c1ea91c463fac6334d
}
url {
path /testurl3/test.jsp
digest 640205b7b0fc66c1ea91c463fac6334d
}
connect_timeout 3 // connection timeout
nb_get_retry 3 // number of GET retries
delay_before_retry 3 // delay before retrying
}
}
real_server 192.168.200.3 1358 {
weight 1
HTTP_GET {
url {
path /testurl/test.jsp
digest 640205b7b0fc66c1ea91c463fac6334c
}
url {
path /testurl2/test.jsp
digest 640205b7b0fc66c1ea91c463fac6334c
}
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
}
Customizing the main configuration file
vrrp_instance section options
nopreempt // disables preemption. By default preemption is on: when a higher-priority machine comes back, it takes MASTER away from the lower-priority machine. With nopreempt, the lower-priority machine stays MASTER even after the higher-priority one is back online. To use this option, the initial state must be BACKUP.
preempt_delay // preemption delay in seconds, range 0-1000, default 0: how many seconds to wait after a lower-priority MASTER is detected before preempting it.
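As a small sketch of how this looks in practice (the NIC name and VIP below are simply the values used in the lab later in this article), a non-preempting instance is declared with state BACKUP plus nopreempt:
vrrp_instance VI_1 {
    state BACKUP          // both nodes start as BACKUP when preemption is disabled
    nopreempt             // only valid together with state BACKUP
    interface ens192
    virtual_router_id 51
    priority 100          // the two nodes still carry different priorities
    advert_int 1
    virtual_ipaddress {
        192.168.70.250
    }
}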
vrrp_script section options
// Purpose: define a script that is executed periodically. The script's exit code is recorded by every VRRP instance that tracks it.
// Note: at least one VRRP instance must track the script, and that instance's priority must not be 0. The valid priority range is 1-254.
vrrp_script <SCRIPT_NAME> {
...
}
// Option description:
script "/path/to/somewhere" // path of the script to execute
interval <INTEGER> // interval between executions, in seconds; default 1s
timeout <INTEGER> // number of seconds after which the script is considered to have failed
weight <-254 --- 254> // priority adjustment; default 2
rise <INTEGER> // number of consecutive successes before the script is considered up
fall <INTEGER> // number of consecutive failures before the script is considered down
user <USERNAME> [GROUPNAME] // user and group the script runs as
init_fail // assume the script starts in the failed state
// About weight:
1. If the script succeeds (exit code 0) and weight is greater than 0, the priority is increased.
2. If the script fails (non-zero exit code) and weight is less than 0, the priority is decreased.
3. In all other cases the priority stays unchanged.
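For example, with the values used later in this article (master priority 100, backup priority 90), a script tracked with weight -20 lowers the master's effective priority to 80 as soon as the check fails, which is below the backup's 90 and therefore triggers a failover. A minimal sketch (the name chk_web is just an illustrative label):
vrrp_script chk_web {
    script "/scripts/check_nginx.sh"   // exits non-zero when nginx is not running
    interval 2
    fall 2          // two consecutive failures before the result changes
    rise 1
    weight -20      // negative weight: priority drops by 20 on failure
}
// inside the existing vrrp_instance block:
track_script {
    chk_web
}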
real_server section options
weight <INT> // server weight; default 1
inhibit_on_failure // when the health check fails, set the server's weight to 0 instead of removing it from the virtual server
notify_up <STRING> // script to run when the health check succeeds
notify_down <STRING> // script to run when the health check fails
uthreshold <INT> // maximum number of connections to this server
lthreshold <INT> // minimum number of connections to this server
tcp_check section options
connect_ip <IP ADDRESS> // IP address to connect to; defaults to the real server's IP
connect_port <PORT> // port to connect to; defaults to the real server's port
bindto <IP ADDRESS> // source address used to initiate the connection
bind_port <PORT> // source port used to initiate the connection
connect_timeout <INT> // connection timeout; default 5s
fwmark <INTEGER> // mark all outgoing check packets with this fwmark
warmup <INT> // random delay of at most N seconds, to avoid network bursts; 0 disables it
retry <INT> // number of retries; default 1
delay_before_retry <INT> // seconds to wait before retrying; default 1
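Putting a few of these options together, a minimal TCP_CHECK block for an nginx real server might look like the sketch below (the address and port mirror the lab setup that follows):
real_server 192.168.70.134 80 {
    weight 1
    TCP_CHECK {
        connect_port 80          // probe the nginx port directly
        connect_timeout 3        // give up on a probe after 3 seconds
        retry 3                  // three attempts before marking the server down
        delay_before_retry 3     // wait 3 seconds between attempts
    }
}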
2.1 keepalived configuration
2.2 Configuring the master keepalived
[root@master keepalived]# pwd
/etc/keepalived
[root@master keepalived]# mv keepalived.conf{,.bak} // back up the original file first
[root@master keepalived]# ls
keepalived.conf.bak
[root@master keepalived]# vi keepalived.conf
[root@master keepalived]# cat keepalived.conf
! Configuration File for keepalived
global_defs {
router_id lb01
}
vrrp_instance VI_1 {
state MASTER
interface ens192 // must match the NIC name
virtual_router_id 51
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 123456 // any value works; ideally generate a random password, and it must match the one on backup
}
virtual_ipaddress {
192.168.70.250
}
}
virtual_server 192.168.70.250 80 {
delay_loop 6
lb_algo rr
lb_kind DR
persistence_timeout 50
protocol TCP
real_server 192.168.70.134 80 {
weight 1
TCP_CHECK {
connect_port 80
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
real_server 192.168.70.138 80 {
weight 1
TCP_CHECK {
connect_port 80
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
}
[root@master keepalived]# systemctl enable --now keepalived // keepalived itself does not listen on a port
Created symlink /etc/systemd/system/multi-user.target.wants/keepalived.service → /usr/lib/systemd/system/keepalived.service.
[root@master keepalived]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:d7:90:2e brd ff:ff:ff:ff:ff:ff
inet 192.168.70.134/24 brd 192.168.70.255 scope global dynamic noprefixroute ens192
valid_lft 1147sec preferred_lft 1147sec
inet 192.168.70.250/32 scope global ens192 // the VIP was added automatically
valid_lft forever preferred_lft forever
inet6 fe80::17d0:f28b:5740:4c88/64 scope link noprefixroute
valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
link/ether 52:54:00:2e:7b:5b brd ff:ff:ff:ff:ff:ff
inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
valid_lft forever preferred_lft forever
2.3 Configuring the backup keepalived
[root@backup ~]# cd /etc/keepalived/
[root@backup keepalived]# mv keepalived.conf{,.bak}
[root@backup keepalived]# ls
keepalived.conf.bak
[root@backup keepalived]# vi keepalived.conf
[root@backup keepalived]# cat keepalived.conf
! Configuration File for keepalived
global_defs {
router_id lb02
}
vrrp_instance VI_1 {
state BACKUP
interface ens33 // must match the NIC name
virtual_router_id 51
priority 90
advert_int 1
authentication {
auth_type PASS
auth_pass 123456 // any value works; ideally a random password, and it must match the one on master
}
virtual_ipaddress {
192.168.70.250
}
}
virtual_server 192.168.70.250 80 {
delay_loop 6
lb_algo rr
lb_kind DR
persistence_timeout 50
protocol TCP
real_server 192.168.70.134 80 {
weight 1
TCP_CHECK {
connect_port 80
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
real_server 192.168.70.138 80 {
weight 1
TCP_CHECK {
connect_port 80
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
}
[root@backup keepalived]# systemctl enable --now keepalived // enable and start the service
Created symlink /etc/systemd/system/multi-user.target.wants/keepalived.service → /usr/lib/systemd/system/keepalived.service.
[root@backup keepalived]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:20:8d:17 brd ff:ff:ff:ff:ff:ff
inet 192.168.70.138/24 brd 192.168.70.255 scope global dynamic noprefixroute ens33
valid_lft 1664sec preferred_lft 1664sec
inet6 fe80::e9:8fd:e2ab:a0c7/64 scope link noprefixroute
valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
link/ether 52:54:00:74:09:10 brd ff:ff:ff:ff:ff:ff
inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc fq_codel master virbr0 state DOWN group default qlen 1000
link/ether 52:54:00:74:09:10 brd ff:ff:ff:ff:ff:ff
[root@backup keepalived]#
Access the VIP in a browser; if it cannot be reached, stop the nginx service on the backup node.
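A quick way to verify from any host on the same network (assuming curl is available) is to request the VIP; since the master currently holds it, the master's test page should be returned:
$ curl http://192.168.70.250
master page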
2.4 Having keepalived monitor nginx
Write the scripts on master
[root@master ~]# mkdir /scripts // keep things tidy with a dedicated script directory
[root@master ~]# cd /scripts/
[root@master scripts]# vim check_nginx.sh
[root@master scripts]# cat check_nginx.sh
#!/bin/bash
nginx_status=$(ps -ef|grep -Ev "grep|$0"|grep '\bnginx\b'|wc -l)
if [ $nginx_status -lt 1 ];then
systemctl stop keepalived
fi
[root@master scripts]# chmod +x check_nginx.sh
[root@master scripts]# ls
check_nginx.sh
[root@master scripts]# vi notify.sh
[root@master scripts]# cat notify.sh
#!/bin/bash
VIP=$2
sendmail (){
subject="${VIP}'s server keepalived state has changed"
content="`date +'%F %T'`: `hostname`'s state changed to master"
echo $content | mail -s "$subject" 2902314105@qq.com
}
case "$1" in
master)
nginx_status=$(ps -ef|grep -Ev "grep|$0"|grep '\bnginx\b'|wc -l)
if [ $nginx_status -lt 1 ];then
systemctl start nginx
fi
sendmail
;;
backup)
nginx_status=$(ps -ef|grep -Ev "grep|$0"|grep '\bnginx\b'|wc -l)
if [ $nginx_status -gt 0 ];then
systemctl stop nginx
fi
;;
*)
echo "Usage:$0 master|backup VIP"
;;
esac
[root@master scripts]# chmod +x notify.sh
[root@master scripts]# ls
check_nginx.sh notify.sh
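Before wiring notify.sh into keepalived, it can be exercised by hand (this assumes the mail command is already configured on the host; the recipient address is the one hard-coded in the script):
# simulate a promotion to MASTER: starts nginx if it is down and sends the alert mail
[root@master scripts]# ./notify.sh master 192.168.70.250
# calling it without arguments prints the usage text
[root@master scripts]# ./notify.sh
Usage:./notify.sh master|backup VIP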
Write the script on backup
[root@backup ~]# mkdir /scripts
// simply copy the script over to backup
[root@master scripts]# scp notify.sh 192.168.70.138:/scripts/
root@192.168.70.138's password:
notify.sh 100% 662 187.9KB/s 00:00
[root@master scripts]#
[root@backup ~]# cd /scripts/
[root@backup scripts]# ls
notify.sh
2.5 Adding the monitoring script to the keepalived configuration
Configure the master keepalived
[root@master ~]# vi /etc/keepalived/keepalived.conf
[root@master ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
router_id lb01
}
vrrp_script nginx_check { // added; nginx_check is the name used when tracking the script
script "/scripts/check_nginx.sh" // script path
interval 1 // added
weight -20 // added; if the script fails, the priority is lowered by 20
} // added
vrrp_instance VI_1 {
state MASTER
interface ens192
virtual_router_id 51
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 123456
}
virtual_ipaddress {
192.168.70.250
}
track_script { // added
nginx_check // must match the script name above
} // added
notify_master "/scripts/notify.sh master 192.168.70.250" // added; pass the VIP
}
virtual_server 192.168.70.250 80 {
delay_loop 6
lb_algo rr
lb_kind DR
persistence_timeout 50
protocol TCP
real_server 192.168.70.134 80 {
weight 1
TCP_CHECK {
connect_port 80
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
real_server 192.168.70.138 80 {
weight 1
TCP_CHECK {
connect_port 80
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
}
[root@master ~]# systemctl restart keepalived
[root@master ~]# systemctl status keepalived
● keepalived.service - LVS and VRRP High Availability Monitor
Loaded: loaded (/usr/lib/systemd/system/keepalived.service; enabled; ve>
Active: active (running) since Wed 2022-08-31 00:29:26 CST; 16s ago
Process: 302042 ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS (code>
Main PID: 302046 (keepalived)
Tasks: 3 (limit: 10746)
Memory: 2.8M
CGroup: /system.slice/keepalived.service
├─302046 /usr/sbin/keepalived -D
├─302047 /usr/sbin/keepalived -D
└─302048 /usr/sbin/keepalived -D
8月 31 00:29:29 master Keepalived_vrrp[302048]: Sending gratuitous ARP on >
8月 31 00:29:30 master Keepalived_healthcheckers[302047]: TCP_CHECK on ser>
8月 31 00:29:30 master Keepalived_healthcheckers[302047]: Removing service>
8月 31 00:29:31 master Keepalived_healthcheckers[302047]: TCP connection t
2.6 Configuring the backup keepalived
The backup does not need to check whether nginx is healthy; it simply starts nginx when it is promoted to MASTER and stops nginx when it is demoted to BACKUP.
[root@backup keepalived]# vim keepalived.conf
[root@backup keepalived]# cat keepalived.conf
! Configuration File for keepalived
global_defs {
router_id lb02
}
vrrp_instance VI_1 {
state BACKUP
interface ens33
virtual_router_id 51
priority 90
advert_int 1
authentication {
auth_type PASS
auth_pass 123456
}
virtual_ipaddress {
192.168.70.250
}
notify_master "/scripts/notify.sh master 192.168.70.250" // added
notify_backup "/scripts/notify.sh backup 192.168.70.250" // added
}
virtual_server 192.168.70.250 80 {
delay_loop 6
lb_algo rr
lb_kind DR
persistence_timeout 50
protocol TCP
real_server 192.168.70.134 80 {
weight 1
TCP_CHECK {
connect_port 80
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
real_server 192.168.70.138 80 {
weight 1
TCP_CHECK {
connect_port 80
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
}
[root@backup keepalived]# systemctl restart keepalived
[root@backup keepalived]# systemctl status keepalived.service
● keepalived.service - LVS and VRRP High Availability Monitor
Loaded: loaded (/usr/lib/systemd/system/keepalived.service; enabled; ve>
Active: active (running) since Wed 2022-08-31 00:45:45 CST; 14s ago
Process: 333086 ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS (code>
Main PID: 333088 (keepalived)
3. High-availability test 1
While the master is up, the web site is reachable through the VIP.
Stop the nginx service on master; the backup takes over.
[root@master ~]# systemctl stop nginx
[root@backup ~]# ip a // check on the backup
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:20:8d:17 brd ff:ff:ff:ff:ff:ff
inet 192.168.70.138/24 brd 192.168.70.255 scope global dynamic noprefixroute ens33
valid_lft 1161sec preferred_lft 1161sec
inet 192.168.70.250/32 scope global ens33 //VIP
valid_lft forever preferred_lft forever
inet6 fe80::e9:8fd:e2ab:a0c7/64 scope link noprefixroute
valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state
With preemption enabled, the master automatically takes the VIP back as soon as its service recovers.
[root@master ~]# systemctl start nginx
[root@master ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:d7:90:2e brd ff:ff:ff:ff:ff:ff
inet 192.168.70.134/24 brd 192.168.70.255 scope global dynamic noprefixroute ens192
valid_lft 1665sec preferred_lft 1665sec
inet 192.168.70.250/32 scope global ens192
valid_lft forever preferred_lft forever
inet6 fe80::17d0:f28b:5740:4c88/64 scope link noprefixroute
valid_lft forever preferred_lft forever
3.1 High-availability test 2
With preemption disabled, the VIP stays where it is even after the master recovers. This is what enterprises usually use.
// modify the master's configuration; the backup keeps its previous configuration
[root@master ~]# vim /etc/keepalived/keepalived.conf
[root@master ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
router_id lb01
}
vrrp_script nginx_check { // the script section added earlier
script "/scripts/check_nginx.sh"
interval 1
weight -20
}
vrrp_instance VI_1 {
state BACKUP // changed to BACKUP, required when preemption is disabled
interface ens192 // must match the master's NIC name
virtual_router_id 51
priority 100
nopreempt // disable preemption
advert_int 1
authentication {
auth_type PASS
auth_pass 123456
}
virtual_ipaddress {
192.168.70.250
}
track_script { // track the script
nginx_check
}
notify_master "/scripts/notify.sh master 192.168.70.250"
}
virtual_server 192.168.70.250 80 {
delay_loop 6
lb_algo rr
lb_kind DR
persistence_timeout 50
protocol TCP
real_server 192.168.70.138 80 {
weight 1
TCP_CHECK {
connect_port 80
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
}
// restart keepalived
[root@master scripts]# systemctl restart keepalived.service
Bring the master's service back up: it does not get the VIP; the VIP is still on the backup.
[root@master ~]# systemctl start nginx.service
[root@master ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:d7:90:2e brd ff:ff:ff:ff:ff:ff
inet 192.168.70.134/24 brd 192.168.70.255 scope global dynamic noprefixroute ens192
valid_lft 1103sec preferred_lft 1103sec
inet6 fe80::17d0:f28b:5740:4c88/64 scope link noprefixroute
valid_lft forever preferred_lft forever
[root@backup ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:20:8d:17 brd ff:ff:ff:ff:ff:ff
inet 192.168.70.138/24 brd 192.168.70.255 scope global dynamic noprefixroute ens33
valid_lft 987sec preferred_lft 987sec
inet 192.168.70.250/32 scope global ens33
valid_lft forever preferred_lft forever
inet6 fe80::e9:8fd:e2ab:a0c7/64 scope link noprefixroute
valid_lft forever preferred_lft forever
4. Split brain
In a high-availability (HA) system, when the "heartbeat link" between the two nodes breaks, what used to be a single, coordinated whole splits into two independent nodes. Having lost contact, each assumes the other has failed. The HA software on the two nodes then behaves like a split-brain patient: both fight for the shared resources and both try to start the application services, with serious consequences. Either the shared resources get carved up and neither side can bring the services up, or both sides bring the services up and write to the shared storage at the same time, corrupting data (a common example is corruption of a database's online redo logs).
The countermeasures against HA split brain that are generally agreed on today are roughly the following:
Add redundant heartbeat links, for example two separate lines (make the heartbeat itself HA), to reduce the chance of split brain as much as possible;
Use disk locking: the side that is currently serving locks the shared disk, so that when split brain occurs the other side simply cannot grab the shared disk resource. Disk locking has a significant drawback, though: if the side holding the shared disk never releases the lock, the other side can never obtain it. In practice, if the serving node suddenly hangs or crashes, it cannot run the unlock command, and the standby node can never take over the shared resources and application services. Some HA designs therefore use a "smart" lock: the serving side engages the disk lock only when it detects that all heartbeat links are down (it can no longer see the peer), and leaves the disk unlocked the rest of the time.
Set up an arbitration mechanism, for example a reference IP (such as the gateway IP). When the heartbeat links are completely down, both nodes ping the reference IP; if the ping fails, the break is on the local side. In that case not only the heartbeat but also the local link used to serve clients is broken, so starting (or keeping) the application service would be useless anyway, and the node voluntarily gives up and lets the side that can still reach the reference IP run the service. To be even safer, the side that cannot ping the reference IP can simply reboot itself, to completely release any shared resources it might still be holding.
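A very small sketch of such an arbitration check, in the spirit of the keepalived scripts used earlier (the script path, the gateway address 192.168.70.2 and the choice to stop keepalived rather than reboot are assumptions for illustration, not part of the setup above):
#!/bin/bash
# /scripts/check_gateway.sh - give up the VIP if the reference IP is unreachable
GATEWAY=192.168.70.2    # reference IP, e.g. the default gateway (assumed value)

if ! ping -c 3 -W 1 "$GATEWAY" &> /dev/null; then
    # the local uplink is broken, so holding the VIP is pointless
    systemctl stop keepalived
fi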
4.1 Causes of split brain
Generally speaking, split brain occurs for the following reasons:
The heartbeat link between the HA server pair fails, so the two nodes can no longer communicate normally:
the heartbeat cable itself is broken (severed or aged);
the NIC or its driver is broken, or the IP configuration is wrong or conflicting (with a direct NIC-to-NIC connection);
a device on the heartbeat path fails (NIC or switch);
the arbitration machine fails (when an arbitration scheme is used).
The iptables firewall on the HA servers is enabled and blocks the heartbeat messages.
The heartbeat NIC address or other settings on the HA servers are misconfigured, so heartbeats cannot be sent.
Other misconfigurations, such as mismatched heartbeat methods, heartbeat broadcast conflicts, software bugs, and so on.
Note:
In Keepalived, a split brain will also occur if the two ends of the same VRRP instance are configured with different virtual_router_id values.
4.2 Common solutions for split brain
In a real production environment, split brain can be prevented from the following angles:
Use a serial cable and an Ethernet cable at the same time, i.e. two heartbeat paths, so that if one line breaks the other still carries the heartbeat messages.
When a split brain is detected, forcibly shut down one of the nodes (this requires special hardware support, such as STONITH or fencing devices). In effect, when the standby stops receiving heartbeats it sends a power-off command over a separate path to cut power to the primary node.
Set up monitoring and alerting for split brain (e-mail, SMS, on-call staff, etc.) so that a human can step in and arbitrate as soon as the problem occurs, keeping the damage low. For example, Baidu's alerting SMS distinguishes uplink from downlink messages: the alert is sent to the administrator's phone, and the administrator can reply with a number or a short string that is passed back to the server, which then handles the fault automatically according to the instruction, shortening the time to recovery.
Of course, when implementing an HA solution you have to decide, based on the actual business requirements, whether such a loss is tolerable. For ordinary web-site workloads, it usually is.
4.3 Monitoring for split brain
Split-brain monitoring should be done on the backup server, by adding a custom Zabbix monitoring item.
What do we monitor? Whether the VIP address is present on the backup.
The VIP can appear on the backup in two situations:
a split brain has occurred;
a normal master/backup failover has taken place.
So the monitoring only tells us that a split brain may have occurred; it cannot guarantee that one has, because a normal failover also moves the VIP to the backup.
Environment:
Host | IP | Components |
---|---|---|
zabbix | 192.168.70.134 | zabbix, LAMP |
master | 192.168.70.138 | keepalived master, nginx |
backup | 192.168.70.128 | keepalived backup, nginx, zabbix |
The monitoring script is shown below.
Write the script
Write the script on the backup host
[root@backup ~]# cd /scripts
[root@backup scripts]# vi check_process.sh
[root@backup scripts]# cat check_process.sh
#!/bin/bash
if [ `ip a show ens160 |grep '192.168.70.250'|awk -F'[ /]+' '{print $3}'|wc -l` -eq 1 ];then
echo "1"
else
echo "0"
fi
[root@backup scripts]# chmod +x check_process.sh
[root@backup scripts]# ./check_process.sh // test the script
0
// allow custom (unsafe) user parameters
[root@backup scripts]# vim /usr/local/etc/zabbix_agentd.conf
UnsafeUserParameters=1
// point a custom monitoring key at the script
UserParameter=check_process,/scripts/check_process.sh
// restart the zabbix agent
[root@backup scripts]# killall zabbix_agentd
[root@backup scripts]# zabbix_agentd
// test from the zabbix server
[root@zabbix ~]# zabbix_get -s 192.168.70.128 -k check_process
0
Add the monitored host
Add the monitoring item
Add the trigger
Enable alerting
Test the split brain
[root@backup ~]# vi /etc/keepalived/keepalived.conf
[root@backup ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
router_id lb02
}
vrrp_instance VI_1 {
state BACKUP
interface ens160
virtual_router_id 50 // changed so that it no longer matches the master's (51); a mismatched virtual_router_id triggers a split brain
priority 90
advert_int 1
[root@master scripts]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:20:8d:17 brd ff:ff:ff:ff:ff:ff
inet 192.168.70.138/24 brd 192.168.70.255 scope global dynamic noprefixroute ens33
valid_lft 1726sec preferred_lft 1726sec
inet 192.168.70.250/32 scope global ens33
valid_lft forever preferred_lft forever
inet6 fe80::e9:8fd:e2ab:a0c7/64 scope link noprefixroute
valid_lft forever preferred_lft forever
[root@backup ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:0c:29:98:b3:7b brd ff:ff:ff:ff:ff:ff
inet 192.168.70.128/24 brd 192.168.70.255 scope global dynamic noprefixroute ens160
valid_lft 1724sec preferred_lft 1724sec
inet6 fe80::d49:4b50:2a18:781a/64 scope link noprefixroute
valid_lft forever preferred_lft forever
[root@master scripts]# systemctl stop nginx.service
[root@backup ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:0c:29:98:b3:7b brd ff:ff:ff:ff:ff:ff
inet 192.168.70.128/24 brd 192.168.70.255 scope global dynamic noprefixroute ens160
valid_lft 1595sec preferred_lft 1595sec
inet 192.168.70.250/32 scope global ens160
valid_lft forever preferred_lft forever
inet6 fe80::d49:4b50:2a18:781a/64 scope link noprefixroute
valid_lft forever preferred_lft forever
Check the Zabbix dashboard