
MySQL High Availability Solution: MMM (MySQL Multi-Master Replication Manager)


I. Introduction to MMM

MMM (Multi-Master Replication Manager for MySQL) is a scalable suite of Perl scripts for monitoring, failover and management of MySQL master-master replication setups (only one node accepts writes at any given time). MMM can also load-balance reads across the slaves, so it can be used to bring up virtual IPs on a group of replicating servers; beyond that, it ships scripts for data backup and for resynchronizing nodes. MySQL itself offers no replication-failover solution; MMM provides automatic server failover and therefore MySQL high availability. In addition to the floating-IP functionality, when the current master goes down MMM automatically repoints the backend slaves at the new master, with no manual change to the replication configuration. It is one of the more mature solutions available. For details see the official site: http://mysql-mmm.org

Advantages: high availability, good scalability and automatic failover. In the master-master setup only one database accepts writes at any time, which preserves data consistency. When the active master goes down, the other master takes over immediately and the slaves switch over automatically, with no manual intervention.

Disadvantages: the monitor node is a single point of failure (although it can itself be made highly available with keepalived or heartbeat). At least three nodes are required, so there is a minimum host count, and read/write splitting is needed, which means writing read/write-splitting logic in the frontend. Under very read- and write-heavy workloads it is not particularly stable and may suffer replication lag or failed switchovers. MMM is therefore not well suited to environments that demand strong data safety while being busy with both reads and writes.

Applicable scenarios:

MMM suits deployments with heavy database traffic where read/write splitting is feasible.
MMM's core functionality is provided by three scripts:
mmm_mond - the monitoring daemon, responsible for all monitoring work; it decides on node removal and so on (mmm_mond runs periodic heartbeat checks and, on failure, floats the write VIP to the other master)
mmm_agentd - the agent daemon that runs on each MySQL server and exposes a simple set of remote services to the monitor node
mmm_control - manages the mmm_mond process from the command line
For the whole scheme to work, matching users must be created in MySQL: an mmm_monitor user and an mmm_agent user, plus an mmm_tools user if you want to use MMM's backup tools.

II. Deployment

1. Environment

OS: CentOS 7.2 (64-bit); database: MySQL 5.7.13

Disable SELinux.

Configure NTP and synchronize the clocks.

| Role             | IP             | Hostname | Server-id | Write VIP    | Read VIP     |
|------------------|----------------|----------|-----------|--------------|--------------|
| Master1          | 192.168.31.83  | master1  | 1         | 192.168.31.2 |              |
| Master2 (backup) | 192.168.31.141 | master2  | 2         |              | 192.168.31.3 |
| Slave1           | 192.168.31.250 | slave1   | 3         |              | 192.168.31.4 |
| Slave2           | 192.168.31.225 | slave2   | 4         |              | 192.168.31.5 |
| Monitor          | 192.168.31.106 | monitor1 |           |              |              |
2. On all hosts, add the following to /etc/hosts:

192.168.31.83 master1
192.168.31.141 master2
192.168.31.250 slave1
192.168.31.225 slave2
192.168.31.106 monitor1

Install the perl, perl-devel, perl-CPAN, libart_lgpl.x86_64, rrdtool.x86_64 and rrdtool-perl.x86_64 packages on all hosts:
# yum -y install perl-* libart_lgpl.x86_64 rrdtool.x86_64 rrdtool-perl.x86_64

Note: installed from the CentOS 7 online yum repositories.

Install the required Perl modules:

#cpan -i Algorithm::Diff Class::Singleton DBI DBD::mysql Log::Dispatch Log::Log4perl Mail::Send Net::Ping Proc::Daemon Time::HiRes Params::Validate Net::ARP

3. Install MySQL 5.7 and configure replication on master1, master2, slave1 and slave2

master1 and master2 are masters of each other; slave1 and slave2 are slaves of master1.
Add the following to /etc/my.cnf on every server; note that server-id must not be duplicated.

master1:

log-bin = mysql-bin
binlog_format = mixed
server-id = 1
relay-log = relay-bin
relay-log-index = slave-relay-bin.index
log-slave-updates = 1
auto-increment-increment = 2
auto-increment-offset = 1
master2:
log-bin = mysql-bin
binlog_format = mixed
server-id = 2
relay-log = relay-bin
relay-log-index = slave-relay-bin.index
log-slave-updates = 1
auto-increment-increment = 2
auto-increment-offset = 2
slave1:
server-id = 3
relay-log = relay-bin
relay-log-index = slave-relay-bin.index
read_only  = 1
slave2:
server-id = 4
relay-log = relay-bin
relay-log-index = slave-relay-bin.index
read_only  = 1
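The effect of the two auto-increment settings on the masters can be illustrated with a short Python sketch (an illustration of the arithmetic only, not MySQL code; the `auto_ids` helper is hypothetical): with auto-increment-increment = 2 and offsets 1 and 2, the two masters hand out disjoint ID sequences, so concurrent inserts on both masters never produce colliding AUTO_INCREMENT values.

```python
# Sketch: how auto-increment-increment=2 with offsets 1 and 2 keeps
# the two masters' AUTO_INCREMENT values disjoint (odd vs. even).
def auto_ids(offset, increment, count):
    """Generate the AUTO_INCREMENT values a server would hand out."""
    return [offset + i * increment for i in range(count)]

master1_ids = auto_ids(offset=1, increment=2, count=5)  # 1, 3, 5, 7, 9
master2_ids = auto_ids(offset=2, increment=2, count=5)  # 2, 4, 6, 8, 10

# The two sequences never overlap, so writes on either master are safe.
assert set(master1_ids).isdisjoint(master2_ids)
print(master1_ids, master2_ids)
```

The same scheme generalizes to n writable nodes: set the increment to n and give each node a distinct offset from 1 to n.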

After editing my.cnf, restart the MySQL service with systemctl restart mysqld.

On the four database hosts, either disable the firewall or create access rules for MySQL:

firewall-cmd --permanent --add-port=3306/tcp
firewall-cmd --reload
Replication setup (master1 and master2 as master-master; slave1 and slave2 as slaves of master1):
Grant the replication user on master1:
mysql> grant replication slave on *.* to rep@'192.168.31.%' identified by '123456';
Grant the replication user on master2:
mysql> grant replication slave on *.* to rep@'192.168.31.%' identified by '123456';
Configure master2, slave1 and slave2 as slaves of master1:
Run show master status; on master1 to get the current binlog file and position:
mysql> show master status;
+------------------+----------+--------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| mysql-bin.000001 |      452 |              |                  |                   |
+------------------+----------+--------------+------------------+-------------------+
On master2, slave1 and slave2, run:

mysql> change master to master_host='192.168.31.83',master_port=3306,master_user='rep',master_password='123456',master_log_file='mysql-bin.000001',master_log_pos=452;
mysql> start slave;
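The CHANGE MASTER TO statement above is assembled from the File/Position pair reported by show master status. A small hypothetical Python helper (illustrative only, not part of MMM or MySQL) shows how the pieces fit together:

```python
# Hypothetical helper: build the CHANGE MASTER TO statement from the
# File/Position pair reported by SHOW MASTER STATUS on the master.
def change_master_sql(host, user, password, log_file, log_pos, port=3306):
    return (
        f"CHANGE MASTER TO master_host='{host}',master_port={port},"
        f"master_user='{user}',master_password='{password}',"
        f"master_log_file='{log_file}',master_log_pos={log_pos};"
    )

sql = change_master_sql('192.168.31.83', 'rep', '123456',
                        'mysql-bin.000001', 452)
print(sql)
```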
Verify replication:
On master2:
mysql> show slave status\G;
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.31.83
Master_User: rep
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000001
Read_Master_Log_Pos: 452
Relay_Log_File: relay-bin.000002
Relay_Log_Pos: 320
Relay_Master_Log_File: mysql-bin.000001
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
On slave1:
mysql> show slave status\G;
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.31.83
Master_User: rep
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000001
Read_Master_Log_Pos: 452
Relay_Log_File: relay-bin.000002
Relay_Log_Pos: 320
Relay_Master_Log_File: mysql-bin.000001
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
On slave2:
mysql> show slave status\G;
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.31.83
Master_User: rep
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000001
Read_Master_Log_Pos: 452
Relay_Log_File: relay-bin.000002
Relay_Log_Pos: 320
Relay_Master_Log_File: mysql-bin.000001
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
If Slave_IO_Running and Slave_SQL_Running are both Yes, replication is configured correctly.
Configure master1 as a slave of master2:
Run show master status; on master2 to get the binlog file and position:
mysql> show master status;
+------------------+----------+--------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| mysql-bin.000001 |      452 |              |                  |                   |
+------------------+----------+--------------+------------------+-------------------+
On master1, run:
mysql> change master to master_host='192.168.31.141',master_port=3306,master_user='rep',master_password='123456',master_log_file='mysql-bin.000001',master_log_pos=452;
mysql> start slave;
Verify replication:
On master1:
mysql> show slave status\G;
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.31.141
Master_User: rep
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000001
Read_Master_Log_Pos: 452
Relay_Log_File: relay-bin.000002
Relay_Log_Pos: 320
Relay_Master_Log_File: mysql-bin.000001
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
If Slave_IO_Running and Slave_SQL_Running are both Yes, replication is configured correctly.
4. mysql-mmm configuration:
Create the accounts on the four MySQL nodes.
Create the agent account:
mysql> grant super, replication client, process on *.* to 'mmm_agent'@'192.168.31.%' identified by '123456';
Create the monitor account:
mysql> grant replication client on *.* to 'mmm_monitor'@'192.168.31.%' identified by '123456';
Note 1: since the master-master and master-slave replication are already working, running these on master1 alone is enough.
Check that the monitor and agent accounts exist on master2, slave1 and slave2:
mysql> select user,host from mysql.user where user in ('mmm_monitor','mmm_agent');
+-------------+--------------+
| user        | host         |
+-------------+--------------+
| mmm_agent   | 192.168.31.% |
| mmm_monitor | 192.168.31.% |
+-------------+--------------+

mysql> show grants for 'mmm_agent'@'192.168.31.%';
+-------------------------------------------------------------------------------+
| Grants for mmm_agent@192.168.31.%                                             |
+-------------------------------------------------------------------------------+
| GRANT PROCESS, SUPER, REPLICATION CLIENT ON *.* TO 'mmm_agent'@'192.168.31.%' |
+-------------------------------------------------------------------------------+
mysql> show grants for 'mmm_monitor'@'192.168.31.%';
+-------------------------------------------------------------------------------+
| Grants for mmm_monitor@192.168.31.%                                           |
+-------------------------------------------------------------------------------+
| GRANT REPLICATION CLIENT ON *.* TO 'mmm_monitor'@'192.168.31.%'               |
+-------------------------------------------------------------------------------+
Note 2:
mmm_monitor user: used by the monitor for health checks against the MySQL server processes.
mmm_agent user: used by the agent to toggle read-only mode, change the replication master, and so on.
5. mysql-mmm installation
Install the monitor on the monitor host (192.168.31.106):
cd /tmp
wget http://pkgs.fedoraproject.org/repo/pkgs/mysql-mmm/mysql-mmm-2.2.1.tar.gz/f5f8b48bdf89251d3183328f0249461e/mysql-mmm-2.2.1.tar.gz
tar -zxf mysql-mmm-2.2.1.tar.gz
cd mysql-mmm-2.2.1
make install
Install the agent on the database servers (master1, master2, slave1, slave2):
cd /tmp
wget http://pkgs.fedoraproject.org/repo/pkgs/mysql-mmm/mysql-mmm-2.2.1.tar.gz/f5f8b48bdf89251d3183328f0249461e/mysql-mmm-2.2.1.tar.gz
tar -zxf mysql-mmm-2.2.1.tar.gz
cd mysql-mmm-2.2.1
make install

6. Configure MMM

Write the configuration files; they must be identical on all five hosts:
After installation, all configuration files live under /etc/mysql-mmm/. The monitor and the database servers share a common file, mmm_common.conf, with the following content:
active_master_role writer    # the active master role; every DB server must enable read_only, and the monitoring agent automatically turns read_only off on the writer

<host default>
    cluster_interface eno16777736      # cluster network interface
    pid_path /var/run/mmm_agentd.pid   # pid file path
    bin_path /usr/lib/mysql-mmm/       # path to the executables
    replication_user rep               # replication user
    replication_password 123456        # replication user's password
    agent_user mmm_agent               # agent user
    agent_password 123456              # agent user's password
</host>

<host master1>                 # host name of master1
    ip 192.168.31.83           # master1's IP
    mode master                # role: master
    peer master2               # host name of master1's peer, i.e. master2
</host>

<host master2>                 # same idea as master1
    ip 192.168.31.141
    mode master
    peer master1
</host>

<host slave1>                  # host name of a slave; repeat this block for additional slaves
    ip 192.168.31.250          # the slave's IP
    mode slave                 # role: slave
</host>

<host slave2>                  # same idea as slave1
    ip 192.168.31.225
    mode slave
</host>

<role writer>                  # writer role
    hosts master1, master2     # hosts that may take writes; if you never want the writer to switch you can list only master1, which also avoids switches triggered by network latency, but then a master1 failure leaves the MMM cluster with no writer, serving reads only
    ips 192.168.31.2           # the write VIP exposed to clients
    mode exclusive             # exclusive: only one writer at a time, i.e. a single write VIP
</role>

<role reader>                  # reader role
    hosts master2, slave1, slave2                 # hosts serving reads (master1 could be added here too)
    ips 192.168.31.3, 192.168.31.4, 192.168.31.5  # the read VIPs exposed to clients; the VIPs are not mapped 1:1 to hosts and the counts may differ, in which case one host gets two VIPs
    mode balanced              # balanced: load-balance reads across the VIPs
</role>
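The balanced reader role can be pictured as a simple round-robin of VIPs over hosts (a sketch of the idea only; MMM's real assignment logic also reacts to host failures and moves VIPs accordingly):

```python
from itertools import cycle

def assign_reader_vips(hosts, vips):
    """Round-robin sketch of a balanced reader role: each VIP is
    attached to the next host in turn, so if there are more VIPs
    than hosts, some host ends up carrying two."""
    assignment = {h: [] for h in hosts}
    for host, vip in zip(cycle(hosts), vips):
        assignment[host].append(vip)
    return assignment

hosts = ['master2', 'slave1', 'slave2']
vips = ['192.168.31.3', '192.168.31.4', '192.168.31.5']
print(assign_reader_vips(hosts, vips))  # one reader VIP per host
```

With three hosts and three VIPs each host carries exactly one VIP; drop a host from the list and one of the survivors picks up a second VIP, which mirrors what mmm_control show displays after a reader fails.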
Copy this file to the other servers, leaving the configuration unchanged:
#for host in master1 master2 slave1 slave2 ; do scp /etc/mysql-mmm/mmm_common.conf $host:/etc/mysql-mmm/ ; done
Agent configuration
Edit /etc/mysql-mmm/mmm_agent.conf on the four MySQL nodes.
On the database servers there is one more file, mmm_agent.conf, to modify:
include mmm_common.conf
this master1
Note: this file is only needed on the DB servers (the monitor does not use it), and the host name after this must be changed to the current server's hostname.
Start the agent daemon
In the /etc/init.d/mysql-mmm-agent init script, just below #!/bin/sh, add:
source /root/.bash_profile
Register it as a system service and enable it at boot:
# chkconfig --add mysql-mmm-agent
# chkconfig mysql-mmm-agent on
# /etc/init.d/mysql-mmm-agent start
Note: source /root/.bash_profile is added so that the mysql-mmm-agent service can start at boot.
The only difference between starting at boot and starting manually is that a manual start has a console attached; so when startup as a service fails, the likely cause is missing environment variables.
If the service fails to start, the error looks like this:
Daemon bin: '/usr/sbin/mmm_agentd'
Daemon pid: '/var/run/mmm_agentd.pid'
Starting MMM Agent daemon... Can't locate Proc/Daemon.pm in @INC (@INC contains: /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at /usr/sbin/mmm_agentd line 7.
BEGIN failed--compilation aborted at /usr/sbin/mmm_agentd line 7.
failed

Fix:

# cpan Proc::Daemon
# cpan Log::Log4perl
# /etc/init.d/mysql-mmm-agent start
Daemon bin: '/usr/sbin/mmm_agentd'
Daemon pid: '/var/run/mmm_agentd.pid'
Starting MMM Agent daemon... Ok
# netstat -antp | grep mmm_agentd
tcp 0 0 192.168.31.83:9989 0.0.0.0:* LISTEN 9693/mmm_agentd
Configure the firewall:
firewall-cmd --permanent --add-port=9989/tcp
firewall-cmd --reload
Edit /etc/mysql-mmm/mmm_mon.conf on the monitor host:
include mmm_common.conf

<monitor>
    ip 127.0.0.1        # for safety, listen only on localhost; mmm_mond listens on port 9988 by default
    pid_path /var/run/mmm_mond.pid
    bin_path /usr/lib/mysql-mmm/
    status_path /var/lib/misc/mmm_mond.status
    ping_ips 192.168.31.83, 192.168.31.141, 192.168.31.250, 192.168.31.225  # IPs used to test network reachability: if any one of them answers ping, the network is considered up; do not list the local address here
    auto_set_online 0   # seconds before a recovered host is automatically set online; the default is 60, and 0 means immediately
</monitor>

<check default>
    check_period 5
    trap_period 10
    timeout 2
    #restart_after 10000
    max_backlog 86400
</check>
check_period
Description: interval between checks.
Default: 5s
trap_period
Description: a node is only considered failed once a check has kept failing for trap_period seconds.
Default: 10s
timeout
Description: timeout for a single check.
Default: 2s
restart_after
Description: restart the checker process after this many checks.
Default: 10000
max_backlog
Description: maximum replication backlog accepted by the rep_backlog check.
Default: 60

<host default>
    monitor_user mmm_monitor   # user the monitor uses to check the DB servers
    monitor_password 123456    # that user's password
</host>
debug 0                        # 0 = normal mode, 1 = debug mode
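The interplay of check_period and trap_period can be sketched in Python (an illustration of the documented semantics, not the monitor's actual code): one probe runs every check_period seconds, and a node is only declared failed once probes have been failing continuously for at least trap_period seconds.

```python
def declare_failed(results, check_period, trap_period):
    """Return True if the trailing run of failed probes (False values)
    spans at least trap_period seconds, given one probe result every
    check_period seconds."""
    failing = 0
    for ok in results:
        failing = 0 if ok else failing + 1
    return failing * check_period >= trap_period

# With check_period=5 and trap_period=10, a single failed probe is not
# enough; two consecutive failures (10s of failing) mark the node down.
print(declare_failed([True, False], 5, 10))         # one failure: not yet
print(declare_failed([True, False, False], 5, 10))  # 10s failing: down
```

This is why a brief network hiccup does not immediately cost a host its roles: the failure has to persist across the whole trap_period.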
Start the monitoring daemon:
In the /etc/init.d/mysql-mmm-monitor init script, just below #!/bin/sh, add:
source /root/.bash_profile
Register it as a system service and enable it at boot:
# chkconfig --add mysql-mmm-monitor
# chkconfig mysql-mmm-monitor on
# /etc/init.d/mysql-mmm-monitor start

Startup error:

Starting MMM Monitor daemon: Can't locate Proc/Daemon.pm in @INC (@INC contains: /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at /usr/sbin/mmm_mond line 11.
BEGIN failed--compilation aborted at /usr/sbin/mmm_mond line 11.
failed
Fix: install the following Perl modules:
# cpan Proc::Daemon
# cpan Log::Log4perl
[root@monitor1 ~]# /etc/init.d/mysql-mmm-monitor start
Daemon bin: '/usr/sbin/mmm_mond'
Daemon pid: '/var/run/mmm_mond.pid'
Starting MMM Monitor daemon: Ok
[root@monitor1 ~]# netstat -anpt | grep 9988
tcp 0 0 127.0.0.1:9988 0.0.0.0:* LISTEN 8546/mmm_mond
Note 1: whenever a configuration file is modified, whether on the DB side or on the monitor side, restart both the agent and the monitor daemons.
Note 2: MMM startup order: start the monitor first, then the agents.

Check the cluster status:

[root@monitor1 ~]# mmm_control show
master1(192.168.31.83) master/ONLINE. Roles: writer(192.168.31.2)
master2(192.168.31.141) master/ONLINE. Roles: reader(192.168.31.5)
slave1(192.168.31.250) slave/ONLINE. Roles: reader(192.168.31.4)
slave2(192.168.31.225) slave/ONLINE. Roles: reader(192.168.31.3)

If a server's state is not ONLINE, you can bring it online with:

# mmm_control set_online <hostname>

For example: [root@monitor1 ~]# mmm_control set_online master1
As the output above shows, the write VIP is on master1, and all the slaves treat master1 as their master.
Check whether the VIPs are up:
[root@master1 ~]# ip addr show dev eno16777736
eno16777736: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:6d:2f:82 brd ff:ff:ff:ff:ff:ff
inet 192.168.31.83/24 brd 192.168.31.255 scope global eno16777736
valid_lft forever preferred_lft forever
inet 192.168.31.2/32 scope global eno16777736
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe6d:2f82/64 scope link
valid_lft forever preferred_lft forever
[root@master2 ~]# ip addr show dev eno16777736
eno16777736: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:75:1a:9c brd ff:ff:ff:ff:ff:ff
inet 192.168.31.141/24 brd 192.168.31.255 scope global dynamic eno16777736
valid_lft 35850sec preferred_lft 35850sec
inet 192.168.31.5/32 scope global eno16777736
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe75:1a9c/64 scope link
valid_lft forever preferred_lft forever
[root@slave1 ~]# ip addr show dev eno16777736
eno16777736: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:02:21:19 brd ff:ff:ff:ff:ff:ff
inet 192.168.31.250/24 brd 192.168.31.255 scope global dynamic eno16777736
valid_lft 35719sec preferred_lft 35719sec
inet 192.168.31.4/32 scope global eno16777736
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe02:2119/64 scope link
valid_lft forever preferred_lft forever
[root@slave2 ~]# ip addr show dev eno16777736
eno16777736: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:e2:c7:fa brd ff:ff:ff:ff:ff:ff
inet 192.168.31.225/24 brd 192.168.31.255 scope global dynamic eno16777736
valid_lft 35930sec preferred_lft 35930sec
inet 192.168.31.3/32 scope global eno16777736
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fee2:c7fa/64 scope link
valid_lft forever preferred_lft forever
On master2, slave1 and slave2, check which master MySQL points at:
mysql> show slave status\G;
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.31.83
Master_User: rep
Master_Port: 3306
Connect_Retry: 60

MMM high-availability test:

Clients read and write through the VIP addresses; when a failure occurs the VIP floats to another node, which continues to provide the service.
First check the status of the whole cluster; everything is normal:
[root@monitor1 ~]# mmm_control show
master1(192.168.31.83) master/ONLINE. Roles: writer(192.168.31.2)
master2(192.168.31.141) master/ONLINE. Roles: reader(192.168.31.5)
slave1(192.168.31.250) slave/ONLINE. Roles: reader(192.168.31.4)
slave2(192.168.31.225) slave/ONLINE. Roles: reader(192.168.31.3)
Simulate a master1 outage by stopping its MySQL service manually, then watch the monitor log; the entries for master1 are:
[root@monitor1 ~]# tail -f /var/log/mysql-mmm/mmm_mond.log
2017/01/09 22:02:55 WARN Check 'rep_threads' on 'master1' is in unknown state! Message: UNKNOWN: Connect error (host = 192.168.31.83:3306, user = mmm_monitor)! Can't connect to MySQL server on '192.168.31.83' (111)
2017/01/09 22:02:55 WARN Check 'rep_backlog' on 'master1' is in unknown state! Message: UNKNOWN: Connect error (host = 192.168.31.83:3306, user = mmm_monitor)! Can't connect to MySQL server on '192.168.31.83' (111)
2017/01/09 22:03:05 ERROR Check 'mysql' on 'master1' has failed for 10 seconds! Message: ERROR: Connect error (host = 192.168.31.83:3306, user = mmm_monitor)! Can't connect to MySQL server on '192.168.31.83' (111)
2017/01/09 22:03:07 FATAL State of host 'master1' changed from ONLINE to HARD_OFFLINE (ping: OK, mysql: not OK)
2017/01/09 22:03:07 INFO Removing all roles from host 'master1':
2017/01/09 22:03:07 INFO Removed role 'writer(192.168.31.2)' from host 'master1'
2017/01/09 22:03:07 INFO Orphaned role 'writer(192.168.31.2)' has been assigned to 'master2'
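The failover sequence visible in the log, removing all roles from the failed host and handing the orphaned writer role to the surviving peer, can be sketched as follows (a simplification; the real agents also move the VIPs and send ARP updates):

```python
def failover(roles, failed_host, peers):
    """Strip all roles from the failed host and reassign any orphaned
    exclusive writer role to that host's configured peer."""
    orphaned = roles.pop(failed_host, [])
    for role in orphaned:
        if role.startswith('writer'):
            roles.setdefault(peers[failed_host], []).append(role)
    return roles

roles = {'master1': ['writer(192.168.31.2)'],
         'master2': ['reader(192.168.31.5)']}
peers = {'master1': 'master2', 'master2': 'master1'}
print(failover(roles, 'master1', peers))
# master2 now holds both its reader VIP and the writer VIP
```

Reader roles orphaned in the same way are rebalanced across the remaining reader hosts rather than following the peer relationship.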
Check the latest cluster status:
[root@monitor1 ~]# mmm_control show
master1(192.168.31.83) master/HARD_OFFLINE. Roles:
master2(192.168.31.141) master/ONLINE. Roles: reader(192.168.31.5), writer(192.168.31.2)
slave1(192.168.31.250) slave/ONLINE. Roles: reader(192.168.31.4)
slave2(192.168.31.225) slave/ONLINE. Roles: reader(192.168.31.3)
The output shows that master1's state has changed from ONLINE to HARD_OFFLINE and the write VIP has moved to master2.
Check the health of all DB servers in the cluster:
[root@monitor1 ~]# mmm_control checks all
master1 ping [last change: 2017/01/09 21:31:47] OK
master1 mysql [last change: 2017/01/09 22:03:07] ERROR: Connect error (host = 192.168.31.83:3306, user = mmm_monitor)! Can't connect to MySQL server on '192.168.31.83' (111)
master1 rep_threads [last change: 2017/01/09 21:31:47] OK
master1 rep_backlog [last change: 2017/01/09 21:31:47] OK: Backlog is null
slave1 ping [last change: 2017/01/09 21:31:47] OK
slave1 mysql [last change: 2017/01/09 21:31:47] OK
slave1 rep_threads [last change: 2017/01/09 21:31:47] OK
slave1 rep_backlog [last change: 2017/01/09 21:31:47] OK: Backlog is null
master2 ping [last change: 2017/01/09 21:31:47] OK
master2 mysql [last change: 2017/01/09 21:57:32] OK
master2 rep_threads [last change: 2017/01/09 21:31:47] OK
master2 rep_backlog [last change: 2017/01/09 21:31:47] OK: Backlog is null
slave2 ping [last change: 2017/01/09 21:31:47] OK
slave2 mysql [last change: 2017/01/09 21:31:47] OK
slave2 rep_threads [last change: 2017/01/09 21:31:47] OK
slave2 rep_backlog [last change: 2017/01/09 21:31:47] OK: Backlog is null
master1 still answers ping, so only the MySQL service died.

Check master2's IP addresses:

[root@master2 ~]# ip addr show dev eno16777736
eno16777736: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:75:1a:9c brd ff:ff:ff:ff:ff:ff
inet 192.168.31.141/24 brd 192.168.31.255 scope global dynamic eno16777736
valid_lft 35519sec preferred_lft 35519sec
inet 192.168.31.5/32 scope global eno16777736
valid_lft forever preferred_lft forever
inet 192.168.31.2/32 scope global eno16777736
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe75:1a9c/64 scope link
valid_lft forever preferred_lft forever
On slave1:
mysql> show slave status\G;
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.31.141
Master_User: rep
Master_Port: 3306
On slave2:
mysql> show slave status\G;
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.31.141
Master_User: rep
Master_Port: 3306
Start master1's MySQL service again and watch the monitor log; the entries for master1 are:
[root@monitor1 ~]# tail -f /var/log/mysql-mmm/mmm_mond.log
2017/01/09 22:16:56 INFO Check 'mysql' on 'master1' is ok!
2017/01/09 22:16:56 INFO Check 'rep_backlog' on 'master1' is ok!
2017/01/09 22:16:56 INFO Check 'rep_threads' on 'master1' is ok!
2017/01/09 22:16:59 FATAL State of host 'master1' changed from HARD_OFFLINE to AWAITING_RECOVERY
master1's state has changed from HARD_OFFLINE to AWAITING_RECOVERY.
Bring the server online with:
[root@monitor1 ~]# mmm_control set_online master1
Check the latest cluster status:
[root@monitor1 ~]# mmm_control show
master1(192.168.31.83) master/ONLINE. Roles:
master2(192.168.31.141) master/ONLINE. Roles: reader(192.168.31.5), writer(192.168.31.2)
slave1(192.168.31.250) slave/ONLINE. Roles: reader(192.168.31.4)
slave2(192.168.31.225) slave/ONLINE. Roles: reader(192.168.31.3)
A recovered master does not take the writer role back; it waits until the current master fails again.
Summary
(1) If the standby master (master2) goes down, the cluster state is unaffected apart from master2 losing its reader role.
(2) If the primary master (master1) goes down, the standby master2 takes over the writer role and slave1/slave2 are repointed at master2; the slaves run CHANGE MASTER to master2 automatically.
(3) If master1 goes down while master2's replication still lags behind it, master2 nevertheless becomes the writable master, and data consistency can no longer be guaranteed.
If master2, slave1 and slave2 all lag behind master1 when master1 goes down, slave1 and slave2 wait until they have caught up with the old master before repointing at the new master (master2); consistency cannot be guaranteed in that window either.
(4) When using an MMM architecture, give the primary and standby masters identical hardware, and enable semi-synchronous replication for extra safety, or use MariaDB / MySQL 5.7 multi-threaded slave replication for better replication performance.

Appendix:

1. Log files:
Logs are usually the key to diagnosing problems, so make good use of them.
DB side: /var/log/mysql-mmm/mmm_agentd.log
Monitor side: /var/log/mysql-mmm/mmm_mond.log
2. Executables:
mmm_agentd: startup binary for the DB agent daemon
mmm_mond: startup binary for the monitor daemon
mmm_backup: backup tool
mmm_restore: restore tool
mmm_control: cluster control command
Only mmm_agentd runs on the DB servers; everything else lives on the monitor server.
3. mmm_control usage
mmm_control can inspect the cluster status, switch the writer, and set hosts online/offline, among other things.
Valid commands are:
    help                              - show this message                          # show help
    ping                              - ping monitor                               # check whether the monitor responds
    show                              - show status                                # show the cluster's online status
    checks [<host>|all [<check>|all]] - show checks status                         # run/show the monitoring checks
    set_online <host>                 - set host <host> online                     # set a host online
    set_offline <host>                - set host <host> offline                    # set a host offline
    mode                              - print current mode.                        # print the current mode
    set_active                        - switch into active mode.
    set_manual                        - switch into manual mode.
    set_passive                       - switch into passive mode.
    move_role [--force] <role> <host> - move exclusive role <role> to host <host>  # move the writer role to the given host (Only use --force if you know what you are doing!)
    set_ip <ip> <host>                - set role with ip <ip> to host <host>
Check the health of all DB servers in the cluster:
[root@monitor1 ~]# mmm_control checks all
The checks cover: ping, whether MySQL is running, whether the replication threads are healthy, and so on.
Check the cluster's online status:
[root@monitor1 ~]# mmm_control show
Set a given host offline:
[root@monitor1 ~]# mmm_control set_offline slave2
Set a given host online:
[root@monitor1 ~]# mmm_control set_online slave2
Perform a manual writer switchover:
First check which master the slave currently points at:
[root@slave2 ~]# mysql -uroot -p123456 -e 'show slave status\G;'
mysql: [Warning] Using a password on the command line interface can be insecure.
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.31.141
For a writer switch, make sure the writer role in mmm_common.conf lists the target host, otherwise the switch cannot happen:
[root@monitor1 ~]# mmm_control move_role writer master1
OK: Role 'writer' has been moved from 'master2' to 'master1'. Now you can wait some time and check new roles info!
[root@monitor1 ~]# mmm_control show
master1(192.168.31.83) master/ONLINE. Roles: writer(192.168.31.2)
master2(192.168.31.141) master/ONLINE. Roles: reader(192.168.31.5)
slave1(192.168.31.250) slave/ONLINE. Roles: reader(192.168.31.4)
slave2(192.168.31.225) slave/ONLINE. Roles: reader(192.168.31.3)
The slaves automatically switched to the new master:
[root@slave2 ~]# mysql -uroot -p123456 -e 'show slave status\G;'
mysql: [Warning] Using a password on the command line interface can be insecure.
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.31.83

4. Other notes

If you never want the writer to switch from the master to the backup (note that replication lag can also trigger a write-VIP switch), remove the backup host from the <role writer> section of /etc/mysql-mmm/mmm_common.conf:
<role writer>             # writer role
    hosts master1         # only one host is listed here
    ips 192.168.31.2      # the write VIP exposed to clients
    mode exclusive        # exclusive: only one writer, i.e. a single write VIP
</role>
With this configuration, if master1 fails, the writer role does not move to master2 and the slaves are not repointed; the cluster then stops offering write service until master1 recovers.

5. Summary

1. The read and write VIPs are controlled by the monitor. If the monitor is not running, no VIPs are assigned to the DB servers; but if VIPs have already been assigned, stopping the monitor does not immediately remove them, and external programs can keep connecting through them (as long as the network is not restarted). The upside is that less reliability is demanded of the monitor; the downside is that if a DB server then fails, no switchover happens: the VIPs stay where they were, and the failed server's VIP simply becomes unreachable.

2. The agent is driven by the monitor to perform writer switches, slave repointing, and so on. With the monitor down, the agent serves little purpose; it cannot handle failures by itself.

3. The monitor watches the state of the DB servers: whether MySQL and the server are running, whether the replication threads are healthy, replication lag, and so on; it also directs the agents during failure handling.

4. The monitor checks the DB servers every few seconds; when a server recovers from a failure, the monitor automatically sets it online after 60 seconds (the default, configurable via the monitor's auto_set_online parameter). A cluster host moves through three states: HARD_OFFLINE → AWAITING_RECOVERY → ONLINE.
5. By default the monitor makes mmm_agent turn read_only OFF on the writer DB server and ON on all the others. To be strict, add read_only=1 to every server's my.cnf and let the monitor control writer versus reader; the root user and the replication user are not affected by read_only.
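The recovery path described in points 4 and 5, HARD_OFFLINE → AWAITING_RECOVERY → ONLINE with the last step taken automatically after auto_set_online seconds (or manually via mmm_control set_online), can be sketched as a small state machine (an illustration of the documented behaviour, not MMM's code):

```python
def next_state(state, checks_ok, seconds_recovered, auto_set_online):
    """One transition of the sketched host state machine."""
    if not checks_ok:
        return 'HARD_OFFLINE'
    if state == 'HARD_OFFLINE':
        return 'AWAITING_RECOVERY'
    if state == 'AWAITING_RECOVERY' and seconds_recovered >= auto_set_online:
        return 'ONLINE'
    return state

state = 'HARD_OFFLINE'
state = next_state(state, True, 0, 60)    # checks pass again
print(state)                              # AWAITING_RECOVERY
state = next_state(state, True, 60, 60)   # 60s later, auto_set_online fires
print(state)                              # ONLINE
```

With auto_set_online 0, as configured on the monitor above, the AWAITING_RECOVERY → ONLINE step happens immediately after the checks pass.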
