Requirements:
1. The two nodes must be able to communicate directly on the same network segment.
2. Each node's name must match the output of uname -n, and the node names must resolve to the nodes' IP addresses; configure the local /etc/hosts.
3. Set up mutual-trust (passwordless) SSH between the nodes.
4. Keep the clocks synchronized.
Environment preparation:
test1, 192.168.10.55 configuration
1. Configure the IP address
[root@test1 ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth0
2. Configure the hostname
[root@test1 ~]# uname -n
[root@test1 ~]# hostname test1.local          # takes effect immediately, lost after reboot
[root@test1 ~]# vim /etc/sysconfig/network    # permanent
3. Configure hostname resolution
[root@test1 ~]# vim /etc/hosts
Add:
192.168.10.55 test1.local test1
192.168.10.56 test2.local test2
3.2. Test hostname communication
[root@test1 ~]# ping test1.local
[root@test1 ~]# ping test2.local
4. Configure SSH mutual trust
[root@test1 ~]# ssh-keygen -t rsa -P ''
[root@test1 ~]# ssh-copy-id -i .ssh/id_rsa.pub root@test2.local
5. Use ntp to keep the time synchronized
Add a crontab entry that runs ntpdate every 5 minutes so the server clocks stay in sync:
[root@test1 ~]# crontab -e
*/5 * * * * /sbin/ntpdate 192.168.10.1 &> /dev/null
test2, 192.168.10.56 configuration
1. Configure the IP address
[root@test2 ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth0
2. Configure the hostname
[root@test2 ~]# uname -n
[root@test2 ~]# hostname test2.local          # takes effect immediately, lost after reboot
[root@test2 ~]# vim /etc/sysconfig/network    # permanent
3. Configure hostname resolution
[root@test2 ~]# vim /etc/hosts
Add:
192.168.10.55 test1.local test1
192.168.10.56 test2.local test2
3.2. Test hostname communication
[root@test2 ~]# ping test1.local
[root@test2 ~]# ping test1
4. Configure SSH mutual trust
[root@test2 ~]# ssh-keygen -t rsa -P ''
[root@test2 ~]# ssh-copy-id -i .ssh/id_rsa.pub root@test1.local
5. Use ntp to keep the time synchronized
Add a crontab entry that runs ntpdate every 5 minutes so the server clocks stay in sync:
[root@test2 ~]# crontab -e
*/5 * * * * /sbin/ntpdate 192.168.10.1 &> /dev/null
Install and configure heartbeat
On CentOS, installing heartbeat directly with yum fails: yum reports that no available package can be found.
Fix:
[root@test1 ~]# wget http://mirrors.sohu.com/fedora-epel/6/i386/epel-release-6-8.noarch.rpm
[root@test1 ~]# rpm -ivh epel-release-6-8.noarch.rpm
6.1. Install heartbeat:
[root@test1 ~]# yum install heartbeat
6.2. Copy the sample configuration files:
[root@test1 ~]# cp /usr/share/doc/heartbeat-3.0.4/{ha.cf,authkeys,haresources} /etc/ha.d/
6.3. Configure the authentication file:
[root@test1 ~]# dd if=/dev/random count=1 bs=512 | md5sum    # generate a random string
[root@test1 ~]# vim /etc/ha.d/authkeys
auth 1
1 md5 d0f70c79eeca5293902aiamheartbeat
[root@test1 ~]# chmod 600 /etc/ha.d/authkeys
The heartbeat installation on the test2 node is identical to test1 and is omitted here.
6.4. Parameters of the heartbeat main configuration file:
[root@test1 ~]# vim /etc/ha.d/ha.cf
#debugfile /var/log/ha-debug        # debug log
logfile                             # log file location
keepalive 2                         # how often a heartbeat is sent; default 2 seconds, ms can also be used
deadtime 30                         # after how long without heartbeats a host is declared dead
warntime 10                         # after how long without heartbeats a warning is issued
initdead 120                        # how long the first node waits for the other nodes after it starts
baud 19200                          # baud rate of the serial link
auto_failback on                    # whether resources move back after the failed node recovers
ping 10.10.10.254                   # ping node: host to ping to tell a node failure from a network failure
ping_group group1 10.10.10.254 10.10.10.253   # ping node group: the group counts as reachable if any one host answers
respawn hacluster /usr/lib/heartbeat/ipfail   # run ipfail as user hacluster and restart it if it exits
deadping 30                         # after how long without answers the ping nodes are considered unreachable
#serial serialportname ...          # which serial device to use
serial /dev/ttyS0                   # Linux
serial /dev/cuaa0                   # FreeBSD
serial /dev/cuad0                   # FreeBSD 6.x
serial /dev/cua/a                   # Solaris
# What interfaces to broadcast heartbeats over?
# When Ethernet is used, choose whether heartbeats are sent by broadcast, multicast or unicast
bcast eth0                          # broadcast
mcast eth0 225.0.0.1 694 1 0        # multicast
ucast eth0 192.168.1.2              # unicast; only useful with exactly two nodes
# Define STONITH hosts
stonith_host *     baytech 10.0.0.3 mylogin mysecretpassword
stonith_host ken3  rps10 /dev/ttyS1 kathy 0
stonith_host kathy rps10 /dev/ttyS1 ken3 0
# Tell what machines are in the cluster
# One "node <hostname>" line per cluster member; the hostname must match uname -n
node ken3
node kathy
In practice it is usually enough to define how the heartbeats are sent and which nodes belong to the cluster:
bcast eth0
node test1.local
node test2.local
6.5. Define the haresources resource file:
[root@test1 ~]# vim /etc/ha.d/haresources
#node1 10.0.0.170 Filesystem::/dev/sda1::/data1::ext2
#   Format: hostname of the default primary node (must match uname -n), the VIP, then the resources.
#   For Filesystem: which device to mount, on which directory, with which filesystem type.
#   Parameters of a resource are separated by double colons.
#just.linux-ha.org 135.9.216.110 http
#   Same idea; this resource is a script under /etc/rc.d/init.d/. Resource scripts are searched for first in
#   /etc/ha.d/resource.d/ and then in /etc/rc.d/init.d/.
test1.local IPaddr::192.168.10.2/24/eth0 mysqld
test1.local IPaddr::192.168.10.2/24/eth0 drbddisk::data Filesystem::/dev/drbd1::/data::ext3 mysqld
#   The IPaddr script configures the VIP.
6.6. Copy the configuration files from test1.local to test2.local
[root@test1 ~]# scp -p ha.cf haresources authkeys test2.local:/etc/ha.d/
7. Start heartbeat
[root@test1 ~]# service heartbeat start
[root@test1 ~]# ssh test2.local 'service heartbeat start'    # heartbeat on the test2 node must be started from test1 over ssh
7.1. Check the heartbeat startup log
[root@test1 ~]# tail -f /var/log/messages
Feb 16 15:12:45 test-1 heartbeat: [16056]: info: Configuration validated. Starting heartbeat 3.0.4
Feb 16 15:12:45 test-1 heartbeat: [16057]: info: heartbeat: version 3.0.4
Feb 16 15:12:45 test-1 heartbeat: [16057]: info: Heartbeat generation: 1455603909
Feb 16 15:12:45 test-1 heartbeat: [16057]: info: glib: UDP Broadcast heartbeat started on port 694 (694) interface eth0
Feb 16 15:12:45 test-1 heartbeat: [16057]: info: glib: UDP Broadcast heartbeat closed on port 694 interface eth0 - Status: 1
Feb 16 15:12:45 test-1 heartbeat: [16057]: info: glib: ping heartbeat started.
Feb 16 15:12:45 test-1 heartbeat: [16057]: info: G_main_add_TriggerHandler: Added signal manual handler
Feb 16 15:12:45 test-1 heartbeat: [16057]: info: G_main_add_TriggerHandler: Added signal manual handler
Feb 16 15:12:45 test-1 heartbeat: [16057]: info: G_main_add_SignalHandler: Added signal handler for signal 17
Feb 16 15:12:45 test-1 heartbeat: [16057]: info: Local status now set to: 'up'
Feb 16 15:12:45 test-1 heartbeat: [16057]: info: Link 192.168.10.1:192.168.10.1 up.
Feb 16 15:12:45 test-1 heartbeat: [16057]: info: Status update for node 192.168.10.1: status ping
Feb 16 15:12:45 test-1 heartbeat: [16057]: info: Link test1.local:eth0 up.
Feb 16 15:12:51 test-1 heartbeat: [16057]: info: Link test2.local:eth0 up.
Feb 16 15:12:51 test-1 heartbeat: [16057]: info: Status update for node test2.local: status up
Feb 16 15:12:51 test-1 harc(default)[16068]: info: Running /etc/ha.d//rc.d/status status
Feb 16 15:12:52 test-1 heartbeat: [16057]: WARN: 1 lost packet(s) for [test2.local] [3:5]
Feb 16 15:12:52 test-1 heartbeat: [16057]: info: No pkts missing from test2.local!
Feb 16 15:12:52 test-1 heartbeat: [16057]: info: Comm_now_up(): updating status to active
Feb 16 15:12:52 test-1 heartbeat: [16057]: info: Local status now set to: 'active'
Feb 16 15:12:52 test-1 heartbeat: [16057]: info: Status update for node test2.local: status active
Feb 16 15:12:52 test-1 harc(default)[16086]: info: Running /etc/ha.d//rc.d/status status
Feb 16 15:13:02 test-1 heartbeat: [16057]: info: local resource transition completed.
Feb 16 15:13:02 test-1 heartbeat: [16057]: info: Initial resource acquisition complete (T_RESOURCES(us))
Feb 16 15:13:02 test-1 heartbeat: [16057]: info: remote resource transition completed.
Feb 16 15:13:02 test-1 /usr/lib/ocf/resource.d//heartbeat/IPaddr(IPaddr_192.168.10.2)[16138]: INFO: Resource is stopped
Feb 16 15:13:02 test-1 heartbeat: [16102]: info: Local Resource acquisition completed.
Feb 16 15:13:02 test-1 harc(default)[16219]: info: Running /etc/ha.d//rc.d/ip-request-resp ip-request-resp
Feb 16 15:13:02 test-1 ip-request-resp(default)[16219]: received ip-request-resp IPaddr::192.168.10.2/24/eth0 OK yes
Feb 16 15:13:02 test-1 ResourceManager(default)[16238]: info: Acquiring resource group: test1.local IPaddr::192.168.10.2/24/eth0 mysqld
Feb 16 15:13:02 test-1 /usr/lib/ocf/resource.d//heartbeat/IPaddr(IPaddr_192.168.10.2)[16264]: INFO: Resource is stopped
Feb 16 15:13:03 test-1 ResourceManager(default)[16238]: info: Running /etc/ha.d/resource.d/IPaddr 192.168.10.2/24/eth0 start
Feb 16 15:13:03 test-1 IPaddr(IPaddr_192.168.10.2)[16386]: INFO: Adding inet address 192.168.10.2/24 with broadcast address 192.168.10.255 to device eth0
Feb 16 15:13:03 test-1 IPaddr(IPaddr_192.168.10.2)[16386]: INFO: Bringing device eth0 up
Feb 16 15:13:03 test-1 IPaddr(IPaddr_192.168.10.2)[16386]: INFO: /usr/libexec/heartbeat/send_arp -i 200 -r 5 -p /var/run/resource-agents/send_arp-192.168.10.2 eth0 192.168.10.2 auto not_used not_used
Feb 16 15:13:03 test-1 /usr/lib/ocf/resource.d//heartbeat/IPaddr(IPaddr_192.168.10.2)[16360]: INFO: Success
Feb 16 15:13:03 test-1 ResourceManager(default)[16238]: info: Running /etc/init.d/mysqld start
Feb 16 15:13:04 test-1 ntpd[1605]: Listen normally on 15 eth0 192.168.10.2 UDP 123
Notes:
1. "Link test1.local:eth0 up" and "Link test2.local:eth0 up": both node links are up.
2. "Link 192.168.10.1:192.168.10.1 up": the ping node is also up.
3. "info: Running /etc/init.d/mysqld start": MySQL was started successfully.
4. "Listen normally on 15 eth0 192.168.10.2 UDP 123": the VIP is up.
7.2. Check the heartbeat VIP
[root@test1 ~]# ip add | grep "10.2"
    inet 192.168.10.55/24 brd 192.168.10.255 scope global eth0
    inet 192.168.10.2/24 brd 192.168.10.255 scope global secondary eth0
[root@test2 ~]# ip add | grep "10.2"
    inet 192.168.10.56/24 brd 192.168.10.255 scope global eth0
Note: the VIP is currently on test1.local, and test2.local does not hold it.
8. Test the failover
8.1. Connect to MySQL under normal conditions
[root@test1 ~]# mysql -uroot -h'192.168.10.2' -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.5.44 Source distribution

Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show variables like 'server_id';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| server_id     | 1     |
+---------------+-------+
1 row in set (0.00 sec)

mysql>
8.2. Stop heartbeat on test1.local
[root@test1 ~]# service heartbeat stop
Stopping High-Availability services: Done.
[root@test1 ~]# ip add | grep "192.168.10.2"
    inet 192.168.10.55/24 brd 192.168.10.255 scope global eth0
[root@test2 ~]# ip add | grep "192.168.10.2"
    inet 192.168.10.56/24 brd 192.168.10.255 scope global eth0
    inet 192.168.10.2/24 brd 192.168.10.255 scope global secondary eth0
Note: the VIP has now floated over to test2.local. Check the server_id reported over the MySQL connection again:
[root@test1 ~]# mysql -uroot -h'192.168.10.2' -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.5.44 Source distribution

Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show variables like 'server_id';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| server_id     | 2     |
+---------------+-------+
1 row in set (0.00 sec)

mysql>
Note: server_id has changed from 1 to 2, which proves the connection is now served by the MySQL instance on test2.local.
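The server_id seen by the client is set per node in MySQL's own configuration, which is why it identifies which backend answered. A minimal sketch of what each node's /etc/my.cnf is assumed to contain (the original my.cnf files are not shown, so these values are an assumption):
# /etc/my.cnf on test1.local (assumed)
[mysqld]
server-id = 1
# /etc/my.cnf on test2.local (assumed)
[mysqld]
server-id = 2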
Testing is complete. Next, configure DRBD so that the two MySQL servers share one filesystem, making MySQL writes highly available as well.
9. Configure DRBD
DRBD (Distributed Replicated Block Device) is a distributed, replicated block device implemented as a module in the Linux kernel. Used as a disk mirror, DRBD is fundamentally primary/secondary: it never lets both nodes write at the same time. Only one node may read and write; the secondary node cannot access or mount the device. DRBD does, however, have a dual-primary mode, and the primary and secondary roles can be switched. DRBD takes a disk or partition on each of two hosts and turns them into a mirrored device: when a client program writes to the primary node, the data is also sent, block by block, over TCP/IP to the backup node, so whatever is stored on the primary is guaranteed to exist as an identical copy on the backup. Because the mirror spans two hosts, DRBD has to work inside the kernel, unlike RAID 1, whose mirroring happens within a single host.
How dual-primary DRBD works: when a node accesses data it loads data and metadata into memory, and any lock its kernel takes on a file is invisible to the other node's kernel, so the nodes must propagate their locks to each other. This requires a messaging layer (heartbeat or corosync both work), a cluster resource manager such as Pacemaker (with DRBD defined as a resource), and formatting the mirrored device with a cluster filesystem (GFS2/OCFS2). In other words, the dual-primary model is built on a Distributed Lock Manager (DLM) combined with a cluster filesystem. A DRBD resource always has exactly two nodes, running either dual-primary or primary/secondary.
9.1. DRBD's three replication protocols
A. The write is acknowledged to the application as soon as it is stored on the local DRBD device: asynchronous. Fast and efficient, but data can be lost.
B. The write is acknowledged only after it is stored locally and has reached the peer DRBD node over TCP/IP: semi-synchronous. Rarely used.
C. The write is acknowledged only after it is stored on both the local DRBD device and the peer's DRBD device: synchronous. Slower and less efficient, but the data is safe and reliable; this is the most widely used mode.
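As a hedged illustration of how the choice is expressed, the replication protocol is selected with the protocol keyword in the DRBD configuration; this document uses protocol C, set in the common section of global_common.conf in section 10.1 below:
common {
    protocol C;    # A = asynchronous, B = semi-synchronous, C = fully synchronous
    ...
}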
9.2. The components of a DRBD resource
1. Resource name: any ASCII string that contains no whitespace.
2. DRBD device: the device file of this DRBD resource on both nodes, normally /dev/drbdN. The major number is always 147; the minor number distinguishes the different devices.
3. Disk configuration: the backing storage each node provides, which can be a partition, any other block device, or an LVM logical volume.
4. Network configuration: the network properties the two sides use when synchronizing data.
9.3. Install DRBD
DRBD has been part of the mainline kernel only since 2.6.33.
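CentOS 6 runs a 2.6.32 kernel, which is why the module is built from source below. A quick, hedged way to check whether a drbd module is already available for the running kernel (output will vary):
[root@test1 ~]# uname -r
2.6.32-573.18.1.el6.x86_64
[root@test1 ~]# modinfo drbd    # prints module details if a drbd module is installed for this kernel; reports an error otherwise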
9.3.1. Download drbd
[root@test1 ~]# wget -P /usr/local/src http://oss.linbit.com/drbd/8.4/drbd-8.4.3.tar.gz
9.3.2. Build and install the drbd userland tools
[root@test1 ~]# cd /usr/local/src
[root@test1 src]# tar -zxvf drbd-8.4.3.tar.gz
[root@test1 src]# cd /usr/local/src/drbd-8.4.3
[root@test1 drbd-8.4.3]# ./configure --prefix=/usr/local/drbd --with-km
[root@test1 drbd-8.4.3]# make KDIR=/usr/src/kernels/2.6.32-573.18.1.el6.x86_64
[root@test1 drbd-8.4.3]# make install
[root@test1 drbd-8.4.3]# mkdir -p /usr/local/drbd/var/run/drbd
[root@test1 drbd-8.4.3]# cp /usr/local/drbd/etc/rc.d/init.d/drbd /etc/rc.d/init.d/
9.3.3. Build and load the drbd kernel module
[root@test1 drbd-8.4.3]# cd drbd/
[root@test1 drbd]# make clean
[root@test1 drbd]# make KDIR=/usr/src/kernels/2.6.32-573.18.1.el6.x86_64
[root@test1 drbd]# cp drbd.ko /lib/modules/`uname -r`/kernel/lib/
[root@test1 drbd]# modprobe drbd
[root@test1 drbd]# lsmod | grep drbd
9.3.4. Create a new partition for drbd
[root@test1 ~]# fdisk /dev/sdb
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1305, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-1305, default 1305): +9G

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.
[root@test1 ~]# partprobe /dev/sdb
The drbd installation and partitioning on the test2 node are the same as on test1 and are omitted. The drbd configuration files on test2 must be identical to those on test1; simply scp them over.
10. Configure drbd
10.1. Configure the drbd common configuration file
[root@test1 ~]# cd /usr/local/drbd/etc/drbd.d
[root@test1 drbd.d]# vim global_common.conf
global {                     # global section
    usage-count no;          # whether to take part in LINBIT's usage statistics
    # minor-count dialog-refresh disable-ip-verification
}
common {                     # common section: default properties shared by all resources
    protocol C;              # protocol C, the synchronous model, by default
    handlers {               # handlers: what drbd does on failure events
        # These are EXAMPLE handlers only.
        # They may have severe implications,
        # like hard resetting the node under certain circumstances.
        # Be careful when chosing your poison.
        pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";   # action after split brain
        local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";               # action after a local I/O error
        # fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
        # split-brain "/usr/lib/drbd/notify-split-brain.sh root";
        # out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh root";
        # before-resync-target "/usr/lib/drbd/snapshot-resync-target-lvm.sh -p 15 -- -c 16k";
        # after-resync-target /usr/lib/drbd/unsnapshot-resync-target-lvm.sh;
    }
    startup {
        # wfc-timeout degr-wfc-timeout outdated-wfc-timeout wait-after-sb
        # wait times and timeouts while the two nodes connect and synchronize at startup
    }
    options {
        # cpu-mask on-no-data-accessible
    }
    disk {
        on-io-error detach;  # on an I/O error, detach the disk and stop replicating to it
        # size max-bio-bvecs on-io-error fencing disk-barrier disk-flushes
        # disk-drain md-flushes resync-rate resync-after al-extents
        # c-plan-ahead c-delay-target c-fill-target c-max-rate
        # c-min-rate disk-timeout
    }
    net {                    # network settings: buffer/cache sizes, timeouts, etc.
        # protocol timeout max-epoch-size max-buffers unplug-watermark
        # connect-int ping-int sndbuf-size rcvbuf-size ko-count
        # allow-two-primaries cram-hmac-alg shared-secret after-sb-0pri
        # after-sb-1pri after-sb-2pri always-asbp rr-conflict
        # ping-timeout data-integrity-alg tcp-cork on-congestion
        # congestion-fill congestion-extents csums-alg verify-alg
        # use-rle
        cram-hmac-alg "sha1";            # algorithm used for peer authentication
        shared-secret "mydrbd1fa2jg8";   # shared secret
    }
    syncer {
        rate 200M;           # resynchronization rate
    }
}
10.2. Configure the resource file; its file name must match the resource name defined inside it
[root@test1 drbd.d]# vim mydrbd.res
resource mydrbd {                       # resource name: any ASCII string without whitespace
    on test1.local {                    # node 1; each node must be able to resolve the peer by this name
        device    /dev/drbd0;           # name of the drbd device file
        disk      /dev/sdb1;            # backing partition
        address   192.168.10.55:7789;   # node IP and listening port
        meta-disk internal;             # where drbd stores its metadata; internal = on the device itself
    }
    on test2.local {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   192.168.10.56:7789;
        meta-disk internal;
    }
}
10.3. The configuration files are identical on both nodes; copy them to the other node
[root@test1 drbd.d]# scp -r /usr/local/drbd/etc/drbd.* test2.local:/usr/local/drbd/etc/
10.4. Initialize the defined resource on each node
[root@test1 ~]# drbdadm create-md mydrbd
  --==  Thank you for participating in the global usage survey  ==--
The server's response is:
Writing meta data...
initializing activity log
NOT initialized bitmap
New drbd meta data block successfully created.
[root@test1 ~]#
[root@test2 ~]# drbdadm create-md mydrbd
  --==  Thank you for participating in the global usage survey  ==--
The server's response is:
Writing meta data...
initializing activity log
NOT initialized bitmap
New drbd meta data block successfully created.
[root@test2 ~]#
10.5. Start the drbd service on both nodes
[root@test1 ~]# service drbd start
[root@test2 ~]# service drbd start
11. Test drbd synchronization
11.1. Check drbd's state after starting
[root@test1 ~]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: ... build by root@test1.local, 2016-02-23 10:23:03
 0: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0
Note: both nodes are Secondary; one of them will be promoted to Primary shortly. "Inconsistent" means the two sides have not been synchronized yet.
11.2. Promote one node to Primary and overwrite the peer's drbd partition data; run this on the node that is to become Primary
[root@test1 ~]# drbdadm -- --overwrite-data-of-peer primary mydrbd
11.3. Watch the synchronization progress on the primary node
[root@test1 ~]# watch -n1 cat /proc/drbd
Every 1.0s: cat /proc/drbd                                  Tue Feb 23 17:10:55 2016

version: 8.4.3 (api:1/proto:86-101)
GIT-hash: ... build by root@test1.local, 2016-02-23 10:23:03
 0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r---n-
    ns:619656 nr:0 dw:0 dr:627840 al:0 bm:37 lo:1 pe:8 ua:64 ap:0 ep:1 wo:b oos:369144
    [=============>.......] sync'ed: 10.3% (369144/987896)K
    finish: 0:00:12 speed: 25,632 (25,464) K/sec
11.4. Check the state on the secondary node
[root@test2 ~]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: ... build by root@test2.local, 2016-02-22 16:05:34
 0: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r-----
    ns:4 nr:9728024 dw:9728028 dr:1025 al:1 bm:577 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
11.5. On the primary node, format the partition, mount it and write some test data
[root@test1 ~]# mke2fs -j /dev/drbd0
[root@test1 ~]# mkdir /mydrbd
[root@test1 ~]# mount /dev/drbd0 /mydrbd
[root@test1 ~]# cd /mydrbd
[root@test1 mydrbd]# touch drbd_test_file
[root@test1 mydrbd]# ls /mydrbd/
drbd_test_file  lost+found
11.6. Demote the primary node to secondary, promote the secondary to primary, and check whether the data has been replicated
11.6.1. On the primary node
1. Unmount the partition. Note that you must leave the mounted directory before unmounting, otherwise the device is reported busy and cannot be unmounted.
[root@test1 mydrbd]# cd ~
[root@test1 ~]# umount /mydrbd
[root@test1 ~]# drbdadm secondary mydrbd
2. Check the current drbd state
[root@test1 ~]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: ... build by root@test1.local, 2016-02-22 16:05:34
 0: cs:Connected ro:Secondary/Secondary ds:UpToDate/UpToDate C r-----
    ns:4 nr:9728024 dw:9728028 dr:1025 al:1 bm:577 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
Note: both drbd nodes are now Secondary. Next, promote the other node to Primary.
11.6.2. On the secondary node
1. Promote it to Primary
[root@test2 ~]# drbdadm primary mydrbd
2. Mount the drbd partition
[root@test2 ~]# mkdir /mydrbd
[root@test2 ~]# mount /dev/drbd0 /mydrbd/
3. Check that the data is there
[root@test2 ~]# ls /mydrbd/
drbd_test_file  lost+found
Note: after the former secondary is promoted to primary, the data is already there, so replication works and the drbd setup is complete. Next, combine it with corosync and MySQL so that either node can take over as the primary database server.
12. Combine corosync + drbd + mysql for a highly available MySQL database
Configure drbd as a resource of a two-node corosync high-availability cluster so that the primary/secondary roles switch over automatically. Note: any service that is to be managed as a cluster resource must not start automatically at boot.
12.1. Disable drbd's start-on-boot on both nodes
12.1.1. On the primary node
[root@test1 ~]# chkconfig drbd off
[root@test1 ~]# chkconfig --list | grep drbd
drbd            0:off   1:off   2:off   3:off   4:off   5:off   6:off
12.1.2. On the secondary node
[root@test2 ~]# chkconfig drbd off
[root@test2 ~]# chkconfig --list | grep drbd
drbd            0:off   1:off   2:off   3:off   4:off   5:off   6:off
12.2. Unmount the drbd filesystem and demote the primary node to secondary
12.2.1. On the secondary node; note that this node was just promoted to primary, so demote it now
[root@test2 ~]# umount /mydrbd/
[root@test2 ~]# drbdadm secondary mydrbd
[root@test2 ~]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: ... build by root@test2.local, 2016-02-22 16:05:34
 0: cs:Connected ro:Secondary/Secondary ds:UpToDate/UpToDate C r-----
    ns:8 nr:9728024 dw:9728032 dr:1073 al:1 bm:577 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
Note: make sure both nodes are Secondary.
12.3. Stop the drbd service on both nodes
12.3.1. On the secondary node
[root@test2 drbd]# service drbd stop
Stopping all DRBD resources: .
[root@test2 drbd]#
12.3.2. On the primary node
[root@test1 drbd]# service drbd stop
Stopping all DRBD resources: .
[root@test1 drbd]#
12.4. Install corosync and create the log directory
12.4.1. On the primary node
[root@test1 ~]# wget -P /etc/yum.repos.d http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/CentOS_CentOS-6/network:ha-clustering:Stable.repo
[root@test1 ~]# yum install corosync pacemaker crmsh
[root@test1 ~]# mkdir /var/log/cluster
12.4.2. On the secondary node
[root@test2 ~]# wget -P /etc/yum.repos.d http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/CentOS_CentOS-6/network:ha-clustering:Stable.repo
[root@test2 ~]# mkdir /var/log/cluster
[root@test2 ~]# yum install corosync pacemaker crmsh
12.5. The corosync configuration file
12.5.1. On the primary node
[root@test1 ~]# cd /etc/corosync/
[root@test1 corosync]# cp corosync.conf.example corosync.conf
12.6. Edit the main configuration file on the primary node, generate the corosync auth key and copy both to the secondary node
12.6.1. On the primary node
[root@test1 corosync]# vim corosync.conf
# Please read the corosync.conf.5 manual page
compatibility: whitetank

totem {
    version: 2

    # secauth: Enable mutual node authentication. If you choose to
    # enable this ("on"), then do remember to create a shared
    # secret with "corosync-keygen".
    secauth: on

    threads: 2

    # interface: define at least one interface to communicate
    # over. If you define more than one interface stanza, you must
    # also set rrp_mode.
    interface {
        # Rings must be consecutively numbered, starting at 0.
        ringnumber: 0

        # This is normally the *network* address of the
        # interface to bind to. This ensures that you can use
        # identical instances of this configuration file
        # across all your cluster nodes, without having to
        # modify this option.
        bindnetaddr: 192.168.10.0

        # However, if you have multiple physical network
        # interfaces configured for the same subnet, then the
        # network address alone is not sufficient to identify
        # the interface Corosync should bind to. In that case,
        # configure the *host* address of the interface
        # instead:
        # bindnetaddr: 192.168.10.0

        # When selecting a multicast address, consider RFC
        # 2365 (which, among other things, specifies that
        # 239.255.x.x addresses are left to the discretion of
        # the network administrator). Do not reuse multicast
        # addresses across multiple Corosync clusters sharing
        # the same network.
        mcastaddr: 239.212.16.19

        # Corosync uses the port you specify here for UDP
        # messaging, and also the immediately preceding
        # port. Thus if you set this to 5405, Corosync sends
        # messages over UDP ports 5405 and 5404.
        mcastport: 5405

        # Time-to-live for cluster communication packets. The
        # number of hops (routers) that this ring will allow
        # itself to pass. Note that multicast routing must be
        # specifically enabled on most network routers.
        ttl: 1            # do not let cluster packets cross a router
    }
}

logging {
    # Log the source file and line where messages are being
    # generated. When in doubt, leave off. Potentially useful for
    # debugging.
    fileline: off

    # Log to standard error. When in doubt, set to no. Useful when
    # running in the foreground (when invoking "corosync -f")
    to_stderr: no

    # Log to a log file. When set to "no", the "logfile" option
    # must not be set.
    to_logfile: yes
    logfile: /var/log/cluster/corosync.log

    # Log to the system log daemon. When in doubt, set to yes.
    to_syslog: no

    # Log debug messages (very verbose). When in doubt, leave off.
    debug: off

    # Log messages with time stamps. When in doubt, set to on
    # (unless you are only logging to syslog, where double
    # timestamps can be annoying).
    timestamp: on

    logger_subsys {
        subsys: AMF
        debug: off
    }
}

service {
    ver: 0
    name: pacemaker
}

aisexec {
    user: root
    group: root
}
[root@test1 corosync]# corosync-keygen
[root@test1 corosync]# scp -p authkey corosync.conf test2.local:/etc/corosync/
12.7. Start corosync
12.7.1. On the primary node (note: corosync on both nodes is started from the primary node)
[root@test1 corosync]# service corosync start
Starting Corosync Cluster Engine (corosync):               [  OK  ]
[root@test1 corosync]# ssh test2.local 'service corosync start'
root@test2.local's password:
Starting Corosync Cluster Engine (corosync):               [  OK  ]
12.8. Check the cluster status
12.8.1. Problem: the crm command is missing after installation, so crm's interactive mode cannot be used
[root@test1 corosync]# crm status
-bash: crm: command not found
Fix: install the ha-clustering yum repository and then install the crmsh package.
[root@test1 corosync]# wget -P /etc/yum.repos.d/ http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/CentOS_CentOS-6/network:ha-clustering:Stable.repo
Reference: http://www.dwhd.org/20150530_014731.html, part 2, "installing crmsh", and
http://serverfault.com/questions/487309/crm-commandcluster-managment-for-pacemaker-not-found-in-latest-centos-6
Once crmsh is installed, the crm command line is available.
12.8.2. Check the cluster node status
[root@test1 corosync]# crm status
Last updated: Wed Feb 24 13:47:17 2016
Last change: Wed Feb 24 11:26:06 2016
Stack: classic openais (with plugin)
Current DC: test2.local - partition with quorum
Version: 1.1.11-97629de
2 Nodes configured, 2 expected votes
0 Resources configured

Online: [ test1.local test2.local ]

Full list of resources:
Note: both configured nodes are shown as online.
12.8.3. Check that pacemaker has started
[root@test1 corosync]# grep pcmk_startup /var/log/cluster/corosync.log
Feb 24 11:05:15 corosync [pcmk  ] info: pcmk_startup: CRM: Initialized
Feb 24 11:05:15 corosync [pcmk  ] Logging: Initialized pcmk_startup
12.8.4. Check that the cluster engine has started
[root@test1 corosync]# grep -e "Corosync Cluster Engine" -e "configuration file" /var/log/cluster/corosync.log
Feb 24 11:04:16 corosync [MAIN  ] Corosync Cluster Engine ('1.4.7'): started and ready to provide service.
Feb 24 11:04:16 corosync [MAIN  ] Successfully read main configuration file '/etc/corosync/corosync.conf'.
Note: the steps above confirm that the corosync service has started and is working correctly.
12.9. Configure corosync properties
12.9.1. Disable STONITH (no STONITH device is present) so that verify does not complain
[root@test1 corosync]# crm configure
crm(live)configure# property stonith-enabled=false
crm(live)configure# verify
crm(live)configure# commit
12.9.2. Keep resources running when the cluster loses quorum
crm(live)configure# property no-quorum-policy=ignore
crm(live)configure# verify
crm(live)configure# commit
12.9.3. Set the default resource stickiness
crm(live)configure# rsc_defaults resource-stickiness=100
crm(live)configure# verify
crm(live)configure# commit
12.9.4. Review the current configuration
crm(live)configure# show
node test1.local
node test2.local
property cib-bootstrap-options: \
    dc-version=1.1.11-97629de \
    cluster-infrastructure="classic openais (with plugin)" \
    expected-quorum-votes=2 \
    stonith-enabled=false \
    no-quorum-policy=ignore
rsc_defaults rsc-options: \
    resource-stickiness=100
12.10. Configure the cluster resources
12.10.1. Define the drbd resource
crm(live)configure# primitive mysqldrbd ocf:linbit:drbd params drbd_resource=mydrbd op start timeout=240 op stop timeout=100 op monitor role=Master interval=10 timeout=20 op monitor role=Slave interval=20 timeout=20
Note: ocf:linbit:drbd is the resource agent (the linbit provider's drbd agent). drbd_resource=mydrbd names the drbd resource. start timeout and stop timeout are the start/stop timeouts. monitor role=Master defines monitoring of the Master role with its interval and timeout; monitor role=Slave does the same for the Slave role.
crm(live)configure# show          # review the configuration
node test1.local
node test2.local
primitive mysqldrbd ocf:linbit:drbd \
    params drbd_resource=mydrbd \
    op start timeout=240 interval=0 \
    op stop timeout=100 interval=0 \
    op monitor role=Master interval=10s timeout=20s \
    op monitor role=Slave interval=20s timeout=20s
property cib-bootstrap-options: \
    dc-version=1.1.11-97629de \
    cluster-infrastructure="classic openais (with plugin)" \
    expected-quorum-votes=2 \
    stonith-enabled=false \
    no-quorum-policy=ignore
rsc_defaults rsc-options: \
    resource-stickiness=100
crm(live)configure# verify        # validate the configuration
12.10.2. Define the drbd master/slave resource
crm(live)configure# ms ms_mysqldrbd mysqldrbd meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
Note: "ms ms_mysqldrbd mysqldrbd" turns the mysqldrbd primitive into a master/slave resource named ms_mysqldrbd. meta sets the metadata attributes: master-max=1 (at most one master), master-node-max=1 (at most one master per node), clone-max=2 (at most two clones in total), clone-node-max=1 (how many clone instances may run on one node).
crm(live)configure# verify
crm(live)configure# commit
crm(live)configure# show
node test1.local
node test2.local
primitive mysqldrbd ocf:linbit:drbd \
    params drbd_resource=mydrbd \
    op start timeout=240 interval=0 \
    op stop timeout=100 interval=0 \
    op monitor role=Master interval=10s timeout=20s \
    op monitor role=Slave interval=20s timeout=20s
ms ms_mysqldrbd mysqldrbd \
    meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1
property cib-bootstrap-options: \
    dc-version=1.1.11-97629de \
    cluster-infrastructure="classic openais (with plugin)" \
    expected-quorum-votes=2 \
    stonith-enabled=false \
    no-quorum-policy=ignore
rsc_defaults rsc-options: \
    resource-stickiness=100
Note: the configuration now contains one primitive (mysqldrbd) and one master/slave resource built on it.
12.10.3. Check the cluster node status
crm(live)configure# cd
crm(live)# status
Last updated: Thu Feb 25 14:45:52 2016
Last change: Thu Feb 25 14:44:44 2016
Stack: classic openais (with plugin)
Current DC: test1.local - partition with quorum
Version: 1.1.11-97629de
2 Nodes configured, 2 expected votes
2 Resources configured

Online: [ test1.local test2.local ]

Full list of resources:

 Master/Slave Set: ms_mysqldrbd [mysqldrbd]
     Masters: [ test1.local ]
     Slaves: [ test2.local ]
Note: this is the normal state; the master/slave roles are visible and test1 is the master.
An error state looks like this:
 Master/Slave Set: ms_mydrbd [mydrbd]
     mydrbd (ocf::linbit:drbd): FAILED test2.local (unmanaged)
     mydrbd (ocf::linbit:drbd): FAILED test1.local (unmanaged)
Failed actions:
    mydrbd_stop_0 on test2.local 'not configured' (6): call=87, status=complete, last-rc-change='Thu Feb 25 14:17:34 2016', queued=0ms, exec=34ms
    mydrbd_stop_0 on test2.local 'not configured' (6): call=87, status=complete, last-rc-change='Thu Feb 25 14:17:34 2016', queued=0ms, exec=34ms
    mydrbd_stop_0 on test1.local 'not configured' (6): call=72, status=complete, last-rc-change='Thu Feb 25 14:17:34 2016', queued=0ms, exec=34ms
    mydrbd_stop_0 on test1.local 'not configured' (6): call=72, status=complete, last-rc-change='Thu Feb 25 14:17:34 2016', queued=0ms, exec=34ms
Fix: when defining the resources, keep in mind that drbd_resource=mydrbd refers to the name of the drbd resource, and neither the primitive nor the master/slave resource may reuse that same name; also, do not append "s" to the timeout values. It took a long time of testing to track this down; other explanations are welcome.
12.10.4. Verify master/slave switchover
[root@test1 ~]# crm node standby test1.local      # take the master node offline
[root@test1 ~]# crm status                        # the master is no longer online and test2 has become the master
Last updated: Thu Feb 25 14:51:58 2016
Last change: Thu Feb 25 14:51:44 2016
Stack: classic openais (with plugin)
Current DC: test1.local - partition with quorum
Version: 1.1.11-97629de
2 Nodes configured, 2 expected votes
2 Resources configured

Node test1.local: standby
Online: [ test2.local ]

Full list of resources:

 Master/Slave Set: ms_mysqldrbd [mysqldrbd]
     Masters: [ test2.local ]
     Stopped: [ test1.local ]
[root@test1 ~]# crm node online test1.local       # bring test1 back online
[root@test1 ~]# crm status                        # test2 remains the master and test1 is now the slave
Last updated: Thu Feb 25 14:52:55 2016
Last change: Thu Feb 25 14:52:39 2016
Stack: classic openais (with plugin)
Current DC: test1.local - partition with quorum
Version: 1.1.11-97629de
2 Nodes configured, 2 expected votes
2 Resources configured

Online: [ test1.local test2.local ]

Full list of resources:

 Master/Slave Set: ms_mysqldrbd [mysqldrbd]
     Masters: [ test2.local ]
     Slaves: [ test1.local ]
12.10.5. Define the filesystem resource
[root@test1 ~]# crm
crm(live)# configure
crm(live)configure# primitive mystore ocf:heartbeat:Filesystem params device=/dev/drbd0 directory=/mydrbd fstype=ext3 op start timeout=60 op stop timeout=60
# primitive mystore: define a resource named mystore that uses the heartbeat Filesystem agent;
# params set the drbd device, the mount directory, the filesystem type, and the start/stop timeouts
crm(live)configure# verify
12.10.6. Define a colocation constraint so that the Filesystem resource stays on the master node
crm(live)configure# colocation mystore_with_ms_mysqldrbd inf: mystore ms_mysqldrbd:Master
crm(live)configure# verify
12.10.7. Define an order constraint so that the master/slave resource is promoted to master before the filesystem is mounted
crm(live)configure# order mystore_after_ms_mysqldrbd mandatory: ms_mysqldrbd:promote mystore:start
# mystore starts after ms_mysqldrbd; mandatory makes the constraint strict:
# ms_mysqldrbd must be promoted first, and only then is mystore started
crm(live)configure# verify
crm(live)configure# commit
crm(live)configure# cd
crm(live)# status        # the filesystem has been mounted automatically on the master node, test1.local
Last updated: Thu Feb 25 15:29:39 2016
Last change: Thu Feb 25 15:29:36 2016
Stack: classic openais (with plugin)
Current DC: test1.local - partition with quorum
Version: 1.1.11-97629de
2 Nodes configured, 2 expected votes
3 Resources configured

Online: [ test1.local test2.local ]

Full list of resources:

 Master/Slave Set: ms_mysqldrbd [mysqldrbd]
     Masters: [ test1.local ]
     Slaves: [ test2.local ]
 mystore (ocf::heartbeat:Filesystem): Started test1.local
[root@test1 ~]# ls /mydrbd/        # the data written earlier is there
inittab  lost+found
12.10.8. Switch the master and slave nodes and verify that the filesystem is mounted automatically
[root@test1 ~]# crm node standby test1.local
[root@test1 ~]# crm node online test1.local
[root@test1 ~]# crm status
Last updated: Thu Feb 25 15:32:39 2016
Last change: Thu Feb 25 15:32:36 2016
Stack: classic openais (with plugin)
Current DC: test1.local - partition with quorum
Version: 1.1.11-97629de
2 Nodes configured, 2 expected votes
3 Resources configured

Online: [ test1.local test2.local ]

Full list of resources:

 Master/Slave Set: ms_mysqldrbd [mysqldrbd]
     Masters: [ test2.local ]
     Slaves: [ test1.local ]
 mystore (ocf::heartbeat:Filesystem): Started test2.local
[root@test2 ~]# ls /mydrbd/
inittab  lost+found
12.11. Configure MySQL
12.11.1. Disable mysqld's start-on-boot
[root@test1 ~]# chkconfig mysqld off
[root@test2 ~]# chkconfig mysqld off      # remember: a service managed as a cluster resource must never start automatically at boot
12.11.2. Configure the MySQL service on node 1 (installing MySQL itself is not covered; we go straight to its configuration)
[root@test1 mysql]# mkdir /mydrbd/data
[root@test1 mysql]# chown -R mysql.mysql /mydrbd/data/
[root@test1 mysql]# ./scripts/mysql_install_db --user=mysql --datadir=/mydrbd/data/ --basedir=/usr/local/mysql/
Installing MySQL system tables...
160225 16:07:12 [Note] /usr/local/mysql//bin/mysqld (mysqld 5.5.44) starting as process 18694 ...
OK
Filling help tables...
160225 16:07:18 [Note] /usr/local/mysql//bin/mysqld (mysqld 5.5.44) starting as process 18701 ...
OK

To start mysqld at boot time you have to copy
support-files/mysql.server to the right place for your system

PLEASE REMEMBER TO SET A PASSWORD FOR THE MySQL root USER !
To do so, start the server, then issue the following commands:

/usr/local/mysql//bin/mysqladmin -u root password 'new-password'
/usr/local/mysql//bin/mysqladmin -u root -h test1.local password 'new-password'

Alternatively you can run:
/usr/local/mysql//bin/mysql_secure_installation

which will also give you the option of removing the test
databases and anonymous user created by default.  This is
strongly recommended for production servers.

See the manual for more instructions.

You can start the MySQL daemon with:
cd /usr/local/mysql/ ; /usr/local/mysql//bin/mysqld_safe &

You can test the MySQL daemon with mysql-test-run.pl
cd /usr/local/mysql//mysql-test ; perl mysql-test-run.pl

Please report any problems at

[root@test1 mysql]# service mysqld start
Starting MySQL.....                                        [  OK  ]
[root@test1 mysql]# mysql -uroot
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.5.44 Source distribution

Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| test               |
+--------------------+
4 rows in set (0.06 sec)

mysql> create database drbd_mysql;
Query OK, 1 row affected (0.00 sec)

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| drbd_mysql         |
| mysql              |
| performance_schema |
| test               |
+--------------------+
5 rows in set (0.00 sec)

mysql>
12.11.3. Configure MySQL on the other node
Note: the MySQL data directory on the shared storage was already initialized from test1.local, so it does not need to be initialized again on test2.local.
[root@test1 mysql]# crm status
Last updated: Thu Feb 25 16:14:14 2016
Last change: Thu Feb 25 15:35:16 2016
Stack: classic openais (with plugin)
Current DC: test1.local - partition with quorum
Version: 1.1.11-97629de
2 Nodes configured, 2 expected votes
3 Resources configured

Online: [ test1.local test2.local ]

Full list of resources:

 Master/Slave Set: ms_mysqldrbd [mysqldrbd]
     Masters: [ test1.local ]
     Slaves: [ test2.local ]
 mystore (ocf::heartbeat:Filesystem): Started test1.local
1. First make test2.local the master
[root@test1 mysql]# crm node standby test1.local
[root@test1 mysql]# crm node online test1.local
[root@test1 mysql]# crm status
Last updated: Thu Feb 25 16:14:46 2016
Last change: Thu Feb 25 16:14:30 2016
Stack: classic openais (with plugin)
Current DC: test1.local - partition with quorum
Version: 1.1.11-97629de
2 Nodes configured, 2 expected votes
3 Resources configured

Node test1.local: standby
Online: [ test2.local ]

Full list of resources:

 Master/Slave Set: ms_mysqldrbd [mysqldrbd]
     Masters: [ test2.local ]
     Stopped: [ test1.local ]
 mystore (ocf::heartbeat:Filesystem): Started test2.local     # make sure test2.local has become the master node
2. MySQL configuration on test2.local
[root@test2 ~]# vim /etc/my.cnf
Add:
datadir = /mydrbd/data
[root@test2 ~]# service mysqld start
Starting MySQL.                                            [  OK  ]
[root@test2 ~]# mysql -uroot
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 1
Server version: 5.5.44 Source distribution

Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| drbd_mysql         |
| mysql              |
| performance_schema |
| test               |
+--------------------+
5 rows in set (0.10 sec)

mysql>
12.12. Define the mysql resource
1. Stop MySQL
[root@test2 ~]# service mysqld stop
Shutting down MySQL.                                       [  OK  ]
2. Define the mysql resource
[root@test1 mysql]# crm configure
crm(live)configure# primitive mysqld lsb:mysqld
crm(live)configure# verify
3. Define the colocation constraint between mysql and the master node
crm(live)configure# colocation mysqld_with_mystore inf: mysqld mystore
# mystore is always on the master node, so constraining mysqld to mystore keeps mysqld on the master as well
crm(live)configure# verify
4. Define the start-order constraint between mysql and mystore
crm(live)configure# order mysqld_after_mystore mandatory: mystore mysqld
# get the start order right: mysqld starts only after mystore
crm(live)configure# verify
crm(live)configure# commit
crm(live)configure# cd
crm(live)# status
Last updated: Thu Feb 25 16:44:29 2016
Last change: Thu Feb 25 16:42:16 2016
Stack: classic openais (with plugin)
Current DC: test1.local - partition with quorum
Version: 1.1.11-97629de
2 Nodes configured, 2 expected votes
4 Resources configured

Online: [ test1.local test2.local ]

Full list of resources:

 Master/Slave Set: ms_mysqldrbd [mysqldrbd]
     Masters: [ test2.local ]
     Slaves: [ test1.local ]
 mystore (ocf::heartbeat:Filesystem): Started test2.local
 mysqld (lsb:mysqld): Started test2.local
Note: the master node is currently test2.local.
5. Verify that MySQL works on test2.local and that everything still works after a role switch
[root@test2 ~]# mysql -uroot
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.5.44 Source distribution

Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases;          # MySQL is running on test2.local with the drbd resource mounted automatically
+--------------------+
| Database           |
+--------------------+
| information_schema |
| drbd_mysql         |
| mysql              |
| performance_schema |
| test               |
+--------------------+
5 rows in set (0.07 sec)

mysql> create database mydb;    # create a database, then switch to test1.local and check that it is visible there
Query OK, 1 row affected (0.00 sec)

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| drbd_mysql         |
| mydb               |
| mysql              |
| performance_schema |
| test               |
+--------------------+
6 rows in set (0.00 sec)

mysql> exit
[root@test1 mysql]# crm node standby test2.local    # put test2.local into standby so that test1.local automatically becomes the master
[root@test1 mysql]# crm node online test2.local
[root@test1 mysql]# crm status
Last updated: Thu Feb 25 16:53:24 2016
Last change: Thu Feb 25 16:53:19 2016
Stack: classic openais (with plugin)
Current DC: test1.local - partition with quorum
Version: 1.1.11-97629de
2 Nodes configured, 2 expected votes
4 Resources configured

Online: [ test1.local test2.local ]

Full list of resources:

 Master/Slave Set: ms_mysqldrbd [mysqldrbd]
     Masters: [ test1.local ]
     Slaves: [ test2.local ]
 mystore (ocf::heartbeat:Filesystem): Started test1.local    # test1.local has become the master
 mysqld (lsb:mysqld): Started test1.local
[root@test1 mysql]# mysql -uroot                    # log in to MySQL on test1.local
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 1
Server version: 5.5.44 Source distribution

Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| drbd_mysql         |
| mydb               |      # the mydb database created on test2.local is already here; everything is fine so far
| mysql              |
| performance_schema |
| test               |
+--------------------+
6 rows in set (0.15 sec)

mysql>
12.13. Define the VIP resource and its constraints
crm(live)configure# primitive myip ocf:heartbeat:IPaddr params ip=192.168.10.3 nic=eth0 cidr_netmask=24
# A mistake here cost half a day: the netmask had been written as 255.255.255.0, but cidr_netmask must be 24.
# When you forget the syntax, lean on tab completion and help.
crm(live)configure# verify
crm(live)configure# colocation myip_with_ms_mysqldrbd inf: ms_mysqldrbd:Master myip    # colocate the VIP with the master of ms_mysqldrbd
crm(live)configure# verify
crm(live)configure# commit
crm(live)configure# cd
crm(live)# status        # check the status
Last updated: Fri Feb 26 10:05:16 2016
Last change: Fri Feb 26 10:05:12 2016
Stack: classic openais (with plugin)
Current DC: test1.local - partition with quorum
Version: 1.1.11-97629de
2 Nodes configured, 2 expected votes
5 Resources configured

Online: [ test1.local test2.local ]

Full list of resources:

 Master/Slave Set: ms_mysqldrbd [mysqldrbd]
     Masters: [ test1.local ]
     Slaves: [ test2.local ]
 mystore (ocf::heartbeat:Filesystem): Started test1.local
 mysqld (lsb:mysqld): Started test1.local
 myip (ocf::heartbeat:IPaddr): Started test1.local     # the VIP is now running on test1.local
12.14. Test connecting through the VIP
[root@test1 ~]# ip addr        # check whether the VIP is bound on test1.local
1: lo: mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:34:7d:9f brd ff:ff:ff:ff:ff:ff
    inet 192.168.10.55/24 brd 192.168.10.255 scope global eth0
    inet 192.168.10.3/24 brd 192.168.10.255 scope global secondary eth0    # the VIP is bound to eth0 on test1.local
    inet6 fe80::20c:29ff:fe34:7d9f/64 scope link
       valid_lft forever preferred_lft forever
[root@test-3 ~]# mysql -uroot -h192.168.10.3 -p    # connect to MySQL through the VIP; the client must be granted access first, otherwise the login fails
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 6
Server version: 5.5.44 Source distribution

Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases;        # the two databases created earlier are both there
+--------------------+
| Database           |
+--------------------+
| information_schema |
| drbd_mysql         |
| mydb               |
| mysql              |
| performance_schema |
| test               |
+--------------------+
6 rows in set (0.08 sec)
Simulate a failure of test1.local:
[root@test1 mysql]# crm node standby test1.local    # demote test1.local and check whether the VIP moves automatically
[root@test1 mysql]# crm node online test1.local
[root@test1 mysql]# crm status
Last updated: Fri Feb 26 10:20:38 2016
Last change: Fri Feb 26 10:20:35 2016
Stack: classic openais (with plugin)
Current DC: test1.local - partition with quorum
Version: 1.1.11-97629de
2 Nodes configured, 2 expected votes
5 Resources configured

Online: [ test1.local test2.local ]

Full list of resources:

 Master/Slave Set: ms_mysqldrbd [mysqldrbd]
     Masters: [ test2.local ]
     Slaves: [ test1.local ]
 mystore (ocf::heartbeat:Filesystem): Started test2.local
 mysqld (lsb:mysqld): Started test2.local
 myip (ocf::heartbeat:IPaddr): Started test2.local     # the VIP is now on test2.local
Check the IP addresses on test2:
[root@test2 ~]# ip addr        # check the VIP binding on test2.local
1: lo: mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:fd:7f:e5 brd ff:ff:ff:ff:ff:ff
    inet 192.168.10.56/24 brd 192.168.10.255 scope global eth0
    inet 192.168.10.3/24 brd 192.168.10.255 scope global secondary eth0    # the VIP is now on eth0 of test2.local
    inet6 fe80::20c:29ff:fefd:7fe5/64 scope link
       valid_lft forever preferred_lft forever
Test the MySQL connection:
[root@test-3 ~]# mysql -uroot -h192.168.10.3 -p    # connect to MySQL through the VIP
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 1
Server version: 5.5.44 Source distribution

Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases;        # everything still works
+--------------------+
| Database           |
+--------------------+
| information_schema |
| drbd_mysql         |
| mydb               |
| mysql              |
| performance_schema |
| test               |
+--------------------+
6 rows in set (0.06 sec)
That completes the corosync + drbd + mysql setup. This document is not exhaustive: it does not explain in depth how corosync, heartbeat, drbd and MySQL fit together, nor how to prevent or recover from split brain. That will be added when time allows.