Node layer:
olsnodes
-n: show the number of each node.
[oracle@rac1 ~]# olsnodes -n
rac1 1
rac2 2
-p: show the name of the network interface each node uses for the private interconnect.
[oracle@rac1 ~]# olsnodes -p
rac1 rac1-priv
rac2 rac2-priv
-i: show each node's VIP.
[oracle@rac1 ~]# olsnodes -i
rac1 rac1-vip
rac2 rac2-vip
-g: print logging information.
[oracle@rac1 ~]# olsnodes -g
rac1
rac2
-v: print verbose logging information.
[oracle@rac1 ~]# olsnodes -v
prlslms: Initializing LXL global
prlsndmain: Initializing CLSS context
prlsmemberlist: No of cluster members configured = 256
prlsmemberlist: Getting information for nodenum = 1
prlsmemberlist: node_name = rac1
prlsmemberlist: ctx->lsdata->node_num = 1
prls_printdata: Printing the node data
rac1
prlsmemberlist: Getting information for nodenum = 2
prlsmemberlist: node_name = rac2
prlsmemberlist: ctx->lsdata->node_num = 2
prls_printdata: Printing the node data
rac2
prlsndmain: olsnodes executed successfully
prlsndterm: Terminating LSF
Network layer:
[oracle@rac1 ~]# oifcfg
Name:
oifcfg - Oracle Interface Configuration Tool.
Usage: oifcfg iflist [-p [-n]]
oifcfg setif {-node <nodename> | -global} {<if_name>/<subnet>:<if_type>}...
oifcfg getif [-node <nodename> | -global] [ -if <if_name>[/<subnet>] [-type <if_type>] ]
oifcfg delif [-node <nodename> | -global] [<if_name>[/<subnet>]]
oifcfg [-help]
<nodename> - name of the host, as known to a communications network
<if_name> - name by which the interface is configured in the system
<subnet> - subnet address of the interface
<if_type> - type of the interface { cluster_interconnect | public | storage }
iflist lists the network interfaces:
[oracle@rac1 ~]# oifcfg iflist
eth0 192.168.1.0
eth1 192.168.2.0
eth2 192.168.101.0
getif retrieves the configuration of individual interfaces:
[oracle@rac1 ~]# oifcfg getif
eth0 192.168.1.0 global public
eth1 192.168.2.0 global cluster_interconnect
[oracle@rac1 ~]# oifcfg getif -type public
eth0 192.168.1.0 global public
[oracle@rac1 ~]# oifcfg getif -type cluster_interconnect
eth1 192.168.2.0 global cluster_interconnect
setif configures a single interface:
[oracle@rac1 ~]# oifcfg setif -global eth2/192.168.101.0:public
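The change can be verified with getif; assuming the setif above succeeded, eth2 should now show up next to the earlier entries (the output below is a sketch, not captured from this system):
[oracle@rac1 ~]# oifcfg getif
eth0 192.168.1.0 global public
eth1 192.168.2.0 global cluster_interconnect
eth2 192.168.101.0 global public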
delif removes an interface:
[oracle@rac1 ~]# oifcfg delif -global eth2/192.168.101.0
Cluster layer:
crsctl
[oracle@rac1 ~]# crsctl
Usage: crsctl check crs - checks the viability of the CRS stack
crsctl check cssd - checks the viability of CSS
crsctl check crsd - checks the viability of CRS
crsctl check evmd - checks the viability of EVM
crsctl set css <parameter> <value> - sets a parameter override
crsctl get css <parameter> - gets the value of a CSS parameter
crsctl unset css <parameter> - sets CSS parameter to its default
crsctl query css votedisk - lists the voting disks used by CSS
crsctl add css votedisk <path> - adds a new voting disk
crsctl delete css votedisk <path> - removes a voting disk
crsctl enable crs - enables startup for all CRS daemons
crsctl disable crs - disables startup for all CRS daemons
crsctl start crs - starts all CRS daemons.
crsctl stop crs - stops all CRS daemons. Stops CRS resources in case of cluster.
crsctl start resources - starts CRS resources.
crsctl stop resources - stops CRS resources.
crsctl debug statedump evm - dumps state info for evm objects
crsctl debug statedump crs - dumps state info for crs objects
crsctl debug statedump css - dumps state info for css objects
crsctl debug log css [module:level]{,module:level} ...
- Turns on debugging for CSS
crsctl debug trace css - dumps CSS in-memory tracing cache
crsctl debug log crs [module:level]{,module:level} ...
- Turns on debugging for CRS
crsctl debug trace crs - dumps CRS in-memory tracing cache
crsctl debug log evm [module:level]{,module:level} ...
- Turns on debugging for EVM
crsctl debug trace evm - dumps EVM in-memory tracing cache
crsctl debug log res <resname:level> turns on debugging for resources
crsctl query crs softwareversion [<nodename>] - lists the version of CRS software installed
crsctl query crs activeversion - lists the CRS software operating version
crsctl lsmodules css - lists the CSS modules that can be used for debugging
crsctl lsmodules crs - lists the CRS modules that can be used for debugging
crsctl lsmodules evm - lists the EVM modules that can be used for debugging
If necesary any of these commands can be run with additional tracing by
adding a "trace" argument at the very front.
Example: crsctl trace check css
Check CRS status:
[oracle@rac1 ~]# crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy
When CRS is down:
[oracle@rac1 ~]# crsctl check crs
Failure 1 contacting CSS daemon
Cannot communicate with CRS
Cannot communicate with EVM
Check the status of the cssd, crsd, and evmd components individually:
[oracle@rac1 ~]$ crsctl check cssd
CSS appears healthy
[oracle@rac1 ~]$ crsctl check crsd
CRS appears healthy
[oracle@rac1 ~]$ crsctl check evmd
EVM appears healthy
Disable CRS automatic startup (as root):
[root@rac1 ~]# crsctl disable crs
Enable CRS automatic startup (as root):
[root@rac1 ~]# crsctl enable crs
These two commands actually modify the /etc/oracle/scls_scr/rac1/root/crsstart file.
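On a 10g system this file simply records the intended startup state, presumably the single keyword enable or disable (shown here as an assumption, not captured output):
[root@rac1 ~]# cat /etc/oracle/scls_scr/rac1/root/crsstart
enable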
Start CRS (as root):
[root@rac1 ~]# crsctl start crs
Attempting to start CRS stack
The CRS stack will be started shortly
Stop CRS (as root):
[root@rac1 ~]# crsctl stop crs
Stopping resources.
Successfully stopped CRS resources
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
View the location of the voting disk:
[oracle@rac1 ~]# crsctl query css votedisk
0. 0 /dev/raw/raw2
located 1 votedisk(s).
get reads a parameter value:
[oracle@rac1 ~]# crsctl get css misscount
60
set changes a parameter (use with caution; do not change these casually):
[oracle@rac1 ~]# crsctl set css misscount 60
Configuration parameter misscount is now set to 60.
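To back the change out, the crsctl usage above also documents unset, which restores a CSS parameter to its default:
[oracle@rac1 ~]# crsctl unset css misscount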
Tracing CRS modules for diagnostic purposes:
CRS is made up of three services, CRS, CSS, and EVM, and each service in turn consists of a set of modules. crsctl can enable tracing for each module individually and record the trace output in the log files.
crsctl lsmodules lists the modules under each service:
[oracle@rac1 ~]$ crsctl lsmodules css
The following are the CSS modules ::
CSSD
COMMCRS
COMMNS
[oracle@rac1 ~]$ crsctl lsmodules crs
The following are the CRS modules ::
CRSUI
CRSCOMM
CRSRTI
CRSMAIN
CRSPLACE
CRSAPP
CRSRES
CRSCOMM
CRSOCR
CRSTIMER
CRSEVT
CRSD
CLUCLS
CSSCLNT
COMMCRS
COMMNS
[oracle@rac1 ~]$ crsctl lsmodules evm
The following are the EVM modules ::
EVMD
EVMDMAIN
EVMCOMM
EVMEVT
EVMAPP
EVMAGENT
CRSOCR
CLUCLS
CSSCLNT
COMMCRS
COMMNS
Trace the CSSD module (must be run as root):
[root@rac1 ~]# crsctl debug log css "CSSD:1"
Configuration parameter trace is now set to 1.
Set CRSD Debug Module: CSSD Level: 1
Trace output in $CRS_HOME/log/rac1/cssd/ocssd.log:
[ CSSD]2014-08-20 22:32:38.992 [102603664] >TRACE: clssgmClientConnectMsg: Connect from con(0x8584fa8) proc(0x8584db0) pid() proto(10:2:1:1)
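To switch the tracing off again, setting the level back to 0 with the same module:level syntax should work (an assumption based on the usage shown earlier, not verified here):
[root@rac1 ~]# crsctl debug log css "CSSD:0"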
Adding a voting disk:
View the current voting disks:
[root@rac1 cssd]# crsctl query css votedisk
Stop CRS:
[root@rac1 cssd]# crsctl stop crs
Add the voting disk:
[root@rac1 cssd]# crsctl add css votedisk /dev/raw/raw2
This attempt fails: even with CRS stopped, the -force option is required to add or remove a voting disk, and -force is only safe to use while CRS is down.
The correct command to add the voting disk:
[root@rac1 cssd]# crsctl add css votedisk /dev/raw/raw2 -force
Verify after adding:
[root@rac1 cssd]# crsctl query css votedisk
Start CRS:
[root@rac1 cssd]# crsctl start crs
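Removing a voting disk is symmetrical: the delete verb from the crsctl usage above, again run with CRS stopped and the -force flag (a sketch, not verified on this system):
[root@rac1 cssd]# crsctl delete css votedisk /dev/raw/raw2 -force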
OCR command series:
1. ocrdump
Prints the contents of the OCR for inspection. It cannot perform backup or recovery; it is for reading only.
ocrdump [-stdout] [filename] [-keyname name] [-xml]
Options:
-stdout: print the contents to the screen.
filename: write the contents to a file.
-keyname: print only the specified key and its subkeys.
-xml: print the output in XML format.
The following example prints the SYSTEM.css key in XML format to the screen:
[oracle@rac1 ~]$ ocrdump -stdout -keyname SYSTEM.css -xml|more
<OCRDUMP>
<TIMESTAMP>08/20/2014 23:01:32</TIMESTAMP>
<COMMAND>/u01/crs1020/bin/ocrdump.bin -stdout -keyname SYSTEM.css -xml </COMMAND>
<KEY>
<NAME>SYSTEM.css</NAME>
<VALUE_TYPE>UNDEF</VALUE_TYPE>
<VALUE><![CDATA[]]></VALUE>
<USER_PERMISSION>PROCR_ALL_ACCESS</USER_PERMISSION>
<GROUP_PERMISSION>PROCR_READ</GROUP_PERMISSION>
<OTHER_PERMISSION>PROCR_READ</OTHER_PERMISSION>
<USER_NAME>root</USER_NAME>
<GROUP_NAME>root</GROUP_NAME>
<KEY>
<NAME>SYSTEM.css.interfaces</NAME>
<VALUE_TYPE>UNDEF</VALUE_TYPE>
<VALUE><![CDATA[]]></VALUE>
<USER_PERMISSION>PROCR_ALL_ACCESS</USER_PERMISSION>
<GROUP_PERMISSION>PROCR_CREATE_SUB_KEY</GROUP_PERMISSION>
<OTHER_PERMISSION>PROCR_READ</OTHER_PERMISSION>
...........
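The whole registry can also be written to a file for offline reading, using the filename argument from the synopsis above (the target path is illustrative):
[oracle@rac1 ~]$ ocrdump /tmp/ocr_full.dmp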
If this command runs into problems, a log file named ocrdump_<pid>.log is created under $CRS_HOME/log/rac1/client; consult it to diagnose the cause.
2. ocrcheck
ocrcheck checks the integrity of the OCR. If the OCR contents are consistent, it produces output like the following:
[oracle@rac1 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 2
Total space (kbytes) : 104344
Used space (kbytes) : 3808
Available space (kbytes) : 100536
ID : 503754514
Device/File Name : /dev/raw/raw1
Device/File integrity check succeeded
Device/File not configured
Cluster registry integrity check succeeded
If the OCR contents are inconsistent, the output contains the following message:
Device/File needs to be synchronized with the other device
Running this command also creates a log file, ocrcheck_<pid>.log, under $CRS_HOME/log/<nodename>/client:
Oracle Database 10g CRS Release 10.2.0.1.0 Production Copyright 1996, 2005 Oracle. All rights reserved.
2014-08-20 19:34:20.922: [OCRCHECK][3066518304]ocrcheck starts...
2014-08-20 19:34:21.654: [OCRCHECK][3066518304]protchcheck: OCR status : total = [104344], used = [1980], avail = [102364]
2014-08-20 19:34:21.654: [OCRCHECK][3066518304]Exiting [status=success]...
3. ocrconfig
[oracle@rac1 client]$ ocrconfig -help
Name:
ocrconfig - Configuration tool for Oracle Cluster Registry.
Synopsis:
ocrconfig [option]
option:
-export <filename> [-s online]
- Export cluster register contents to a file
-import <filename> - Import cluster registry contents from a file
-upgrade [<user> [<group>]]
- Upgrade cluster registry from previous version
-downgrade [-version <version string>]
- Downgrade cluster registry to the specified version
-backuploc <dirname> - Configure periodic backup location
-showbackup - Show backup information
-restore <filename> - Restore from physical backup
-replace ocr|ocrmirror [<filename>] - Add/replace/remove a OCR device/file
-overwrite - Overwrite OCR configuration on disk
-repair ocr|ocrmirror <filename> - Repair local OCR configuration
-help - Print out this help information
Note:
A log file will be created in
$ORACLE_HOME/log/<hostname>/client/ocrconfig_<pid>.log. Please ensure
you have file creation privileges in the above directory before
running this tool.
View the OCR's automatic backups (backups are performed by the master node, every four hours by default; if the backups listed on a node are months old, that node is not the current master):
[oracle@rac1 client]$ ocrconfig -showbackup
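The usage above also lists -backuploc for redirecting where these periodic backups are written; a sketch (the target directory is illustrative and should sit on shared storage):
[root@rac1 ~]# ocrconfig -backuploc /u01/ocr_backup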
Backing up and restoring the OCR with export/import:
Before making changes to the cluster, such as adding or removing nodes, the OCR should be backed up; export writes the backup to a specified file. After operations such as replace or restore, Oracle recommends running "cluvfy comp ocr -n all" for a full check.
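Note that the automatic physical backups can also be restored directly with -restore instead of import; a hedged sketch, assuming the default cdata backup location under the CRS home seen earlier (the file name is illustrative):
[root@rac1 ~]# ocrconfig -restore /u01/crs1020/cdata/crs/backup00.ocr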
(1) Stop CRS on all nodes.
Node 1:
[root@rac1 crsd]# crsctl stop crs
Stopping resources.
Successfully stopped CRS resources
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
Node 2:
[root@rac2 ~]# crsctl stop crs
Stopping resources.
Successfully stopped CRS resources
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
(2) Export the contents of the OCR.
[root@rac1 crsd]# cd
[root@rac1 ~]# ocrconfig -export ocrexp.exp
(3) Start CRS.
Node 1:
[root@rac1 ~]# crsctl start crs
Attempting to start CRS stack
The CRS stack will be started shortly
Node 2:
[root@rac2 ~]# crsctl start crs
Attempting to start CRS stack
The CRS stack will be started shortly
[root@rac2 ~]# crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora....B1.inst application ONLINE ONLINE rac1
ora....B2.inst application ONLINE ONLINE rac2
ora.RACDB.db application ONLINE ONLINE rac1
ora....SM1.asm application ONLINE ONLINE rac1
ora....C1.lsnr application ONLINE ONLINE rac1
ora.rac1.gsd application ONLINE ONLINE rac1
ora.rac1.ons application ONLINE ONLINE rac1
ora.rac1.vip application ONLINE ONLINE rac1
ora....SM2.asm application ONLINE ONLINE rac2
ora....C2.lsnr application ONLINE ONLINE rac2
ora.rac2.gsd application ONLINE ONLINE rac2
ora.rac2.ons application ONLINE ONLINE rac2
ora.rac2.vip application ONLINE ONLINE rac2
(4) Destroy the contents of the OCR:
[root@rac2 ~]# dd if=/dev/zero of=/dev/raw/raw1 bs=1024 count=102400
102400+0 records in
102400+0 records out
104857600 bytes (105 MB) copied, 41.7897 seconds, 2.5 MB/s
(5) Check the cluster status and OCR consistency:
[root@rac2 ~]# crs_stat -t
CRS-0184: Cannot communicate with the CRS daemon.
[root@rac2 ~]# ocrcheck
PROT-601: Failed to initialize ocrcheck
The cluster is down, so consistency cannot be checked.
(6) Check consistency with the cluvfy tool from the Clusterware installation media:
[root@rac1 cluvfy]# ./runcluvfy.sh comp ocr -n all
Verifying OCR integrity
Unable to retrieve nodelist from Oracle clusterware.
Verification cannot proceed.
The check fails.
(7) Restore the OCR contents with import.
[root@rac1 ~]# ocrconfig -import ocrexp.exp
(8) Check the OCR again.
[root@rac1 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 2
Total space (kbytes) : 104344
Used space (kbytes) : 3820
Available space (kbytes) : 100524
ID : 1731255225
Device/File Name : /dev/raw/raw1
Device/File integrity check succeeded
Device/File not configured
Cluster registry integrity check succeeded
(9) Check with the cluvfy tool.
[root@rac1 cluvfy]# ./runcluvfy.sh comp ocr -n all
Verifying OCR integrity
Checking OCR integrity...
Checking the absence of a non-clustered configuration...
All nodes free of non-clustered, local-only configurations.
Uniqueness check for OCR device passed.
Checking the version of OCR...
OCR of correct Version "2" exists.
Checking data integrity of OCR...
Data integrity check for OCR passed.
OCR integrity check passed.
Verification of OCR integrity was successful.
The check passes as well.
(10) Stop CRS.
After the OCR was destroyed, CRS stopped abnormally, but some of its processes remain alive; if CRS is not stopped first, attempting to start it directly will fail.
Node 1:
[root@rac1 crsd]# crsctl stop crs
Stopping resources.
Successfully stopped CRS resources
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
Node 2:
[root@rac2 crsd]# crsctl stop crs
Stopping resources.
Successfully stopped CRS resources
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
(11) Start CRS.
Node 1:
[root@rac1 cluvfy]# crsctl start crs
Attempting to start CRS stack
The CRS stack will be started shortly
Node 2:
[root@rac2 ~]# crsctl start crs
Attempting to start CRS stack
The CRS stack will be started shortly
[root@rac2 crsd]# crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora....B1.inst application ONLINE ONLINE rac1
ora....B2.inst application ONLINE ONLINE rac2
ora.RACDB.db application ONLINE ONLINE rac2
ora....SM1.asm application ONLINE ONLINE rac1
ora....C1.lsnr application ONLINE ONLINE rac1
ora.rac1.gsd application ONLINE ONLINE rac1
ora.rac1.ons application ONLINE ONLINE rac1
ora.rac1.vip application ONLINE ONLINE rac1
ora....SM2.asm application ONLINE ONLINE rac2
ora....C2.lsnr application ONLINE ONLINE rac2
ora.rac2.gsd application ONLINE ONLINE rac2
ora.rac2.ons application ONLINE ONLINE rac2
ora.rac2.vip application ONLINE ONLINE rac2
[root@rac2 crsd]# crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy