1. Check the status of the cluster stack:
[grid@node2 ~]$ crsctl check cluster
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
2. Check the cluster resources:
[grid@node2 ~]$ crsctl status resource -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       node1
ora.cvu
      1        ONLINE  ONLINE       node1
ora.gzyt.db
      1        ONLINE  ONLINE       node1                    Open
      2        ONLINE  ONLINE       node2                    Open
ora.node1.vip
      1        ONLINE  ONLINE       node1
ora.node2.vip
      1        ONLINE  ONLINE       node2
ora.oc4j
      1        ONLINE  ONLINE       node1
ora.scan1.vip
      1        ONLINE  ONLINE       node1
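If you need to script against this resource listing, a minimal sketch (the helper name is illustrative) can flag any resource instance whose STATE column is not ONLINE — the indented rows have the numeric instance id in column 1, TARGET in column 2, and STATE in column 3:

```shell
#!/bin/sh
# Hypothetical helper: read `crsctl status resource -t` output on stdin and
# print every per-instance state row whose STATE column is not ONLINE.
check_offline() {
  awk '$1 ~ /^[0-9]+$/ && $3 != "ONLINE" { print }'
}

# Demo on a captured fragment; live use: crsctl status resource -t | check_offline
sample='ora.oc4j
      1        ONLINE  OFFLINE
ora.scan1.vip
      1        ONLINE  ONLINE       node1'
printf '%s\n' "$sample" | check_offline
```

An empty result from the pipeline means every resource instance in the listing is ONLINE.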
3. Check the cluster background daemons:
[grid@node2 ~]$ crsctl status resource -t -init
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
ora.ctssd
      1        ONLINE  ONLINE       node2                    ACTIVE:
4. Check the cluster name:
[grid@node2 ~]$ cemutlo -n
node-cluster
5. Check the node application status:
[grid@node2 ~]$ srvctl status nodeapps
VIP node1-vip is enabled
VIP node1-vip is running on node: node1
VIP node2-vip is enabled
VIP node2-vip is running on node: node2
Network is enabled
Network is running on node: node1
Network is running on node: node2
GSD is disabled
GSD is not running on node: node1
GSD is not running on node: node2
ONS is enabled
ONS daemon is running on node: node1
ONS daemon is running on node: node2
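When checking many nodes, it helps to filter this output down to only the components that are disabled or not running; a small sketch (helper name illustrative):

```shell
#!/bin/sh
# Sketch: scan `srvctl status nodeapps` output for components that are
# disabled or not running, like the GSD in the output above.
nodeapps_problems() {
  grep -E 'is disabled|is not running'
}

# Demo on a captured fragment; live use: srvctl status nodeapps | nodeapps_problems
sample='VIP node1-vip is enabled
GSD is disabled
GSD is not running on node: node1
ONS is enabled'
printf '%s\n' "$sample" | nodeapps_problems
```

Note that GSD being disabled is the expected default in 11.2 (it exists only for 9i compatibility), so a filter like this usually still shows those two GSD lines on a healthy cluster.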
6. Check the SCAN-IP configuration:
[grid@node2 ~]$ srvctl config scan
SCAN name: scanip, Network: 1/192.168.41.0/255.255.255.0/eth1
SCAN VIP name: scan1, IP: /scanip/192.168.41.139
7. Check where the SCAN-IP is actually running:
[grid@node2 ~]$ srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node node1
8. Check the SCAN listener status:
[grid@node2 ~]$ srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node node1
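For scripting (for example, before a `srvctl relocate scan_listener`), the hosting node can be pulled out of this status text; a sketch with an illustrative helper name:

```shell
#!/bin/sh
# Sketch: extract the hosting node from `srvctl status scan` /
# `srvctl status scan_listener` style output.
scan_node() {
  sed -n 's/.*is running on node \([^ ]*\).*/\1/p'
}

sample='SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node node1'
printf '%s\n' "$sample" | scan_node
```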
9. Check node VIP status:
[grid@node2 ~]$ srvctl status vip -n node1
VIP node1-vip is enabled
VIP node1-vip is running on node: node1
[grid@node2 ~]$ srvctl status vip -n node2
VIP node2-vip is enabled
VIP node2-vip is running on node: node2
10. Check node VIP configuration:
[grid@node2 ~]$ srvctl config vip -n node1
VIP exists: /node1-vip/192.168.41.143/192.168.41.0/255.255.255.0/eth1, hosting node node1
[grid@node2 ~]$ srvctl config vip -n node2
VIP exists: /node2-vip/192.168.41.144/192.168.41.0/255.255.255.0/eth1, hosting node node2
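The slash-delimited VIP spec (`/name/ip/subnet/netmask/interface`) is easy to split into fields; a sketch (helper name illustrative) that pulls out the VIP name and its address:

```shell
#!/bin/sh
# Sketch: parse the /name/ip/subnet/netmask/interface spec printed by
# `srvctl config vip`. With "/" as the separator, awk's field 1 is empty
# (the spec starts with a slash), so the name is field 2 and the IP field 3.
vip_ip() {
  sed -n 's/^VIP exists: \([^,]*\),.*/\1/p' | awk -F/ '{ print $2, $3 }'
}

sample='VIP exists: /node1-vip/192.168.41.143/192.168.41.0/255.255.255.0/eth1, hosting node node1'
printf '%s\n' "$sample" | vip_ip
```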
11. Check the local listener configuration on each node:
[grid@node2 ~]$ srvctl config listener -a
Name: LISTENER
Network: 1, Owner: grid
Home: <CRS home>
  /u01/app/11.2.0/grid on node(s) node2,node1
End points: TCP:1521
12. Check the local listener status on each node:
[grid@node2 ~]$ srvctl status listener
Listener LISTENER is enabled
Listener LISTENER is running on node(s): node2,node1
13. Check ASM instance status:
[grid@node2 ~]$ srvctl status asm -a
ASM is running on node2,node1
ASM is enabled.
14. Check disk group resources:
[grid@node2 ~]$ srvctl status diskgroup -g DATA
Disk Group DATA is running on node2,node1
15. Check ASM instance configuration:
[grid@node2 ~]$ srvctl config asm -a
ASM home: /grid/11.2.0/grid
ASM listener: LISTENER
ASM is enabled.
16. Check database status:
[grid@node2 ~]$ srvctl status database -d orcl
Instance orcl1 is running on node node1
Instance orcl2 is running on node node2
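A monitoring script usually wants a yes/no answer rather than the raw text; a sketch (helper name illustrative) that checks every expected instance reports "is running":

```shell
#!/bin/sh
# Sketch: given expected instance names as arguments and
# `srvctl status database -d orcl` output on stdin, return non-zero
# if any expected instance is missing or not running.
all_instances_up() {
  text=$(cat)
  for inst in "$@"; do
    printf '%s\n' "$text" | grep -q "Instance $inst is running" || return 1
  done
}

sample='Instance orcl1 is running on node node1
Instance orcl2 is running on node node2'
printf '%s\n' "$sample" | all_instances_up orcl1 orcl2 && echo "all up"
```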
17. Check the status of a single instance:
[grid@node2 ~]$ srvctl status instance -d orcl -i orcl2
Instance orcl2 is running on node node2
18. Check the database configuration:
[grid@node2 ~]$ srvctl config database -d orcl -a
Database unique name: orcl
Database name: orcl
Oracle home: /oracle/dbhome_1/oracle
Oracle user: oracle
Spfile: +DATA/orcl/spfileorcl.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: orcl
Database instances: orcl1,orcl2
Disk Groups: DATA
Mount point paths:
Services:
Type: RAC
Database is enabled
Database is administrator managed
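Since this configuration dump is plain "Key: value" lines, a one-line awk lookup turns it into a simple config query; a sketch (helper name illustrative):

```shell
#!/bin/sh
# Sketch: look up one "Key: value" line from `srvctl config database` output.
config_get() {
  key=$1
  awk -F': ' -v k="$key" '$1 == k { print $2 }'
}

sample='Database unique name: orcl
Start options: open
Database instances: orcl1,orcl2
Type: RAC'
printf '%s\n' "$sample" | config_get 'Database instances'
```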
19. Check node application configuration:
[grid@node2 ~]$ srvctl config nodeapps -a -g -s -l
Warning: -l option has been deprecated and will be ignored.
Network exists: 1/192.168.231.0/255.255.255.0/eth0, type static
VIP exists: /node1_vip/192.168.231.101/192.168.231.0/255.255.255.0/eth0, hosting node node1
VIP exists: /node2_vip/192.168.231.201/192.168.231.0/255.255.255.0/eth0, hosting node node2
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016
Name: LISTENER
Network: 1, Owner: grid
Home: <CRS home>
  /grid/11.2.0/grid on node(s) node2,node1
End points: TCP:1521
20. Check clock synchronization across all cluster nodes:
[grid@node2 ~]$ cluvfy comp clocksync -verbose

Verifying Clock Synchronization across the cluster nodes

Checking if Clusterware is installed on all nodes...
Check of Clusterware install passed

Checking if CTSS Resource is running on all nodes...
Check: CTSS Resource running on all nodes
  Node Name                             Status
  ------------------------------------  ------------------------
  node2                                 passed
Result: CTSS resource check passed

Querying CTSS for time offset on all nodes...
Result: Query of CTSS for time offset passed

Check CTSS state started...
Check: CTSS state
  Node Name                             State
  ------------------------------------  ------------------------
  node2                                 Active
CTSS is in Active state. Proceeding with check of clock time offsets on all nodes...
Reference Time Offset Limit: 1000.0 msecs
Check: Reference Time Offset
  Node Name     Time Offset               Status
  ------------  ------------------------  ------------------------
  node2         0.0                       passed

Time offset is within the specified limits on the following set of nodes: "[node2]"
Result: Check of clock time offsets passed

Oracle Cluster Time Synchronization Services check passed

Verification of Clock Synchronization across the cluster nodes was successful.
21. Start and stop the cluster on all nodes (as root, from the Grid home's bin directory):
[root@node2 bin]# ./crsctl start cluster -all
[root@node2 bin]# ./crsctl stop cluster -all
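Ordering matters when bouncing the whole stack: stop the database before the clusterware, and start them in the reverse order. A hedged sketch of that sequence (paths and names taken from the examples in this article; the `DRY_RUN` guard is an addition here and just prints the plan):

```shell
#!/bin/sh
# Hedged sketch of an orderly full restart of the RAC stack.
# DRY_RUN=1 (the default here) only echoes each command instead of running it.
DRY_RUN=${DRY_RUN:-1}
run() {
  if [ "$DRY_RUN" = 1 ]; then echo "WOULD RUN: $*"; else "$@"; fi
}

# Stop: database first, then the clusterware stack on every node.
run srvctl stop database -d orcl
run /u01/app/11.2.0/grid/bin/crsctl stop cluster -all
# Start: clusterware first, then the database.
run /u01/app/11.2.0/grid/bin/crsctl start cluster -all
run srvctl start database -d orcl
```

Note that `crsctl stop cluster` leaves OHASD running on each node; to stop the entire Grid Infrastructure stack on one node, use `crsctl stop crs` instead.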
22. Start and stop the database on all nodes:
[oracle@node2 ~]$ srvctl start database -d orcl
[oracle@node2 ~]$ srvctl stop database -d orcl
23. Start and stop a single instance:
[oracle@node2 ~]$ srvctl stop instance -d orcl -i orcl2
[oracle@node2 ~]$ srvctl start instance -d orcl -i orcl2
24. Check the node's network interface list:
[grid@node2 ~]$ oifcfg iflist -n -p
eth0  192.168.231.0  PRIVATE  255.255.255.0
eth1  10.10.10.0     PRIVATE  255.255.255.0
eth1  169.254.0.0    UNKNOWN  255.255.0.0
25. Check network interface attributes:
[grid@node2 ~]$ oifcfg getif
eth0  10.0.0.0      global  cluster_interconnect
eth1  192.168.41.0  global  public
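The `oifcfg getif` columns are interface, subnet, scope, and role, so it takes only an awk filter to answer "which interface carries the public network / the interconnect"; a sketch (helper name illustrative):

```shell
#!/bin/sh
# Sketch: map `oifcfg getif` output to the interface serving a given role
# (the role is the 4th column: public or cluster_interconnect).
iface_for() {
  role=$1
  awk -v r="$role" '$4 == r { print $1 }'
}

sample='eth0  10.0.0.0      global  cluster_interconnect
eth1  192.168.41.0  global  public'
printf '%s\n' "$sample" | iface_for public
```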