Step by Step: Building Oracle 11gR2 RAC + DG on RHEL 6.5 and VMware Workstation 10
Tags: Oracle, installation and deployment, RAC, DG, cluster deployment, VMware Workstation, 11gR2, RHEL 6.5
- Problems during CRS installation
- Oracle 11g RAC CRS-4535/ORA-15077
- Fixing "ohasd failed to start" in Oracle 11gR2 RAC
- Prerequisites for installing Oracle RAC 11gR2
- Oracle RAC 11gR2 installation hangs at 65%
- RAC: fixing CRS-4047: No Oracle Clusterware components configured on a single node
- The root script
- Using the UDEV service to fix RAC ASM storage device names
- 三思共享
- 11gR2 Grid install: root.sh on node2 fails with Start of resource "ora.asm -init" failed
- Shared-disk setup issues for RAC on VMware Server 2
- Fix for 11g GRID installation aborting on missing libcap.so.1
- oracle cluster utility failed
- 11gR2 Grid install ends with [INS-20802] Oracle Cluster Verification Utility failed
- Problems during database installation
- An INS-35354 case during 11gR2 RAC installation
Step by Step: Building Oracle 11gR2 RAC + DG on RHEL 6.5 and VMware Workstation 10, Part 6: Problems Encountered While Installing RAC
This chapter collects problems encountered during RAC installation and their solutions. If your installation ran into no problems, you can skip it.
Problems during CRS installation
Oracle 11g RAC CRS-4535/ORA-15077
After a fresh Oracle 11g RAC installation, crsd on the second node refused to start, for reasons that were not obvious at first. The error is CRS-4535: Cannot communicate with Cluster Ready Services; the details only show up in crsd.log.
1. Environment
[root@linux2 ~]# cat /etc/issue
Enterprise Linux Enterprise Linux Server release 5.5 (Carthage)
Kernel \r on an \m
[root@linux2 bin]# ./crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [11.2.0.1.0]
# Note: the steps below use the grid user and the root user to operate on different objects.
2. Symptoms
[root@linux2 bin]# ./crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4535: Cannot communicate with Cluster Ready Services #CRS-4535
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
[root@linux2 bin]# ps -ef | grep d.bin # note: no crsd.bin in the output below
root 3886 1 1 09:50 ? 00:00:11 /u01/app/11.2.0/grid/bin/ohasd.bin reboot
grid 3938 1 0 09:51 ? 00:00:04 /u01/app/11.2.0/grid/bin/oraagent.bin
grid 4009 1 0 09:51 ? 00:00:00 /u01/app/11.2.0/grid/bin/gipcd.bin
grid 4014 1 0 09:51 ? 00:00:00 /u01/app/11.2.0/grid/bin/mdnsd.bin
grid 4028 1 0 09:51 ? 00:00:02 /u01/app/11.2.0/grid/bin/gpnpd.bin
root 4040 1 0 09:51 ? 00:00:03 /u01/app/11.2.0/grid/bin/cssdmonitor
root 4058 1 0 09:51 ? 00:00:04 /u01/app/11.2.0/grid/bin/cssdagent
root 4060 1 0 09:51 ? 00:00:00 /u01/app/11.2.0/grid/bin/orarootagent.bin
grid 4090 1 2 09:51 ? 00:00:15 /u01/app/11.2.0/grid/bin/ocssd.bin
grid 4094 1 0 09:51 ? 00:00:02 /u01/app/11.2.0/grid/bin/diskmon.bin -d -f
root 4928 1 0 09:51 ? 00:00:00 /u01/app/11.2.0/grid/bin/octssd.bin reboot
grid 4945 1 0 09:51 ? 00:00:02 /u01/app/11.2.0/grid/bin/evmd.bin
root 6514 5886 0 10:00 pts/1 00:00:00 grep d.bin
[root@linux2 bin]# ./crsctl stat res -t -init
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
1 ONLINE ONLINE linux2 Cluster Reconfiguration
ora.crsd
1 ONLINE OFFLINE # crsd is OFFLINE
ora.cssd
1 ONLINE ONLINE linux2
ora.cssdmonitor
1 ONLINE ONLINE linux2
ora.ctssd
1 ONLINE ONLINE linux2 OBSERVER
ora.diskmon
1 ONLINE ONLINE linux2
ora.drivers.acfs
1 ONLINE OFFLINE # acfs is OFFLINE
ora.evmd
1 ONLINE ONLINE linux2
ora.gipcd
1 ONLINE ONLINE linux2
ora.gpnpd
1 ONLINE ONLINE linux2
ora.mdnsd
1 ONLINE ONLINE linux2
# Next, look at the crsd log file
[grid@linux2 ~]$ view $ORACLE_HOME/log/linux2/crsd/crsd.log
2013-01-05 10:28:27.107: [GIPCXCPT][1768145488] gipcShutdownF: skipping shutdown, count 1, from [ clsgpnp0.c : 1021],
ret gipcretSuccess (0)
2013-01-05 10:28:27.107: [ OCRASM][1768145488]proprasmo: Error in open/create file in dg [OCR_VOTE] # error opening the diskgroup
[ OCRASM][1768145488]SLOS : SLOS: cat=7, opn=kgfoAl06, dep=15077, loc=kgfokge
ORA-15077: could not locate ASM instance serving a required diskgroup # an ORA- error appears
2013-01-05 10:28:27.107: [ OCRASM][1768145488]proprasmo: kgfoCheckMount returned [7]
2013-01-05 10:28:27.107: [ OCRASM][1768145488]proprasmo: The ASM instance is down # the instance is down
2013-01-05 10:28:27.107: [ OCRRAW][1768145488]proprioo: Failed to open [+OCR_VOTE]. Returned proprasmo() with [26].
Marking location as UNAVAILABLE.
2013-01-05 10:28:27.107: [ OCRRAW][1768145488]proprioo: No OCR/OLR devices are usable # the OCR/OLR devices are unusable
2013-01-05 10:28:27.107: [ OCRASM][1768145488]proprasmcl: asmhandle is NULL
2013-01-05 10:28:27.107: [ OCRRAW][1768145488]proprinit: Could not open raw device
2013-01-05 10:28:27.107: [ OCRASM][1768145488]proprasmcl: asmhandle is NULL
2013-01-05 10:28:27.107: [ OCRAPI][1768145488]a_init:16!: Backend init unsuccessful : [26]
2013-01-05 10:28:27.107: [ CRSOCR][1768145488] OCR context init failure. Error: PROC-26: Error while accessing the
physical storage ASM error [SLOS: cat=7, opn=kgfoAl06, dep=15077, loc=kgfokge
ORA-15077: could not locate ASM instance serving a required diskgroup
] [7]
2013-01-05 10:28:27.107: [ CRSD][1768145488][PANIC] CRSD exiting: Could not init OCR, code: 26
2013-01-05 10:28:27.107: [ CRSD][1768145488] Done.
[root@linux2 bin]# ps -ef | grep pmon # no pmon process, which also shows the ASM instance is not running
root 7447 7184 0 10:48 pts/2 00:00:00 grep pmon
# From the analysis above: the ASM instance is down, which is why the crsd process cannot start
3. Solution
[grid@linux2 ~]$ asmcmd
Connected to an idle instance.
ASMCMD> startup # start the ASM instance
ASM instance started
Total System Global Area 283930624 bytes
Fixed Size 2212656 bytes
Variable Size 256552144 bytes
ASM Cache 25165824 bytes
ASM diskgroups mounted
ASMCMD> exit
#Author : Robinson
#Blog : http://blog.csdn.net/robinson_0612
# Check the cluster resource status again
[root@linux2 bin]# ./crsctl stat res -t -init
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
1 ONLINE ONLINE linux2 Started
ora.crsd
1 ONLINE INTERMEDIATE linux2
ora.cssd
1 ONLINE ONLINE linux2
ora.cssdmonitor
1 ONLINE ONLINE linux2
ora.ctssd
1 ONLINE ONLINE linux2 OBSERVER
ora.diskmon
1 ONLINE ONLINE linux2
ora.drivers.acfs
1 ONLINE OFFLINE
ora.evmd
1 ONLINE ONLINE linux2
ora.gipcd
1 ONLINE ONLINE linux2
ora.gpnpd
1 ONLINE ONLINE linux2
ora.mdnsd
1 ONLINE ONLINE linux2
# Start acfs
[root@linux2 bin]# ./crsctl start res ora.drivers.acfs -init
CRS-2672: Attempting to start 'ora.drivers.acfs' on 'linux2'
CRS-2676: Start of 'ora.drivers.acfs' on 'linux2' succeeded
# Afterwards, all resources are ONLINE
[root@linux2 bin]# ./crsctl stat res -t -init
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
1 ONLINE ONLINE linux2 Started
ora.crsd
1 ONLINE ONLINE linux2
ora.cssd
1 ONLINE ONLINE linux2
ora.cssdmonitor
1 ONLINE ONLINE linux2
ora.ctssd
1 ONLINE ONLINE linux2 OBSERVER
ora.diskmon
1 ONLINE ONLINE linux2
ora.drivers.acfs
1 ONLINE ONLINE linux2
ora.evmd
1 ONLINE ONLINE linux2
ora.gipcd
1 ONLINE ONLINE linux2
ora.gpnpd
1 ONLINE ONLINE linux2
ora.mdnsd
1 ONLINE ONLINE linux2
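Condensed, the recovery above boils down to three steps (paths and prompts as used in this article; asmcmd runs as grid, crsctl as root):

```
[grid@linux2 ~]$ asmcmd startup                                # start the ASM instance serving the OCR/voting diskgroup
[root@linux2 bin]# ./crsctl start res ora.drivers.acfs -init   # bring the ACFS drivers online
[root@linux2 bin]# ./crsctl stat res -t -init                  # verify that every resource is ONLINE
```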
Fixing "ohasd failed to start" in Oracle 11gR2 RAC
CRS-4124: Oracle High Availability Services startup failed.
CRS-4000: Command Start failed, or completed with errors.
ohasd failed to start: Inappropriate ioctl for device
ohasd failed to start at /u01/app/11.2.0/grid/crs/install/rootcrs.pl line 443.
I hit this classic 11.2.0.1 problem the first time I installed 11gR2 RAC; a quick search showed it is a known bug. The workaround is simple:
run the following command before executing root.sh:
/bin/dd if=/var/tmp/.oracle/npohasd of=/dev/null bs=1024 count=1
If you see
/bin/dd: opening `/var/tmp/.oracle/npohasd': No such file or directory
the pipe has not been created yet; keep retrying until the dd succeeds. It usually starts working around the time root.sh prints "Adding daemon to inittab".
Another fix is to change the file's ownership:
chown root:oinstall /var/tmp/.oracle/npohasd
Before re-running root.sh, don't forget to deconfigure first: /u01/app/11.2.0/grid/crs/install/roothas.pl -deconfig -force -verbose
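The "keep retrying" advice above can be scripted. Below is a minimal sketch (the function name and the retry limit are my own; the real pipe path is /var/tmp/.oracle/npohasd as in the text) that polls once per second until the pipe exists and one block has been read:

```shell
#!/bin/sh
# wait_for_npohasd: retry the dd workaround for the 11.2.0.1 ohasd bug.
#   $1 - path to the npohasd pipe (defaults to the real path)
#   $2 - maximum attempts, one per second (defaults to 300)
wait_for_npohasd() {
    pipe=${1:-/var/tmp/.oracle/npohasd}
    tries=${2:-300}
    while [ "$tries" -gt 0 ]; do
        # succeeds once root.sh has created the pipe
        # (around the "Adding daemon to inittab" message)
        if /bin/dd if="$pipe" of=/dev/null bs=1024 count=1 2>/dev/null; then
            echo "drained $pipe once; ohasd should now be able to start"
            return 0
        fi
        tries=$((tries - 1))
        sleep 1
    done
    echo "gave up waiting for $pipe" >&2
    return 1
}
```

Run it in a second terminal, with no arguments, while root.sh is executing.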
Prerequisites for installing Oracle RAC 11gR2
One prerequisite is that the network interfaces must be consistent across all nodes, in the following respects:
1. The interface names must be identical, e.g. eth0 and eth1 on every node; you cannot have eth0/eth1 on one node and eth2/eth3 on another.
I made exactly this mistake: the first node installed fine, but root.sh on the second node kept failing with "Failed to Start CSSD".
2. The names must map to the same public/private roles on every node. That is, you cannot have on one node
eth0: 192.168.1.2
eth1: 10.10.1.2
and on the other
eth0: 10.10.1.3
eth1: 192.168.1.3
3. Beyond matching addresses, the subnet masks must also match: the same public or private network cannot use a mask of
255.255.0.0
on one node and 255.255.255.0 on the other.
When nodes are cloned from a VM template, mismatched interface names are the most common problem.
Renaming a NIC on Red Hat Enterprise Linux 6 (for example, renaming eth5 to eth0):
1. Edit /etc/udev/rules.d/70-persistent-net.rules and change the NAME= entry to the new interface name.
2. Rename /etc/sysconfig/network-scripts/ifcfg-eth5 to ifcfg-eth0 (and update the DEVICE= line inside).
3. Restart networking: /etc/rc.d/init.d/network restart
Installing Oracle RAC really is fiddly: one wrong setting leads to a cascade of errors later.
Note: use NICs of the same model where possible. I have not tested mixed models, but according to the documentation the MTU (maximum transmission unit) must at least match, or errors will follow.
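A sketch of the two files involved in the rename (the MAC address and IP below are placeholders; keep the MAC already recorded in your own 70-persistent-net.rules and change only the NAME value):

```
# /etc/udev/rules.d/70-persistent-net.rules -- change NAME, keep your MAC
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:0c:29:xx:xx:xx", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"

# /etc/sysconfig/network-scripts/ifcfg-eth0 (renamed from ifcfg-eth5)
DEVICE=eth0
HWADDR=00:0c:29:xx:xx:xx
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.1.2
NETMASK=255.255.255.0
```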
Oracle RAC 11gR2 installation hangs at 65%
Symptom:
While installing Oracle RAC on Red Hat Linux 6, the installer stalls at 65% and makes no further progress.
Cause:
The firewall is enabled.
Solution:
chkconfig iptables off
service iptables stop
RAC: fixing CRS-4047: No Oracle Clusterware components configured on a single node
[root@his2 bin]# ./crsctl check crs # check the clusterware status
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Check failed, or completed with errors.
[root@his2 bin]# ./crsctl stat res -t
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Status failed, or completed with errors.
[root@his2 bin]# ./crs_stat -t
CRS-0184: Cannot communicate with the CRS daemon.
/app/grid/product/11.2.0/grid/crs/install/rootcrs.pl -deconfig -force # deconfigure the clusterware stack on this node
Using configuration parameter file: /app/grid/product/11.2.0/grid/crs/install/crsconfig_params
Network exists: 1/192.168.20.0/255.255.255.0/eth0, type static
VIP exists: /his1-vip/192.168.20.6/192.168.20.0/255.255.255.0/eth0, hosting node his1
VIP exists: /his2-vip/192.168.20.7/192.168.20.0/255.255.255.0/eth0, hosting node his2
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016
ACFS-9200: Supported
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'his2'
CRS-2677: Stop of 'ora.registry.acfs' on 'his2' succeeded
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'his2'
CRS-2673: Attempting to stop 'ora.crsd' on 'his2'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'his2'
CRS-2673: Attempting to stop 'ora.ORACRS.dg' on 'his2'
CRS-2673: Attempting to stop 'ora.crds3db.db' on 'his2'
CRS-2677: Stop of 'ora.ORACRS.dg' on 'his2' succeeded
CRS-2677: Stop of 'ora.crds3db.db' on 'his2' succeeded
CRS-2673: Attempting to stop 'ora.ORAARCH.dg' on 'his2'
CRS-2673: Attempting to stop 'ora.ORADATA.dg' on 'his2'
CRS-2677: Stop of 'ora.ORAARCH.dg' on 'his2' succeeded
CRS-2677: Stop of 'ora.ORADATA.dg' on 'his2' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'his2'
CRS-2677: Stop of 'ora.asm' on 'his2' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'his2' has completed
CRS-2677: Stop of 'ora.crsd' on 'his2' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'his2'
CRS-2673: Attempting to stop 'ora.evmd' on 'his2'
CRS-2673: Attempting to stop 'ora.asm' on 'his2'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'his2'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'his2'
CRS-2677: Stop of 'ora.asm' on 'his2' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'his2'
CRS-2677: Stop of 'ora.evmd' on 'his2' succeeded
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'his2' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'his2' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'his2' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'his2'
CRS-2677: Stop of 'ora.cssd' on 'his2' succeeded
CRS-2673: Attempting to stop 'ora.diskmon' on 'his2'
CRS-2673: Attempting to stop 'ora.crf' on 'his2'
CRS-2677: Stop of 'ora.diskmon' on 'his2' succeeded
CRS-2677: Stop of 'ora.crf' on 'his2' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'his2'
CRS-2677: Stop of 'ora.drivers.acfs' on 'his2' succeeded
CRS-2677: Stop of 'ora.gipcd' on 'his2' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'his2'
CRS-2677: Stop of 'ora.gpnpd' on 'his2' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'his2' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Successfully deconfigured Oracle clusterware stack on this node
[root@his1 ~]# su - grid
[grid@his1 ~]$ crs_stat -t # node 1 sees only its own resources, which indicates node 2 is broken
Name Type Target State Host
------------------------------------------------------------
ora....ER.lsnr ora....er.type ONLINE ONLINE his1
ora....N1.lsnr ora....er.type ONLINE ONLINE his1
ora.ORAARCH.dg ora....up.type ONLINE ONLINE his1
ora.ORACRS.dg ora....up.type ONLINE ONLINE his1
ora.ORADATA.dg ora....up.type ONLINE ONLINE his1
ora.asm ora.asm.type ONLINE ONLINE his1
ora.crds3db.db ora....se.type ONLINE ONLINE his1
ora.cvu ora.cvu.type ONLINE ONLINE his1
ora.gsd ora.gsd.type OFFLINE OFFLINE
ora....SM1.asm application ONLINE ONLINE his1
ora....S1.lsnr application ONLINE ONLINE his1
ora.his1.gsd application OFFLINE OFFLINE
ora.his1.ons application ONLINE ONLINE his1
ora.his1.vip ora....t1.type ONLINE ONLINE his1
ora....network ora....rk.type ONLINE ONLINE his1
ora.oc4j ora.oc4j.type ONLINE ONLINE his1
ora.ons ora.ons.type ONLINE ONLINE his1
ora....ry.acfs ora....fs.type ONLINE ONLINE his1
ora.scan1.vip ora....ip.type ONLINE ONLINE his1
[grid@his1 ~]$ crsctl stat res -t # again, only node 1's resources are visible, so node 2 has a problem
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
ONLINE ONLINE his1
ora.ORAARCH.dg
ONLINE ONLINE his1
ora.ORACRS.dg
ONLINE ONLINE his1
ora.ORADATA.dg
ONLINE ONLINE his1
ora.asm
ONLINE ONLINE his1 Started
ora.gsd
OFFLINE OFFLINE his1
ora.net1.network
ONLINE ONLINE his1
ora.ons
ONLINE ONLINE his1
ora.registry.acfs
ONLINE ONLINE his1
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE his1
ora.crds3db.db
1 ONLINE ONLINE his1 Open
2 ONLINE OFFLINE Instance Shutdown
ora.cvu
1 ONLINE ONLINE his1
ora.his1.vip
1 ONLINE ONLINE his1
ora.oc4j
1 ONLINE ONLINE his1
ora.scan1.vip
1 ONLINE ONLINE his1
[root@his2 bin]# /app/grid/product/11.2.0/grid/root.sh # re-run root.sh to reconfigure all cluster services on this node
Running Oracle 11g root script...
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /app/grid/product/11.2.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /app/grid/product/11.2.0/grid/crs/install/crsconfig_params
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
OLR initialization - successful
Adding daemon to inittab
ACFS-9200: Supported
ACFS-9300: ADVM/ACFS distribution files found.
ACFS-9307: Installing requested ADVM/ACFS software.
ACFS-9308: Loading installed ADVM/ACFS drivers.
ACFS-9321: Creating udev for ADVM/ACFS.
ACFS-9323: Creating module dependencies - this may take some time.
ACFS-9327: Verifying ADVM/ACFS devices.
ACFS-9309: ADVM/ACFS installation correctness verified.
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node his1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
Preparing packages for installation...
cvuqdisk-1.0.9-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@his2 bin]# ./crsctl stat res -t # the output shows everything is back to normal
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
ONLINE ONLINE his1
ONLINE ONLINE his2
ora.ORAARCH.dg
ONLINE ONLINE his1
ONLINE ONLINE his2
ora.ORACRS.dg
ONLINE ONLINE his1
ONLINE ONLINE his2
ora.ORADATA.dg
ONLINE ONLINE his1
ONLINE ONLINE his2
ora.asm
ONLINE ONLINE his1 Started
ONLINE ONLINE his2
ora.gsd
OFFLINE OFFLINE his1
OFFLINE OFFLINE his2
ora.net1.network
ONLINE ONLINE his1
ONLINE ONLINE his2
ora.ons
ONLINE ONLINE his1
ONLINE ONLINE his2
ora.registry.acfs
ONLINE ONLINE his1
ONLINE ONLINE his2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE his1
ora.crds3db.db
1 ONLINE ONLINE his1 Open
2 ONLINE ONLINE his2 Open
ora.cvu
1 ONLINE ONLINE his1
ora.his1.vip
1 ONLINE ONLINE his1
ora.his2.vip
1 ONLINE ONLINE his2
ora.oc4j
1 ONLINE ONLINE his1
ora.scan1.vip
1 ONLINE ONLINE his1
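In short, the recovery for a node that reports CRS-4047 is: deconfigure the clusterware stack on that node only, then re-run root.sh so it rejoins the cluster (paths and prompts as in this article):

```
[root@his2 ~]# /app/grid/product/11.2.0/grid/crs/install/rootcrs.pl -deconfig -force   # wipe the local CRS configuration
[root@his2 ~]# /app/grid/product/11.2.0/grid/root.sh                                   # reconfigure the node and rejoin
[root@his2 ~]# /app/grid/product/11.2.0/grid/bin/crsctl stat res -t                    # verify both nodes are ONLINE
```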
The root script
1. When I ran the script on the second node, I got the error below:
[root@rac2 ~]# /oracle/app/grid/product/11.2.0/root.sh
Running Oracle 11g root script...
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /oracle/app/grid/product/11.2.0
Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /oracle/app/grid/product/11.2.0/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
OLR initialization - successful
Adding daemon to inittab
ACFS-9200: Supported
ACFS-9300: ADVM/ACFS distribution files found.
ACFS-9307: Installing requested ADVM/ACFS software.
ACFS-9308: Loading installed ADVM/ACFS drivers.
ACFS-9321: Creating udev for ADVM/ACFS.
ACFS-9323: Creating module dependencies - this may take some time.
ACFS-9327: Verifying ADVM/ACFS devices.
ACFS-9309: ADVM/ACFS installation correctness verified.
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac2'
CRS-2676: Start of 'ora.mdnsd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac2'
CRS-2676: Start of 'ora.gpnpd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac2'
CRS-2672: Attempting to start 'ora.gipcd' on 'rac2'
CRS-2676: Start of 'ora.gipcd' on 'rac2' succeeded
CRS-2676: Start of 'ora.cssdmonitor' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac2'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac2'
CRS-2676: Start of 'ora.diskmon' on 'rac2' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac2' succeeded
Disk Group CRS creation failed with the following message:
ORA-15018: diskgroup cannot be created
ORA-15031: disk specification '/dev/oracleasm/disks/CRS3' matches no disks
ORA-15025: could not open disk "/dev/oracleasm/disks/CRS3"
ORA-15056: additional error message
Configuration of ASM ... failed
see asmca logs at /oracle/app/oracle/cfgtoollogs/asmca for details
Did not succssfully configure and start ASM at /oracle/app/grid/product/11.2.0/crs/install/crsconfig_lib.pm line 6464.
/oracle/app/grid/product/11.2.0/perl/bin/perl -I/oracle/app/grid/product/11.2.0/perl/lib -I/oracle/app/grid/product/11.2.0/crs/install /oracle/app/grid/product/11.2.0/crs/install/rootcrs.pl execution failed
[grid@rac2 ~]$ vi oracle/app/oracle/cfgtoollogs/asmca/asmca-110428PM061902.log
... (earlier log lines omitted) ...
[main] [ 2011-04-28 18:19:38.135 CST ] [UsmcaLogger.logInfo:142] Diskstring in createDG to be updated: '/dev/oracleasm/disks/*'
[main] [ 2011-04-28 18:19:38.136 CST ] [UsmcaLogger.logInfo:142] update param sql ALTER SYSTEM SET asm_diskstring='/dev/oracleasm/disks/*' SID='*'
[main] [ 2011-04-28 18:19:38.262 CST ] [InitParamAttributes.loadDBParams:4450] Checking if SPFILE is used
[main] [ 2011-04-28 18:19:38.276 CST ] [InitParamAttributes.loadDBParams:4461] spParams = [Ljava.lang.String;@1a001ff
[main] [ 2011-04-28 18:19:38.277 CST ] [ASMParameters.loadASMParameters:459] useSPFile=false
[main] [ 2011-04-28 18:19:38.277 CST ] [SQLEngine.doSQLSubstitution:2392] The substituted sql statement:=select count(*) from v$ASM_DISKGROUP where name=upper('CRS')
[main] [ 2011-04-28 18:19:38.423 CST ] [UsmcaLogger.logInfo:142] CREATE DISKGROUP SQL: CREATE DISKGROUP CRS EXTERNAL REDUNDANCY DISK '/dev/oracleasm/disks/CRS1',
'/dev/oracleasm/disks/CRS2',
'/dev/oracleasm/disks/CRS3' ATTRIBUTE 'compatible.asm'='11.2.0.0.0'
[main] [ 2011-04-28 18:19:38.724 CST ] [SQLEngine.done:2167] Done called
[main] [ 2011-04-28 18:19:38.731 CST ] [UsmcaLogger.logException:172] SEVERE:method oracle.sysman.assistants.usmca.backend.USMDiskGroupManager:createDiskGroups
[main] [ 2011-04-28 18:19:38.731 CST ] [UsmcaLogger.logException:173] ORA-15018: diskgroup cannot be created
ORA-15031: disk specification '/dev/oracleasm/disks/CRS3' matches no disks
ORA-15025: could not open disk "/dev/oracleasm/disks/CRS3"
ORA-15056: additional error message
[main] [ 2011-04-28 18:19:38.731 CST ] [UsmcaLogger.logException:174] oracle.sysman.assistants.util.sqlEngine.SQLFatalErrorException: ORA-15018: diskgroup cannot be created
ORA-15031: disk specification '/dev/oracleasm/disks/CRS3' matches no disks
ORA-15025: could not open disk "/dev/oracleasm/disks/CRS3"
ORA-15056: additional error message
oracle.sysman.assistants.util.sqlEngine.SQLEngine.executeImpl(SQLEngine.java:1655)
oracle.sysman.assistants.util.sqlEngine.SQLEngine.executeSql(SQLEngine.java:1903)
oracle.sysman.assistants.usmca.backend.USMDiskGroupManager.createDiskGroups(USMDiskGroupManager.java:236)
oracle.sysman.assistants.usmca.backend.USMDiskGroupManager.createDiskGroups(USMDiskGroupManager.java:121)
oracle.sysman.assistants.usmca.backend.USMDiskGroupManager.createDiskGroupsLocal(USMDiskGroupManager.java:2209)
oracle.sysman.assistants.usmca.backend.USMInstance.configureLocalASM(USMInstance.java:3093)
oracle.sysman.assistants.usmca.service.UsmcaService.configureLocalASM(UsmcaService.java:1047)
oracle.sysman.assistants.usmca.model.UsmcaModel.performConfigureLocalASM(UsmcaModel.java:903)
oracle.sysman.assistants.usmca.model.UsmcaModel.performOperation(UsmcaModel.java:779)
oracle.sysman.assistants.usmca.Usmca.execute(Usmca.java:171)
oracle.sysman.assistants.usmca.Usmca.main(Usmca.java:366)
[main] [ 2011-04-28 18:19:38.732 CST ] [UsmcaLogger.logExit:123] Exiting oracle.sysman.assistants.usmca.backend.USMDiskGroupManager Method : createDiskGroups
[main] [ 2011-04-28 18:19:38.732 CST ] [UsmcaLogger.logInfo:142] Diskgroups created
[main] [ 2011-04-28 18:19:38.733 CST ] [UsmcaLogger.logInfo:142] Diskgroup creation is not successful.
[main] [ 2011-04-28 18:19:38.733 CST ] [UsmcaLogger.logExit:123] Exiting oracle.sysman.assistants.usmca.model.UsmcaModel Method : performConfigureLocalASM
[main] [ 2011-04-28 18:19:38.733 CST ] [UsmcaLogger.logExit:123] Exiting oracle.sysman.assistants.usmca.model.UsmcaModel Method : performOperation
Solution:
Grant the grid user access to /dev/oracleasm:
[root@rac2 ~]# chown -R grid.oinstall /dev/oracleasm
[root@rac2 ~]# chmod -R 775 /dev/oracleasm
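Note that chown/chmod on device nodes only lasts until the next reboot, since /dev is recreated at boot. With ASMLib, a persistent alternative (a sketch; the prompts are abbreviated) is to set the interface owner in the driver configuration and reload:

```
[root@rac2 ~]# /usr/sbin/oracleasm configure -i
Default user to own the driver interface []: grid
Default group to own the driver interface []: oinstall
[root@rac2 ~]# /usr/sbin/oracleasm exit        # unload the driver interface
[root@rac2 ~]# /usr/sbin/oracleasm init        # reload so /dev/oracleasm/* picks up the new ownership
[root@rac2 ~]# /usr/sbin/oracleasm scandisks
```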
2. The ora.asm resource cannot run on rac2
[grid@rac2 ~]$ crsctl status resource -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CRS.dg
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.asm
ONLINE ONLINE rac1 Started
ONLINE ONLINE rac2
ora.gsd
OFFLINE OFFLINE rac1
OFFLINE OFFLINE rac2
ora.net1.network
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.ons
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.registry.acfs
ONLINE ONLINE rac1
ONLINE ONLINE rac2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE rac1
ora.cvu
1 ONLINE ONLINE rac1
ora.oc4j
1 ONLINE ONLINE rac1
ora.rac1.vip
1 ONLINE ONLINE rac1
ora.rac2.vip
1 ONLINE ONLINE rac2
ora.scan1.vip
1 ONLINE ONLINE rac1
[root@rac2 ~]# /oracle/app/grid/product/11.2.0/crs/install/rootcrs.pl -verbose -deconfig -force -lastnode
Using configuration parameter file: /oracle/app/grid/product/11.2.0/crs/install/crsconfig_params
Network exists: 1/10.157.45.0/255.255.255.0/eth0, type static
VIP exists: /rac1vip/10.157.45.174/10.157.45.0/255.255.255.0/eth0, hosting node rac1
VIP exists: /rac2vip/10.157.45.157/10.157.45.0/255.255.255.0/eth0, hosting node rac2
GSD exists