How to Fix the Oracle RAC CRS Not Starting Issue?

In my last post, we talked about 4 Solutions for “There has been an error” During PostgreSQL Installation. Today, let’s see how to fix the issue when you can’t start Oracle RAC CRS.

Oracle RAC CRS Issue Description

1. The RAC cluster has two nodes: linuxdb1 and linuxdb2. The linuxdb2 node encountered a failure and cannot start.

2. Only linuxdb1 needs to be started for the testing department to use.

3. An error occurred when trying to start CRS with crsctl: CRS-1714: Unable to discover any voting files.

#crsctl start cluster -n linuxdb1
 
CRS-2672: Attempting to start 'ora.cssd' on 'linuxdb1'
CRS-2672: Attempting to start 'ora.diskmon' on 'linuxdb1'
CRS-2676: Start of 'ora.diskmon' on 'linuxdb1' succeeded
CRS-2674: Start of 'ora.cssd' on 'linuxdb1' failed
CRS-2679: Attempting to clean 'ora.cssd' on 'linuxdb1'
CRS-2681: Clean of 'ora.cssd' on 'linuxdb1' succeeded
CRS-4000: Command Start failed, or completed with errors.
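
CRS-4000 only says that the start failed; the real cause surfaces in the clusterware logs and in the state of the lower-stack resources. As a quick first check (standard 11.2 crsctl commands, run as root; output not reproduced here), you can see which ohasd-managed resources actually came up:

#crsctl check crs
#crsctl stat res -t -init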

Oracle RAC CRS Not Starting: Troubleshooting Process

Check the logs:

[cssd(1934)]CRS-1714:Unable to discover any voting files, retrying discovery in 15 seconds; Details at (:CSSNM00070:) in /u01/app/11.2.0/grid/log/linuxdb1/cssd/ocssd.log
2014-02-12 07:57:03.698
[/u01/app/11.2.0/grid/bin/cssdagent(1920)]CRS-5818:Aborted command 'start' for resource 'ora.cssd'. Details at (:CRSAGF00113:) {0:52:6} in /u01/app/11.2.0/grid/log/linuxdb1/agent/ohasd/oracssdagent_root/oracssdagent_root.log.
2014-02-12 07:57:03.699
[cssd(1934)]CRS-1656:The CSS daemon is terminating due to a fatal error; Details at (:CSSSC00012:) in /u01/app/11.2.0/grid/log/linuxdb1/cssd/ocssd.log
2014-02-12 07:57:03.699
[cssd(1934)]CRS-1603:CSSD on node linuxdb1 shutdown by user.
2014-02-12 07:57:05.492
[cssd(1934)]CRS-1714:Unable to discover any voting files, retrying discovery in 15 seconds; Details at (:CSSNM00070:) in /u01/app/11.2.0/grid/log/linuxdb1/cssd/ocssd.log
2014-02-12 07:57:09.753
[ohasd(6518)]CRS-2765:Resource 'ora.cssdmonitor' has failed on server 'linuxdb1'.
2014-02-12 07:57:21.551
[cssd(2113)]CRS-1713:CSSD daemon is started in clustered mode
2014-02-12 07:57:21.676
[cssd(2113)]CRS-1714:Unable to discover any voting files, retrying discovery in 15 seconds; Details at (:CSSNM00070:) in /u01/app/11.2.0/grid/log/linuxdb1/cssd/ocssd.log
2014-02-12 07:57:36.694
[cssd(2113)]CRS-1714:Unable to discover any voting files, retrying discovery in 15 seconds; Details at (:CSSNM00070:) in /u01/app/11.2.0/grid/log/linuxdb1/cssd/ocssd.log
2014-02-12 07:57:51.713
[cssd(2113)]CRS-1714:Unable to discover any voting files, retrying discovery in 15 seconds; Details at (:CSSNM00070:) in /u01/app/11.2.0/grid/log/linuxdb1/cssd/ocssd.log
2014-02-12 07:58:06.732
[cssd(2113)]CRS-1714:Unable to discover any voting files, retrying discovery in 15 seconds; Details at (:CSSNM00070:) in /u01/app/11.2.0/grid/log/linuxdb1/cssd/ocssd.log
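
To dig into why voting-file discovery keeps failing, it helps to look at the discovery entries in ocssd.log and at the disks ASMLib is actually exposing. A rough sketch, using the log path from the messages above and the default ASMLib device directory:

#grep -i discovery /u01/app/11.2.0/grid/log/linuxdb1/cssd/ocssd.log | tail -20
#ls -l /dev/oracleasm/disks/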

Check the multipath and ASMLib status:

Running /etc/init.d/oracleasm listdisks showed everything was normal.
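
The checks behind this were presumably the standard multipath and ASMLib status commands, roughly the following (run as root; the multipath -ll output is omitted here):

#multipath -ll
#/etc/init.d/oracleasm status
#/etc/init.d/oracleasm listdisks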

Check disk information:

#fdisk -l
 
Disk /dev/sdb: 2199.0 GB, 2199023255552 bytes
255 heads, 63 sectors/track, 267349 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
 
   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1      267350  2147483647+  ee  EFI GPT
 
WARNING: GPT (GUID Partition Table) detected on '/dev/sdc'! The util fdisk doesn't support GPT. Use GNU Parted.
 
 
WARNING: The size of this disk is 2.2 TB (2199023255552 bytes).
DOS partition table format can not be used on drives for volumes
larger than 2.2 TB (2199023255040 bytes). Use parted(1) and GUID
partition table format (GPT).
 
 
Disk /dev/sdc: 2199.0 GB, 2199023255552 bytes
255 heads, 63 sectors/track, 267349 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
 
   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1      267350  2147483647+  ee  EFI GPT
 
WARNING: GPT (GUID Partition Table) detected on '/dev/sdd'! The util fdisk doesn't support GPT. Use GNU Parted.
 
 
WARNING: The size of this disk is 2.2 TB (2199023255552 bytes).
DOS partition table format can not be used on drives for volumes
larger than 2.2 TB (2199023255040 bytes). Use parted(1) and GUID
partition table format (GPT).
 
 
Disk /dev/sdd: 2199.0 GB, 2199023255552 bytes
255 heads, 63 sectors/track, 267349 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
 
   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1      267350  2147483647+  ee  EFI GPT
 
WARNING: GPT (GUID Partition Table) detected on '/dev/sde'! The util fdisk doesn't support GPT. Use GNU Parted.
 
 
WARNING: The size of this disk is 2.2 TB (2199023255552 bytes).
DOS partition table format can not be used on drives for volumes
larger than 2.2 TB (2199023255040 bytes). Use parted(1) and GUID
partition table format (GPT).
 
 
Disk /dev/sde: 2199.0 GB, 2199023255552 bytes
255 heads, 63 sectors/track, 267349 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
 
   Device Boot      Start         End      Blocks   Id  System
/dev/sde1               1      267350  2147483647+  ee  EFI GPT
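
Because these LUNs use GPT, fdisk only prints warnings; as the output itself suggests, parted shows the actual partition layout. For example, for one of the disks above:

#parted /dev/sdb print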

Check ASMLib information:

# This is a configuration file for automatic loading of the Oracle
# Automatic Storage Management library kernel driver.  It is generated
# by running /etc/init.d/oracleasm configure.  Please use that method
# to modify this file
#
 
# ORACLEASM_ENABLED: 'true' means to load the driver on boot.
ORACLEASM_ENABLED=true
 
# ORACLEASM_UID: Default user owning the /dev/oracleasm mount point.
ORACLEASM_UID=grid
 
# ORACLEASM_GID: Default group owning the /dev/oracleasm mount point.
ORACLEASM_GID=asmadmin
 
# ORACLEASM_SCANBOOT: 'true' means scan for ASM disks on boot.
ORACLEASM_SCANBOOT=true
 
# ORACLEASM_SCANORDER: Matching patterns to order disk scanning
ORACLEASM_SCANORDER="multipath sd"
 
# ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan
ORACLEASM_SCANEXCLUDE="sdf sdg sdh sdi"
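
To confirm which block devices back a given ASMLib disk label, the label can be queried directly. A sketch, with a hypothetical label DATA1 standing in for one of the names returned by listdisks (the -p option, available in newer oracleasm-support versions, prints the matching device paths):

#/usr/sbin/oracleasm querydisk -p DATA1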

Comparing the fdisk -l output with the ASMLib configuration revealed a mismatch: the shared disks now appeared as sdb, sdc, sdd, and sde, while ORACLEASM_SCANEXCLUDE still excluded the old device names sdf, sdg, sdh, and sdi. Since ORACLEASM_SCANORDER is set to "multipath sd", the underlying single-path sd devices should be excluded so that ASMLib scans the disks only through their multipath devices. I therefore changed ORACLEASM_SCANEXCLUDE from “sdf sdg sdh sdi” to “sdb sdc sdd sde”. After restarting CRS, the issue was resolved (a sketch of the sequence follows).
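
Applying the fix roughly amounts to editing the ASMLib configuration, rescanning the disks, and restarting the clusterware stack; something like the following, assuming the configuration file is /etc/sysconfig/oracleasm as on a typical install:

#vi /etc/sysconfig/oracleasm          # set ORACLEASM_SCANEXCLUDE="sdb sdc sdd sde"
#/etc/init.d/oracleasm scandisks
#/etc/init.d/oracleasm listdisks
#crsctl stop crs -f
#crsctl start crs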

Finally, let’s check the database status:

$ srvctl status asm
 
$ srvctl status instance -d devdb -i devdb1
 
$ srvctl status database -d devdb
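
Alongside srvctl, the full clusterware resource state can also be listed with crsctl to confirm everything is back online (output omitted here):

$ crsctl stat res -t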

The linuxdb1 node connected to the database successfully.

By Jaxon Tisdale

I am Jaxon Tisdale, and I share my experience with networking, AWS, and databases.
