ADD/REMOVE/REPLACE/MOVE Oracle Cluster Registry (OCR) and Voting Disk

Posted by Sagar Patil

Note: You must be logged in as the root user, because root owns the OCR files. Make sure there is a recent copy of the OCR file before making any changes: ocrconfig -showbackup

If there is not a recent backup copy of the OCR file, an export can be taken for the current OCR file.

Use the following command to generate an export of the online OCR file:
ocrconfig -export <OCR export_filename> -s online
If you should need to recover using this file, the following command can be used:
ocrconfig -import <OCR export_filename>
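The backup check and online export above can be combined into a small dry-run script. This sketch only prints the commands rather than running them, and the export file path is a hypothetical example:

```shell
# Dry-run sketch: print the commands used to back up the OCR before a change.
# The export path passed in is a hypothetical example location.
ocr_backup_commands() {
    export_file="$1"
    echo "ocrconfig -showbackup"                      # list recent automatic backups
    echo "ocrconfig -export ${export_file} -s online" # logical export of the online OCR
}

ocr_backup_commands /backup/ocr_before_change.dmp
```

On a real cluster the printed commands would be reviewed and executed as root.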

1. To add an OCR device:
To add an OCR device, provide the full path including the file name.

ocrconfig -replace ocr <filename>
To add an OCR mirror device, provide the full path including the file name.

ocrconfig -replace ocrmirror <filename>
2. To remove an OCR device:
To remove an OCR device:

ocrconfig -replace ocr
To remove an OCR mirror device

ocrconfig -replace ocrmirror
3. To replace or move the location of an OCR device:
To replace the OCR device with <filename>, provide the full path including the file name.

ocrconfig -replace ocr <filename>
To replace the OCR mirror device with <filename>, provide the full path including the file name.

ocrconfig -replace ocrmirror <filename>

Example: moving OCR files from OCFS to raw devices
The OCR disk must be owned by root, must be in the oinstall group, and must have permissions set to 640.
In this example the OCR files are located in the ocfs2 file system:
/ocfs2/ocr1
/ocfs2/ocr2

Create raw device files of at least 100 MB. In this example the new OCR files will be on the following devices:
/dev/raw/raw1
/dev/raw/raw2
Once the raw devices are created, use the dd command to zero out the device and make sure no data is written
to the raw devices:
dd if=/dev/zero of=/dev/raw/raw1
dd if=/dev/zero of=/dev/raw/raw2
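Because dd with no count runs until the target device is full, a sized invocation can be used when only the leading region needs zeroing. The sketch below zeroes exactly 100 MB, writing to ordinary scratch files in /tmp as stand-ins for the real raw devices:

```shell
# Zero out a fixed 100 MB per target with dd. On a real system the targets
# would be /dev/raw/raw1 and /dev/raw/raw2; /tmp files stand in for them here.
for target in /tmp/raw1.img /tmp/raw2.img; do
    dd if=/dev/zero of="$target" bs=1M count=100 2>/dev/null
done
wc -c < /tmp/raw1.img   # 104857600 bytes = 100 MB
```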

Note: Use UNIX man pages for additional information on the dd command.

Now you are ready to move/replace the OCR file to the new storage location.

Move/replace the OCR device:

ocrconfig -replace ocr /dev/raw/raw1

Add /dev/raw/raw2 as the OCR mirror device:

ocrconfig -replace ocrmirror /dev/raw/raw2
Example of adding an OCR device file
If you have upgraded your environment from a previous version, where you only had one OCR device file, you can use the following step to add an additional OCR file.
In this example a second OCR device file is added:

Add /dev/raw/raw2 as the OCR mirror device:

ocrconfig -replace ocrmirror /dev/raw/raw2
ADD/DELETE/MOVE Voting Disk
Note: crsctl votedisk commands must be run as root
Note: Only use the -force flag when CRS is down
Shut down the Oracle Clusterware stack (crsctl stop crs as root) on all nodes before making any modification to the voting disk. Determine the current voting disk location using:
crsctl query css votedisk
Take a backup of all voting disks:

dd if=voting_disk_name of=backup_file_name
Note: Use UNIX man pages for additional information on the dd command. The following command can be used to restore the voting disk from the backup file created:

dd if=backup_file_name of=voting_disk_name
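As a self-contained illustration of the backup/restore pair above, the following uses scratch files in place of the real voting device (the /tmp paths and the 20 MB size are assumptions for this sketch):

```shell
# Stand-in voting device and backup file (hypothetical /tmp paths).
voting_disk=/tmp/voting_disk.img
backup_file=/tmp/voting_disk.bak

dd if=/dev/zero of="$voting_disk" bs=1M count=20 2>/dev/null  # create a 20 MB stand-in device
dd if="$voting_disk" of="$backup_file" 2>/dev/null            # back up the voting disk
dd if="$backup_file" of="$voting_disk" 2>/dev/null            # restore it from the backup
cmp -s "$voting_disk" "$backup_file" && echo "backup matches"
```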
1. To add a Voting Disk, provide the full path including the file name:

crsctl add css votedisk <RAW_LOCATION> -force
2. To delete a Voting Disk, provide the full path including the file name:

crsctl delete css votedisk <RAW_LOCATION> -force
3. To move a Voting Disk, provide the full path including the file name:

crsctl delete css votedisk <OLD_LOCATION> -force
crsctl add css votedisk <NEW_LOCATION> -force
After modifying the voting disk, start the Oracle Clusterware stack on all nodes:

crsctl start crs
Verify the voting disk location using

crsctl query css votedisk

Example 1: Moving voting disks from OCFS to raw devices
The voting disk is a partition that Oracle Clusterware uses to verify cluster node membership and status. The voting disk must be owned by the oracle user, must be in the dba group, and must have permissions set to 644. Provide at least 20 MB of disk space for each voting disk.
In this example the Voting Disks are located in the ocfs2 file system:
/ocfs2/voting1
/ocfs2/voting2
/ocfs2/voting3
Create raw device files of at least 20 MB. In this example the new voting disks will be on the following devices:
/dev/raw/raw3
/dev/raw/raw4
/dev/raw/raw5

Once the raw devices are created, use the dd command to zero out the device and make sure no data is written to the raw devices:
dd if=/dev/zero of=/dev/raw/raw3
dd if=/dev/zero of=/dev/raw/raw4
dd if=/dev/zero of=/dev/raw/raw5

Now you are ready to move/replace the voting disks to the new storage location.

To move a Voting Disk to a new storage location:
crsctl delete css votedisk /ocfs2/voting1 -force
crsctl add css votedisk /dev/raw/raw3 -force
crsctl delete css votedisk /ocfs2/voting2 -force
crsctl add css votedisk /dev/raw/raw4 -force
crsctl delete css votedisk /ocfs2/voting3 -force
crsctl add css votedisk /dev/raw/raw5 -force
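The delete/add pairs above follow a fixed pattern, so they can be generated by a small loop. The script below only prints the commands (a dry run), on the assumption that a root user would review and execute them with CRS down:

```shell
# Print the delete/add command pairs that move each OCFS voting disk to its
# raw device. Dry run only: nothing is executed.
print_move_commands() {
    i=1
    for old in /ocfs2/voting1 /ocfs2/voting2 /ocfs2/voting3; do
        new="/dev/raw/raw$((i + 2))"   # voting1 -> raw3, voting2 -> raw4, ...
        echo "crsctl delete css votedisk $old -force"
        echo "crsctl add css votedisk $new -force"
        i=$((i + 1))
    done
}

print_move_commands
```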

Example 2: Adding voting disks
If you have upgraded your environment from a previous version, where you only had one voting disk, you can use the following steps to add additional voting disks.
In this example 2 additional Voting Disks are added:

crsctl add css votedisk /dev/raw/raw4 -force
crsctl add css votedisk /dev/raw/raw5 -force
After modifying the voting disks, start the Oracle Clusterware stack on all nodes:

crsctl start crs
Verify the voting disk location using
crsctl query css votedisk

References
Note 390880.1 – OCR Corruption after Adding/Removing voting disk to a cluster when CRS stack is running

Managing CRS/ Commands

Posted by Sagar Patil

CRS DAEMON FUNCTIONALITY

CRSD: Performs high availability recovery and management operations such as maintaining the OCR and managing application resources.
– Engine for HA operation
– Manages ‘application resources’
– Starts, stops, and fails over ‘application resources’
– Spawns separate ‘actions’ to start/stop/check application resources
– Maintains configuration profiles in the OCR (Oracle Cluster Registry)
– Stores current known state in the OCR.
– Runs as root
– Is restarted automatically on failure

OCSSD:
– OCSSD is part of RAC and Single Instance with ASM
– Provides access to node membership
– Provides group services
– Provides basic cluster locking
– Integrates with existing vendor clusterware, when present
– Can also run without integration with vendor clusterware
– Runs as oracle.
– Failure exit causes machine reboot.
– This is a feature to prevent data corruption in the event of a split brain.

EVMD: Event manager daemon. This process also starts the racgevt process to manage FAN server callouts.
– Generates events when things happen
– Spawns a permanent child evmlogger
– Evmlogger, on demand, spawns children
– Scans callout directory and invokes callouts.
– Runs as Oracle.
– Restarted automatically on failure
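To make the callout mechanism concrete, here is a minimal sketch of a server callout of the kind evmlogger invokes: each event's details arrive as command-line arguments, and the script just logs them. Real callouts live in the callout directory (commonly $ORA_CRS_HOME/racg/usrco) and must be executable; the log path and the sample argument values below are assumptions:

```shell
# Minimal FAN-style callout: append each event's arguments to a log file
# with a timestamp. The log path is a hypothetical choice for this sketch.
fan_callout() {
    LOGFILE=/tmp/fan_callout.log
    echo "$(date '+%Y-%m-%d %H:%M:%S') $*" >> "$LOGFILE"
}

# Simulated invocation with sample FAN-style arguments (hypothetical values):
fan_callout SERVICE=ERP DATABASE=ORACLE INSTANCE=RAC01 STATUS=up
```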

RESOURCE STATUS
Status of the database, all instances and all services

srvctl status database -d ORACLE -v

Status of named instances with their current services.

srvctl status instance -d ORACLE -i RAC01,RAC02 -v

Status of a named service

srvctl status service -d ORACLE -s ERP -v

Status of the node applications on a node

srvctl status nodeapps -n myclust-4

START RESOURCES
Start the database with all enabled instances

srvctl start database -d ORACLE

Start named instances

srvctl start instance -d ORACLE -i RAC03,RAC04

Start named services. Dependent instances are started as needed

srvctl start service -d ORACLE -s CRM

Start a service at the named instance

srvctl start service -d ORACLE -s CRM -i RAC04

Start node applications

srvctl start nodeapps -n myclust-4

STOP RESOURCES
Stop the database, all instances and all services

srvctl stop database -d ORACLE

Stop named instances, first relocating all existing services

srvctl stop instance -d ORACLE -i RAC03,RAC04

Stop the service

srvctl stop service -d ORACLE -s CRM

Stop the service at the named instances

srvctl stop service -d ORACLE -s CRM -i RAC04

Stop node applications. Note that instances and services also stop

srvctl stop nodeapps -n myclust-4

ADD RESOURCES

Add a new node

srvctl add nodeapps -n myclust-1 -o $ORACLE_HOME -A 139.184.201.1/255.255.255.0/hme0

Add a new database

srvctl add database -d ORACLE -o $ORACLE_HOME

Add named instances to an existing database

srvctl add instance -d ORACLE -i RAC01 -n myclust-1
srvctl add instance -d ORACLE -i RAC02 -n myclust-2
srvctl add instance -d ORACLE -i RAC03 -n myclust-3

Add a service to an existing database with preferred instances (-r) and available instances (-a). Use basic failover to the available instances

srvctl add service -d ORACLE -s STD_BATCH -r RAC01,RAC02 -a RAC03,RAC04

Add a service to an existing database with preferred instances in list one and available instances in list two. Use preconnect at the available instances

srvctl add service -d ORACLE -s STD_BATCH -r RAC01,RAC02 -a RAC03,RAC04 -P PRECONNECT

REMOVE  RESOURCES
Remove the applications for a database.
srvctl remove database -d ORACLE
Remove the applications for named instances of an existing database.
srvctl remove instance -d ORACLE -i RAC03
srvctl remove instance -d ORACLE -i RAC04
Remove the service.
srvctl remove service -d ORACLE -s STD_BATCH
Remove the service from the instances.
srvctl remove service -d ORACLE -s STD_BATCH -i RAC03,RAC04
Remove all node applications from a node.
srvctl remove nodeapps -n myclust-4

MODIFY RESOURCES
Modify an instance to execute on another node.
srvctl modify instance -d ORACLE -n my

Oracle Clusterware Administration Quick Reference

Posted by Sagar Patil

Sequence of events to bring a cluster database back up:

1.    Start node applications using “srvctl start nodeapps -n (node)”
2.    Start the ASM instance using “srvctl start asm -n (node)”
3.    Start RAC instances using “srvctl start instance -d (database) -i (instance)”
4.    Finish up by bringing the load-balanced/TAF service online: “srvctl start service -d orcl -s RAC”
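The four steps above can be sketched as a dry-run script that prints the srvctl commands in order; the node, database, instance, and service names are placeholders:

```shell
# Print the startup sequence for a cluster database (dry run).
startup_sequence() {
    node="$1"; db="$2"; inst="$3"; svc="$4"
    echo "srvctl start nodeapps -n $node"            # 1. node applications
    echo "srvctl start asm -n $node"                 # 2. ASM instance
    echo "srvctl start instance -d $db -i $inst"     # 3. RAC instance
    echo "srvctl start service -d $db -s $svc"       # 4. load-balanced/TAF service
}

startup_sequence myclust-1 orcl orcl1 RAC
```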

List the nodes participating in the cluster:

[oracle@oradb4 oracle]$ olsnodes
oradb4
oradb3
oradb2
oradb1

List all nodes participating in the cluster with their assigned node numbers:

[oracle@oradb4 tmp]$ olsnodes -n
oradb4  1
oradb3  2
oradb2  3
oradb1  4

List all nodes participating in the cluster with the private interconnect assigned to each node:

[oracle@oradb4 tmp]$ olsnodes -p
oradb4  oradb4-priv
oradb3  oradb3-priv
oradb2  oradb2-priv
oradb1  oradb1-priv

Check the health of the Oracle Clusterware daemon processes:

[oracle@oradb4 oracle]$ crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy

Query and administer css vote disks :

[root@oradb4 root]# crsctl add css votedisk /u03/oradata/CssVoteDisk.dbf
Now formatting voting disk: /u03/oradata/CssVoteDisk.dbf
Read -1 bytes of 512 at offset 0 in voting device (CssVoteDisk.dbf)
successful addition of votedisk /u03/oradata/CssVoteDisk.dbf

For dynamic state dump of the CRS:

[root@oradb4 root]# crsctl debug statedump crs
dumping State for crs objects

Dynamic state dump information is appended to the crsd log file located in the $ORA_CRS_HOME/log/oradb4/crsd directory.

Verify the Oracle Clusterware version:

[oracle@oradb4 log]$ crsctl query crs softwareversion
CRS software version on node [oradb4] is [10.2.0.0.0]

Verify the current version of Oracle Clusterware being used:

[oracle@oradb4 log]$ crsctl query crs activeversion
CRS active version on the cluster is [10.2.0.0.0]

CRSCTL : Oracle Clusterware Service Administration

Posted by Sagar Patil


RAC : Managing OCR Backup and Recovering OCR

Posted by Sagar Patil


Oracle Clusterware Log/Clusterware log files

Posted by Sagar Patil

In Oracle 10.2, Oracle Clusterware log files are created in the $ORA_CRS_HOME/log directory.

