Cloning Oracle Clusterware (applicable only to 11.2.0.2.0, not to any previous release)

Posted By Sagar Patil

Cloning is the process of copying an existing Oracle installation to a different location and then updating the copied installation to work in the new environment.

The following list describes some situations in which cloning is useful:

  • Cloning provides a way to prepare an Oracle Clusterware home once and deploy it to many hosts simultaneously. You can complete the installation in silent mode, as a noninteractive process. You do not need to use a graphical user interface (GUI) console, and you can perform cloning from a Secure Shell (SSH) terminal session, if required.
  • Cloning enables you to create a new installation (copy of a production, test, or development installation) with all patches applied to it in a single step. Once you have performed the base installation and applied all patch sets and patches on the source system, the clone performs all of these individual steps as a single procedure. This is in contrast to going through the installation process to perform the separate steps to install, configure, and patch the installation on each node in the cluster.
  • Installing Oracle Clusterware by cloning is a quick process. For example, cloning an Oracle Clusterware home to a new cluster with more than two nodes requires a few minutes to install the Oracle software, plus a few minutes more for each node (approximately the amount of time it takes to run the root.sh script).
  • Cloning provides a guaranteed method of repeating the same Oracle Clusterware installation on multiple clusters.

The steps to create a new cluster through cloning are as follows:

Prepare the new cluster nodes
Deploy Oracle Clusterware on the destination nodes
Run the clone.pl script on each destination node
Run the orainstRoot.sh script on each node
Run the CRS_home/root.sh script
Run the configuration assistants and the Oracle Cluster Verify utility

Step 1: Prepare Oracle Clusterware Home for Cloning
Install Oracle Clusterware 11g Release 2 (11.2.0.2.0).
Install any patch sets that are required (for example, 11.2.0.2.n), if necessary.
Apply one-off patches, if necessary.
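
Before taking the gold copy, it can help to confirm exactly what is installed in the source home. A quick check (assuming OPatch is present under the Grid home, as it normally is) lists the installed version and any interim patches, so you know precisely what the gold image will contain:
[oracle@rac02a1 ~]$ /opt/app/grid/product/11.2/grid_1/OPatch/opatch lsinventory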

Step 2   Shut Down Oracle Clusterware
[root@RAC1 root]# crsctl stop crs
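Before taking the copy, confirm the stack really is down on the source node; after the stop, crsctl should report that it cannot contact Oracle High Availability Services:
[root@RAC1 root]# /opt/app/grid/product/11.2/grid_1/bin/crsctl check crs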

Step 3   Create a Gold Copy of the Oracle Clusterware Installation
cd /opt/app/grid/product/11.2
tar -czvf /mnt/backup/CRS_build_gold_image_rac02a1.tgz grid_1
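
Optionally, list the archive contents before shipping it, just to confirm the whole grid_1 directory was captured:
tar -tzf /mnt/backup/CRS_build_gold_image_rac02a1.tgz | head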

Step 4   Copy the Oracle Clusterware gold image to the destination nodes
[root@rac02a1 backup]# scp CRS_build_gold_image_rac02a1.tgz  oracle@RAC1:/opt/app/grid/product/11.2
Warning: Permanently added 'RAC1,192.168.31.120' (RSA) to the list of known hosts.
oracle@RAC1's password:
CRS_build_gold_image_rac02a1.tgz                         100%  987MB  17.3MB/s   00:57
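
The extraction step is not shown above, so here is a minimal sketch, assuming the archive was copied into /opt/app/grid/product/11.2 as in the scp command: unpack it on each destination node so the home ends up at /opt/app/grid/product/11.2/grid_1.
[oracle@RAC1 ~]$ cd /opt/app/grid/product/11.2
[oracle@RAC1 11.2]$ tar -xzvf CRS_build_gold_image_rac02a1.tgz
[oracle@RAC1 11.2]$ ls -d grid_1
grid_1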

Step 5   Remove unnecessary files from the copy of the Oracle Clusterware home
The Oracle Clusterware home contains files that are relevant only to the source node, so you can remove the unnecessary files from the copy in the log, crs/init, racg/dump, srvm/log, and cdata directories. The following example for Linux and UNIX systems shows the commands you can run to remove unnecessary files from the copy of the Oracle Clusterware home:

[root@node1 root]# cd /opt/app/grid/product/11.2/grid_1
[root@node1 grid_1]# rm -rf log/hostname
[root@node1 grid_1]# find . -name '*.ouibak' -exec rm {} \;
[root@node1 grid_1]# find . -name '*.ouibak.1' -exec rm {} \;
[root@node1 grid_1]# rm -rf root.sh*
[root@node1 grid_1]# cd cfgtoollogs
[root@node1 cfgtoollogs]# find . -type f -exec rm -f {} \;

Step 6  Deploy Oracle Clusterware on the destination nodes (run these steps on EACH destination node)
Change the ownership of all files to the oracle user and oinstall group, and create a directory for the Oracle Inventory:

[root@node1 crs]# chown -R oracle:oinstall /opt/app/grid/product/11.2/grid_1
[root@node1 crs]# mkdir -p /opt/app/oracle/oraInventory/
[root@node1 crs]# chown oracle:oinstall /opt/app/oracle/oraInventory/

Go to the $GRID_HOME/clone/bin directory on each destination node and run the clone.pl script, which performs the main Oracle Clusterware cloning tasks:
$ perl clone.pl -silent ORACLE_BASE=/opt/app/oracle ORACLE_HOME=/opt/app/grid/product/11.2/grid_1 ORACLE_HOME_NAME=OraHome1Grid INVENTORY_LOCATION=/opt/app/oracle/oraInventory

[oracle@RAC1 bin]$ perl clone.pl -silent ORACLE_BASE=/opt/app/oracle ORACLE_HOME=/opt/app/grid/product/11.2/grid_1 ORACLE_HOME_NAME=OraHome1Grid INVENTORY_LOCATION=/opt/app/oracle/oraInventory
./runInstaller -clone -waitForCompletion  "ORACLE_BASE=/opt/app/oracle" "ORACLE_HOME=/opt/app/grid/product/11.2/grid_1" "ORACLE_HOME_NAME=OraHome1Grid" "INVENTORY_LOCATION=/opt/app/oracle/oraInventory" -silent -noConfig -nowait
Starting Oracle Universal Installer…
Checking swap space: must be greater than 500 MB.   Actual 1983 MB    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2011-04-01_05-05-56PM. Please wait …Oracle Universal Installer, Version 11.2.0.2.0 Production
Copyright (C) 1999, 2010, Oracle. All rights reserved.

You can find the log of this install session at:
/opt/app/oracle/oraInventory/logs/cloneActions2011-04-01_05-05-56PM.log
………………………………………………………………………………………. 100% Done.
Installation in progress (Friday, 1 April 2011 17:06:08 o'clock BST)
………………………………………………………………72% Done.
Install successful
Linking in progress (Friday, 1 April 2011 17:06:10 o'clock BST)
Link successful
Setup in progress (Friday, 1 April 2011 17:06:50 o'clock BST)
…………….                                                100% Done.
Setup successful
End of install phases.(Friday, 1 April 2011 17:07:00 o'clock BST)
WARNING:
The following configuration scripts need to be executed as the "root" user.
/opt/app/grid/product/11.2/grid_1/root.sh

To execute the configuration scripts:
1. Open a terminal window
2. Log in as "root"
3. Run the scripts

Run the script on the local node.

The cloning of OraHome1Grid was successful. Please check '/opt/app/oracle/oraInventory/logs/cloneActions2011-04-01_05-05-56PM.log' for more details.
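
Note: on a destination node that has no central inventory yet, the installer may also list an orainstRoot.sh script alongside root.sh (this corresponds to the "Run the orainstRoot.sh script on each node" step in the overview above). If it is listed, run it as root on each node before root.sh, for example:
[root@RAC1 ~]# /opt/app/oracle/oraInventory/orainstRoot.sh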

Launch the Configuration Wizard
First, confirm that the SCAN name for the new cluster resolves correctly:
[oracle@RAC2 bin]$ nslookup rac04scan
Server:         10.20.11.11
Address:        10.20.11.11#53
Name:   rac04scan
Address: 192.168.31.188
Name:   rac04scan
Address: 192.168.31.187
Name:   rac04scan
Address: 192.168.31.189

Then run the Configuration Wizard from the new Grid home:
$ $GRID_HOME/crs/config/config.sh

Run root.sh on Node A

[root@RAC1 ~]# /opt/app/grid/product/11.2/grid_1/root.sh
Check /opt/app/grid/product/11.2/grid_1/install/root_RAC1_2011-04-04_12-41-24.log for the output of root script

 

[oracle@RAC1 ~]$ tail -f /opt/app/grid/product/11.2/grid_1/install/root_RAC1_2011-04-04_12-41-24.log

The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME=  /opt/app/grid/product/11.2/grid_1
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /opt/app/grid/product/11.2/grid_1/crs/install/crsconfig_params
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
OLR initialization - successful
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
pa wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer cert
pa cert
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert
Adding daemon to inittab
ACFS-9459: ADVM/ACFS is not supported on this OS version: 'Linux 2.4'
ACFS-9201: Not Supported
ACFS-9459: ADVM/ACFS is not supported on this OS version: 'Linux 2.4'
CRS-2672: Attempting to start 'ora.mdnsd' on 'RAC1'
CRS-2676: Start of 'ora.mdnsd' on 'RAC1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'RAC1'
CRS-2676: Start of 'ora.gpnpd' on 'RAC1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'RAC1'
CRS-2672: Attempting to start 'ora.gipcd' on 'RAC1'
CRS-2676: Start of 'ora.cssdmonitor' on 'RAC1' succeeded
CRS-2676: Start of 'ora.gipcd' on 'RAC1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'RAC1'
CRS-2672: Attempting to start 'ora.diskmon' on 'RAC1'
CRS-2676: Start of 'ora.diskmon' on 'RAC1' succeeded
CRS-2676: Start of 'ora.cssd' on 'RAC1' succeeded
clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting disk: /mnt/crs1/vdisk/rac04vdsk1.
Now formatting voting disk: /mnt/crs2/vdisk/rac04vdsk2.
Now formatting voting disk: /mnt/crs3/vdisk/rac04vdsk3.
CRS-4603: Successful addition of voting disk /mnt/crs1/vdisk/rac04vdsk1.
CRS-4603: Successful addition of voting disk /mnt/crs2/vdisk/rac04vdsk2.
CRS-4603: Successful addition of voting disk /mnt/crs3/vdisk/rac04vdsk3.
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
1. ONLINE   a77b9ecfd10c4f8abf9dae8e403458e6 (/mnt/crs1/vdisk/rac04vdsk1) []
2. ONLINE   3a2c370ffe014f20bff0673b01d8164c (/mnt/crs2/vdisk/rac04vdsk2) []
3. ONLINE   8597ee290c994fd8bf23a4b3c97a98bb (/mnt/crs3/vdisk/rac04vdsk3) []
Located 3 voting disk(s).
ACFS-9459: ADVM/ACFS is not supported on this OS version: 'Linux 2.4'
ACFS-9201: Not Supported
ACFS-9459: ADVM/ACFS is not supported on this OS version: 'Linux 2.4'
ACFS-9201: Not Supported
Preparing packages for installation…
cvuqdisk-1.0.9-1
Configure Oracle Grid Infrastructure for a Cluster … succeeded

Run root.sh on Node B

[root@RAC2 ~]# /opt/app/grid/product/11.2/grid_1/root.sh
Check /opt/app/grid/product/11.2/grid_1/install/root_RAC2_2011-04-04_12-50-53.log for the output of root script

[oracle@RAC2 ~]$ tail -f /opt/app/grid/product/11.2/grid_1/install/root_RAC2_2011-04-04_12-50-53.log
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /opt/app/grid/product/11.2/grid_1/crs/install/crsconfig_params
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
OLR initialization - successful
Adding daemon to inittab
ACFS-9459: ADVM/ACFS is not supported on this OS version: 'Linux 2.4'
ACFS-9201: Not Supported
ACFS-9459: ADVM/ACFS is not supported on this OS version: 'Linux 2.4'
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node RAC1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster

[root@RAC2 ~]# /opt/app/grid/product/11.2/grid_1/bin/crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

[root@RAC1 ~]# /opt/app/grid/product/11.2/grid_1/bin/crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
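
The overview above also calls for the Cluster Verification Utility as the final step. A minimal post-installation check, run from one of the new nodes with the cluvfy tool that ships in the Grid home, would look like this:
[oracle@RAC1 ~]$ /opt/app/grid/product/11.2/grid_1/bin/cluvfy stage -post crsinst -n RAC1,RAC2 -verbose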

Step 7   Locating and Viewing Log Files Generated During Cloning
The cloning script runs multiple tools, each of which can generate log files.
After the clone.pl script finishes running, you can view the log files to obtain more information about the status of your cloning procedure. The Oracle documentation referenced below lists the key log files generated during cloning for diagnostic purposes.
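
Based on the paths that appear earlier in this post, the places to check are the cloneActions and OUI logs under the central inventory, the cfgtoollogs directory under the Grid home, and the root script logs under the Grid home's install directory, for example:
[oracle@RAC1 ~]$ ls -lt /opt/app/oracle/oraInventory/logs/cloneActions*.log
[oracle@RAC1 ~]$ ls -lt /opt/app/grid/product/11.2/grid_1/cfgtoollogs
[oracle@RAC1 ~]$ ls -lt /opt/app/grid/product/11.2/grid_1/install/root_*.log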

Ref : http://download.oracle.com/docs/cd/E11882_01/rac.112/e16794/clonecluster.htm
