Cloning Oracle Clusterware (applicable to 11.2.0.2.0 only, not to earlier releases)

Posted by Sagar Patil

Cloning is the process of copying an existing Oracle installation to a different location and then updating the copied installation to work in the new environment.

The following list describes some situations in which cloning is useful:

  • Cloning provides a way to prepare an Oracle Clusterware home once and deploy it to many hosts simultaneously. You can complete the installation in silent mode, as a noninteractive process. You do not need to use a graphical user interface (GUI) console, and you can perform cloning from a Secure Shell (SSH) terminal session, if required.
  • Cloning enables you to create a new installation (copy of a production, test, or development installation) with all patches applied to it in a single step. Once you have performed the base installation and applied all patch sets and patches on the source system, the clone performs all of these individual steps as a single procedure. This is in contrast to going through the installation process to perform the separate steps to install, configure, and patch the installation on each node in the cluster.
  • Installing Oracle Clusterware by cloning is a quick process. For example, cloning an Oracle Clusterware home to a new cluster with more than two nodes requires a few minutes to install the Oracle software, plus a few minutes more for each node (approximately the amount of time it takes to run the root.sh script).
  • Cloning provides a guaranteed method of repeating the same Oracle Clusterware installation on multiple clusters.

The steps to create a new cluster through cloning are as follows:

Prepare the new cluster nodes
Deploy Oracle Clusterware on the destination nodes
Run the clone.pl script on each destination node
Run the orainstRoot.sh script on each node
Run the CRS_home/root.sh script
Run the configuration assistants and the Oracle Cluster Verify utility

Step 1: Prepare Oracle Clusterware Home for Cloning
Install Oracle Clusterware 11g Release 2 (11.2.0.2.0).
Install any patch sets that are required (for example, 11.2.0.2.n), if necessary.
Apply one-off patches, if necessary.

Step 2: Shut down Oracle Clusterware
[root@RAC1 root]# crsctl stop crs
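If you want to be sure the stack really is down before taking the copy, a quick check on the source node (a sketch; the exact message text can vary by patch level) is:

[root@RAC1 root]# crsctl check crs
CRS-4639: Could not contact Oracle High Availability Services

Repeat the stop and check on every node of the source cluster if it has more than one.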

Step 3: Create a Gold Copy of the Oracle Clusterware Installation
cd /opt/app/grid/product/11.2
tar -czvf /mnt/backup/CRS_build_gold_image_rac02a2.tgz grid_1
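Before shipping the archive, it is worth a quick sanity check that it actually contains the grid home (file name as created above):

[root@rac02a1 backup]# tar -tzf /mnt/backup/CRS_build_gold_image_rac02a2.tgz | head

The listing should show paths starting with grid_1/.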

Step 4: Copy Oracle Clusterware to the destination nodes
[root@rac02a1 backup]# scp CRS_build_gold_image_rac02a1.tgz  oracle@RAC1:/opt/app/grid/product/11.2
Warning: Permanently added 'RAC1,192.168.31.120' (RSA) to the list of known hosts.
oracle@RAC1's password:
CRS_build_gold_image_rac02a1.tgz                         100%  987MB  17.3MB/s   00:57

Step 5: Remove unnecessary files from the copy of the Oracle Clusterware home
The Oracle Clusterware home contains files that are relevant only to the source node, so you can remove the unnecessary files from the copy in the log, crs/init, racg/dump, srvm/log, and cdata directories. The following example for Linux and UNIX systems shows the commands you can run to remove unnecessary files from the copy of the Oracle Clusterware home:

[root@node1 root]# cd /opt/app/grid/product/11.2/grid_1
[root@node1 grid_1]# rm -rf /opt/app/grid/product/11.2/grid_1/log/hostname
[root@node1 grid_1]# find . -name '*.ouibak' -exec rm {} \;
[root@node1 grid_1]# find . -name '*.ouibak.1' -exec rm {} \;
[root@node1 grid_1]# rm -rf root.sh*
[root@node1 grid_1]# cd cfgtoollogs
[root@node1 cfgtoollogs]# find . -type f -exec rm -f {} \;

Step 6: Deploy Oracle Clusterware on the destination nodes (run this on EACH destination node)
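The gold image copied in Step 4 still has to be unpacked on each destination node. Assuming the archive was created from /opt/app/grid/product/11.2 as in Step 3 (so it contains the grid_1 directory), something like the following does it; run it as root so permissions are preserved:

[root@node1 ~]# cd /opt/app/grid/product/11.2
[root@node1 11.2]# tar -xzpf CRS_build_gold_image_rac02a1.tgz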
Change the ownership of all files to the oracle user and oinstall group, and create a directory for the Oracle Inventory:

[root@node1 crs]# chown -R oracle:oinstall /opt/app/grid/product/11.2/grid_1
[root@node1 crs]# mkdir -p /opt/app/oracle/oraInventory/
[root@node1 crs]# chown oracle:oinstall /opt/app/oracle/oraInventory/

Go to the $GRID_HOME/clone/bin directory on each destination node and run the clone.pl script, which performs the main Oracle Clusterware cloning tasks:
$perl clone.pl -silent ORACLE_BASE=/opt/app/oracle ORACLE_HOME=/opt/app/grid/product/11.2/grid_1 ORACLE_HOME_NAME=OraHome1Grid INVENTORY_LOCATION=/opt/app/oracle/oraInventory

[oracle@RAC1 bin]$ perl clone.pl -silent ORACLE_BASE=/opt/app/oracle ORACLE_HOME=/opt/app/grid/product/11.2/grid_1 ORACLE_HOME_NAME=OraHome1Grid INVENTORY_LOCATION=/opt/app/oracle/oraInventory
./runInstaller -clone -waitForCompletion  "ORACLE_BASE=/opt/app/oracle" "ORACLE_HOME=/opt/app/grid/product/11.2/grid_1" "ORACLE_HOME_NAME=OraHome1Grid" "INVENTORY_LOCATION=/opt/app/oracle/oraInventory" -silent -noConfig -nowait
Starting Oracle Universal Installer…
Checking swap space: must be greater than 500 MB.   Actual 1983 MB    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2011-04-01_05-05-56PM. Please wait …Oracle Universal Installer, Version 11.2.0.2.0 Production
Copyright (C) 1999, 2010, Oracle. All rights reserved.

You can find the log of this install session at:
/opt/app/oracle/oraInventory/logs/cloneActions2011-04-01_05-05-56PM.log
………………………………………………………………………………………. 100% Done.
Installation in progress (Friday, 1 April 2011 17:06:08 o’clock BST)
………………………………………………………………72% Done.
Install successful
Linking in progress (Friday, 1 April 2011 17:06:10 o’clock BST)
Link successful
Setup in progress (Friday, 1 April 2011 17:06:50 o’clock BST)
…………….                                                100% Done.
Setup successful
End of install phases.(Friday, 1 April 2011 17:07:00 o’clock BST)
WARNING:
The following configuration scripts need to be executed as the “root” user.
/opt/app/grid/product/11.2/grid_1/root.sh

To execute the configuration scripts:
1. Open a terminal window
2. Log in as “root”
3. Run the scripts

Run the script on the local node.

The cloning of OraHome1Grid was successful. Please check ‘/opt/app/oracle/oraInventory/logs/cloneActions2011-04-01_05-05-56PM.log’ for more details.

Launch the Configuration Wizard
[oracle@RAC2 bin]$ nslookup rac04scan
Server:         10.20.11.11
Address:        10.20.11.11#53
Name:   rac04scan
Address: 192.168.31.188
Name:   rac04scan
Address: 192.168.31.187
Name:   rac04scan
Address: 192.168.31.189
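The nslookup confirms the SCAN name resolves to three addresses. Before launching the configuration wizard it can also be worth running a cluster verification pre-check from the cloned home; a minimal sketch (node names as used in this post) is:

[oracle@RAC2 bin]$ $GRID_HOME/bin/cluvfy stage -pre crsinst -n RAC1,RAC2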

$ $GRID_HOME/crs/config/config.sh

Run root.sh on Node A

[root@RAC1 ~]# /opt/app/grid/product/11.2/grid_1/root.sh
Check /opt/app/grid/product/11.2/grid_1/install/root_RAC1_2011-04-04_12-41-24.log for the output of root script

 

[oracle@RAC1 ~]$ tail -f /opt/app/grid/product/11.2/grid_1/install/root_RAC1_2011-04-04_12-41-24.log

The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME=  /opt/app/grid/product/11.2/grid_1
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /opt/app/grid/product/11.2/grid_1/crs/install/crsconfig_params
LOCAL ADD MODE
Creating OCR keys for user ‘root’, privgrp ‘root’..
Operation successful.
OLR initialization – successful
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
pa wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer cert
pa cert
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert
Adding daemon to inittab
ACFS-9459: ADVM/ACFS is not supported on this OS version: ‘Linux 2.4’
ACFS-9201: Not Supported
ACFS-9459: ADVM/ACFS is not supported on this OS version: ‘Linux 2.4’
CRS-2672: Attempting to start ‘ora.mdnsd’ on ‘RAC1’
CRS-2676: Start of ‘ora.mdnsd’ on ‘RAC1’ succeeded
CRS-2672: Attempting to start ‘ora.gpnpd’ on ‘RAC1’
CRS-2676: Start of ‘ora.gpnpd’ on ‘RAC1’ succeeded
CRS-2672: Attempting to start ‘ora.cssdmonitor’ on ‘RAC1’
CRS-2672: Attempting to start ‘ora.gipcd’ on ‘RAC1’
CRS-2676: Start of ‘ora.cssdmonitor’ on ‘RAC1’ succeeded
CRS-2676: Start of ‘ora.gipcd’ on ‘RAC1’ succeeded
CRS-2672: Attempting to start ‘ora.cssd’ on ‘RAC1’
CRS-2672: Attempting to start ‘ora.diskmon’ on ‘RAC1’
CRS-2676: Start of ‘ora.diskmon’ on ‘RAC1’ succeeded
CRS-2676: Start of ‘ora.cssd’ on ‘RAC1’ succeeded
clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user ‘root’, privgrp ‘root’..
Operation successful.
Now formatting voting disk: /mnt/crs1/vdisk/rac04vdsk1.
Now formatting voting disk: /mnt/crs2/vdisk/rac04vdsk2.
Now formatting voting disk: /mnt/crs3/vdisk/rac04vdsk3.
CRS-4603: Successful addition of voting disk /mnt/crs1/vdisk/rac04vdsk1.
CRS-4603: Successful addition of voting disk /mnt/crs2/vdisk/rac04vdsk2.
CRS-4603: Successful addition of voting disk /mnt/crs3/vdisk/rac04vdsk3.
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
1. ONLINE   a77b9ecfd10c4f8abf9dae8e403458e6 (/mnt/crs1/vdisk/rac04vdsk1) []
2. ONLINE   3a2c370ffe014f20bff0673b01d8164c (/mnt/crs2/vdisk/rac04vdsk2) []
3. ONLINE   8597ee290c994fd8bf23a4b3c97a98bb (/mnt/crs3/vdisk/rac04vdsk3) []
Located 3 voting disk(s).
ACFS-9459: ADVM/ACFS is not supported on this OS version: ‘Linux 2.4’
ACFS-9201: Not Supported
ACFS-9459: ADVM/ACFS is not supported on this OS version: ‘Linux 2.4’
ACFS-9201: Not Supported
Preparing packages for installation…
cvuqdisk-1.0.9-1
Configure Oracle Grid Infrastructure for a Cluster … succeeded

Run root.sh on Node B

[root@RAC2 ~]# /opt/app/grid/product/11.2/grid_1/root.sh
Check /opt/app/grid/product/11.2/grid_1/install/root_RAC2_2011-04-04_12-50-53.log for the output of root script

[oracle@RAC2 ~]$ tail -f /opt/app/grid/product/11.2/grid_1/install/root_RAC2_2011-04-04_12-50-53.log
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /opt/app/grid/product/11.2/grid_1/crs/install/crsconfig_params
LOCAL ADD MODE
Creating OCR keys for user ‘root’, privgrp ‘root’..
Operation successful.
OLR initialization – successful
Adding daemon to inittab
ACFS-9459: ADVM/ACFS is not supported on this OS version: ‘Linux 2.4’
ACFS-9201: Not Supported
ACFS-9459: ADVM/ACFS is not supported on this OS version: ‘Linux 2.4’
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node RAC1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster

[root@RAC2 ~]# /opt/app/grid/product/11.2/grid_1/bin/crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

[root@RAC1 ~]# /opt/app/grid/product/11.2/grid_1/bin/crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
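With both nodes reporting a healthy stack, a couple of extra post-clone checks are worth running from either node; olsnodes should list both RAC1 and RAC2 with their node numbers, and the resource listing should show the usual ora.* resources ONLINE:

[root@RAC1 ~]# /opt/app/grid/product/11.2/grid_1/bin/olsnodes -n
[root@RAC1 ~]# /opt/app/grid/product/11.2/grid_1/bin/crsctl stat res -t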

Step 7: Locating and Viewing Log Files Generated During Cloning
The cloning script runs multiple tools, each of which can generate log files.
After the clone.pl script finishes running, you can view the log files to obtain more information about the status of your cloning procedure. The Oracle documentation (see the reference below) lists the key log files generated during cloning for diagnostic purposes.
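For a quick scan of the clone log for problems, something along these lines (log path taken from the clone.pl output above) usually suffices:

[oracle@RAC1 ~]$ grep -iE "error|fail|warning" /opt/app/oracle/oraInventory/logs/cloneActions*.log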

Ref : http://download.oracle.com/docs/cd/E11882_01/rac.112/e16794/clonecluster.htm

Cleaning up a machine with previous Oracle 11g Clusterware/RAC install

Posted by Sagar Patil

Here I will be deleting everything from a 2 node 11g RAC cluster

  1. Use “crs_stop -all” to stop all services on RAC nodes
  2. Use DBCA GUI to delete all RAC databases from nodes
  3. Use netca to delete LISTENER config
  4. Deinstall Grid Infrastructure from Server
  5. Deinstall Oracle database software from Server

Steps 1-3 are self-explanatory
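That said, if a GUI is not available, steps 1 and 2 can also be driven from the command line. A rough sketch (DBNAME and the SYS password are placeholders; adjust to your environment):

$ crs_stat -t        # review what is currently registered with Clusterware
$ crs_stop -all      # step 1: stop all registered resources
$ dbca -silent -deleteDatabase -sourceDB DBNAME -sysDBAUserName sys -sysDBAPassword change_on_install   # step 2: drop the database without the GUI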

4. Deinstall Grid Infrastructure from the server:

[oracle@RAC2 backup]$ $GRID_HOME/deinstall/deinstall

Checking for required files and bootstrapping …
Please wait …
Location of logs /opt/app/oracle/oraInventory/logs/
############ ORACLE DEINSTALL & DECONFIG TOOL START ############

######################### CHECK OPERATION START #########################
Install check configuration START
Checking for existence of the Oracle home location /opt/app/grid/product/11.2/grid_1
Oracle Home type selected for de-install is: CRS
Oracle Base selected for de-install is: /opt/app/oracle
Checking for existence of central inventory location /opt/app/oracle/oraInventory
Checking for existence of the Oracle Grid Infrastructure home /opt/app/grid/product/11.2/grid_1
The following nodes are part of this cluster: RAC1,RAC2
Install check configuration END
Skipping Windows and .NET products configuration check
Checking Windows and .NET products configuration END
Traces log file: /opt/app/oracle/oraInventory/logs//crsdc.log
Network Configuration check config START
Network de-configuration trace file location: /opt/app/oracle/oraInventory/logs/netdc_check2011-03-31_10-14-05-AM.log
Network Configuration check config END
Asm Check Configuration START
ASM de-configuration trace file location: /opt/app/oracle/oraInventory/logs/asmcadc_check2011-03-31_10-14-06-AM.log
ASM configuration was not detected in this Oracle home. Was ASM configured in this Oracle home (y|n) [n]:
######################### CHECK OPERATION END #########################

####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: /opt/app/grid/product/11.2/grid_1
The cluster node(s) on which the Oracle home de-installation will be performed are:RAC1,RAC2
Oracle Home selected for de-install is: /opt/app/grid/product/11.2/grid_1
Inventory Location where the Oracle home registered is: /opt/app/oracle/oraInventory
Skipping Windows and .NET products configuration check
ASM was not detected in the Oracle Home
Do you want to continue (y – yes, n – no)? [n]: y
A log of this session will be written to: ‘/opt/app/oracle/oraInventory/logs/deinstall_deconfig2011-03-31_10-14-02-AM.out’
Any error messages from this session will be written to: ‘/opt/app/oracle/oraInventory/logs/deinstall_deconfig2011-03-31_10-14-02-AM.err’

######################## CLEAN OPERATION START ########################
ASM de-configuration trace file location: /opt/app/oracle/oraInventory/logs/asmcadc_clean2011-03-31_10-14-44-AM.log
ASM Clean Configuration END
Network Configuration clean config START
Network de-configuration trace file location: /opt/app/oracle/oraInventory/logs/netdc_clean2011-03-31_10-14-44-AM.log
De-configuring Naming Methods configuration file on all nodes…
Naming Methods configuration file de-configured successfully.
De-configuring Local Net Service Names configuration file on all nodes…
Local Net Service Names configuration file de-configured successfully.
De-configuring Directory Usage configuration file on all nodes…
Directory Usage configuration file de-configured successfully.
De-configuring backup files on all nodes…
Backup files de-configured successfully.
The network configuration has been cleaned up successfully.
Network Configuration clean config END
---------------------------------------->
The deconfig command below can be executed in parallel on all the remote nodes. Execute the command on the local node.
Run the following command as the root user or the administrator on node "RAC1".
/tmp/deinstall2011-03-31_10-13-56AM/perl/bin/perl -I/tmp/deinstall2011-03-31_10-13-56AM/perl/lib -I/tmp/deinstall2011-03-31_10-13-56AM/crs/install /tmp/deinstall2011-03-31_10-13-56AM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2011-03-31_10-13-56AM/response/deinstall_Ora11g_gridinfrahome1.rsp"
Run the following command as the root user or the administrator on node "RAC2".
/tmp/deinstall2011-03-31_10-13-56AM/perl/bin/perl -I/tmp/deinstall2011-03-31_10-13-56AM/perl/lib -I/tmp/deinstall2011-03-31_10-13-56AM/crs/install /tmp/deinstall2011-03-31_10-13-56AM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2011-03-31_10-13-56AM/response/deinstall_Ora11g_gridinfrahome1.rsp" -lastnode
Press Enter after you finish running the above commands
<----------------------------------------

Let's run these commands on the nodes

[oracle@RAC1 app]$ /tmp/deinstall2011-03-31_10-13-56AM/perl/bin/perl -I/tmp/deinstall2011-03-31_10-13-56AM/perl/lib -I/tmp/deinstall2011-03-31_10-13-56AM/crs/install /tmp/deinstall2011-03-31_10-13-56AM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2011-03-31_10-13-56AM/response/deinstall_Ora11g_gridinfrahome1.rsp"
[oracle@RAC1 app]$ su -
Password:
[root@RAC1 ~]# /tmp/deinstall2011-03-31_10-13-56AM/perl/bin/perl -I/tmp/deinstall2011-03-31_10-13-56AM/perl/lib -I/tmp/deinstall2011-03-31_10-13-56AM/crs/install /tmp/deinstall2011-03-31_10-13-56AM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2011-03-31_10-13-56AM/response/deinstall_Ora11g_gridinfrahome1.rsp"
[root@RAC1 ~]# /tmp/deinstall2011-03-31_10-22-37AM/perl/bin/perl -I/tmp/deinstall2011-03-31_10-22-37AM/perl/lib -I/tmp/deinstall2011-03-31_10-22-37AM/crs/install /tmp/deinstall2011-03-31_10-22-37AM/crs/install/rootcrs.pl -force  -deconfig -paramfile "/tmp/deinstall2011-03-31_10-22-37AM/response/deinstall_Ora11g_gridinfrahome1.rsp"
Using configuration parameter file: /tmp/deinstall2011-03-31_10-22-37AM/response/deinstall_Ora11g_gridinfrahome1.rsp
Network exists: 1/192.168.31.0/255.255.255.0/bond0, type static
VIP exists: /RAC1-vip/192.168.31.21/192.168.31.0/255.255.255.0/bond0, hosting node RAC1
VIP exists: /RAC2-vip/192.168.31.23/192.168.31.0/255.255.255.0/bond0, hosting node RAC2
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016
ACFS-9200: Supported
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on ‘RAC1’
CRS-2673: Attempting to stop ‘ora.crsd’ on ‘RAC1’
CRS-2677: Stop of ‘ora.crsd’ on ‘RAC1’ succeeded
CRS-2673: Attempting to stop ‘ora.mdnsd’ on ‘RAC1’
CRS-2673: Attempting to stop ‘ora.crf’ on ‘RAC1’
CRS-2673: Attempting to stop ‘ora.ctssd’ on ‘RAC1’
CRS-2673: Attempting to stop ‘ora.evmd’ on ‘RAC1’
CRS-2673: Attempting to stop ‘ora.cluster_interconnect.haip’ on ‘RAC1’
CRS-2677: Stop of ‘ora.crf’ on ‘RAC1’ succeeded
CRS-2677: Stop of ‘ora.mdnsd’ on ‘RAC1’ succeeded
CRS-2677: Stop of ‘ora.cluster_interconnect.haip’ on ‘RAC1’ succeeded
CRS-2677: Stop of ‘ora.evmd’ on ‘RAC1’ succeeded
CRS-2677: Stop of ‘ora.ctssd’ on ‘RAC1’ succeeded
CRS-2673: Attempting to stop ‘ora.cssd’ on ‘RAC1’
CRS-2677: Stop of ‘ora.cssd’ on ‘RAC1’ succeeded
CRS-2673: Attempting to stop ‘ora.gipcd’ on ‘RAC1’
CRS-2673: Attempting to stop ‘ora.diskmon’ on ‘RAC1’
CRS-2677: Stop of ‘ora.diskmon’ on ‘RAC1’ succeeded
CRS-2677: Stop of ‘ora.gipcd’ on ‘RAC1’ succeeded
CRS-2673: Attempting to stop ‘ora.gpnpd’ on ‘RAC1’
CRS-2677: Stop of ‘ora.gpnpd’ on ‘RAC1’ succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on ‘RAC1’ has completed
CRS-4133: Oracle High Availability Services has been stopped.
Successfully deconfigured Oracle clusterware stack on this node
************** **************

… continue as below once the above commands complete successfully

Removing Windows and .NET products configuration END
Oracle Universal Installer clean START
Detach Oracle home ‘/opt/app/grid/product/11.2/grid_1’ from the central inventory on the local node : Done
Failed to delete the directory ‘/opt/app/grid/product/11.2/grid_1’. The directory is in use.
Delete directory ‘/opt/app/grid/product/11.2/grid_1’ on the local node : Failed <<<<
The Oracle Base directory ‘/opt/app/oracle’ will not be removed on local node. The directory is in use by Oracle Home ‘/opt/app/oracle/product/11.2/db_1’.
The Oracle Base directory ‘/opt/app/oracle’ will not be removed on local node. The directory is in use by central inventory.
Detach Oracle home ‘/opt/app/grid/product/11.2/grid_1’ from the central inventory on the remote nodes ‘RAC1’ : Done
Delete directory ‘/opt/app/grid/product/11.2/grid_1’ on the remote nodes ‘RAC1’ : Done
The Oracle Base directory ‘/opt/app/oracle’ will not be removed on node ‘RAC1’. The directory is in use by Oracle Home ‘/opt/app/oracle/product/11.2/db_1’.
The Oracle Base directory ‘/opt/app/oracle’ will not be removed on node ‘RAC1’. The directory is in use by central inventory.
Oracle Universal Installer cleanup was successful.
Oracle Universal Installer clean END
Oracle install clean START
Clean install operation removing temporary directory ‘/tmp/deinstall2011-03-31_10-22-37AM’ on node ‘RAC2’
Clean install operation removing temporary directory ‘/tmp/deinstall2011-03-31_10-22-37AM’ on node ‘RAC1’
Oracle install clean END
######################### CLEAN OPERATION END #########################

####################### CLEAN OPERATION SUMMARY #######################
Oracle Clusterware is stopped and successfully de-configured on node “RAC2”
Oracle Clusterware is stopped and successfully de-configured on node “RAC1”
Oracle Clusterware is stopped and de-configured successfully.
Skipping Windows and .NET products configuration clean
Successfully detached Oracle home ‘/opt/app/grid/product/11.2/grid_1’ from the central inventory on the local node.
Failed to delete directory ‘/opt/app/grid/product/11.2/grid_1’ on the local node.
Successfully detached Oracle home ‘/opt/app/grid/product/11.2/grid_1’ from the central inventory on the remote nodes ‘RAC1’.
Successfully deleted directory ‘/opt/app/grid/product/11.2/grid_1’ on the remote nodes ‘RAC1’.
Oracle Universal Installer cleanup was successful.
Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################
############# ORACLE DEINSTALL & DECONFIG TOOL END #############

[oracle@RAC2 11.2]$ cd $GRID_HOME
[oracle@RAC2 grid_1]$ pwd
/opt/app/grid/product/11.2/grid_1
[oracle@RAC2 grid_1]$ ls -lrt
total 0

Oracle Clusterware was clearly removed from $CRS_HOME/$GRID_HOME. Let's proceed with the next step.
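A final cross-check is the central inventory itself; after a successful detach the grid home entry should no longer appear (a sketch, path as used throughout this post):

[oracle@RAC2 ~]$ grep "HOME NAME" /opt/app/oracle/oraInventory/ContentsXML/inventory.xml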

5. Deinstall Oracle database software from Server

Note: Always use the Oracle Universal Installer to remove Oracle software. Do not delete any Oracle home directories without first using the Installer to remove the software.

[oracle@RAC2 11.2]$ pwd
/opt/app/oracle/product/11.2
[oracle@RAC2 11.2]$ du db_1/
4095784 db_1/

Start the Installer as follows:
[oracle@RAC2 11.2]$ $ORACLE_HOME/oui/bin/runInstaller
Starting Oracle Universal Installer…

Checking swap space: must be greater than 500 MB.   Actual 2047 MB    Passed
Checking monitor: must be configured to display at least 256 colors.    Actual 16777216    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2011-03-31_10-37-33AM. Please wait …[oracle@RAC2 11.2]$ Oracle Universal Installer, Version 11.2.0.2.0 Production
Copyright (C) 1999, 2010, Oracle. All rights reserved.

####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is:
The cluster node(s) on which the Oracle home de-installation will be performed are:RAC1,RAC2
Oracle Home selected for de-install is: /opt/app/oracle/product/11.2/db_1
Inventory Location where the Oracle home registered is: /opt/app/oracle/oraInventory
Skipping Windows and .NET products configuration check
Following RAC listener(s) will be de-configured: LISTENER
No Enterprise Manager configuration to be updated for any database(s)
No Enterprise Manager ASM targets to update
No Enterprise Manager listener targets to migrate
Checking the config status for CCR
RAC1 : Oracle Home exists with CCR directory, but CCR is not configured
RAC2 : Oracle Home exists with CCR directory, but CCR is not configured
CCR check is finished
Do you want to continue (y – yes, n – no)? [n]:

……………………………………….  You will see lots of messages

####################### CLEAN OPERATION SUMMARY #######################
Following RAC listener(s) were de-configured successfully: LISTENER
Cleaning the config for CCR
As CCR is not configured, so skipping the cleaning of CCR configuration
CCR clean is finished
Skipping Windows and .NET products configuration clean
Successfully detached Oracle home ‘/opt/app/oracle/product/11.2/db_1’ from the central inventory on the local node.
Successfully deleted directory ‘/opt/app/oracle/product/11.2/db_1’ on the local node.
Successfully detached Oracle home ‘/opt/app/oracle/product/11.2/db_1’ from the central inventory on the remote nodes ‘RAC2’.
Successfully deleted directory ‘/opt/app/oracle/product/11.2/db_1’ on the remote nodes ‘RAC2’.
Oracle Universal Installer cleanup completed with errors.

Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################
############# ORACLE DEINSTALL & DECONFIG TOOL END #############

Let's go to $ORACLE_HOME and see whether any executables remain.

[oracle@RAC1 app]$ cd $ORACLE_HOME
-bash: cd: /opt/app/oracle/product/11.2/db_1: No such file or directory
[oracle@RAC2 product]$ pwd
/opt/app/oracle/product
[oracle@RAC2 product]$ du 11.2/
4       11.2/
(clearly no files available here)

10g RAC Install under RHEL/OEL 4.5

Posted by Sagar Patil

1 Objectives

2 System Configuration
2.1 Machine Configuration
2.2 External/Shared Storage
2.3 Kernel Parameters

3 Oracle Software Configuration
3.1 Directory Structure
3.2 Database Layout
3.3 Redo Logs
3.4 Controlfiles

4 Oracle Pre-Installation Tasks
4.1 Installing Redhat
4.2 Network Configuration
4.3 Copy Oracle 10.2.0.1 software onto server
4.4 Check installed packages
4.5 validate script
4.6 Download ASM packages
4.7 Download OCFS packages
4.8 Creating Required Operating System Groups and Users
4.9 Oracle required directory creation
4.10 Verifying That the User nobody Exists
4.11 Configuring SSH on Cluster Member Nodes for oracle
4.12 Configuring SSH on Cluster Member Nodes for root
4.13 VNC setup
4.14 Kernel parameters
4.15 Verifying Hangcheck-timer Module on Kernel 2.6
4.16 Oracle user limits
4.17 Installing the cvuqdisk Package for Linux
4.18 Disk Partitioning
4.19 Checking the Network Setup with CVU
4.20 Checking the Hardware and Operating System Setup with CVU
4.21 Checking the Operating System Requirements with CVU
4.22 Verifying Shared Storage
4.23 Verifying the Clusterware Requirements with CVU
4.24 ASM package install
4.25 OCFS package install
4.26 Disable SELinux
4.27 OCFS2 Configuration
4.28 OCFS2 File system format
4.29 OCFS2 File system mount

5 Installation
5.1 CRS install
5.2 ASM Install
5.3 Install Database Software
5.4 Create RAC Database

6 Scripts and profile files
6.1 .bash_profile rac01
6.2 .bash_profile rac02

7 RAC Infrastructure Testing
7.1 RAC Voting Disk Test
7.2 RAC Cluster Registry Test
7.3 RAC ASM Tests
7.4 RAC Interconnect Test
7.5 Loss of Oracle Config File

Appendix
1. OCR/Voting disk volumes inaccessible by rac02
2. RAC cluster went down on PUBLIC network test


RAC Build on Solaris: Fifth Phase

Posted by Sagar Patil

Step-by-step instructions on how to remove the temp nodes from the RAC cluster and how to verify their removal.

REMOVAL OF CLUSTERING AFTER FAILOVER

1. Shut down the instances prod1 and prod2, then do the following.

2. Remove all the DEVDB entries (for devdb, tempracsrv3 and tempracsrv4) from tnsnames.ora on both servers, i.e. prodracsrv1 and prodracsrv2.

3. Remove the following entries from the init.ora on prodracsrv1 and prodracsrv2 (if you use an spfile, see the sketch after this list):

*.log_archive_config='dg_config=(PROD,DEVDB)'
*.log_archive_dest_2='service=DEVDB valid_for=(online_logfiles,primary_role) db_unique_name=DEVDB'
*.standby_file_management=auto
*.fal_server='DEVDB'
*.fal_client='PROD'
*.service_names='PROD'

4. After this, your PROD database is ready after the failover.
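If the instances use an spfile rather than a text init.ora, the same cleanup can be done from SQL*Plus instead of editing files; a minimal sketch (parameter names as listed in step 3, adjust to whatever your configuration actually sets):

SQL> ALTER SYSTEM RESET log_archive_config SCOPE=SPFILE SID='*';
SQL> ALTER SYSTEM RESET log_archive_dest_2 SCOPE=SPFILE SID='*';
SQL> ALTER SYSTEM RESET standby_file_management SCOPE=SPFILE SID='*';
SQL> ALTER SYSTEM RESET fal_server SCOPE=SPFILE SID='*';
SQL> ALTER SYSTEM RESET fal_client SCOPE=SPFILE SID='*';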

RAC Build on Solaris : Fourth Phase

Posted by Sagar Patil

Step-by-step instructions on how to fail the RAC databases over from the temp nodes to the prod nodes, how to verify the failover, and how to test RAC database connectivity after the failover.

FAILOVER

Performing a failover in a Data Guard configuration converts the standby database into the production database. The following sections describe this process.

Manual Failover

Manual failover is performed by the administrator directly through the Enterprise Manager graphical user interface, or the Data Guard broker command-line interface (DGMGRL), or by issuing SQL*Plus statements. The sections below describe the relevant SQL*Plus commands.

Simulation of Failover :-

Shut down both instances devdb1 and devdb2 (on tempracsrv3 and tempracsrv4) by connecting "/ as sysdba" from the command line and issuing the following command:

SQL> shutdown abort

Manual Failover to a Physical Standby Database (in PROD_PRODRACSRV1)

Use the following commands to perform a manual failover of a physical standby database:

1. Initiate the failover by issuing the following on the target standby database:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH FORCE;

Note: Include the FORCE keyword to ensure that the RFS processes on the standby database fail over without waiting for the network connections to time out through normal TCP timeout processing before shutting down.

2. Convert the physical standby database to the production role:

ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

3. If the standby database was never opened read-only since the last time it was started, then open the new production database by issuing the following statement:

ALTER DATABASE OPEN;

If the physical standby database has been opened in read-only mode since the last time it was started, shut down the target standby database and restart it:

SQL> SHUTDOWN IMMEDIATE;

SQL> STARTUP;

Note: In rare circumstances, administrators may wish to avoid waiting for the standby database to complete applying redo in the current standby redo log file before performing the failover. (note: use of Data Guard real-time apply will avoid this delay by keeping apply up to date on the standby database). If so desired, administrators may issue the ALTER DATABASE ACTIVATE STANDBY DATABASE statement to perform an immediate failover.

This statement converts the standby database to the production database, creates a new resetlogs branch, and opens the database. However, because this statement will cause any un-applied redo in the standby redo log to be lost, Oracle recommends you only use the failover procedure described in the above steps to perform a failover.
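After the failover completes, a simple way to confirm the role change from SQL*Plus (standard V$DATABASE columns) is:

SQL> SELECT NAME, DATABASE_ROLE, OPEN_MODE FROM V$DATABASE;

On the former standby, DATABASE_ROLE should now report PRIMARY.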

RAC Build on Solaris : Third Phase

Posted by Sagar Patil

Oracle 10g R2 RAC Installation for the PROD Nodes:
Step-by-step instructions for installing Oracle 10g R2 RAC. The procedures guide you through installing two nodes (prodracsrv1 and prodracsrv2) and adding them to the existing RAC cluster (configuring failover).

10g RAC Installation (Part-II Clusterware & Database Installation)

1. Install Oracle Clusterware

Mount the Clusterware DVD on prodracsrv1 and, as the oracle user, run runInstaller.

1. Welcome: Click on Next.

2. Specify Inventory directory and credentials:

o Enter the full path of the inventory directory:

/u01/app/oracle/oraInventory.

o Specify Operating System group name: oinstall.

3. Specify Home Details:

o Name: OraCrs10g_home

o /u01/app/oracle/product/10.2.0/crs_1

4. Product-Specific Prerequisite Checks:

o Ignore the warning on physical memory requirement.

5. Specify Cluster Configuration: Click on Add.

o Public Node Name: prodracsrv2.mycorpdomain.com

o Private Node Name: prodracsrv2-priv.mycorpdomain.com

o Virtual Host Name: prodracsrv2-vip.mycorpdomain.com

6. Specify Network Interface Usage:

o Interface Name: eth0

o Subnet: 192.168.2.0

o Interface Type: Public

o Interface Name: eth1

o Subnet: 10.10.10.0

o Interface Type: Private

7. Specify Oracle Cluster Registry (OCR) Location: Select External Redundancy.

For simplicity, here you will not mirror the OCR. In a production environment, you may want to consider multiplexing the OCR for higher redundancy.

o Specify OCR Location: /u01/ocr_config

8. Specify Voting Disk Location: Select External Redundancy.

Similarly, for simplicity, we have chosen not to mirror the Voting Disk.

o Voting Disk Location: /u01/votingdisk

9. Summary: Click on Install.

10. Execute Configuration scripts: Execute the scripts below as the root user, sequentially, one at a time. Do not proceed to the next script until the current script completes.

o Execute /u01/app/oracle/oraInventory/orainstRoot.sh on prodracsrv1.

o Execute /u01/app/oracle/oraInventory/orainstRoot.sh on prodracsrv2.

o Execute /u01/app/oracle/product/10.2.0/crs_1/root.sh on prodracsrv1.

o Execute /u01/app/oracle/product/10.2.0/crs_1/root.sh on prodracsrv2.

The root.sh script on prodracsrv2 invoked the VIPCA automatically, but it failed with the error "The given interface(s), "eth0" is not public. Public interfaces should be used to configure virtual IPs." As you are using a non-routable IP address (192.168.x.x) for the public interface, the Oracle Cluster Verification Utility (CVU) could not find a suitable public interface. A workaround is to run VIPCA manually.

11. As the root user, manually invoke VIPCA on the second node.

# /u01/app/oracle/product/10.2.0/crs_1/bin/vipca

Welcome: Click on Next.

Network Interfaces: Select eth0.

Virtual IPs for cluster nodes:

o Node name: prodracsrv1

o IP Alias Name: prodracsrv1-vip

o IP address: 192.168.2.31

o Subnet Mask: 255.255.255.0

o Node name: prodracsrv2

o IP Alias Name: prodracsrv2-vip

o IP address: 192.168.2.32

o Subnet Mask: 255.255.255.0

Summary: Click on Finish.

Configuration Assistant Progress Dialog: After the configuration has completed, click on OK.

Configuration Results: Click on Exit.

Return to the Execute Configuration scripts screen on prodracsrv1 and click on OK.

Configuration Assistants: Verify that all checks are successful. The OUI does a Clusterware post-installation check at the end. If the CVU fails, correct the problem and re-run the following command as the oracle user.

prodracsrv1-> /u01/app/oracle/product/10.2.0/crs_1/bin/cluvfy stage -post crsinst -n prodracsrv1,prodracsrv2

Performing post-checks for cluster services setup

Checking node reachability...
Node reachability check passed from node "prodracsrv1".

Checking user equivalence...
User equivalence check passed for user "oracle".

Checking Cluster manager integrity...
Checking CSS daemon...
Daemon status check passed for "CSS daemon".
Cluster manager integrity check passed.

Checking cluster integrity...
Cluster integrity check passed

Checking OCR integrity...
Checking the absence of a non-clustered configuration...
All nodes free of non-clustered, local-only configurations.
Uniqueness check for OCR device passed.
Checking the version of OCR...
OCR of correct Version "2" exists.
Checking data integrity of OCR...
Data integrity check for OCR passed.
OCR integrity check passed.

Checking CRS integrity...
Checking daemon liveness...
Liveness check passed for "CRS daemon".
Checking daemon liveness...
Liveness check passed for "CSS daemon".
Checking daemon liveness...
Liveness check passed for "EVM daemon".
Checking CRS health...
CRS health check passed.
CRS integrity check passed.

Checking node application existence...
Checking existence of VIP node application (required)
Check passed.
Checking existence of ONS node application (optional)
Check passed.
Checking existence of GSD node application (optional)
Check passed.

Post-check for cluster services setup was successful.

End of Installation: Click on Exit.

2. Install Oracle Database 10g Release 2

After mounting database 10g R2 DVD run the installer

prodracsrv1-> ./runInstaller

1. Welcome: Click on Next.

2. Select Installation Type:

o Select Enterprise Edition.

3. Specify Home Details:

o Name: OraDb10g_home1

o Path: /u01/app/oracle/product/10.2.0/db_1

4. Specify Hardware Cluster Installation Mode:

o Select the "Cluster Install" option and make sure both RAC nodes are selected, then click the "Next" button.

Wait while the prerequisite checks are done. If you have any failures, correct them and retry the tests before clicking the "Next" button.

7. Select the “Install database Software only” option, then click the “Next” button.

8. On the “Summary” screen, click the “Install” button to continue.

9. Wait while the database software installs.

10. Once the installation is complete, wait while the configuration assistants run.

11. Execute the “root.sh” scripts on both nodes, as instructed on the “Execute Configuration scripts” screen, then click the “OK” button.

12. When the installation is complete, click the “Exit” button to leave the installer.

Adding to the cluster

RAC Physical Standby for a RAC Primary

Overview
Task 1: Gather Files and Perform Back Up
Task 2: Configure Oracle Net Services on the Standby
Task 3: Create the Standby Instances and Database
Task 4: Configure the Primary Database for Data Guard
Task 5: Verify Data Guard Configuration

The database unique name of the RAC database is DEVDB. The instance names of the two RAC instances are DEVDB1 (on node DEVDB_tempracsrv3) and DEVDB2 (on node DEVDB_tempracsrv4).