Friday, September 14, 2012

Adding a node to an existing Oracle RAC cluster on 11.2.0.2 Grid Infrastructure



Environment:

SUSE Linux Enterprise Server 10 (x86_64) ; VERSION = 10 ; PATCHLEVEL = 2
Oracle Grid Infrastructure 11.2.0.2
Oracle Database Server 11.2.0.2
Enterprise Manager Agent 10.2.0.3

The existing cluster has seven nodes (racprd01, racprd02, racprd03, racprd04, racprd05, racprd08, racprd09). The task is to add an eighth node, racprd10, to the cluster.

I will use one of the existing nodes (racprd05) as the reference node; the new node is racprd10.


Reference Oracle Documentation:

Oracle® Clusterware Administration and Deployment Guide - 11g Release 2 (11.2)



Prerequisite Steps for Adding Cluster Nodes:

Make physical connections.

Connect the nodes' hardware to the network infrastructure of your cluster. This includes establishing electrical connections, configuring network interconnects, configuring shared disk subsystem connections, and so on.
This was done by the system engineers along with datacenter support team.



Install the operating system.

Install a cloned image of the operating system that matches the operating system on the other nodes in your cluster. This includes installing required service patches, updates, and drivers.

I’m going to use one of the existing nodes (racprd05) as the reference (peer) node for the new racprd10 server, so I asked the system engineers to clone racprd10 from racprd05. I chose racprd05 as the reference node because its hardware is the same model as the server I’m adding.

Also, we use NFS as the cluster file system, so all shared volumes mounted on the existing nodes were mounted on the new server with the same mount options. This includes the OCR, the voting disks, and the shared storage for the oradata, redo, temp, and archive volumes.

The binaries stay on the local disk of each server; apart from the software binaries, everything else is shared and mounted on all the servers.
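As a concrete illustration, an NFS mount for Oracle datafiles might look like the /etc/fstab entry below. The filer name and paths are hypothetical, and the options shown are only typical Oracle recommendations for NFS on Linux; copy the exact options from an existing node so every node mounts the volumes identically.

```
# Illustrative entry only (hypothetical filer/paths); match the options on the existing nodes
nfsfiler:/vol/oradata  /oradata  nfs  rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0  0 0
```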



Create Oracle users.

Since the new server was cloned from an existing server in the cluster, the ‘oracle’ user and the required groups came over from the source server. The UIDs and GIDs were the same, which is a requirement.
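A quick sanity check of this requirement might look like the sketch below. The id strings are illustrative samples; on a live cluster you would capture them with `id oracle` locally and `ssh racprd10 id oracle`.

```shell
#!/bin/sh
# Sketch: confirm the oracle user's UID and GID are identical on both nodes.
ref_id="uid=100(oracle) gid=100(dba)"   # sample output of `id oracle` on racprd05
new_id="uid=100(oracle) gid=100(dba)"   # sample output of `id oracle` on racprd10
if [ "$ref_id" = "$new_id" ]; then
  echo "oracle UID/GID match"
else
  echo "oracle UID/GID MISMATCH - fix before proceeding" >&2
  exit 1
fi
```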


Ensure that SSH is configured on the node.

Use the steps from the link below to manually add the new node to the SSH configuration. This includes generating the RSA/DSA keys, adding all keys from all servers to a common authorized_keys file, copying that file to all the servers, and enabling passwordless SSH between the servers.


The above link covers manually setting up SSH for all nodes in a new RAC cluster; here we only need the steps required to bring the new node into the existing SSH configuration.

At a high level,

Copy the authorized_keys file from an existing node to the new node.
Generate RSA/DSA keys on the new node.
Append the newly generated public keys to the authorized_keys file obtained from the existing node. At this point, the new node’s authorized_keys file has keys from all the existing nodes and the new node.
Copy this authorized_keys file from the new node to all the existing nodes, so that every node’s authorized_keys file has keys from every node participating in the cluster.
Verify SSH user equivalence (passwordless SSH) from the new node to all the other nodes, and from all the nodes to the new node. At this point you can SSH between any pair of nodes in the cluster without a password.
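The steps above can be sketched roughly as follows. SSHDIR defaults to the oracle user's ~/.ssh, and the scp step is shown as a comment because it must be repeated once per node:

```shell
#!/bin/sh
# Rough sketch of the SSH key steps on the new node (run as the oracle user).
SSHDIR=${SSHDIR:-$HOME/.ssh}
mkdir -p "$SSHDIR" && chmod 700 "$SSHDIR"
# Generate an RSA key with no passphrase (repeat with -t dsa if DSA keys are used):
[ -f "$SSHDIR/id_rsa" ] || ssh-keygen -q -t rsa -N "" -f "$SSHDIR/id_rsa"
# Append the new public key to the authorized_keys copied over from an existing node:
cat "$SSHDIR/id_rsa.pub" >> "$SSHDIR/authorized_keys"
chmod 600 "$SSHDIR/authorized_keys"
# Copy the merged file back to every existing node (repeat for each node):
#   scp "$SSHDIR/authorized_keys" racprd05:.ssh/authorized_keys
echo "authorized_keys entries: $(wc -l < "$SSHDIR/authorized_keys")"
```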



Verify the hardware and operating system installations with the Cluster Verification Utility (CVU).

Run the following commands to verify that the nodes you want to add are reachable by other nodes in the cluster. This command is used to verify user equivalence to all given nodes from the local node, node connectivity among all of the given nodes, accessibility to shared storage from all of the given nodes, and so on.

From GI_HOME/bin on an existing node, run

$ cluvfy stage -post hwos -n {node_list | all} [-verbose]


login as: oracle
Using keyboard-interactive authentication.
Password:
Last login: Thu Jul 19 15:11:32 2012 from racprd10.imycompany.com
racprd05 | ORA1020 | /export/oracle
> . oraenv
ORACLE_SID = [ORA1020] ? CRS
The Oracle base remains unchanged with value /export/oracle
racprd05 | CRS | /export/oracle
> export PATH=$ORACLE_HOME/bin:$PATH
racprd05 | CRS | /export/oracle
> which cluvfy
/export/11.2.0.2/bin/cluvfy
racprd05 | CRS | /export/oracle
> cluvfy stage -post hwos -n racprd10 -verbose

Performing post-checks for hardware and operating system setup

Checking node reachability...

Check: Node reachability from node "racprd05"
  Destination Node                      Reachable?
  ------------------------------------  ------------------------
  racprd10                             yes
Result: Node reachability check passed from node "racprd05"


Checking user equivalence...

Check: User equivalence for user "oracle"
  Node Name                             Comment
  ------------------------------------  ------------------------
  racprd10                             passed
Result: User equivalence check passed for user "oracle"

Checking node connectivity...

Checking hosts config file...
  Node Name     Status                    Comment
  ------------  ------------------------  ------------------------
  racprd10     passed

Verification of the hosts config file successful


Interface information for node "racprd10"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 bond0  10.120.20.99    10.120.20.0     0.0.0.0         10.120.20.1     00:5B:21:42:6D:F0 1500
 bond1  192.168.20.103  192.168.20.0    0.0.0.0         10.120.20.1     00:5B:21:42:6C:84 1500
 bond2  10.120.20.109   10.120.20.0     0.0.0.0         10.120.20.1     00:5B:21:42:6C:81 1500


Check: Node connectivity of subnet "10.120.20.0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  racprd10[10.120.20.99]         racprd10[10.120.20.109]        yes
Result: Node connectivity passed for subnet "10.120.20.0" with node(s) racprd10


Check: TCP connectivity of subnet "10.120.20.0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  racprd05:10.120.20.146         racprd10:10.120.20.99          passed
  racprd05:10.120.20.146         racprd10:10.120.20.109         passed
Result: TCP connectivity check passed for subnet "10.120.20.0"


Check: Node connectivity of subnet "192.168.20.0"
Result: Node connectivity passed for subnet "192.168.20.0" with node(s) racprd10


Check: TCP connectivity of subnet "192.168.20.0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  racprd05:10.120.20.146         racprd10:192.168.20.103        passed
Result: TCP connectivity check passed for subnet "192.168.20.0"


Interfaces found on subnet "10.120.20.0" that are likely candidates for VIP are:
racprd10 bond0:10.120.20.99

Interfaces found on subnet "10.120.20.0" that are likely candidates for VIP are:
racprd10 bond2:10.120.20.109

Interfaces found on subnet "192.168.20.0" that are likely candidates for a private interconnect are:
racprd10 bond1:192.168.20.103

Result: Node connectivity check passed


Checking for multiple users with UID value 0
Result: Check for multiple users with UID value 0 passed
Check: Time zone consistency
Result: Time zone consistency check passed

Checking shared storage accessibility...

  Disk                                  Sharing Nodes (1 in count)
  ------------------------------------  ------------------------
  /dev/sda                              racprd10

  Disk                                  Sharing Nodes (1 in count)
  ------------------------------------  ------------------------
  /dev/sdb                              racprd10


Shared storage check was successful on nodes "racprd10"

Post-check for hardware and operating system setup was successful.
racprd05 | CRS | /export/oracle
> 



Next is the peer check.

From GI_HOME/bin on an existing node, run the CVU command to obtain a detailed comparison of the properties of the reference node with all of the other nodes that are part of your current cluster environment.

$ cluvfy comp peer [-refnode ref_node] -n node_list
[-orainv orainventory_group] [-osdba osdba_group] [-verbose]


> cluvfy comp peer -refnode racprd05 -n racprd10 -orainv dba -osdba dba -verbose

Verifying peer compatibility

Checking peer compatibility...

Compatibility check: Physical memory [reference node: racprd05]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  racprd10     70.7397GB (7.4175932E7KB)  70.7397GB (7.4175932E7KB)  matched
Physical memory check passed

Compatibility check: Available memory [reference node: racprd05]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  racprd10     70.6032GB (7.4032804E7KB)  58.0346GB (6.08537E7KB)   mismatched
Available memory check failed

Compatibility check: Swap space [reference node: racprd05]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  racprd10     24GB (2.5165812E7KB)      24GB (2.5165812E7KB)      matched
Swap space check passed

Compatibility check: Free disk space for "/tmp" [reference node: racprd05]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  racprd10     1.5684GB (1644544.0KB)    1.8447GB (1934336.0KB)    mismatched
Free disk space check failed

Compatibility check: User existence for "oracle" [reference node: racprd05]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  racprd10     oracle(100)               oracle(100)               matched
User existence for "oracle" check passed

Compatibility check: Group existence for "dba" [reference node: racprd05]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  racprd10     dba(100)                  dba(100)                  matched
Group existence for "dba" check passed

Compatibility check: Group membership for "oracle" in "dba (Primary)" [reference node: racprd05]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  racprd10     yes                       yes                       matched
Group membership for "oracle" in "dba (Primary)" check passed

Compatibility check: Run level [reference node: racprd05]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  racprd10     3                         3                         matched
Run level check passed

Compatibility check: System architecture [reference node: racprd05]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  racprd10     x86_64                    x86_64                    matched
System architecture check passed

Compatibility check: Kernel version [reference node: racprd05]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  racprd10     2.6.16.60-0.42.10-smp     2.6.16.60-0.42.10-smp     matched
Kernel version check passed

Compatibility check: Kernel param "semmsl" [reference node: racprd05]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  racprd10     250                       250                       matched
Kernel param "semmsl" check passed

Compatibility check: Kernel param "semmns" [reference node: racprd05]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  racprd10     32000                     32000                     matched
Kernel param "semmns" check passed

Compatibility check: Kernel param "semopm" [reference node: racprd05]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  racprd10     100                       100                       matched
Kernel param "semopm" check passed

Compatibility check: Kernel param "semmni" [reference node: racprd05]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  racprd10     128                       128                       matched
Kernel param "semmni" check passed

Compatibility check: Kernel param "shmmax" [reference node: racprd05]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  racprd10     17179869184               17179869184               matched
Kernel param "shmmax" check passed

Compatibility check: Kernel param "shmmni" [reference node: racprd05]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  racprd10     4096                      4096                      matched
Kernel param "shmmni" check passed

Compatibility check: Kernel param "shmall" [reference node: racprd05]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  racprd10     8388608                   8388608                   matched
Kernel param "shmall" check passed

Compatibility check: Kernel param "file-max" [reference node: racprd05]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  racprd10     6815744                   6815744                   matched
Kernel param "file-max" check passed

Compatibility check: Kernel param "ip_local_port_range" [reference node: racprd05]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  racprd10     9000 65500                9000 65500                matched
Kernel param "ip_local_port_range" check passed

Compatibility check: Kernel param "rmem_default" [reference node: racprd05]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  racprd10     1048576                   1048576                   matched
Kernel param "rmem_default" check passed

Compatibility check: Kernel param "rmem_max" [reference node: racprd05]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  racprd10     4194304                   4194304                   matched
Kernel param "rmem_max" check passed

Compatibility check: Kernel param "wmem_default" [reference node: racprd05]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  racprd10     262144                    262144                    matched
Kernel param "wmem_default" check passed

Compatibility check: Kernel param "wmem_max" [reference node: racprd05]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  racprd10     1048576                   1048576                   matched
Kernel param "wmem_max" check passed

Compatibility check: Kernel param "aio-max-nr" [reference node: racprd05]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  racprd10     1048576                   1048576                   matched
Kernel param "aio-max-nr" check passed

Compatibility check: Package existence for "make" [reference node: racprd05]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  racprd10     make-3.80-202.2           make-3.80-202.2           matched
Package existence for "make" check passed

Compatibility check: Package existence for "libaio" [reference node: racprd05]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  racprd10     libaio-0.3.104-14.2       libaio-0.3.104-14.2       matched
Package existence for "libaio" check passed

Compatibility check: Package existence for "binutils" [reference node: racprd05]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  racprd10     binutils-2.16.91.0.5-23.31  binutils-2.16.91.0.5-23.31  matched
Package existence for "binutils" check passed

Compatibility check: Package existence for "gcc (x86_64)" [reference node: racprd05]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  racprd10     gcc-4.1.2_20070115-0.22 (x86_64)  gcc-4.1.2_20070115-0.22 (x86_64)  matched
Package existence for "gcc (x86_64)" check passed

Compatibility check: Package existence for "gcc-c++ (x86_64)" [reference node: racprd05]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  racprd10     gcc-c++-4.1.2_20070115-0.22 (x86_64)  gcc-c++-4.1.2_20070115-0.22 (x86_64)  matched
Package existence for "gcc-c++ (x86_64)" check passed

Compatibility check: Package existence for "compat-libstdc++ (x86_64)" [reference node: racprd05]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  racprd10     compat-libstdc++-5.0.7-22.2 (x86_64)  compat-libstdc++-5.0.7-22.2 (x86_64)  matched
Package existence for "compat-libstdc++ (x86_64)" check passed

Compatibility check: Package existence for "glibc (x86_64)" [reference node: racprd05]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  racprd10     glibc-2.4-31.63.3 (x86_64)  glibc-2.4-31.63.3 (x86_64)  matched
Package existence for "glibc (x86_64)" check passed

Compatibility check: Package existence for "glibc-devel" [reference node: racprd05]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  racprd10     glibc-devel-2.4-31.63.3   glibc-devel-2.4-31.63.3   matched
Package existence for "glibc-devel" check passed

Compatibility check: Package existence for "ksh" [reference node: racprd05]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  racprd10     ksh-93s-59.7              ksh-93s-59.7              matched
Package existence for "ksh" check passed

Compatibility check: Package existence for "libaio-devel" [reference node: racprd05]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  racprd10     libaio-devel-0.3.104-14.2  libaio-devel-0.3.104-14.2  matched
Package existence for "libaio-devel" check passed

Compatibility check: Package existence for "libelf" [reference node: racprd05]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  racprd10     libelf-0.8.5-47.2         libelf-0.8.5-47.2         matched
Package existence for "libelf" check passed

Compatibility check: Package existence for "libgcc (x86_64)" [reference node: racprd05]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  racprd10     libgcc-4.1.2_20070115-0.22 (x86_64)  libgcc-4.1.2_20070115-0.22 (x86_64)  matched
Package existence for "libgcc (x86_64)" check passed

Compatibility check: Package existence for "libstdc++ (x86_64)" [reference node: racprd05]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  racprd10     libstdc++-4.1.2_20070115-0.22 (x86_64)  libstdc++-4.1.2_20070115-0.22 (x86_64)  matched
Package existence for "libstdc++ (x86_64)" check passed

Compatibility check: Package existence for "libstdc++-devel (x86_64)" [reference node: racprd05]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  racprd10     libstdc++-devel-4.1.2_20070115-0.22 (x86_64)  libstdc++-devel-4.1.2_20070115-0.22 (x86_64)  matched
Package existence for "libstdc++-devel (x86_64)" check passed

Compatibility check: Package existence for "sysstat" [reference node: racprd05]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  racprd10     sysstat-8.0.4-1.4         sysstat-8.0.4-1.4         matched
Package existence for "sysstat" check passed

Compatibility check: Package existence for "libcap" [reference node: racprd05]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  racprd10     libcap-1.92-499.4         libcap-1.92-499.4         matched
Package existence for "libcap" check passed

Verification of peer compatibility was unsuccessful.
Checks did not pass for the following node(s):
        racprd10
racprd05 | CRS | /export/oracle
> 


Since the new node’s OS was cloned from an existing node, most of the OS packages and kernel parameters were the same and matched. The only mismatches were the available memory and the free space in /tmp.
Both are ignorable, because I know there is enough memory and free space on the server to support the node addition.
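Before waiving them, both items can be confirmed quickly on the new node with standard Linux tools:

```shell
#!/bin/sh
# Confirm the node has enough memory and /tmp space before ignoring the two failed checks.
grep MemTotal /proc/meminfo                              # physical memory in kB
df -Pk /tmp | awk 'NR==2 {print $4 " kB free in /tmp"}'  # free space in /tmp
```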



The prechecks are complete; now I’ll proceed to add the node to the cluster.

Since the new server was cloned from an existing server, on the new server I backed up and removed the existing Grid home and Oracle home. These will be installed fresh as part of the node addition.
If the existing cloned binaries were to be reused, several cleanups would be needed, since the cloned homes carry entries and logs that reflect the source server. Because I wanted a clean install, I removed all the existing homes.
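A backup-then-remove of a cloned home might be sketched as below. The home and backup paths are hypothetical defaults (the Grid home on this cluster appears to be /export/11.2.0.2), so substitute your own before running anything like this:

```shell
#!/bin/sh
# Sketch: back up a cloned home to a tarball, then remove it so the node
# addition lays down fresh binaries. Paths are illustrative defaults.
GRID_HOME=${GRID_HOME:-/export/11.2.0.2}
BACKUP_DIR=${BACKUP_DIR:-/export/install}
if [ -d "$GRID_HOME" ]; then
  mkdir -p "$BACKUP_DIR"
  tar -czf "$BACKUP_DIR/$(basename "$GRID_HOME")_clone_backup.tgz" \
      -C "$(dirname "$GRID_HOME")" "$(basename "$GRID_HOME")"
  rm -rf "$GRID_HOME"
  echo "backed up and removed $GRID_HOME"
else
  echo "no home found at $GRID_HOME (nothing to do)"
fi
```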

Verify the integrity of the cluster and the new node

$ cluvfy stage -pre nodeadd -n node3 [-fixup [-fixupdir fixup_dir]] [-verbose]


> ./cluvfy stage -pre nodeadd -n racprd10 -fixup -fixupdir /export/install/nels/nodeadd/fixup -verbose

Performing pre-checks for node addition

Checking node reachability...

Check: Node reachability from node "racprd05"
  Destination Node                      Reachable?
  ------------------------------------  ------------------------
  racprd10                             yes
Result: Node reachability check passed from node "racprd05"


Checking user equivalence...

Check: User equivalence for user "oracle"
  Node Name                             Comment
  ------------------------------------  ------------------------
  racprd10                             passed
Result: User equivalence check passed for user "oracle"

Checking node connectivity...

Checking hosts config file...
  Node Name     Status                    Comment
  ------------  ------------------------  ------------------------
  racprd01     passed
  racprd10     passed
  racprd09     passed
  racprd08     passed
  racprd05     passed
  racprd04     passed
  racprd02     passed

Verification of the hosts config file successful


Interface information for node "racprd01"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 bond0  10.120.20.91    10.120.20.0     0.0.0.0         10.120.20.1     00:18:22:0B:7E:5C 1500
 bond0  10.120.20.111   10.120.20.0     0.0.0.0         10.120.20.1     00:18:22:0B:7E:5C 1500
 bond1  192.168.20.91   192.168.20.0    0.0.0.0         10.120.20.1     00:18:22:0E:4B:70 1500
 bond1  169.254.162.181 169.254.0.0     0.0.0.0         10.120.20.1     00:18:22:0E:4B:70 1500
 bond2  10.120.20.101   10.120.20.0     0.0.0.0         10.120.20.1     00:18:22:0B:7E:5D 1500


Interface information for node "racprd02"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 bond0  10.120.20.92    10.120.20.0     0.0.0.0         10.120.20.1     00:18:22:0B:7D:9E 1500
 bond0  10.120.20.112   10.120.20.0     0.0.0.0         10.120.20.1     00:18:22:0B:7D:9E 1500
 bond1  192.168.20.92   192.168.20.0    0.0.0.0         10.120.20.1     00:18:22:0D:8F:54 1500
 bond1  169.254.151.45  169.254.0.0     0.0.0.0         10.120.20.1     00:18:22:0D:8F:54 1500
 bond2  10.120.20.102   10.120.20.0     0.0.0.0         10.120.20.1     00:18:22:0B:7D:9F 1500


Interface information for node "racprd04"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 bond0  10.120.20.125   10.120.20.0     0.0.0.0         10.120.20.1     00:5B:21:46:46:60 1500
 bond0  10.120.20.127   10.120.20.0     0.0.0.0         10.120.20.1     00:5B:21:46:46:60 1500
 bond0  10.120.20.168   10.120.20.0     0.0.0.0         10.120.20.1     00:5B:21:46:46:60 1500
 bond1  192.168.20.102  192.168.20.0    0.0.0.0         10.120.20.1     00:5B:21:49:35:BC 1500
 bond1  169.254.7.188   169.254.0.0     0.0.0.0         10.120.20.1     00:5B:21:49:35:BC 1500
 bond2  10.120.20.126   10.120.20.0     0.0.0.0         10.120.20.1     00:5B:21:46:46:61 1500


Interface information for node "racprd05"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 bond0  10.120.20.146   10.120.20.0     0.0.0.0         10.120.20.1     00:5B:21:49:36:D0 1500
 bond0  10.120.20.148   10.120.20.0     0.0.0.0         10.120.20.1     00:5B:21:49:36:D0 1500
 bond1  192.168.20.101  192.168.20.0    0.0.0.0         10.120.20.1     00:5B:21:49:36:D4 1500
 bond1  169.254.83.149  169.254.0.0     0.0.0.0         10.120.20.1     00:5B:21:49:36:D4 1500
 bond2  10.120.20.147   10.120.20.0     0.0.0.0         10.120.20.1     00:5B:21:49:36:D1 1500


Interface information for node "racprd08"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 bond0  10.120.20.94    10.120.20.0     0.0.0.0         10.120.20.1     00:18:22:76:08:29 1500
 bond0  10.120.20.119   10.120.20.0     0.0.0.0         10.120.20.1     00:18:22:76:08:29 1500
 bond0  10.120.20.166   10.120.20.0     0.0.0.0         10.120.20.1     00:18:22:76:08:29 1500
 bond1  192.168.20.99   192.168.20.0    0.0.0.0         10.120.20.1     00:18:22:76:08:2B 1500
 bond1  169.254.213.9   169.254.0.0     0.0.0.0         10.120.20.1     00:18:22:76:08:2B 1500
 bond2  10.120.20.104   10.120.20.0     0.0.0.0         10.120.20.1     00:18:22:76:08:28 1500


Interface information for node "racprd09"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 bond0  10.120.20.90    10.120.20.0     0.0.0.0         10.120.20.1     00:18:22:85:29:24 1500
 bond0  10.120.20.120   10.120.20.0     0.0.0.0         10.120.20.1     00:18:22:85:29:24 1500
 bond1  192.168.20.100  192.168.20.0    0.0.0.0         10.120.20.1     00:18:22:85:29:26 1500
 bond1  169.254.58.183  169.254.0.0     0.0.0.0         10.120.20.1     00:18:22:85:29:26 1500
 bond2  10.120.20.114   10.120.20.0     0.0.0.0         10.120.20.1     00:5B:21:22:2A:E0 1500


Interface information for node "racprd10"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 bond0  10.120.20.99    10.120.20.0     0.0.0.0         10.120.20.1     00:5B:21:42:6D:F0 1500
 bond1  192.168.20.103  192.168.20.0    0.0.0.0         10.120.20.1     00:5B:21:42:6C:84 1500
 bond2  10.120.20.109   10.120.20.0     0.0.0.0         10.120.20.1     00:5B:21:42:6C:81 1500


Check: Node connectivity for interface "bond0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  racprd01[10.120.20.91]         racprd01[10.120.20.111]        yes
  racprd01[10.120.20.91]         racprd02[10.120.20.92]         yes
  racprd01[10.120.20.91]         racprd02[10.120.20.112]        yes
  racprd01[10.120.20.91]         racprd04[10.120.20.125]        yes
  racprd01[10.120.20.91]         racprd04[10.120.20.127]        yes
  racprd01[10.120.20.91]         racprd04[10.120.20.168]        yes
  racprd01[10.120.20.91]         racprd05[10.120.20.146]        yes
  racprd01[10.120.20.91]         racprd05[10.120.20.148]        yes
  racprd01[10.120.20.91]         racprd08[10.120.20.94]         yes
  racprd01[10.120.20.91]         racprd08[10.120.20.119]        yes
  racprd01[10.120.20.91]         racprd08[10.120.20.166]        yes
  racprd01[10.120.20.91]         racprd09[10.120.20.90]         yes
  racprd01[10.120.20.91]         racprd09[10.120.20.120]        yes
  racprd01[10.120.20.91]         racprd10[10.120.20.99]         yes
  racprd01[10.120.20.111]        racprd02[10.120.20.92]         yes
  racprd01[10.120.20.111]        racprd02[10.120.20.112]        yes
  racprd01[10.120.20.111]        racprd04[10.120.20.125]        yes
  racprd01[10.120.20.111]        racprd04[10.120.20.127]        yes
  racprd01[10.120.20.111]        racprd04[10.120.20.168]        yes
  racprd01[10.120.20.111]        racprd05[10.120.20.146]        yes
  racprd01[10.120.20.111]        racprd05[10.120.20.148]        yes
  racprd01[10.120.20.111]        racprd08[10.120.20.94]         yes
  racprd01[10.120.20.111]        racprd08[10.120.20.119]        yes
  racprd01[10.120.20.111]        racprd08[10.120.20.166]        yes
  racprd01[10.120.20.111]        racprd09[10.120.20.90]         yes
  racprd01[10.120.20.111]        racprd09[10.120.20.120]        yes
  racprd01[10.120.20.111]        racprd10[10.120.20.99]         yes
  racprd02[10.120.20.92]         racprd02[10.120.20.112]        yes
  racprd02[10.120.20.92]         racprd04[10.120.20.125]        yes
  racprd02[10.120.20.92]         racprd04[10.120.20.127]        yes
  racprd02[10.120.20.92]         racprd04[10.120.20.168]        yes
  racprd02[10.120.20.92]         racprd05[10.120.20.146]        yes
  racprd02[10.120.20.92]         racprd05[10.120.20.148]        yes
  racprd02[10.120.20.92]         racprd08[10.120.20.94]         yes
  racprd02[10.120.20.92]         racprd08[10.120.20.119]        yes
  racprd02[10.120.20.92]         racprd08[10.120.20.166]        yes
  racprd02[10.120.20.92]         racprd09[10.120.20.90]         yes
  racprd02[10.120.20.92]         racprd09[10.120.20.120]        yes
  racprd02[10.120.20.92]         racprd10[10.120.20.99]         yes
  racprd02[10.120.20.112]        racprd04[10.120.20.125]        yes
  racprd02[10.120.20.112]        racprd04[10.120.20.127]        yes
  racprd02[10.120.20.112]        racprd04[10.120.20.168]        yes
  racprd02[10.120.20.112]        racprd05[10.120.20.146]        yes
  racprd02[10.120.20.112]        racprd05[10.120.20.148]        yes
  racprd02[10.120.20.112]        racprd08[10.120.20.94]         yes
  racprd02[10.120.20.112]        racprd08[10.120.20.119]        yes
  racprd02[10.120.20.112]        racprd08[10.120.20.166]        yes
  racprd02[10.120.20.112]        racprd09[10.120.20.90]         yes
  racprd02[10.120.20.112]        racprd09[10.120.20.120]        yes
  racprd02[10.120.20.112]        racprd10[10.120.20.99]         yes
  racprd04[10.120.20.125]        racprd04[10.120.20.127]        yes
  racprd04[10.120.20.125]        racprd04[10.120.20.168]        yes
  racprd04[10.120.20.125]        racprd05[10.120.20.146]        yes
  racprd04[10.120.20.125]        racprd05[10.120.20.148]        yes
  racprd04[10.120.20.125]        racprd08[10.120.20.94]         yes
  racprd04[10.120.20.125]        racprd08[10.120.20.119]        yes
  racprd04[10.120.20.125]        racprd08[10.120.20.166]        yes
  racprd04[10.120.20.125]        racprd09[10.120.20.90]         yes
  racprd04[10.120.20.125]        racprd09[10.120.20.120]        yes
  racprd04[10.120.20.125]        racprd10[10.120.20.99]         yes
  racprd04[10.120.20.127]        racprd04[10.120.20.168]        yes
  racprd04[10.120.20.127]        racprd05[10.120.20.146]        yes
  racprd04[10.120.20.127]        racprd05[10.120.20.148]        yes
  racprd04[10.120.20.127]        racprd08[10.120.20.94]         yes
  racprd04[10.120.20.127]        racprd08[10.120.20.119]        yes
  racprd04[10.120.20.127]        racprd08[10.120.20.166]        yes
  racprd04[10.120.20.127]        racprd09[10.120.20.90]         yes
  racprd04[10.120.20.127]        racprd09[10.120.20.120]        yes
  racprd04[10.120.20.127]        racprd10[10.120.20.99]         yes
  racprd04[10.120.20.168]        racprd05[10.120.20.146]        yes
  racprd04[10.120.20.168]        racprd05[10.120.20.148]        yes
  racprd04[10.120.20.168]        racprd08[10.120.20.94]         yes
  racprd04[10.120.20.168]        racprd08[10.120.20.119]        yes
  racprd04[10.120.20.168]        racprd08[10.120.20.166]        yes
  racprd04[10.120.20.168]        racprd09[10.120.20.90]         yes
  racprd04[10.120.20.168]        racprd09[10.120.20.120]        yes
  racprd04[10.120.20.168]        racprd10[10.120.20.99]         yes
  racprd05[10.120.20.146]        racprd05[10.120.20.148]        yes
  racprd05[10.120.20.146]        racprd08[10.120.20.94]         yes
  racprd05[10.120.20.146]        racprd08[10.120.20.119]        yes
  racprd05[10.120.20.146]        racprd08[10.120.20.166]        yes
  racprd05[10.120.20.146]        racprd09[10.120.20.90]         yes
  racprd05[10.120.20.146]        racprd09[10.120.20.120]        yes
  racprd05[10.120.20.146]        racprd10[10.120.20.99]         yes
  racprd05[10.120.20.148]        racprd08[10.120.20.94]         yes
  racprd05[10.120.20.148]        racprd08[10.120.20.119]        yes
  racprd05[10.120.20.148]        racprd08[10.120.20.166]        yes
  racprd05[10.120.20.148]        racprd09[10.120.20.90]         yes
  racprd05[10.120.20.148]        racprd09[10.120.20.120]        yes
  racprd05[10.120.20.148]        racprd10[10.120.20.99]         yes
  racprd08[10.120.20.94]         racprd08[10.120.20.119]        yes
  racprd08[10.120.20.94]         racprd08[10.120.20.166]        yes
  racprd08[10.120.20.94]         racprd09[10.120.20.90]         yes
  racprd08[10.120.20.94]         racprd09[10.120.20.120]        yes
  racprd08[10.120.20.94]         racprd10[10.120.20.99]         yes
  racprd08[10.120.20.119]        racprd08[10.120.20.166]        yes
  racprd08[10.120.20.119]        racprd09[10.120.20.90]         yes
  racprd08[10.120.20.119]        racprd09[10.120.20.120]        yes
  racprd08[10.120.20.119]        racprd10[10.120.20.99]         yes
  racprd08[10.120.20.166]        racprd09[10.120.20.90]         yes
  racprd08[10.120.20.166]        racprd09[10.120.20.120]        yes
  racprd08[10.120.20.166]        racprd10[10.120.20.99]         yes
  racprd09[10.120.20.90]         racprd09[10.120.20.120]        yes
  racprd09[10.120.20.90]         racprd10[10.120.20.99]         yes
  racprd09[10.120.20.120]        racprd10[10.120.20.99]         yes
Result: Node connectivity passed for interface "bond0"

Result: Node connectivity check passed


Checking CRS integrity...
The Oracle Clusterware is healthy on node "racprd01"
The Oracle Clusterware is healthy on node "racprd02"
The Oracle Clusterware is healthy on node "racprd04"
The Oracle Clusterware is healthy on node "racprd05"
The Oracle Clusterware is healthy on node "racprd08"
The Oracle Clusterware is healthy on node "racprd09"

CRS integrity check passed

Checking shared resources...

Checking CRS home location...

ERROR:
PRVF-4864 : Path location check failed for: "/export/11.2.0.2"
Result: Shared resources check for node addition failed


Checking node connectivity...

Checking hosts config file...
  Node Name     Status                    Comment
  ------------  ------------------------  ------------------------
  racprd01     passed
  racprd10     passed
  racprd09     passed
  racprd08     passed
  racprd05     passed
  racprd04     passed
  racprd02     passed

Verification of the hosts config file successful


Interface information for node "racprd01"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 bond0  10.120.20.91    10.120.20.0     0.0.0.0         10.120.20.1     00:18:22:0B:7E:5C 1500
 bond0  10.120.20.111   10.120.20.0     0.0.0.0         10.120.20.1     00:18:22:0B:7E:5C 1500
 bond1  192.168.20.91   192.168.20.0    0.0.0.0         10.120.20.1     00:18:22:0E:4B:70 1500
 bond1  169.254.162.181 169.254.0.0     0.0.0.0         10.120.20.1     00:18:22:0E:4B:70 1500
 bond2  10.120.20.101   10.120.20.0     0.0.0.0         10.120.20.1     00:18:22:0B:7E:5D 1500


Interface information for node "racprd02"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 bond0  10.120.20.92    10.120.20.0     0.0.0.0         10.120.20.1     00:18:22:0B:7D:9E 1500
 bond0  10.120.20.112   10.120.20.0     0.0.0.0         10.120.20.1     00:18:22:0B:7D:9E 1500
 bond1  192.168.20.92   192.168.20.0    0.0.0.0         10.120.20.1     00:18:22:0D:8F:54 1500
 bond1  169.254.151.45  169.254.0.0     0.0.0.0         10.120.20.1     00:18:22:0D:8F:54 1500
 bond2  10.120.20.102   10.120.20.0     0.0.0.0         10.120.20.1     00:18:22:0B:7D:9F 1500


Interface information for node "racprd04"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 bond0  10.120.20.125   10.120.20.0     0.0.0.0         10.120.20.1     00:5B:21:46:46:60 1500
 bond0  10.120.20.127   10.120.20.0     0.0.0.0         10.120.20.1     00:5B:21:46:46:60 1500
 bond0  10.120.20.168   10.120.20.0     0.0.0.0         10.120.20.1     00:5B:21:46:46:60 1500
 bond1  192.168.20.102  192.168.20.0    0.0.0.0         10.120.20.1     00:5B:21:49:35:BC 1500
 bond1  169.254.7.188   169.254.0.0     0.0.0.0         10.120.20.1     00:5B:21:49:35:BC 1500
 bond2  10.120.20.126   10.120.20.0     0.0.0.0         10.120.20.1     00:5B:21:46:46:61 1500


Interface information for node "racprd05"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 bond0  10.120.20.146   10.120.20.0     0.0.0.0         10.120.20.1     00:5B:21:49:36:D0 1500
 bond0  10.120.20.148   10.120.20.0     0.0.0.0         10.120.20.1     00:5B:21:49:36:D0 1500
 bond1  192.168.20.101  192.168.20.0    0.0.0.0         10.120.20.1     00:5B:21:49:36:D4 1500
 bond1  169.254.83.149  169.254.0.0     0.0.0.0         10.120.20.1     00:5B:21:49:36:D4 1500
 bond2  10.120.20.147   10.120.20.0     0.0.0.0         10.120.20.1     00:5B:21:49:36:D1 1500


Interface information for node "racprd08"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 bond0  10.120.20.94    10.120.20.0     0.0.0.0         10.120.20.1     00:18:22:76:08:29 1500
 bond0  10.120.20.119   10.120.20.0     0.0.0.0         10.120.20.1     00:18:22:76:08:29 1500
 bond0  10.120.20.166   10.120.20.0     0.0.0.0         10.120.20.1     00:18:22:76:08:29 1500
 bond1  192.168.20.99   192.168.20.0    0.0.0.0         10.120.20.1     00:18:22:76:08:2B 1500
 bond1  169.254.213.9   169.254.0.0     0.0.0.0         10.120.20.1     00:18:22:76:08:2B 1500
 bond2  10.120.20.104   10.120.20.0     0.0.0.0         10.120.20.1     00:18:22:76:08:28 1500


Interface information for node "racprd09"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 bond0  10.120.20.90    10.120.20.0     0.0.0.0         10.120.20.1     00:18:22:85:29:24 1500
 bond0  10.120.20.120   10.120.20.0     0.0.0.0         10.120.20.1     00:18:22:85:29:24 1500
 bond1  192.168.20.100  192.168.20.0    0.0.0.0         10.120.20.1     00:18:22:85:29:26 1500
 bond1  169.254.58.183  169.254.0.0     0.0.0.0         10.120.20.1     00:18:22:85:29:26 1500
 bond2  10.120.20.114   10.120.20.0     0.0.0.0         10.120.20.1     00:5B:21:22:2A:E0 1500


Interface information for node "racprd10"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 bond0  10.120.20.99    10.120.20.0     0.0.0.0         10.120.20.1     00:5B:21:42:6D:F0 1500
 bond1  192.168.20.103  192.168.20.0    0.0.0.0         10.120.20.1     00:5B:21:42:6C:84 1500
 bond2  10.120.20.109   10.120.20.0     0.0.0.0         10.120.20.1     00:5B:21:42:6C:81 1500


Check: Node connectivity for interface "bond1"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  racprd01[192.168.20.91]        racprd02[192.168.20.92]        yes
  racprd01[192.168.20.91]        racprd04[192.168.20.102]       yes
  racprd01[192.168.20.91]        racprd05[192.168.20.101]       yes
  racprd01[192.168.20.91]        racprd08[192.168.20.99]        yes
  racprd01[192.168.20.91]        racprd09[192.168.20.100]       yes
  racprd01[192.168.20.91]        racprd10[192.168.20.103]       yes
  racprd02[192.168.20.92]        racprd04[192.168.20.102]       yes
  racprd02[192.168.20.92]        racprd05[192.168.20.101]       yes
  racprd02[192.168.20.92]        racprd08[192.168.20.99]        yes
  racprd02[192.168.20.92]        racprd09[192.168.20.100]       yes
  racprd02[192.168.20.92]        racprd10[192.168.20.103]       yes
  racprd04[192.168.20.102]       racprd05[192.168.20.101]       yes
  racprd04[192.168.20.102]       racprd08[192.168.20.99]        yes
  racprd04[192.168.20.102]       racprd09[192.168.20.100]       yes
  racprd04[192.168.20.102]       racprd10[192.168.20.103]       yes
  racprd05[192.168.20.101]       racprd08[192.168.20.99]        yes
  racprd05[192.168.20.101]       racprd09[192.168.20.100]       yes
  racprd05[192.168.20.101]       racprd10[192.168.20.103]       yes
  racprd08[192.168.20.99]        racprd09[192.168.20.100]       yes
  racprd08[192.168.20.99]        racprd10[192.168.20.103]       yes
  racprd09[192.168.20.100]       racprd10[192.168.20.103]       yes
Result: Node connectivity passed for interface "bond1"

Result: Node connectivity check passed


Check: Total memory
  Node Name     Available                 Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  racprd10     70.7397GB (7.4175932E7KB)  1.5GB (1572864.0KB)       passed
  racprd05     70.7397GB (7.4175932E7KB)  1.5GB (1572864.0KB)       passed
Result: Total memory check passed

Check: Available memory
  Node Name     Available                 Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  racprd10     70.5484GB (7.3975312E7KB)  50MB (51200.0KB)          passed
  racprd05     57.963GB (6.077866E7KB)   50MB (51200.0KB)          passed
Result: Available memory check passed

Check: Swap space
  Node Name     Available                 Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  racprd10     24GB (2.5165812E7KB)      16GB (1.6777216E7KB)      passed
  racprd05     24GB (2.5165812E7KB)      16GB (1.6777216E7KB)      passed
Result: Swap space check passed

Check: Free disk space for "racprd10:/tmp"
  Path              Node Name     Mount point   Available     Required      Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  /tmp              racprd10     /             1.5674GB      1GB           passed
Result: Free disk space check passed for "racprd10:/tmp"

Check: Free disk space for "racprd05:/tmp"
  Path              Node Name     Mount point   Available     Required      Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  /tmp              racprd05     /             1.8447GB      1GB           passed
Result: Free disk space check passed for "racprd05:/tmp"

Check: User existence for "oracle"
  Node Name     Status                    Comment
  ------------  ------------------------  ------------------------
  racprd10     exists(100)               passed
  racprd05     exists(100)               passed

Checking for multiple users with UID value 100
Result: Check for multiple users with UID value 100 passed
Result: User existence check passed for "oracle"

Check: Run level
  Node Name     run level                 Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  racprd10     3                         3,5                       passed
  racprd05     3                         3,5                       passed
Result: Run level check passed

Check: Hard limits for "maximum open file descriptors"
  Node Name         Type          Available     Required      Comment
  ----------------  ------------  ------------  ------------  ----------------
  racprd05         hard          65536         65536         passed
  racprd10         hard          65536         65536         passed
Result: Hard limits check passed for "maximum open file descriptors"

Check: Soft limits for "maximum open file descriptors"
  Node Name         Type          Available     Required      Comment
  ----------------  ------------  ------------  ------------  ----------------
  racprd05         soft          1024          1024          passed
  racprd10         soft          1024          1024          passed
Result: Soft limits check passed for "maximum open file descriptors"

Check: Hard limits for "maximum user processes"
  Node Name         Type          Available     Required      Comment
  ----------------  ------------  ------------  ------------  ----------------
  racprd05         hard          16384         16384         passed
  racprd10         hard          16384         16384         passed
Result: Hard limits check passed for "maximum user processes"

Check: Soft limits for "maximum user processes"
  Node Name         Type          Available     Required      Comment
  ----------------  ------------  ------------  ------------  ----------------
  racprd05         soft          2047          2047          passed
  racprd10         soft          2047          2047          passed
Result: Soft limits check passed for "maximum user processes"

Check: System architecture
  Node Name     Available                 Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  racprd10     x86_64                    x86_64                    passed
  racprd05     x86_64                    x86_64                    passed
Result: System architecture check passed

Check: Kernel version
  Node Name     Available                 Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  racprd10     2.6.16.60-0.42.10-smp     2.6.16.21                 passed
  racprd05     2.6.16.60-0.42.10-smp     2.6.16.21                 passed
Result: Kernel version check passed

Check: Kernel parameter for "semmsl"
  Node Name     Configured                Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  racprd10     250                       250                       passed
  racprd05     250                       250                       passed
Result: Kernel parameter check passed for "semmsl"

Check: Kernel parameter for "semmns"
  Node Name     Configured                Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  racprd10     32000                     32000                     passed
  racprd05     32000                     32000                     passed
Result: Kernel parameter check passed for "semmns"

Check: Kernel parameter for "semopm"
  Node Name     Configured                Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  racprd10     100                       100                       passed
  racprd05     100                       100                       passed
Result: Kernel parameter check passed for "semopm"

Check: Kernel parameter for "semmni"
  Node Name     Configured                Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  racprd10     128                       128                       passed
  racprd05     128                       128                       passed
Result: Kernel parameter check passed for "semmni"

Check: Kernel parameter for "shmmax"
  Node Name     Configured                Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  racprd10     17179869184               4294967295                passed
  racprd05     17179869184               4294967295                passed
Result: Kernel parameter check passed for "shmmax"

Check: Kernel parameter for "shmmni"
  Node Name     Configured                Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  racprd10     4096                      4096                      passed
  racprd05     4096                      4096                      passed
Result: Kernel parameter check passed for "shmmni"

Check: Kernel parameter for "shmall"
  Node Name     Configured                Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  racprd10     8388608                   2097152                   passed
  racprd05     8388608                   2097152                   passed
Result: Kernel parameter check passed for "shmall"

Check: Kernel parameter for "file-max"
  Node Name     Configured                Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  racprd10     6815744                   6815744                   passed
  racprd05     6815744                   6815744                   passed
Result: Kernel parameter check passed for "file-max"

Check: Kernel parameter for "ip_local_port_range"
  Node Name     Configured                Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  racprd10     between 9000 & 65500      between 9000 & 65500      passed
  racprd05     between 9000 & 65500      between 9000 & 65500      passed
Result: Kernel parameter check passed for "ip_local_port_range"

Check: Kernel parameter for "rmem_default"
  Node Name     Configured                Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  racprd10     1048576                   262144                    passed
  racprd05     1048576                   262144                    passed
Result: Kernel parameter check passed for "rmem_default"

Check: Kernel parameter for "rmem_max"
  Node Name     Configured                Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  racprd10     4194304                   4194304                   passed
  racprd05     4194304                   4194304                   passed
Result: Kernel parameter check passed for "rmem_max"

Check: Kernel parameter for "wmem_default"
  Node Name     Configured                Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  racprd10     262144                    262144                    passed
  racprd05     262144                    262144                    passed
Result: Kernel parameter check passed for "wmem_default"

Check: Kernel parameter for "wmem_max"
  Node Name     Configured                Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  racprd10     1048576                   1048576                   passed
  racprd05     1048576                   1048576                   passed
Result: Kernel parameter check passed for "wmem_max"

Check: Kernel parameter for "aio-max-nr"
  Node Name     Configured                Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  racprd10     1048576                   1048576                   passed
  racprd05     1048576                   1048576                   passed
Result: Kernel parameter check passed for "aio-max-nr"

Check: Package existence for "make-3.80( x86_64)"
  Node Name     Available                 Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  racprd10     make-3.80-202.2           make-3.80( x86_64)        passed
  racprd05     make-3.80-202.2           make-3.80( x86_64)        passed
Result: Package existence check passed for "make-3.80( x86_64)"

Check: Package existence for "libaio-0.3.104( x86_64)"
  Node Name     Available                 Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  racprd10     libaio-0.3.104-14.2       libaio-0.3.104( x86_64)   passed
  racprd05     libaio-0.3.104-14.2       libaio-0.3.104( x86_64)   passed
Result: Package existence check passed for "libaio-0.3.104( x86_64)"

Check: Package existence for "binutils-2.16.91.0.5( x86_64)"
  Node Name     Available                 Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  racprd10     binutils-2.16.91.0.5-23.31  binutils-2.16.91.0.5( x86_64)  passed
  racprd05     binutils-2.16.91.0.5-23.31  binutils-2.16.91.0.5( x86_64)  passed
Result: Package existence check passed for "binutils-2.16.91.0.5( x86_64)"

Check: Package existence for "gcc-4.1.2 (x86_64)( x86_64)"
  Node Name     Available                 Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  racprd10     gcc-4.1.2_20070115-0.22 (x86_64)  gcc-4.1.2 (x86_64)( x86_64)  passed
  racprd05     gcc-4.1.2_20070115-0.22 (x86_64)  gcc-4.1.2 (x86_64)( x86_64)  passed
Result: Package existence check passed for "gcc-4.1.2 (x86_64)( x86_64)"

Check: Package existence for "gcc-c++-4.1.2 (x86_64)( x86_64)"
  Node Name     Available                 Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  racprd10     gcc-c++-4.1.2_20070115-0.22 (x86_64)  gcc-c++-4.1.2 (x86_64)( x86_64)  passed
  racprd05     gcc-c++-4.1.2_20070115-0.22 (x86_64)  gcc-c++-4.1.2 (x86_64)( x86_64)  passed
Result: Package existence check passed for "gcc-c++-4.1.2 (x86_64)( x86_64)"

Check: Package existence for "compat-libstdc++-5.0.7 (x86_64)( x86_64)"
  Node Name     Available                 Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  racprd10     compat-libstdc++-5.0.7-22.2 (x86_64)  compat-libstdc++-5.0.7 (x86_64)( x86_64)  passed
  racprd05     compat-libstdc++-5.0.7-22.2 (x86_64)  compat-libstdc++-5.0.7 (x86_64)( x86_64)  passed
Result: Package existence check passed for "compat-libstdc++-5.0.7 (x86_64)( x86_64)"

Check: Package existence for "glibc at least 2.4-31.63, not between 2.5-18 & 2.5-23 (x86_64)( x86_64)"
  Node Name     Available                 Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  racprd10     glibc-2.4-31.63.3 (x86_64)  glibc at least 2.4-31.63, not between 2.5-18 & 2.5-23 (x86_64)( x86_64)  passed
  racprd05     glibc-2.4-31.63.3 (x86_64)  glibc at least 2.4-31.63, not between 2.5-18 & 2.5-23 (x86_64)( x86_64)  passed
Result: Package existence check passed for "glibc at least 2.4-31.63, not between 2.5-18 & 2.5-23 (x86_64)( x86_64)"

Check: Package existence for "glibc-devel-2.4( x86_64)"
  Node Name     Available                 Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  racprd10     glibc-devel-2.4-31.63.3   glibc-devel-2.4( x86_64)  passed
  racprd05     glibc-devel-2.4-31.63.3   glibc-devel-2.4( x86_64)  passed
Result: Package existence check passed for "glibc-devel-2.4( x86_64)"

Check: Package existence for "ksh-93r-12.9( x86_64)"
  Node Name     Available                 Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  racprd10     ksh-93s-59.7              ksh-93r-12.9( x86_64)     passed
  racprd05     ksh-93s-59.7              ksh-93r-12.9( x86_64)     passed
Result: Package existence check passed for "ksh-93r-12.9( x86_64)"

Check: Package existence for "libaio-devel-0.3.104( x86_64)"
  Node Name     Available                 Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  racprd10     libaio-devel-0.3.104-14.2  libaio-devel-0.3.104( x86_64)  passed
  racprd05     libaio-devel-0.3.104-14.2  libaio-devel-0.3.104( x86_64)  passed
Result: Package existence check passed for "libaio-devel-0.3.104( x86_64)"

Check: Package existence for "libelf-0.8.5( x86_64)"
  Node Name     Available                 Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  racprd10     libelf-0.8.5-47.2         libelf-0.8.5( x86_64)     passed
  racprd05     libelf-0.8.5-47.2         libelf-0.8.5( x86_64)     passed
Result: Package existence check passed for "libelf-0.8.5( x86_64)"

Check: Package existence for "libgcc-4.1.2 (x86_64)( x86_64)"
  Node Name     Available                 Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  racprd10     libgcc-4.1.2_20070115-0.22 (x86_64)  libgcc-4.1.2 (x86_64)( x86_64)  passed
  racprd05     libgcc-4.1.2_20070115-0.22 (x86_64)  libgcc-4.1.2 (x86_64)( x86_64)  passed
Result: Package existence check passed for "libgcc-4.1.2 (x86_64)( x86_64)"

Check: Package existence for "libstdc++-4.1.2 (x86_64)( x86_64)"
  Node Name     Available                 Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  racprd10     libstdc++-4.1.2_20070115-0.22 (x86_64)  libstdc++-4.1.2 (x86_64)( x86_64)  passed
  racprd05     libstdc++-4.1.2_20070115-0.22 (x86_64)  libstdc++-4.1.2 (x86_64)( x86_64)  passed
Result: Package existence check passed for "libstdc++-4.1.2 (x86_64)( x86_64)"

Check: Package existence for "libstdc++-devel-4.1.2 (x86_64)( x86_64)"
  Node Name     Available                 Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  racprd10     libstdc++-devel-4.1.2_20070115-0.22 (x86_64)  libstdc++-devel-4.1.2 (x86_64)( x86_64)  passed
  racprd05     libstdc++-devel-4.1.2_20070115-0.22 (x86_64)  libstdc++-devel-4.1.2 (x86_64)( x86_64)  passed
Result: Package existence check passed for "libstdc++-devel-4.1.2 (x86_64)( x86_64)"

Check: Package existence for "sysstat-8.0.4( x86_64)"
  Node Name     Available                 Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  racprd10     sysstat-8.0.4-1.4         sysstat-8.0.4( x86_64)    passed
  racprd05     sysstat-8.0.4-1.4         sysstat-8.0.4( x86_64)    passed
Result: Package existence check passed for "sysstat-8.0.4( x86_64)"

Check: Package existence for "libcap-1.92( x86_64)"
  Node Name     Available                 Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  racprd10     libcap-1.92-499.4         libcap-1.92( x86_64)      passed
  racprd05     libcap-1.92-499.4         libcap-1.92( x86_64)      passed
Result: Package existence check passed for "libcap-1.92( x86_64)"

Checking for multiple users with UID value 0
Result: Check for multiple users with UID value 0 passed

Check: Current group ID
Result: Current group ID check passed

Checking OCR integrity...
Check for compatible storage device for OCR location "/u03_CRS/ocr/POINT3.ocr"...


Checking OCR device "/u03_CRS/ocr/POINT3.ocr" for sharedness...


ERROR:
PRVF-4172 : Check of OCR device "/u03_CRS/ocr/POINT3.ocr" for sharedness failed

racprd10:
Mount options did not meet the requirements [Expected = "rw,hard,rsize>=32768,wsize>=32768,proto=tcp|tcp,vers=3|nfsvers=3|nfsv3|v3,timeo>=600,acregmin=0&acregmax=0&acdirmin=0&acdirmax=0|actimeo=0" ; Found = "rw,v3,rsize=32768,wsize=32768,acregmin=0,acregmax=0,acdirmin=0,acdirmax=0,hard,lock,proto=tcp,sec=sys,addr=myprod01-node1"]
racprd05:
Mount options did not meet the requirements [Expected = "rw,hard,rsize>=32768,wsize>=32768,proto=tcp|tcp,vers=3|nfsvers=3|nfsv3|v3,timeo>=600,acregmin=0&acregmax=0&acdirmin=0&acdirmax=0|actimeo=0" ; Found = "rw,v3,rsize=32768,wsize=32768,acregmin=0,acregmax=0,acdirmin=0,acdirmax=0,hard,lock,proto=tcp,sec=sys,addr=myprod01-node1"]

Check for compatible storage device for OCR location "/u01_CRS/ocr/POINT1.ocr"...


Checking OCR device "/u01_CRS/ocr/POINT1.ocr" for sharedness...


ERROR:
PRVF-4172 : Check of OCR device "/u01_CRS/ocr/POINT1.ocr" for sharedness failed

racprd10:
Mount options did not meet the requirements [Expected = "rw,hard,rsize>=32768,wsize>=32768,proto=tcp|tcp,vers=3|nfsvers=3|nfsv3|v3,timeo>=600,acregmin=0&acregmax=0&acdirmin=0&acdirmax=0|actimeo=0" ; Found = "rw,v3,rsize=32768,wsize=32768,acregmin=0,acregmax=0,acdirmin=0,acdirmax=0,hard,lock,proto=tcp,sec=sys,addr=myprod01-node1"]
racprd05:
Mount options did not meet the requirements [Expected = "rw,hard,rsize>=32768,wsize>=32768,proto=tcp|tcp,vers=3|nfsvers=3|nfsv3|v3,timeo>=600,acregmin=0&acregmax=0&acdirmin=0&acdirmax=0|actimeo=0" ; Found = "rw,v3,rsize=32768,wsize=32768,acregmin=0,acregmax=0,acdirmin=0,acdirmax=0,hard,lock,proto=tcp,sec=sys,addr=myprod01-node1"]

Check for compatible storage device for OCR location "/u02_CRS/ocr/POINT2.ocr"...


Checking OCR device "/u02_CRS/ocr/POINT2.ocr" for sharedness...


ERROR:
PRVF-4172 : Check of OCR device "/u02_CRS/ocr/POINT2.ocr" for sharedness failed

racprd10:
Mount options did not meet the requirements [Expected = "rw,hard,rsize>=32768,wsize>=32768,proto=tcp|tcp,vers=3|nfsvers=3|nfsv3|v3,timeo>=600,acregmin=0&acregmax=0&acdirmin=0&acdirmax=0|actimeo=0" ; Found = "rw,v3,rsize=32768,wsize=32768,acregmin=0,acregmax=0,acdirmin=0,acdirmax=0,hard,lock,proto=tcp,sec=sys,addr=myprod01-node2"]
racprd05:
Mount options did not meet the requirements [Expected = "rw,hard,rsize>=32768,wsize>=32768,proto=tcp|tcp,vers=3|nfsvers=3|nfsv3|v3,timeo>=600,acregmin=0&acregmax=0&acdirmin=0&acdirmax=0|actimeo=0" ; Found = "rw,v3,rsize=32768,wsize=32768,acregmin=0,acregmax=0,acdirmin=0,acdirmax=0,hard,lock,proto=tcp,sec=sys,addr=myprod01-node2"]


OCR integrity check failed

Checking Oracle Cluster Voting Disk configuration...

ERROR:

Unable to determine the sharedness of /u01_CRS/voting/POINT1.vot on nodes: racprd10

ERROR:

Unable to determine the sharedness of /u02_CRS/voting/POINT2.vot on nodes: racprd10

ERROR:

Unable to determine the sharedness of /u03_CRS/voting/POINT3.vot on nodes: racprd10

Oracle Cluster Voting Disk configuration check passed
Check: Time zone consistency
Result: Time zone consistency check passed

Starting Clock synchronization checks using Network Time Protocol(NTP)...

NTP Configuration file check started...
The NTP configuration file "/etc/ntp.conf" is available on all nodes
NTP Configuration file check passed

Checking daemon liveness...

Check: Liveness for "ntpd"
  Node Name                             Running?
  ------------------------------------  ------------------------
  racprd10                             yes
  racprd05                             yes
Result: Liveness check passed for "ntpd"
Check for NTP daemon or service alive passed on all nodes

Checking NTP daemon command line for slewing option "-x"
Check: NTP daemon command line
  Node Name                             Slewing Option Set?
  ------------------------------------  ------------------------
  racprd10                             no
  racprd05                             no
Result:
NTP daemon slewing option check failed on some nodes
PRVF-5436 : The NTP daemon running on one or more nodes lacks the slewing option "-x"
Result: Clock synchronization check using Network Time Protocol(NTP) failed


Checking to make sure user "oracle" is not in "root" group
  Node Name     Status                    Comment
  ------------  ------------------------  ------------------------
  racprd10     does not exist            passed
  racprd05     does not exist            passed
Result: User "oracle" is not part of "root" group. Check passed
Checking consistency of file "/etc/resolv.conf" across nodes

Checking the file "/etc/resolv.conf" to make sure only one of domain and search entries is defined
File "/etc/resolv.conf" does not have both domain and search entries defined
Checking if domain entry in file "/etc/resolv.conf" is consistent across the nodes...
domain entry in file "/etc/resolv.conf" is consistent across nodes
Checking if search entry in file "/etc/resolv.conf" is consistent across the nodes...
search entry in file "/etc/resolv.conf" is consistent across nodes
Checking file "/etc/resolv.conf" to make sure that only one search entry is defined
All nodes have one search entry defined in file "/etc/resolv.conf"
Checking all nodes to make sure that search entry is "imycompany.com man.co" as found on node "racprd05"
All nodes of the cluster have same value for 'search'
Checking DNS response time for an unreachable node
  Node Name                             Status
  ------------------------------------  ------------------------
  racprd05                             passed
  racprd10                             passed
The DNS response time for an unreachable node is within acceptable limit on all nodes

File "/etc/resolv.conf" is consistent across nodes


Pre-check for node addition was unsuccessful on all the nodes.
racprd05 | CRS | /export/11.2.0.2/bin


Things of concern from the pre-check above:
- Sharedness check for the OCR devices
- Shared resource path location failed for "/export/11.2.0.2"
- NTP check for the slewing option "-x"

-- The second error is ignorable: the Grid home lives on local disk on each node, so this path does not need to be shared.

-- The third error is related to NTP. CVU reads the ntpd startup options from "/etc/sysconfig/ntpd", whereas my system administrators have NTP configured differently. NTP works fine on the existing cluster, and I verified that the daemon is running with the correct options on the new node, so I am ignoring this error.
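For reference, if we wanted to satisfy the check instead of ignoring it, the slewing flag would have to appear in the file CVU reads. A hypothetical entry is shown below (this is the Red Hat-style sysconfig layout; on SLES the equivalent settings live elsewhere, which is exactly why CVU misses them here):

```shell
# /etc/sysconfig/ntpd -- the file CVU inspects on Linux (hypothetical entry).
# "-x" makes ntpd slew the clock instead of stepping it, which is what the
# PRVF-5436 check looks for on the daemon command line.
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
```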

-- For the first error, I checked the OCR mount options thoroughly and they were fine. But since the pre-check reports this issue, the node addition itself might fail on the same validation. After a bit of research, I found the workaround: set the following variable before running addNode.sh.

> export IGNORE_PREADDNODE_CHECKS=Y
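Before exporting this flag, it is worth confirming by hand that the options cluvfy expects really are present on the mount. A minimal sketch (the option string is copied from the PRVF-4172 output above; the helper function is mine, not an Oracle tool):

```shell
#!/bin/sh
# Quick sanity check that a mount's option string contains a set of required
# options verbatim. Not a substitute for cluvfy -- just a way to convince
# yourself before deciding to set IGNORE_PREADDNODE_CHECKS=Y.
check_opts() {
  found=",$1,"; shift
  missing=""
  for opt in "$@"; do
    case "$found" in
      *",$opt,"*) ;;                 # option present
      *) missing="$missing $opt" ;;  # option absent
    esac
  done
  if [ -z "$missing" ]; then echo "ok"; else echo "missing:$missing"; fi
}

# Option string reported for the OCR mount on racprd10 (from cluvfy above)
found="rw,v3,rsize=32768,wsize=32768,acregmin=0,acregmax=0,acdirmin=0,acdirmax=0,hard,lock,proto=tcp,sec=sys,addr=myprod01-node1"

check_opts "$found" rw hard v3 proto=tcp rsize=32768 wsize=32768   # -> ok
```

Note that the string cluvfy sees has no explicit timeo= option (the NFS default applies), which is one reason a strict string comparison can fail even when the mount is effectively fine.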



Now extend the Grid Infrastructure home to the new node by running addNode.sh from $GI_HOME/oui/bin.

If you are using Grid Naming Service (GNS), run the following command:
$ ./addNode.sh "CLUSTER_NEW_NODES={node3}"

If you are not using GNS, run the following command:
$ ./addNode.sh "CLUSTER_NEW_NODES={node3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={node3-vip}"

From the existing (source) node:

racprd05 | CRS | /export/11.2.0.2/oui/bin
> echo $IGNORE_PREADDNODE_CHECKS
Y
racprd05 | CRS | /export/11.2.0.2/oui/bin
> ./addNode.sh -silent "CLUSTER_NEW_NODES={racprd10}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={racprd10v}"
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 23811 MB    Passed
Oracle Universal Installer, Version 11.2.0.2.0 Production
Copyright (C) 1999, 2010, Oracle. All rights reserved.


Performing tests to see whether nodes racprd01,racprd02,racprd03,racprd04,racprd08,racprd09,racprd10 are available
............................................................... 100% Done.

.
-----------------------------------------------------------------------------
Cluster Node Addition Summary
Global Settings
   Source: /export/11.2.0.2
   New Nodes
Space Requirements
   New Nodes
      racprd10
         /export: Required 8.59GB : Available 29.44GB
Installed Products
   Product Names
      Oracle Grid Infrastructure 11.2.0.2.0
      Sun JDK 1.5.0.24.08
      Installer SDK Component 11.2.0.2.0
      Oracle One-Off Patch Installer 11.2.0.0.2
      Oracle Universal Installer 11.2.0.2.0
      Oracle USM Deconfiguration 11.2.0.2.0
      Oracle Configuration Manager Deconfiguration 10.3.1.0.0
      Enterprise Manager Common Core Files 10.2.0.4.3
      Oracle DBCA Deconfiguration 11.2.0.2.0
      Oracle RAC Deconfiguration 11.2.0.2.0
      Oracle Quality of Service Management (Server) 11.2.0.2.0
      Installation Plugin Files 11.2.0.2.0
      Universal Storage Manager Files 11.2.0.2.0
      Oracle Text Required Support Files 11.2.0.2.0
      Automatic Storage Management Assistant 11.2.0.2.0
      Oracle Database 11g Multimedia Files 11.2.0.2.0
      Oracle Multimedia Java Advanced Imaging 11.2.0.2.0
      Oracle Globalization Support 11.2.0.2.0
      Oracle Multimedia Locator RDBMS Files 11.2.0.2.0
      Oracle Core Required Support Files 11.2.0.2.0
      Bali Share 1.1.18.0.0
      Oracle Database Deconfiguration 11.2.0.2.0
      Oracle Quality of Service Management (Client) 11.2.0.2.0
      Expat libraries 2.0.1.0.1
      Oracle Containers for Java 11.2.0.2.0
      Perl Modules 5.10.0.0.1
      Secure Socket Layer 11.2.0.2.0
      Oracle JDBC/OCI Instant Client 11.2.0.2.0
      Oracle Multimedia Client Option 11.2.0.2.0
      LDAP Required Support Files 11.2.0.2.0
      Character Set Migration Utility 11.2.0.2.0
      Perl Interpreter 5.10.0.0.1
      PL/SQL Embedded Gateway 11.2.0.2.0
      OLAP SQL Scripts 11.2.0.2.0
      Database SQL Scripts 11.2.0.2.0
      Oracle Extended Windowing Toolkit 3.4.47.0.0
      SSL Required Support Files for InstantClient 11.2.0.2.0
      SQL*Plus Files for Instant Client 11.2.0.2.0
      Oracle Net Required Support Files 11.2.0.2.0
      Oracle Database User Interface 2.2.13.0.0
      RDBMS Required Support Files for Instant Client 11.2.0.2.0
      RDBMS Required Support Files Runtime 11.2.0.2.0
      XML Parser for Java 11.2.0.2.0
      Oracle Security Developer Tools 11.2.0.2.0
      Oracle Wallet Manager 11.2.0.2.0
      Enterprise Manager plugin Common Files 11.2.0.2.0
      Platform Required Support Files 11.2.0.2.0
      Oracle JFC Extended Windowing Toolkit 4.2.36.0.0
      RDBMS Required Support Files 11.2.0.2.0
      Oracle Ice Browser 5.2.3.6.0
      Oracle Help For Java 4.2.9.0.0
      Enterprise Manager Common Files 10.2.0.4.3
      Deinstallation Tool 11.2.0.2.0
      Oracle Java Client 11.2.0.2.0
      Cluster Verification Utility Files 11.2.0.2.0
      Oracle Notification Service (eONS) 11.2.0.2.0
      Oracle LDAP administration 11.2.0.2.0
      Cluster Verification Utility Common Files 11.2.0.2.0
      Oracle Clusterware RDBMS Files 11.2.0.2.0
      Oracle Locale Builder 11.2.0.2.0
      Oracle Globalization Support 11.2.0.2.0
      Buildtools Common Files 11.2.0.2.0
      Oracle RAC Required Support Files-HAS 11.2.0.2.0
      SQL*Plus Required Support Files 11.2.0.2.0
      XDK Required Support Files 11.2.0.2.0
      Agent Required Support Files 10.2.0.4.3
      Parser Generator Required Support Files 11.2.0.2.0
      Precompiler Required Support Files 11.2.0.2.0
      Installation Common Files 11.2.0.2.0
      Required Support Files 11.2.0.2.0
      Oracle JDBC/THIN Interfaces 11.2.0.2.0
      Oracle Multimedia Locator 11.2.0.2.0
      Oracle Multimedia 11.2.0.2.0
      HAS Common Files 11.2.0.2.0
      Assistant Common Files 11.2.0.2.0
      PL/SQL 11.2.0.2.0
      HAS Files for DB 11.2.0.2.0
      Oracle Recovery Manager 11.2.0.2.0
      Oracle Database Utilities 11.2.0.2.0
      Oracle Notification Service 11.2.0.2.0
      SQL*Plus 11.2.0.2.0
      Oracle Netca Client 11.2.0.2.0
      Oracle Net 11.2.0.2.0
      Oracle JVM 11.2.0.2.0
      Oracle Internet Directory Client 11.2.0.2.0
      Oracle Net Listener 11.2.0.2.0
      Cluster Ready Services Files 11.2.0.2.0
      Oracle Database 11g 11.2.0.2.0
-----------------------------------------------------------------------------


Instantiating scripts for add node (Wednesday, July 25, 2012 2:02:27 AM GMT)
.                                                                 1% Done.
Instantiation of add node scripts complete

Copying to remote nodes (Wednesday, July 25, 2012 2:02:30 AM GMT)
...............................................................................................                                 96% Done.
Home copied to new nodes

Saving inventory on nodes (Wednesday, July 25, 2012 2:05:06 AM GMT)
.                                                               100% Done.
Save inventory complete
WARNING:
The following configuration scripts need to be executed as the "root" user in each cluster node.
/export/11.2.0.2/root.sh #On nodes racprd10
To execute the configuration scripts:
    1. Open a terminal window
    2. Log in as "root"
    3. Run the scripts in each cluster node

The Cluster Node Addition of /export/11.2.0.2 was successful.
Please check '/tmp/silentInstall.log' for more details.
racprd05 | CRS | /export/11.2.0.2/oui/bin
> 


As instructed by the installer output above, run $GI_HOME/root.sh on the new node. Do this from a separate terminal, as the root user.

racprd10:~ # sh /export/11.2.0.2/root.sh
Running Oracle 11g root script...

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /export/11.2.0.2

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /export/11.2.0.2/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
OLR initialization - successful
Adding daemon to inittab
ACFS-9459: ADVM/ACFS is not supported on this OS version: 'SLES10 SP2'
ACFS-9201: Not Supported
ACFS-9459: ADVM/ACFS is not supported on this OS version: 'SLES10 SP2'
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node racprd02, number 2, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
/export/11.2.0.2/bin/srvctl start listener -n racprd10 ... failed
Failed to perform new node configuration at /export/11.2.0.2/crs/install/crsconfig_lib.pm line 8245.
/export/11.2.0.2/perl/bin/perl -I/export/11.2.0.2/perl/lib -I/export/11.2.0.2/crs/install /export/11.2.0.2/crs/install/rootcrs.pl execution failed


Investigating why root.sh failed:

Check the error log rootcrs_<node_name>.log located under $GI_HOME/cfgtoollogs/crsconfig

racprd10:/export/11.2.0.2/cfgtoollogs/crsconfig # view rootcrs_racprd10.log

2012-07-25 02:13:18: netmask = 255.255.255.0
2012-07-25 02:13:18: interface =
2012-07-25 02:13:18: hostVIP = racprd10v/255.255.255.0
2012-07-25 02:13:18: Invoking "/export/11.2.0.2/bin/srvctl add nodeapps -n racprd10 -A "racprd10v/255.255.255.0" "
2012-07-25 02:13:18: trace file=/export/11.2.0.2/cfgtoollogs/crsconfig/srvmcfg0.log
2012-07-25 02:13:18: Executing /export/11.2.0.2/bin/srvctl add nodeapps -n racprd10 -A "racprd10v/255.255.255.0"
2012-07-25 02:13:18: Executing cmd: /export/11.2.0.2/bin/srvctl add nodeapps -n racprd10 -A "racprd10v/255.255.255.0"
2012-07-25 02:13:23: add nodeapps on node=racprd10 ... success
2012-07-25 02:13:23: Invoking "/export/11.2.0.2/bin/srvctl start vip -i racprd10"
2012-07-25 02:13:23: trace file=/export/11.2.0.2/cfgtoollogs/crsconfig/srvmcfg1.log
2012-07-25 02:13:23: Executing /export/11.2.0.2/bin/srvctl start vip -i racprd10
2012-07-25 02:13:23: Executing cmd: /export/11.2.0.2/bin/srvctl start vip -i racprd10
2012-07-25 02:13:25: start vip on node:racprd10 ... success
2012-07-25 02:13:25: Running as user oracle: /export/11.2.0.2/bin/srvctl start listener -n racprd10
2012-07-25 02:13:25: s_run_as_user2: Running /bin/su oracle -c ' /export/11.2.0.2/bin/srvctl start listener -n racprd10 '
2012-07-25 02:13:31: Removing file /tmp/file7FlP8l
2012-07-25 02:13:31: Successfully removed file: /tmp/file7FlP8l
2012-07-25 02:13:31: /bin/su exited with rc=0
 1
2012-07-25 02:13:31: /export/11.2.0.2/bin/srvctl start listener -n racprd10 ... failed
2012-07-25 02:13:31: Running as user oracle: /export/11.2.0.2/bin/cluutil -ckpt -oraclebase /export/oracle -writeckpt -name ROOTCRS_NODECONFIG -state FAIL
2012-07-25 02:13:31: s_run_as_user2: Running /bin/su oracle -c ' /export/11.2.0.2/bin/cluutil -ckpt -oraclebase /export/oracle -writeckpt -name ROOTCRS_NODECONFIG -state FAIL '
2012-07-25 02:13:31: Removing file /tmp/fileMI0g9k
2012-07-25 02:13:31: Successfully removed file: /tmp/fileMI0g9k
2012-07-25 02:13:31: /bin/su successfully executed

2012-07-25 02:13:31: Succeeded in writing the checkpoint:'ROOTCRS_NODECONFIG' with status:FAIL
2012-07-25 02:13:31: ###### Begin DIE Stack Trace ######
2012-07-25 02:13:31:     Package         File                 Line Calling
2012-07-25 02:13:31:     --------------- -------------------- ---- ----------
2012-07-25 02:13:31:  1: main            rootcrs.pl            324 crsconfig_lib::dietrap
2012-07-25 02:13:31:  2: crsconfig_lib   crsconfig_lib.pm     8245 main::__ANON__
2012-07-25 02:13:31:  3: crsconfig_lib   crsconfig_lib.pm     8205 crsconfig_lib::configNode
2012-07-25 02:13:31:  4: main            rootcrs.pl            753 crsconfig_lib::perform_configNode
2012-07-25 02:13:31: ####### End DIE Stack Trace #######

2012-07-25 02:13:31: 'ROOTCRS_NODECONFIG' checkpoint has failed


There is a known bug similar to the issue we hit:
Root.Sh From addNode.Sh On 11.2.0.1 Fails On New Node [ID 1295194.1]

The errors in the log point to the listener startup from the 11g RAC home. We run several local listeners, some from the Oracle home and some from the Grid home, so I can take care of the listener configuration separately. We will run root.sh again later in this procedure.
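Once the node has joined the cluster, the listener piece can be revisited with srvctl from the Grid home. A hedged sketch of the kind of commands involved (guarded so the snippet is a no-op on a machine without clusterware; nothing here is specific to this cluster's listener setup):

```shell
#!/bin/sh
# Illustrative follow-up for the listener start that root.sh could not do.
# Run from a node where the Grid home bin directory is in PATH.
if command -v srvctl >/dev/null 2>&1; then
  srvctl config listener               # listeners registered with CRS
  srvctl status listener -n racprd10   # state on the new node
  srvctl start listener -n racprd10    # retry the start that failed
else
  echo "srvctl not in PATH -- run from the Grid home bin directory"
fi
```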

Now extend the Oracle RAC database home to the new node.

From $ORACLE_HOME/oui/bin on an existing node, run:

$ ./addNode.sh "CLUSTER_NEW_NODES={node3}"

> ./addNode.sh -silent "CLUSTER_NEW_NODES={racprd10}"
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 23811 MB    Passed
Oracle Universal Installer, Version 11.2.0.2.0 Production
Copyright (C) 1999, 2010, Oracle. All rights reserved.


Performing tests to see whether nodes racprd01,racprd02,racprd03,racprd04,racprd08,racprd09,racprd10 are available
............................................................... 100% Done.

.
-----------------------------------------------------------------------------
Cluster Node Addition Summary
Global Settings
   Source: /export/oracle/product/11.2.0.2
   New Nodes
Space Requirements
   New Nodes
      racprd10
         /export: Required 4.49GB : Available 24.02GB
Installed Products
   Product Names
      Oracle Database 11g 11.2.0.2.0
      Sun JDK 1.5.0.24.08
      Installer SDK Component 11.2.0.2.0
      Oracle One-Off Patch Installer 11.2.0.0.2
      Oracle Universal Installer 11.2.0.2.0
      Oracle USM Deconfiguration 11.2.0.2.0
      Oracle Configuration Manager Deconfiguration 10.3.1.0.0
      Oracle DBCA Deconfiguration 11.2.0.2.0
      Oracle RAC Deconfiguration 11.2.0.2.0
      Oracle Database Deconfiguration 11.2.0.2.0
      Oracle Configuration Manager Client 10.3.2.1.0
      Oracle Configuration Manager 10.3.3.1.1
      Oracle ODBC Driverfor Instant Client 11.2.0.2.0
      LDAP Required Support Files 11.2.0.2.0
      SSL Required Support Files for InstantClient 11.2.0.2.0
      Bali Share 1.1.18.0.0
      Oracle Extended Windowing Toolkit 3.4.47.0.0
      Oracle JFC Extended Windowing Toolkit 4.2.36.0.0
      Oracle Real Application Testing 11.2.0.2.0
      Oracle Database Vault J2EE Application 11.2.0.2.0
      Oracle Label Security 11.2.0.2.0
      Oracle Data Mining RDBMS Files 11.2.0.2.0
      Oracle OLAP RDBMS Files 11.2.0.2.0
      Oracle OLAP API 11.2.0.2.0
      Platform Required Support Files 11.2.0.2.0
      Oracle Database Vault option 11.2.0.2.0
      Oracle RAC Required Support Files-HAS 11.2.0.2.0
      SQL*Plus Required Support Files 11.2.0.2.0
      Oracle Display Fonts 9.0.2.0.0
      Oracle Ice Browser 5.2.3.6.0
      Oracle JDBC Server Support Package 11.2.0.2.0
      Oracle SQL Developer 11.2.0.2.0
      Oracle Application Express 11.2.0.2.0
      XDK Required Support Files 11.2.0.2.0
      RDBMS Required Support Files for Instant Client 11.2.0.2.0
      SQLJ Runtime 11.2.0.2.0
      Database Workspace Manager 11.2.0.2.0
      RDBMS Required Support Files Runtime 11.2.0.2.0
      Oracle Globalization Support 11.2.0.2.0
      Exadata Storage Server 11.2.0.1.0
      Provisioning Advisor Framework 10.2.0.4.3
      Enterprise Manager Database Plugin -- Repository Support 11.2.0.2.0
      Enterprise Manager Repository Core Files 10.2.0.4.3
      Enterprise Manager Database Plugin -- Agent Support 11.2.0.2.0
      Enterprise Manager Grid Control Core Files 10.2.0.4.3
      Enterprise Manager Common Core Files 10.2.0.4.3
      Enterprise Manager Agent Core Files 10.2.0.4.3
      RDBMS Required Support Files 11.2.0.2.0
      regexp 2.1.9.0.0
      Agent Required Support Files 10.2.0.4.3
      Oracle 11g Warehouse Builder Required Files 11.2.0.2.0
      Oracle Notification Service (eONS) 11.2.0.2.0
      Oracle Text Required Support Files 11.2.0.2.0
      Parser Generator Required Support Files 11.2.0.2.0
      Oracle Database 11g Multimedia Files 11.2.0.2.0
      Oracle Multimedia Java Advanced Imaging 11.2.0.2.0
      Oracle Multimedia Annotator 11.2.0.2.0
      Oracle JDBC/OCI Instant Client 11.2.0.2.0
      Oracle Multimedia Locator RDBMS Files 11.2.0.2.0
      Precompiler Required Support Files 11.2.0.2.0
      Oracle Core Required Support Files 11.2.0.2.0
      Sample Schema Data 11.2.0.2.0
      Oracle Starter Database 11.2.0.2.0
      Oracle Message Gateway Common Files 11.2.0.2.0
      Oracle XML Query 11.2.0.2.0
      XML Parser for Oracle JVM 11.2.0.2.0
      Oracle Help For Java 4.2.9.0.0
      Installation Plugin Files 11.2.0.2.0
      Enterprise Manager Common Files 10.2.0.4.3
      Expat libraries 2.0.1.0.1
      Deinstallation Tool 11.2.0.2.0
      Oracle Quality of Service Management (Client) 11.2.0.2.0
      Perl Modules 5.10.0.0.1
      JAccelerator (COMPANION) 11.2.0.2.0
      Oracle Containers for Java 11.2.0.2.0
      Perl Interpreter 5.10.0.0.1
      Oracle Net Required Support Files 11.2.0.2.0
      Secure Socket Layer 11.2.0.2.0
      Oracle Universal Connection Pool 11.2.0.2.0
      Oracle JDBC/THIN Interfaces 11.2.0.2.0
      Oracle Multimedia Client Option 11.2.0.2.0
      Oracle Java Client 11.2.0.2.0
      Character Set Migration Utility 11.2.0.2.0
      Oracle Code Editor 1.2.1.0.0I
      PL/SQL Embedded Gateway 11.2.0.2.0
      OLAP SQL Scripts 11.2.0.2.0
      Database SQL Scripts 11.2.0.2.0
      Oracle Locale Builder 11.2.0.2.0
      Oracle Globalization Support 11.2.0.2.0
      SQL*Plus Files for Instant Client 11.2.0.2.0
      Required Support Files 11.2.0.2.0
      Oracle Database User Interface 2.2.13.0.0
      Oracle ODBC Driver 11.2.0.2.0
      Oracle Notification Service 11.2.0.2.0
      XML Parser for Java 11.2.0.2.0
      Oracle Security Developer Tools 11.2.0.2.0
      Oracle Wallet Manager 11.2.0.2.0
      Cluster Verification Utility Common Files 11.2.0.2.0
      Oracle Clusterware RDBMS Files 11.2.0.2.0
      Oracle UIX 2.2.24.6.0
      Enterprise Manager plugin Common Files 11.2.0.2.0
      HAS Common Files 11.2.0.2.0
      Precompiler Common Files 11.2.0.2.0
      Installation Common Files 11.2.0.2.0
      Oracle Help for the  Web 2.0.14.0.0
      Oracle LDAP administration 11.2.0.2.0
      Buildtools Common Files 11.2.0.2.0
      Assistant Common Files 11.2.0.2.0
      Oracle Recovery Manager 11.2.0.2.0
      PL/SQL 11.2.0.2.0
      Generic Connectivity Common Files 11.2.0.2.0
      Oracle Database Gateway for ODBC 11.2.0.2.0
      Oracle Programmer 11.2.0.2.0
      Oracle Database Utilities 11.2.0.2.0
      Enterprise Manager Agent 10.2.0.4.3
      SQL*Plus 11.2.0.2.0
      Oracle Netca Client 11.2.0.2.0
      Oracle Multimedia Locator 11.2.0.2.0
      Oracle Call Interface (OCI) 11.2.0.2.0
      Oracle Multimedia 11.2.0.2.0
      Oracle Net 11.2.0.2.0
      Oracle XML Development Kit 11.2.0.2.0
      Database Configuration and Upgrade Assistants 11.2.0.2.0
      Oracle JVM 11.2.0.2.0
      Oracle Advanced Security 11.2.0.2.0
      Oracle Internet Directory Client 11.2.0.2.0
      Oracle Enterprise Manager Console DB 11.2.0.2.0
      HAS Files for DB 11.2.0.2.0
      Oracle Net Listener 11.2.0.2.0
      Oracle Text 11.2.0.2.0
      Oracle Net Services 11.2.0.2.0
      Oracle Database 11g 11.2.0.2.0
      Oracle OLAP 11.2.0.2.0
      Oracle Spatial 11.2.0.2.0
      Oracle Partitioning 11.2.0.2.0
      Enterprise Edition Options 11.2.0.2.0
-----------------------------------------------------------------------------


Instantiating scripts for add node (Wednesday, July 25, 2012 2:34:01 AM GMT)
.                                                                 1% Done.
Instantiation of add node scripts complete

Copying to remote nodes (Wednesday, July 25, 2012 2:34:06 AM GMT)
...............................................................................................                                 96% Done.
Home copied to new nodes

Saving inventory on nodes (Wednesday, July 25, 2012 2:38:18 AM GMT)
.                                                               100% Done.
Save inventory complete
WARNING:
The following configuration scripts need to be executed as the "root" user in each cluster node.
/export/oracle/product/11.2.0.2/root.sh #On nodes racprd10
To execute the configuration scripts:
    1. Open a terminal window
    2. Log in as "root"
    3. Run the scripts in each cluster node

The Cluster Node Addition of /export/oracle/product/11.2.0.2 was successful.
Please check '/tmp/silentInstall.log' for more details.
> 



As instructed by the installer output above, run $ORACLE_HOME/root.sh on the new node. Do this from a separate terminal, as the root user.

# sh /export/oracle/product/11.2.0.2/root.sh
Running Oracle 11g root script...

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /export/oracle/product/11.2.0.2

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Finished product-specific root actions.



Finally, re-run the $GI_HOME/root.sh script on the new node as root. The earlier failure was recorded in the ROOTCRS_NODECONFIG checkpoint, so this run picks up the node configuration and completes it.

racprd10:/export/11.2.0.2 # sh /export/11.2.0.2/root.sh
Running Oracle 11g root script...

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /export/11.2.0.2

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /export/11.2.0.2/crs/install/crsconfig_params
PRKO-2188 : All the node applications already exist. They were not recreated.
PRKO-2420 : VIP is already started on node(s): racprd10
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
racprd10:/export/11.2.0.2 #

The cluster configuration was successful.

Post Checks

$ cluvfy stage -post nodeadd -n node3 [-verbose]


> ./cluvfy stage -post nodeadd -n racprd10 -verbose

Performing post-checks for node addition

Checking node reachability...

Check: Node reachability from node "racprd05"
  Destination Node                      Reachable?
  ------------------------------------  ------------------------
  racprd10                             yes
Result: Node reachability check passed from node "racprd05"


Checking user equivalence...

Check: User equivalence for user "oracle"
  Node Name                             Comment
  ------------------------------------  ------------------------
  racprd10                             passed
Result: User equivalence check passed for user "oracle"

Checking node connectivity...

Checking hosts config file...
  Node Name     Status                    Comment
  ------------  ------------------------  ------------------------
  racprd01     passed
  racprd09     passed
  racprd10     passed
  racprd08     passed
  racprd05     passed
  racprd04     passed
  racprd02     passed

Verification of the hosts config file successful


Interface information for node "racprd01"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 bond0  10.120.20.91    10.120.20.0     0.0.0.0         10.120.20.1     00:18:22:0B:7E:5C 1500
 bond0  10.120.20.111   10.120.20.0     0.0.0.0         10.120.20.1     00:18:22:0B:7E:5C 1500
 bond1  192.168.20.91   192.168.20.0    0.0.0.0         10.120.20.1     00:18:22:0E:4B:70 1500
 bond1  169.254.162.181 169.254.0.0     0.0.0.0         10.120.20.1     00:18:22:0E:4B:70 1500
 bond2  10.120.20.101   10.120.20.0     0.0.0.0         10.120.20.1     00:18:22:0B:7E:5D 1500


Interface information for node "racprd10"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 bond0  10.120.20.99    10.120.20.0     0.0.0.0         10.120.20.1     00:5B:21:42:6D:F0 1500
 bond0  10.120.20.122   10.120.20.0     0.0.0.0         10.120.20.1     00:5B:21:42:6D:F0 1500
 bond1  192.168.20.103  192.168.20.0    0.0.0.0         10.120.20.1     00:5B:21:42:6C:84 1500
 bond1  169.254.100.100 169.254.0.0     0.0.0.0         10.120.20.1     00:5B:21:42:6C:84 1500
 bond2  10.120.20.109   10.120.20.0     0.0.0.0         10.120.20.1     00:5B:21:42:6C:81 1500


Interface information for node "racprd09"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 bond0  10.120.20.90    10.120.20.0     0.0.0.0         10.120.20.1     00:18:22:85:29:24 1500
 bond0  10.120.20.120   10.120.20.0     0.0.0.0         10.120.20.1     00:18:22:85:29:24 1500
 bond1  192.168.20.100  192.168.20.0    0.0.0.0         10.120.20.1     00:18:22:85:29:26 1500
 bond1  169.254.58.183  169.254.0.0     0.0.0.0         10.120.20.1     00:18:22:85:29:26 1500
 bond2  10.120.20.114   10.120.20.0     0.0.0.0         10.120.20.1     00:5B:21:22:2A:E0 1500


Interface information for node "racprd08"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 bond0  10.120.20.94    10.120.20.0     0.0.0.0         10.120.20.1     00:18:22:76:08:29 1500
 bond0  10.120.20.119   10.120.20.0     0.0.0.0         10.120.20.1     00:18:22:76:08:29 1500
 bond0  10.120.20.166   10.120.20.0     0.0.0.0         10.120.20.1     00:18:22:76:08:29 1500
 bond1  192.168.20.99   192.168.20.0    0.0.0.0         10.120.20.1     00:18:22:76:08:2B 1500
 bond1  169.254.213.9   169.254.0.0     0.0.0.0         10.120.20.1     00:18:22:76:08:2B 1500
 bond2  10.120.20.104   10.120.20.0     0.0.0.0         10.120.20.1     00:18:22:76:08:28 1500


Interface information for node "racprd05"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 bond0  10.120.20.146   10.120.20.0     0.0.0.0         10.120.20.1     00:5B:21:49:36:D0 1500
 bond0  10.120.20.148   10.120.20.0     0.0.0.0         10.120.20.1     00:5B:21:49:36:D0 1500
 bond1  192.168.20.101  192.168.20.0    0.0.0.0         10.120.20.1     00:5B:21:49:36:D4 1500
 bond1  169.254.83.149  169.254.0.0     0.0.0.0         10.120.20.1     00:5B:21:49:36:D4 1500
 bond2  10.120.20.147   10.120.20.0     0.0.0.0         10.120.20.1     00:5B:21:49:36:D1 1500


Interface information for node "racprd04"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 bond0  10.120.20.125   10.120.20.0     0.0.0.0         10.120.20.1     00:5B:21:46:46:60 1500
 bond0  10.120.20.127   10.120.20.0     0.0.0.0         10.120.20.1     00:5B:21:46:46:60 1500
 bond0  10.120.20.168   10.120.20.0     0.0.0.0         10.120.20.1     00:5B:21:46:46:60 1500
 bond1  192.168.20.102  192.168.20.0    0.0.0.0         10.120.20.1     00:5B:21:49:35:BC 1500
 bond1  169.254.7.188   169.254.0.0     0.0.0.0         10.120.20.1     00:5B:21:49:35:BC 1500
 bond2  10.120.20.126   10.120.20.0     0.0.0.0         10.120.20.1     00:5B:21:46:46:61 1500


Interface information for node "racprd02"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 bond0  10.120.20.92    10.120.20.0     0.0.0.0         10.120.20.1     00:18:22:0B:7D:9E 1500
 bond0  10.120.20.112   10.120.20.0     0.0.0.0         10.120.20.1     00:18:22:0B:7D:9E 1500
 bond1  192.168.20.92   192.168.20.0    0.0.0.0         10.120.20.1     00:18:22:0D:8F:54 1500
 bond1  169.254.151.45  169.254.0.0     0.0.0.0         10.120.20.1     00:18:22:0D:8F:54 1500
 bond2  10.120.20.102   10.120.20.0     0.0.0.0         10.120.20.1     00:18:22:0B:7D:9F 1500


Check: Node connectivity for interface "bond0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  racprd01[10.120.20.91]         racprd01[10.120.20.111]        yes
  racprd01[10.120.20.91]         racprd10[10.120.20.99]         yes
  racprd01[10.120.20.91]         racprd10[10.120.20.122]        yes
  racprd01[10.120.20.91]         racprd09[10.120.20.90]         yes
  racprd01[10.120.20.91]         racprd09[10.120.20.120]        yes
  racprd01[10.120.20.91]         racprd08[10.120.20.94]         yes
  racprd01[10.120.20.91]         racprd08[10.120.20.119]        yes
  racprd01[10.120.20.91]         racprd08[10.120.20.166]        yes
  racprd01[10.120.20.91]         racprd05[10.120.20.146]        yes
  racprd01[10.120.20.91]         racprd05[10.120.20.148]        yes
  racprd01[10.120.20.91]         racprd04[10.120.20.125]        yes
  racprd01[10.120.20.91]         racprd04[10.120.20.127]        yes
  racprd01[10.120.20.91]         racprd04[10.120.20.168]        yes
  racprd01[10.120.20.91]         racprd02[10.120.20.92]         yes
  racprd01[10.120.20.91]         racprd02[10.120.20.112]        yes
  racprd01[10.120.20.111]        racprd10[10.120.20.99]         yes
  racprd01[10.120.20.111]        racprd10[10.120.20.122]        yes
  racprd01[10.120.20.111]        racprd09[10.120.20.90]         yes
  racprd01[10.120.20.111]        racprd09[10.120.20.120]        yes
  racprd01[10.120.20.111]        racprd08[10.120.20.94]         yes
  racprd01[10.120.20.111]        racprd08[10.120.20.119]        yes
  racprd01[10.120.20.111]        racprd08[10.120.20.166]        yes
  racprd01[10.120.20.111]        racprd05[10.120.20.146]        yes
  racprd01[10.120.20.111]        racprd05[10.120.20.148]        yes
  racprd01[10.120.20.111]        racprd04[10.120.20.125]        yes
  racprd01[10.120.20.111]        racprd04[10.120.20.127]        yes
  racprd01[10.120.20.111]        racprd04[10.120.20.168]        yes
  racprd01[10.120.20.111]        racprd02[10.120.20.92]         yes
  racprd01[10.120.20.111]        racprd02[10.120.20.112]        yes
  racprd10[10.120.20.99]         racprd10[10.120.20.122]        yes
  racprd10[10.120.20.99]         racprd09[10.120.20.90]         yes
  racprd10[10.120.20.99]         racprd09[10.120.20.120]        yes
  racprd10[10.120.20.99]         racprd08[10.120.20.94]         yes
  racprd10[10.120.20.99]         racprd08[10.120.20.119]        yes
  racprd10[10.120.20.99]         racprd08[10.120.20.166]        yes
  racprd10[10.120.20.99]         racprd05[10.120.20.146]        yes
  racprd10[10.120.20.99]         racprd05[10.120.20.148]        yes
  racprd10[10.120.20.99]         racprd04[10.120.20.125]        yes
  racprd10[10.120.20.99]         racprd04[10.120.20.127]        yes
  racprd10[10.120.20.99]         racprd04[10.120.20.168]        yes
  racprd10[10.120.20.99]         racprd02[10.120.20.92]         yes
  racprd10[10.120.20.99]         racprd02[10.120.20.112]        yes
  racprd10[10.120.20.122]        racprd09[10.120.20.90]         yes
  racprd10[10.120.20.122]        racprd09[10.120.20.120]        yes
  racprd10[10.120.20.122]        racprd08[10.120.20.94]         yes
  racprd10[10.120.20.122]        racprd08[10.120.20.119]        yes
  racprd10[10.120.20.122]        racprd08[10.120.20.166]        yes
  racprd10[10.120.20.122]        racprd05[10.120.20.146]        yes
  racprd10[10.120.20.122]        racprd05[10.120.20.148]        yes
  racprd10[10.120.20.122]        racprd04[10.120.20.125]        yes
  racprd10[10.120.20.122]        racprd04[10.120.20.127]        yes
  racprd10[10.120.20.122]        racprd04[10.120.20.168]        yes
  racprd10[10.120.20.122]        racprd02[10.120.20.92]         yes
  racprd10[10.120.20.122]        racprd02[10.120.20.112]        yes
  racprd09[10.120.20.90]         racprd09[10.120.20.120]        yes
  racprd09[10.120.20.90]         racprd08[10.120.20.94]         yes
  racprd09[10.120.20.90]         racprd08[10.120.20.119]        yes
  racprd09[10.120.20.90]         racprd08[10.120.20.166]        yes
  racprd09[10.120.20.90]         racprd05[10.120.20.146]        yes
  racprd09[10.120.20.90]         racprd05[10.120.20.148]        yes
  racprd09[10.120.20.90]         racprd04[10.120.20.125]        yes
  racprd09[10.120.20.90]         racprd04[10.120.20.127]        yes
  racprd09[10.120.20.90]         racprd04[10.120.20.168]        yes
  racprd09[10.120.20.90]         racprd02[10.120.20.92]         yes
  racprd09[10.120.20.90]         racprd02[10.120.20.112]        yes
  racprd09[10.120.20.120]        racprd08[10.120.20.94]         yes
  racprd09[10.120.20.120]        racprd08[10.120.20.119]        yes
  racprd09[10.120.20.120]        racprd08[10.120.20.166]        yes
  racprd09[10.120.20.120]        racprd05[10.120.20.146]        yes
  racprd09[10.120.20.120]        racprd05[10.120.20.148]        yes
  racprd09[10.120.20.120]        racprd04[10.120.20.125]        yes
  racprd09[10.120.20.120]        racprd04[10.120.20.127]        yes
  racprd09[10.120.20.120]        racprd04[10.120.20.168]        yes
  racprd09[10.120.20.120]        racprd02[10.120.20.92]         yes
  racprd09[10.120.20.120]        racprd02[10.120.20.112]        yes
  racprd08[10.120.20.94]         racprd08[10.120.20.119]        yes
  racprd08[10.120.20.94]         racprd08[10.120.20.166]        yes
  racprd08[10.120.20.94]         racprd05[10.120.20.146]        yes
  racprd08[10.120.20.94]         racprd05[10.120.20.148]        yes
  racprd08[10.120.20.94]         racprd04[10.120.20.125]        yes
  racprd08[10.120.20.94]         racprd04[10.120.20.127]        yes
  racprd08[10.120.20.94]         racprd04[10.120.20.168]        yes
  racprd08[10.120.20.94]         racprd02[10.120.20.92]         yes
  racprd08[10.120.20.94]         racprd02[10.120.20.112]        yes
  racprd08[10.120.20.119]        racprd08[10.120.20.166]        yes
  racprd08[10.120.20.119]        racprd05[10.120.20.146]        yes
  racprd08[10.120.20.119]        racprd05[10.120.20.148]        yes
  racprd08[10.120.20.119]        racprd04[10.120.20.125]        yes
  racprd08[10.120.20.119]        racprd04[10.120.20.127]        yes
  racprd08[10.120.20.119]        racprd04[10.120.20.168]        yes
  racprd08[10.120.20.119]        racprd02[10.120.20.92]         yes
  racprd08[10.120.20.119]        racprd02[10.120.20.112]        yes
  racprd08[10.120.20.166]        racprd05[10.120.20.146]        yes
  racprd08[10.120.20.166]        racprd05[10.120.20.148]        yes
  racprd08[10.120.20.166]        racprd04[10.120.20.125]        yes
  racprd08[10.120.20.166]        racprd04[10.120.20.127]        yes
  racprd08[10.120.20.166]        racprd04[10.120.20.168]        yes
  racprd08[10.120.20.166]        racprd02[10.120.20.92]         yes
  racprd08[10.120.20.166]        racprd02[10.120.20.112]        yes
  racprd05[10.120.20.146]        racprd05[10.120.20.148]        yes
  racprd05[10.120.20.146]        racprd04[10.120.20.125]        yes
  racprd05[10.120.20.146]        racprd04[10.120.20.127]        yes
  racprd05[10.120.20.146]        racprd04[10.120.20.168]        yes
  racprd05[10.120.20.146]        racprd02[10.120.20.92]         yes
  racprd05[10.120.20.146]        racprd02[10.120.20.112]        yes
  racprd05[10.120.20.148]        racprd04[10.120.20.125]        yes
  racprd05[10.120.20.148]        racprd04[10.120.20.127]        yes
  racprd05[10.120.20.148]        racprd04[10.120.20.168]        yes
  racprd05[10.120.20.148]        racprd02[10.120.20.92]         yes
  racprd05[10.120.20.148]        racprd02[10.120.20.112]        yes
  racprd04[10.120.20.125]        racprd04[10.120.20.127]        yes
  racprd04[10.120.20.125]        racprd04[10.120.20.168]        yes
  racprd04[10.120.20.125]        racprd02[10.120.20.92]         yes
  racprd04[10.120.20.125]        racprd02[10.120.20.112]        yes
  racprd04[10.120.20.127]        racprd04[10.120.20.168]        yes
  racprd04[10.120.20.127]        racprd02[10.120.20.92]         yes
  racprd04[10.120.20.127]        racprd02[10.120.20.112]        yes
  racprd04[10.120.20.168]        racprd02[10.120.20.92]         yes
  racprd04[10.120.20.168]        racprd02[10.120.20.112]        yes
  racprd02[10.120.20.92]         racprd02[10.120.20.112]        yes
Result: Node connectivity passed for interface "bond0"

Result: Node connectivity check passed


Checking cluster integrity...

  Node Name
  ------------------------------------
  racprd01
  racprd02
  racprd03
  racprd04
  racprd05
  racprd08
  racprd09
  racprd10

Cluster integrity check passed


Checking CRS integrity...
The Oracle Clusterware is healthy on node "racprd01"
The Oracle Clusterware is healthy on node "racprd10"
The Oracle Clusterware is healthy on node "racprd09"
The Oracle Clusterware is healthy on node "racprd08"
The Oracle Clusterware is healthy on node "racprd05"
The Oracle Clusterware is healthy on node "racprd04"
The Oracle Clusterware is healthy on node "racprd02"

CRS integrity check passed

Checking shared resources...

Checking CRS home location...
The location "/export/11.2.0.2" is not shared but is present/creatable on all nodes
Result: Shared resources check for node addition passed


Checking node connectivity...

Checking hosts config file...
  Node Name     Status                    Comment
  ------------  ------------------------  ------------------------
  racprd01     passed
  racprd09     passed
  racprd10     passed
  racprd08     passed
  racprd05     passed
  racprd04     passed
  racprd02     passed

Verification of the hosts config file successful


Interface information for node "racprd01"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 bond0  10.120.20.91    10.120.20.0     0.0.0.0         10.120.20.1     00:18:22:0B:7E:5C 1500
 bond0  10.120.20.111   10.120.20.0     0.0.0.0         10.120.20.1     00:18:22:0B:7E:5C 1500
 bond1  192.168.20.91   192.168.20.0    0.0.0.0         10.120.20.1     00:18:22:0E:4B:70 1500
 bond1  169.254.162.181 169.254.0.0     0.0.0.0         10.120.20.1     00:18:22:0E:4B:70 1500
 bond2  10.120.20.101   10.120.20.0     0.0.0.0         10.120.20.1     00:18:22:0B:7E:5D 1500


Interface information for node "racprd10"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 bond0  10.120.20.99    10.120.20.0     0.0.0.0         10.120.20.1     00:5B:21:42:6D:F0 1500
 bond0  10.120.20.122   10.120.20.0     0.0.0.0         10.120.20.1     00:5B:21:42:6D:F0 1500
 bond1  192.168.20.103  192.168.20.0    0.0.0.0         10.120.20.1     00:5B:21:42:6C:84 1500
 bond1  169.254.100.100 169.254.0.0     0.0.0.0         10.120.20.1     00:5B:21:42:6C:84 1500
 bond2  10.120.20.109   10.120.20.0     0.0.0.0         10.120.20.1     00:5B:21:42:6C:81 1500


Interface information for node "racprd09"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 bond0  10.120.20.90    10.120.20.0     0.0.0.0         10.120.20.1     00:18:22:85:29:24 1500
 bond0  10.120.20.120   10.120.20.0     0.0.0.0         10.120.20.1     00:18:22:85:29:24 1500
 bond1  192.168.20.100  192.168.20.0    0.0.0.0         10.120.20.1     00:18:22:85:29:26 1500
 bond1  169.254.58.183  169.254.0.0     0.0.0.0         10.120.20.1     00:18:22:85:29:26 1500
 bond2  10.120.20.114   10.120.20.0     0.0.0.0         10.120.20.1     00:5B:21:22:2A:E0 1500


Interface information for node "racprd08"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 bond0  10.120.20.94    10.120.20.0     0.0.0.0         10.120.20.1     00:18:22:76:08:29 1500
 bond0  10.120.20.119   10.120.20.0     0.0.0.0         10.120.20.1     00:18:22:76:08:29 1500
 bond0  10.120.20.166   10.120.20.0     0.0.0.0         10.120.20.1     00:18:22:76:08:29 1500
 bond1  192.168.20.99   192.168.20.0    0.0.0.0         10.120.20.1     00:18:22:76:08:2B 1500
 bond1  169.254.213.9   169.254.0.0     0.0.0.0         10.120.20.1     00:18:22:76:08:2B 1500
 bond2  10.120.20.104   10.120.20.0     0.0.0.0         10.120.20.1     00:18:22:76:08:28 1500


Interface information for node "racprd05"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 bond0  10.120.20.146   10.120.20.0     0.0.0.0         10.120.20.1     00:5B:21:49:36:D0 1500
 bond0  10.120.20.148   10.120.20.0     0.0.0.0         10.120.20.1     00:5B:21:49:36:D0 1500
 bond1  192.168.20.101  192.168.20.0    0.0.0.0         10.120.20.1     00:5B:21:49:36:D4 1500
 bond1  169.254.83.149  169.254.0.0     0.0.0.0         10.120.20.1     00:5B:21:49:36:D4 1500
 bond2  10.120.20.147   10.120.20.0     0.0.0.0         10.120.20.1     00:5B:21:49:36:D1 1500


Interface information for node "racprd04"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 bond0  10.120.20.125   10.120.20.0     0.0.0.0         10.120.20.1     00:5B:21:46:46:60 1500
 bond0  10.120.20.127   10.120.20.0     0.0.0.0         10.120.20.1     00:5B:21:46:46:60 1500
 bond0  10.120.20.168   10.120.20.0     0.0.0.0         10.120.20.1     00:5B:21:46:46:60 1500
 bond1  192.168.20.102  192.168.20.0    0.0.0.0         10.120.20.1     00:5B:21:49:35:BC 1500
 bond1  169.254.7.188   169.254.0.0     0.0.0.0         10.120.20.1     00:5B:21:49:35:BC 1500
 bond2  10.120.20.126   10.120.20.0     0.0.0.0         10.120.20.1     00:5B:21:46:46:61 1500


Interface information for node "racprd02"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 bond0  10.120.20.92    10.120.20.0     0.0.0.0         10.120.20.1     00:18:22:0B:7D:9E 1500
 bond0  10.120.20.112   10.120.20.0     0.0.0.0         10.120.20.1     00:18:22:0B:7D:9E 1500
 bond1  192.168.20.92   192.168.20.0    0.0.0.0         10.120.20.1     00:18:22:0D:8F:54 1500
 bond1  169.254.151.45  169.254.0.0     0.0.0.0         10.120.20.1     00:18:22:0D:8F:54 1500
 bond2  10.120.20.102   10.120.20.0     0.0.0.0         10.120.20.1     00:18:22:0B:7D:9F 1500


Check: Node connectivity for interface "bond1"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  racprd01[192.168.20.91]        racprd10[192.168.20.103]       yes
  racprd01[192.168.20.91]        racprd09[192.168.20.100]       yes
  racprd01[192.168.20.91]        racprd08[192.168.20.99]        yes
  racprd01[192.168.20.91]        racprd05[192.168.20.101]       yes
  racprd01[192.168.20.91]        racprd04[192.168.20.102]       yes
  racprd01[192.168.20.91]        racprd02[192.168.20.92]        yes
  racprd10[192.168.20.103]       racprd09[192.168.20.100]       yes
  racprd10[192.168.20.103]       racprd08[192.168.20.99]        yes
  racprd10[192.168.20.103]       racprd05[192.168.20.101]       yes
  racprd10[192.168.20.103]       racprd04[192.168.20.102]       yes
  racprd10[192.168.20.103]       racprd02[192.168.20.92]        yes
  racprd09[192.168.20.100]       racprd08[192.168.20.99]        yes
  racprd09[192.168.20.100]       racprd05[192.168.20.101]       yes
  racprd09[192.168.20.100]       racprd04[192.168.20.102]       yes
  racprd09[192.168.20.100]       racprd02[192.168.20.92]        yes
  racprd08[192.168.20.99]        racprd05[192.168.20.101]       yes
  racprd08[192.168.20.99]        racprd04[192.168.20.102]       yes
  racprd08[192.168.20.99]        racprd02[192.168.20.92]        yes
  racprd05[192.168.20.101]       racprd04[192.168.20.102]       yes
  racprd05[192.168.20.101]       racprd02[192.168.20.92]        yes
  racprd04[192.168.20.102]       racprd02[192.168.20.92]        yes
Result: Node connectivity passed for interface "bond1"

Result: Node connectivity check passed


Checking node application existence...

Checking existence of VIP node application (required)
  Node Name     Required                  Running?                  Comment
  ------------  ------------------------  ------------------------  ----------
  racprd01     yes                       yes                       passed
  racprd10     yes                       yes                       passed
  racprd09     yes                       yes                       passed
  racprd08     yes                       yes                       passed
  racprd05     yes                       yes                       passed
  racprd04     yes                       yes                       passed
  racprd02     yes                       yes                       passed
VIP node application check passed

Checking existence of NETWORK node application (required)
  Node Name     Required                  Running?                  Comment
  ------------  ------------------------  ------------------------  ----------
  racprd01     yes                       yes                       passed
  racprd10     yes                       yes                       passed
  racprd09     yes                       yes                       passed
  racprd08     yes                       yes                       passed
  racprd05     yes                       yes                       passed
  racprd04     yes                       yes                       passed
  racprd02     yes                       yes                       passed
NETWORK node application check passed

Checking existence of GSD node application (optional)
  Node Name     Required                  Running?                  Comment
  ------------  ------------------------  ------------------------  ----------
  racprd01     no                        no                        exists
  racprd10     no                        no                        exists
  racprd09     no                        no                        exists
  racprd08     no                        no                        exists
  racprd05     no                        no                        exists
  racprd04     no                        no                        exists
  racprd02     no                        no                        exists
GSD node application is offline on nodes "racprd01,racprd10,racprd09,racprd08,racprd05,racprd04,racprd02"

Checking existence of ONS node application (optional)
  Node Name     Required                  Running?                  Comment
  ------------  ------------------------  ------------------------  ----------
  racprd01     no                        yes                       passed
  racprd10     no                        yes                       passed
  racprd09     no                        yes                       passed
  racprd08     no                        yes                       passed
  racprd05     no                        yes                       passed
  racprd04     no                        yes                       passed
  racprd02     no                        yes                       passed
ONS node application check passed


Checking Single Client Access Name (SCAN)...
  SCAN Name         Node          Running?      ListenerName  Port          Running?
  ----------------  ------------  ------------  ------------  ------------  ------------
  MYRAC.imycompany.com  racprd03     true          LISTENER_SCAN1  1521          true
  MYRAC.imycompany.com  racprd08     true          LISTENER_SCAN2  1521          true
  MYRAC.imycompany.com  racprd04     true          LISTENER_SCAN3  1521          true

Checking TCP connectivity to SCAN Listeners...
  Node          ListenerName              TCP connectivity?
  ------------  ------------------------  ------------------------
  localnode     LISTENER_SCAN1            yes
  localnode     LISTENER_SCAN2            yes
  localnode     LISTENER_SCAN3            yes
TCP connectivity to SCAN Listeners exists on all cluster nodes

Checking name resolution setup for "MYRAC.imycompany.com"...
  SCAN Name     IP Address                Status                    Comment
  ------------  ------------------------  ------------------------  ----------
  MYRAC.imycompany.com  10.120.20.166             passed
  MYRAC.imycompany.com  10.120.20.167             passed                   
  MYRAC.imycompany.com  10.120.20.168             passed                   

Verification of SCAN VIP and Listener setup passed

Checking to make sure user "oracle" is not in "root" group
  Node Name     Status                    Comment
  ------------  ------------------------  ------------------------
  racprd10     does not exist            passed
Result: User "oracle" is not part of "root" group. Check passed

Checking if Clusterware is installed on all nodes...
Check of Clusterware install passed

Checking if CTSS Resource is running on all nodes...
Check: CTSS Resource running on all nodes
  Node Name                             Status
  ------------------------------------  ------------------------
  racprd10                             passed
Result: CTSS resource check passed


Querying CTSS for time offset on all nodes...
Result: Query of CTSS for time offset passed

Check CTSS state started...
Check: CTSS state
  Node Name                             State
  ------------------------------------  ------------------------
  racprd10                             Observer
CTSS is in Observer state. Switching over to clock synchronization checks using NTP


Starting Clock synchronization checks using Network Time Protocol(NTP)...

NTP Configuration file check started...
The NTP configuration file "/etc/ntp.conf" is available on all nodes
NTP Configuration file check passed

Checking daemon liveness...

Check: Liveness for "ntpd"
  Node Name                             Running?
  ------------------------------------  ------------------------
  racprd10                             yes
Result: Liveness check passed for "ntpd"
Check for NTP daemon or service alive passed on all nodes

Checking NTP daemon command line for slewing option "-x"
Check: NTP daemon command line
  Node Name                             Slewing Option Set?
  ------------------------------------  ------------------------
  racprd10                             no
Result:
NTP daemon slewing option check failed on some nodes
PRVF-5436 : The NTP daemon running on one or more nodes lacks the slewing option "-x"
Result: Clock synchronization check using Network Time Protocol(NTP) failed


PRVF-9652 : Cluster Time Synchronization Services check failed

Post-check for node addition was unsuccessful.
Checks did not pass for the following node(s):
        racprd10
racprd05 | CRS | /export/11.2.0.2/bin
> 

The above check failed because the NTP daemon appears to lack the slewing option. I have discussed this already in this post: CVU reads the NTP configuration from the file "/etc/sysconfig/ntpd", whereas my system administrators have it configured differently. The same setup works on the existing cluster nodes, so I am ignoring this error. I verified that the ntp daemon is in fact running with the correct options.
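For reference, the test behind PRVF-5436 boils down to checking whether the running ntpd command line carries the slewing flag "-x". Here is a minimal sketch of that check; the `cmdline` value is a sample for illustration — on a live host you would capture it with something like `ps -eo args | grep '[n]tpd'`.

```shell
# Sample ntpd command line (illustrative; capture the real one from ps)
cmdline="/usr/sbin/ntpd -x -u ntp:ntp -p /var/run/ntpd.pid"

# CVU's PRVF-5436 check is essentially: is "-x" present on the command line?
case " $cmdline " in
  *" -x "*) echo "ntpd slewing option is set" ;;
  *)        echo "ntpd slewing option is MISSING" ;;
esac
# prints: ntpd slewing option is set
```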


So, at this point, the new node has been added to the cluster successfully, and the Grid Infrastructure home and Oracle home have been created on it.

racprd10 |  | /export/oracle
> . oraenv
ORACLE_SID = [oracle] ? CRS
The Oracle base remains unchanged with value /export/oracle
racprd10 | CRS | /export/oracle
> export PATH=$ORACLE_HOME/bin:$PATH
racprd10 | CRS | /export/oracle
> olsnodes -t
racprd01       Pinned
racprd02       Pinned
racprd03       Pinned
racprd04       Pinned
racprd05       Pinned
racprd08       Pinned
racprd09       Pinned
racprd10       Pinned



Checking the Oracle Inventory to confirm that the new node's homes are reflected correctly.

> cat /etc/oraInst.loc
inventory_loc=/export/oracle/oraInventory
inst_group=dba
racprd10 | CRS | /export/oracle

> cd /export/oracle/oraInventory/ContentsXML

> cat inventory.xml
<?xml version="1.0" standalone="yes" ?>
<!-- Copyright (c) 2009 Oracle Corporation. All rights Reserved -->
<!-- Do not modify the contents of this file by hand. -->
<INVENTORY>
<VERSION_INFO>
   <SAVED_WITH>11.2.0.2.0</SAVED_WITH>
   <MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
</VERSION_INFO>
<HOME_LIST>
<HOME NAME="Ora11g_gridinfrahome2" LOC="/export/11.2.0.2" TYPE="O" IDX="1" CRS="true">
   <NODE_LIST>
      <NODE NAME="racprd01"/>
      <NODE NAME="racprd02"/>
      <NODE NAME="racprd03"/>
      <NODE NAME="racprd04"/>
      <NODE NAME="racprd05"/>
      <NODE NAME="racprd08"/>
      <NODE NAME="racprd09"/>
      <NODE NAME="racprd10"/>
   </NODE_LIST>
</HOME>
<HOME NAME="OraDb11g_home2" LOC="/export/oracle/product/11.2.0.2" TYPE="O" IDX="2">
   <NODE_LIST>
      <NODE NAME="racprd01"/>
      <NODE NAME="racprd02"/>
      <NODE NAME="racprd03"/>
      <NODE NAME="racprd04"/>
      <NODE NAME="racprd05"/>
      <NODE NAME="racprd08"/>
      <NODE NAME="racprd09"/>
      <NODE NAME="racprd10"/>
   </NODE_LIST>
</HOME>
</HOME_LIST>
</INVENTORY>
racprd10 | CRS | /export/oracle/oraInventory/ContentsXML
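A quick way to eyeball this check is to count how many registered homes list the new node. This is a hedged sketch: the here-doc below is a trimmed sample of inventory.xml, so it runs anywhere — on the real host, point `INVENTORY` at /export/oracle/oraInventory/ContentsXML/inventory.xml instead.

```shell
# Trimmed sample standing in for the real inventory.xml
INVENTORY=$(mktemp)
cat > "$INVENTORY" <<'EOF'
<HOME NAME="Ora11g_gridinfrahome2" LOC="/export/11.2.0.2" TYPE="O" IDX="1" CRS="true">
   <NODE NAME="racprd05"/>
   <NODE NAME="racprd10"/>
</HOME>
<HOME NAME="OraDb11g_home2" LOC="/export/oracle/product/11.2.0.2" TYPE="O" IDX="2">
   <NODE NAME="racprd05"/>
   <NODE NAME="racprd10"/>
</HOME>
EOF

NEW_NODE=racprd10
homes=$(grep -c '<HOME NAME=' "$INVENTORY")            # number of registered homes
hits=$(grep -c "NODE NAME=\"$NEW_NODE\"" "$INVENTORY") # homes listing the new node

if [ "$hits" -eq "$homes" ]; then
  echo "$NEW_NODE is registered in all $homes homes"
else
  echo "$NEW_NODE is missing from some homes ($hits of $homes)"
fi
rm -f "$INVENTORY"
# prints: racprd10 is registered in all 2 homes
```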


There is one more step pending: adding the new server to Enterprise Manager Grid Control monitoring.

The new server has to become one of the monitoring targets in Enterprise Manager Grid Control, so the Enterprise Manager Grid Control Agent needs to be installed on it.

I'm going to clone the Enterprise Manager Agent from one of the existing servers in the cluster to the new server.

Reference Documentation


In my case, the agent software is already present from the OS clone, so I only need to do the configuration part.
If you don't have the agent binaries already, tar or zip the agent home on one of the existing servers, transfer it to the new server, and unpack it into the destination home.
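That copy step can be sketched as follows. This uses stand-in temporary directories so it runs anywhere; on the real servers, the source would be /export/oracle/product/10.2.0/agent10g on an existing node, and the archive would be transferred to the new node (e.g. via scp or ftp) before unpacking.

```shell
SRC=$(mktemp -d)   # stands in for /export/oracle/product/10.2.0 on the source node
DST=$(mktemp -d)   # stands in for the same path on the new node
mkdir -p "$SRC/agent10g/bin"
echo '#!/bin/sh' > "$SRC/agent10g/bin/emctl"

# -C keeps the archive rooted at agent10g, so it unpacks with the same layout
tar -czf "$DST/agent10g.tar.gz" -C "$SRC" agent10g
tar -xzf "$DST/agent10g.tar.gz" -C "$DST"

ls "$DST/agent10g/bin"   # the agent home layout is preserved on the target
rm -rf "$SRC" "$DST"
```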
Then do the following:

racprd10 | ORA1020 | /export/oracle/product/10.2.0/agent10g/oui/bin
> ./runInstaller -clone -forceClone ORACLE_HOME=/export/oracle/product/10.2.0/agent10g ORACLE_HOME_NAME=agent10g -noconfig -silent
Starting Oracle Universal Installer...

No pre-requisite checks found in oraparam.ini, no system pre-requisite checks will be executed.
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2012-07-25_03-54-08PM. Please wait ...racprd10 | ORA1020 | /export/oracle/product/10.2.0/agent10g/oui/bin
> Oracle Universal Installer, Version 10.2.0.5.0 Production
Copyright (C) 1999, 2009, Oracle. All rights reserved.

You can find a log of this install session at:
 /export/oracle/oraInventory/logs/cloneActions2012-07-25_03-54-08PM.log
.EM Agent version 10.2.0.3 or higher is not installed in this location specified. Please choose to install into a different location where EM Agent is installed.




A workaround for the above issue is to use the -ignorePrereq option.

racprd10 | ORA1020 | /export/oracle/product/10.2.0/agent10g/oui/bin
> ./runInstaller -clone -forceClone ORACLE_HOME=/export/oracle/product/10.2.0/agent10g ORACLE_HOME_NAME=agent10g -noconfig -silent -ignorePrereq
Starting Oracle Universal Installer...

No pre-requisite checks found in oraparam.ini, no system pre-requisite checks will be executed.
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2012-07-25_03-57-34PM. Please wait ...racprd10 | ORA1020 | /export/oracle/product/10.2.0/agent10g/oui/bin
> Oracle Universal Installer, Version 10.2.0.5.0 Production
Copyright (C) 1999, 2009, Oracle. All rights reserved.

You can find a log of this install session at:
 /export/oracle/oraInventory/logs/cloneActions2012-07-25_03-57-34PM.log
.EM Agent version 10.2.0.3 or higher is not installed in this location specified. Please choose to install into a different location where EM Agent is installed.
Pre-requisite failure is ignored because IGNORE_PREREQ flag is set. Installation will continue
................................................................................................... 100% Done.



Installation in progress (Wednesday, July 25, 2012 3:57:40 PM GMT)
...................................................................                                                             67% Done.
Install successful

Linking in progress (Wednesday, July 25, 2012 3:57:42 PM GMT)
.                                                                68% Done.
Link successful

Setup in progress (Wednesday, July 25, 2012 3:57:49 PM GMT)
....................                                            100% Done.
Setup successful

End of install phases.(Wednesday, July 25, 2012 3:57:51 PM GMT)
WARNING:
The following configuration scripts need to be executed as the "root" user.
#!/bin/sh
#Root script to run
/export/oracle/product/10.2.0/agent10g/root.sh
To execute the configuration scripts:
    1. Open a terminal window
    2. Log in as "root"
    3. Run the scripts

Starting to execute configuration assistants
The following configuration assistants have not been run. This can happen for following reasons - either root.sh is to be run before config or Oracle Universal Installer was invoked with the -noConfig option.
--------------------------------------
The "/export/oracle/product/10.2.0/agent10g/cfgtoollogs/configToolFailedCommands" script contains all commands that failed, were skipped or were cancelled. This file may be used to run these configuration assistants outside of OUI. Note that you may have to update this script with passwords (if any) before executing the same.
The "/export/oracle/product/10.2.0/agent10g/cfgtoollogs/configToolAllCommands" script contains all commands to be executed by the configuration assistants. This file may be used to run the configuration assistants outside of OUI. Note that you may have to update this script with passwords (if any) before executing the same.

--------------------------------------
The cloning of agent10g was successful.
Please check '/export/oracle/oraInventory/logs/cloneActions2012-07-25_03-57-34PM.log' for more details.

racprd10 | ORA1020 | /export/oracle/product/10.2.0/agent10g/oui/bin
> 

While the installer is paused at the WARNING above, run the root.sh script as the root user in a separate terminal, then continue the installer session.


Next, execute the agentca script with the -f option to run the agent configuration assistant:

racprd10 | ORA1020 | /export/oracle/product/10.2.0/agent10g/bin
> ./agentca -f

Stopping the agent using /export/oracle/product/10.2.0/agent10g/bin/emctl  stop agent
Oracle Enterprise Manager 10g Release 5 Grid Control 10.2.0.5.0. 
Copyright (c) 1996, 2009 Oracle Corporation.  All rights reserved.
Running agentca using /export/oracle/product/10.2.0/agent10g/oui/bin/runConfig.sh ORACLE_HOME=/export/oracle/product/10.2.0/agent10g ACTION=Configure MODE=Perform RESPONSE_FILE=/export/oracle/product/10.2.0/agent10g/response_file RERUN=TRUE INV_PTR_LOC=/export/oracle/product/10.2.0/agent10g/oraInst.loc COMPONENT_XML={oracle.sysman.top.agent.10_2_0_1_0.xml}
Perform - mode is starting for action: Configure


Perform - mode finished for action: Configure

You can see the log file: /export/oracle/product/10.2.0/agent10g/cfgtoollogs/oui/configActions2012-07-25_04-00-52-PM.log
racprd10 | ORA1020 | /export/oracle/product/10.2.0/agent10g/bin
> 

Note: The cloned Management Agent is not in secure mode by default. You must secure it manually by running <Oracle_Home>/bin/emctl secure agent.
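The securing step can be sketched as below. The agent home path is the one used throughout this install; `emctl secure agent` prompts for the agent registration password configured on the OMS, and the agent is restarted afterwards so the change takes effect. This is a sketch for this environment, not a universal procedure.

```shell
#!/bin/sh
# Agent home as used throughout this install (adjust for your environment).
AGENT_HOME=/export/oracle/product/10.2.0/agent10g
EMCTL="$AGENT_HOME/bin/emctl"

# Skip gracefully on hosts where this agent home does not exist.
[ -x "$EMCTL" ] || { echo "agent home not found, nothing to secure"; exit 0; }

"$EMCTL" secure agent    # prompts for the agent registration password
"$EMCTL" stop agent      # restart so the agent uploads over HTTPS
"$EMCTL" start agent
```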

On UNIX platforms, run the root.sh script from the Oracle home directory of the Management Agent. root.sh must always be run as the root user.

racprd10:/export/oracle/product/10.2.0/agent10g # sh /export/oracle/product/10.2.0/agent10g/root.sh
Running Oracle10 root.sh script...

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /export/oracle/product/10.2.0/agent10g

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
Finished product-specific root actions.
racprd10:/export/oracle/product/10.2.0/agent10g #

Run emctl status agent to verify that the agent is up and running.
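A minimal verification sketch, using the same assumed agent home (the exact status text varies slightly between agent versions):

```shell
#!/bin/sh
AGENT_HOME=/export/oracle/product/10.2.0/agent10g
EMCTL="$AGENT_HOME/bin/emctl"
[ -x "$EMCTL" ] || { echo "agent home not found"; exit 0; }

# Check the agent process and its upload state; look for
# "Agent is Running and Ready" and a recent "Last successful upload".
"$EMCTL" status agent

# Optionally force an upload so the new host and its targets
# appear in Grid Control promptly.
"$EMCTL" upload agent
```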

The new node has now been added to the cluster configuration. Add or extend database instances and listeners on it as required.
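For an admin-managed 11.2 database, extending an instance onto the new node can be sketched with srvctl. The database name PRODDB and instance name PRODDB8 below are hypothetical placeholders, not names from this environment; DBCA's "Instance Management" option can perform the same steps, including creating the undo tablespace and redo thread for the new instance.

```shell
#!/bin/sh
# Hypothetical names: PRODDB = database, PRODDB8 = new instance on racprd10.
command -v srvctl >/dev/null 2>&1 || { echo "srvctl not on PATH"; exit 0; }

# Register the new instance against the new node in the cluster registry.
srvctl add instance -d PRODDB -i PRODDB8 -n racprd10

# Start it once its undo tablespace and redo thread exist,
# then confirm the database state across all nodes.
srvctl start instance -d PRODDB -i PRODDB8
srvctl status database -d PRODDB
```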