Tuesday, 13 December 2011

Securing Netapp to a subset of snap-* commands


One of the disadvantages of the security model provided by NetApp is that you cannot restrict a user to running a particular command against only a subset of volumes. For example, if you give a user access to the snap delete command, it can be run against any volume on the filer.

Whilst access can be restricted by the use of virtual filers (vfilers), this requires the MultiStore licence, which you may not have.

I needed a way to lock down the filer, so came up with the scripts below.

First we will create a role on the filer to limit the command set and create a user to have this role.  Here we are looking at the snap-* commands:

netapp> useradmin role add "snapshot_manager" -a cli-snap* -c "CLI Snapshots"
netapp> useradmin group add "snapshot_group" -r "snapshot_manager"
netapp> useradmin user add "netappconnector" -g "snapshot_group"
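You can optionally confirm the role, group and user were created as expected. The list subcommands below are the standard Data ONTAP 7-mode useradmin ones; check the syntax against your ONTAP release:

netapp> useradmin role list snapshot_manager
netapp> useradmin group list snapshot_group
netapp> useradmin user list netappconnector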

Now on our linux host, we create a group called netappconnector, followed by a user also called netappconnector.  This user should only be a member of the netappconnector group.
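A minimal sketch of creating the group and user, assuming the standard shadow-utils commands are available on your distribution:

#> groupadd netappconnector
#> useradd -g netappconnector -m -d /home/netappconnector netappconnector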

Now login to the netappconnector account on the linux host and setup ssh to the filer.

Generate the keys:

netappconnector $> ssh-keygen -t rsa -b 1024

Now copy the contents of /home/netappconnector/.ssh/id_rsa.pub to /vol/vol0/etc/sshd/netappconnector/.ssh/authorized_keys on the filer.
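One way to do this, assuming the filer's root volume is exported over NFS and mounted on the linux host at /mnt/netapp_root (that mount point is an assumption, substitute your own), is to append the key as root:

#> cat /home/netappconnector/.ssh/id_rsa.pub >> /mnt/netapp_root/etc/sshd/netappconnector/.ssh/authorized_keys

Otherwise copy the key by whatever method you normally use to edit files under /etc on the filer.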

Next check that ssh is working as expected:

netappconnector $>  ssh <filer> snap list <volume name>

Also check that it is locked down to the snap commands only, by testing a command that is not authorised:

netappconnector $> ssh <filer> version

We can now start the linux configuration.  First change the shell of the netappconnector account to /bin/false.  This locks the account down so that it is impossible to log in to it directly.
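A minimal way to do this, assuming the usermod utility is available:

#> usermod -s /bin/false netappconnector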

We then need to create scripts on the linux host.  Let's start with the script that will perform the connection to the filer and run the commands.  Place the following script into /home/netappconnector/bin/run_command_on_netapp.sh on the linux host:

#!/bin/bash -u
# Name:                     run_command_on_netapp.sh
# Purpose:            This script is an interface between the host and the netapp
#                      filer.  Parameters are passed to it, detailing which filer
#                      to connect to and which command to run.  It then checks that
#                      the command is in the valid list, before running it on the
#                      filer specified.
#
# Syntax:            run_command_on_netapp.sh [filer] [command] (command parameters)
#
# Error Codes:          
#                      1:            Incorrect arguments passed
#                      2:            Command not authorised on filer requested
#                      3:            User is attempting to run this script as a user other
#                                  than the netappconnector user
#
# NOTE: All parameters are mandatory except for the command parameters
#
# Author:            Matthew Harman
# Date:                       29/11/2011
#
# Updates:
# Who What            Why
#
#
# local configuration variables
AUTHORISED_NETAPP_COMMANDS=/usr/bin/authorised_netapp_commands
# Functions
function die { # ($1=errMsg)
            echo "ERROR: $1 - exiting." >&2
            exit 1
}

function usage {
cat << EOF
Usage:
            $0 [filer] [command] (command parameters)

This script is an interface to running commands on the netapp filer.
It can be used to run the passed command (subject to authority) on the filer

Examples:
            $0 ukocna04 "snap list ORACLE_DBF_FCHDT"
            $0 ukocna04 "snap create ORACLE_DBF_FCHDT" 201111291213

Notes:
            All parameters are mandatory except the command parameters element, an
            error will be returned if all required parameters have not been passed
EOF
}

function check_command_is_authorised { # ($1=command to check)
            echo "we are looking for:$1"
            # We need to check if the string passed is contained within the
            # authorised commands file
            COUNTER=`cat $AUTHORISED_NETAPP_COMMANDS|grep "$1"|wc -l`
            if [ "$COUNTER" -eq 0 ]; then
                        # command is not authorised, raise an error
                        exit 2
            fi
            # If we get here command is authorised
}

function remove_parameters_for_check { # ($1=NETAPP_COMMAND)
            # certain commands have parameters that are variables, so we need to
            # remove them to get a match against the authorised commands file
            #
            # snap restore is a prime example, let's see if we have this
            SNAP_RESTORE_COUNT=`echo $1|grep "snap restore"|wc -l`
            if [ $SNAP_RESTORE_COUNT -eq 1 ]; then
                        # We have the snap restore command, let's strip it
                        STRIPPED_COMMAND=`echo $1|awk -F" " '{print $6}'`
                        OUR_NEW_COMMAND="snap restore $STRIPPED_COMMAND"
            else
                        OUR_NEW_COMMAND=$1
            fi
}

function check_correct_user {
            USER=`whoami`
            if [ $USER != "netappconnector" ]; then
                        # we are running as an incorrect user, flag as error
                        exit 3
            fi
}
# End Functions #############################################################

# Read arguments and set parameters
if [ "$#" == 1 ]; then
            # Not enough parameters have been passed
            usage
            exit 1
else
            # Correct number of parameters, lets pass them into variables
            NETAPP_FILER=$1
            NETAPP_COMMAND=$2
            CONCATENATED="$1:$2"
            if [ "$#" == 3 ]; then
                        COMMAND_OPTIONS=$3
            else
                        COMMAND_OPTIONS=""
            fi
fi

check_correct_user

remove_parameters_for_check "$NETAPP_COMMAND"
check_command_is_authorised "$OUR_NEW_COMMAND"

# If we get here command has been authorised, so we can run it
ssh $NETAPP_FILER $NETAPP_COMMAND $COMMAND_OPTIONS


Make sure the ownership and permissions on this file are as follows, for security (for example, using the commands shown after the list):

User: root
Group: root
Permissions: 755
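For example:

#> chown root:root /home/netappconnector/bin/run_command_on_netapp.sh
#> chmod 755 /home/netappconnector/bin/run_command_on_netapp.sh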

Next we will create a file that lists all the authorised commands; this is how we lock access down to a subset of volumes.  Place the following contents into a new file called /usr/bin/authorised_netapp_commands:

# This file contains a list of authorised commands that can be run against
# the netapp(s).
# The format of the lines is:
# filer:username:command
filer:<user not used>:snap list volume1
filer:<user not used>:snap list volume2

Here we are limiting the command list to just snap list on volume1 and volume2 against the filer named "filer".

To make things as secure as possible, make sure this file has the following permissions:

User: root
Group: root
Permissions: 644

Finally we need to configure sudo access to the netappconnector account, limited to running only the script above.  Create a filesystem group called netappaccess and add all the accounts you wish to grant access to this filesystem group.

Edit the /etc/sudoers file, and ensure it contains the following lines:

%netappaccess <hostname>=(netappconnector) NOPASSWD:/home/netappconnector/bin/run_command_on_netapp.sh

where <hostname> is replaced with the linux hostname.

We can now test it.  Log in to an account that is configured for sudo access and run the following command:

account $> sudo -u netappconnector /home/netappconnector/bin/run_command_on_netapp.sh <filer> "snap list volume1"

You can then check for error codes by echoing the value of $?:

account $> echo $?

The values are as follows:

0: successful
1: incorrect arguments passed to the command
2: command is not authorised, add it to /usr/bin/authorised_netapp_commands
3: the script was run as a user other than netappconnector

The /usr/bin/authorised_netapp_commands file should only contain the high-level command and the volume.  Should you wish to add additional parameters when running the command, for example to create a snapshot with a specific name on volume1, you would add filer:<user>:snap create volume1 to /usr/bin/authorised_netapp_commands and then run:

account $> sudo -u netappconnector /home/netappconnector/bin/run_command_on_netapp.sh <filer> "snap create volume1" "snapshot_name"

Friday, 9 December 2011

Redundant Interconnect in Oracle RAC


Oracle RAC 11gr2 and Redundant Interconnects

Oracle 11gr2 RAC allows the use of redundant interconnects. This means it is possible to create multiple links for the interconnect, removing the single point of failure (e.g. a switch failure) that would cause the cluster to fall back to a single node.

Before we configure this feature it is useful to detail what happens when part of the interconnect fails. For these tests we have a single RAC cluster of 2 nodes, operating over a single interconnect, using network interfaces eth1 on both nodes.

If we connect to the RAC database, via the scan address:

sqlplus system/manager1@rac-scan.laptop.com:1521/TESTDB.laptop.com

we can query the database fine.

Now take the interconnect away on one of the nodes, with the command:

root#> ifconfig eth1 down

If we now issue a query to the database:

sqlplus> select * from dba_data_files;

We see that this statement still works; however, further digging reveals we are now operating on one node:

sqlplus> select * from gv$instance;

This will show only one node.

If we look at the grid logs, located in:

/home/grid/app/11.2.0/grid/log/<node name>/alert<node name>.log

we see that on one node there are errors reporting that a node is being shut down, and on the other node, errors saying that the ASM disk group is inaccessible.

Restart the network port, with:

root#> ifconfig eth1 up

Querying gv$instance again still shows only one node. Therefore the evicted instance does not automatically restart after an interconnect failure.

To restart it, as root, restart the ohasd service on the node you took the network down on:

root#> service ohasd stop
root#> service ohasd start

After a few minutes you will find the second instance has been added back to the cluster and a second record appears in the gv$instance table.

As can be seen, it is probably wise to guard against this, so we will introduce a second interconnect.

First let's list our networks:

oracle_node1$> /home/grid/app/11.2.0/grid/bin/oifcfg getif
eth0 192.168.100.0 global public
eth1 192.168.200.0 global cluster_interconnect

Here we can see that the 192.168.200.0 network is being used as a cluster interconnect.

Now create a new network interface on both nodes and give them a separate address range. For the purposes of this document I will use the following (a quick way to bring the interfaces up is shown after the list):

node1: eth3 192.168.230.1
node2: eth3 192.168.230.2
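A quick, non-persistent way to bring the new interfaces up for testing is shown below (the /24 netmask is an assumption; for a permanent configuration use your distribution's network configuration files or YaST):

root_node1#> ifconfig eth3 192.168.230.1 netmask 255.255.255.0 up
root_node2#> ifconfig eth3 192.168.230.2 netmask 255.255.255.0 up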

Now add the additional interconnect:

oracle_node1$> /home/grid/app/11.2.0/grid/bin/oifcfg setif -global eth3/192.168.230.0:cluster_interconnect

Check it has been added:

oracle_node1$> /home/grid/app/11.2.0/grid/bin/oifcfg getif
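The output should now look something like the following, with the new eth3 network listed as a second cluster_interconnect:

eth0 192.168.100.0 global public
eth1 192.168.200.0 global cluster_interconnect
eth3 192.168.230.0 global cluster_interconnect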

Now restart the clusterware on each node:

root_node1#> service ohasd stop
root_node1#> service ohasd start

Wait for it to fully start and for the database to show as open in gv$instance, before doing the second node:

root_node2#> service ohasd stop
root_node2#> service ohasd start

Now we can test by taking down the interface again. This time the interconnect stays up (over the remaining link), and when querying gv$instance we see both instances.

Removing a node from Oracle RAC Cluster


Removing a node from an Oracle 11gr2 cluster.

In this document, we have a 3 node cluster, and we will remove node 2 from the configuration.

First back up the OCR configuration, in case of problems.

root_node1#> export ORACLE_HOME=/home/grid/app/11.2.0/grid
root_node1#> export PATH=$PATH:$ORACLE_HOME/bin
root_node1#> ocrconfig -manualbackup
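To confirm the backup was taken, you can list the manual OCR backups:

root_node1#> ocrconfig -showbackup manual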

Run DBCA on a node not being removed, to remove the instance:

oracle_node1$> export ORACLE_HOME=/home/oracle/app/oracle/product/11.2.0/dbhome_1
oracle_node1$> export PATH=$PATH:$ORACLE_HOME/bin
oracle_node1$> dbca &

select Oracle RAC database
click Next
select Instance Management
click Next
select Delete an Instance
click Next
Make sure TESTDB is selected
enter sys for the username and the sys password
click on Next
select the node you wish to remove, i.e. racnode2
click on Next
click on Finish
click OK to the popup
click OK to proceed

Make sure the redo log thread has been removed:

oracle_node1$> export ORACLE_HOME=/home/oracle/app/oracle/product/11.2.0/dbhome_1
oracle_node1$> export PATH=$PATH:$ORACLE_HOME/bin
oracle_node1$> sqlplus system/manager1@rac-scan.laptop.com:1521/TESTDB.laptop.com
sqlplus> select * from v$log;

Make sure the instance is removed from the cluster:

oracle_node1$> export ORACLE_HOME=/home/grid/app/11.2.0/grid
oracle_node1$> export PATH=$PATH:$ORACLE_HOME/bin
oracle_node1$> srvctl config database -d testdb

You should see that the node has been removed from the instances listed against the Database Instances parameter.

Check where the listener is running; it should only be in the grid home:

oracle_node1$> srvctl config listener -a

It should show that the only listener home is the one under grid.

On the node to be deleted, update the Oracle inventory:

oracle_node2$> export ORACLE_HOME=/home/oracle/app/oracle/product/11.2.0/dbhome_1
oracle_node2$> export PATH=$PATH:$ORACLE_HOME/bin
oracle_node2$> cd $ORACLE_HOME/oui/bin
oracle_node2$> ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={racnode2}" -local -silent

This should return as successful.

We can now de-install the Oracle Home on the node we are deleting:

oracle_node2$> /home/oracle/app/oracle/product/11.2.0/dbhome_1/deinstall/deinstall -local

We now need to update the other nodes to reflect the change in nodes:

oracle_node1$> export ORACLE_HOME=/home/oracle/app/oracle/product/11.2.0/dbhome_1
oracle_node1$> /home/oracle/app/oracle/product/11.2.0/dbhome_1/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={racnode1,racnode3}"

oracle_node3$> export ORACLE_HOME=/home/oracle/app/oracle/product/11.2.0/dbhome_1
oracle_node3$> /home/oracle/app/oracle/product/11.2.0/dbhome_1/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={racnode1,racnode3}"

Make sure the node is unpinned:

oracle_node1$> export ORACLE_HOME=/home/grid/app/11.2.0/grid
oracle_node1$> export PATH=$PATH:$ORACLE_HOME/bin
oracle_node1$> olsnodes -s -t
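olsnodes -s -t shows each node's state and whether it is pinned or unpinned. If racnode2 is reported as pinned, unpin it (as root) before continuing:

root_node1#> crsctl unpin css -n racnode2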

We can now deconfigure the clusterware on the node being deleted:

root_node2#> /home/grid/app/11.2.0/grid/crs/install/rootcrs.pl -deconfig -force

This should return a successful message.

We now need to remove the node using CRS, on a node not being deleted:

root_node1#> export ORACLE_HOME=/home/grid/app/11.2.0/grid
root_node1#> export PATH=$PATH:$ORACLE_HOME/bin
root_node1#> crsctl delete node -n racnode2

Update the node list on the node that is being deleted to contain just the node being deleted:

oracle_node2$> export ORACLE_HOME=/home/grid/app/11.2.0/grid
oracle_node2$> /home/grid/app/11.2.0/grid/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={racnode2}" -silent -local CRS=TRUE

Now deinstall the grid home:

oracle_node2$> /home/grid/app/11.2.0/grid/deinstall/deinstall -local

When prompted run the following command in another window as root:

root_node2#> /tmp/deinstall<date time>/perl/bin/perl -I/tmp/deinstall<date time>/perl/lib -I/tmp/deinstall<date time>/crs/install /tmp/deinstall<date time>/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall<date time>/response/deinstall_Ora11g_gridinfrahome1.rsp"

When this has run, press Enter in the original window.

Now update the node list on the other nodes:

oracle_node1$> export CRS_HOME=/home/grid/app/11.2.0/grid
oracle_node1$> export PATH=$PATH:$CRS_HOME/bin
oracle_node1$> /home/grid/app/11.2.0/grid/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$CRS_HOME "CLUSTER_NODES={racnode1,racnode3}" CRS=TRUE

oracle_node3$> export CRS_HOME=/home/grid/app/11.2.0/grid
oracle_node3$> export PATH=$PATH:$CRS_HOME/bin
oracle_node3$> /home/grid/app/11.2.0/grid/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$CRS_HOME "CLUSTER_NODES={racnode1,racnode3}" CRS=TRUE

Now run the cluster verification tool, to check the node has been removed:

oracle_node1$> /tmp/oracle/grid/runcluvfy.sh stage -post nodedel -n racnode2

Adding a node to Oracle RAC Cluster


Adding an additional node to an existing RAC cluster.

This document assumes the following information for the cluster:


Hostname | Secondary Name | IP Address | Purpose | Location
racnode1-san | racnode1-san.laptop.com | 192.168.3.101 | Storage Network for the 1st node | hosts
racnode2-san | racnode2-san.laptop.com | 192.168.3.102 | Storage Network for the 2nd node | hosts
racstorage-san | racstorage-san.laptop.com | 192.168.3.103 | Storage Network for the Shared Storage | hosts
racnode1 | racnode1.laptop.com | 192.168.100.1 | Public IP address for 1st node | hosts,dns
racnode2 | racnode2.laptop.com | 192.168.100.2 | Public IP address for 2nd node | hosts,dns
racstorage | racstorage.laptop.com | 192.168.100.3 | Public IP address for storage | hosts,dns
racnode1-priv | racnode1-priv.laptop.com | 192.168.200.1 | Private interconnect for 1st node | hosts
racnode2-priv | racnode2-priv.laptop.com | 192.168.200.2 | Private interconnect for 2nd node | hosts
racnode1-vip | racnode1-vip.laptop.com | 192.168.100.21 | VIP address for 1st node | hosts,dns
racnode2-vip | racnode2-vip.laptop.com | 192.168.100.22 | VIP address for 2nd node | hosts,dns
rac-scan | rac-scan.laptop.com | 192.168.100.10 | 1st address for the scan | dns
rac-scan | rac-scan.laptop.com | 192.168.100.11 | 2nd address for the scan | dns

We will be adding a third node, with the following network configuration:


Secondary Name | IP Address | Purpose | Location
racnode3-san.laptop.com | 192.168.3.104 | Storage Network for the 3rd node | hosts
racnode3.laptop.com | 192.168.100.4 | Public IP address for 3rd node | hosts,dns
racnode3-priv.laptop.com | 192.168.200.4 | Private interconnect for 3rd node | hosts
racnode3-vip.laptop.com | 192.168.100.24 | VIP address for 3rd node | hosts,dns
rac-scan.laptop.com | 192.168.100.12 | 3rd address for the scan | dns

For the purposes of this document I will be creating the hosts as VMs under VMware Workstation, but these could equally be physical servers.

Create a new machine by installing the operating system.

Configure the networking for the public address, interconnect and storage networks, making sure that the interface names correspond to the same networks as those on the other nodes.

Point the host at the correct dns server.

Populate the /etc/hosts file:

192.168.3.101 racnode1-san racnode1-san.laptop.com
192.168.3.102 racnode2-san racnode2-san.laptop.com
192.168.3.103 racstorage-san racstorage-san.laptop.com
192.168.3.104 racnode3-san racnode3-san.laptop.com
192.168.100.1 racnode1 racnode1.laptop.com
192.168.100.2 racnode2 racnode2.laptop.com
192.168.100.3 racstorage racstorage.laptop.com
192.168.100.4 racnode3 racnode3.laptop.com
192.168.200.1 racnode1-priv racnode1-priv.laptop.com
192.168.200.2 racnode2-priv racnode2-priv.laptop.com
192.168.200.4 racnode3-priv racnode3-priv.laptop.com
192.168.100.21 racnode1-vip racnode1-vip.laptop.com
192.168.100.22 racnode2-vip racnode2-vip.laptop.com
192.168.100.24 racnode3-vip racnode3-vip.laptop.com

And ensure the following is added to the DNS server:

Record Key | Value
racnode3 | 192.168.100.4
racnode3-vip | 192.168.100.24
rac-scan | 192.168.100.12

And add the following entries to the /etc/hosts file on each of the existing nodes and the storage:

192.168.3.104 racnode3-san racnode3-san.laptop.com
192.168.100.4 racnode3 racnode3.laptop.com
192.168.200.4 racnode3-priv racnode3-priv.laptop.com
192.168.100.24 racnode3-vip racnode3-vip.laptop.com

Add the following lines to the /etc/security/limits.conf file on node 3:

oracle soft nofile 1024
oracle hard nofile 65536
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft memlock 3145728
oracle hard memlock 3145728
oracle soft stack 10240

And add the kernel parameters to /etc/sysctl.conf on node 3:

fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.sem = 250 256000 100 1024
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 4194304
kernel.shmmax = 1073741824
kernel.shmmni = 4096
kernel.shmall = 2097152
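These kernel settings can be loaded into the running kernel without a reboot:

#> sysctl -p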

And install the required additional packages:

#> yast -i sysstat
#> yast -i libcap1

Create the grid home directory:

#> mkdir /home/grid
#> chmod 777 /home/grid

And add the following lines to the /etc/fstab file to mount the shared storage:

racstorage-san:/storage/oracle_dbf_testdb /data/oradata/TESTDB nfs rw,bg,hard,rsize=32768,wsize=32768,vers=3,nointr,proto=tcp,suid,noac 0 0
racstorage-san:/storage/oracle_crs /oracle_crs nfs rw,bg,hard,rsize=32768,wsize=32768,vers=3,nointr,proto=tcp,suid,noac 0 0

Now create the oinstall and dba groups, and the oracle account, with the following IDs (sample commands follow the tables):

Group | Group ID
dba | 1010
oinstall | 1011

User | User ID | Primary Group | Secondary Group
oracle | 1010 | 1011 (oinstall) | 1010 (dba)
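A sketch of creating the groups and the oracle user with the IDs above, assuming standard shadow-utils commands (adjust to your distribution):

#> groupadd -g 1010 dba
#> groupadd -g 1011 oinstall
#> useradd -u 1010 -g oinstall -G dba -m oracle
#> passwd oracle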

And create oracle's .bash_profile as the oracle user:

$> vi /home/oracle/.bash_profile

if [ $USER = "oracle" ]; then
if [ $SHELL = "/bin/ksh" ]; then
ulimit -p 16384
ulimit -n 65536
else
ulimit -u 16384 -n 65536
fi
umask 022
fi


Now make the structure for the shared drives:

#> cd /
#> mkdir /data
#> mkdir /data/oradata
#> mkdir /data/oradata/TESTDB
#> mkdir /oracle_crs
#> chown -R oracle:oinstall /data
#> chmod -R 755 /data
#> chown -R oracle:oinstall /oracle_crs
#> chmod -R 755 /oracle_crs

Now on the storage node, modify the /etc/exports file to include our third node; the entries should now look like:

/storage/oracle_crs racnode1-san(rw,no_root_squash) racnode2-san(rw,no_root_squash) racnode3-san(rw,no_root_squash)
/storage/oracle_dbf_testdb racnode1-san(rw,no_root_squash) racnode2-san(rw,no_root_squash) racnode3-san(rw,no_root_squash)

Apply the new export list to the running NFS server:

#> exportfs -a

Now reboot the 3rd node:

#> reboot

Make sure our volumes are mounted with the following command:

#> df -k

We now need to set up the ssh connectivity from the third node to the existing two nodes, and vice versa:

node3_oracle$> ssh-keygen -t rsa

Copy the contents of the /home/oracle/.ssh/id_rsa.pub file on node 3 into the /home/oracle/.ssh/authorized_keys file on both node 1 and node 2.

On node 3, copy /home/oracle/.ssh/id_rsa.pub to /home/oracle/.ssh/authorized_keys, then append the contents of the id_rsa.pub files from nodes 1 and 2 to that authorized_keys file.
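If ssh-copy-id is available on your distribution, it can do the copy and append for you; for example from node 3 (the other nodes would run the equivalent towards racnode3-priv):

node3_oracle$> ssh-copy-id -i /home/oracle/.ssh/id_rsa.pub oracle@racnode1-priv
node3_oracle$> ssh-copy-id -i /home/oracle/.ssh/id_rsa.pub oracle@racnode2-priv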

We now need to connect to each node via ssh so that we can manually accept the host key prompts. Do the following, answering yes to all the prompts that appear:

on node 3:
ssh oracle@racnode1-priv
ssh oracle@racnode2-priv
ssh oracle@racnode3-priv

And on the existing two nodes:

ssh oracle@racnode3-priv

We now need to configure ntp on our third node, to get its time from the storage node.

Perform the following:

#> yast

Navigate to Network Services, NTP Configuration

Change the Start NTP Daemon to Now and On Boot

Click on Delete

Confirm the deletion

Click on Add

Ensure Server is selected, click on Next

Enter 192.168.100.3 in the Address box

Click on OK

Click on OK

Click on Quit

Then edit the /etc/sysconfig/ntp file, changing the line:

NTPD_OPTIONS="-g -u ntp:ntp"

to

NTPD_OPTIONS="-x"

And restart the ntp service:

#> service ntp restart
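You can verify that node 3 is now synchronising against the storage node with the standard ntp query tool:

#> ntpq -p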



We should now be in a position to add the node, so let's run the cluster verification utility from one of the existing nodes, to make sure everything is correct:

node1_oracle$> /tmp/oracle/grid/runcluvfy.sh stage -pre nodeadd -n racnode3

Make sure this returns a successful message.

We can now add our node into Oracle Clusterware, so from the existing node 1, run:

node1_oracle$> /home/grid/app/11.2.0/grid/oui/bin/addNode.sh "CLUSTER_NEW_NODES={racnode3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={racnode3-vip}" -silent

We now need to run the root scripts on node3:

node3#> /home/oracle/app/oraInventory/orainstRoot.sh
node3#> /home/grid/app/11.2.0/grid/root.sh

Make sure the node is added, by using the cluster verification tool on node 1:

node1_oracle$> /tmp/oracle/grid/runcluvfy.sh stage -post nodeadd -n racnode3

This should return as successful.

We now need to extend the Oracle Home to the third node, so from the first node:

node1_oracle$> /home/oracle/app/oracle/product/11.2.0/dbhome_1/oui/bin/addNode.sh -silent "CLUSTER_NEW_NODES={racnode3}"

We now need to run the root scripts on node3:

node3#> /home/oracle/app/oracle/product/11.2.0/dbhome_1/root.sh

We now need to add the instance to the node. So we run dbca on node 1:

oracle_node1$> export ORACLE_HOME=/home/oracle/app/oracle/product/11.2.0/dbhome_1
oracle_node1$> export PATH=$PATH:$ORACLE_HOME/bin
oracle_node1$> dbca &

Select Oracle RAC database
click on Next
Select Instance Management
click on Next
Select Add an Instance
click on Next
Make sure TESTDB is selected, enter sys for the username and enter the password
click on Next
click on Next
make sure the TESTDB3 is shown along with racnode3
click on Next
click on Finish
click on OK

Now use the cluster verification tool to confirm the addition:

oracle_node1$> export ORACLE_HOME=/home/oracle/app/oracle/product/11.2.0/dbhome_1
oracle_node1$> /tmp/oracle/grid/runcluvfy.sh comp admprv -o db_config -d $ORACLE_HOME -n racnode1,racnode2,racnode3

You should now have completed the addition of a third node into your cluster.