Friday 25 November 2011

Building an inexpensive test installation of Oracle 11gR2 RAC with 2 nodes.


It used to be the case that you needed to spend many thousands to create a two-node Oracle RAC installation. With the advent of virtualisation, this is no longer the case.

This document will walk you through the steps so that you end up with your own working test Oracle RAC installation on existing inexpensive hardware.

The installation described here should only ever be viewed as adequate for testing, and never be used in a production environment.

So if you want to test RAC, carry on reading!

The Hardware:

I built my installation on my work laptop:

Dell Latitude E6510 Laptop with Intel i7 Quad core processor @ 1.87GHz
8 GB Ram
Running Ubuntu 11.04 64bit

NOTE: The actual operating system on the physical hardware is largely unimportant, so long as the freely available VMware Player (http://downloads.vmware.com/d/info/desktop_end_user_computing/vmware_player/4_0) runs on it.


Pre-Requisites:

The Oracle RAC setup will be created on virtual machines, therefore the main requirement is that VMware Player is installed on the physical hardware. Get this software from the link above and install it.
This installation will also be built on SLES 11 SP1 64-bit, so the ISO for this should also be downloaded from the Novell website and stored somewhere on the physical laptop.
It is also desirable at this point to download disks 1, 2 and 3 of the 11.2.0.3 Oracle distribution for 64-bit Linux from the Oracle website (patch number: p10404530).

The Setup:

Before we delve into building the virtual machines, it is helpful to describe the setup we need to create:

Virtual Machine      | Purpose                                                      | Software Installed
Storage & DNS Server | Provides NFS shared storage and a DNS server to the cluster | nfs-server, named
Node 1               | Oracle RAC 1st node                                          | Oracle Clusterware, Oracle 11gR2 database software
Node 2               | Oracle RAC 2nd node                                          | Oracle Clusterware, Oracle 11gR2 database software

It is also useful to describe the network setup and the locations where the various hostnames are defined (the hosts file, the DNS server, or both). For the purposes of my setup, I chose to use a made-up domain name of laptop.com:

Hostname       | Fully Qualified Name      | IP Address     | Purpose                                | Defined In
racnode1-san   | racnode1-san.laptop.com   | 192.168.3.101  | Storage network for the 1st node       | hosts
racnode2-san   | racnode2-san.laptop.com   | 192.168.3.102  | Storage network for the 2nd node       | hosts
racstorage-san | racstorage-san.laptop.com | 192.168.3.103  | Storage network for the shared storage | hosts
racnode1       | racnode1.laptop.com       | 192.168.100.1  | Public IP address for the 1st node     | hosts, dns
racnode2       | racnode2.laptop.com       | 192.168.100.2  | Public IP address for the 2nd node     | hosts, dns
racstorage     | racstorage.laptop.com     | 192.168.100.3  | Public IP address for the storage      | hosts, dns
racnode1-priv  | racnode1-priv.laptop.com  | 192.168.200.1  | Private interconnect for the 1st node  | hosts
racnode2-priv  | racnode2-priv.laptop.com  | 192.168.200.2  | Private interconnect for the 2nd node  | hosts
racnode1-vip   | racnode1-vip.laptop.com   | 192.168.100.21 | VIP address for the 1st node           | hosts, dns
racnode2-vip   | racnode2-vip.laptop.com   | 192.168.100.22 | VIP address for the 2nd node           | hosts, dns
rac-scan       | rac-scan.laptop.com       | 192.168.100.10 | 1st address for the SCAN               | dns
rac-scan       | rac-scan.laptop.com       | 192.168.100.11 | 2nd address for the SCAN               | dns


OK, that is the clerical bit out of the way; let's get on with building it!

Building the Virtual Machine(s):

As all the machines will be built using SLES 11 SP1 64-bit, it is useful to build a standard configuration on one virtual machine and then copy it to create the others. Once this is complete, each virtual machine will be configured individually.

So onto the first standard configuration VM:

Start up VMware Player, and select Create a New Virtual Machine:

This will start the New Virtual Machine Wizard:

Select Installer disc image file (ISO): and browse for the SLES 11 SP1 64bit ISO we downloaded earlier.

Click on Next

Under Personalise Linux, complete the 4 fields and click on Next

Give the Virtual Machine a Name, e.g. rac_storage

Leave the location alone.

Click on Next

Leave the Maximum disk size (in GB) at 20.0

Ensure the Split virtual disk into multiple files is selected.

Click on Next

Click on Customize Hardware

Remove the redundant hardware for the time being, i.e.

Network Adapter (we will add this back later)
Floppy
USB Controller
Sound Card
Printer

Change the number of Processors to 2

Change the memory to 2048MB

Click on Save

Finally click on Finish

After a few minutes the virtual machine will be installed.

You will be prompted with a login box.

Log in as the root user, using the password entered just prior to starting the installation.


Adding additional swap:

The default installation configures 2GB of swap. This is too low for the Oracle installation and it will complain, so let's add an additional 1GB of swap:

Create a 1GB file in /:

#> dd if=/dev/zero of=/swapfile1 bs=1024 count=1048576

Then turn it into a swap area:

#> mkswap /swapfile1

Add it to the running system:

#> swapon /swapfile1
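
Confirm that the extra swap is now active:

#> free -m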

Finally add it to the /etc/fstab file so that it is mounted at each reboot:

/swapfile1 swap swap defaults 0 0

Finally, shut down the VM:

#> shutdown -h now


Clone the VM:

We have now finished the initial virtual machine setup, so we will clone it twice to create the two additional virtual machines required for the Oracle nodes.

Navigate to the location of the VM files on the physical machine (/home/<user>/vmware):

$> cd /home/<user>/vmware

Create a directory for the first clone:

$> mkdir rac_node1

Copy the storage VM files to the rac_node1 directory:

$> cp -rp ./rac_storage/* ./rac_node1/

Move into the rac_node1 directory and rename all the files that have rac_storage in their name, to contain rac_node1:

$> cd rac_node1
$> mv {rac_storage,rac_node1}.nvram
$> mv {rac_storage,rac_node1}-s001.vmdk
$> mv {rac_storage,rac_node1}-s002.vmdk
$> mv {rac_storage,rac_node1}-s003.vmdk
$> mv {rac_storage,rac_node1}-s004.vmdk
$> mv {rac_storage,rac_node1}-s005.vmdk
$> mv {rac_storage,rac_node1}-s006.vmdk
$> mv {rac_storage,rac_node1}-s007.vmdk
$> mv {rac_storage,rac_node1}-s008.vmdk
$> mv {rac_storage,rac_node1}-s009.vmdk
$> mv {rac_storage,rac_node1}-s010.vmdk
$> mv {rac_storage,rac_node1}-s011.vmdk
$> mv {rac_storage,rac_node1}.vmdk
$> mv {rac_storage,rac_node1}.vmsd
$> mv {rac_storage,rac_node1}.vmx
$> mv {rac_storage,rac_node1}.vmxf
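
Alternatively, a single bash loop performs all the renames in one go (a sketch relying on bash's ${var/pattern/replacement} substitution):

$> for f in rac_storage*; do mv "$f" "${f/rac_storage/rac_node1}"; done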

Now edit the configuration files in this directory, changing all references of rac_storage to rac_node1 (a sed one-liner is sketched after the list). Edit the following files:

rac_node1.vmdk
rac_node1.vmx
rac_node1.vmxf
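
A sed one-liner handles all three files at once (GNU sed, as shipped with SLES, supports in-place editing):

$> sed -i 's/rac_storage/rac_node1/g' rac_node1.vmdk rac_node1.vmx rac_node1.vmxf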

Finally we need to add the VM back into VMware Player. So within the Player, select Open a Virtual Machine

Open the rac_node1 directory and select the rac_node1.vmx file.

Click on Open


We now need to repeat the Clone the VM section above, replacing all references to rac_node1 with rac_node2.


Once this is complete we will have 3 VMs within VMware Player:

rac_storage
rac_node1
rac_node2


Configure the storage VM:

Select the rac_storage VM and click on Edit virtual machine settings

Click on Add

Select Network Adapter and click Next

Select Host-only: A private network shared with the host

Click on Finish

Repeat the addition of a Network Adapter to create a second network adapter

Finally click on Save

Then click on Play virtual machine

Log on as root when the login box appears

Right click on the background and select Open In Terminal

Type:

#> yast

Select Network Devices, Network Settings

Click on Edit

Select Statically assigned IP Address

Enter the following information:

IP Address:  192.168.100.3
Subnet Mask: 255.255.255.0
Hostname:    racstorage.laptop.com

Click on Next

Tab down to the second network card and click on Edit

Select Statically assigned IP Address

Enter the following information:

IP Address:  192.168.3.103
Subnet Mask: 255.255.255.0
Hostname:    racstorage-san.laptop.com

Click on Next

Select Hostname/DNS

Enter the following information:

Hostname:    racstorage
Domain Name: laptop.com

Click on OK

Click on Quit

Restart yast:

#> yast

Select Network Services, DNS Server

Click Install when prompted to install the bind package

Click on Next

Enter laptop.com under Add New Zone

with a type of Master

Click on Next

Click on Edit

Tab across to NS Records

Enter ns1.laptop.com in the Name Server to Add field and click on Add
Enter racnode1.laptop.com in the Name Server to Add field and click on Add

Tab across to Records

Enter the following record keys and values, clicking Add after each one:

Record Key   | Value
ns1          | 192.168.100.1
racnode1     | 192.168.100.1
racnode2     | 192.168.100.2
racstorage   | 192.168.100.3
racnode1-vip | 192.168.100.21
racnode2-vip | 192.168.100.22
rac-scan     | 192.168.100.10
rac-scan     | 192.168.100.11

Click on OK

Click on Next

Change the Startup behaviour to On: Start Now and When Booting

Click on Finish

Click on OK

Click on Quit


We now need to configure the hosts file, so open up the /etc/hosts file with vi and make sure it has the following entries:

192.168.3.101 racnode1-san racnode1-san.laptop.com
192.168.3.102 racnode2-san racnode2-san.laptop.com
192.168.3.103 racstorage-san racstorage-san.laptop.com
192.168.100.1 racnode1 racnode1.laptop.com
192.168.100.2 racnode2 racnode2.laptop.com
192.168.100.3 racstorage racstorage.laptop.com
192.168.200.1 racnode1-priv racnode1-priv.laptop.com
192.168.200.2 racnode2-priv racnode2-priv.laptop.com
192.168.100.21 racnode1-vip racnode1-vip.laptop.com
192.168.100.22 racnode2-vip racnode2-vip.laptop.com

Add the storage configuration:

Install the NFS server package (nfs-kernel-server) by entering:

#> yast -i nfs-kernel-server

And make it start at boot time:

#> chkconfig nfsserver on
#> service nfsserver start

While we are at it, set named to start at boot time too:

#> chkconfig named on


We now need to create the shares:

#> cd /
#> mkdir storage
#> chmod 777 storage
#> cd storage
#> mkdir oracle_crs oracle_dbf_testdb
#> chmod 777 oracle_crs oracle_dbf_testdb

We now need to share this storage out, so edit /etc/exports with vi, adding the following lines:

/storage/oracle_crs racnode1-san(rw,no_root_squash) racnode2-san(rw,no_root_squash)
/storage/oracle_dbf_testdb racnode1-san(rw,no_root_squash) racnode2-san(rw,no_root_squash)
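
If you would rather publish the exports without waiting for the reboot below, they can be activated and checked immediately:

#> exportfs -a
#> exportfs -v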

Finally reboot the storage VM so that all our changes take effect:

#> reboot


Oracle Node 1 VM Configuration:

First we need to add 3 Network Cards to the Oracle Node 1 VM, via Edit virtual machine settings, as follows:

Network Card 1: Host-only: A private network shared with the host
Network Card 2: Host-only: A private network shared with the host
Network Card 3: Host-only: A private network shared with the host

Click on Save, then power on the VM.

When asked if the virtual machine has been moved or copied, select I copied it

Login as root when the login prompt is presented.

Right click on the background and select Open In Terminal

Type:

#> yast

Select Network Devices, Network Settings

Click on Edit

Select Statically assigned IP Address

Enter the following information:

IP Address:  192.168.100.1
Subnet Mask: 255.255.255.0
Hostname:    racnode1.laptop.com

Click on Next

Tab down to the second network card and click on Edit

Select Statically assigned IP Address

Enter the following information:

IP Address:  192.168.200.1
Subnet Mask: 255.255.255.0
Hostname:    racnode1-priv.laptop.com

Click on Next

Tab down to the third network card and click on Edit

Select Statically assigned IP Address

Enter the following information:

IP Address:  192.168.3.101
Subnet Mask: 255.255.255.0
Hostname:    racnode1-san.laptop.com

Click on Next

Select Hostname/DNS

Enter the following information:

Hostname:      racnode1
Domain Name:   laptop.com
Name Server 1: 192.168.100.3
Domain Search: laptop.com

Click on OK

Click on Quit

Next populate the /etc/hosts file with the entries:

192.168.3.101 racnode1-san racnode1-san.laptop.com
192.168.3.102 racnode2-san racnode2-san.laptop.com
192.168.3.103 racstorage-san racstorage-san.laptop.com
192.168.100.1 racnode1 racnode1.laptop.com
192.168.100.2 racnode2 racnode2.laptop.com
192.168.100.3 racstorage racstorage.laptop.com
192.168.200.1 racnode1-priv racnode1-priv.laptop.com
192.168.200.2 racnode2-priv racnode2-priv.laptop.com
192.168.100.21 racnode1-vip racnode1-vip.laptop.com
192.168.100.22 racnode2-vip racnode2-vip.laptop.com

And add the following lines to /etc/security/limits.conf:

oracle soft nofile 1024
oracle hard nofile 65536
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft memlock 3145728
oracle hard memlock 3145728
oracle soft stack 10240

And set the kernel parameters in the /etc/sysctl.conf file:

fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.sem = 250 256000 100 1024
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 4194304
kernel.shmmax = 2009571328
kernel.shmmni = 4096
kernel.shmall = 2097152

The file makes the settings permanent; load them into the running kernel now and list the values to verify:

#> sysctl -p
#> sysctl -a

And install the required additional packages:

#> yast -i sysstat
#> yast -i libcap1

Create the grid home directory:

#> mkdir /home/grid
#> chmod 777 /home/grid

And add the following lines to the /etc/fstab file to mount the shared storage:

racstorage-san:/storage/oracle_dbf_testdb /data/oradata/TESTDB nfs rw,bg,hard,rsize=32768,wsize=32768,vers=3,nointr,proto=tcp,suid,noac 0 0
racstorage-san:/storage/oracle_crs /oracle_crs nfs rw,bg,hard,rsize=32768,wsize=32768,vers=3,nointr,proto=tcp,suid,noac 0 0

Now create the oinstall and dba groups, and the oracle account, with the following IDs:

Group    | Group ID
dba      | 1010
oinstall | 1011

User   | User ID | Primary Group   | Secondary Group
oracle | 1010    | 1011 (oinstall) | 1010 (dba)
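
As a sketch, the following commands create those groups and the user with the required IDs (assuming useradd's -m flag creates /home/oracle, as it does on SLES):

#> groupadd -g 1010 dba
#> groupadd -g 1011 oinstall
#> useradd -u 1010 -g oinstall -G dba -m oracle
#> passwd oracle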

And create oracle's .bash_profile as the oracle user:

$> vi /home/oracle/.bash_profile

if [ $USER = "oracle" ]; then
    if [ $SHELL = "/bin/ksh" ]; then
        ulimit -p 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
    umask 022
fi


Now create the mount point structure for the shared storage:

#> cd /
#> mkdir /data
#> mkdir /data/oradata
#> mkdir /data/oradata/TESTDB
#> mkdir /oracle_crs
#> chown -R oracle:oinstall /data
#> chmod -R 755 /data
#> chown -R oracle:oinstall /oracle_crs
#> chmod -R 755 /oracle_crs

Now mount the drives:

#> mount -a

And confirm that the drives are mounted:

#> df -k


We can now move on to the second Oracle node and repeat the configuration above for the first node, changing the IP addresses and hostnames as follows:

IP Address    | Subnet Mask   | Hostname
192.168.100.2 | 255.255.255.0 | racnode2.laptop.com
192.168.200.2 | 255.255.255.0 | racnode2-priv.laptop.com
192.168.3.102 | 255.255.255.0 | racnode2-san.laptop.com



We must now set up SSH connectivity between the two Oracle nodes.

Run the following on the node indicated, as the oracle user:

oracle@node1> ssh-keygen -t rsa
oracle@node2> ssh-keygen -t rsa

Copy the contents of the /home/oracle/.ssh/id_rsa.pub file on node 1 into /home/oracle/.ssh/authorized_keys on node 2
Copy the contents of the /home/oracle/.ssh/id_rsa.pub file on node 2 into /home/oracle/.ssh/authorized_keys on node 1

Copy the contents of the id_rsa.pub file on node 1 into the authorized_keys file on node 1
Copy the contents of the id_rsa.pub file on node 2 into the authorized_keys file on node 2
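
If ssh-copy-id is available on your build, it performs the cross-node appends for you; the self-appends still need doing by hand (a sketch):

oracle@node1> ssh-copy-id oracle@racnode2
oracle@node2> ssh-copy-id oracle@racnode1
oracle@node1> cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
oracle@node2> cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys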

We now need to connect to each address via ssh once, so that the host-key prompt can be answered manually. On both nodes run the following, answering yes to each prompt:

$> ssh oracle@racnode1-priv
$> ssh oracle@racnode2-priv
$> ssh oracle@racnode2-san

We now need to configure NTP on each of the Oracle nodes, so that they get their time from the storage node.

On each oracle node perform the following:

#> yast

Navigate to Network Services, NTP Configuration

Change the Start NTP Daemon to Now and On Boot

Click on Delete

Confirm the deletion

Click on Add

Ensure Server is selected, click on Next

Enter 192.168.100.3 in the Address box

Click on OK

Click on OK

Click on Quit

Then edit the /etc/sysconfig/ntp file, changing the line:

NTPD_OPTIONS="-g -u ntp:ntp"

to

NTPD_OPTIONS="-x"

And restart the ntp service:

#> service ntp restart



We are now in a position to start the Oracle Clusterware installation.

Copy disk 3 of the Oracle 11.2.0.3 patchset to each node, under a directory /tmp/oracle and extract it.
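
For example, assuming Oracle's standard patch file naming (your zip name may differ slightly):

$> mkdir -p /tmp/oracle
$> cd /tmp/oracle
$> unzip p10404530_112030_Linux-x86-64_3of7.zip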

Move into the grid directory and run the cluster configuration verification utility on both nodes:

$> ./runcluvfy.sh stage -pre crsinst -n racnode1,racnode2 -verbose

This should return a successful message on both nodes. If it does not, review the errors and correct them before continuing.

Now create our cluster disk as the oracle user:

$> dd if=/dev/zero of=/oracle_crs/_cluster_disk1 bs=1M count=2048
$> chmod 660 /oracle_crs/_cluster_disk1

We can now move onto the clusterware installation, so on node 1 run the installer:

$> /tmp/oracle/grid/runInstaller &

On Step 1, select Skip Software updates, click on Next

On Step 2, select Install and Configure Oracle Grid Infrastructure for a Cluster, click on Next

On Step 3, select Advanced Installation, click on Next

On Step 4, select English and English (United Kingdom) as Languages, click on Next

On Step 5, enter the following values:

Cluster Name: rac
SCAN Name:    rac-scan.laptop.com
SCAN Port:    1521

Deselect Configure GNS, click on Next

On Step 6, Click on Add and add the following information:

Public Hostname:  racnode2.laptop.com
Virtual Hostname: racnode2-vip.laptop.com

Click on OK, then click on Next

On Step 7, make sure the Interfaces have the following Interface Types:

eth0 | 192.168.100.0 | Public
eth1 | 192.168.200.0 | Private
eth2 | 192.168.3.0   | Do Not Use

Click on Next

On Step 8, select Oracle Automatic Storage Management (Oracle ASM), click on Next

On Step 9, make sure the following values are selected:

Disk Group Name: DATA
Redundancy:      External
AU Size:         1MB

Then click on Change Discovery Path and enter /oracle_crs/* in the pop up box

Click on All Disks

Select /oracle_crs/_cluster_disk1, click on Next

On Step 10, select Use same passwords for these accounts and enter manager into both Password boxes, click on Next

Select Yes in the pop-up box, to confirm that we accept that the passwords do not conform to Oracle's recommended standards

On Step 11, select Do not use Intelligent Platform Management Interface (IPMI), click on Next

On Step 12, select the following group values:

Oracle ASM DBA (OSDBA for ASM) Group:                  oinstall
Oracle ASM Operator (OSOPER for ASM) Group (Optional): <blank>
Oracle ASM Administrator (OSASM) Group:                oinstall

Click on Next

Click on Yes to the popup stating you are using the same OS group

On Step 13, enter the following paths:

Oracle Base:       /home/oracle/app/oracle
Software Location: /home/grid/app/11.2.0/grid

Click on Next

On Step 14, enter /home/oracle/app/oraInventory as the Inventory Directory, click on Next

On Step 15, you should have 2 warnings:

Package: cvuqdisk-1.0.9-1
Device Checks for ASM

Click on Fix & Check Again

A popup will ask you to run /tmp/CVU_11.2.0.3.0_oracle/runfixup.sh on both oracle nodes. Open a new window to each node and run the script as requested. Click on OK when complete.

After a few seconds, step 15 should appear again, with 1 warning:

Device Checks for ASM

This is to be expected as we have created blank block files in the ASM area, so select Ignore All, followed by Next

Confirm the Popup that you are happy to continue by clicking Yes

Finally click on Install

After a few minutes a popup will appear asking for certain scripts to be run as root on each node. Run the orainstRoot.sh script on node 1 and, when it completes, run it on node 2.

Then run the root.sh script on node 1 and, when it completes, run it on node 2.

Once these scripts are complete, click on OK on the pop up window

After a few more minutes you should get a confirmation that the installation of Oracle Grid Infrastructure for a Cluster was successful.

Click on Close


Now verify that the cluster post-installation checks pass by running the following as the oracle user on both nodes:

$> cd /tmp/oracle/grid
$> ./runcluvfy.sh stage -post crsinst -n racnode1,racnode2 -verbose

This should complete on both nodes with no errors. If errors are reported, investigate why and correct them before carrying on.
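
As an extra sanity check, crsctl from the new grid home shows the state of every cluster resource:

$> /home/grid/app/11.2.0/grid/bin/crsctl stat res -t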


Configuration of ASM files for the database:

We are now in a position to create the ASM datafiles for the database. On node 1, as the oracle user, perform the following steps:

$> dd if=/dev/zero of=/data/oradata/TESTDB/_datafile_disk1 bs=1M count=2000
$> chmod 660 /data/oradata/TESTDB/_datafile_disk1

We now need to make these files visible to the ASM instance(s). Perform the following:

Node 1 (as oracle user):

$> . oraenv
ORACLE_SID = [oracle] ? +ASM1
$> sqlplus / as sysasm
SQL> alter system set asm_diskstring='/oracle_crs/*','/data/oradata/TESTDB/*';
SQL> create diskgroup DATA_TESTDB external redundancy DISK '/data/oradata/TESTDB/_datafile_disk1';
SQL> exit;

Then on node 2 (as oracle):

$> . oraenv
ORACLE_SID = [oracle] ? +ASM2
$> sqlplus / as sysasm
SQL> alter diskgroup DATA_TESTDB mount;
SQL> exit;
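
A quick query in sqlplus on either node confirms that both diskgroups are now mounted:

SQL> select name, state from v$asm_diskgroup;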

Oracle Database Software and Database installation:

We are now ready to install the database software and create the TESTDB database.

Extract Oracle 11.2.0.3 disks 1 and 2 to /tmp/oracle on node 1. As with the grid disk, the commands below assume Oracle's standard patch file naming:
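
$> cd /tmp/oracle
$> unzip p10404530_112030_Linux-x86-64_1of7.zip
$> unzip p10404530_112030_Linux-x86-64_2of7.zip

Then run the database installer as the oracle user: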

$> /tmp/oracle/database/runInstaller &

On Step 1, deselect I wish to receive security updates via My Oracle Support, click on Next

Click Yes to confirm the popup

On Step 2, select Skip software updates, click on Next

On Step 3, select Create and configure a database, click on Next

On Step 4, select Server Class, click on Next

On Step 5, select Oracle Real Application Clusters database installation

Ensure both nodes are shown in the box

Click on Next

On Step 6, select Advanced install, click on Next

On Step 7, ensure English and English (United Kingdom) are listed under Selected Languages, click on Next

On Step 8, select Standard Edition (4.42GB), click on Next

On Step 9, ensure the following parameters are set:

Oracle Base:       /home/oracle/app/oracle
Software Location: /home/oracle/app/oracle/product/11.2.0/dbhome_1

Click on Next

On Step 10, select General Purpose / Transaction Processing, click on Next

On Step 11, enter the following parameters:

Global Database Name:            TESTDB.laptop.com
Oracle Service Identifier (SID): TESTDB

Click on Next

On Step 12, Ensure Enable Automatic Memory Management is selected, click on Next

On Step 13, Ensure Use Oracle Enterprise Manager Database Control for database management is selected, click on Next

On Step 14, Ensure Oracle Automatic Storage Management is selected, enter manager as the password, click on Next

On Step 15, Ensure Do not enable automated backups is selected, click on Next

On Step 16, Select the DATA_TESTDB Disk Group for the storage of the database, click on Next

On Step 17, Select Use the same password for all accounts, and enter manager1 into the two password boxes. Click on Next

In the popup box that appears click on Yes to continue

On Step 18, ensure the following groups are selected:

Database Administrator (OSDBA) Group:        dba
Database Operator (OSOPER) Group (Optional): <blank>

Click on Next

Step 19 should appear and run through a few checks.

Step 20 should then show a summary screen, click on Install


The software will now install.

After a while, the Database Configuration Assistant will display a completion message. Click on OK

After a few more minutes, Step 21 will ask for the root.sh scripts to be run. Run these as root, first on node 1, then, when it completes successfully, on node 2.

Click on OK

Step 22 will then show the installation as being successful. It will also show the enterprise manager URL (https://racnode1.laptop.com:1158/em).

Click on Close


We have now completed the installation and can do a few tests to confirm that we can connect to the database:

Node 1:

ORACLE ASM:

$> . oraenv
ORACLE_SID = [oracle] ? +ASM1
$> sqlplus / as sysasm

TESTDB (instance number 1):

$> export ORACLE_HOME=/home/oracle/app/oracle/product/11.2.0/dbhome_1
$> export PATH=$PATH:$ORACLE_HOME/bin
$> sqlplus system/manager1@racnode1.laptop.com:1521/TESTDB.laptop.com


Node 2:

ORACLE ASM:

$> . oraenv
ORACLE_SID = [oracle] ? +ASM2
$> sqlplus / as sysasm

TESTDB (instance number 2):

$> export ORACLE_HOME=/home/oracle/app/oracle/product/11.2.0/dbhome_1
$> export PATH=$PATH:$ORACLE_HOME/bin
$> sqlplus system/manager1@racnode2.laptop.com:1521/TESTDB.laptop.com



On either node:

TESTDB (via SCAN to load balanced instance):

$> export ORACLE_HOME=/home/oracle/app/oracle/product/11.2.0/dbhome_1
$> export PATH=$PATH:$ORACLE_HOME/bin
$> sqlplus system/manager1@rac-scan.laptop.com:1521/TESTDB.laptop.com
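
Once connected, a quick query against the cluster-wide gv$instance view confirms that both instances are open:

SQL> select inst_id, instance_name, host_name, status from gv$instance;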