Virtual IPs For RAC
Oracle Database Tips by Donald Burleson
(also see using VIPCA to install CRS on RAC)
At this point you should
also be able to do an nslookup on the virtual IP names:
[aultlinux2]/home/oracle>nslookup aultlinux1-v
Once you can ping and do
nslookup on various addresses, you are ready to install CRS and RAC
(as long as you have the shared disks configured and ready!).
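The readiness check above can be scripted. Here is a minimal sketch; the host names are this chapter's example cluster, so substitute your own public and virtual host names:

```shell
# Pre-install check: every public and virtual host name should both
# ping and resolve before CRS installation begins.
fails=""
for host in aultlinux1 aultlinux2 aultlinux1-v aultlinux2-v; do
    ping -c 1 -W 1 "$host" >/dev/null 2>&1 || fails="$fails $host(ping)"
    nslookup "$host"       >/dev/null 2>&1 || fails="$fails $host(dns)"
done
if [ -z "$fails" ]; then
    echo "all host names ping and resolve"
else
    echo "fix before installing CRS:$fails"
fi
```

Note that the virtual IP names must resolve but must not yet be assigned to any interface; the VIPCA brings them up later.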
When you reach the end of
the RAC installation, you will be prompted to run root.sh.
When you run root.sh, it automatically invokes the VIPCA (Virtual IP
Configuration Assistant). When this happens, you perform the
following steps:
- The VIPCA Welcome page will be displayed first. Review the information on the Welcome page, then click Next, and the VIPCA will display the Public Network Interfaces page.
- Next, on the Public Network Interfaces page, select the network interface cards (NICs) to which you want to assign your public VIP addresses and click Next. The VIPCA will display the IP Address page.
- Now, on the IP Address page, enter an unused (unassigned) public virtual IP address for each node displayed and click Next. The VIPCA will display a Summary page. Review the information on the Summary page and then click Finish. A progress dialog will appear while the VIPCA configures the virtual IP addresses for the network interfaces that you specified. The VIPCA then creates and starts the VIP, GSD, and Oracle Notification Service (ONS) node applications. When the configuration is complete, click OK and the VIPCA will show the session results. Review the information displayed on the Configuration Results page, and click Exit to exit the VIPCA.
-
Repeat the root.sh procedure on all nodes that are part of this
installation.
- The VIPCA will not run again on the remote node because the remote node is already configured.
If the VIPs are not set up
correctly, the VIPCA will fail with the error CRS-215 "Could not
start resource" for the VIP resource and for any resources that depend
on it, such as GSD and ONS.
After the VIPCA has run
successfully, you will see the VIP addresses in the ifconfig output (or
netstat -in on HP-UX). Here is an example of a VIP in ifconfig
output:
eth0:1    Link encap:Ethernet  HWaddr 00:91:26:BD:D6:9E
          inet addr:172.1.137.27  Bcast:172.1.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          Interrupt:11 Base address:0x9000
The information is also
available via CRS: go to the CRS_HOME/bin directory, or its
equivalent on your system, and run the command "crs_stat".
Here is an example of the VIP resource information from crs_stat:
NAME=ora.aultlinux1.vip
TYPE=application
TARGET=ONLINE
STATE=ONLINE on aultlinux1
NAME=ora.aultlinux2.vip
TYPE=application
TARGET=ONLINE
STATE=ONLINE on aultlinux2
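On a busy cluster the raw crs_stat listing gets long, and a small awk filter can reduce it to one name/state line per resource. The sketch below replays the sample output shown above through a here-document; on a live cluster you would pipe the real $ORA_CRS_HOME/bin/crs_stat output into the same awk command instead.

```shell
# Condense crs_stat output to "resource: state" lines. The here-document
# stands in for live crs_stat output so the filter can be tried anywhere.
summary=$(awk -F= '/^NAME=/  {name=$2}
                   /^STATE=/ {print name ": " $2}' <<'EOF'
NAME=ora.aultlinux1.vip
TYPE=application
TARGET=ONLINE
STATE=ONLINE on aultlinux1
NAME=ora.aultlinux2.vip
TYPE=application
TARGET=ONLINE
STATE=ONLINE on aultlinux2
EOF
)
echo "$summary"
```

This prints one line per resource, e.g. "ora.aultlinux1.vip: ONLINE on aultlinux1". The 10g crs_stat utility also accepts a -t flag for a similar tabular summary.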
This output shows that both
of the aultlinux cluster VIPs are online and are assigned to their
proper nodes. In the event of a failover scenario, one or more
VIPs will be moved to another node. This VIP movement is
managed automatically by the CRS processes.
When there is a need to
change a VIP to a different address, remove the node-level
applications and re-create them using srvctl, for example:
srvctl stop nodeapps -n <node_name>
srvctl remove nodeapps -n <node_name>
srvctl add nodeapps -n <node_name> -o <ORACLE_HOME> -A <new_vip>/<netmask>/<interface>
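A concrete sketch of that sequence, using the 10g srvctl syntax. The node name, new VIP address, netmask, interface, and ORACLE_HOME below are illustrative assumptions; RUN=echo keeps this a dry run that only prints the commands it would execute.

```shell
# Dry-run sketch: move a node's VIP to a new address (10g srvctl syntax).
# Set RUN="" to execute for real; all values below are examples only.
RUN=${RUN:-echo}
NODE=aultlinux1
ORACLE_HOME=${ORACLE_HOME:-/u01/app/oracle/product/10.1.0/db_1}
NEW_VIP=172.1.137.50                 # a new, unused public address

$RUN srvctl stop nodeapps   -n "$NODE"
$RUN srvctl remove nodeapps -n "$NODE"
$RUN srvctl add nodeapps    -n "$NODE" -o "$ORACLE_HOME" \
     -A "$NEW_VIP/255.255.0.0/eth0"
```

After the nodeapps are re-created, "srvctl start nodeapps -n <node_name>" brings the VIP, GSD, and ONS back online.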
What if There Are Issues with the VIP?
If there are
issues with the VIP setup, review the following files or use the
following commands:
- the "ifconfig -a" output from each node
- "nslookup <virtual_host_name>" for each virtual host name
- the /etc/hosts file from each node
- the output of "$ORA_CRS_HOME/bin/crs_stat"
- the output of "srvctl start nodeapps -n <node_name>" on the node having the issue
In light of the
information presented in previous sections, review the output of the
above files and commands and correct as needed.
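One way to gather that checklist from every node into a single terminal session is sketched below. The node names are this chapter's example cluster and the CRS home path is an assumption; RUN=echo again makes this a dry run that only prints each command.

```shell
# Dry-run sketch: collect the VIP troubleshooting checklist for a
# two-node cluster. Set RUN="" to actually run the commands.
RUN=${RUN:-echo}
ORA_CRS_HOME=${ORA_CRS_HOME:-/u01/app/oracle/product/10.1.0/crs}
for node in aultlinux1 aultlinux2; do
    $RUN ssh "$node" /sbin/ifconfig -a     # interface and VIP status
    $RUN ssh "$node" cat /etc/hosts        # compare host files per node
    $RUN nslookup "${node}-v"              # virtual host name resolution
done
$RUN "$ORA_CRS_HOME/bin/crs_stat"          # CRS resource states
```

Comparing the per-node output side by side usually exposes the mismatched /etc/hosts entry or the stale interface assignment causing the problem.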
Conclusion
In this chapter we have examined the various
migration scenarios. Enterprises have begun to realize the
scalability and high availability benefits of RAC. However,
they face the challenge of migrating an
existing single-instance standalone Oracle database to a multi-node
RAC database.
Broadly, there are two ways of migrating. One is
converting an existing server into a cluster node and then adding
additional nodes to the cluster. The other approach is to create a
new cluster environment by using a new set of servers as cluster
nodes. In this case, the data has to be moved either by the
export/import method or by the database cloning method.
We also looked at application- and
client-specific configuration and at issues related to the use of the
RAC database.
The above text is an excerpt from "Oracle
10g Grid & Real Application Clusters" by Mike Ault and Madhu Tumma,
published by Rampant TechPress. The book has a
complete online code depot with ready-to-use scripts.