HAIP tips

RAC tuning tips

October 4, 2015

 

HAIP

Oracle 11gR2 introduced the RAC Highly Available IP (HAIP) for the Cluster Interconnect to help eliminate a single point of failure. If a node in the cluster has only one network adapter for the private network and that adapter fails, the node can no longer participate in cluster operations or perform its heartbeat with the cluster, and eventually the other nodes will evict it from the cluster. Similarly, if the cluster has only a single network switch for the Cluster Interconnect and the switch fails, the entire cluster is compromised. Examine the diagram below, which shows one public network and two private networks.

 

 

Figure 4.1 Redundant Private Networks

 

Each node in the cluster has access to dual private networks. The single points of failure have been eliminated. Note that dual network adapters serve as the interfaces to dual network switches.

 

One cannot simply stand up a second private network and expect the clusterware software to start utilizing both networks. Without any additional configuration, only one private network would be used and the other would sit idle.

 

Prior to Oracle 11gR2, system architects who were concerned with this single point of failure would leverage link aggregation. The terms NIC bonding, NIC teaming, and port trunking are also used for the same concept. The central idea behind link aggregation is to have two private networks act as one: the two networks are combined so that they appear to the operating system as a single unit. To the OS, the network adapters look like one adapter. If one of the physical network adapters were to fail, the OS would hardly notice, and network traffic would proceed through the remaining adapter.
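For reference, here is a minimal sketch of what OS-level link aggregation looks like on Linux, using Red Hat-style network scripts. The interface names, address, and bonding mode are illustrative assumptions, not values taken from this example system; a similar ifcfg file would be created for each enslaved NIC.

# /etc/sysconfig/network-scripts/ifcfg-bond0  (hypothetical bonded private interface)
DEVICE=bond0
IPADDR=192.168.10.1
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
BONDING_OPTS="mode=active-backup miimon=100"

# /etc/sysconfig/network-scripts/ifcfg-eth1  (physical NIC enslaved to bond0)
DEVICE=eth1
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none

With this in place, the OS presents bond0 as the single private interface. HAIP achieves the same goal without any OS-level bonding.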

 

This is not a book on maximum availability architecture, so at this point you may be asking what link aggregation has to do with Oracle RAC performance tuning. In addition to higher availability for the cluster, link aggregation improves private network throughput: two private networks have twice the capacity, and thus twice the throughput, of a single private network. When the traffic on the Cluster Interconnect saturates a single private network, one option is to leverage link aggregation to improve global cache transfer performance.

 

Oracle Grid Infrastructure now provides RAC HAIP, which is link aggregation moved to the clusterware level. Instead of bonding the network adapters on the OS side, Grid Infrastructure is instructed to use multiple network adapters. Grid Infrastructure will start HAIP even if the system is configured with only one private network adapter. The following shows that the resource ora.cluster_interconnect.haip is online.

 

[oracle@host01 bin]$ ./crsctl stat res -t -init
----------------------------------------------------------------------------
Name           Target  State        Server                   State details
----------------------------------------------------------------------------
Cluster Resources
----------------------------------------------------------------------------
ora.asm
      1        ONLINE  ONLINE       host01                   Started,STABLE
ora.cluster_interconnect.haip
      1        ONLINE  ONLINE       host01                   STABLE
ora.crf
      1        ONLINE  ONLINE       host01                   STABLE
ora.crsd
      1        ONLINE  ONLINE       host01                   STABLE
ora.cssd
      1        ONLINE  ONLINE       host01                   STABLE
ora.cssdmonitor
      1        ONLINE  ONLINE       host01                   STABLE
ora.ctssd
      1        ONLINE  ONLINE       host01                   OBSERVER,STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.evmd
      1        ONLINE  ONLINE       host01                   STABLE
ora.gipcd
      1        ONLINE  ONLINE       host01                   STABLE
ora.gpnpd
      1        ONLINE  ONLINE       host01                   STABLE
ora.mdnsd
      1        ONLINE  ONLINE       host01                   STABLE
ora.storage
      1        ONLINE  ONLINE       host01                   STABLE
----------------------------------------------------------------------------

 

Furthermore, only one adapter is defined for the Cluster Interconnect.

 

[oracle@host01 bin]$ ./oifcfg getif
eth0  192.168.56.0  global  public
eth1  192.168.10.0  global  cluster_interconnect

 

The ifconfig command shows that network device eth1 is part of two subnets.

 

[oracle@host01 bin]$ ifconfig -a
eth0      Link encap:Ethernet  HWaddr 08:00:27:98:EA:FE
          inet addr:192.168.56.71  Bcast:192.168.56.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe98:eafe/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:947 errors:0 dropped:0 overruns:0 frame:0
          TX packets:818 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:100821 (98.4 KiB)  TX bytes:92406 (90.2 KiB)

eth1      Link encap:Ethernet  HWaddr 08:00:27:54:73:8F
          inet addr:192.168.10.1  Bcast:192.168.10.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe54:738f/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
          RX packets:406939 errors:0 dropped:0 overruns:0 frame:0
          TX packets:382298 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:445270636 (424.6 MiB)  TX bytes:202801222 (193.4 MiB)

eth1:1    Link encap:Ethernet  HWaddr 08:00:27:54:73:8F
          inet addr:169.254.225.190  Bcast:169.254.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1


The eth1 entry with IP address 192.168.10.1 reflects how the NIC was configured on this system for the private network. Notice the device listed as eth1:1 in the output above: it has been given the IP address 169.254.225.190.

 

Device eth1:1 is RAC HAIP in action even though only one private network adapter exists. HAIP uses the 169.254.*.* subnet. As such, no other network devices in the cluster should be configured for the same subnet.
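As a quick sanity check, each node can be scanned for addresses in the link-local range; anything other than the HAIP aliases would indicate a conflict. This is a hedged sketch using the ip command, which reports the same information as ifconfig:

# List IPv4 addresses in the 169.254.0.0/16 link-local range;
# only the HAIP aliases (such as eth1:1) should appear.
[oracle@host01 ~]$ ip -4 addr | grep 169.254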

 

When Grid Infrastructure is stopped, the ifconfig command no longer shows the eth1:1 device. The gv$cluster_interconnects view shows the HAIP addresses for each instance.

 

select
     inst_id,
     name,
     ip_address
  from
     gv$cluster_interconnects;

   INST_ID NAME            IP_ADDRESS
---------- --------------- ----------------
         1 eth1:1          169.254.225.190
         2 eth1:1          169.254.230.98

 

Notice that for instance 1, the name of the interconnect device and the IP address seen in gv$cluster_interconnects are the same as those shown by the ifconfig command. The alert log also shows HAIP being configured on instance startup.

 

Private Interface 'eth1:1' configured from GPnP for use as a private interconnect.
  [name='eth1:1', type=1, ip=169.254.225.190, mac=08-00-27-54-73-8f, net=169.254.0.0/16, mask=255.255.0.0, use=haip:cluster_interconnect/62]
Public Interface 'eth0' configured from GPnP for use as a public interface.
  [name='eth0', type=1, ip=192.168.56.71, mac=08-00-27-98-ea-fe, net=192.168.56.0/24, mask=255.255.255.0, use=public/1]

 

While HAIP is running, there is no redundancy or additional network bandwidth, because only one network interface is configured. If a second network interface is available for the private network, it will need to be added to Grid Infrastructure. The device must already be a properly configured network adapter in the operating system, with the same configuration as the current interface: both must be on the same subnet, have the same MTU size, and so on. A quick check is sketched below; the oifcfg command is then used to set the new interface as a cluster_interconnect device.
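As a hedged sketch using the interface names from this example, the subnet and MTU of the existing and the new private interface can be compared before handing the adapter to Grid Infrastructure:

# Both interfaces should report the same subnet (192.168.10.0/24 here)
# and the same MTU (9000 here) before eth3 joins the interconnect.
[root@host01 ~]# ip addr show eth1 | grep -E 'mtu|inet '
[root@host01 ~]# ip addr show eth3 | grep -E 'mtu|inet '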

 

[oracle@host01 bin]$ ./oifcfg setif -global eth3/192.168.10.0:cluster_interconnect

[oracle@host01 bin]$ ./oifcfg getif
eth0  192.168.56.0  global  public
eth1  192.168.10.0  global  cluster_interconnect
eth3  192.168.10.0  global  cluster_interconnect

 

The device eth3 is now part of the Cluster Interconnect. The command does not need to be repeated on the other nodes, as Grid Infrastructure takes care of that for us. On host02, the device is already configured.

 

[oracle@host02 bin]$ ./oifcfg getif
eth0  192.168.56.0  global  public
eth1  192.168.10.0  global  cluster_interconnect
eth3  192.168.10.0  global  cluster_interconnect

 

Grid Infrastructure needs to be restarted on all nodes.

 

[root@host01 bin]# ./crsctl stop crs
[root@host01 bin]# ./crsctl start crs

 

Once the cluster nodes are back up and running, the new interface will be part of the RAC HAIP configuration.

 

[root@host01 ~]# ifconfig -a
eth0      Link encap:Ethernet  HWaddr 08:00:27:98:EA:FE
          inet addr:192.168.56.71  Bcast:192.168.56.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe98:eafe/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:5215 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6593 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:2469064 (2.3 MiB)  TX bytes:7087438 (6.7 MiB)

eth1      Link encap:Ethernet  HWaddr 08:00:27:54:73:8F
          inet addr:192.168.10.1  Bcast:192.168.10.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe54:738f/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
          RX packets:3517 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2771 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:789056 (770.5 KiB)  TX bytes:694387 (678.1 KiB)

eth1:1    Link encap:Ethernet  HWaddr 08:00:27:54:73:8F
          inet addr:169.254.21.30  Bcast:169.254.127.255  Mask:255.255.128.0
          UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1

eth3      Link encap:Ethernet  HWaddr 08:00:27:6A:8B:8A
          inet addr:192.168.10.3  Bcast:192.168.10.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe6a:8b8a/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
          RX packets:857 errors:0 dropped:0 overruns:0 frame:0
          TX packets:511 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:158563 (154.8 KiB)  TX bytes:64923 (63.4 KiB)

eth3:1    Link encap:Ethernet  HWaddr 08:00:27:6A:8B:8A
          inet addr:169.254.170.240  Bcast:169.254.255.255  Mask:255.255.128.0
          UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1

 

The new interface is also found in the gv$cluster_interconnects view.

 

select
     inst_id,
     name,
     ip_address
  from
     gv$cluster_interconnects;

   INST_ID NAME            IP_ADDRESS
---------- --------------- ----------------
         1 eth1:1          169.254.21.30
         1 eth3:1          169.254.170.240
         2 eth1:1          169.254.75.234
         2 eth3:1          169.254.188.35

 

In the end, setting up RAC HAIP was as simple as using the oifcfg setif command to add a new network adapter to the Cluster Interconnect and restarting Grid Infrastructure. In addition to removing a single point of failure in the Cluster Interconnect, RAC HAIP makes additional bandwidth available to the private network, which can help improve cluster performance.


 
 
 
Learn RAC Tuning Internals!

This is an excerpt from the landmark book Oracle RAC Performance Tuning, a book that provides real-world advice for resolving the most difficult RAC performance and tuning issues.

Buy it for 30% off directly from the publisher.

