Installing Cluster Ready Services (CRS)

Oracle RAC Cluster Tips by Burleson Consulting

This is an excerpt from the bestselling book Oracle Grid & Real Application Clusters.  To get immediate access to the code depot of working RAC scripts, buy it directly from the publisher and save more than 30%.


9. Now the installation begins. The OUI first copies the software to the local node and then to the remote nodes.

10. When the file copy finishes, the OUI displays a dialog indicating that the root.sh script must be run on all of the nodes, as shown in Figure 6.13.

Figure 6.13: root.sh script

11. Figure 6.14 shows the output of the root.sh script. In this stage, the voting disk is formatted, the CRS daemons are added to /etc/inittab, and the CSS daemon is activated.

Figure 6.14: Execution of CRS root.sh

12. Remember to run the root.sh script on each node, one node at a time. When root.sh executes on the last node, it runs the following assistants without further intervention:

a. Oracle Cluster Registry Configuration Tool (ocrconfig) - If this tool detects a 9.2.0.2 version of RAC, it upgrades the OCR block format from 9.2.0.2 to the Oracle Database 10g format.

b. Cluster Configuration Tool (clscfg) - This tool automatically configures the cluster and creates the OCR keys.

13. This completes the CRS installation.
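The node-by-node root.sh sequence in step 12 can be sketched as a small driver script. The node names and CRS home path here are assumptions, and the echo keeps this a dry run rather than a real remote invocation:

```shell
#!/bin/sh
# Dry-run sketch of step 12: run root.sh as root on each cluster node in turn.
# NODES and CRS_HOME are placeholders -- substitute your own hostnames and path.
NODES="node1 node2"
CRS_HOME="/u01/app/oracle/product/10.1.0/crs"

run_root_sh() {
    for node in $NODES; do
        # Replace 'echo' with the real call once the command is verified, e.g.:
        #   ssh root@"$node" "$CRS_HOME/root.sh"
        echo "would run: ssh root@$node $CRS_HOME/root.sh"
    done
}
run_root_sh
```

Note that the script must complete on one node before it is started on the next; running root.sh on two nodes concurrently is not supported.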

After the CRS is successfully installed, the following services should be running:

* oprocd - Process monitor for the cluster. Note that this process will only appear on platforms that do not use vendor clusterware with CRS.

* evmd - Event manager daemon that starts the racgevt process to manage callouts.

* ocssd - Manages cluster node membership and runs as the oracle user. Failure of this process results in a node restart.

* crsd - Performs high-availability recovery and management operations, such as maintaining the OCR. It also manages application resources, runs as the root user, and restarts automatically upon failure.
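A quick way to confirm these daemons are up after the install is a small status loop; this is only a sketch, assuming pgrep is available, and oprocd is omitted since it appears only on platforms without vendor clusterware:

```shell
#!/bin/sh
# Report whether each CRS daemon described above is currently running.
# pgrep -x matches the exact process name.
check_crs_daemons() {
    for d in evmd ocssd crsd; do
        if pgrep -x "$d" >/dev/null 2>&1; then
            echo "$d: running"
        else
            echo "$d: NOT running"
        fi
    done
}
check_crs_daemons
```

On a healthy cluster node all three lines should report "running"; a "NOT running" line is a cue to check the init entries shown below.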

The installer also adds entries to /etc/init.d, as shown below. These init scripts spawn the processes required for Cluster Ready Services to function.

[root@node2 rc.d]# cd /etc/init.d
[root@node2 init.d]# ls -lt | head

total 384

-rwxr-xr-x    1 root     root          763 Jan 27 17:28 init.crs
-rwxr-xr-x    1 root     root         2261 Jan 27 17:28 init.crsd
-rwxr-xr-x    1 root     root         5950 Jan 27 17:28 init.cssd
-rwxr-xr-x    1 root     root         2280 Jan 27 17:28 init.evmd
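The presence and permissions of these four scripts can be sanity-checked with a short loop; a minimal sketch, assuming the standard /etc/init.d location from the listing above:

```shell
#!/bin/sh
# Verify that the four CRS init scripts from the listing above exist
# in /etc/init.d and are executable, as the installer should have left them.
check_init_scripts() {
    for f in init.crs init.crsd init.cssd init.evmd; do
        if [ -x "/etc/init.d/$f" ]; then
            echo "$f: present and executable"
        else
            echo "$f: missing or not executable"
        fi
    done
}
check_init_scripts
```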

The /etc/inittab.crs file describes how the init process should manage the CRS daemons. The details are shown next:

[root@node2 etc]# more /etc/inittab.crs

#
# inittab       This file describes how the INIT process should set up
#               the system in a certain run-level.
#
# Author: Miquel van Smoorenburg, <miquels@drinkel.nl.mugnet.org>
#         Modified for RHS Linux by Marc Ewing and Donnie Barnes
#

# Default runlevel. The runlevels used by RHS are:
# 0 - halt (Do NOT set initdefault to this)
# 1 - Single user mode
# 2 - Multiuser, without NFS (The same as 3,if you do not have networking)
# 3 - Full multiuser mode
# 4 - unused
# 5 - X11
# 6 - reboot (Do NOT set initdefault to this)
#
id:3:initdefault:

# System initialization.

si::sysinit:/etc/rc.d/rc.sysinit

l0:0:wait:/etc/rc.d/rc 0
l1:1:wait:/etc/rc.d/rc 1
l2:2:wait:/etc/rc.d/rc 2
l3:3:wait:/etc/rc.d/rc 3
l4:4:wait:/etc/rc.d/rc 4
l5:5:wait:/etc/rc.d/rc 5
l6:6:wait:/etc/rc.d/rc 6

# Things to run in every runlevel.

ud::once:/sbin/update

sm:S:wait:/sbin/sulogin

# Trap CTRL-ALT-DELETE

#ca::ctrlaltdel:/sbin/shutdown -t3 -r now

# When our UPS tells us power has failed, assume we have a few minutes
# of power left.  Schedule a shutdown for 2 minutes from now.
# This does, of course, assume you have powerd installed and your
# UPS connected and working correctly. 

pf::powerfail:/sbin/shutdown -f -h +2 "Power Failure; System Shutting Down"

# If power was restored before the shutdown kicked in, cancel it.

pr:12345:powerokwait:/sbin/shutdown -c "Power Restored; Shutdown Cancelled"

# Run gettys in standard runlevels

1:2345:respawn:/sbin/mingetty tty1
2:2345:respawn:/sbin/mingetty tty2
3:2345:respawn:/sbin/mingetty tty3
4:2345:respawn:/sbin/mingetty tty4
5:2345:respawn:/sbin/mingetty tty5
6:2345:respawn:/sbin/mingetty tty6

# Run xdm in runlevel 5

# xdm is now a separate service

x:5:respawn:/etc/X11/prefdm -nodaemon
S0:2345:respawn:/sbin/agetty -L rconsole 38400 dumb
h1:235:respawn:/etc/init.d/init.evmd run >/dev/null 2>&1 </dev/null
h2:235:respawn:/etc/init.d/init.cssd fatal >/dev/null 2>&1 </dev/null
h3:235:respawn:/etc/init.d/init.crsd run >/dev/null 2>&1 </dev/null
h4:235:once:/etc/init.d/init.crsd run -1 >/dev/null 2>&1 </dev/null

Also note that the ports used by the services are displayed while root.sh runs. They range from 49895 to 49898:

Using ports: CSS=49895 CRS=49896 EVMC=49897 and EVMR=49898

The netstat command confirms that these ports are in use:

[root@node2 /]# netstat | grep 4989

tcp  0 0 private-link1:32813     private-link2:49897     ESTABLISHED
tcp  0 0 private-link1:49895     private-link2:32773     ESTABLISHED
tcp  0 0 private-link1:49897     private-link2:32803     ESTABLISHED

Thus, each node in the cluster has its own CRS home, enabling the clusterware to operate on every node.

This completes the description of CRS installation and cluster environment preparation in a typical Linux environment. The next step is to install the Oracle Database software with the RAC option on the selected cluster nodes.

On operating platforms where vendor-supplied clusterware is installed, follow the documentation supplied by the vendor. The next couple of sections provide brief details; these methods are specific to a particular server and platform.

 


This is an excerpt from the bestselling book Oracle Grid & Real Application Clusters, Rampant TechPress, by Mike Ault and Madhu Tumma.
http://www.rampant-books.com/book_2004_1_10g_grid.htm
