Oracle10g introduces Cluster Ready Services (CRS), which provides many system management services for the cluster.
The Oracle Universal Installer (OUI)
installs CRS on each node on which the OUI detects that vendor
clusterware is running. In addition, the CRS home is distinct from
the RAC-enabled Oracle home. The CRS home can either be shared by
one or more nodes, or private to each node, depending on the
settings when the OUI is run. When vendor clusterware is present,
CRS interacts with the vendor clusterware to coordinate cluster
membership information.
For Oracle10g on Linux and Windows-based platforms, CRS coexists with, but does not interoperate with, vendor clusterware. Vendor clusterware may be used with all UNIX-based operating systems except Linux.
The Oracle Cluster Registry (OCR) contains the cluster and database configuration information for RAC and Cluster Ready Services (CRS), including the list of nodes in the cluster database, the CRS application resource profiles, and the authorizations for the Event Manager (EVM). The OCR can reside in a file on a cluster file system or on a shared raw device. The location of the OCR is specified when Real Application Clusters is installed.
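On an installed cluster, the OCR location and integrity can be verified from the command line. The following is a minimal sketch, assuming a UNIX-style installation with the CRS home binaries in the PATH:

    # Report the OCR location, version, free space, and integrity status
    ocrcheck

    # On Linux, the registered OCR location is also recorded in this file
    cat /etc/oracle/ocr.loc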
CRS makes it possible to package a set of applications so that they run under CRS control and access the RAC database. Each such application is defined to CRS by an application resource profile, which describes the resource and how CRS manages it.
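The resources registered with CRS, together with their profile attributes, can be inspected with the crs_stat utility. A minimal sketch follows; the resource name is only a placeholder, so substitute one reported by crs_stat -t:

    # List all registered CRS resources with their current state and node
    crs_stat -t

    # Print the full resource profile (attributes) for a single resource
    crs_stat -p ora.racnode1.vip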
Prior to the 10g release, the
cluster manager implementations on some platforms were referred to
as Cluster Manager. In Oracle10g, Cluster Synchronization Services (CSS)
is the cluster manager on all platforms. The Oracle Cluster
Synchronization Service Daemon (OCSSD) performs this function on
UNIX-based platforms. On Windows-based platforms, the
OracleCSService, OracleCRService, and OracleEVMService provide the
cluster manager functionality.
CRS Features
CRS is required in order to install and run an Oracle 10g RAC system. CRS can run on top of vendor-provided cluster software, but the vendor-supplied clusterware itself is optional.
The CRS software is installed in the cluster with its own set of binaries; the CRS home and the Oracle home are in different locations. The CRS installation uses two shared disk locations or files: the voting disk and the OCR file. Installing CRS also configures the virtual IP (VIP) interface, and the VIP is associated with the defined workload services. CRS resources can also be managed with the srvctl utility.
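As an illustration, a few common srvctl and crsctl checks are shown below. The database name RACDB and node name racnode1 are placeholders for this sketch:

    # Verify that the CRS stack (CRS, CSS, EVM) is healthy on the local node
    crsctl check crs

    # Check the node applications (VIP, GSD, ONS, listener) on one node
    srvctl status nodeapps -n racnode1

    # Check, and if necessary start, the RAC database managed by CRS
    srvctl status database -d RACDB
    srvctl start database -d RACDB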
CRS has several daemon processes, described below; a quick way to verify that they are running is shown after the list.
CRSD
- The CRS Daemon is the main background process for managing the high availability (HA) operation of the services. It manages the application resources defined within the cluster and maintains the configuration profiles stored in the Oracle Cluster Registry (OCR).
OCSSD
- This daemon provides the basic cluster locking and node membership services; it knows which nodes belong to the cluster and their membership status. It also manages shared access to the disk devices among the clustered nodes and is associated with the Automatic Storage Management (ASM) instance.
EVMD
- This is the event management daemon and logger. It monitors the message flow between the nodes and logs the relevant event information to the log files.
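To confirm that these daemons are running on a UNIX-based node, a quick check such as the following can be used; the exact process names vary slightly by platform:

    # Look for the CRS, CSS, and EVM daemons at the operating system level
    ps -ef | egrep 'crsd|ocssd|evmd' | grep -v grep

    # Ask CRS itself for the health of each component
    crsctl check crs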
Cluster Private Interconnect
The cluster interconnect is a
high bandwidth, low latency communication facility that connects
each node to other nodes in the cluster and routes messages among
the nodes. It is a key component in building the RAC system.
In a RAC database, the cluster interconnect is used for the following high-level functions (a way to check the interconnect configuration is shown after the list):
* Monitoring health, status, and synchronization messages
* Transporting lock management or resource coordination messages
* Moving the cache buffers (data blocks) from node to node.
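On a running cluster, it is worth confirming which interfaces are registered and used for the private interconnect. The sketch below assumes the CRS home utilities are in the PATH and that a SYSDBA connection is available:

    # Show which interfaces are registered as public or cluster_interconnect
    oifcfg getif

    # Confirm the interconnect address each instance is actually using
    sqlplus -s "/ as sysdba" <<'EOF'
    SELECT inst_id, name, ip_address FROM gv$cluster_interconnects;
    EOF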
High performance database
computing involves distributing the processing across an array of
cluster nodes. It requires that the cluster interconnect provide
high data rates and low-latency communication between node
processes.
The interconnect technology employed to connect the RAC nodes should be scalable enough to handle the amount of traffic generated by the cache synchronization mechanism. This traffic is directly related to the amount of contention created by the application: the more inter-instance updates and inter-instance transfers there are, the more message traffic is generated. It is advisable to implement the highest-bandwidth, lowest-latency interconnect available for a given platform.
The volume of synchronization traffic directly impacts the bandwidth requirement, and messaging delays are highly dependent on the IPC protocol. The interconnect is not something that should be under-configured if scalability is a key objective.
For Linux environments where the interconnect is Gigabit Ethernet, Oracle recommends using UDP as the IPC protocol in preference to TCP. Oracle10g has also extended support for emerging technologies such as InfiniBand, which will greatly improve interconnect scalability and standardization for large numbers of nodes, as well as provide a choice of interconnects under Linux.
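One way to confirm which IPC protocol and interconnect address an instance is actually using is to dump the IPC information with oradebug; the output is written to a trace file in the instance's user dump destination. A minimal sketch, assuming a SYSDBA connection on one of the instances:

    # Dump IPC and interconnect details (including the protocol in use) to a trace file
    sqlplus -s "/ as sysdba" <<'EOF'
    oradebug setmypid
    oradebug ipc
    exit
    EOF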