This is an excerpt from the book Oracle Grid & Real Application Clusters.
Another competing parallel
database that follows the shared-nothing architecture is DB2 UDB
Enterprise-Extended Edition (EEE). This product follows the shared-nothing
model, with each node having its own set of disks. Each
instance, or node, has ownership of a distinct subset of the data, and
all access to this data is performed by the owning instance. Thus,
it is a partitioned database, as shown in Figure 3.13. However, the
disks are physically attached to more than one node. In case of a
node failure, ownership of the disk subsystem moves over to another node.
The basic method UDB (EEE)
follows is the distribution of data and database functions across
multiple hosts. It uses a hashing algorithm that enables it to
manage the distribution and redistribution of data as required. A
database partition is a part of the database that has its own
portion of the user data, indexes, configuration files, and transaction logs.
Figure 3.13: UDB (EEE) Three
The shared-nothing architecture
allows parallel queries to be processed with minimal
contention for resources among the hosts in the DB2 cluster.
Because the number of data partitions has little impact on traffic
between hosts, performance scales in an almost linear manner
as more machines are added to the DB2 cluster.
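The hash distribution described above can be sketched in a few lines. This is a minimal illustration in the spirit of the scheme, assuming an md5-modulo mapping and made-up node names; it is not DB2's actual hashing algorithm.

```python
# Sketch of hash-based data distribution across database partitions.
# The node names and md5-modulo mapping are illustrative assumptions.
import hashlib

NUM_PARTITIONS = 3
partition_hosts = {0: "nodeA", 1: "nodeB", 2: "nodeC"}

def partition_for(key: str) -> int:
    """Hash a distribution key to one of the partitions."""
    digest = hashlib.md5(key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_PARTITIONS

def owning_host(key: str) -> str:
    """Only the owning node accesses the data for this key."""
    return partition_hosts[partition_for(key)]
```

Because every node applies the same deterministic hash, any node can compute which partition owns a given key without consulting the others.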
UDB (EEE) uses the concept of
function shipping. Function shipping helps reduce
network traffic because functions, such as SQL queries, are shipped
instead of data. Function shipping means that relational operators
are executed on the node or processor containing the data whenever
possible; the operation, or SQL, is moved to where the data
resides. Function shipping is well suited to the shared-nothing architecture.
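The idea of function shipping can be illustrated with a small sketch. The in-memory "nodes" table and the filter operator below are assumptions made up for this example, not DB2 internals.

```python
# Illustrative sketch of function shipping: each query fragment runs on
# the node that owns the rows, so only the (small) filtered result
# crosses the network, instead of whole partitions of data.

nodes = {
    "nodeA": [{"id": 1, "amount": 500}, {"id": 2, "amount": 75}],
    "nodeB": [{"id": 3, "amount": 900}, {"id": 4, "amount": 20}],
}

def run_on_node(node, operator):
    """Execute a relational operator where the data lives."""
    return operator(nodes[node])

def parallel_query(operator):
    """Ship the same operator to every partition and merge the results."""
    results = []
    for node in nodes:
        results.extend(run_on_node(node, operator))
    return results

# Each node filters locally; only matching rows travel between hosts.
big_orders = parallel_query(lambda rows: [r for r in rows if r["amount"] > 100])
```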
In case of the failure of one
node, a pre-configured node takes over the disk subsystem and makes the
data available through that node. A cluster script starts the DB2 UDB
EEE database partitions on the take-over node. Once this script
completes, all the database partitions in the DB2 UDB EEE database
are available, and processing continues as usual.
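The takeover logic such a cluster script implements might look like the following sketch. The partition map, node names, and takeover pairs are hypothetical; a real HACMP script would invoke the actual disk-acquisition and DB2 start commands.

```python
# Sketch of failover: on a node failure, the pre-configured takeover
# node acquires the failed node's partitions and restarts them.
# All names here are hypothetical, not real cluster configuration.

partition_owner = {"p0": "nodeA", "p1": "nodeB"}
takeover_node = {"nodeA": "nodeB", "nodeB": "nodeA"}  # pre-configured pairs

def fail_over(failed: str):
    """Move every partition owned by the failed node to its takeover node."""
    restarted = []
    for part, owner in partition_owner.items():
        if owner == failed:
            partition_owner[part] = takeover_node[failed]
            restarted.append(part)  # the real script would restart the partition here
    return restarted
```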
As an example, when an RS/6000 SP
cluster is implemented with HACMP to support UDB EEE, nodes are
usually configured in one of three ways:
* Idle Standby - A
standby SP node is provided that takes over the work of a
failed SP node. The standby SP node has access to all resources
required to provide the essential services, such as disks,
networks, and so on. When the failed SP node is fixed and
reintegrated into the cluster, it reclaims its resources.
* Rotating Standby - A
standby SP node is provided to take over the work of a failed SP
node, as in the idle standby scenario. However, when the failed SP
node is reintroduced, it does not reclaim its resources, but becomes
the new standby machine.
* Mutual Takeover - There
are no standby SP nodes; all SP nodes are utilized in the normal
state. After an SP node failure, the failed SP node's resources and
essential services are taken over by one of the surviving SP nodes
in addition to its normal services.
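The difference between the first two policies comes down to what happens when the repaired node rejoins. The toy model below captures those state transitions; it is illustrative only, not HACMP configuration syntax.

```python
# Toy model of the takeover policies listed above (illustrative only).

def reintegrate(policy: str, active: str, repaired: str):
    """Return (active, standby) after the repaired node rejoins the cluster."""
    if policy == "idle_standby":
        return repaired, active      # repaired node reclaims its resources
    if policy == "rotating_standby":
        return active, repaired      # repaired node becomes the new standby
    raise ValueError("mutual takeover keeps no dedicated standby node")
```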
DB2 UDB EEE supports a diverse
set of hardware options including SMP, MPP, NUMA, and RISC servers,
and clustered configurations with a range of interconnect options.
DB2 exploits high-availability solutions on each platform. DB2 UDB
EEE can run on multiple operating systems including IBM AIX, Linux,
HP-UX, Sun Solaris, and Windows NT.
General Requirements for Parallel Database Clusters
In this section, the following
critical issues pertaining to the scalable cluster and the parallel
database will be examined:
* Avoiding Split Brain
* I/O Fencing
* Arbitration through Quorum
* Cache Coherency and Lock Management