High Performance, Availability
and Computing
Highly available and scalable computer systems and applications are an
essential part of today's business environment. This availability and
scalability is achieved through cluster technology and fault-tolerant
systems. Server clustering technology, which supports both high
performance computing and high availability, has been in wide use for
over a decade.
Research institutions have always been at the forefront of using high
performance computers; analyzing large datasets with complex
algorithms is common in academic research.
The Internet, with its potential to connect virtually every computer
in the world, has made database technology more crucial than ever.
With increasing numbers of users connecting concurrently to databases
to query and update data, high-performing servers are essential.
Uneven workload patterns and long-running, complex data warehousing
applications demand high-performing database technology, and database
software must be able to cope with this increased demand and complexity.
Parallel execution, or parallel processing, is one of the most
effective methods of optimizing data storage and retrieval. Parallel
execution focuses on achieving faster response times and better
utilization of the multiple CPUs available on the database server.
Many database management systems, including Oracle, take advantage of
faster, multi-CPU computers by processing and retrieving data in
parallel.
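As a concrete illustration, the following sketch uses the python-oracledb driver against a hypothetical ORDERS table (the connection details and table are placeholders, not part of the original text) and asks Oracle to consider parallel execution for a single query by way of a PARALLEL hint; the degree actually used depends on instance and table settings.

import oracledb

# Placeholder connection details for illustration only.
conn = oracledb.connect(user="app_user", password="app_pwd", dsn="dbhost/orclpdb1")
cur = conn.cursor()

# The PARALLEL hint asks the optimizer to consider splitting the scan and
# aggregation of the (hypothetical) ORDERS table across multiple parallel
# execution servers, each of which can run on its own CPU.
cur.execute("""
    SELECT /*+ PARALLEL(o, 4) */ o.status, COUNT(*)
    FROM orders o
    GROUP BY o.status
""")
for status, order_count in cur:
    print(status, order_count)

conn.close()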
Parallel Processing
Parallel execution, or processing, involves dividing a task into
several smaller tasks and working on each of those smaller tasks in
parallel. In a parallel processing system, multiple processes may
reside on a single computer, or they may be spread across separate
computers or nodes, as in a cluster.
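To make the idea concrete, here is a minimal sketch in Python (the counting task, the chunking scheme, and the worker count are illustrative assumptions): a single large job is divided into smaller sub-tasks, separate worker processes execute those sub-tasks in parallel, and the partial results are then combined.

from multiprocessing import Pool

def count_even(chunk):
    # Sub-task: count the even values in one slice of the data.
    return sum(1 for value in chunk if value % 2 == 0)

if __name__ == "__main__":
    data = list(range(1_000_000))
    workers = 4

    # Divide the single large task into several smaller tasks.
    chunks = [data[i::workers] for i in range(workers)]

    # Work on the smaller tasks in parallel, one process per chunk;
    # the processes may run on different CPUs of the same machine.
    with Pool(processes=workers) as pool:
        partial_counts = pool.map(count_even, chunks)

    # Combine the partial results into the final answer.
    print(sum(partial_counts))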
In a clustered architecture, two or more nodes, or servers, are
interconnected and share common storage resources. Each node has its
own set of processors; a cluster is usually an aggregation of multiple
SMP (symmetric multiprocessing) nodes. A cluster achieves scalability
on a modular basis: as the need arises, additional servers can be
added to the cluster.
Scalability
Scalability is the ability to maintain performance levels as the
workload increases by incrementally adding more system capacity, such
as additional processors and memory. On a single-processor system,
scalability becomes difficult to achieve beyond a certain point.
Parallelization across multi-processor servers provides better
scalability than a single-processor system can offer.
Scalability can be understood from two different perspectives: a
speed-up of tasks within the system, where the same work completes in
less time as resources are added, and an increase in concurrency in
the system, where a proportionally larger workload is handled in the
same time; the latter is sometimes referred to as scale-up.
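Expressed as ratios (a standard formulation; the symbols here are chosen purely for illustration), the two notions are:

\[
\text{speed-up} = \frac{T_{\text{original}}}{T_{\text{parallel}}},
\qquad
\text{scale-up} = \frac{W_{\text{parallel}}}{W_{\text{original}}}
\]

where \(T\) is the elapsed time for a fixed amount of work and \(W\) is the amount of work completed in a fixed elapsed time.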
There are two ways to achieve speed-up of tasks:
- Increasing the execution capacity of a server's existing hardware
components through multiple CPUs
- Breaking the job into multiple sub-tasks and assigning those
sub-tasks to multiple processors to execute concurrently, as in the
sketch following this list
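A minimal sketch of the second approach (the workload, slice boundaries, and worker count are assumptions made purely for illustration): a CPU-bound job is split into slices, the slices are handed to a pool of worker processes, and the measured serial and parallel elapsed times give the achieved speed-up.

import math
import time
from concurrent.futures import ProcessPoolExecutor

def partial_sum(bounds):
    # Sub-task: CPU-bound work over one slice of the full range.
    lo, hi = bounds
    return sum(math.sqrt(i) for i in range(lo, hi))

if __name__ == "__main__":
    total = 4_000_000
    workers = 4
    step = total // workers
    slices = [(i * step, (i + 1) * step) for i in range(workers)]

    # Run the whole job on a single processor.
    start = time.perf_counter()
    serial_result = partial_sum((0, total))
    serial_time = time.perf_counter() - start

    # Break the job into sub-tasks and run them on multiple processors.
    start = time.perf_counter()
    with ProcessPoolExecutor(max_workers=workers) as pool:
        parallel_result = sum(pool.map(partial_sum, slices))
    parallel_time = time.perf_counter() - start

    # Speed-up = serial elapsed time / parallel elapsed time.
    print(f"speed-up: {serial_time / parallel_time:.2f}x")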