RAM that is slow relative to the CPU is what makes NUMA worthwhile. Oracle invokes NUMA optimizations on your behalf; no manual configuration is required.
All 64-bit servers have a 64-bit word size, giving an address space of 2 to the 64th power bytes, which allows for up to roughly 18 billion GB of addressable RAM. DBAs may be tempted to create a very large RAM data buffer. Data warehouse systems, however, tend to bypass the data buffers because of parallel full-table scans, making disk I/O throughput the single most critical bottleneck.
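The 64-bit address-space figure above is simple arithmetic; the short sketch below just makes it concrete (the variable names are mine, not from the text):

```python
# Illustrative arithmetic only: the theoretical 64-bit address space.
word_bits = 64
addressable_bytes = 2 ** word_bits          # 18,446,744,073,709,551,616 bytes
addressable_gb = addressable_bytes / 10**9  # decimal gigabytes

print(f"{addressable_gb:.1f} GB")  # roughly 18.4 billion GB (16 EiB)
```

Of course, no current server is populated with anywhere near this much physical RAM; the point is that the 64-bit word size removes the addressing limit as a constraint.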
Most SMP servers have a specialized high-speed RAM called an L2 cache that is located close to the CPUs.
Oracle recognizes NUMA systems and adjusts memory and scheduling operations accordingly. NUMA technology allows faster communication between distributed memory banks in a multi-processor server. NUMA is fully supported by Linux and Windows, so Oracle can better exploit high-end NUMA hardware in SMP servers.
In NUMA (non-uniform memory access) architecture, the processors in a computer system are grouped into units usually called quads, or node cards in SGI servers. Each quad has its own memory and I/O controller, and the quads are connected by high-speed interconnects. Unlike a cluster, all of these quads are part of a single node, so a NUMA system can be thought of as one large SMP system. However, memory is non-uniformly distributed among the processors: each quad has its own localized memory, but that memory is accessible to the other quads. To the processors, all of the memory in a NUMA machine appears the same; the only difference is access time. NUMA is also called distributed shared memory (DSM) architecture.
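The defining property described above is that every quad can reach every quad's memory, but remote accesses pay an interconnect penalty. A minimal toy model, with made-up latency figures purely for illustration:

```python
# A toy model of non-uniform memory access: all memory is reachable from
# every quad, but remote accesses cross the interconnect and cost more.
# The latency figures below are illustrative assumptions, not measurements.
LOCAL_NS = 100    # assumed latency to a quad's own localized memory
REMOTE_NS = 300   # assumed latency to another quad's memory

def access_latency_ns(cpu_quad: int, memory_quad: int) -> int:
    """All memory appears the same; only the access time differs."""
    return LOCAL_NS if cpu_quad == memory_quad else REMOTE_NS

print(access_latency_ns(0, 0))  # local access
print(access_latency_ns(0, 3))  # remote access via the interconnect
```

This is why NUMA-aware software such as Oracle tries to schedule work on the quad whose local memory holds the data.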
Good examples of NUMA systems include the Sequent (now IBM) servers and the Silicon Graphics (SGI) 2000/3000 series.
Emerging Server Cluster Architectures
Any server requires power and connectivity to storage and to an IP network. When servers are clustered, the cluster usually also requires a redundant heartbeat and cluster-management connection, and potentially redundant connections to dual-ported storage. As a cluster grows to many nodes, the cables and connectors of the physical environment become very complex and messy. Such a cluster architecture can introduce many points of failure and become a real nightmare for data center managers.
The concept of the bladed server, or blade server, is gaining wider acceptance because it helps to solve the complexities of cluster management and also provides a modular solution to server growth.
The BladeFrame architecture provides hot insertion and removal of servers, which are also called blades, along with cable consolidation. Process Area Network (PAN) manager software handles external storage mapping and virtualization, and controls I/O and network traffic to and from individual servers. The blade server provides a specially designed rack into which the blades fit; the idea is to save space and power, reduce cabling, and simplify maintenance and expansion.
Thus, the main features of blade technology include:
* A BladeFrame is a collection of blades.
* The networking and storage connectivity infrastructure is built in.
* That networking and storage infrastructure is common to all the blades in the frame.
* The power supply is common, but preferably fed from multiple sources.
* Each blade can act as a database server, an application server, or a client host.
* Each blade can run its own flavor of operating system, such as Linux or Windows.
* Each blade can be put to use for any number of roles, including load balancer, firewall, application server, database server, etc.
* All the components are housed in a rack.
Blade-technology-based server farms are available from Egenera, IBM, HP, Dell, and others. As an example, the BladeFrame system from Egenera allows for a pool of up to 96 high-end Intel processors deployable entirely through software, without the physical intervention of a system manager. The product consists of a 24x30x84-inch chassis containing 24 two-way and/or four-way SMP processing resources, redundant central controllers, redundant integrated switches, redundant high-speed interconnects, and Egenera's PAN Manager software.
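The key idea in that paragraph is deployment entirely through software: blades sit in a pool until a management layer assigns them a role. The sketch below is a loose, hypothetical model of that workflow; the class and method names are mine and do not correspond to any real PAN Manager API:

```python
# A sketch of software-only blade deployment: a management layer assigns
# roles to pooled blades without physical intervention. All names here
# are hypothetical, for illustration only.
class BladePool:
    def __init__(self, size: int):
        self.free = list(range(size))   # blade slots awaiting assignment
        self.assigned = {}              # blade id -> role

    def deploy(self, role: str) -> int:
        """Pull a free blade from the pool and give it a role."""
        blade = self.free.pop(0)
        self.assigned[blade] = role
        return blade

pool = BladePool(96)                    # e.g. a 96-processor frame
db = pool.deploy("database server")
app = pool.deploy("application server")
print(db, app, len(pool.free))
```

Reassigning a failed blade's role to a spare from the pool is what makes this model attractive to data center managers.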
In another development, the switched computing architecture popularized by TopSpin Communications provides a unified switched fabric for IPC, Fibre Channel, and Ethernet, interconnecting computing elements into server area networks. This enables the creation of virtual computers from pools of industry-standard processors, storage, and I/O building blocks. It improves performance in three parts of the network: host-to-host interconnect communications, host-to-LAN/WAN communications, and host-to-storage communications. Terabits of aggregate bandwidth in a single chassis and sub-10-microsecond latencies within the switches help in setting up high-performance clusters.
These new and evolving architectures, specifically Process Area Networks (PANs) and server area networks, are helping to create and manage powerful clusters.