Oracle Capacity Planning
Oracle Tips by Burleson Consulting
Introduction - Database Administration
The easiest way to perform Oracle capacity planning is with the Oracle capacity planning spreadsheets.
Database Administration has grown over the years from the mere management of a few tables and indexes to a complex, interlocking set of responsibilities ranging from managing database objects to participating in enterprise-wide decisions on hardware, software and development tools.
In order to fully perform these functions, the modern Oracle DBA needs a large skill set. Over the next few hours we will discuss the skills needed and specifically how they apply to an Oracle DBA.
System Capacity Planning
In a greenfield operation (one where you are there before the equipment and database), a DBA will have a critical part to play in the planning and configuration of the new system. Even in existing environments, the ability to plan for new servers, new databases or improvements to existing databases and systems is critical.
Essentially the DBA must concern themselves
with two major issues:
- Get enough server capacity to ensure adequate performance
- Allow for enough backup and recovery horsepower to complete backups and recoveries within the required time constraints
All of this falls under the topic of resource and capacity planning.
Resource and Capacity Planning
Oracle is a resource-intensive database system. The more memory, CPU and disk resources you can provide Oracle, the better it performs. Resource planning with Oracle becomes more a game of "how much can we afford to buy" than "what is the minimum configuration". A minimally configured Oracle server will not perform efficiently.
Resource Specification for Oracle
In resource specification there are several questions which must be answered:
- How many users will be using the system, both now and in the future?
- How much data will the system contain, both now and in the future, and do we know the growth rates?
- What response times are expected?
- What system availability is expected?
Why are these questions important?
1. How many users will be using the system both now and in the future?
This question is important because it affects how much processing power is going to be required. The number of users will determine the number and speed of CPUs, the size of memory and the network-related configuration.
2. How much data will the system contain both now and in the future, and do we know the growth rates?
This question is important because it determines disk needs: how much storage will be required to hold the data we have today and how much will be needed to allow for growth. The answer to this question also helps determine how much memory will be required.
3. What response times are expected?
This question is important because it drives the number, type and speed of CPU resources, as well as network issues. In addition, it will drive disk configuration issues such as the number and speed of disks, the number and speed of controllers, and disk partitioning decisions.
4. What system availability is expected?
This question is important because system availability drives the type of RAID configuration (1, 0, 0/1, RAID 5), the type of backup expected (cold, hot) and any parallel server issues. The requirements change depending on whether the system only needs to be available during working hours, Monday through Friday, or is expected to be available 24x7. This also drives the type of backup media: is a single tape drive all that is required, or do we need a high-speed, multichannel, tape-stacker, silo-based solution?
To properly perform capacity planning, a cooperative effort must be undertaken among the system administrators, database administrators and network administrators.
Step 1: Size the Oracle Database:
A starting point for the whole process of
capacity planning is to know how many and what size databases will be
supported on a given server resource. The physical sizing of tables,
indexes, clusters and LOB storage areas will play a critical role in
sizing the overall database including the shared global memory areas
and disk farm. The DBA and designers must work in concert to
accurately size the database physical files. The design of the
database will also drive the placement and number of tablespaces and
other database resources such as size and quantity of redo logs,
rollback segments and their associated buffer areas.
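As a rough illustration of this physical sizing exercise, the sketch below estimates a data footprint from row counts and average row lengths. It is a minimal sketch only: the table names, row counts, row lengths and the overhead factor are hypothetical assumptions, not figures from this lesson, and your designers should substitute real numbers from the data model.
def estimate_table_mb(row_count, avg_row_bytes, overhead_factor=1.3):
    # overhead_factor is an assumed allowance for block headers, PCTFREE
    # and indexes; adjust it to match your own environment.
    return row_count * avg_row_bytes * overhead_factor / (1024 * 1024)

# Hypothetical schema used only to illustrate the arithmetic.
tables = {
    "orders":      (5_000_000, 250),   # (estimated rows, average row bytes)
    "order_lines": (20_000_000, 120),
    "customers":   (1_000_000, 400),
}

total_mb = sum(estimate_table_mb(rows, rbytes) for rows, rbytes in tables.values())
print("Estimated data footprint: %.1f GB" % (total_mb / 1024))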
Generally the database block buffer area of a database SGA will size out at between 1/20 and 1/100 of the sum of the physical database file sizes. For example, if the database physical size is 20 gigabytes, the database block buffers should size out to around 200 megabytes to 1 gigabyte, depending on how the data is being used. In most cases the SGA shared pool would size out at around 20-150 megabytes maximum, depending on the usage model for the shared SQL areas (covered in a later lesson). For a 20 gigabyte system the redo logs would most likely run between 20 and 80 megabytes; you would want them mirrored and in probably no fewer than 5 groups. The log buffer to support a 50 megabyte redo log file would be a minimum of 5 megabytes, maybe as large as 10 megabytes. The final major factor for the SGA would be the size of the sort area; for this size of database a 10-20 megabyte sort area is about right (depending on the number and size of sorts). Remember that sort areas can be either a part of the shared pool or a part of the large pool; this too we will cover in a later lesson.
So based on the above, what have we determined? Let's choose 400 megabytes for our database block buffer size, 70 megabytes for the shared pool, four 10-megabyte log buffers (40 megabytes) and a sort area size of 10 megabytes. We are looking at a 500-600 megabyte SGA with the other non-DBA-sizable factors added in. Since you are not supposed to use more than 60% of physical memory (depending on who you ask), this means we will need at least a gigabyte of RAM. With this size of database a single CPU probably won't give sufficient performance, so we are probably looking for at least a 4-processor machine. If we have more than one instance installed on the server, the memory requirements will go up.
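To make this arithmetic repeatable, here is a minimal sketch that restates the same rules of thumb (block buffers between 1/20 and 1/100 of the physical database size, the 60-percent-of-RAM guideline, and the component sizes chosen above). The function name and default values are simply a restatement of this example, not an Oracle-supplied utility.
def size_sga_mb(db_size_gb,
                buffer_ratio=1.0 / 50,     # between the 1/20 and 1/100 rules of thumb
                shared_pool_mb=70,
                log_buffer_mb=40,          # e.g. four 10-megabyte log buffers
                sort_area_mb=10,
                max_ram_fraction=0.60):    # use no more than ~60% of physical RAM
    # Rough SGA and minimum RAM estimate using the rules of thumb from this lesson.
    db_size_mb = db_size_gb * 1024
    block_buffers_mb = db_size_mb * buffer_ratio
    sga_mb = block_buffers_mb + shared_pool_mb + log_buffer_mb + sort_area_mb
    min_ram_mb = sga_mb / max_ram_fraction
    return block_buffers_mb, sga_mb, min_ram_mb

buffers, sga, ram = size_sga_mb(20)   # the 20 gigabyte example database
print("Block buffers ~%.0f MB, SGA ~%.0f MB, minimum RAM ~%.1f GB" % (buffers, sga, ram / 1024))
Rerunning the same calculation with a different database size or buffer ratio is how you keep the estimate current as the design grows.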
Step 2: Determine Number and Type of Users:
Naturally a one-user database will require fewer resources than a thousand-user database. Generally you will need to take a SWAG at how much memory and disk resources each user will require. An example would be to assume that, of an installed user base of 1000 users, only 10 percent will be concurrently using the database. This leaves 100 concurrent users; of those, maybe a further 10 percent will be doing activities that require sort areas, which brings the number down to 10 users, each using (from our previous example) 10 megabytes of memory (100 megabytes total). In addition, each of the 100 concurrent users needs approximately 200K of process space (depending on activity, OS and other factors), so we are talking an additional load of 20 megabytes just for user process space. Finally, each of these users will probably require some amount of disk resource (less if they are client-server or web based); let's give them 5 megabytes of disk apiece to start, which adds up to 5 gigabytes of disk (give or take a meg or two).
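The same back-of-the-envelope figures can be written down so they are easy to rerun with different assumptions. This is a minimal sketch of the 1000:100:10 example above; all of the percentages and per-user figures are the assumptions stated in this lesson, not measured values.
def estimate_user_load(installed_users=1000,
                       concurrent_pct=0.10,    # fraction of installed users connected at once
                       sorting_pct=0.10,       # fraction of concurrent users doing sorts
                       sort_area_mb=10,
                       process_kb=200,         # per-connection process space
                       disk_mb_per_user=5):
    # Back-of-the-envelope memory and disk load for a given user population.
    concurrent = int(installed_users * concurrent_pct)      # 100 users
    sorting = int(concurrent * sorting_pct)                 # 10 users
    sort_mb = sorting * sort_area_mb                         # ~100 MB of sort areas
    process_mb = concurrent * process_kb / 1024.0            # ~20 MB of process space
    disk_gb = installed_users * disk_mb_per_user / 1024.0    # ~5 GB of user disk
    return sort_mb, process_mb, disk_gb

sort_mb, proc_mb, disk_gb = estimate_user_load()
print("Sort areas ~%d MB, process space ~%.0f MB, user disk ~%.1f GB" % (sort_mb, proc_mb, disk_gb))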
Step 3: Determine Hardware Requirements to Meet Required Response Times and Support User Load:
This step will involve the system administrator and perhaps the hardware vendor. Given our 1000:100:10 mix of users and any required response time numbers, they should be able to configure a server that will provide proper performance. Usually this will require multiple multi-channel disk interfaces and several physically separate disk arrays.
Step 4: Determine Backup Hardware to Support Uptime Requirements:
Here again the system administrator and hardware vendor will have a hand in the decision. Based on the size of the disks and the speed of the backup solution, a maximum recovery time should be developed. If there is no way to meet the uptime requirements using simple backup schemes, then more esoteric architectures may be indicated, such as multi-channel tapes, hot standby databases or even Oracle Parallel Server. Let's say we require 24x7 uptime with instantaneous failover (no recovery time, due to the mission-critical nature of the system). This type of specification would require Oracle Parallel Server in an automated failover setup. We would also use either a double or triple disk mirror so that we could split the mirror to perform backups without losing the protection of the mirroring.
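Whether a simple backup scheme can meet the window is ultimately arithmetic: database size divided by sustained backup throughput. A minimal sketch follows; the device names and throughput figures are hypothetical examples for illustration, not vendor specifications, so substitute the rates your hardware vendor quotes.
def backup_hours(db_gb, throughput_mb_per_sec):
    # Hours needed to back up (or restore) db_gb at a sustained throughput.
    return db_gb * 1024.0 / throughput_mb_per_sec / 3600.0

# Hypothetical device throughputs, for illustration only.
for device, mb_per_sec in [("single tape drive", 10), ("multi-channel tape stacker", 80)]:
    print("%s: ~%.1f hours for the 90 gigabyte disk farm" % (device, backup_hours(90, mb_per_sec)))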
Let's compile what we have determined so far:
Hardware: 2 to 4 CPUs (at the highest speed we can afford) with at least 1 gigabyte (preferably 2) of shared RAM; at least 2 disk controllers, each with multiple channels; 90 gigabytes of disk resource using a three-way mirror to give us one 30 gigabyte triple-mirrored array. The systems themselves should have an internal disk subsystem sufficient to support the operating system and any swap and paging requirements. The systems must be able to share disk resources, so they must support clustering. High-speed tape backup to minimize mirror-split times.
Software: Oracle Parallel Server, cluster management software, networking software, and backup software to support the backup hardware.
Capacity and resource planning is not an exact science. Essentially we are shooting for a moving target. The dual Pentium II 200 NT server with 10 gig of 2-gigabyte SCSI disks I bought 2 years ago for $5k has a modern equivalent in the Pentium III 400 with an internal 14 gig drive my father-in-law just purchased for $1k. By the time we specify and purchase a system it is already superseded. You should insist on being allowed to substitute more efficient, lower cost options as they become available during the specification and procurement phases.
This is an excerpt from the eBook "Oracle DBA made Simple".