Oracle UNIX Server Management
Oracle UNIX/Linux Tips by Burleson Consulting
UNIX Server Management
This chapter is devoted to the management of
the Oracle server in a UNIX environment. Entire books have been
written about using UNIX to monitor a server, but this chapter will
concentrate on the tools and techniques that the Oracle DBA uses to
monitor and manage an Oracle UNIX server.
This chapter begins with an
overview of the basics of the UNIX architecture and moves quickly into
UNIX commands to manage processes, memory, and semaphores. We will
also look at UNIX tools and utilities that help us track the
performance of our Oracle UNIX server. This chapter covers
the following topics:
* Process internals for UNIX
* Memory management in UNIX
* Process commands in UNIX
* Memory commands in UNIX
* Displaying UNIX kernel parameters
* Displaying system log messages
* UNIX server monitoring
Let's begin by exploring how the UNIX
operating system manages processes.
Process internals for UNIX
The center of the UNIX operating system is
called the UNIX kernel. The kernel implements the
interface between UNIX processes and all hardware devices, such as
disks, RAM, and the CPU.
User processes interact with UNIX by making
system calls to UNIX. These system calls include base UNIX
commands such as open(), read(), write(), exec(), and malloc(),
and these system calls are intercepted by the UNIX kernel and
processed according to specific rules (Figure 2-1).
Figure 2-1: User processes interacting with the UNIX kernel
Let's take a closer look at how a UNIX task
operates within the UNIX operating system.
The run queue and the sleep queue in UNIX
When a UNIX user process communicates with
UNIX, the process is placed into a temporary "sleep" state until the
system call is completed. This is known as the sleep queue,
and it is where UNIX tasks wait while UNIX system calls are being
serviced on their behalf. The process of a UNIX task sleeping and
re-awakening is called context switching. Active UNIX
processes commonly undergo context switching as they change from
active to waiting states.
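The rate of context switching can be observed directly. Here is a hedged sketch, assuming a Linux system, where /proc/stat exposes a cumulative counter; on other UNIX dialects, vmstat reports the same figure in its "cs" column:

```shell
# Print the cumulative number of context switches since boot (Linux-specific:
# the "ctxt" line of /proc/stat; on Solaris or AIX, use vmstat's "cs" column).
grep '^ctxt' /proc/stat

# Sample the counter twice, one second apart, to estimate context switches
# per second across the whole server.
c1=$(awk '/^ctxt/ {print $2}' /proc/stat)
sleep 1
c2=$(awk '/^ctxt/ {print $2}' /proc/stat)
echo "context switches/sec: $((c2 - c1))"
```

A busy Oracle server with many concurrent sessions will show a markedly higher rate than an idle one.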
All UNIX tasks enter the UNIX run queue
whenever they require UNIX services. This is sometimes called
the dispatch queue, and the run queue is a list of processes
prioritized by UNIX according to each task's dispatching priority,
which is called the nice value and is set by the priocntl
system call in UNIX (Figure 2-2).
Figure 2-2: The UNIX run queue
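The nice value can also be set from the shell without a direct priocntl call. A minimal sketch, assuming the portable nice command and a parent shell whose own nice value is 0:

```shell
# Start a command at a lowered dispatching priority (nice value raised by 10)
# and have the child report its own nice value back via ps.
# Typically prints 10 when the invoking shell's nice value is 0.
nice -n 10 sh -c 'ps -o ni= -p $$'
```

The renice command performs the same adjustment on an already-running process, which is often more useful for long-running Oracle batch jobs.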
Let's take a closer look at the interaction
between Oracle and UNIX at the process level.
Process command execution
To illustrate the communication between
UNIX and Oracle, let's use the example of a UNIX script that
accesses Oracle to display rollback segment information. Because of
the complex details and differences in UNIX dialects, this example
has been deliberately over-simplified for illustration purposes.
# First, we must set the environment . . . .
ORACLE_HOME=`cat /etc/oratab|grep ^$ORACLE_SID:|cut -f2 -d':'`; export ORACLE_HOME
$ORACLE_HOME/bin/sqlplus -s / <<!
select * from v\$rollstat;
!
echo All Done!
Note the reference to the v$rollstat view in
the UNIX script as v\$rollstat. In UNIX, you must place a
back-slash character in front of every dollar sign in all SQL
commands to tell UNIX that the dollar sign is a literal value and
not a UNIX shell variable.
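The effect of the backslash is easy to demonstrate at the shell prompt; without it, the shell silently swallows the view name as an (unset, hence empty) shell variable:

```shell
# With no backslash, the shell expands $rollstat as a shell variable,
# which is normally unset, so the view name is destroyed:
echo "select * from v$rollstat;"     # prints: select * from v;

# The backslash tells the shell that the dollar sign is a literal,
# so the full view name reaches SQL*Plus intact:
echo "select * from v\$rollstat;"    # prints: select * from v$rollstat;
```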
When we execute this script in UNIX, the
script performs the following UNIX system calls:
2 - read(/etc/oratab)
3 - fork(sqlplus)
4 - read(file#,block#)
5 - write(v$rollstat contents)
6 - write("All Done!")
Let's explore how UNIX forks sub-processes
in order to service a task.
The fork system call
The fork() system call directs UNIX to spawn
a sub-task to service the request. In this case, our Korn
shell script will fork two sub-processes (Figure 2-3).
Figure 2-3: Forking a UNIX process
These forked processes are visible by using
the UNIX ps -ef command. In the example below, we grep for all
processes owned by oracle, and then use the grep -v command to
remove all Oracle background processes. As we know, the Oracle
background processes (pmon, smon, arch, etc.) are all identified by
a UNIX process name in the form ora_processname_ORACLE_SID, such that we
see processes with names like ora_smon_testsid, ora_pmon_prod,
and so on.
root> ps -ef | grep ora | grep -v ora_
oracle 12624 12622 0 12:07:17 pts/5 0:00 -ksh
oracle 12579 12624 0 12:06:54 ?     0:00 sqlplus
Look closely at the ps -ef listing above and
note that the first three columns are as follows:
Column 1 - Process owner name
Column 2 - Process ID
Column 3 - Parent process ID
As we see, whenever a fork occurs, we can
track backwards to see the originating process. Here we see that our
UNIX session (process 12622) forked process 12624 when the Korn
shell script was started. Process 12624, in turn, forked
process 12579 to manage the connection to SQL*Plus. Here is a
step-by-step description of this interaction.
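This backward tracking can be scripted with the same ps columns. A small sketch, assuming a ps that supports the POSIX -o option:

```shell
# Ask ps for this shell's parent PID (column 3 of ps -ef), mirroring the
# way we traced process 12579 back to 12624, and then to 12622, above.
ppid=$(ps -o ppid= -p $$)
echo "process $$ was forked by process $ppid"
```

Repeating the lookup on $ppid walks further up the process tree, eventually reaching the login session or init.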
Note: The interactions in UNIX are very
complicated, and these examples have been made deliberately simple
to illustrate the basic concepts.
1 - Here we see that the initial task waits
in the run queue for service.
2 - Upon reaching the head of the run queue,
the ksh script is started, and it issues the read() to inspect the
/etc/oratab file. A context switch places it into a sleep state
until the I/O is complete.
3 - Upon receiving the desired data, the
process re-enters the run queue and waits until it can issue the
fork() command to start SQL*Plus. At this point a context
switch puts it to sleep until the SQL*Plus process has completed.
4 - The SQL*Plus command instructs
Oracle to issue a read() command to fetch the desired view
information from RAM in the SGA (the V$ views are in RAM,
not on disk). Upon completion of the read, a write() is issued
to display the results to the standard output device. SQL*Plus
then terminates and sends a signal back to the owner process.
5 - The owner process (ksh) then has a
context switch and re-awakens. After reaching the head of the
run queue, it issues a write() to standard output to display the "All
Done!" message.
The UNIX buffer cache
Just as Oracle has data buffer caches in RAM,
UNIX also utilizes a RAM buffer to minimize unnecessary disk
I/O. This buffer is commonly known as the Journal File System
or JFS buffer. When Oracle data is retrieved from the Oracle
database, the data block often travels through several layers of RAM
caches.
Internal file cache vs. JFS cache
As we see in Figure 2-4, when Oracle makes a
request to fetch a data block, the Oracle data buffer is first
checked to ensure that the block is not already in the Oracle
buffer. If it is not, the UNIX JFS buffer is then checked
for the data block. If the data block is not in the JFS
buffer, the disk array buffer is then checked. Only when none
of these three buffers contains the data block is a physical disk
I/O performed.
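The caching effect can be felt from the shell: the second read of a file is normally served from the UNIX buffer cache rather than from disk. A rough, hedged demonstration (the file name and 1 MB size are arbitrary choices for illustration):

```shell
# Create a 1 MB scratch file, then read it twice; the second read is
# usually much faster because the blocks are already in the RAM file cache.
tmpfile=/tmp/cache_demo.$$
dd if=/dev/zero of=$tmpfile bs=1024 count=1024 2>/dev/null

time cat $tmpfile > /dev/null    # first read: may require physical disk I/O
time cat $tmpfile > /dev/null    # second read: typically satisfied from cache

rm -f $tmpfile
```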
The JFS buffer and Oracle raw devices
Because of the high amount of I/O that many
Oracle systems experience, many Oracle DBAs consider the use of
"raw" devices. A raw device is defined as a disk that bypasses the
I/O overhead created by the Journal File System (JFS) in UNIX. The
reduction in overhead can improve throughput, but only in cases
where I/O is already the bottleneck for the Oracle database.
Furthermore, raw devices require a tremendous amount of manual work
for both the Oracle administrator and the systems administrator.
Oracle recommends that raw devices should only be considered when
the Oracle database is I/O bound. However, for these types of Oracle
databases, raw devices can dramatically improve overall performance.
If the database is not I/O bound, switching to raw devices will have
no impact on performance.
In many UNIX environments such as AIX, raw
devices are called virtual storage devices (VSDs). These VSDs are
created from disk physical partitions (PPs), such that a single VSD
can contain pieces from several physical disks. It is the job of the
system administrator to create a pool of VSDs for the Oracle
administrator. The Oracle administrator can then take these VSDs and
combine them into Oracle datafiles. This creates a situation where
an Oracle datafile may be made from several VSDs. This many-to-many
relationship between Oracle datafiles and VSDs makes Oracle
administration more challenging.
In summary, raw devices can provide improved I/O
throughput, but only for Oracle databases that are already I/O bound,
and this performance gain comes at the expense of increased
administrative overhead for the Oracle administrator. For systems
that are not I/O bound, moving to raw devices will not result in any
performance gains.
Now that we have a general idea of how UNIX
tasks operate, let's take a look at how RAM memory is managed in
UNIX.