Automatic Storage Management
Oracle Database Tips by Donald Burleson
Having created both 9i and 10g RAC databases using
OCFS, it is time to take a look at Automatic Storage Management, or ASM.
Internally, ASM is very similar to RAW. 10g
Release 1 does not provide a file system interface for listing,
moving or copying files stored on ASM. Instead, the administrator must
use database views such as
dba_data_files or
v$datafile to list the files and rman to back them up.
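For example, the ASM files belonging to a database can be listed from
within that database and backed up with rman. A minimal sketch, run from
the database instance (not the ASM instance):

    SQL> -- ASM-resident files show paths beginning with the diskgroup name, e.g. +DSKGRP01/...
    SQL> select file_name from dba_data_files;
    SQL> select name from v$datafile;

    RMAN> backup database;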
Despite these apparent shortfalls, ASM offers many
advantages. For example, ASM makes it very easy to configure redundant
disk volumes. This means that if a physical disk fails, no database
with ASM files stored on that disk will go down.
Replacing the failed disk drive is a simple
matter. The disk can be hot-swapped, configured as an ASM volume and
added to the disk group. At that point, ASM will automatically
rebalance the database files stored within the disk group.
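As a rough sketch of that workflow, assuming the replacement partition is
/dev/sdb5 and the new volume is named DISK05 (both hypothetical), the new
disk is stamped as an ASM volume and added to the diskgroup, after which
the automatic rebalance can be watched in v$asm_operation:

    # as root: mark the replacement partition as an ASM volume (device name is hypothetical)
    /etc/init.d/oracleasm createdisk DISK05 /dev/sdb5

    -- as sys on the ASM instance: add the new volume; ASM rebalances automatically
    SQL> alter diskgroup dskgrp01 add disk 'ORCL:DISK05';
    SQL> select operation, state, est_minutes from v$asm_operation;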
ASM optimizes I/O performance by automatically
dividing files into extents and distributing those extents across
available volumes. The administrator is thus freed from having to
manually tune I/O.
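The even spread is easy to verify: once a database has been running on ASM
for a while, the I/O counters in v$asm_disk should be roughly equal across
the volumes of a diskgroup. A simple check from the ASM instance:

    SQL> select name, reads, writes, total_mb, free_mb
           from v$asm_disk
          order by name;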
Using ASM requires driver packages to be
installed. Disk volumes are then configured as ASM disks, and a
special Oracle instance called +ASMn is started. The ASM instance must
be running in order for a database that uses ASM to be started.
Databases that use ASM files read and write those files directly,
but must communicate with the ASM instance to learn how their files
are stored.
The ASM instance has two background processes new
to 10g, namely RBAL and ARBn. These processes work together to
rebalance extents across the disks. ARBn is spawned when the ASM
instance requires it.
A database that stores data on ASM volumes has two
new background processes: RBAL and ASMB. RBAL performs global opens of
the disks. ASMB connects to the +ASMn instance to communicate
information such as file creation and deletion.
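A quick way to see these processes is to grep the process list on a node
that runs both the ASM instance and a database instance. The process names
below follow typical 10g naming, and the database SID orcl1 is hypothetical:

    $ ps -ef | egrep 'rbal|asmb|arb' | grep -v grep
    # typical output (illustrative):
    #   asm_rbal_+ASM1    rebalance coordinator in the ASM instance
    #   asm_arb0_+ASM1    rebalance slave, present only while a rebalance is running
    #   ora_rbal_orcl1    global disk opens for the database instance
    #   ora_asmb_orcl1    link between the database and the +ASM1 instance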
The following steps walk through
installing the ASM drivers; an illustrative command-line sketch follows the list:
1. As root, install the three ASM driver packages in the order
shown. This step must be completed on each node.
2. As root, configure the ASM driver to be owned by
oracle:oinstall and to start ASM on boot as shown. This step must be
completed on each node.
3. As root, from oracle1, create ASM volumes on four unused
partitions of the shared drive as shown. This step writes a header to
each volume identifying it as an ASM volume.
4. As root, from oracle2, scan the existing partitions for ASM
volumes. This will detect the volumes created from oracle1 in the
previous step.
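A hedged sketch of what these four steps look like at the command line
follows; the package file names, kernel version and device names
(/dev/sda5 through /dev/sda8) are assumptions and will differ on your
hardware:

    # step 1, as root on each node: install the three ASMLib packages in dependency order
    rpm -Uvh oracleasm-support-2.0.*.rpm \
             oracleasm-2.6.*-*.rpm \
             oracleasmlib-2.0.*.rpm

    # step 2, as root on each node: interactive prompts set owner oracle,
    # group oinstall, and enable loading the driver on boot
    /etc/init.d/oracleasm configure

    # step 3, as root on oracle1 only: stamp four unused shared partitions as ASM volumes
    /etc/init.d/oracleasm createdisk DISK01 /dev/sda5
    /etc/init.d/oracleasm createdisk DISK02 /dev/sda6
    /etc/init.d/oracleasm createdisk DISK03 /dev/sda7
    /etc/init.d/oracleasm createdisk DISK04 /dev/sda8

    # step 4, as root on oracle2: discover the volumes created from oracle1
    /etc/init.d/oracleasm scandisks
    /etc/init.d/oracleasm listdisks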
At this point, four partitions have been
configured to be used as ASM volumes.
A production server is likely to have multiple
disks and multiple controllers. A controller is a plug-in card that
provides a connection to an external device such as an external storage
array. The FireWire card on each node is similar to a SCSI card used in a
production server. It is used to connect to external storage, and is thus
a possible point of failure.
Because a controller is a possible point of
failure, Oracle provides a way of subdividing an ASM diskgroup into
failgroups. Figure 9.7 shows how a server with two pairs of external
disks attached to separate controllers is configured as a single
diskgroup made up of two failgroups.
Although our hardware is not the same, for the
sake of this exercise we will mimic this configuration by assuming that
ASM volumes DISK01 and DISK02 are separate disks on a single
controller, and that volumes DISK03 and DISK04 are separate disks on a
different controller. With this layout, ASM will mirror extents across
the two failgroups, ensuring that databases stay up even if a controller
fails.
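To make the idea concrete, here is the kind of create diskgroup statement
this layout leads to (it is actually issued later, in step 9 of the
instance-creation walkthrough below); the failgroup names fg1 and fg2 and
the ORCL: discovery paths are assumptions:

    -- normal redundancy: each extent is mirrored in the other failgroup
    create diskgroup dskgrp01 normal redundancy
      failgroup fg1 disk 'ORCL:DISK01', 'ORCL:DISK02'
      failgroup fg2 disk 'ORCL:DISK03', 'ORCL:DISK04';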
It is possible to use dbca to create an ASM
instance, but performing this task manually is easy and provides
insight into the ASM architecture. The following steps walk through
manually creating the ASM instance; a hedged sketch of the key commands
appears after the list:
1. As root, open /etc/oratab with a text editor and add an
instance with the name +ASM. Exit the root login when done. This step
should be completed from each node.
Note: If you create
an ASM instance with dbca, you will find that the ASM instance is
added to the /etc/oratab file as "+ASM1" or "+ASM2" depending on the
node. That configuration is inconsistent with all other cluster
databases created by dbca and will not allow use of the racenv
script.
2. As oracle, create the supporting directories for the ASM
instance. This step should be completed on each node.
3. From oracle1, as oracle, create the password file in the shared
directory as shown.
4. Using gedit, create the file
$ORACLE_BASE/admin/+ASM/pfile/init+ASM.ora for the ASM instance. This
step needs to be completed on oracle1 only.
5. From oracle1, create the spfile on the shared disk as shown and
exit sqlplus.
6. From each node, change directory to $ORACLE_HOME/dbs. There,
create a symbolic link to the password file and a pointer file to the
spfile as shown.
7. Use srvctl to add the asm service to the CRS repository and
start the instances.
8. From oracle1, use racenv to set the environment for +ASM.
Launch sqlplus and log in as sys as sysdba. Selecting the path column
from v$asm_disk will list the available ASM volumes.
9. Create the diskgroup as shown. Once created, select from
v$asm_diskgroup for information about the diskgroup just created.
Creating the diskgroup from oracle1 automatically adds the following
parameter to the spfile:
+ASM1.asm_diskgroups='DSKGRP01'
This parameter tells the ASM instance on oracle1 to mount the diskgroup
automatically whenever ASM is started. However, it does not cause the
instance on oracle2 to mount the diskgroup.
10. From oracle2, use sqlplus to log in to the ASM instance as sys.
You will find that this instance has not mounted the diskgroup. Do so
with the alter diskgroup mount command as shown. This command
automatically adds the following parameter to the spfile:
+ASM2.asm_diskgroups='DSKGRP01'
Now the diskgroup DSKGRP01 will be mounted by both instances whenever
the ASM instances are started.
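The following is a hedged sketch of the key pieces of this walkthrough,
assuming a shared OCFS directory of /ocfs/oradata/asm and an ORACLE_HOME
of /u01/app/oracle/product/10.1.0/db_1 (both hypothetical; substitute
your own paths, and note that the parameter values are illustrative):

    # step 1: /etc/oratab entry on each node (N = do not autostart with dbstart)
    +ASM:/u01/app/oracle/product/10.1.0/db_1:N

    # step 3: password file on the shared disk, created from oracle1
    # (the password shown is a placeholder)
    orapwd file=/ocfs/oradata/asm/orapw+ASM password=oracle

    # step 4: minimal init+ASM.ora for a two-node RAC
    *.instance_type=asm
    *.cluster_database=true
    *.asm_diskstring='ORCL:*'
    *.remote_login_passwordfile=exclusive
    *.large_pool_size=12m
    +ASM1.instance_number=1
    +ASM2.instance_number=2

    # step 5: create the spfile on the shared disk from oracle1
    SQL> create spfile='/ocfs/oradata/asm/spfile+ASM.ora'
         from pfile='/u01/app/oracle/admin/+ASM/pfile/init+ASM.ora';

    # step 6: in $ORACLE_HOME/dbs on each node, link to the shared password file
    # and create a pointer file init+ASM1.ora (init+ASM2.ora on oracle2) containing:
    #   spfile='/ocfs/oradata/asm/spfile+ASM.ora'
    ln -s /ocfs/oradata/asm/orapw+ASM orapw+ASM1

    # step 7: register the ASM instances with CRS and start them
    srvctl add asm -n oracle1 -i +ASM1 -o $ORACLE_HOME
    srvctl add asm -n oracle2 -i +ASM2 -o $ORACLE_HOME
    srvctl start asm -n oracle1
    srvctl start asm -n oracle2

    # steps 8-10: from oracle1, list the volumes and create the diskgroup
    # (see the earlier failgroup sketch), then from oracle2 mount it
    SQL> select path from v$asm_disk;
    SQL> alter diskgroup dskgrp01 mount;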
An ASM instance has no datafiles and no data
dictionary. Thus, it is never open in the way regular databases are
opened. Try selecting from
dba_tables! The only users that can log in to an ASM instance
are those with sysdba or sysoper privileges. The following table lists
important dynamic performance views available from within the ASM
instance:
V$PARAMETER        Lists parameters
V$INSTANCE         Lists instance information
V$ASM_CLIENT       Lists each database using ASM diskgroups
V$ASM_DISKGROUP    Lists diskgroups
V$ASM_TEMPLATE     Lists templates per diskgroup
V$ASM_DISK         Lists all discovered ASM volumes, including those that are not included in a diskgroup
V$ASM_OPERATION    Lists active ASM operations
V$ASM_FILE         Lists files present in each mounted ASM diskgroup
V$ASM_ALIAS        Lists aliases (file names) present in each mounted diskgroup
Note: There
is a parallel GV$ view for each of the above listed views.
GV$ views provide information for all open instances in the
cluster.
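For example, a quick cluster-wide check of the diskgroup can be run from
either node with the GV$ variant; the column list below is a sketch of a
typical query:

    SQL> select inst_id, name, state, total_mb, free_mb
           from gv$asm_diskgroup;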
If you want to learn RAC at home, get the bestselling book "Personal
Oracle Real Application Clusters" by Edward Stoever.
You can buy it direct from the publisher for 30%-off and get
instant access to the code depot of Oracle tuning scripts.