Finding Large Files on a UNIX Server

Oracle UNIX/Linux Tips by Burleson Consulting

Finding large files on a UNIX server

The following command is very useful when a UNIX filesystem has become full.  As we may know, Oracle will hang whenever it must extend a tablespace and there is no free space left in the UNIX filesystem.

When a UNIX filesystem becomes unexpectedly full, it may be because Oracle has written a huge core or trace file into it.

The script below displays all large files in the current directory tree.  Note that the -size parameter is specified in 512-byte blocks by default, so +1024 matches files larger than 512 KB; use +2048 (or a k suffix, where supported) to match files over one megabyte.

root> find . -size +1024 -print
./prodsid_ora_22951.trc
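
To see at a glance which files are consuming the most space, a quick companion check (a sketch, assuming the standard du, sort and head utilities) is:

root> du -ak . | sort -rn | head -10

This lists every file and directory under the current directory in kilobytes, largest first.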

Of course, you can easily append xargs or the -exec option to automatically remove the large files:

root> find . -size +1024 -print | xargs rm

Note: For Windows there is an equivalent to the UNIX -mtime check; see our related tip on the Windows equivalent of UNIX mtime.

A step-by-step approach to deleting unwanted UNIX files

When writing UNIX scripts to automatically delete obsolete Oracle trace files, audit files, dump files and archived redo logs, it is a good idea to develop the UNIX script iteratively, getting each piece working before adding the next.  Remember, even seasoned UNIX gurus will not write a complex command until each component has been tested.

We start by going to the archived redo log directory and finding all redo log files that are more than 7 days old.  In this example, we take a cold backup each night, and the DBA has no use for elderly redo log files.

root> cd $DBA/$ORACLE_SID/arch
root> find . -mtime +7

./archlog2251.arc
./archlog2252.arc
./archlog2253.arc
./archlog2254.arc

Now that we have the list of files, we can add the UNIX rm command to this syntax to automatically remove all files that are more than 7 days old. In this case, we can use either the -exec option or the xargs command to remove the files.

find . -mtime +7 -exec rm {} \;
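
The xargs alternative mentioned above would look like this (a sketch; the file names are assumed to contain no spaces):

find . -mtime +7 -print | xargs rm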

Deleting old Oracle trace, audit and archived redo log files

Here is an example of a UNIX script for keeping the archived redo log directory free of elderly files.  As we know, it is important to keep room in this directory, because Oracle may lock up if it cannot write the current redo log to the archived redo log filesystem. This script could be used in coordination with Oracle Recovery Manager (rman) so that files are only removed after a full backup has been taken.

clean_arch.ksh
#!/bin/ksh

# Cleanup archive logs more than 7 days old
find /u01/app/oracle/admin/mysid/arch/arch_mysid*.arc -ctime +7 -exec rm {} \;
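
As noted above, this cleanup can also be coordinated with Oracle Recovery Manager (rman); here is a minimal sketch, assuming the logs have already been backed up and that OS authentication (rman target /) is available:

#!/bin/ksh
# Let RMAN remove archived redo logs older than 7 days
rman target / <<EOF
delete noprompt archivelog all completed before 'sysdate-7';
EOF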

Now that we see how to do the cleanup for an individual directory, we can easily expand this approach to loop through every Oracle database name on the server (by reading the oratab file) and remove the files from each directory. On Solaris the oratab file is located in /var/opt/oracle, while HP-UX and AIX keep it in the /etc directory.

clean_all.ksh
#!/bin/ksh

for ORACLE_SID in `cat /etc/oratab|egrep ':N|:Y'|grep -v \*|cut -f1 -d':'`
do
 ORACLE_HOME=`cat /etc/oratab|grep ^$ORACLE_SID:|cut -d":" -f2`
 DBA=`echo $ORACLE_HOME | sed -e 's:/product/.*::g'`/admin
 find $DBA/$ORACLE_SID/bdump -name \*.trc -mtime +14 -exec rm {} \;
 find $DBA/$ORACLE_SID/udump -name \*.trc -mtime +14 -exec rm {} \;
 find $ORACLE_HOME/rdbms/audit -name \*.aud -mtime +14 -exec rm {} \;
done

The above script loops through each database, visiting the bdump, udump and audit directories, removing all files more than 2 weeks old.
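
Once tested, a script like clean_all.ksh is normally run from cron. A hypothetical crontab entry (the paths are examples only) might look like this:

#**********************************************************
# Run the trace and audit file cleanup nightly at 1:00 AM
#**********************************************************
00 01 * * * /home/scripts/clean_all.ksh > /home/scripts/clean_all.lst 2>&1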

Removing UNIX files with the dbms_backup_restore package

Oracle provides a package called dbms_backup_restore that can also be used to remove UNIX files.  As we may know, you can create a directory object inside Oracle that points to a UNIX directory.

SQL> create or replace directory
   arch_dir
as
   '/u01/oracle/admin/mysid/arch';

You can then use a PL/SQL stored procedure to read the UNIX file names from the v$archived_log view.  For all files that are more than 30 days old (according to v$archived_log.completion_time), we call the dbms_backup_restore.deletefile procedure to remove the archived redo log from UNIX.  Note the requirement to get a file handle with the BFILENAME function before checking that the file still exists.

CREATE OR REPLACE PROCEDURE
   remove_elderly_archive_logs
IS
  arc_file          BFILE;
  arc_file_exists   BOOLEAN;
  --**************************************************
  -- Select all files over 30 days old . . .
  --**************************************************
  CURSOR get_archive IS
    SELECT name
      FROM v$archived_log
     WHERE completion_time < SYSDATE - 30;
BEGIN
  FOR entry IN get_archive LOOP

    --***********************************************************
    -- use the BFILENAME function to get the file name from UNIX
    --***********************************************************
    arc_file  := BFILENAME('ARCH_DIR',entry.name);

    --**************************************************
    -- check to make sure the file still exists in UNIX
    --**************************************************
    arc_file_exists := FALSE;
    arc_file_exists := DBMS_LOB.FILEEXISTS(arc_file) = 1;   

    IF arc_file_exists THEN
      dbms_output.put_line('Deleting: ' || entry.name);
      --*****************************************************
      -- call the DELETEFILE procedure to nuke the UNIX file
      --*****************************************************
      SYS.DBMS_BACKUP_RESTORE.DELETEFILE(entry.name);
    END IF;

  END loop;
END;
/
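
Note that if the procedure is created in a schema other than SYS, that schema will normally need explicit grants before the code above will compile and run. Here is a sketch, assuming a hypothetical owner named dbadmin:

$ORACLE_HOME/bin/sqlplus -s "/ as sysdba" <<!
grant execute on sys.dbms_backup_restore to dbadmin;
grant select on sys.v_\$archived_log to dbadmin;
grant read on directory arch_dir to dbadmin;
!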

Now, we can simply call the procedure from a UNIX crontab to periodically remove the files.  The crontab entry might look like this:

#**********************************************************
# Run the archived redo log cleanup Monday at 7:30 AM
#**********************************************************
30 07 * * 1 /home/scripts/nuke_logs.ksh mysid > /home/scripts/nuke.lst

Here is a sample of the script itself.

nuke_logs.ksh
#!/bin/ksh

# First, we must set the environment . . . .

ORACLE_SID=$1
export ORACLE_SID
ORACLE_HOME=`cat /etc/oratab|grep ^$ORACLE_SID:|cut -f2 -d':'`
#ORACLE_HOME=`cat /var/opt/oracle/oratab|grep ^$ORACLE_SID:|cut -f2 -d':'`
export ORACLE_HOME
PATH=$ORACLE_HOME/bin:$PATH
export PATH

$ORACLE_HOME/bin/sqlplus -s /nolog <<!
connect system/manager as sysdba;
set serveroutput on;
exec remove_elderly_archive_logs
exit
!
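
Before scheduling it, the script can be run once by hand to confirm the environment settings (the path and SID are just the example values used above):

root> chmod +x /home/scripts/nuke_logs.ksh
root> /home/scripts/nuke_logs.ksh mysid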

Now that we have seen the host of UNIX commands that can be issued to maintain the file environment, let's move on and take a closer look at minimizing file I/O in a UNIX Oracle environment.  Remember, we must actively work to reduce disk I/O to keep the database running at optimal levels.

Oracle Performance and disk I/O

When the Oracle database requests a block from disk, we see several sources of latency. This example assumes that we are not using a disk array with caching, and that a physical I/O is required to fetch the data block.  When a physical request is made to a disk, the total delay time can be broken into three components:

* Seek delay (70%) -- the seek delay is the amount of time it takes to move the read-write heads over the appropriate cylinder on the disk device.  Seek delay is the largest component of disk delay.

* Rotational delay (30%) -- Rotational delay is the time the I/O must wait for the requested block to pass beneath the read-write heads.  The average rotational delay is one-half of a full rotation of the disk.

* Transmission delay (<1%) -- Transmission delay is the smallest component of Oracle disk response time, and the only one that relates to block size.  Transmission is extremely fast, and transmitting a 32K block is not measurably slower than fetching a 2K block.

Once we understand that more than 99% of the disk delay is incurred whether we read a 2K block or a 32K block, we begin to understand the nature of disk I/O and block sizing.

The seek delay and rotational delay are the same regardless of the size of the block being read, and the transmission time differences are so small as to be unmeasurable. Once the read-write heads are positioned over the correct cylinder, the only mechanical difference is the time required to transmit the larger block across the I/O channel back to the Oracle database.
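
To put rough numbers on this, here is a back-of-the-envelope sketch assuming a hypothetical 10,000 RPM disk with a 5 ms average seek and a 100 megabyte-per-second transfer rate:

   seek delay                    5.00 ms   (independent of block size)
   rotational delay              3.00 ms   (half of a 6 ms revolution)
   transmission of a 2K block    0.02 ms
   transmission of a 32K block   0.32 ms

Either fetch takes roughly 8 ms; the 32K block adds only a few tenths of a millisecond.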

Hence, we can come to an important conclusion:

The database block size does not measurably affect the speed of the block I/O, and fetching a 32K block is not more expensive than fetching a 2K block.

If it takes about the same amount of time to fetch a 2K block as it does to fetch a 32K block, why don't we make all of our database blocks 32K and get more data for the same I/O cost?

The answer is the expense of potentially wasted RAM in the data buffer caches.  For example, moving a 32K block into the RAM buffer to retrieve an 80-byte record is a huge waste, unless there are other rows in the 32K block that are likely to be requested by Oracle. Our goal is to manage our precious RAM space and make the most efficient use of our data buffer caches.

While the general rule holds true that the more data you can fetch in a single I/O, the better your overall buffer hit ratio, we have to take a closer look at the multiple data buffer phenomenon to get a true picture of what's happening under the covers.  Let's start with a simple example.
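
As background for that discussion, note that Oracle can maintain separate data buffer caches for non-default block sizes, so that large and small blocks are cached independently. A minimal sketch of the relevant parameters (the cache sizes shown are arbitrary examples):

$ORACLE_HOME/bin/sqlplus -s "/ as sysdba" <<!
alter system set db_32k_cache_size = 256m scope=spfile;
alter system set db_2k_cache_size = 32m scope=spfile;
!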

 
