
Oracle Expert Tuning Secrets - Part 2

by Donald K. Burleson
 

In this installment, we will examine disk I/O and understand how reducing disk I/O is the single most important Oracle tuning activity.

Oracle Tuning and Disk I/O

Disk I/O is a very time-consuming task, and almost every Oracle tuning activity has the ultimate goal of reducing disk I/O.

This is especially evident when we look at Oracle9i tuning. To make this clear, let's look at some common tuning activities and see how they reduce disk I/O:

Tuning SQL statements -- When we tune an SQL statement to replace a full-table scan with an index range scan, the performance improvement is the direct result of a reduction in disk I/O (a brief illustration follows this list).

Changes to the Oracle SGA -- When we increase the shared_pool, large_pool, or db_cache_size, the resulting performance improvement is related to the reduction in disk I/O.

Table reorganizations -- When we reorganize a table, we remove extent fragments, coalesce migrated/chained rows, re-build the freelist chain, and re-sequence table rows. These activities all have the result of reducing the amount of disk I/O in the Oracle database.
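For instance, here is a minimal sketch of the first point; the ORDERS table, its columns, and the index name are purely illustrative:

-- Hypothetical example: without an index on ORDER_DATE this query must
-- full-scan the ORDERS table; once the index exists the optimizer can
-- satisfy the predicate with an index range scan and read far fewer
-- blocks from disk.
create index orders_date_idx on orders(order_date);

select order_id, customer_id
from   orders
where  order_date > sysdate - 7;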

In sum, disk I/O is the single most expensive operation within an Oracle9i database, and multiple block sizes give us a powerful new tool to manage disk I/O with more power than ever before.


Let's see how Oracle RAM data buffers help to reduce disk I/O.

Tuning with RAM data buffers

In Oracle9i, we have the ability to define up to seven separate and distinct data buffers. These data buffers can be used to isolate Oracle data in RAM and improve performance by reducing disk I/O.

These buffers can have different block sizes: named caches exist for 2K, 4K, 8K, 16K, and 32K buffers. We also have three other pools, the KEEP pool, the RECYCLE pool, and the DEFAULT pool; because the cache for the default block size is the DEFAULT pool itself, the total comes to seven.

Let's take a look at each of these data buffers.

The KEEP pool
When the KEEP pool was first introduced in Oracle8i, its purpose was to provide a RAM data buffer that fully caches blocks from frequently referenced tables and indexes. Accordingly, when computing the size of the KEEP pool, we must total the number of bytes for all tables that have been assigned to the KEEP pool. This is because we always want the KEEP pool to fully cache every table that has been assigned to it.
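As a starting point, that total can be computed from the data dictionary. Here is a minimal sketch (dba_segments, dba_tables, and dba_indexes are standard views; round up and add headroom as appropriate):

-- Total the bytes of every table and index already assigned to the KEEP pool
select
   sum(s.bytes)/1024/1024   keep_pool_mb
from
   dba_segments s
where
   (s.owner, s.segment_name) in
      (select owner, table_name from dba_tables  where buffer_pool = 'KEEP'
       union all
       select owner, index_name from dba_indexes where buffer_pool = 'KEEP');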

In Oracle9i, a table must reside in a tablespace with the same block size as the cache assigned to the table. For example, the following command assigns the CUSTOMER table to the KEEP pool:

alter table CUSTOMER storage (buffer_pool KEEP);

Remember, the point of the KEEP pool is to always have a data buffer hit ratio of 100 percent. Also note that the block size of the KEEP pool is not critical, because once loaded, all blocks in the KEEP pool remain in RAM. In our example, the KEEP pool uses a 32K block size because the KEEP and RECYCLE pools share the database default block size, and we wanted a large default block size to improve the performance of full-table scans in the RECYCLE pool.
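The hit ratio of each pool can be checked from v$buffer_pool_statistics; here is a simple sketch of that check (the KEEP pool should stay at or very near 100 percent):

-- Buffer pool hit ratios; greatest() guards against division by zero
select
   name,
   round( 100 * (1 - physical_reads /
                 greatest(db_block_gets + consistent_gets, 1)), 2 )  hit_ratio
from
   v$buffer_pool_statistics;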

Locating Tables and Indexes for the KEEP Pool

The Oracle documentation states, "A good candidate for a segment to put into the KEEP pool is a segment that is smaller than 10% of the size of the DEFAULT buffer pool and has incurred at least 1% of the total I/Os in the system." In other words, small, highly accessed tables are good candidates for caching.

So, how do we identify small-table full-table scans? The best method is to explain all of the SQL that is currently in your library cache and then generate a report showing all of the full-table scans in your database at that time. I wrote a script called access.sql for this purpose; it was published in the December 2000 issue of Oracle Magazine, and you can find it in my article entitled Mining Gold from the Library.

Running the access.sql script should give us all of the information we need to identify tables for the KEEP pool. Any small table (for example, fewer than 50 blocks) that has a high number of full-table scans will benefit from being added to the KEEP pool. The report below, taken from an Oracle Applications database, shows full-table scans on both large and small tables.

full table scans and counts
 

OWNER      NAME                         NUM_ROWS C K   BLOCKS  NBR_FTS
---------- ------------------------ ------------ - - -------- --------
APPLSYS    FND_CONC_RELEASE_DISJS             39 N K        2   98,864
APPLSYS    FND_CONC_RELEASE_PERIODS           39 N K        2   98,864
APPLSYS    FND_CONC_RELEASE_STATES             1 N K        2   98,864
SYS        DUAL                                  N K        2   63,466
APPLSYS    FND_CONC_PP_ACTIONS             7,021 N      1,262   52,036
APPLSYS    FND_CONC_REL_CONJ_MEMBER            0 N K       22   50,174
APPLSYS    FND_CONC_REL_DISJ_MEMBER           39 N K        2   50,174
APPLSYS    FND_FILE_TEMP                       0 N         22   48,611
APPLSYS    FND_RUN_REQUESTS                   99 N         32   48,606
INV        MTL_PARAMETERS                      6 N K        6   21,478
APPLSYS    FND_PRODUCT_GROUPS                  1 N          2   12,555
APPLSYS    FND_CONCURRENT_QUEUES_TL           13 N K       10   12,257
AP         AP_SYSTEM_PARAMETERS_ALL            1 N K        6    4,521
APPLSYS    FND_CONCURRENT_QUEUES              13 N K       10    4,078

From examining the report above, we identify the following tables as candidates for the KEEP pool. We select those tables with fewer than 50 blocks that are not already in the KEEP pool (the "K" column).

OWNER      NAME                         NUM_ROWS C K   BLOCKS  NBR_FTS
---------- ------------------------ ------------ - - -------- --------
APPLSYS    FND_FILE_TEMP                       0 N         22   48,611
APPLSYS    FND_RUN_REQUESTS                   99 N         32   48,606
APPLSYS    FND_PRODUCT_GROUPS                  1 N          2   12,555

Remember, our goal is for the data buffer hit ratio for the KEEP pool to always be 100 percent. Every time we add a table to the KEEP pool, we must also add the number of blocks in the table to the KEEP pool parameter in our init.ora file.

Once you have explained all of the SQL in your library cache, you will have a plan table with all of the execution plans and a sqltemp table with all of the SQL source code. Once these tables are populated, you can create a simple script to generate the KEEP syntax for you; a sketch of such a script appears below. Next, let's examine the other data buffer pools so we know the proper use for each data buffer area.
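The following is only a minimal sketch of such a generator. It assumes a standard PLAN_TABLE populated by explaining the library-cache SQL; access.sql uses its own working tables, so adjust the table and column names to match your environment:

-- Generate KEEP-pool assignments for tables of fewer than 50 blocks
-- that appear as full-table scans in the explained SQL
set heading off feedback off pages 0
spool keep_syntax.sql

select distinct
   'alter table '||t.owner||'.'||t.table_name||
   ' storage (buffer_pool keep);'
from
   plan_table  p,
   dba_tables  t
where
   p.operation                    = 'TABLE ACCESS'
and p.options                      = 'FULL'
and p.object_owner                 = t.owner
and p.object_name                  = t.table_name
and t.blocks                       < 50
and nvl(t.buffer_pool,'DEFAULT')  <> 'KEEP';

spool off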

The RECYCLE Pool

This data pool is reserved for large-table full table scans. Because Oracle data blocks from full table scans are unlikely to be reread, the RECYCLE pool is used so that the incoming data blocks do not "flush out" data blocks from more frequently used tables and indexes. Large tables that experience full-table scans are assigned to the RECYCLE pool to prevent their data blocks from reducing available blocks for other tables.
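As with the KEEP pool, the assignment is made with the STORAGE clause. For example (the table name here is purely illustrative):

alter table SALES_HISTORY storage (buffer_pool recycle);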

Now, let's see how multiple block sizes can improve Oracle performance.

Using Multiple Block Sizes

The most important consideration when using multiple block sizes in Oracle9i is to segregate different portions of the Oracle database into different data pools.

When a SQL statement fetches a result set from Oracle tables, it is most often retrieving the rows through an index.

As an Oracle8i tuning expert, I often recommended that a whole database be re-defined with a large blocksize. Many people were mystified when a database with a 2K-block size was increased to an 8K-block size and the entire database ran faster. A common justification for resisting a block size increase was "This database randomly fetches small rows. I can't see why moving to a larger block size would improve performance." So, then, what explains the performance improvement with larger block sizes?

When choosing a block size, many DBAs forget about the index trees and how Oracle indexes are accessed sequentially when doing an index range scan. An index range scan is commonly seen in nested loop joins, and the vast majority of row access involves indexes.

Because index range scans involve gathering sequential index nodes, placing the indexes in a larger block size reduces disk I/O and improves throughput for the whole database.

So then, why not create our entire Oracle database with large block sizes and forget about multiple block sizes? The answer is not simple. In order to fully utilize the RAM memory in the data buffers, you must segregate tables according to their distribution of related data.

Small blocks - Tables with small rows that are accessed in a random fashion should be placed onto tablespaces with small block sizes. With random access and small block sizes, more of the RAM in the data buffer remains available to hold frequently referenced rows from other tables.

Large blocks - Row-ordered tables, single-table clusters, and tables with frequent full-table scans should reside in tablespaces with large block sizes. This is because a single I/O will fetch many related rows and subsequent requests for the "next" rows will already be in the data buffer.

The goal here is simple; we want to maximize the amount of available RAM memory for the data buffers by setting the block sizes according to the amount of I/O experienced by the table or index. Random access of small rows suggests small block sizes, while sequential access of related rows suggests large block sizes.

Here is a sample of an Oracle init.ora file that uses separate data buffers with different block sizes:

db_block_size=32768         # This is the system-wide default block size

db_cache_size=3G            # This allocates a total of 3 gigabytes
                            # for all of the 32K data buffers

db_keep_cache_size=1G       # Here we use 1 gigabyte for the KEEP pool

db_recycle_cache_size=500M  # Here is 500 meg for the RECYCLE pool
                            # Note: in Oracle9i the KEEP and RECYCLE pools
                            # are allocated in addition to db_cache_size,
                            # so the DEFAULT pool remains 3 gigabytes

# *****************************************************************
# The caches below are all additional RAM memory (total=3.1 gig)
# that are above and beyond the allocation from db_cache_size
# *****************************************************************

db_2k_cache_size=200M       # This cache is reserved for random
                            # block retrieval on tables that
                            # have small rows

db_4k_cache_size=500M       # This 4K buffer will be reserved
                            # exclusively for the data dictionary.
                            # Only the SYSTEM tablespace has 4K blocks

db_8k_cache_size=800M       # This is a separate cache for
                            # segregating I/O for specific tables

db_16k_cache_size=1600M     # This is a separate cache for
                            # segregating I/O for specific tables
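Once these caches exist, tablespaces with matching block sizes can be created and tables or indexes moved into them. Here is an illustrative sketch only (the tablespace, datafile, and index names are hypothetical), using the 16K cache from the sample above:

-- Create a 16K-block tablespace (requires db_16k_cache_size to be set)
create tablespace idx_16k
   datafile '/u03/oradata/prodb1/idx_16k_01.dbf' size 1000m
   blocksize 16k;

-- Move an index into the 16K tablespace so its range scans use 16K blocks
alter index CUSTOMER_LAST_NAME_IDX rebuild tablespace idx_16k;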

Next, let's move deeper and explore techniques for identifying hot data files within Oracle. By knowing which data files experience the most I/O, we gain insight toward our goal of reducing I/O.

Disk I/O Tuning

In any operating environment, we are concerned whenever we see a backlog of I/O tasks waiting to access data on a single disk. On UNIX and Linux systems, the iostat utility can be used to detect such I/O issues.

Once you've identified the hot disks, look closely to find out which files and tables on the disks experience most of the activity, so that you can move them to less-active disks as needed. The actual process of identifying hot files and disks involves running data collection utilities, such as STATSPACK and the UNIX iostat utility, and then using the collected I/O data to pinpoint the sources of excessive I/O measurements.

Here are the cardinal rules for disk I/O:

  • Understand I/O - There is a difference between a busy disk and a disk that is waiting for I/O to complete. In the next section, we will explore the UNIX iostat utility and show how you can identify busy disks.
  • Monitor disk I/O - Many disk arrays such as EMC provide sophisticated disk monitoring tools, such as EMC's Symmetrix Manager and Navisphere. These tools report on more than simple disk waits, and highlight contention for disks, channels, and disk adapters.
  • Use RAID properly - If you are using RAID such as RAID 0+1, the Oracle data blocks will be spread randomly across all of the disks, and load will rise and fall in a uniform fashion.
  • Control where disk I/O happens - Senior Oracle DBAs often prefer not to implement RAID striping so that they have more control over the disk I/O subsystem.
Now that we understand the basic principles behind locating hot disks, let's see how STATSPACK can be extended to capture file I/O information.

STATSPACK Reports for Oracle Datafiles

To perform I/O load balancing, we need to get information about the amount of I/O for an Oracle datafile, relative to the total I/O from the database. Remember, a hot file is not necessarily causing a disk bottleneck. The goal of the STATSPACK technique below is to alert the Oracle DBA to those datafiles that are taking a disproportionate amount of I/O relative to other files in the database.

The script we use for this purpose is called rpt_hot_files.sql, and this script is also incorporated into our generalized DBA alert script, statspack_alert.sql.

The rpt_hot_files.sql script is listed below.

To get the data we need, we rely on two STATSPACK tables:

  • stats$sysstat - The stats$sysstat table contains two important metrics. These are used to compute the total read I/O and write I/O for the entire database:
    • Total physical reads (statistic#=40)
    • Total physical writes (statistic#=44)
  • stats$filestatxs - The stats$filestatxs table contains detailed read I/O and write I/O, totaled by datafile name.
We then compare the system-wide total for read and write I/O with the individual I/O for each Oracle datafile. This allows us to quickly generate an alert report to tell us which files are having the most I/O activity. If we were judicious in placing important tables and indexes into separate tablespaces and datafiles, this report will tell us exactly which database objects are the most active.

Note that you can adjust the thresholds for the rpt_hot_files.sql script. You can set the threshold to 25 percent, 50 percent, or 75 percent, reporting on any files that exceed this threshold percentage of total read and write I/O.

This is a very important script and appears in the generic statspack_alert.sql script. It is critical that the DBA become aware whenever an Oracle datafile is consuming a disproportionate amount of disk I/O. The script below is somewhat complex, but it is worth your time to carefully examine it to understand the query. Let's examine the main steps of this SQL statement:

We select the individual I/O from stats$filestatxs and compare the value for each file to the total I/O as reported in stats$sysstat.

The WHERE clause determines when a file will be reported. You have the option of adjusting the reporting threshold by commenting out one of the three choices -- 25 percent, 50 percent, or 75 percent -- of the total I/O.

Rpt_hot_files.sql
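Since the full rpt_hot_files.sql script is not reproduced here, the following is only a minimal sketch of the underlying idea. It assumes the standard STATSPACK tables (stats$filestatxs, stats$sysstat, stats$snapshot), consecutive snapshot IDs, and a 25 percent read threshold:

-- Flag any datafile whose delta of physical reads between two snapshots
-- exceeds 25% of the system-wide delta of physical reads
select
   to_char(sn.snap_time,'yyyy-mm-dd HH24')      mydate,
   new.filename                                 file_name,
   new.phyrds - old.phyrds                      reads,
   round( 100 * (new.phyrds - old.phyrds) /
          (newreads.value - oldreads.value) )   pct_of_tot
from
   stats$filestatxs old,
   stats$filestatxs new,
   stats$sysstat    oldreads,
   stats$sysstat    newreads,
   stats$snapshot   sn
where
   new.snap_id       = sn.snap_id
and old.snap_id       = sn.snap_id - 1      -- assumes consecutive snapshots
and old.filename      = new.filename
and newreads.snap_id  = sn.snap_id
and oldreads.snap_id  = sn.snap_id - 1
and newreads.name     = 'physical reads'    -- system-wide read totals
and oldreads.name     = 'physical reads'
and (newreads.value - oldreads.value) > 0
and (new.phyrds - old.phyrds) >
       0.25 * (newreads.value - oldreads.value)   -- 25% threshold
order by
   sn.snap_time;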

It is highly recommended that the DBA run this STATSPACK report daily in order to constantly monitor for hot datafiles. Below is a sample of the output from this script. Note how it identifies hot files on an hourly basis.

***********************************************************
This will identify any single file whose read I/O
is more than 50% of the total read I/O of the database.
***********************************************************

Yr. Mo Dy  Hr. FILE_NAME                                 READS PCT_OF_TOT
---------------- ----------------------------------- ----------- ----------
2000-12-14 14    /u02/oradata/prodb1/bookd01.dbf             354         62
2000-12-14 15    /u02/oradata/prodb1/bookd01.dbf             123         63
2000-12-14 16    /u02/oradata/prodb1/bookd01.dbf             132         66
2000-12-14 20    /u02/oradata/prodb1/bookd01.dbf             124         65
2000-12-15 15    /u02/oradata/prodb1/bookd01.dbf             126         72
2001-01-05 09    /u02/oradata/prodb1/system01.dbf            180         63
2001-01-06 14    /u03/oradata/prodb1/perfstat.dbf            752        100
2001-01-06 15    /u02/oradata/prodb1/bookd01.dbf             968         69

***********************************************************
This will identify any single file whose write I/O
is more than 50% of the total write I/O of the database.
***********************************************************

Yr. Mo Dy  Hr. FILE_NAME                               WRITES PCT_OF_TOT
---------------- ----------------------------------- ---------- ----------
2000-12-18 21    /u02/oradata/prodb1/bookd01.dbf           2654         58
2000-12-29 15    /u02/oradata/prodb1/bookd01.dbf           1095         49


When we know which data files consume a disproportionate amount of I/O, we can tune the I/O by moving those data files to other disks.

Conclusion

As we have noted, tuning disk I/O is the single most important Oracle tuning activity, and the vast majority of all Oracle tuning has the goal of reducing the amount of disk access. Configuration of the data buffer pools and optimal file placement also make a huge difference in Oracle performance, and this paper demonstrates several important tools and techniques for reducing expensive disk access.

For more complete information, see my Oracle Press books:

Oracle High-performance SQL Tuning

Oracle9i High-performance Tuning with STATSPACK


 
