
Auto-Tuning Oracle: Oracle SGA

By Donald K. Burleson
 


Download the sample code accompanying this article (Listings 1-7)

As DBAs become more sophisticated in their self-tuning endeavors, many Oracle metrics may become self-tuning. Each new release of Oracle brings more self-tuning capability than ever before, and it is becoming even easier to write detection scripts and schedule tasks to adjust Oracle based on processing needs.

For example, the dynamic memory allocation features of Oracle make it possible to create a self-tuning Oracle SGA. In this article, I'll explain how to examine the Oracle instance and adjust the memory regions for sort_area_size, large_pool_size, pga_aggregate_target, sga_max_size and db_cache_size according to the processing demands on the server and within the database. The techniques discussed here are based on the use of Statspack to monitor memory regions over time and develop signatures of system resource usage.

I will also discuss how to create an intelligent mechanism for automatically re-configuring Oracle according to its current processing needs, and provide sample code that should get you started writing your own automation scripts. (For example, I will present a script that will automatically identify small, frequently-used segments and assign them to the KEEP pool for full caching.) Because every database is different, these scripts are deliberately abbreviated and simplified for clarity. Hence, you will need to expand the examples and write custom automation scripts that are appropriate to your environment.

The shops that will benefit most from automated self-tuning are those with the following characteristics:

  • Bi-modal systems — Systems that alternate between online transaction processing (OLTP) and data warehouse processing modes will especially benefit from self-tuning RAM regions.

     

  • 32-bit shops — Shops that are running 32-bit servers are constrained by the size of their RAM regions (about 1.7GB max size). For these shops, making the most effective use of RAM resources is especially important.

It is also important to remember that there are downsides to having a very large db_cache_size. While direct access to data is done with hashing, at times the database must examine all of the blocks in the RAM cache:

  • Systems with high invalidations — Whenever a program issues a truncate table, uses temporary tables, or runs a large data purge, Oracle must sweep all of the blocks in the db_cache_size to remove dirty blocks. This can cause excessive overhead for systems with a db_cache_size greater than 10GB.

     

  • High Update Systems — The database writer (DBWR) process must sweep all of the blocks in db_cache_size when performing an asynchronous write. Having a huge db_cache_size can cause excessive work for the database writer.

     

  • Oracle Real Application Clusters (RAC) systems — Systems using Oracle RAC may experience high cross-instance calls when using a large db_cache_size in multiple RAC instances. This inter-instance "pinging" can cause excessive overhead, which is why RAC DBAs try to segregate RAC instances to access specific areas of the database.

First, let's review the principles behind creating a self-tuning database.

Principles Behind Self-Tuning

One of the most common techniques for reconfiguring an Oracle instance is to use a script that is invoked with dbms_job or an external scheduler such as the UNIX cron utility. To illustrate a simple example, consider a database that runs in OLTP mode during the day and data warehouse mode at night. For this type of database, you could schedule a job to reconfigure the instance's SGA memory to the configuration most appropriate for the type of processing being done inside the Oracle instance.

Listing 1 contains a UNIX script that is used to reconfigure Oracle for decision-support processing. Note the important changes to the shared_pool_size, db_cache_size, and pga_aggregate_target settings to accommodate data warehouse activity. This script is scheduled (with cron or a similar external scheduler) to be invoked at 6:00pm each night.

Listing 1:

#!/bin/ksh

# First, we must set the environment . . . .
ORACLE_SID=$1
export ORACLE_SID
ORACLE_HOME=`cat /etc/oratab|grep ^$ORACLE_SID:|cut -f2 -d':'`
#ORACLE_HOME=`cat /var/opt/oracle/oratab|grep ^$ORACLE_SID:|cut -f2 -d':'`
export ORACLE_HOME
PATH=$ORACLE_HOME/bin:$PATH
export PATH

$ORACLE_HOME/bin/sqlplus -s /nolog<<!
connect system/manager as sysdba;
alter system set db_cache_size=1500m;
alter system set shared_pool_size=500m;
alter system set pga_aggregate_target=4000m;
exit;
!

In Listing 1, we see the alter system commands that establish the foundation for a self-tuning Oracle database. Remember that RAM is an expensive Oracle server resource, and the DBA has the responsibility to fully allocate RAM resources on the server. RAM that is not utilized wastes expensive hardware resources.
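The same reconfiguration can also be driven entirely from inside the database. The sketch below is not part of the article's listings: the procedure name dss_reconfig is an illustrative assumption, and the sizes simply repeat the values from Listing 1. dbms_job re-evaluates the interval string after each run, so the job re-queues itself for 6:00pm the next night.

create or replace procedure dss_reconfig
as
begin
   -- The owner needs the ALTER SYSTEM privilege granted directly
   -- (roles are not visible inside definer's-rights PL/SQL).
   -- The order of the commands may matter if sga_max_size is nearly full.
   execute immediate 'alter system set db_cache_size=1500m';
   execute immediate 'alter system set shared_pool_size=500m';
   execute immediate 'alter system set pga_aggregate_target=4000m';
end;
/

variable jobno number

begin
   dbms_job.submit(
      job       => :jobno,
      what      => 'dss_reconfig;',
      next_date => trunc(sysdate) + 18/24,         -- 6:00pm today (runs at once if submitted later)
      interval  => 'trunc(sysdate) + 1 + 18/24');  -- 6:00pm every night
   commit;
end;
/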

At the same time, over-allocating RAM is also wasteful. For example, allocating shared_pool_size=400m when you only need 200m is inefficient because that RAM could be put to better use in another area of the SGA, such as the db_cache_size.

To illustrate the concept of RAM reconfiguration, consider the following example with a 16K data buffer that is under-allocated and is experiencing a poor data buffer hit ratio, and a 32K data buffer that is over-allocated and has a good data buffer hit ratio (see Figure 1).

 

Figure 1: Over-allocated and under-allocated RAM regions

 

Using alter system commands, we can adjust the RAM between the data buffers to re-allocate the RAM where it is needed most (see Figure 2).

Figure 2: Dynamic re-allocation of RAM in Oracle10g

 

You can use the alter system commands in many types of Oracle scripts, including dynamic SQL, dbms_job, and shell scripts. Listing 2 is a simple SQL*Plus script that adjusts the RAM cache sizes; it prompts you for the names of the caches to shrink and grow and the amount of RAM to move, and then issues the appropriate alter system commands to adjust the sizes of the RAM regions.

Listing 2:

-- ****************************************************************
-- Dynamic SGA modification
--
-- Copyright (c) 2003 By Donald K. Burleson - All Rights reserved.
-- ****************************************************************
set heading off
set feedback off
set verify off

accept decrease_pool char   prompt 'Enter cache to decrease: '
accept increase_pool char   prompt 'Enter cache to increase: '
accept change_amount number prompt 'Enter amount to change: '

spool run_sga.sql
select
   'alter system set &decrease_pool = '||to_char(to_number(value)-&change_amount)||';'
from
   v$parameter
where
   name = lower('&decrease_pool');
select
   'alter system set &increase_pool = '||to_char(to_number(value)+&change_amount)||';'
from
   v$parameter
where
   name = lower('&increase_pool');
spool off
set feedback on
@run_sga

Here's the output:

SQL> @dyn_sga

Enter cache to decrease: shared_pool_size
Enter cache to increase: db_cache_size
Enter amount to change: 1048576

alter system set shared_pool_size = 49283072;             
System altered.                      

alter system set db_cache_size = 17825792;       
System altered. 

Now that we have seen how easily the RAM regions can be altered, let's examine some rules for invoking an automated adjustment of the RAM regions.

When to Trigger a Dynamic Reconfiguration

Whenever the instance monitoring scripts indicate an overstressed RAM region, you must choose which area to "steal" RAM from. Table 1 displays sample threshold conditions for triggering dynamic memory changes in the three major areas of the SGA; a minimal detection sketch for the data buffer row appears after the table. Of course, every system is different, and you will want to adjust these thresholds according to your needs. For example, many shops have implemented multiple blocksizes, and have separate RAM areas for db_32k_cache_size (for index tablespaces), db_keep_cache_size (for small, frequently-referenced objects), and so on.

Table 1: Thresholds for triggering a dynamic SGA change

RAM area            Overstressed condition       Over-allocated condition
------------------  ---------------------------  ------------------------
Shared pool         Library cache misses         No misses
Data buffer cache   Hit ratio < 90%              Hit ratio > 95%
PGA aggregate       High multi-pass executions   100% optimal executions
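
As a concrete example of acting on the data buffer row of Table 1, here is a minimal detection sketch that is not part of the original article. It computes the cumulative buffer hit ratio from v$sysstat and grows db_cache_size by a fixed increment when the ratio falls below 90 percent; the 16MB increment is an illustrative assumption, and a production version would work from Statspack deltas, as the scripts later in this article do.

declare
   v_hit_ratio   number;
   v_cache_bytes number;
begin
   -- Cumulative (since startup) buffer hit ratio; a production script
   -- would use Statspack deltas instead.
   select round((1 - (phy.value / (cur.value + con.value))) * 100, 2)
     into v_hit_ratio
     from v$sysstat cur, v$sysstat con, v$sysstat phy
    where cur.name = 'db block gets'
      and con.name = 'consistent gets'
      and phy.name = 'physical reads';

   select to_number(value)
     into v_cache_bytes
     from v$parameter
    where name = 'db_cache_size';

   if v_hit_ratio < 90 then
      -- Grow the cache by 16MB (illustrative increment).
      execute immediate 'alter system set db_cache_size = ' ||
                        to_char(v_cache_bytes + 16*1024*1024);
   end if;
end;
/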

It is important to remember that the needs of the database constantly change according to the SQL being executed; an SGA that is optimal at 9:00am is not likely to be optimal at 3:00pm. To see changes in processing characteristics, you can run Statspack reports to pinpoint those times when Oracle's RAM storage needs change. You can also query the v$db_cache_advice, v$pga_target_advice, v$java_pool_advice and v$shared_pool_advice utilities to see the marginal benefit of changing the size of each RAM region.

One popular approach to automatic dynamic SGA reconfiguration is to identify trends. You can use Statspack to predict those times when the processing characteristics change, and use the dbms_job package or dynamic SQL to fire ad hoc SGA changes. Let's take a close look at the trend-based approach.

Developing System Signatures

A common approach to trend-based reconfiguration is to use Statspack historical data to develop predictable trends and use these trends to change the database based on the signatures.

This approach is analogous to just-in-time manufacturing, where parts appear on the plant floor just as they are needed for assembly. Oracle enables the DBA to anticipate processing needs and regularly schedule appropriate intervention, thereby ensuring that SGA resources are delivered just-in-time for the processing change.

Self-tuning Oracle's memory regions involves altering the values of several Oracle parameters. While there are over 250 Oracle parameters that govern the configuration of every aspect of the database, only a handful of such parameters are important for automated Oracle SGA tuning:

  • db_cache_size — Determines the number of database block buffers in the Oracle SGA and represents the single most important parameter for Oracle memory.

     

  • db_keep_cache_size — This data buffer pool was a sub-pool of db_block_buffers in Oracle8i, but starting with Oracle9i Database it became a separate RAM area.

     

  • db_nn_cache_size — Oracle has separate data buffer pools, which you can use to segregate data and to isolate objects with different I/O characteristics.

     

  • shared_pool_size — Defines the pool that is shared by all users in the system, including SQL areas and data dictionary caching.

     

  • pga_aggregate_target — Defines the RAM area reserved for system-wide sorting and hash joins.

As you can see, it is no small task to zero-in on the most important measures of the health of your Oracle database. Let's start by examining trends within the library cache and determine how to automatically adjust the shared_pool_size.

Using the Oracle Advisory Utilities

Oracle has complete advisory utilities that will accurately predict the changes from altering any of the RAM region sizes. The advisory utilities include:

  • Shared pool advice — v$shared_pool_advice
  • PGA target advice — v$pga_target_advice
  • Data cache advice — v$db_cache_advice
  • Java Pool advice — v$java_pool_advice

These utilities are a great way to ensure that self-tuning changes are justified. The following sections will show how the advisory utilities are invoked and interpreted; when you are comfortable interpreting their output, you can write automated scripts to generate the advice, interpret the output, and automatically change the sizes of the RAM regions.

Shared Pool Advice Utility

This advisory functionality was extended in Oracle9i Database Release 2 to include a new view called v$shared_pool_advice, and this advisory coverage may eventually extend to all SGA RAM areas in future releases.

Starting in Oracle9i Database Release 2, the v$shared_pool_advice view shows the marginal difference in SQL parses as the shared pool changes in size from 10% of the current value to 200% of the current value.

The shared pool advice utility is very easy to use: once it is enabled, you can run a simple script to query the v$shared_pool_advice view and see the marginal changes in SQL parses for different shared pool sizes. The output from the script below shows the effect of dynamically increasing or decreasing the shared_pool_size parameter.

-- ************************************************
-- Display shared pool advice
-- ************************************************

set lines  100
set pages  999

column 	c1	heading 'Pool |Size(M)'
column 	c2	heading 'Size|Factor'
column 	c3	heading 'Est|LC(M)  '
column 	c4	heading 'Est LC|Mem. Obj.'
column 	c5	heading 'Est|Time|Saved|(sec)'
column 	c6	heading 'Est|Parse|Saved|Factor'
column	c7	heading 'Est|Object Hits'   format 999,999,999


SELECT
   shared_pool_size_for_estimate	c1,
   shared_pool_size_factor		c2,
   estd_lc_size			c3,
   estd_lc_memory_objects		c4,
   estd_lc_time_saved		c5,
   estd_lc_time_saved_factor	c6,
   estd_lc_memory_object_hits	c7
FROM 
   v$shared_pool_advice;

  
                                                 Est        Est
                                                  Time      Parse
     Pool        Size        Est     Est LC      Saved      Saved          Est
   Size(M)     Factor    LC(M)    Mem. Obj.      (sec)     Factor  Object Hits
---------- ---------- ---------- ---------- ---------- ---------- ------------
        48         .5         48      20839    1459645          1  135,756,032
        64      .6667         63      28140    1459645          1  135,756,101
        80      .8333         78      35447    1459645          1  135,756,149
        96          1         93      43028    1459645          1  135,756,253
       112     1.1667        100      46755    1459646          1  135,756,842
       128     1.3333        100      46755    1459646          1  135,756,842
       144        1.5        100      46755    1459646          1  135,756,842
       160     1.6667        100      46755    1459646          1  135,756,842
       176     1.8333        100      46755    1459646          1  135,756,842
       192          2        100      46755    1459646          1  135,756,842

Here we see the statistics for the shared pool in a range from 50% of the current size to 200% of the current size. These statistics can give you a good idea about the proper size for the shared_pool_size. If you are automating the SGA region sizes with automated alter system commands, creating this output and writing a program to interpret the results is a great way to ensure that the shared pool and library cache always have enough RAM.
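One possible way for a script to interpret this advisory output automatically is sketched below; it is not part of the original article, and the thresholds are illustrative assumptions. If shrinking the pool to half its size is predicted to lose no parse time, the shared pool is a candidate to give RAM back to other regions; if doubling it is predicted to save more than one percent of parse time, it is a candidate for growth.

-- A hedged sketch: derive a grow/shrink/leave recommendation from
-- v$shared_pool_advice.  The .5, 2 and 1.01 thresholds are illustrative.
select
   case
      when min(case when shared_pool_size_factor <= .5
                    then estd_lc_time_saved_factor end) >= 1
           then 'Candidate to SHRINK shared_pool_size'
      when max(case when shared_pool_size_factor >= 2
                    then estd_lc_time_saved_factor end) > 1.01
           then 'Candidate to GROW shared_pool_size'
      else 'Leave shared_pool_size as-is'
   end   shared_pool_advice
from
   v$shared_pool_advice;

Next, let's see how we can track shared pool usage over time and develop "signatures" that will allow us to predict those times when shared pool adjustments are required.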

We can use Statspack to create a listing of those times when the library cache miss ratio falls below a specified level, as shown in Listing 3.

Listing 3:

-- ****************************************************************
-- Display hourly library cache stats with STATSPACK
--
-- Copyright (c) 2003 By Donald K. Burleson - All Rights reserved.
-- ****************************************************************

set lines 80;
set pages 999;

column mydate heading 'Yr.  Mo Dy  Hr.'      format a16
column c1     heading "execs"                format 9,999,999
column c2     heading "Cache Misses|While Executing" format 9,999,999
column c3     heading "Library Cache|Miss Ratio" format 999.99999

break on mydate skip 2;

select
   to_char(snap_time,'yyyy-mm-dd HH24')  mydate,
   sum(new.pins-old.pins)                c1,
   sum(new.reloads-old.reloads)          c2,
   sum(new.reloads-old.reloads)/
   sum(new.pins-old.pins)                c3
from
   stats$librarycache old,
   stats$librarycache new,
   stats$snapshot     sn
where
   new.snap_id = sn.snap_id
and
   old.snap_id = new.snap_id-1
and
   old.namespace = new.namespace
group by
   to_char(snap_time,'yyyy-mm-dd HH24');

The output below indicates a repeating RAM shortage in the shared pool between 9:00 and 10:00am every day.

                        Cache   Misses    Library Cache
Yr. Mo Dy  Hr.  execs   While Executing   Miss Ratio
--------------- ------- --------------- -------------
2001-12-11 10    10,338          6,443        .64
2001-12-12 10   182,477         88,136        .43
2001-12-14 10   190,707        101,832        .56
2001-12-16 10    72,803         45,932        .62

In this example, the DBA needs to schedule additional RAM for the shared_pool_size during the deficient period.
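
One way to schedule that intervention, sketched below and not part of the original article, is a pair of dbms_job entries: one that shifts RAM from the data buffer to the shared pool just before the 9:00am crunch, and one that gives it back at 10:00am. The sizes and times are illustrative assumptions, and the submitting user needs the alter system privilege.

variable j1 number
variable j2 number

begin
   -- Grow the shared pool before the 9:00am library cache crunch
   -- (shrink the data buffer first so the RAM is available).
   dbms_job.submit(:j1,
      'begin
          execute immediate ''alter system set db_cache_size=1200m'';
          execute immediate ''alter system set shared_pool_size=800m'';
       end;',
      trunc(sysdate) + 8/24 + 55/1440,              -- 8:55am
      'trunc(sysdate) + 1 + 8/24 + 55/1440');       -- daily at 8:55am

   -- Return the RAM to the data buffer at 10:00am.
   dbms_job.submit(:j2,
      'begin
          execute immediate ''alter system set shared_pool_size=500m'';
          execute immediate ''alter system set db_cache_size=1500m'';
       end;',
      trunc(sysdate) + 10/24,                       -- 10:00am
      'trunc(sysdate) + 1 + 10/24');                -- daily at 10:00am
   commit;
end;
/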

Developing Signatures for pga_aggregate_target

The PGA region in Oracle is very important because it governs the speed of sorting operations and SQL hash joins. You may want to dynamically change the pga_aggregate_target parameter when any one of the following conditions is true:

  • Whenever the value of the v$pgastat statistic "estimated PGA memory for one-pass" exceeds pga_aggregate_target, you want to increase pga_aggregate_target.

     

  • Whenever the v$sysstat statistic "workarea executions — multipass" accounts for more than 1 percent of all work area executions, the database may benefit from additional PGA memory.

     

  • It is possible to over-allocate PGA memory, and you may consider reducing the value of pga_aggregate_target whenever the v$sysstat row "workarea executions — optimal" consistently accounts for 100 percent of all work area executions (a query that reports these percentages appears after this list).
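
The following query, a minimal sketch that is not part of the original article, reports the work area execution percentages those rules refer to, straight from v$sysstat:

-- A hedged sketch: percentage of optimal, one-pass and multi-pass
-- work area executions since instance startup.
select
   name,
   value,
   round(100 * value / nullif(sum(value) over (), 0), 2)  pct_of_executions
from
   v$sysstat
where
   name like 'workarea executions%';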

The v$pgastat view provides instance-level summary statistics on PGA usage and the automatic memory manager. To get a quick overview, a simple query provides excellent overall PGA usage statistics for all Oracle connections:

 

check_pga.sql
-- *************************************************************
-- Display detailed PGA statistics
--
-- *************************************************************
column name  format a30
column value format 999,999,999

select
   name, 
   value 
from
   v$pgastat
;

The output of this query might look like this:

NAME                                                   VALUE     
------------------------------------------------------ ----------
aggregate PGA auto target                             736,052,224
global memory bound                                        21,200
total expected memory                                     141,144
total PGA inuse                                        22,234,736
total PGA allocated                                    55,327,872
maximum PGA allocated                                  23,970,624
total PGA used for auto workareas                         262,144
maximum PGA used for auto workareas                     7,333,032
total PGA used for manual workareas                             0
maximum PGA used for manual workareas                           0
estimated PGA memory for optimal                          141,395
maximum PGA memory for optimal                        500,123,520
estimated PGA memory for one-pass                         534,144
maximum PGA memory for one-pass                        52,123,520

In the preceding display from v$pgastat, we see the following important statistics:

  • Total PGA used for auto workareas — This statistic monitors RAM consumption of all connections running in automatic memory mode. Remember, not all internal processes are allowed by Oracle to use the automatic memory feature. For example, Java and PL/SQL will allocate RAM memory, and it will not be counted in this total. Hence, you should subtract this value from the total PGA allocated to see how much RAM is being consumed outside the automatic work areas by session memory, Java, and PL/SQL (a query sketch of this subtraction follows the list).

     

  • Estimated PGA memory for optimal/one-pass — These statistics estimate how much memory would be required to satisfy all connections' work area demands in optimal (or one-pass) mode. Remember, when Oracle experiences a memory shortage it falls back to one-pass and multi-pass work area executions, which are far slower. These statistics are critical for monitoring RAM consumption, and most Oracle DBAs will increase pga_aggregate_target to this value.
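
As a minimal sketch of the subtraction described in the first bullet (not part of the original article), the following query reports how much of the allocated PGA lies outside the automatic work areas:

-- A hedged sketch: PGA allocated outside the automatic work areas
-- (session memory, Java, PL/SQL and so on).
select
   a.value                 total_pga_allocated,
   w.value                 pga_for_auto_workareas,
   a.value - w.value       pga_outside_auto_workareas
from
   v$pgastat a,
   v$pgastat w
where
   a.name = 'total PGA allocated'
and
   w.name = 'total PGA used for auto workareas';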

 

Starting with Oracle9i Database Release 2, you can also use the advisory utility dubbed v$pga_target_advice. This utility shows the marginal changes in optimal, one-pass, and multi-pass PGA executions for different sizes of pga_aggregate_target, ranging from 10% to 200% of the current value.

Listing 4 shows a sample query against this utility, followed by a sample of its output.

Listing 4:

-- ************************************************
-- Display pga target advice
-- ************************************************

column c1 heading 'Target(M)'
column c2 heading 'Estimated|Cache Hit %'
column c3 heading 'Estimated|Over-Alloc.'

SELECT
   ROUND(pga_target_for_estimate /(1024*1024)) c1,
   estd_pga_cache_hit_percentage               c2,
   estd_overalloc_count                        c3
FROM
   v$pga_target_advice;

Here we see that we have over-allocated pga_aggregate_target for current processing, and it is safe to steal RAM from this region and allocate it elsewhere:

             Estimated   Estimated
 Target(M) Cache Hit % Over-Alloc.
---------- ----------- -----------
       113          73           0
       225          81           0
       450          92           0
       675         100           0
       900         100           0
      1080         100           0
      1260         100           0 <= current size
      1440         100           0
      1620         100           0
      1800         100           0
      2700         100           0
      3600         100           0
      5400         100           0
      7200         100           0

As you can see, you can easily create automated methods for detecting PGA memory shortages (using Statspack) and write jobs that dynamically change pga_aggregate_target to ensure optimal RAM usage for sorts and hash joins.
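
A simple automated interpretation, sketched below and not part of the original article, is to ask the advisory for the smallest pga_aggregate_target that is still predicted to deliver a 100 percent estimated cache hit percentage with no over-allocations; RAM above that value is a candidate to be given back to other SGA regions.

-- A hedged sketch: smallest advised pga_aggregate_target (in MB) that
-- still predicts a 100% cache hit percentage and no over-allocations.
select
   min(round(pga_target_for_estimate/(1024*1024)))  smallest_optimal_target_mb
from
   v$pga_target_advice
where
   estd_pga_cache_hit_percentage = 100
and
   estd_overalloc_count = 0;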

Developing Signatures for Data Buffers

In practice, the DBA will notice that variation in the data buffer hit ratio (DBHR) increases as the measurement interval shrinks. For example, Statspack may report a DBHR of 92 percent at hourly intervals, yet show wide variation when the ratio is sampled at two-minute intervals, as shown in Figure 3.

Figure 3: Data buffer hit ratio from a Statspack report at two-minute intervals

 

As a general guideline, all memory available on the host should be tuned, and the db_cache_size should be allocated RAM resources up to the point of diminishing returns (see Figure 4). This is the point where additional buffer blocks do not significantly improve the buffer hit ratio.

Figure 4: RAM allocated to the data buffer up to the point of diminishing returns

 

The new v$db_cache_advice view is similar to an older utility that was introduced in Oracle7, x$kcbrbh, which was used to track buffer hits; similarly, the x$kcbcbh view was used to track buffer misses. The data buffer hit ratio can provide data similar to that from v$db_cache_advice, so most Oracle tuning professionals can use both tools to monitor the effectiveness of their data buffers.

The script in Listing 5 can be used to perform the cache advice function, when the v$db_cache_advice utility has been enabled and the database has run long enough to give representative results. Using this script, you can get cache advice for all of your buffer pools, including your 2k, 4k, 8k, 16k and 32k data buffers.

Listing 5

-- ****************************************************************
-- Display cache advice
-- ****************************************************************


column c1   heading 'Cache Size (MB)'       format 999,999,999,999
column c2   heading 'Buffers'               format 999,999,999
column c3   heading 'Estd Phys|Read Factor' format 999.90
column c4   heading 'Estd Phys| Reads'      format 999,999,999

select
   size_for_estimate          c1,
   buffers_for_estimate       c2,
   estd_physical_read_factor  c3,
   estd_physical_reads        c4
from
   v$db_cache_advice
where
   name = 'DEFAULT'
and
   block_size  = (SELECT value FROM V$PARAMETER
                   WHERE name = 'db_block_size')
and
   advice_status = 'ON';

The output from the script is shown below. Note that the values range from 10 percent of the current size to double the current size of the db_cache_size.

                                Estd Phys    Estd Phys
 Cache Size (MB)      Buffers Read Factor        Reads
---------------- ------------ ----------- ------------
              30        3,802       18.70  192,317,943 <- 10% size
              60        7,604       12.83  131,949,536
              91       11,406        7.38   75,865,861
             121       15,208        4.97   51,111,658
             152       19,010        3.64   37,460,786
             182       22,812        2.50   25,668,196
             212       26,614        1.74   17,850,847
             243       30,416        1.33   13,720,149
             273       34,218        1.13   11,583,180
             304       38,020        1.00   10,282,475 Current Size
             334       41,822         .93    9,515,878
             364       45,624         .87    8,909,026
             395       49,426         .83    8,495,039
             424       53,228         .79    8,116,496
             456       57,030         .76    7,824,764
             486       60,832         .74    7,563,180
             517       64,634         .71    7,311,729
             547       68,436         .69    7,104,280
             577       72,238         .67    6,895,122
             608       76,040         .66    6,739,731 <- 2x size

As noted in Figure 4, your optimal setting for the data buffer is the spot where the marginal benefit of additional buffers decreases. Of course, this optimal point will change over time, and that is why we need proactive SGA reconfiguration so that we can change the data buffer size according to current processing needs.
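
Here is one way a script might locate that point automatically; this is a sketch that is not part of the original article, and the 10 percent margin is an illustrative assumption. It picks the smallest cache size whose predicted physical reads fall within 10 percent of the reads predicted at the largest size the advisory examined.

-- A hedged sketch: smallest DEFAULT-pool cache size (in MB) whose predicted
-- physical reads are within 10% of the best prediction in the advisory.
select
   min(size_for_estimate)   diminishing_returns_mb
from
   v$db_cache_advice
where
   name = 'DEFAULT'
and
   advice_status = 'ON'
and
   estd_physical_reads <= 1.10 * (select min(estd_physical_reads)
                                  from   v$db_cache_advice
                                  where  name = 'DEFAULT'
                                  and    advice_status = 'ON');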

For trend analysis, the short-term variations in the DBHR are less important, and average data buffer hit ratios can be generated along two dimensions: average DBHR by hour of the day and average DBHR by day of the week.

Remember, change occurs in the data buffers rapidly, and sometimes a long-term analysis will provide clues that point to processing problems within the database. Almost every Oracle database exhibits patterns that are linked to regular processing schedules, called signatures.

The output from a Statspack DBHR hourly average script is shown below. The report displays the average hit ratio for each hour of the day, based on six months of data collection. The DBHR signature of the database becomes obvious if this data is plotted in a spreadsheet.

hr   BHR
-- -----
00   .94
01   .96
02   .91
03   .82
04   .80
05   .90
06   .94
07   .93
08   .96
09   .95
10   .84
12   .91
13   .96
14   .95
17   .97
18   .97
19   .95
20   .95
21   .99
22   .93
23   .94

A plot of the data is shown in Figure 5, and we see some interesting repeating trends.

Figure 5: Average data buffer hit ratio by hour of the day

 

It is clear from the chart that the DBHR drops below 90 percent at 3:00am, 4:00am, and 10:00am daily. To solve this problem, the DBA might schedule a dynamic adjustment to add more RAM to db_cache_size at those times each day.

A similar script can yield the average DBHR by day of the week as shown in Listing 6.

Listing 6

set pages 999;

column bhr format 9.99
column mydate heading 'DOW' format a9

select
   to_char(snap_time,'day')      mydate,
   avg(
   (((new.consistent_gets-old.consistent_gets)+
   (new.db_block_gets-old.db_block_gets))-
   (new.physical_reads-old.physical_reads))
   /
   ((new.consistent_gets-old.consistent_gets)+
   (new.db_block_gets-old.db_block_gets))
   ) bhr
from
   perfstat.stats$buffer_pool_statistics old,
   perfstat.stats$buffer_pool_statistics new,
   perfstat.stats$snapshot               sn
where
   new.name = 'DEFAULT'
and
   new.name = old.name
and
   new.snap_id = sn.snap_id
and
   old.snap_id = sn.snap_id-1
and
   new.consistent_gets > 0
and
   old.consistent_gets > 0
having
   avg(
   (((new.consistent_gets-old.consistent_gets)+
   (new.db_block_gets-old.db_block_gets))-
   (new.physical_reads-old.physical_reads))
   /
   ((new.consistent_gets-old.consistent_gets)+
   (new.db_block_gets-old.db_block_gets))
   ) < 1
group by
   to_char(snap_time,'day');

The output from the script is below.

DOW         BHR
--------- -----
sunday      .91
monday      .98
tuesday     .93
wednesday   .91
thursday    .96
friday      .89
saturday    .92

We can then paste this data into a spreadsheet and chart it as shown in Figure 6.

Figure 6: Average data buffer hit ratio by day of the week

 

This report is useful in ascertaining a periodic or regular buffer signature, and the graph clearly shows the need to increase the db_cache_size on Mondays and Fridays. To understand why, you would use Statspack to investigate the differences between these days and other days of the week.

Next, let's examine another self-tuning method where we automatically identify and assign objects into the KEEP pool for full RAM caching.

Automating KEEP Pool Assignment

According to Oracle documentation, "A good candidate for a segment to put into the KEEP pool is a segment that is smaller than 10% of the size of the DEFAULT buffer pool and has incurred at least 1% of the total I/Os in the system." It is easy to locate segments that are less than 10% of the size of their data buffer, but Oracle does not have any direct mechanism to track I/O at the segment level. One solution is to place each segment into an isolated tablespace, in which case Statspack will show the total I/O, but this is not a practical solution for complex schemas with hundreds of segments.

Because the idea behind the KEEP pool is full caching, we want to locate those objects that are small and experience a disproportionate amount of activity. Using this guideline, you might consider caching any objects where:

  • The object consumes more than 10% of the total size of the data buffer
  • More than 50% of the object already resides in the data buffer (according to an x$bh query).

To identify these objects, we start by explaining all of the SQL in the database, looking for small-table full-table scans. Next, we repeatedly examine the data buffer cache, looking for any objects that have more than 80% of their blocks in RAM.

The best method for identifying tables and indexes for the KEEP pool examines the current blocks in the data buffer. For this query, the rules are simple: Any object that has more than 80% of its data blocks in the data buffer should probably be fully cached.

It is highly unlikely that an undeserving table or index would meet this criterion. Of course, you would need to run this script numerous times during the day because the buffer contents change very rapidly.

The script in Listing 7 can be run every hour via dbms_job to automate the monitoring of KEEP pool candidates. Each time it finds a candidate, the DBA can execute the generated syntax and adjust the total KEEP pool size to accommodate the new object.

Listing 7

set pages 999
set lines 92


spool keep_syn.lst

drop table t1;

create table t1 as
select
   o.owner          owner,
   o.object_name    object_name,
   o.subobject_name subobject_name,
   o.object_type    object_type,
   count(distinct file# || block#)         num_blocks
from
   dba_objects  o,
   v$bh         bh
where
   o.data_object_id  = bh.objd
and
   o.owner not in ('SYS','SYSTEM')
and
   bh.status != 'free'
group by
   o.owner,
   o.object_name,
   o.subobject_name,
   o.object_type
order by
   count(distinct file# || block#) desc
;

select
   'alter '||s.segment_type||' '||t1.owner||'.'||s.segment_name||' storage (buffer_pool keep);'
from
   t1,
   dba_segments s
where
   s.segment_name = t1.object_name
and
   s.owner = t1.owner
and
   s.segment_type = t1.object_type
and
   nvl(s.partition_name,'-') = nvl(t1.subobject_name,'-')
and
   buffer_pool <> 'KEEP'
and
   object_type in ('TABLE','INDEX')
group by
   s.segment_type,
   t1.owner,
   s.segment_name
having
   (sum(num_blocks)/greatest(sum(blocks), .001))*100 > 80
;

spool off

 

Here is the output from this script:

alter TABLE LIS.BOM_DELETE_SUB_ENTITIES storage (buffer_pool keep);
alter TABLE LIS.BOM_OPERATIONAL_ROUTINGS storage (buffer_pool keep);
alter INDEX LIS.CST_ITEM_COSTS_U1 storage (buffer_pool keep);
alter TABLE ISM3.FND_CONCT_PROGRAMS storage (buffer_pool keep);
alter TABLE ISM3.FND_CONCT_REQUESTS storage (buffer_pool keep);
alter TABLE IS.GL_JE_BATCHES storage (buffer_pool keep);
alter INDEX IS.GL_JE_BATCHES_U2 storage (buffer_pool keep);
alter TABLE IS.GL_JE_HEADERS storage (buffer_pool keep);

When you have identified the segments for assignment to the KEEP pool, you will need to adjust the db_keep_cache_size parameter to ensure that it has enough blocks to fully cache all of the segments assigned to the pool.
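
The following query, a minimal sketch that is not part of the original article, estimates that requirement from dba_segments once the generated alter commands have been executed; the 20 percent headroom is an illustrative assumption.

-- A hedged sketch: RAM needed to fully cache everything assigned to the
-- KEEP pool, plus 20% headroom for growth.
select
   round(sum(bytes)/(1024*1024))         keep_pool_mb_needed,
   round(1.2 * sum(bytes)/(1024*1024))   suggested_keep_cache_mb
from
   dba_segments
where
   buffer_pool = 'KEEP';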

Of course, there are many exceptions to the automated approach. For example, these scripts do not handle table partitions and other object types. They should be used only as a framework for your KEEP pool caching strategy, and not be run as-is.

The Future of Database Self-Tuning

While there are many new Oracle features that make automated tuning easier, the most directly relevant features are the new dbms_advisor PL/SQL package and the Java pool advisory utility:

  • Java Pool Advisory — This utility helps predict the benefit of adding or removing RAM frames from the java_pool_size memory region.

     

  • SQLAccess Advisor — This is an exciting new Oracle feature of the dbms_advisor package that allows you to get advice on the creation, maintenance, and use of indexes and materialized views.

     

  • The tune_mview advisor utility — This is another new component of the Oracle dbms_advisor package that allows you to get expert advice on which materialized views can be used to hyper-charge your SQL queries.

We will be taking a close look at these new utilities in a later OTN installment and see how they supplement the arsenal of tools for automating Oracle tuning.

Remember, whenever Oracle provides a powerful tool such as the alter system statement, you should leverage it to automate routine DBA tasks. That will free more of your time to pursue the more complex areas of Oracle database administration.


 

 
