Using Oracle data buffer cache Advice
Oracle Tips by Burleson Consulting
Nov 18, 2002
For more detailed internals on
v$db_cache_advice, see my updated notes on
how to predict I/O reduction from a
larger data cache.
Database Buffer Cache
The database buffer cache holds copies of data blocks read from
the data files. The term data block is used to describe a block
containing table data, index data, clustered data, and so on.
Basically, it is a block that contains data. All user processes
concurrently connected to the instance share access to the database
buffer cache. The database buffer cache is logically segmented into
multiple sets. This reduces contention on multiprocessor systems.
This area of the SGA contains only the buffers themselves and not
their control structures. For each buffer, there is a corresponding
buffer header in the variable area of the SGA.
The ability to compute
the optimal size of the data buffers is a critical
task for large databases. As databases grow to
hundreds of billions of bytes, it becomes economically
impractical to cache the entire database in RAM. So
Oracle professionals must find the point of
diminishing marginal returns for the addition of RAM
resources. Being able to do this can save the company
hundreds of thousands of dollars in RAM expenses.
Oracle9i introduces a new view,
v$db_cache_advice, that can predict the benefit of
additional data buffers in the data buffer cache.
There has been some question about whether the data buffer cache advisory is
a ratio-based tool (and hence shares the limitations of the data buffer hit ratio).
MOSC
Note:148511.1 says that the estd_physical_read_factor is computed
as the "ratio of the number of estimated physical reads to the number of
reads in the real cache." See
how to predict I/O reduction
from a larger data cache.
This
v$db_cache_advice view shows the estimated miss rates for twenty
potential buffer cache sizes, ranging from 10 percent
of the current size to 200 percent of the current
size. This allows Oracle DBAs to accurately predict
the optimal size for the RAM data buffer. Let's look
at some examples to illustrate this process.
How does it work?
With the data buffer set to a very small size, a small
increase to the size of the RAM data buffers results in a large reduction in
Disk I/O.
However, the high reduction in Disk I/O does not
continue ad infinitum. As the RAM size approaches the database size, the
marginal reduction in Disk I/O is smaller because all databases have
infrequently accessed data.
As a general rule, all available
memory on the host should be tuned, and RAM
resources should be given to db_cache_size up
to a point of diminishing returns. There
is a point where the addition of buffer blocks will
not significantly improve the buffer hit ratio, and
this gives the Oracle DBA the ability to find the
optimal amount of buffers.
This new DB cache advice feature is very similar to
the Oracle7 utility that predicted the benefit from
adding additional data buffers. This utility used a
view called x$kcbrbh to track buffer hits and
x$kcbcbh to track buffer misses.
Just like the Oracle7 model, you must pre-allocate the
RAM memory for the data buffers to use this
functionality. The cache advice feature is enabled by
setting the init.ora parameter, db_cache_advice,
to the value on or ready. These values
are set dynamically with the alter system
command, so the DBA can turn on the predictive model
while the database is running.
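As a minimal sketch, enabling the advisory dynamically from a privileged session looks like this (syntax assumes Oracle9i):

```sql
-- Pre-allocate the advisory memory without starting data collection:
alter system set db_cache_advice = ready;

-- Turn on the predictive model while the database is running:
alter system set db_cache_advice = on;
```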
Since you must pre-allocate the additional RAM data
buffers for the db_cache_size to use v$db_cache_advice,
you may only want to use this utility once to
determine an optimal size. Remember, you can also use
the data buffer cache hit ratio to gather similar
data.
Once the v$db_cache_advice is enabled and the database
has run for a representative time period, the query in
Listing A can be run to perform the prediction.
column c1 heading 'Cache Size (m)'        format 999,999,999,999
column c2 heading 'Buffers'               format 999,999,999
column c3 heading 'Estd Phys|Read Factor' format 999.90
column c4 heading 'Estd Phys| Reads'      format 999,999,999

select
   size_for_estimate          c1,
   buffers_for_estimate       c2,
   estd_physical_read_factor  c3,
   estd_physical_reads        c4
from
   v$db_cache_advice
where
   name = 'DEFAULT'
and
   block_size = (SELECT value FROM V$PARAMETER
                 WHERE name = 'db_block_size')
and
   advice_status = 'ON';
The output shows that the range of values is from 10 percent of
the current size up to double the current size.
                                Estd Phys    Estd Phys
 Cache Size (MB)      Buffers Read Factor        Reads
---------------- ------------ ----------- ------------
30 3,802 18.70 192,317,943 < 10% size
60 7,604 12.83 131,949,536
91 11,406 7.38 75,865,861
121 15,208 4.97 51,111,658
152 19,010 3.64 37,460,786
182 22,812 2.50 25,668,196
212 26,614 1.74 17,850,847
243 30,416 1.33 13,720,149
273 34,218 1.13 11,583,180
304 38,020 1.00 10,282,475 Current Size
608 76,040 .66 6,739,731 < 2x size
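To locate the point of diminishing returns in output like this, the marginal I/O savings of each step can be computed directly from the view. This is only a sketch, assuming the DEFAULT pool and the lag() analytic function (available in Oracle9i):

```sql
select
   size_for_estimate                                    cache_mb,
   estd_physical_reads,
   lag(estd_physical_reads) over (order by size_for_estimate)
      - estd_physical_reads                             reads_saved_by_step
from
   v$db_cache_advice
where
   name = 'DEFAULT'
order by
   size_for_estimate;
```

When reads_saved_by_step becomes small relative to the RAM added at each step, you have passed the point of diminishing marginal returns.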
Here, you can see no peak in total Disk I/O and no
flattening of the marginal gains from adding more RAM buffers.
This is very typical of data warehouse databases that
read large tables with full-table scans. Consequently,
there is no specific optimal setting for the
db_cache_size parameter. In other words, Oracle has an
insatiable appetite for data buffer RAM, and the more
you give to db_cache_size, the less Disk I/O will
occur.
The general rule for adding blocks to db_cache_size is
simple: As long as marginal gains can be achieved from
adding buffers and you have the memory to spare, you
should increase the value of db_cache_size. Increases
in buffer blocks increase the amount of required RAM
memory for the database, and it is not always possible
to hog all the memory on a processor for the database
management system. So a DBA should carefully review
the amount of available memory and determine the
optimal amount of buffer blocks.
A word of warning: When the DBA sets db_cache_advice=on,
Oracle will steal RAM pages from the shared pool RAM
area, often with disastrous results to the objects
inside the library cache. For example, if the existing
setting for db_cache_size is 500 MB, Oracle will steal
a significant amount of RAM from the shared pool. To
avoid this problem, the DBA should set db_cache_advice=ready
in the init.ora file. When this is done, Oracle will pre-allocate the RAM memory at database startup time.
For more sophisticated Oracle databases, you
can control not only the number of buffer blocks but
also the block size for each data buffer. For example,
you might want to make some of the buffer blocks very
large so that you can minimize I/O contention.
Remember, the cost for an I/O for a 32 KB block is not
significantly more expensive than an I/O for 4 KB
block. A database designer might choose to make
specific data buffers large to minimize I/O if the
application clusters records on a database block,
while keeping other data blocks small.
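As a sketch, a 32K buffer pool and a matching tablespace might be created like this (the tablespace and datafile names are illustrative, and db_32k_cache_size is not available on Windows platforms):

```sql
-- Instantiate a 32K data buffer:
alter system set db_32k_cache_size = 64m;

-- Create a tablespace whose blocks will be cached in that buffer:
create tablespace large_block_ts
   datafile '/u01/oradata/mysid/large_block_ts.dbf' size 500m
   blocksize 32k;
```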
Make a prediction
With the increasing flexibility and sophistication
of Oracle, the database administrator is
challenged to determine the optimal sizes of all SGA
regions. Making these decisions properly can mean up
to millions of dollars in savings of RAM resources.
The DB cache advice facility is just one more way
that the DBA can leverage the intelligence of
the Oracle database to predict the optimal size
for the RAM data caches.
However, there are some limitations to
the Oracle data buffer cache advisor. Setting db_cache_advice=on
while the database is running will cause Oracle to grab RAM pages from the
shared pool RAM area, and the buffer cache advisory uses this RAM in a simple
simulation to help capture elapsed-time data for logical I/O and physical disk
reads, using different buffer sizes.
While the
internals of the utility are not disclosed, we see
these parameters that affect the execution of the
v$db_cache_advice. (Note: These may
not be used within the historical data buffer
reports):
- _db_cache_advice_batch_size
- _db_cache_advice_sample_factor
- db_cache_advice=on or ready
First, there are
several versions of the data buffer cache advisor:
- Real-time - The v$db_cache_advice utility samples data from the SGA in real time and creates a limited elapsed-time report from the SGA accumulators and the additional memory allocated by the db_cache_advice settings.
- Historical with AWR and STATSPACK - It appears that the AWR and STATSPACK data buffer advisories use the DBA_HIST and STATSPACK table data for the report.
We also see some
limitations to the accuracy of the data buffer
advisor because of the architecture:
- Limited deltas - Using only two observations for logical reads and physical I/O is not enough data for a meaningful prediction. The "current workload" assumption has a wide variance, and the numbers for a one-minute report will be quite different from a one-hour report.
- Limited metrics - All of the advice from the data buffer cache advisory is limited to logical I/O and physical I/O at the system-wide level.
- Assumption of optimization? - The AWR data buffer cache advisor (and possibly the related v$db_cache_advice utility) only uses limited data points, and some experts suggest that it assumes that the existing data buffer size is already optimal (the point at which the working set of frequently used data blocks is cached, and additions to the data buffer result in marginally declining reductions in physical reads).
Hence, on the
margin, the data buffer cache advisory is inaccurate
for databases with an undersized db_cache_size
(and db_keep_cache_size, etc.). With
the data buffer set to a very small size, a small
increase to the size of the RAM data buffers results
in a large reduction in Disk I/O.
The data buffer cache advisory may
underestimate the benefits with a too-small cache
However, the high reduction in Disk
I/O does not continue ad infinitum. As the RAM size
approaches the database size, the marginal reduction
in Disk I/O is smaller because all databases have
infrequently accessed data.
Using v$db_cache_advice
You enable the v$db_cache_advice with the db_cache_advice
parameter. The values are ready, on and off, and they can be set in
the init.ora file, or dynamically enabled with an "alter system set
db_cache_advice" command.
MOSC
Note:148511.1 says that the v$db_cache_advice view contains these
columns, and the main predictor (estimated physical read factor) is the
ratio of estimated disk reads for each of the listed cache sizes:
- id - This is the ID number for the buffer pool (values from 1 to 8).
- name - Oracle allows for multiple data buffer pools, as set by db_cache_size, db_keep_cache_size, db_recycle_cache_size, and the instantiated blocksize buffers db_2k_cache_size, db_4k_cache_size, db_8k_cache_size, db_16k_cache_size and, on non-Windows platforms, db_32k_cache_size.
- block_size - The blocksize for the data buffer (2k, 4k, 8k, 16k and 32k).
- advice_status - ON or OFF.
- size_for_estimate - This is the baseline cache size for the prediction (e.g., 100m).
- buffers_for_estimate - The number of data buffers (e.g., 512).
- estd_physical_read_factor - The ratio of the number of estimated physical reads to the number of reads in the real cache.
- estd_physical_reads - This is the estimate of the number of disk reads for each listed cache size (from 0.1 to 2.0 times the current size).
Data buffer cache advisory usage notes
When using the v$db_cache_advice report, remember that
when the DBA sets db_cache_advice=on,
Oracle will steal RAM pages from the shared pool RAM
area, often with disastrous results to the objects
inside the library cache. For example, if the existing
setting for db_cache_size is 500 MB, Oracle will steal
a significant amount of RAM from the shared pool.
To avoid this problem, the DBA should set db_cache_advice=ready
in the init.ora file; when this is done, Oracle will pre-allocate
the RAM memory at database startup time. Use this setting only while
the advisory is needed, and always turn off the RAM overhead by setting
db_cache_advice=off when your data buffer optimization is complete.
Internals of the AWR report for the data buffer advisor
The DBA invokes the standard awrrpt.sql in the $ORACLE_HOME/rdbms/admin
directory. The awrrpt.sql script, in turn, calls awrrpti.sql,
which actually pulls the report. Unfortunately, they use an internal query,
so the internal machinations are hidden:
select output from
table(dbms_workload_repository.&fn_name( :dbid,
:inst_num,
:bid, :eid,
:rpt_options ));
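As a sketch, invoking the report from SQL*Plus is as simple as the following (the ? character expands to $ORACLE_HOME, and the script then prompts for the snapshot range and report file name):

```sql
-- From a SQL*Plus session with DBA privileges:
@?/rdbms/admin/awrrpt.sql
```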
Overhead of v$db_cache_advice
MOSC
Note:148511.1 says that there will be additional CPU overhead when
running the data buffer cache advisor, and that the largest resource
overhead will be the roughly 100 bytes of additional RAM that is added
to each buffer whenever the utility is invoked (that is, whenever you have set
db_cache_advice=on or db_cache_advice=ready).
"The advisory requires memory to
be allocated from the shared pool (of the order
of 100 bytes per buffer)."
In sum, the
usefulness of a data buffer cache advisory is
undisputed, but the true way to a successful
predictive model is to use the scripts from
my book "Oracle
Tuning: The Definitive Reference".
These provide a valid time-series analysis
since the single delta values in the advisory are
not sufficient. Using STATSPACK data for
consistent gets and physical reads, statistically
significant trends can be established.
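As a rough sketch of the time-series approach, consecutive STATSPACK snapshots can be differenced to get per-interval physical reads. This assumes the standard perfstat.stats$snapshot and perfstat.stats$sysstat tables; the join below is illustrative, not a script from the book:

```sql
select
   sn.snap_id,
   sn.snap_time,
   new.value - old.value    physical_reads_delta
from
   perfstat.stats$snapshot  sn,
   perfstat.stats$sysstat   old,
   perfstat.stats$sysstat   new
where
   new.snap_id = sn.snap_id
and old.snap_id = sn.snap_id - 1
and new.name = 'physical reads'
and old.name = new.name;
```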