PGA memory content explained
In a paper titled “Advanced Management Of Working Areas in Oracle
9i/10g”, author Joze Senegacnik describes the internals of Oracle PGA
management in the 10g release. This comprehensive and detailed paper
on the latest PGA memory structures contains many insights:
“The contents of the PGA vary depending on whether we are using
DEDICATED or SHARED server mode, but generally we can use the
following description of PGA parts:
Session Memory:
The memory allotted to hold logon information and other session
details. When a shared server model is used, this kind of
information is stored in the SGA because it needs to be persistent
between calls.
SQL Execution Memory:
The memory allotted for the execution of SQL statements. The SQL
Execution Memory has a persistent and a run-time area:
The persistent area persists across multiple executions of the same
SQL statement and contains information such as bind details, data
type conversions, etc. This persistent area is de-allocated only
when the cursor is closed. When the shared server process model is
used, the persistent area is part of the SGA (part of the Large pool
if properly configured).
The run-time area contains information used while a SQL statement is
being executed. Its size depends on the number and size of rows
being processed, as well as on the type and complexity of the SQL
statement. It is de-allocated when the execution completes. For
shared server processes, the run-time area resides in the PGA for
DML/DDL operations and in the SGA for queries.
The feedback loop is closed by the local memory manager. It uses the
current value of the memory bound and the current profile of a work
area to determine the expected size, the correct amount of PGA
memory that can be made available to this work area. The expected
work area size for each active operation is calculated based on the
following rules:
The expected size can never be less than the minimum memory
requirement of the operation, nor more than its cache size.
If the global memory bound is between the minimum and the cache
requirement, the expected size will be equal to the bound.
The only exception to this rule is a sort operation. For a sort
operation, the expected size will be equal to the one-pass size if
the bound lies between the one-pass size and the cache size. This is
because a sort does not benefit from more than one-pass memory
unless the whole operation can be performed entirely in cache, as
explained earlier.
For parallel operations, the expected size is multiplied by the
degree of parallelism.
Finally, no single operation will be allowed to “hog” all available
memory. Therefore, the expected size can never be more than 5% of
the overall memory target for serial operations, or more than 30%
for parallel operations.
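Taken together, these rules can be sketched as a small function. The
Python below is purely illustrative: the names (`bound`, `minimum`,
`cache`, `one_pass`, `dop`, `target`) and the function itself are
assumptions for the sketch, not actual Oracle internals.

```python
def expected_size(bound, minimum, cache, one_pass=None, dop=1, target=None):
    """Illustrative expected work-area size under the rules quoted above."""
    if bound <= minimum:
        size = minimum      # never below the operation's minimum requirement
    elif bound >= cache:
        size = cache        # never above its cache size
    elif one_pass is not None and bound >= one_pass:
        size = one_pass     # sort exception: no benefit between one-pass and cache
    else:
        size = bound        # otherwise the expected size tracks the bound
    size *= dop             # scale by the degree of parallelism
    if target is not None:
        # no single operation may "hog" the overall target:
        # 5% cap for serial, 30% cap for parallel
        cap = (0.30 if dop > 1 else 0.05) * target
        size = min(size, cap)
    return int(size)
```

For example, with a bound of 50 MB, a 10 MB minimum, and a 100 MB
cache size, a hash join would get 50 MB, while a sort with a 20 MB
one-pass size would get only 20 MB, since extra memory below the
cache size would not speed it up.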