Data Guard Tuning SQL Apply Tips
Oracle Database Tips by Donald BurlesonDecember 9, 2015
Oracle Data Guard -
Performance Tuning of Data Guard Configuration
Tuning Tips for the SQL Apply Operation
The previous section presented methods of using the Oracle data dictionary
to diagnose performance issues and identify bottlenecks. This
section focuses on the changes required on a logical standby
database, or in some cases on the primary database, to alleviate
those performance problems.
-
Uniquely identifying each row in a
table and avoiding full table scans can optimize the performance of
the SQL apply operation. Verify that there are no
tables in the primary database without a primary key or unique index
defined on them. If there are any such tables, adding a primary key
RELY constraint will minimize the amount of work required by the SQL
apply process to uniquely identify rows in those tables. If the
SQL apply operation is performing many full table scans, consider
adding indexes on the affected tables in the logical standby database.
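As a sketch, tables that lack a unique identifier can be listed with the DBA_LOGSTDBY_NOT_UNIQUE view, and a RELY constraint can then be added on the primary database. The table, column, and constraint names below are illustrative:

```sql
-- On the primary: list tables that have no primary key or unique index
-- (BAD_COLUMN = 'Y' flags a column whose datatype prevents unique
-- identification of rows)
SELECT owner, table_name, bad_column
  FROM dba_logstdby_not_unique;

-- Illustrative example: if SCOTT.EMP has a logically unique column
-- EMPNO but no declared constraint, add a RELY constraint so SQL Apply
-- can identify rows without the cost of validating existing data
ALTER TABLE scott.emp
  ADD CONSTRAINT emp_pk PRIMARY KEY (empno) RELY DISABLE;
```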
-
Reducing the level of transaction_consistency will generally
result in better performance. Evaluate the requirements of
transaction consistency based on how the logical standby
database is used. If the logical standby instance
is used only for disaster
recovery purposes and no other processes, such as reporting services,
are accessing it, consider setting transaction_consistency to NONE. With this setting, no
read-consistent data is guaranteed until all the logs are applied. If the database
is used for reporting, consider setting transaction_consistency to READ_ONLY. Full transaction
consistency should be avoided wherever possible; because FULL is the default
value, it is important to remember to change it after creating a
logical standby database.
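A hedged sketch of changing this setting (the TRANSACTION_CONSISTENCY apply parameter is release-dependent; SQL Apply must be stopped before apply parameters are changed):

```sql
-- Stop SQL Apply before changing apply parameters
ALTER DATABASE STOP LOGICAL STANDBY APPLY;

-- Disaster-recovery-only standby: no read consistency required
EXEC DBMS_LOGSTDBY.APPLY_SET('TRANSACTION_CONSISTENCY', 'NONE');

-- Or, for a standby used for reporting:
-- EXEC DBMS_LOGSTDBY.APPLY_SET('TRANSACTION_CONSISTENCY', 'READ_ONLY');

ALTER DATABASE START LOGICAL STANDBY APPLY;
```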
-
In general, increasing the shared
pool size improves the performance of the log apply service, provided
it does not cause the SGA to page out of memory. If the output of the logstdby_stats.sql script shows significant "Memory wait" or
"Unsuccessful Handling Of Low Memory" statistics, consider changing the memory
allocation for the log apply service. Before increasing the size of
the shared pool, check the "free memory" in the shared pool. By
default, the SQL apply service can use only up to 25% of the shared
pool size. If the "free memory" in the shared pool is not sufficient,
consider increasing the size of the shared pool through the
initialization parameter shared_pool_size. The amount of
memory that the SQL apply process can consume can be changed using
the dbms_logstdby.apply_set procedure. For example, the
following statement reserves 100MB of shared pool for the SQL apply
process:
EXEC DBMS_LOGSTDBY.APPLY_SET('MAX_SGA',100);
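The shared pool "free memory" mentioned above can be checked with a query along these lines against v$sgastat:

```sql
-- Check free memory in the shared pool before raising shared_pool_size
SELECT pool,
       name,
       ROUND(bytes / 1024 / 1024, 1) AS free_mb
  FROM v$sgastat
 WHERE pool = 'shared pool'
   AND name = 'free memory';
```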
-
Increasing the memory allocated to
the SQL apply process will certainly reduce the unsuccessful
handling of low-memory conditions. The benefits gained by the
increase in memory should be weighed against the pageout count.
-
If the APPLIER process is falling behind in the
SQL apply operation, consider increasing the number of parallel
servers.
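One way to do this is through the MAX_SERVERS apply parameter of dbms_logstdby.apply_set; the value below is illustrative, and SQL Apply must be stopped before the change:

```sql
-- Stop apply, raise the number of parallel execution servers
-- available to SQL Apply, then restart apply
ALTER DATABASE STOP LOGICAL STANDBY APPLY;
EXEC DBMS_LOGSTDBY.APPLY_SET('MAX_SERVERS', 16);
ALTER DATABASE START LOGICAL STANDBY APPLY;
```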
-
During heavy transaction periods, the output of the script
sql_apply_progress.sql may indicate that the SQL apply process
is not making any progress, and ORA-16127 will appear in the output of
the sql_apply_progress.sql script. Oracle suggests reducing
the values of the _eager_size parameter and the _max_transaction_count parameter using the procedure
dbms_logstdby.apply_set. The _eager_size parameter should
be around 100, and the _max_transaction_count
parameter should be around 12.
EXEC DBMS_LOGSTDBY.APPLY_SET('_MAX_TRANSACTION_COUNT', 12);
EXEC DBMS_LOGSTDBY.APPLY_SET('_EAGER_SIZE', 100);
These two parameters are undocumented and can change without any prior
notice; however, Oracle's support site provides more information on
them.