11g SQL Performance Analyzer Using PL/SQL

Oracle 11g New Features Tips by Donald Burleson, June 28, 2015


The built-in packages DBMS_SQLTUNE and DBMS_SQLPA contain all the procedures and functions needed to perform an SPA tuning session. This is part of Oracle's top-down tuning approach, whereby the workload and the instance are tuned before individual SQL statements.

The following is a closer look at the steps of the workflow:

Step 1:

To capture the workload for testing, a SQL Tuning Set (STS) must be created. The package DBMS_SQLTUNE is used for this. The STS is the unit which will be packed into a staging table and shipped to the testing system. An STS is a schema object owned by a user. It is possible to use the actual workload from the cursor cache or, as an alternative source, choose a historical workload retrieved from the Automatic Workload Repository by using an AWR snapshot or an AWR baseline.  

A SQL Tuning Set contains:                                                                                 

  • The text of the captured SQL statements

  • The name of the parsing schema and the bind variable values used

  • Performance statistics for the execution of the statements such as average elapsed time and execution count

  • The execution plans the optimizer has created

  • Row source statistics (number of rows processed for each step of execution)

It is possible to filter out unwanted statements from a workload. Incremental capturing can be used to populate SQL statements over a period of time from the cursor cache into an STS.

First, create the STS with the built-in package dbms_sqltune:

Lutz @ orcl11g SQL> EXEC DBMS_SQLTUNE.CREATE_SQLSET(-
  sqlset_name => 'my_workld_sts', -
  description => 'my test workload');

Then capture the workload into the STS. Here is a simple example for capturing SQL statements from the cursor cache into an STS:

Lutz @ orcl11g SQL> EXEC DBMS_SQLTUNE.CAPTURE_CURSOR_CACHE_SQLSET( -
                                        sqlset_name     => 'my_workld_sts', -
                                        time_limit      =>  300); -- seconds

It is possible to specify a repeat interval and add filters for the capture process as well as options for how to handle repeating SQL.
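For example, incremental capturing with a basic filter could look like the following sketch. The one-hour time limit, the five-minute polling interval and the SCOTT schema filter are only illustrative values:

Lutz @ orcl11g SQL> EXEC DBMS_SQLTUNE.CAPTURE_CURSOR_CACHE_SQLSET( -
                        sqlset_name     => 'my_workld_sts', -
                        time_limit      => 3600, -
                        repeat_interval => 300, -
                        basic_filter    => 'parsing_schema_name = ''SCOTT''');

This polls the cursor cache every five minutes for one hour and only keeps statements parsed in the SCOTT schema.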

Step 2:

After the STS has been created and contains the workload data, it must be loaded (packed) into a staging table, which can then be shipped to the testing system using expdp/impdp or a database link.

Create the staging table:

Lutz @ orcl11g SQL> EXEC DBMS_SQLTUNE.CREATE_STGTAB_SQLSET(table_name => 'MY_STAGETABLE'); -- the name of the table is case sensitive!

Then load the STS into the staging table:

Lutz @ orcl11g SQL> EXEC DBMS_SQLTUNE.PACK_STGTAB_SQLSET(-
                           sqlset_name        => 'my_workld_sts', -
                           staging_table_name => 'MY_STAGETABLE');

Now the workload is ready for shipping to the test system.
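One simple way to ship the staging table is a table-mode Data Pump export and import. The user, the directory object, the schema owner and the dump file name below are only placeholders for whatever applies on the two systems:

$ expdp system tables=LUTZ.MY_STAGETABLE directory=DATA_PUMP_DIR dumpfile=my_workld_sts.dmp

$ impdp system directory=DATA_PUMP_DIR dumpfile=my_workld_sts.dmp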

After importing the staging table with the STS into the test system, unload (unpack) the STS from the imported staging table before performing the first test run of the workload. This unloads the STS from the staging table into the data dictionary:

Lutz @ orcl_test SQL> EXEC DBMS_SQLTUNE.UNPACK_STGTAB_SQLSET(-
                           sqlset_name        => 'my_workld_sts', -
                           staging_table_name => 'MY_STAGETABLE');

Step 3:

In this and the following steps, the built-in 11g package dbms_sqlpa is used. Run the imported workload twice on the testing system. The before change run captures the first set of performance statistics for the workload in the test environment. These can then be compared to the performance statistics taken during the second run, called the after change run.

First, create a tuning task on the test system:

Lutz @ orcl_test SQL> var task_name varchar2(30)
                       exec :task_name := dbms_sqlpa.create_analysis_task( -
                            sqlset_name => 'my_workld_sts', task_type => 'sqlpa');

Next, perform the before change run:

Lutz @ orcl_test SQL> exec dbms_sqlpa.execute_analysis_task(:task_name, -
                      execution_type => 'test execute', -
                      execution_name => 'before_changes');

% The parameter execution_type with the value TEST EXECUTE puts all SQL texts into the report. This parameter also accepts the value EXPLAIN PLAN, which creates a report that holds the explain plans for all statements of the workload.

This creates the first set of performance statistics for the test workload on the test system. Look at the performance report of the first test run. Here is the syntax:

Lutz @ orcl_test SQL> SELECT dbms_sqlpa.report_analysis_task(-     
                                task_name=>:task_name,-
                                type=>'text',-
                                section=>'summary')
                      FROM dual;

The function report_analysis_task returns a CLOB (Character Large Object).

Step 4:

Make the changes to the test system to see the impact on the workload.

Step 5:

After changing the system, perform the after change run using exactly the same syntax as for the before change run. The overloaded procedure SET_ANALYSIS_TASK_PARAMETER can be used to adjust the analysis task.

DBMS_SQLPA.SET_ANALYSIS_TASK_PARAMETER(
  task_name         IN VARCHAR2,
  parameter         IN VARCHAR2,
  value             IN VARCHAR2);

DBMS_SQLPA.SET_ANALYSIS_TASK_PARAMETER(
  task_name         IN VARCHAR2,
  parameter         IN VARCHAR2,
  value             IN NUMBER);

This is a list of the parameters which can be adjusted:

APPLY_CAPTURED_COMPILENV
COMPARISON_METRIC
DAYS_TO_EXPIRE
DEFAULT_EXECUTION_TYPE
EXECUTION_DAYS_TO_EXPIRE
EXECUTION_NAME1
EXECUTION_NAME2
LOCAL_TIME_LIMIT
BASIC_FILTER
PLAN_FILTER
RANK_MEASURE1
RANK_MEASURE2
RANK_MEASURE3
RESUME_FILTER
SQL_IMPACT_THRESHOLD
SQL_LIMIT
SQL_PERCENTAGE
TIME_LIMIT
WORKLOAD_IMPACT_THRESHOLD
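As an illustration, the comparison metric of the analysis task could be switched from the default to buffer_gets with a call like this:

Lutz @ orcl_test SQL> EXEC DBMS_SQLPA.SET_ANALYSIS_TASK_PARAMETER( -
                           task_name => :task_name, -
                           parameter => 'COMPARISON_METRIC', -
                           value     => 'buffer_gets');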

Run the workload for the second time:

Lutz @ orcl_test SQL> exec dbms_sqlpa.execute_analysis_task(:task_name, -
                      execution_type => 'test execute', -
                      execution_name => 'after_changes');

Then review the task report summary:

Lutz @ orcl_test SQL> SELECT dbms_sqlpa.report_analysis_task(-     
                                task_name=>:task_name,-
                                type=>'text',-
                                section=>'summary')
                      FROM dual;  

Step 6:

In the next step, analyze the performance for the two runs and compare the results:

Lutz @ orcl_test SQL> exec dbms_sqlpa.execute_analysis_task(:task_name, -
                              execution_type   => 'compare performance', -
                              execution_name   => 'analysis_results', -
                              execution_params => dbms_advisor.arglist(-
                                              'execution_name1', 'before_changes', -
                                              'execution_name2', 'after_changes', -
                                              'comparison_metric', 'buffer_gets'));

Lutz @ orcl_test SQL> SELECT dbms_sqlpa.report_analysis_task(-
                              task_name=>:task_name,-
                              type=>'text',-
                              section=>'summary')
                      FROM dual;

The value COMPARE PERFORMANCE for the parameter execution_type can only be used after two runs with either TEST EXECUTE or EXPLAIN PLAN have been performed.

% By default, the COMPARISON_METRIC parameter value elapsed_time is used for the comparison.

% It is also possible to choose one of the following metrics for the analysis task: buffer_gets, cpu_time, direct_writes, optimizer_cost or disk_reads.

% An arithmetic expression can also be used as the metric, for example cpu_time + buffer_gets * 1.5.

Step 7:

Now it is possible to identify the SQL statements which have encountered performance regression after the changes have been made to the testing system. There are a number of data dictionary views which can be used to monitor the SQL Performance Analyzer:

1.      DBA/USER_ADVISOR_SQLPLANS: list of all (or the current user's) SQL execution plans

2.      DBA/USER_ADVISOR_SQLSTATS: list of all (or the current user's) SQL compilation and execution statistics

3.      DBA/USER_ADVISOR_TASKS: details about the advisor tasks

4.      DBA/USER_ADVISOR_EXECUTIONS: metadata for the executions of the tasks

5.      DBA/USER_ADVISOR_FINDINGS: the analysis findings
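A quick way to look at the outcome of the comparison run is a query against one of these views, sketched here for DBA_ADVISOR_FINDINGS:

Lutz @ orcl_test SQL> SELECT type, impact, message
                        FROM dba_advisor_findings
                       WHERE task_name = :task_name;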

Step 8:

In the last step, the regressed statements are handed over to the SQL Tuning Advisor or the SQL Access Advisor to see if better execution plans can be found. The output of this tuning task could be recommendations such as implementing SQL profiles for the regressed statements.
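A hand-over of the STS to the SQL Tuning Advisor could be sketched as follows; the elapsed time filter of one second (1,000,000 microseconds) is only an illustrative threshold:

Lutz @ orcl_test SQL> var tune_task varchar2(30)
                      exec :tune_task := dbms_sqltune.create_tuning_task( -
                           sqlset_name  => 'my_workld_sts', -
                           basic_filter => 'elapsed_time > 1000000')
                      exec dbms_sqltune.execute_tuning_task(:tune_task)
                      SELECT dbms_sqltune.report_tuning_task(:tune_task) FROM dual;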

The exact impact of the changes on the original workload has now been reviewed, and after testing for improvements and regressions, the tested changes can be applied to the production system as well. The SQL profiles can be shipped back to the production system and imported there.

As a final step, feed these improved plans into the SQL Plan Baselines in the SQL Management Base (SMB) of the production system in its SYSAUX tablespace. These concepts will be covered in a later section of this chapter.

Next, force the optimizer in the production database to use the better execution plans which the SQL Performance Analyzer has found on the testing system.

 

This is an excerpt from the new book Oracle 11g New Features: Expert Guide to the Important New Features by John Garmany, Steve Karam, Lutz Hartmann, V. J. Jain, Brian Carr.

You can buy it direct from the publisher for 30% off.

 

 

Copyright © 1996 -  2020

All rights reserved by Burleson

Oracle ® is the registered trademark of Oracle Corporation.