Oracle extended explain plan statistics - SQLTXPLAIN.SQL

Oracle Tips by Burleson Consulting

January 14, 2013

Here is a great new script from Carlos Sierra, a brilliant developer at Oracle Corporation.  This script is an enhancement to the Center of Excellence (COE) script to produce a super-detailed trace of SQL execution.  This new script is remarkable, and I hope that you enjoy it as much as I do.  The MOSC page is reproduced here with Mr. Sierra's permission:

    Using SQLT to create a test case

    SQLT FAQ

SQLT is now commonly used by Oracle Technical Support for diagnosing SQL performance problems.

Carlos has graciously allowed me to reproduce it below for your convenience.  Carlos' script enhances standard execution plan analysis by providing:

  1. Enhanced Explain Plan (including execution order, indexed columns, rows count and blocks for tables);
  2. Schema Object Attributes for all Tables and Indexes accessed by the SQL statement being diagnosed (including: object dependencies, tables, indexes, columns, partitions, sub-partitions, synonyms, policies, triggers and constraints);
  3. CBO Statistics (for levels: table, index, partition, sub-partition and column);
  4. Histograms (includes table, partition and sub-partition levels);
  5. Space utilization and administration (for tables, indexes, partitions and sub-partitions);
  6. Objects, Segments, Extents, Tablespaces and Datafiles (including current performance of datafiles);
  7. RDBMS Initialization Parameters INIT.ORA (required, recommended and default for an APPS 11i database, and all other parameters SET on the INIT.ORA file);
  8. Source Code on which the SQL Statement and accessed Tables depend (Triggers description and body, Views columns and text, Packages specs and bodies, Procedures and Functions).

 

*************************************************************
This article is being delivered in Draft form and may contain
errors.  Please use the MOSC "Feedback" button to advise
Oracle of any issues related to this article.
*************************************************************

SQLTXPLAIN.SQL - Enhanced Explain Plan and related diagnostic info for one SQL statement (8.1.5-9.2.0)

Carlos Sierra

Apps AOL-BDE Performance

Oracle Support

October 2002 (updated on 12-NOV-02)


Purpose

Given one SQL Statement as input, generate a comprehensive report that includes:

  1. Enhanced Explain Plan,
  2. Details about all schema objects on which the SQL statement depends,
  3. Rows count for all Tables accessed by the SQL Statement provided as input,
  4. All Indexes and indexed columns for Tables reported,
  5. CBO Stats at all levels for these Tables and Indexes,
  6. Relevant INIT.ORA Parameters

The generated report includes extensive information that can be used to:

  1. Expedite the data collection process requested by Oracle APPS Support when reporting one FORM or Concurrent Program performing poorly (Note:169935.1),
  2. Diagnose and troubleshoot typical SQL Tuning issues (especially valuable for both APPS and non-APPS databases).

The scripts in this note supersede these two: COE_XPLAIN.SQL (Note:156958.1) and COE_IMPORT_STATTAB.SQL (Note:156964.1).


Scope & Application

This document (same as Note:215187.1) can be used by technical analysts engaged in reporting, diagnosing or troubleshooting SQL Tuning issues.

The report produced by the main script on this note includes the following sections, directly related to the SQL statement <sql.txt> provided as input to the SQLTXPLAIN.SQL:

  1. Enhanced Explain Plan (including execution order, indexed columns, rows count and blocks for tables);
  2. Schema Object Attributes for all Tables and Indexes accessed by the SQL statement being diagnosed (including: object dependencies, tables, indexes, columns, partitions, sub-partitions, synonyms, policies, triggers and constraints);
  3. CBO Statistics (for levels: table, index, partition, sub-partition and column);
  4. Histograms (includes table, partition and sub-partition levels);
  5. Space utilization and administration (for tables, indexes, partitions and sub-partitions);
  6. Objects, Segments, Extents, Tablespaces and Datafiles (including current performance of datafiles);
  7. RDBMS Initialization Parameters INIT.ORA (required, recommended and default for an APPS 11i database, and all other parameters SET on the INIT.ORA file);
  8. Source Code on which the SQL Statement and accessed Tables depend (Triggers description and body, Views columns and text, Packages specs and bodies, Procedures and Functions).

This note IS NOT restricted to Oracle APPS databases.  It can be used for SQL Tuning exercises on any RDBMS with Release 8.1.5 or higher.  If you need to generate Explain Plans for Oracle 8.0 or 7.3, use Note:156959.1 or Note:156960.1 respectively.


SQLTXPLAIN.SQL compared to COE_XPLAIN.SQL

SQLTXPLAIN.SQL is a single SQL*Plus script that, using a small staging repository and a PL/SQL package, creates a comprehensive report gathering relevant information on ONE SQL statement (sql.txt).  COE_XPLAIN.SQL provides the same functionality, with some limitations and restrictions.

SQLTXPLAIN.SQL differs from COE_XPLAIN.SQL in the following ways:

  1. SQLTXPLAIN.SQL collects more data about the objects on which the SQL Statement <sql.txt> depends.  It uses V$OBJECT_DEPENDENCY to find these dependencies.

  2. SQLTXPLAIN.SQL can be used by multiple users concurrently.  It keeps all staging data organized by unique STATEMENT_ID, therefore it can handle concurrency and historical data.
  3. SQLTXPLAIN.SQL creates a better organized and documented report output.  Report sections that are not needed on a specific SQL Statement <sql.txt> are just skipped over in the new report without any headers or references.
  4. SQLTXPLAIN.SQL allows keeping multiple versions of CBO Stats in the same table SQLT$STATTAB.  Therefore, similar sets of CBO Stats can be restored into the Data Dictionary multiple times during a SQL Tuning exercise (without losing the original Stats).
  5. SQLTXPLAIN.SQL is subject to future improvements and additions, while COE_XPLAIN.SQL is not.
  6. SQLTXPLAIN.SQL performs better than COE_XPLAIN.SQL for the same SQL Statement <sql.txt>.
  7. SQLTXPLAIN.SQL reports sub-partition details.
  8. SQLTXPLAIN.SQL reports the actual LOW and HIGH values of all columns on the tables being accessed.  It also reports Histograms in a more comprehensive format.
  9. SQLTXPLAIN.SQL omits some data shown by COE_XPLAIN.SQL that was not actually used during SQL Tuning exercises, making the new report easier to understand.
  10. COE_XPLAIN.SQL evolved during 2 years, while SQLTXPLAIN.SQL was designed from scratch.

How SQLTXPLAIN.SQL Works

SQLTXPLAIN.SQL has only one user parameter, which is the filename referenced in this document as <sql.txt>.  This file sql.txt contains ONE SQL Statement for which the user wants to create a comprehensive report.

SQLTXPLAIN.SQL uses the SQL*Plus EXPLAIN command to generate the Explain Plan into one of the SQLT$ staging tables.  It then locates this particular SQL command in the SQL Area and gets its object dependencies.  From the PLAN_TABLE and V$OBJECT_DEPENDENCY, it finds all accessed Tables.  It then gets all related Indexes and continues cascading into the 'ALL_' and 'DBA_' views to retrieve all the related data.

SQLTXPLAIN.SQL makes several calls to Procedures on dedicated Package SQLT$XPLAIN.  All the data extraction and subsequent storage into staging tables is performed by the Procedures called from the main script SQLTXPLAIN.SQL.

All staging SQLT$ Tables include a column STATEMENT_ID, which contains the host name, the instance name and one sequence number.  Column STATEMENT_ID guarantees that each execution of the SQLTXPLAIN.SQL script generates a unique identifier.  Therefore, data on these staging Tables can be easily migrated across similar instances.
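The uniqueness scheme described above (host name + instance name + sequence number) can be sketched as follows.  This is an illustration only, not SQLT code: the real IDs are built inside the SQLT$XPLAIN package, and the exact format shown here is an assumption.

```python
import itertools

# Hypothetical sketch of a STATEMENT_ID built from host name, instance
# name, and a monotonically increasing sequence, as the note describes.
_seq = itertools.count(1)

def make_statement_id(host, instance):
    """Return an id of the assumed form HOST_INSTANCE_NNNNN."""
    return f"{host.upper()}_{instance.upper()}_{next(_seq):05d}"

a = make_statement_id("dbsrv1", "PROD")
b = make_statement_id("dbsrv1", "PROD")
print(a)          # e.g. DBSRV1_PROD_00001
print(a != b)     # every call yields a distinct id
```

Because the host and instance names are embedded in the key, rows exported from one instance cannot collide with rows already present in another, which is what makes the staging data portable.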

Once all staging data has been extracted, the SQLTXPLAIN.SQL script generates the spool file that becomes the final report.  This report is divided into sections, each with a 'tag' representing the 'ALL_', 'DBA_' or 'V$' view from which the displayed information was extracted.  The meaning of each column is usually self-explanatory.  The user of the report can always refer to the DATABASE REFERENCE documentation for a detailed explanation of the extracted columns.  Some derived values are better understood by finding how they are calculated directly from the SQLTXPLAIN.SQL script itself and two others: SQLTCTAB.SQL and SQLTCPKGB.PLS (see the list of Supplied Scripts below).

The basic information on the generated report can be understood by anybody: Tables, Indexes, and so on.  More detailed information displayed in the report is only valuable to analysts familiar with the SQL Tuning process.

If you find the output of SQLTXPLAIN.SQL too detailed and just need the basics, use SQLTX.SQL instead.  This included script provides one Explain Plan, a list of accessed Tables (including NUM_ROWS), and a list of related Indexes (including their columns), all in a very quick execution.


Configuring SQLTXPLAIN.SQL

SQLTXPLAIN.SQL includes 6 seeded parameters.  Leave the default seeded values unless you need to modify them for particular requirements, or as per Support instructions:

  1. p_days_to_keep_repo (30):  Every execution of SQLTXPLAIN.SQL purges data from the staging repository that is older than value specified.

  2. p_compute_count_star (YES):  Performs a COUNT(*) on all accessed Tables.  It is useful to determine if CBO Stats should be re-gathered.

  3. p_include_col_stats (YES):  CBO Stats for all indexed columns are always reported.  When this parameter is YES, CBO Stats for all table columns (besides indexed columns) are also reported.

  4. p_include_extents (NO):  When set to YES, Segments and Extents are reported.  This info can be valuable when used in conjunction with a raw SQL Trace generated by EVENT 10046 Level 8 (see Note:171647.1).  The seeded value is NO because of known performance issues on 'ALL_' and 'DBA_' views.

  5. p_include_histograms (YES):  When set to YES, Table, Partition and Sub-partition Histograms are included for all indexed columns or table columns (the latter if p_include_col_stats is also set to YES).

  6. p_include_source (YES):  Controls if source code, on which the SQL Statement depends, is reported or not (Triggers, Views, Packages, Procedures and Functions).

Before the staging SQLT$ repository is created (during the installation), you may want to change the Tablespace to be used.  See Installing SQLTXPLAIN.SQL and Staging Repository below.


Installing SQLTXPLAIN.SQL and Staging Repository

  1. Download latest file SQLT.zip and uncompress into dedicated directory from where SQL*Plus can be executed.
  2. If a Tablespace other than the USER's default needs to be used, modify script SQLTCTAB.SQL accordingly (one line).  The specified Tablespace must exist prior to installation.  Estimated space utilization is in the range of 10-50MB.
  3. To install or re-install, connect into SQL*Plus with USER that has access to all Tables used by SQL Statement on <sql.txt>, plus access to 'ALL_', 'DBA_' and 'V$' views.  If using on an APPS database, connect with USER APPS.
# sqlplus apps/<apps_pwd>
SQL> START SQLTCREATE.SQL;

If you are not sure whether the Staging Repository has been previously installed, simply re-install following the same instructions above.  Once installed, you are ready to use the SQLTXPLAIN.SQL and SQLTSTATTAB.SQL scripts concurrently.

If SQLTCPKGB.PLS errors with PLS-00201 when executed by SQLTCREATE.SQL, then execute SQLTGRANT.SQL, connecting into SQL*Plus with a user that has the privilege of granting SELECT on data dictionary views and ANALYZE ANY to another user (SYSTEM, SYS or INTERNAL).  The inline parameter is the USER that will execute SQLTXPLAIN.SQL and that attempted to run SQLTCREATE.SQL.

# sqlplus system/<system_pwd>
SQL> START SQLTGRANT.SQL <user>;

To uninstall the whole tool and repository, execute SQLTDROP.SQL and remove the SQLT* scripts from the dedicated OS directory that contains them.  If SQLTGRANT.SQL was ever used, also execute SQLTREVOKE.SQL after SQLTDROP.SQL and before removing all SQLT* scripts.


Using SQLTXPLAIN.SQL

For brief instructions and feedback, please refer to the INSTRUCTIONS.TXT file included in the same SQLT.zip download.

Start by uncompressing the latest SQLT.zip file into a dedicated directory from which you can connect into SQL*Plus.

The file sql.txt, whose filename is provided as an inline parameter when SQLTXPLAIN.SQL is executed, has some restrictions and characteristics explained below.  If you were not provided with a sql.txt file, create it and place it in the same dedicated directory as the SQLT scripts; SQLTXPLAIN.SQL will try to open <sql.txt> from that directory.

Restrictions and characteristics for file sql.txt:

  1. It is a plain text file (flat file) with one and only one valid SQL Statement to be explained (SELECT, INSERT, UPDATE or DELETE),
  2. It cannot have empty lines (blank lines),
  3. At the very end of the file, after the very last character of the SQL Statement, one and only one 'carriage return' ('enter' or 'line feed') should be provided, with no spaces before or after it (review file sql0.txt provided as an example),
  4. The SQL Statement should NOT have a semicolon ';' at the end,
  5. If you get an error similar to 'Bind variable "b2" not declared', you have empty lines within the SQL Statement, or at the end (review sql0.txt provided as a correct example),
  6. Do NOT replace bind variables with literals.  Since the SQLTXPLAIN.SQL script does not execute the SQL Statement provided on the <sql.txt> file, there is no need to replace the bind variables on it.  Actually, by replacing the bind variables with literals, the resulting Explain Plan can change substantially and may lead to confusion or false conclusions,
  7. The filename <sql.txt> is NOT hard-coded.  Therefore, if multiple SQL Statements are being diagnosed, use filenames sql1.txt, sql2.txt, sql3.txt, etc.; or any other set of names,
  8. File sql.txt is usually created from TKPROF output by extracting a specific expensive SQL Statement, normally with a simple Cut&Paste into a new flat file,
  9. File sql0.txt is provided as an example only (use it to test SQLTXPLAIN.SQL on APPS databases)
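The restrictions above lend themselves to a quick mechanical check.  The helper below is a hypothetical sketch (it is not part of SQLT) that flags the most common formatting mistakes in a sql.txt file:

```python
def check_sql_txt(text):
    """Return a list of problems per the sql.txt restrictions (sketch only)."""
    problems = []
    # Exactly one line feed at the very end, nothing after it.
    if not text.endswith("\n") or text.endswith("\n\n"):
        problems.append("file must end with exactly one line feed")
    body = text.rstrip("\n")
    # No empty (blank) lines anywhere in the statement.
    if any(line.strip() == "" for line in body.split("\n")):
        problems.append("empty lines are not allowed")
    # No terminating semicolon.
    if body.rstrip().endswith(";"):
        problems.append("statement must not end with a semicolon")
    return problems

good = "SELECT ename\nFROM emp\nWHERE empno = :b1\n"
bad = "SELECT ename\n\nFROM emp;\n"
print(check_sql_txt(good))   # []
print(check_sql_txt(bad))    # flags the blank line and the semicolon
```

Note that the good example keeps its bind variable (:b1) in place, matching restriction 6 above: bind variables should not be replaced with literals.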

To execute the SQLTXPLAIN.SQL script, log into SQL*Plus.  If using Oracle APPS, log in with the APPS USER and password.  If using a non-APPS database, connect into SQL*Plus with the same USER that CAN execute the SQL Statement provided within the <sql.txt> file.  Be aware that the USER executing the SQLTXPLAIN.SQL script must have access to the objects specified in the sql.txt file, PLUS the 'ALL_', 'DBA_' and 'V$' views (see also SQLTGRANT.SQL).

# sqlplus apps/<apps_pwd>
SQL> START SQLTXPLAIN.SQL sql.txt;

SQLTXPLAIN.SQL creates three files: SQLTXPLAIN.LOG, SQLT_<statement_id>.TXT and a Raw SQL Trace.  The first two are the log file and the actual report.  Use any text editor to review them (TextPad and WordPad work fine).  Be sure your lines are not wrapping around.  These two files are usually created under the same directory from where SQLTXPLAIN.SQL was executed.  On NT, these files may get created under $ORACLE_HOME/bin instead.

In most cases, the *.TXT report is enough to understand a specific Explain Plan.  In some cases, two additional files may be required:

  1. The Raw SQL Trace generated by SQLTXPLAIN.SQL.  It is always generated under the 'udump' directory on the database server.  The filename and exact location can be found in the report or log files.  Please do not TKPROF this particular raw SQL Trace.
  2. An Export file of the table SQLT$STATTAB, which contains the CBO Stats related to the SQL Statement <sql.txt>.  See Migrating CBO Stats across Similar Instances below for details on its use.  To export this table SQLT$STATTAB, use ORACLE_HOME 8.0.6 (if using Oracle APPS) and execute the command below.  Keep in mind that the exported file is always BINARY and should be treated as such (if transferred as ASCII, this BINARY file gets corrupted).

Exporting Table SQLT$STATTAB

# exp apps/<apps_pwd> file=SQLT tables='SQLT$STATTAB'

If you have problems executing the script SQLTXPLAIN.SQL, read this note in detail and review/test example file provided sql0.txt (for Oracle APPS databases).

If you notice in the output report that CBO Stats are outdated, or just want to refresh them to test if the Explain Plan changes, use provided script SQLTGSTATS.SQL.  This script requires one parameter, which is the STATEMENT_ID identifying your unique execution of the SQLTXPLAIN.SQL.  It then executes either FND_STATS or DBMS_STATS (APPS or non-APPS) and gathers all CBO Stats related to Tables and Indexes shown in the SQLTXPLAIN.SQL output report.  If after gathering Stats the Explain Plan changes, try testing the performance of the new plan.


Migrating CBO Stats across Similar Instances

During a SQL Tuning exercise, it is common to need to reproduce a 'bad' Explain Plan on another, similar instance.  To do this, the set of CBO Stats used by the CBO to generate the 'bad' Explain Plan must be migrated from the 'Source' instance (with the 'bad' Explain Plan) into a 'Destination' instance (with the currently 'good' Explain Plan).

To migrate CBO Stats from the Source instance into the Destination one, simply:

  1. Execute SQLTXPLAIN.SQL for same SQL Statement <sql.txt> in both Source and Destination instances.
  2. Review and compare both *.TXT reports generated.
  3. Export Table SQLT$STATTAB on Source instance (use Oracle Home 8.0.6 if on APPS 11i).
  4. XFR exported BINARY file SQLT.dmp from source to destination server (FTP or email).
  5. Import Table SQLT$STATTAB on Destination instance.
  6. Restore CBO Stats for STATEMENT_ID into Data Dictionary using the SQLTSTATTAB.SQL script on destination instance.
  7. You are ready to execute SQLTXPLAIN.SQL on destination instance using CBO Stats from source instance.

Importing Table SQLT$STATTAB

# imp apps/<apps_pwd> file=SQLT tables='SQLT$STATTAB' ignore=y

Using SQLTSTATTAB.SQL

This script requires only one parameter, which is the STATEMENT_ID for which the corresponding CBO Stats are being restored from Table SQLT$STATTAB into the Data Dictionary.  Since STATEMENT_ID is always unique across instances, Table SQLT$STATTAB can hold CBO Stats for the same <sql.txt> file from different instances (for example, Test and Production).

Execute this SQLTSTATTAB.SQL on the destination instance passing as inline parameter the full or partial name of the STATEMENT_ID from the source instance.  Table SQLT$STATTAB should be exported from the source instance and imported into the destination instance, prior to execution of SQLTSTATTAB.SQL.

To execute the SQLTSTATTAB.SQL script, log into SQL*Plus.  If using Oracle APPS, log in with the APPS USER and password.  If using a non-APPS database, connect into SQL*Plus with the same USER that will execute the SQLTXPLAIN.SQL script in the destination instance.  Be aware that the USER executing the SQLTXPLAIN.SQL and SQLTSTATTAB.SQL scripts must have access to the objects specified in the sql.txt file, PLUS the ANALYZE ANY privilege (see also SQLTGRANT.SQL).

# sqlplus apps/<apps_pwd>
SQL> START SQLTSTATTAB.SQL <statement_id>;

If SQLTSTATTAB.SQL finds CBO Stats for columns that exist in the source instance but not on the destination one, it just skips the CBO Stats for these columns and reports them as 'MISSING COLUMNS'.

Source and Destination instances should be very similar (like Test and Production, or two APPS instances on same release).


Supplied Scripts

Scripts for installation and maintenance

  • SQLTCREATE.SQL:  installs and re-installs the entire SQLT environment (calls SQLTDROP.SQL, SQLTCTAB.SQL, SQLTCPKGS.PLS and SQLTCPKGB.PLS).

  • SQLTDROP.SQL:  uninstalls the entire SQLT environment by dropping SQLT Package, Views, Tables and Sequence.

  • SQLTCTAB.SQL:  creates all SQLT$ schema objects except Package SQLT$XPLAIN (creates Sequence, Tables and Views).  It also seeds required and recommended INIT.ORA parameters for APPS 11i.  If a different Tablespace is required other than default for USER, modify this script (only one line) prior to execution.

  • SQLTCPKGS.PLS:  creates Package Specs for SQLT$XPLAIN.

  • SQLTCPKGB.PLS:  creates Package Body for SQLT$XPLAIN.

  • SQLTGRANT.SQL: grants SELECT ON some data dictionary views, and ANALYZE ANY, to user of SQLTXPLAIN.SQL.

  • SQLTREVOKE.SQL: revokes grants created by SQLTGRANT.SQL to user of SQLTXPLAIN.SQL.

  • SQLTRUNC.SQL:  truncates all staging Tables on the SQLT environment, with the exception of the seeded required and recommended INIT.ORA parameters for APPS 11i.

  • SQLTPURGE.SQL:  purges old data out of the staging Tables on the SQLT environment, leaving only 'X' number of days (where 'X' is an input execution parameter).
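The retention behavior of SQLTPURGE.SQL (and of the p_days_to_keep_repo parameter described earlier) can be pictured with a short sketch.  This is an illustration of the idea only, not SQLT code; the real script purges the SQLT$ staging tables via SQL:

```python
from datetime import datetime, timedelta

def purge_old(rows, days_to_keep, now=None):
    """Keep only rows newer than 'days_to_keep' days (conceptual sketch)."""
    now = now or datetime.now()
    cutoff = now - timedelta(days=days_to_keep)
    return [r for r in rows if r["created"] >= cutoff]

# Two staged executions: one old, one recent (dates are made up).
now = datetime(2013, 1, 14)
rows = [
    {"statement_id": "HOST_INST_00001", "created": datetime(2012, 11, 1)},
    {"statement_id": "HOST_INST_00002", "created": datetime(2013, 1, 10)},
]
kept = purge_old(rows, days_to_keep=30, now=now)
print([r["statement_id"] for r in kept])   # only the recent execution survives
```

With the seeded default of 30 days, each run of SQLTXPLAIN.SQL applies exactly this kind of cutoff to the staging repository before collecting new data.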

Scripts for SQL Tuning

  • SQLTXPLAIN.SQL:  generates a SQLTXPLAIN report and a raw SQL Trace.  It also dumps CBO Stats for one STATEMENT_ID from the Data Dictionary into the Table SQLT$STATTAB.

  • SQLTX.SQL:  the light version of SQLTXPLAIN.SQL.  It executes with the same syntax and generates just an Explain Plan, a list of Tables (including NUM_ROWS), and a list of Indexes (and indexed columns).

  • SQLTGSTATS.SQL: Uses FND_STATS (APPS) or DBMS_STATS (non-APPS) to gather the CBO Stats for all Tables and Indexes related to one execution of the SQLTXPLAIN.SQL.

  • SQLTSTATTAB.SQL:  restores CBO Stats for one STATEMENT_ID from the Table SQLT$STATTAB back into the Data Dictionary.

Other files included inside SQLT.zip

  • INSTRUCTIONS.TXT:  Brief instructions on how to install and execute SQLTXPLAIN.SQL.

  • NOTE.HTML:  This document (same as Note:215187.1)

  • sql0.txt:  Example of one <sql.txt> file that can be used to test SQLTXPLAIN.SQL on APPS 10.7, 11.0 or 11i databases (RDBMS releases 8.1.5 or higher).

    Background and Structure

    • What is SQLTXPLAIN, also known as SQLT?

      SQLT is a tool that can be used to diagnose why a specific SQL statement may be performing poorly. It takes a single SQL statement as input and outputs a set of diagnostics that can be used to improve the statement's performance.

      SQLT provides large amounts of information to a SQL Tuner on a wide range of topics pertinent to that SQL, including execution plans, Cost-Based Optimizer (CBO) statistics, schema object metadata, performance statistics, configuration parameters, and other elements that influence the performance of the one SQL being analyzed.

      [Image: Example SQLT Menu]

      In addition it provides history information for these (some of this history needs AWR to access) and links in to other useful tools such as the SQL Tuning Advisor (subject to license). It also generates a transportable, 'no data' testcase.
    • What is the origin of SQLT?

      SQLTXPLAIN is a tool provided by the Oracle Server Technologies Center of Expertise (ST CoE).  It has been in use in Support for over 10 years following its creation in 1999.  It has been developed and improved over that time to reflect changes in the Oracle Database and to add functionality, and it is constantly evolving and being updated.
    • Is there a light weight way of gathering SQLT-like information?

      Although it does not have the depth or scope of SQLT, there is a script called SQLHC that provides a subset of the SQL used by the SQLT script.  SQLHC ships along with SQLT and can be executed using the following 2 scripts:
             sqlhc.sql    -- Runs from a SQL_ID
             sqlhcxec.sql -- Runs from a Script file
      

      For more details regarding SQLHC, see:
      Document 1366133.1 SQL Tuning Health-Check Script (SQLHC)
    • Is SQLHC a complement or a substitute for SQLT?

      SQLT is far more advanced than SQLHC; while it incorporates similar checks to those done in SQLHC, it offers far wider functionality.  SQLHC is simply a quick health check for a statement; it is a subset of the SQL used by the SQLT script but, unlike SQLT, has no database footprint.  It is a relatively short SQL script that generates an HTML output.  It extracts database statistics using selects or standard APIs (such as DBMS_XPLAN) and combines that with AWR output if the correct licenses are available.  An overview video about SQLHC is available here:

      Document 1455583.1 SQL Tuning Health-Check Script (SQLHC) Video

      For general usage information, see:

      Document 1366133.1 SQL Tuning Health-Check Script (SQLHC)
      Document 1417774.1 FAQ: SQL Health Check (SQLHC) Frequently Asked Questions
    • What would you use first? SQLHC or SQLT?

      When you initially encounter a problem with a SQL statement, we would advise that you use SQLHC.
      If that finds nothing wrong then follow that up with SQLT.
      Of course you can start with SQLT but you may not need all the functionality that it provides and with SQLHC you do not have to worry about database footprint and all that entails.

    • Can SQLT be used safely in production?

      Yes, but this is something you would need to validate.  If you decide to use it, you would probably want to install it on Test initially.  If you can reproduce the issue there, then you do not need to install it on Production.  If it only happens in Production, then once you are happy in Test you can schedule the installation on Production and capture the information from there.
    • What level of expertise is required to use SQLT?

      SQLT assumes that the user knows about SQL Tuning. It is not designed to take you step by step through an issue if you do not know anything about tuning SQL. It will not teach you SQL Tuning. It is similar to reading an AWR report - if you are familiar with Database Performance concepts then AWR will be familiar to you and SQLT is similar.

    Installation, Setup and Usage

    • Where can I download SQLT?

      You can download SQLT from:

      Document 215187.1 SQLT (SQLTXPLAIN) - Tool That Helps To Diagnose SQL Statements Performing Poorly
    • How do I install SQLT?

      You can find the instructions in file sqlt_instructions.html included in the zip file downloaded above.
    • If SQLT is already installed, how do I determine the version?

      The following SQL, run as SYS or the SQLTXPLAIN user, will provide version information:
      COL sqlt_version FOR A40;
      
      SELECT
      'SQLT version number: '||sqltxplain.sqlt$a.get_param('tool_version')||CHR(10)||
      'SQLT version date  : '||sqltxplain.sqlt$a.get_param('tool_date')||CHR(10)||
      'Installation date  : '||sqltxplain.sqlt$a.get_param('install_date') sqlt_version
      FROM DUAL
      /
      
      SQLT_VERSION
      ----------------------------------------
      SQLT version number: 11.4.4.8
      SQLT version date  : 2015-09-27
      Installation date  : 2015-10-24/09:56:12
      
    • What does SQLT do during installation?

      SQLT creates a repository in the database under the SQLTXPLAIN user to record information concerning the queries that are analyzed by the tool and also creates a number of directories on disk. Full details can be found in the documentation contained in the zip file downloaded above.
    • Can SQLT be installed into an existing schema instead of creating a new schema?

      No.  SQLT requires its own schema to be installed.
    • Do you need the SYS password to run SQLT?

      In order to install SQLT, you need to connect as INTERNAL.  To run SQLT once it is installed, no.
    • Can I reset the password for sqltxplain ?

      Yes. Although the SQLT schema (SQLTXPLAIN) is fixed, the password can be changed just like any normal Oracle password.
    • If the Applications password is unavailable, is SYS/SYSTEM a viable alternative?

      You can run with these users, but this is NOT the preferred way of running SQLT; you should try to get the applications password if possible.  The primary issue is that without the application password some pieces of information will be missing, such as the 'explain plan for' output; although in most cases this is not required, in some cases this information is important.
    • Which schema should I register as the application schema?  For example, we have 3 schemas in the DB with cross permissions: Schema A is the main schema but uses objects from B and C.  How does that work with SQLT?

      When you install SQLT, it asks who the main application user is.  You can pick any schema for this; it does not matter.  When you want to examine a SQL with SQLT, you need to register the schema that actually runs the SQL.  So, once you have installed SQLT, connect as SYS and grant the SQLT user role to the schema you run the SQL as, then use those schemas to run SQLT against the SQL that those schemas run.  If you don't grant the role to all the schemas and have only referenced one, SQLT will still be able to pick up references to tables and objects in the SQL that are from other schemas.  All the registration does is set up the runtime environment to run as that particular user.
    • When a database is upgraded from 10.2 to 11.2, what is the SQLT upgrade path?

      If the database is upgraded, SQLT should work without any actions being performed.  To be completely safe, re-execute sqcreate.sql.
    • If I install SQLT on a production DB and it is later cloned, will SQLT work "out-of-the-box" on the clone?

      In the past this would not work because there was a dependency on the bdump/udump directories, but today it checks and refreshes the pointers, so it should work.
    • Is any special setup needed to install SQLT on a server with multiple database instances?

      SQLT install is primarily about installing SQLT into a particular database. If you have multiple databases you have to install SQLT into each one where you have SQL statements you want to analyze. You can, however, install SQLT into multiple databases from the same install directory.
    • How frequently should SQLT be run?

      SQLT is not designed to check all SQL statements; rather, when you have concerns about the performance of a single SQL statement, run it for that SQL.

      SQLT takes a snapshot of the SQL and the environment around it, so use SQLT when you want to analyze the environment at a particular time.  If a SQL has intermittent performance issues, then run SQLT to collect information from both the good and the bad times.  If it works on one system and not another, then run SQLT on both.  Then compare the information and use the differences to identify where the problem lies.

      When used with AWR (with appropriate licenses), SQLT can also look at historical data.
    • Can install be scripted/automated?

      The SQLT install process itself is scripted in sqlt/install/sqcreate.sql. If you have dozens of installations to perform on multiple machines with the same answers to the questions, you may want to automate the process. The instructions file, after describing the installation steps, documents two options for an automatic install with no user input, which you can follow to automate multiple installs.
    • Do we need to worry about maintenance for the SQLTXPLAIN tablespace?

      The tool has a repository which can grow over time. There is a purge script in the utl (utilities) directory called "sqlthistpurge.sql" which purges a range of statements from the SQLT repository based on SQL_ID. But in most cases the repository will not get large because it stores only metadata, not application data.

    SQLT Runtime/Installation Errors

    • I am receiving unexpected SQLT Installation errors, what should I do?

      If you receive unexpected errors during installation, then as a first step re-download the "sqlt.zip" file using a different browser and retry the install; this has sometimes resolved issues.
      If that does not work, the most common issues are something to do with the setup of the database, unrelated to SQLT itself. For example: if you have a password policy that enforces a minimum password length of 8 characters, do not give SQLT a password shorter than 8 characters, because this will fail and can cause all sorts of errors. Ensure that you examine all the install logs to diagnose issues.

      There are some common questions and resolutions in the section below.

      Failing that, you can post questions in our dedicated threads in the Database Tuning Forum.
    • I am receiving unexpected SQLT Runtime errors, what should I do?

      The vast majority of errors received when executing SQLT are nothing to do with SQLT itself; rather, they are problems with the environment that SQLT has detected. When you receive an error, be sure to check the logs that are produced (typically called sqltMODE.log) and read the error message itself, since this will often give vital clues as to potential solutions. For example, if you are running a SQLT XECUTE on an insert statement and you receive the following:
          SQL> START run/sqltxecute.sql input/sample/script1.sql
          SQL> INSERT INTO my_table SELECT * FROM another_table;
          INSERT INTO my_table SELECT * FROM another_table;
          *
          ERROR at line 1:
          ORA-00001: unique constraint (MY_PRIMARY_KEY) violated 
      

      ORA-00001 indicates that a unique constraint has been violated. If the insert in question has been executed more than once, it is possible that the data has already been inserted and so violates the primary key conditions. This error has nothing to do with SQLT; you would get the same error running the SQL standalone in a completely separate session. The solution in this case is to insert unique data. If you encounter errors you cannot resolve, please log an SR with support, add a comment to this document or contact the author.
    • I receive "ORA-01924: role 'SQLT_USER_ROLE' not granted or does not exist"

      When a user runs one of the SQLT methods (XPLAIN, XECUTE, etc.) for the first time, they may receive:
      SET ROLE sqlt_user_role
      *
      ERROR at line 1:
      ORA-01924: role 'SQLT_USER_ROLE' not granted or does not exist
      
      Typically, this error occurs because the 'SQLT_USER_ROLE' role has not been granted to the user you are trying to run SQLT as. Certain methods (XTRACT, XECUTE, etc.) require the application user executing SQLT to have been granted the 'SQLT_USER_ROLE' role. You can execute the grant directly, or you can run the script:
      sqlt/install/sqguser.sql

      connected as SYS.

      During SQLT installation, sqguser.sql is executed for the "application user" you specified. If you run SQLT as another user for whom the script has not been executed, this error will occur.
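      As a minimal sketch, the grant can also be issued directly; the schema name APP_USER below is a placeholder for your actual application user:

      ```sql
      -- Hypothetical example: connect as SYS and grant the SQLT role
      -- to the schema that will run the SQL being analyzed.
      -- APP_USER is a placeholder for your application schema.
      GRANT sqlt_user_role TO app_user;
      ```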
    • I receive "ORA-00942: table or view does not exist" during SQLT install

      When ORA-942 errors are encountered, such as:
      ORA-00942: table or view does not exist
      These indicate that a table or view that SQLT expects to exist at this point does not. There are a number of reasons this can occur, so to diagnose and resolve the cause of the problem, review the NN_*.log files created during installation and fix the errors reported.

      Since ORA-942 indicates that a "table or view does not exist", look in the log for a reason why the object in question may not have been created, such as insufficient space or quota (or similar) to create all the required objects in the original tablespace.
    • I see PLS-00201 and ORA-00904 errors in the Log

      The SQLT log contains these two error messages as an example:
      NOTE:
      If you get one of these errors it means SQLTXPLAIN is not installed:
      PLS-00201: identifier 'SQLTXPLAIN.SQLT$A' must be declared
      ORA-00904: "SQLTXPLAIN"."SQLT$A"."GET_STATEMENT_ID_C": invalid identifier
      Review NN_*.log files created during installation and fix errors reported.

      The example simply points you towards a potential resolution IF you actually see these errors elsewhere in the log. If you do not receive the actual errors then you can ignore this advisory message.

      If you do see a "real" PLS-201 in the log, then you need to review the NN_*.log files created during installation.

      PLS-201 means that: "You tried to reference either:
      an undeclared variable, exception, procedure, or other item,
      or an item to which no privilege was granted
      or an item to which privilege was granted only through a role."

      This usually means that there are missing privileges on the object in question for a particular user.
      You should be able to determine which user from the log and fix the errors reported.
    • During a 10g install, I receive "PLS-00920: parameter plsql_native_library_dir is not set"

      During a 10g install, if you receive the following:
      Errors for PROCEDURE SYS.SQLT$_TRCA$_DIR_SET:
      
      LINE/COL ERROR
      -------- -----------------------------------------------------------------
      318/3    PLS-00920: parameter plsql_native_library_dir is not set
      Then this means that the "plsql_native_library_dir" parameter is unset on the instance. This parameter is needed for the SQLT code to work.

      To resolve the issue, define a directory path for the PLSQL_NATIVE_LIBRARY_DIR parameter with ALTER SYSTEM, and then restart the installation. You can find details of the parameter in the Oracle Documentation: PLSQL_NATIVE_LIBRARY_DIR. The directory path that you specify should be readable and writable by the Oracle software owner account.
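      A minimal sketch of the fix follows; the directory path shown is a placeholder and must be adjusted to a directory readable and writable by the Oracle software owner:

      ```sql
      -- Hypothetical path: substitute a directory appropriate to your server.
      ALTER SYSTEM SET plsql_native_library_dir = '/u01/app/oracle/plsql_native'
        SCOPE = SPFILE;
      -- This is an instance-level parameter; restart the instance for the
      -- change to take effect, then re-run the SQLT installation.
      ```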

      This error should not appear on 11g or higher.
    • During SQLT installation, I receive ORA-3113/3114 errors, what should I do?

      ORA-03113 and ORA-03114 errors are connection type errors that indicate that the client is not able to communicate with the server:
      ORA-03113: end-of-file on communication channel
      ORA-03114: not connected to ORACLE
      If you receive either of these errors during SQLT installation or at runtime, this indicates that your process has been disconnected from the server for one of a huge number of possible reasons. To determine the cause, follow the normal steps for diagnosing these errors: examine the alert log for errors and traces around the time of the error and resolve them. There is comprehensive coverage of these errors on MOS; a good starting point is:
      Document 1506805.1 Master Note: Troubleshooting ORA-03113

    Compatible Versions and Licensing

    • Is the SQLT tool the same for all versions?

      No.
      SQLT has been developed over time and is currently on its 3rd revision, which was designed for Oracle 10g Release 2 and above.
      For 9.2 and 10.1 instances, a separate version is available from the Central SQLT article.
    • Can SQLT be used on Oracle 9.2 and 10gR1?

      Yes. For 9.2 and 10.1 instances, a separate version is available from the Central SQLT article.
    • What is the licensing requirement for SQLT?

      SQLT requires no license and is FREE.

      SQLT can use data from the Oracle Diagnostic and/or the Oracle Tuning Packs if your site has a license for them. These two packs provide enhanced functionality to the SQLT tool. During SQLT installation you can specify if one of these two packs is licensed by your site: specify "T" for Oracle Tuning Pack, "D" for Oracle Diagnostics Pack or "N" for none (note that the Tuning Pack license includes the Diagnostics Pack).

      If "T" or "D" is selected, SQLT may include licensed content within the diagnostics files it produces. The Default is "T".
      If "N" is selected, SQLT installs with limited functionality but still provides some basic information that can be used for initial SQL performance diagnostics. With no packs SQLT will not use anything from AWR nor the tuning pack; so no ADDM, AWR, ASH and no SQL Tuning Advisor output will be included.

      You should check with your site license manager if you have doubts regarding the licensing arrangements with regard to Oracle packs.
      SQLT does NOT check for a valid license.
    • How do I check in the DB whether or not the Tuning and Diagnostic packs are installed/available based on licensing?

      Most packs are installed by default because the database itself uses the AWR repository; the license simply allows you to use it.
      There is no way to check from within the database whether you hold such a license.

      The Diagnostic Pack is a Subset of the Tuning pack so if you have Tuning you have the diagnostic pack.
      You can refer to Licensing Guide for details about what is included in Tuning Pack and Diagnostic Pack. See:

      Oracle® Database Licensing Information
      11g Release 2 (11.2)
      Part Number E10594-18
      Chapter 2 Options and Packs
      Oracle Tuning Pack License
      http://docs.oracle.com/cd/E11882_01/license.112/e10594/options.htm#DBLIC170
    • What is missing from the reports if the Tuning Pack is not installed?

      SQLT uses packs to give it access to AWR (Automatic Workload Repository) based information (AWR information is accessible via the Diagnostic pack; the Tuning pack is a superset of the Diagnostic pack so either will provide access to this information). The AWR infrastructure is installed by default (because the database uses AWR information for internal purposes), the real question is whether you are licensed to use it or not.

      SQLT uses AWR to provide historical information regarding the execution of SQL. So, for example, if a plan had changed over time, it would use AWR information to report on that.

      If AWR information is not made available by the presence of a license for one of the aforementioned packs, then the AWR related information will not be reported.

      In addition the Tuning Pack provides access to the SQL Tuning Advisor.

      Without pack licenses, SQLT still provides some basic information that can be used for initial SQL performance diagnostics.
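      For illustration, the kind of historical plan information SQLT draws from AWR can also be queried directly. This is a sketch only; it assumes a Diagnostics Pack license, and the SQL_ID shown is a placeholder:

      ```sql
      -- Hypothetical SQL_ID; querying DBA_HIST_* views requires a
      -- Diagnostics Pack license.
      SELECT s.snap_id,
             st.plan_hash_value,
             st.executions_delta,
             ROUND(st.elapsed_time_delta / 1e6, 2) elapsed_secs
      FROM   dba_hist_sqlstat st
             JOIN dba_hist_snapshot s
               ON  s.snap_id         = st.snap_id
               AND s.dbid            = st.dbid
               AND s.instance_number = st.instance_number
      WHERE  st.sql_id = 'abcd1234efgh5'
      ORDER  BY s.snap_id;
      ```

      A change in PLAN_HASH_VALUE across snapshots is the kind of plan-change evidence SQLT reports on.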
    • When using SQLT, is it more beneficial to have AWR than not having AWR?

      Yes, it is more beneficial to use AWR alongside SQLT.
      AWR information is a component of the Diagnostic Pack and is fundamental for performance diagnostics and interpretation. Generally, if you do not have AWR then you would need to use Statspack, but this is of limited use on later versions.

      If you have Enterprise Edition it is very useful to have AWR to diagnose performance issues.
      Specifically for SQLT, having AWR information provides more usable diagnostic output than without, so we would recommend it but it is not a pre-requisite.

    Functionality

    • What platforms does SQLT work on?

      SQLT is platform independent. SQLT uses SQL Scripts to generate output files. The SQLT report is in html format so any browser capable of displaying html will be able to display it.
    • Does SQLT capture All the SQL from a Database or capture specific schemas only?

      SQLT works on a single SQL statement at a time and captures only ONE specific SQL_ID or HASH_VALUE.
    • Is SQLT able to analyze many SQLs simultaneously (e.g. from a SQL Tuning Set)?

      No. SQLT works on a single SQL statement at a time and captures only ONE specific SQL_ID or HASH_VALUE. If you want to perform comparative baseline analysis you should look at SQL Plan Management in conjunction with the SQL Performance Analyzer and Real Application testing (dependent on your requirements).
    • If we have multiple SQLs in a session, how do we use SQLT one by one?

      SQLT works on a single SQL statement at a time. You would need to supply the SQL_ID or HASH_VALUE for the SQL you are interested in from the session. Once you have analyzed one SQL_ID you can move it on to the next.
    • If we have multiple reports for the same query where the plan has changed, can these be compared?

      In this case use the COMPARE method. Use this when you have two similar sources where the same SQL statement performs fine in one but not in the other. This method helps to pin-point the differences between the two sources in terms of plans, metadata, CBO statistics, initialization parameters and bug fix-control. See the COMPARE section in the sqlt/sqlt_instructions.html file included in the zip file.
    • Can we use this tool for PL/SQL code?

      You can use SQLT on SQL within PL/SQL if the focus is on a specific SQL statement; if you do not know exactly where the time is spent, then 10046 tracing or PL/SQL profiling may work better. Instructions on the use of the profiler can be found in the following article:

      Document 243755.1 Implementing and Using the PL/SQL Profiler
    • Where can I find the SQL_ID or HASH_VALUE?

      The SQL_ID can be found on an AWR report, and the HASH_VALUE on any SQL Trace (above the SQL text and identified by the "hv=" token).
      You can also find these columns in the V$SQL view using a select similar to:

      select SQL_ID, HASH_VALUE, substr(SQL_TEXT,1,30) Text
      from V$SQL
      where SQL_TEXT like '%&Some_Identifiable_String%';
    • Can SQLT take the PLAN_HASH_VALUE as input?

      No, it uses the SQL_ID or HASH_VALUE (not the PLAN_HASH_VALUE).
    • Can it be used on Queries containing Multiple Tables, Views etc?

      Yes. SQLT supports all valid SQL statements.
    • Can it be used where SQL references objects from multiple schemas?

      Yes.
      Note that when creating a SQLT testcase, SQLT will condense the objects from different schemas into a single remapped schema.
    • Does SQLT work across multiple databases via a dblink?

      Yes, SQLT works in a distributed environment.
    • Does SQLT handle literal values and bind variables?

      Yes. Because bind peeking can affect plans, SQLT will report bind variable history and the different plans that different binds have generated.
    • How does SQLT handle bind variables?

      Bind variable values are stored in metadata and are associated with any different plans that they generate.
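      As an illustration of where captured bind values surface in the data dictionary (a sketch; the SQL_ID is a placeholder):

      ```sql
      -- Hypothetical SQL_ID; V$SQL_BIND_CAPTURE holds periodically captured
      -- bind values for cursors in the shared pool.
      SELECT child_number, name, position,
             datatype_string, value_string, last_captured
      FROM   v$sql_bind_capture
      WHERE  sql_id = 'abcd1234efgh5'
      ORDER  BY child_number, position;
      ```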
    • Does SQLT provide formatted 10053 output?

      SQLT collects 10053 trace but it does not re-format it.

      10053 is an internal trace and it is not documented. It was created by development for development to analyse issues with the cost based optimizer. It is included in SQLT output for completeness and so that we have the trace in the event that there is a need to engage Oracle support or development.
    • Can I use previously generated trace file(s) such as 10046 or 10053 trace file(s) as an input to SQLT?

      No. SQLT works from a SQL_ID and generates its information from the database at the time SQLT is executed. It does not re-format 'old' 10046 or 10053 traces.
    • How do you execute the different SQLT Methods?

      The different methods are initiated with individual scripts, one for each method. For example:

      • XTRACT - sqlt/run/sqltxtract.sql
      • XECUTE - sqlt/run/sqltxecute.sql
      • XTRXEC - sqlt/run/sqltxtrxec.sql
      • XPLAIN - sqlt/run/sqltxplain.sql

      A detailed description of the difference is in the "sqlt_instructions.html" document included in sqlt.zip
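      For example, an XTRACT run might look like the following. This is a sketch only: the SQL_ID is a placeholder, and you should check sqlt_instructions.html for the exact parameters and prompts your version expects:

      ```sql
      -- Hypothetical session: run XTRACT against one SQL_ID,
      -- connected as the application user granted SQLT_USER_ROLE.
      START sqlt/run/sqltxtract.sql abcd1234efgh5
      -- The script then prompts for the SQLTXPLAIN schema password
      -- and produces a zip file of diagnostics in the current directory.
      ```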
    • What is the difference between XTRACT, XECUTE, XTRXREC and XPLAIN?

      These are the different methods by which you may execute the SQLT tool.
      A detailed description of the difference is in the "sqlt_instructions.html" document included in sqlt.zip
    • What should you run first XTRACT or XECUTE?

      Since the XECUTE method provides runtime information, it is going to have more information than XTRACT; however, it depends on the circumstances. If the SQL does not complete in a timely fashion then XTRACT is more suitable. If the query does not take too long to run then XECUTE is going to give you more information. Additionally, XECUTE collects various segment and session information that XTRACT cannot deliver, so we would suggest you use XECUTE if possible.
    • If the SQL in question does not complete, is SQLT still useful?

      If the problem SQL statement does not finish, then SQLT can still give you useful information. What it can provide depends on whether the cause of the delay is generating the access path or executing the query.

      If the problem is at execution time then SQLT can still produce information about the access path that will be used. Additionally the SQLT calls to SQL monitor can provide you with partial execution information which can prove useful even though it is not complete. 11g XTRACT will give you the statistics for the execution even though it is not complete.

      If the query does not complete parsing, then the optimizer has not finished choosing an access path for the query so SQLT will only get limited information. Since SQLT is primarily a tool for displaying the results of generating an access path alongside the supporting information used to do that, you would probably be better off using something else to trace why the generation of the plan took too long (this will likely require interaction with support who will likely request traces such as 10046 and errorstack to start diagnosis of the issue).
    • Does sqlt actually execute the SQL statement or will collect info about executed statements only?

      This depends on the mode used. For example, XTRACT will pick up previously executed statements, whereas XECUTE will execute the statement.
    • Is a hard parse of the analyzed SQL mandatory for the XECUTE method? Is this the purpose of the "/* ^^unique_id */" comment?

      Yes. The hard parse is forced so as to guarantee the generation of the 10053 report.
    • Can SQLT be used in a distributed environment?

      Yes, we can use SQLT in a distributed environment. To do this, store SQLT on the remote node or nodes and the main node. When run on the main node, SQLT can then connect to the remote nodes and produce SQLT extracts from the remote nodes and package everything together.
    • Is the functionality provided in SQLTXPLAIN also available in RAT (Real Application Testing)?

      No.
      SQLT functionality is not available in RAT, they are two separate entities.

      SQLTXPLAIN is a tool that enables users to analyze and tune the performance of a single SQL statement.
      The Real Application Testing option is a licensed product that allows potential issues with system changes to be identified before they are deployed in a production environment.
    • How does SQLT get historical information regarding SQL statements?

      If you are licensed to use AWR then SQLT can access AWR to access historical information.
    • Does SQLT contain any graphical output?

      As its primary output, SQLT generates an HTML report (along with a number of supporting files in various formats). There are no 'graphics' or 'graphs' as such. SQLT in general presents text output related to the SQL in question grouped and formatted for ease of use.
    • Can we use SQLT to identify different output for SQL on different clients?

      Yes. SQLT can help diagnose this kind of issue. If a different plan is generated, then the information related to that plan is recorded and can be compared to determine what is making the difference.
    • Can you set a time for when to run SQLT? For example, when the query only runs slowly in the middle of the night?

      There isn't a facility directly within SQLT that allows this automation, although you could automate it with external tools. However, since SQLT is able to look at the execution history, as long as the execution details are still in the history there is no need to execute SQLT at the time of the issue. In this example, you should be able to come in the morning after the issue and use the SQL_ID to get the run details.

    Report Analysis

    • How do I analyze the report?

      If a SQL statement takes a long time, then collect information via SQLT. You then need to analyze the information, which requires SQL tuning expertise. If you do not have that expertise, then consider getting someone else to help you, such as Advanced Customer Services or other consulting options, or use the SQL Tuning Advisor (assuming that you have the required licenses). The following webcast also provides basic navigation help:
      Document 1456176.1 Oracle Database Advisor Webcast Schedule and Archive recordings
      "Database Tuning - Using SQLTXPLAIN To Diagnose SQL Statements Performing Poorly "

      which you could use alongside advisory content from My Oracle Support:
      Document 199083.1 Master Note: SQL Query Performance Overview
    • What are the main things to look for in the observations section?

      The key findings in the observations section are different from case to case. Something that is unimportant in one report may be crucial for another. For example, an index may have been dropped on one of the tables in a query. This might change the access path in some queries but have no effect on others. Because of this there is no way to mark one finding as more critical than another. The observations are simply presented as a "heads up" for evaluation, as otherwise there is a possibility of supplying misleading information.
    • Is information from memory 'better' than the information from the DBA_HIST_* views?

      Both the information from memory and from AWR that SQLT uses may be aggregated. This means that if a single execution performs differently but uses the same plan then it may be difficult to identify that. In that case the AWR SQL Report is useful. See: awrsqrpt.sql from:
      Document 748642.1 How to Generate an AWR Report and Create Baselines
    • If the database structure has not changed, why is there more than one explain plan in SQLT?

      There are many reasons for the same SQL to generate different plans. One of these of course is that the underlying structures have changed but others include using different bind variable values and histograms to drive different execution plans, multiple sessions using slightly different optimizer 'versions' by changing parameters and datatypes or data lengths differing within one application as compared to another application. For more details on the various reasons see:
      Document 296377.1 Troubleshooting: High Version Count Issues
    • How can I implement an explain plan suggested by SQLT?

      SQLT does not suggest execution plans - SQL Tuning Advisor does that. If you have the Oracle Tuning Pack license then SQLT can execute SQL Tuning Advisor, and this may suggest a better plan for the SQL. You can implement that via a SQL Profile. See:
      Document 271196.1 Automatic SQL Tuning - SQL Profiles.
      If you want to fix a particular plan that has already been detected, then review the dynamic readme generated by any of the main methods and look for "Create SQL Plan Baseline from SQL Set". This documents using SQL Plan Management (SPM) through a SQL Tuning Set.
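      For illustration, loading a known-good plan from a SQL Tuning Set into a SQL Plan Baseline looks roughly like this. This is a sketch: the set name, owner and plan hash value are placeholders, and the SQLT readme gives the exact commands for your case:

      ```sql
      -- Hypothetical names: MY_STS, APP_USER and 1234567890 are placeholders.
      DECLARE
        n PLS_INTEGER;
      BEGIN
        n := DBMS_SPM.LOAD_PLANS_FROM_SQLSET(
               sqlset_name  => 'MY_STS',
               sqlset_owner => 'APP_USER',
               basic_filter => 'plan_hash_value = 1234567890');
        DBMS_OUTPUT.PUT_LINE('Plans loaded: ' || n);
      END;
      /
      ```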
    • How do I delete an existing SQL Profile created by SQLT?

      As mentioned, SQLT does not create profiles - SQL Tuning Advisor does that. If you have the Oracle Tuning Pack license then SQLT can execute SQL Tuning Advisor, and this may suggest a better plan for the SQL, which you can implement via a SQL Profile. To delete a profile, use the standard procedure for deleting any SQL Profile. See:
      Document 271196.1 Automatic SQL Tuning - SQL Profiles.
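      As a sketch, dropping a profile uses the standard DBMS_SQLTUNE call; the profile name below is a placeholder, so list the existing profiles in DBA_SQL_PROFILES first:

      ```sql
      -- Find the profile name first:
      SELECT name, created, status FROM dba_sql_profiles;

      -- Then drop it; 'SYS_SQLPROF_0123456789abcdef' is a placeholder name.
      BEGIN
        DBMS_SQLTUNE.DROP_SQL_PROFILE(name => 'SYS_SQLPROF_0123456789abcdef');
      END;
      /
      ```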

    Performance

    • Does SQLT take a lot of Database resources to execute?

      The amount of resource used depends on the method used. Some methods just extract plans from the repository and so are relatively light. Other methods (such as the XECUTE method) may need to run the SQL which means the resource taken is at least the resource needed to execute that statement. Usually the benefit of executing and capturing the execution of the statement is much higher than the few minutes it takes to run when compared to not having that information.
    • How much memory does SQLT use?

      SQLT is not resource intensive; it is just a script. If it runs for 10 minutes, then it's the equivalent of running any other script for 10 minutes.

    Software Compatibility

    • Does SQLT work with --Insert application here--

      SQLT is a series of SQL scripts and a repository that uses SQL commands run against the data dictionary and the repository to produce a report. It can work against any SELECT generated by any application software.
    • Are there any limitations on using SQLT in a Database running EBusiness Suite?

      No.
    • Does SQLT work with RAC and Exadata?

      Yes.
      SQLT is RAC and Exadata aware
    • Is any Exadata specific information captured in report for the SQL?

      Yes, some parameters and I/O calibration data are captured, but not much else. From the SQL tuning point of view, Exadata is not very different from a standard database.
    • What is the difference between SQLT and the SQL Performance Analyzer?

      The tools were designed for totally different reasons. SQL Performance Analyzer is part of Real Application Testing (RAT) suite and enables you to assess the impact of system changes on the response time of SQL statements. SQLT is to help you understand what the root cause of an issue with a particular statement is.
    • Is SQLT integrated into Oracle Enterprise Manager (OEM)/Grid Control?

      No.
      SQLT is a standalone tool but will work alongside these tools.
    • Why Would you want to use SQLT when you have OEM?

      OEM and SQLT perform two different jobs and are not mutually exclusive. Oracle Enterprise Manager Database Control is the primary tool for managing your Oracle database; SQLT is a tool specifically for facilitating the tuning of individual SQL statements.

    SQLT Testcase Generator

    • What is the SQLT Testcase Generator?

      When SQLT gathers information about a candidate SQL statement, it automatically builds a testcase to enable the current plan of the SQL to be reproduced. It does this by storing the metadata of all the objects involved in the query (including views and materialized views) in a testcase schema so that they can be recreated on a test schema. No user data is stored, but the optimizer statistics are recorded so that the plan can be recreated. Most of the time data is not necessary to reproduce the plan (typically, data is only needed to reproduce performance issues or incorrect results). See the following for more details:

      Document 1470811.1 How to Use SQLT (SQLTXPLAIN) to Create a Testcase Without Row Data
      Document 1465741.1 How to Use SQLT (SQLTXPLAIN) to Create a Testcase Containing Application Data
    • Does SQLT Testcase generate user data?

      No. It records the structure and the statistics only. These are all we need to reproduce the access path.
    • In order to reproduce a SQLT Testcase, is the same hardware required on each server?

      Usually no. SQLT records system statistics and these are usually sufficient to make the target system "think" that it is the same as the source.
    • Can I have multiple system stats on a test system?

      SQLT does not contain a specific facility to load up multiple sets of system statistics, nor is there a history for system statistics. You can however save and restore different system statistics manually but there is only one set of system stats available at any time. See:

      Document 149560.1 How to Collect and Display System Statistics (CPU and IO) for CBO use
      Document 470316.1 Using Actual System Statistics (Collected CPU and IO information)

      Document 153761.1 System Statistics: Scaling the System to Improve CBO optimizer
      Document 427939.1 System Statistics: How to gather system stats for a particular batch of work
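      For illustration, saving and restoring a set of system statistics manually can be sketched as follows; the statistics table name and statid are placeholders:

      ```sql
      -- Hypothetical names: MYSTATS table, 'saved1' statid.
      -- Save the current system statistics into a user statistics table:
      EXEC DBMS_STATS.CREATE_STAT_TABLE(ownname => USER, stattab => 'MYSTATS');
      EXEC DBMS_STATS.EXPORT_SYSTEM_STATS(stattab => 'MYSTATS', statid => 'saved1');

      -- ...change system statistics for testing...

      -- Restore the saved set later:
      EXEC DBMS_STATS.IMPORT_SYSTEM_STATS(stattab => 'MYSTATS', statid => 'saved1');
      ```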
    • Do we need to take a backup before running SQLT?

      It is generally prudent to back up before installing any software on a production system.
      With a Testcase generated from SQLT, since this is usually going to be installed in a test system, there is not usually any specific need to backup any data. Note that if the system statistics of the target are going to be changed to replicate a plan, then you may wish to save these before modifying them.
    • Can you use SQLT to analyze a SQL statement just with a source system?

      It is best to analyze SQLT data on a source system since that is where the runtime information for the statement is recorded.
      In terms of a testcase you would probably be better running that on a test system as opposed to production.
    • Can we use SQLT testcase to reproduce Oracle materialized view issue on a Test system?

      Yes. All metadata is transferred to a special schema for the testcase, together with the associated statistics, environment, etc., to allow us to reproduce the plan. No data is transferred.
    • Are there any special considerations when working with dictionary and fixed objects? Do they map to one schema too?

      Dictionary/fixed objects are handled just the same as any other objects; if your queries contain dictionary/fixed objects, then they are extracted into a special schema for the testcase together with the associated statistics, environment, etc., to allow us to reproduce the plan. No data is transferred.
    • If we use Real Application Clusters (RAC) for our production database, and no RAC for our development database, does the testcase still reproduce the SQL plan in the development database?

      Yes. You can reproduce the same plan in DEV regardless of RAC or not.
    • Is the Objective of SQLT Testcase to only reproduce the same execution plan, not help to retrieve the best execution plan?

      SQLT Testcase reproduces the same execution plan so that you can then use tools to determine a better plan if that is possible. Tools such as SQL Tuning Advisor enable that, and SQLT integrates it if you have the relevant licenses.

    Miscellaneous

    • In SQLT Version 11.4.4.8 and below "!=" is converted into "=", potentially giving wrong information?

      This issue is fixed in SQLT Version 11.4.4.9 and above. As a workaround convert instances of "!=" in the input SQL into "<>". To find your current SQLT Version, run the following SQL as the SYS or SQLTXPLAIN user:
      COL sqlt_version FOR A40;
      
      SELECT
      'SQLT version number: '||sqltxplain.sqlt$a.get_param('tool_version')||CHR(10)||
      'SQLT version date  : '||sqltxplain.sqlt$a.get_param('tool_date')||CHR(10)||
      'Installation date  : '||sqltxplain.sqlt$a.get_param('install_date') sqlt_version
      FROM DUAL
      /
      
      SQLT_VERSION
      ----------------------------------------
      SQLT version number: 11.4.4.8
      SQLT version date  : 2015-09-27
      Installation date  : 2015-10-24/09:56:12
      
    • What is the CBO?

      CBO is the Cost-Based Optimizer.
      The Cost-Based Optimizer is built-in software that uses statistics to determine the most efficient way to execute a SQL statement and the optimal access path for queries.

      Oracle® Database Performance Tuning Guide
      11g Release 2 (11.2)
      Part Number E16638-06
      Chapter 11 The Query Optimizer
      http://docs.oracle.com/cd/E11882_01/server.112/e16638/optimops.htm
    • What is AWR?

      AWR is the Automatic Workload Repository. See document:

      Document 1363422.1 Automatic Workload Repository (AWR) Reports - Start Point
    • In the SQLT report, why is the "Last rows" not "Actual rows"?

      "Last rows" is used because those numbers relate to the last execution. If this varies for some reason, then this information may be important. The data is the same as you can get with RUNSTATS_LAST in DBMS_XPLAN.DISPLAY_CURSOR; please refer to the DBMS_XPLAN reference documentation for details.
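For comparison outside of SQLT, the same last-execution actuals can be displayed from the cursor cache with the documented ALLSTATS LAST format of DBMS_XPLAN.DISPLAY_CURSOR (actual row counts are only recorded when STATISTICS_LEVEL is ALL or the statement carries the GATHER_PLAN_STATISTICS hint):

```sql
-- Execute the statement of interest in this session first, e.g.:
--   SELECT /*+ GATHER_PLAN_STATISTICS */ ... FROM ... ;
-- Then show estimated vs. actual rows for its last execution:
SELECT *
  FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(
         sql_id          => NULL,   -- NULL = last statement of this session
         cursor_child_no => NULL,
         format          => 'ALLSTATS LAST'));
```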

    Original Shipped FAQ

    These questions are taken from the Original FAQ section within the "sqlt_instructions.html" in the sqlt.zip downloadable from:

    Document 215187.1 SQLT (SQLTXPLAIN) - Tool That Helps To Diagnose SQL Statements Performing Poorly
    • What is new in this SQLT release?

      A summary of recent SQLT changes can be found in file: "sqlt/doc/sqlt_changes.html"
    • How do I make a SQL statement use a plan that I can only achieve by using CBO Hints?

      The question assumes you cannot modify the original SQL for whatever reason.
      You want to "capture" the plan from a modified version of your SQL (the one with CBO Hints) and associate this "good" plan to the original un-modified SQL.

      If on 11g, consider a SQL Plan Baseline by using "sqlt/utl/coe_load_sql_baseline.sql".
      If on 10g, consider a custom SQL Profile by using "sqlt/utl/coe_load_sql_profile.sql".
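As a sketch of the 11g route (the parameter prompts below are assumptions and may vary by SQLT version): run the hinted version of the statement so its plan is in the cursor cache, capture its identifiers from V$SQL, then run the script and answer its prompts:

```sql
-- Find the SQL_ID and PLAN_HASH_VALUE of the hinted ("good") statement;
-- the LIKE filter below is a hypothetical placeholder.
SELECT sql_id, plan_hash_value
  FROM v$sql
 WHERE sql_text LIKE '%my_hinted_query%';

-- Load that plan as a SQL Plan Baseline for the original statement.
-- The script prompts for the original SQL_ID and the modified
-- SQL_ID / plan to associate with it.
START sqlt/utl/coe_load_sql_baseline.sql
```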
    • ORA-00942: table or view does not exist: Error at line 1: SELECT prev_sql_id, prev_child_number FROM sys.my_v$session

      This error occurs when using the XECUTE method, and it means the "script.sql" passed to this method is invalid. Your script must be able to execute stand-alone without errors.
    • "ORA-07445" or "ORA-03114: not connected to ORACLE".

      Review the alert.log and the trace referenced by this error. If the failed SQL uses V$SQL_PLAN, then there is a database bug in your system. You can pursue a fix by logging a ticket and providing the alert.log and the referenced trace. SQLT can work around this bug with some loss of functionality:
      EXEC sqltxplain.sqlt$d.set_param('predicates_in_plan', 'N');

      If the alert.log includes the error:
      BEGIN CTXSYS.CTX_REPORT.CREATE_INDEX_SCRIPT(index_name => :name, report => :report); END;

      then try SQLT a second time. If the problem persists, then apply the following workaround with minimal loss of functionality:
      EXEC sqltxplain.sqlt$d.set_param('domain_index_metadata', 'Y');
    • How do I analyze a distributed query?

      Install this SQLT tool in your local and remote nodes. Then, on your local node, make SQLT aware of the remote nodes by registering the database links referenced by your SQL or its dependent objects:
      EXEC sqltxplain.sqlt$i.register_db_link('db_link_name');
    • XTRACT or XECUTE methods are taking too long.

      Review the corresponding log file. The culprit could be the 11g Test Case Builder (TCB) and/or the SQL Tuning Advisor (STA). These two utilities are usually time consuming. You can sacrifice their functionality and improve the performance of SQLT by disabling SQLT access to the TCB and reducing the time threshold for the STA:
      EXEC sqltxplain.sqlt$a.set_param('test_case_builder', 'N');
      EXEC sqltxplain.sqlt$a.set_param('sta_time_limit_secs', '30');


      If SQLT still takes a long time, and the schema objects behind your SQL contain a large number of sub-partitions, you can reduce the granularity of the data collected:
      EXEC sqltxplain.sqlt$a.set_param('c_gran_segm', 'PARTITION');
      EXEC sqltxplain.sqlt$a.set_param('c_gran_cols', 'PARTITION');
      EXEC sqltxplain.sqlt$a.set_param('c_gran_hgrm', 'PARTITION');
    • How do I register additional SQLT users?

      SQLT users must be granted the SQLT_USER_ROLE role. You can execute the grant directly, or you can execute "sqlt/install/sqguser.sql" connected as SYS.
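For example, to grant the role directly to a (hypothetical) user SCOTT, connect as SYS and run:

```sql
-- SCOTT is a placeholder; substitute your actual SQLT user.
GRANT SQLT_USER_ROLE TO scott;
```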
    • Can I change the SQLT staging directory?

      Yes, use "sqlt/utl/sqltcdirs.sql", passing the full directory path. This path cannot contain the "?" or "*" symbols, and it is case sensitive on most systems. The directory must exist on the server, and ORACLE must have read/write access to it. The recommended directory is UDUMP. Aliases are not allowed.
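A minimal sketch, assuming a hypothetical staging path /u01/app/oracle/sqlt_stage that already exists on the database server:

```sql
-- Connected as SQLTXPLAIN; the path is case sensitive on most systems.
START sqlt/utl/sqltcdirs.sql /u01/app/oracle/sqlt_stage
```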
    • How to suppress the export of the SQLT repository?

      The SQLT repository is needed to generate a Test Case (TC) and to troubleshoot a query performing poorly. If you are sure the exported repository is not needed, use this command to disable its automatic export:
      EXEC sqltxplain.sqlt$a.set_param('export_repository', 'N');
    • What is my SQLT tool version?

      SELECT sqltxplain.sqlt$a.get_param('tool_version') FROM DUAL;
    • How to keep the SQLT repository small?

      You can purge the SQLT repository periodically by using the provided script:
      "sqlt/utl/sqlthistpurge.sql"
    • How can I expedite "SELECT COUNT(*)" on application Tables?

      If you look at the SQLT log file and identify that most time is spent when SQLT is performing a COUNT(*) on all related application tables, then you can suppress this operation with minimal functionality impact:
      EXEC sqltxplain.sqlt$a.set_param('count_star_threshold', '0');
    • How can I expedite "Taking a snapshot of some Data Dictionary objects"?

      A snapshot of some Data Dictionary objects takes anywhere from a couple of minutes to several hours, depending on the size of the data dictionary. This is the last step of the installation. If killed, it will be automatically executed when XECUTE is first used.

      To expedite the snapshot time, with some loss of functionality, you can disable the snapshot of extents by executing these commands while connected as SQLTXPLAIN:
      SET SERVEROUT ON;
      EXEC trca$g.set_param('capture_extents', 'N');
      EXEC trca$t.refresh_trca$_dict_from_this;


      If "Taking a snapshot of some Data Dictionary objects" still takes longer than several hours, it can be disabled altogether, with significant loss of functionality, by executing these commands while connected as SQLTXPLAIN:
      SET SERVEROUT ON;
      EXEC trca$g.set_param('refresh_dict_repository', 'N');
      EXEC trca$t.refresh_trca$_dict_from_this;
    • How do I provide feedback on this SQLT tool?

      If you have an open ticket with Oracle Support where SQLT was requested, use that ticket.
      You can also add comments to this FAQ or to the main SQLT document,
      or you may email its author (Carlos Sierra).

    What further information is available?

    • SQLT Webcasts

      An ongoing series of webcasts on SQLT (and related tools such as SQLHC) is being presented and recorded.
      For details (look under the Oracle Database section), see:

      Document 740966.1 Advisor Webcast Current Schedule

      Recorded Webcasts will be uploaded within a few weeks of completion here:

      Document 1456176.1 Oracle Database Advisor Webcast Archives
      Document 740964.1 Advisor Webcast Archived Recordings

      Some past topics have been:

      • Database Tuning - Using SQLTXPLAIN To Diagnose SQL Statements Performing Poorly
      • How to create in 5 minutes a SQL Tuning Test Case using SQLTXPLAIN

    The following SQL, run as SYS or as the SQLTXPLAIN user, will provide version information:

    COL sqlt_version FOR A40;

    SELECT
    'SQLT version number: '||sqltxplain.sqlt$a.get_param('tool_version')||CHR(10)||
    'SQLT version date : '||sqltxplain.sqlt$a.get_param('tool_date')||CHR(10)||
    'Installation date : '||sqltxplain.sqlt$a.get_param('install_date') sqlt_version
    FROM DUAL
    /



 

 



                    










Copyright © 1996 -  2017

All rights reserved by Burleson

Oracle ® is the registered trademark of Oracle Corporation.
