Oracle shared pool internals

Oracle Tips by Steve Karam

February 20, 2015

Also see basic shared_pool concepts.

Parsing: Translation and Optimization Internals

When a statement is given to Oracle in the form of Structured Query Language (SQL), the statement must first be understood by the Oracle instance (the brain, as it were). The statement is checked for syntax and permissions, then hashed so Oracle can look it up quickly and decide how best to respond. The optimization phase of the parse is the most important, and it happens via one of three paths: a hard parse, a soft parse, or a session cached cursor.

  • Hard Parse - Oracle searches for the statement in an area called the Library Cache, a subsection of the Shared Pool. If the statement is not found, Oracle invokes a tool called the Cost Based Optimizer (CBO) in order to come up with an optimal execution plan. This invocation is widely considered very CPU-expensive, just as a well-thought-out response to a question requires more brainpower.
  • Soft Parse - Oracle searches for the statement in the Library Cache and finds it. This allows Oracle to re-use execution plans without having to re-optimize the query and is akin to responding "Good, you?" when asked how you are doing.
  • Cached Cursors - Oracle notices that you have run the same statement over and over again, and caches the execution plan into your private memory so the Shared Pool is no longer burdened for your session. This is akin to someone asking you the same question over and over again, at which point the answer is immediately given with no real thought.

Hard Parses are a necessary evil in order to initially optimize a query. Soft parses are generally considered to be good, but can be problematic under heavy concurrency (explained later). Cached Cursor responses are great for performance, but require RAM resources granted explicitly via Oracle parameters or client/OCI settings.
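A rough way to see how a system's work breaks down along these lines is to query the cumulative counters in V$SYSSTAT — a sketch against the standard dynamic performance views; it requires a connection with SELECT privilege on them:

```sql
-- Compare total vs. hard parses and session cursor cache hits.
-- A large gap between 'parse count (total)' and 'parse count (hard)'
-- means most parses are soft; a high 'session cursor cache hits'
-- value means cached cursors are doing their job.
SELECT name, value
  FROM v$sysstat
 WHERE name IN ('parse count (total)',
                'parse count (hard)',
                'session cursor cache hits');
```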

SQL Query executions go through three main phases:

  1. Parse - Translate and optimize the query
  2. Execute - Lay down the execution plan and run the query
  3. Fetch - Pull back data from Oracle objects based on the execution plan

It is important to note that most programming languages explicitly call each of the three phases. When a query is run, it is possible to parse the query (known as a parse call) and then not execute or fetch the data. Reusing parsed queries (parse once, execute many) is a highly recommended practice.
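One way to check whether an application actually parses once and executes many is to compare PARSE_CALLS to EXECUTIONS in V$SQLAREA. A sketch, assuming standard dynamic performance views (the ROWNUM subquery form works on older releases as well):

```sql
-- Statements whose parse calls approach their executions are being
-- re-parsed on every run: a poor parse-to-execute ratio.
SELECT *
  FROM (SELECT sql_id, parse_calls, executions,
               ROUND(parse_calls / NULLIF(executions, 0), 2)
                 AS parses_per_exec
          FROM v$sqlarea
         ORDER BY parse_calls DESC)
 WHERE ROWNUM <= 10;
```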

Shared Pool internals

When queries are hard parsed, the execution plans generated by the CBO are placed in the Shared Pool; specifically, in a subsection called the Library Cache. If the same query comes in again, Oracle must simply search the Library Cache and return the stored execution plan (soft parse).
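How often the library cache turns would-be hard parses into soft parses can be sanity-checked from V$LIBRARYCACHE — a sketch against the standard view:

```sql
-- GETHITS/GETS near 100% for the SQL AREA namespace means most
-- statements are being found in the library cache (soft parsed).
SELECT namespace, gets, gethits,
       ROUND(100 * gethits / NULLIF(gets, 0), 1) AS get_hit_pct
  FROM v$librarycache
 WHERE namespace = 'SQL AREA';
```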

However, both hard and soft parses put strain on the Shared Pool. The Shared Pool is simply a chunk of RAM, and CPUs must manage access to that RAM. This means that if 100 users are simultaneously trying to access the Shared Pool, CPUs will be dedicated to serving those accesses. If there are more users than there are CPUs, a queue will form, and the CPUs will work through it until all the users are done.

Think of it as a trip to the grocery store. To take items from the store, a cashier must first process you. If there are 100 shoppers trying to check out and only two cashiers, a large queue will form and long wait times will ensue. The bottleneck in this case is cashiers, and you could even call the event "waiting on resource: cashiers". We can fix this by adding more cashiers, thereby cutting down on the wait times overall despite high concurrency.

In Oracle, the items are RAM segments, the cashiers are CPUs, and the shoppers are users or sessions. When more sessions need chunks of RAM than there are CPUs, enqueues and waits occur, and overall accessibility to the database degrades. Wait events such as "latch: library cache" and "cursor: pin S wait on X" are Oracle's internal names for these contention points, just as "waiting on the slow cashier with no bagger" is our name for a wait event at the grocery store.

Continuing with this analogy, let's say we have 20 shoppers and 10 cashiers. This type of queue is very reasonable and will be processed rather quickly. Now imagine we throw in 20 more shoppers who come in to buy the exact same three or four items every few minutes. Then imagine we bring in a few shoppers who load up their carts, get to the front of the line, and then decide they don't want the items after all. On top of that, we throw in a few store employees who walk through the line piling various sub-tasks on top of the normal cashier duties. This situation will fly into chaos, as anyone who has ever gone to Wal-Mart on a Saturday afternoon can tell you.

The same thing occurs in Oracle. The shared pool is required for parsing, for queries from DUAL, for sequences, and for countless other components. There are sessions that come through over and over to run the same thing every time, sessions that make a parse call but never actually execute or fetch, sessions that bring in huge queries that take forever to parse, and so on.

In addition, Oracle has its own control structures, sequences, views, and other components that must be handled behind the scene while all this is going on. And here we've only discussed the checkout process! Imagine if during all this the cashier also had to order custom or out-of-stock items on the fly from warehouses (akin to disk reads from storage).

However, the answer to this predicament is not always to add more CPUs. We can also cut down on this behavior by introducing better design and optimizations. For instance:

  • Making the shared pool bigger lets more statements stay cached, so more queries are soft parsed instead of hard parsed, cutting down on CPU use and on contention for shared pool latches.
  • Allowing repeat queries to parse in session memory instead of the shared pool reduces strain.
  • Ensuring that queries parse only when absolutely necessary at the application level reduces parse calls and therefore contention on the shared pool.
  • Not running meaningless queries like "select 1 from dual" to check for database availability cuts down on parse calls, not to mention hits against the DUAL table, whose definition is cached in the shared pool.
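Two of the knobs above map directly onto initialization parameters. The values below are purely illustrative, not recommendations; derive real sizes from advisor views such as V$SHARED_POOL_ADVICE:

```sql
-- Allow up to 100 cursors per session to be cached in session memory.
-- This is a static parameter: the change takes effect after restart.
ALTER SYSTEM SET session_cached_cursors = 100 SCOPE = SPFILE;

-- Give the shared pool a larger floor so more plans stay cached
-- (2G is illustrative; size it for your own SGA).
ALTER SYSTEM SET shared_pool_size = 2G SCOPE = BOTH;
```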


A Real-Life Shared Pool Problem

I had a client who called me in due to an incredibly severe issue with the shared pool. They followed all the "rules": they used bind variables religiously, avoided anonymous PL/SQL blocks, and set a decent value for session_cached_cursors. Yet the system would become unresponsive and the following waits kept occurring when concurrency got really heavy:

  • latch: library cache
  • row cache lock
  • cursor: pin S wait on X
  • latch: shared pool

AWR/ASH and V$ data showed that soft parses were happening almost exclusively. But even though the system was properly caching and re-using SQL in the shared pool, resulting in soft parses, the contention buildup against the shared pool was too much for the system to handle, and it would come to a grinding halt.

In the end, we went through a massive effort to reduce usage on the shared pool (hard or soft). This was accomplished by removing unnecessary DUAL calls (the code was doing a select from DUAL on every page) and most importantly, setting up true client cursor caching in the JBoss application server.

First have a look at this bit of nastiness:

Elapsed:               59.50 (mins)
   DB Time:           16,227.94 (mins)

Remember that Average Active Sessions (AAS) equals DB Time divided by elapsed time. This gives us a good indication of the wait/work load during the roughly 60-minute window. As you can see, the AAS during this time was ~273, far too high for almost any respectable system.
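Here that works out to 16,227.94 / 59.50 ≈ 273. The same figure can be approximated live from Active Session History, which samples active sessions once per second (this requires the Diagnostics Pack license; the one-hour window below is an assumption for illustration):

```sql
-- Each ASH row is one active session caught in one per-second sample,
-- so rows / elapsed seconds approximates Average Active Sessions.
SELECT ROUND(COUNT(*) / (60 * 60), 1) AS avg_active_sessions
  FROM v$active_session_history
 WHERE sample_time > SYSDATE - 1 / 24;
```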

Looking at the Top 5 Timed Events, you can see that we're abnormally tied up in concurrency wait:

Top 5 Timed Events                                         Avg %Total
~~~~~~~~~~~~~~~~~~                                        wait   Call
Event                                 Waits    Time (s)   (ms)   Time Wait Class
------------------------------ ------------ ----------- ------ ------ ----------
latch: library cache                169,214   3,677,789  21735  377.7 Concurrenc
row cache lock                       31,355      60,136   1918    6.2 Concurrenc
enq: TX - row lock contention        20,268      58,719   2897    6.0 Applicatio
latch: shared pool                  102,219      54,985    538    5.6 Concurrenc
cursor: pin S wait on X           4,895,909      53,456     11    5.5 Concurrenc

There were reams of other statistics and information used to correctly identify the issue, but the key symptom could be seen right here in the AWR under the SQL ordered by Parse Calls section (query text has been removed to protect the guilty):

% Total
 Parse Calls  Executions     Parses    SQL Id
------------ ------------ --------- -------------
     283,466      283,061     12.07 bd7nzg41xqmtu
Module: JDBC Thin Client

     204,990      204,987      8.73 bajtpw34f1h8n
Module: JDBC Thin Client
     166,868      166,870      7.11 avc1jqzz04wpr

     108,239      108,239      4.61 4rg9kqppkwb2x
Module: JDBC Thin Client

You should notice three things from the above info:

  1. The SELECT 'x' FROM DUAL query is definitely excessive, having occurred about 46 times per second.
  2. More importantly, Parse Calls match Executions almost 100% for every query.
  3. There are a LOT of query executions.

Interestingly enough, conventional wisdom, advice, and even training always warn of the dangers of hard parsing vs. soft parsing, and vaguely mention the parse-to-execute ratio and how important it is, but for the most part the focus is on the type of parse.

The case above shows that even soft parses aren't always benign: massive concurrency, an overabundance of small queries, and meticulous bind variable practices still combined to create a huge case of shared pool contention with ~99% soft parses.

The application server in use was JBoss, which has a feature called the Prepared Statement Cache. Since querying in JBoss is done by setting up a Prepared Statement and running it against the database, the Prepared Statement Cache lets JBoss dedicate memory to keep the cursor open and reusable after a successful parse/execute/fetch.
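In the JBoss releases of that era, this cache was enabled per datasource in the *-ds.xml deployment descriptor. A hedged sketch; the datasource name and connection details are placeholders, not the client's actual configuration:

```xml
<!-- Illustrative *-ds.xml fragment; jndi-name and connection-url
     are placeholders. -->
<local-tx-datasource>
  <jndi-name>AppDS</jndi-name>
  <connection-url>jdbc:oracle:thin:@dbhost:1521:ORCL</connection-url>
  <driver-class>oracle.jdbc.OracleDriver</driver-class>
  <!-- Keep up to 15 prepared statements (open cursors) cached
       per pooled connection -->
  <prepared-statement-cache-size>15</prepared-statement-cache-size>
</local-tx-datasource>
```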

Even with all the bind variables in the world, we couldn't "Parse Once, Execute Many" without the client keeping the parse cursor open. Take a look at what happened when we found the root cause of the issue in the application server settings and corrected it (a value of 15 for prepared-statement-cache-size worked nicely):

% Total
Parse Calls  Executions     Parses    SQL Id
------------ ------------ --------- -------------
      13,481       66,349      2.32 8g5h090r1uxmc
Module: JDBC Thin Client

      11,346      104,237      1.95 5nz8u1h2qyr92
Module: JDBC Thin Client

      11,083       63,029      1.90 78xz4j1g270uw
Module: JDBC Thin Client

      10,663       35,397      1.83 02s458bkx4z4a
Module: JDBC Thin Client

Notice that the parse-to-execute ratio improved dramatically. We also got rid of the DUAL queries. Consequently, the latch issue went away, never to return. The shared pool now operates at about 10% of the wait it used to incur and never hits that breaking point where performance degrades exponentially.

The final lessons:

  • Soft parses are preferable to hard parses in an OLTP application where query stability is preferred (though auto-gathered histograms sometimes upset this in 10g, before adaptive cursor sharing was introduced).
  • Bind variables can and should be used to improve plan reuse and reduce hard parses (CURSOR_SHARING=FORCE is something of an alternative, but not preferred to actual bind variable usage in the code).
  • To actually lower your parse-to-execute ratio, bind variables aren't always enough: your client needs to be configured to take advantage of them. This one doesn't seem to make it into all the how-to's or best practices, but it should.
