Can you have too much cache?

December 2004
by Donald K. Burleson

There is a great debate about the rapidly-falling costs of RAM and the performance benefits of full caching of Oracle databases.

Let's take a closer look at the issues surrounding large RAM data buffers, tuning by adjusting system parameters, and using fast hardware to correct sub-optimal Oracle code:

Skeptic comment:

There is nothing wrong with disk I/O and full caching can hurt Oracle performance.

Have you noticed that all Oracle10g "world record" benchmarks use over 50 gig data caches? I worked with one of these TPC-C benchmarks and ran repeatable timings. Up to the point where the working set was cached, the benefit of a larger data cache outweighed the LIO overhead.

Using multiple blocksizes also helped greatly, with the appropriate blocksize chosen according to the types of objects in each tablespace.

For example, small OLTP row accesses favor a 2K blocksize because you don't waste RAM hauling in block space you don't need. A 32K tablespace for index range scans also showed a large, measurable performance improvement:

http://www.tpc.org/results/FDR/TPCC/HP%20Integrity%20rx5670%20Cluster%2064P_FDR.pdf
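A hedged sketch of the mechanics follows (the cache sizes, file paths and object names are hypothetical, not recommendations): each non-default blocksize gets its own buffer cache, and tablespaces are then created against that blocksize.

-- assumes the database default block size is 8K, so separate 2K and 32K caches are allowed
ALTER SYSTEM SET db_2k_cache_size  = 512M SCOPE=BOTH;
ALTER SYSTEM SET db_32k_cache_size = 2G   SCOPE=BOTH;

CREATE TABLESPACE oltp_2k DATAFILE '/u01/oradata/oltp_2k_01.dbf' SIZE 10G BLOCKSIZE 2K;
CREATE TABLESPACE idx_32k DATAFILE '/u01/oradata/idx_32k_01.dbf' SIZE 10G BLOCKSIZE 32K;

-- rebuild a range-scanned index into the 32K tablespace
ALTER INDEX app.cust_name_idx REBUILD TABLESPACE idx_32k;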

Skeptic comment:

The bigger your cache, the larger your LRU and Dirty List becomes.

This is true, but disk I/O is even more expensive! For example, check out the Sun Oracle 10g benchmark:

http://sun.systemnews.com/articles/69/2/Oracle/11510

Remember, the size of the buffer cache depends on the size of the "working set" of frequently-referenced data! A very large OLTP system might need a 60 gig KEEP pool:

http://www.dba-oracle.com/art_dbazine_9i_multiblock.htm
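As a rough sketch of what that looks like (the 60 gig figure and the object names below are hypothetical), you size the KEEP pool and then assign the hot objects to it:

ALTER SYSTEM SET db_keep_cache_size = 60G SCOPE=BOTH;

-- frequently-referenced working-set objects are assigned to the KEEP pool
ALTER TABLE app.customer STORAGE (BUFFER_POOL KEEP);
ALTER INDEX app.customer_pk STORAGE (BUFFER_POOL KEEP);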

Also, look into the mechanism inside 10g AMM. It computes the marginal benefit of additional data block buffers based on the cost of reduced PIO, and it does indeed consider the cost of serialization overhead.
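You can watch the same trade-off yourself through the buffer cache advisory, which predicts how physical reads would change with a larger or smaller cache (assuming db_cache_advice is left at its default of ON):

SELECT size_for_estimate,
       buffers_for_estimate,
       estd_physical_read_factor,
       estd_physical_reads
  FROM v$db_cache_advice
 WHERE name = 'DEFAULT'
 ORDER BY size_for_estimate;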

Skeptic comment:

There comes a point where, for a particular 'unit' of hardware, the marginal cost of the increase in administrative overhead is greater than the marginal decrease in costs due to less physical I/O.

For reads, disk I/O is almost always slower than an LIO, and full caching is great for read-only databases such as DSS, OLAP and DW systems! For write-intensive databases, a large cache can be a problem. That's why many DBAs place high-DML objects in a separate tablespace (with a different blocksize) and map it to a smaller buffer.
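A hedged sketch of that approach (the schema, table and tablespace names are hypothetical): first find the heaviest-DML segments, then relocate them to a tablespace served by its own, smaller cache.

SELECT owner, object_name, value AS block_changes
  FROM v$segment_statistics
 WHERE statistic_name = 'db block changes'
 ORDER BY value DESC;

-- dml_2k is a hypothetical 2K-blocksize tablespace backed by a small db_2k_cache_size
ALTER TABLE app.order_lines MOVE TABLESPACE dml_2k;
-- indexes on a moved table are left unusable and must be rebuilt afterward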

Skeptic comment:

Why waste time adjusting initialization parameters and changing SQL optimizer statistics when the real problem is bad SQL?

When I visit a client I usually find thousands of sub-optimal SQL statements that would take months to tune manually. To get a head start, I tweak the instance parms to broad-brush tune as much as possible to the lowest common denominator. Then I can sweep v$sql_plan and tune the exceptions. Tuning the instance first saves buckets of manual tuning effort, and lowering optimizer_index_cost_adj will sometimes remove sub-optimal full-table scans for hundreds of statements in just a few minutes. What's wrong with that?
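For example, a sweep for large-table full scans that survive the broad-brush changes might look like this (the 10,000-block threshold is purely illustrative):

SELECT p.sql_id, p.object_owner, p.object_name, s.executions, s.buffer_gets
  FROM v$sql_plan p
  JOIN v$sql s
    ON s.sql_id = p.sql_id AND s.child_number = p.child_number
  JOIN dba_segments g
    ON g.owner = p.object_owner AND g.segment_name = p.object_name
 WHERE p.operation = 'TABLE ACCESS'
   AND p.options = 'FULL'
   AND g.blocks > 10000
 ORDER BY s.buffer_gets DESC;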

Skeptic comment:

Throwing more memory at the problem does NOT make it go away.

I had a client last month with a REALLY messed-up database, and it was HEAVILY I/O-bound (85% read waits). Instead of charging $100k to fix the mess, I replaced the disks with high-speed RAM-SAN (solid-state disk) for only $40k. The system ran 15x faster, in less than 24 hours, and the client was VERY happy. Granted, it's not elegant, but hey, why not throw hardware at lousy code if it's cheap and fast?
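The 85% figure comes from the wait interface; one rough way to gauge how read-bound an instance is (an approximation, not necessarily the exact method used on that engagement) is:

SELECT ROUND(100 * SUM(CASE WHEN event LIKE 'db file%read' THEN time_waited ELSE 0 END)
             / SUM(time_waited), 1) AS pct_read_waits
  FROM v$system_event
 WHERE wait_class <> 'Idle';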

Like it or not, disk will soon be as obsolete as drums! I remember when 1.2 gig of disk cost $250k, and I can now get 100 gig of SSD for only $110k.

I have several fully-cached clients (some using SSD and others with a huge KEEP pool), and their databases run great...

I expect that Oracle Corporation will soon be working on a solid-state Oracle in a future release where the caches will disappear completely.

Of course, tuning the SQL is always the best remedy, and by reducing unnecessary consistent gets you often reduce PIO too! But it makes sense to me to tune at the system level by adjusting CBO parms (optimizer_index_cost_adj and optimizer_index_caching) and by improving the CBO statistics:

http://www.dba-oracle.com/oracle_tips_cost_adj.htm

http://technet.oracle.com/oramag/webcolumns/2003/techarticles/burleson_cbo_pt1_pt2.html
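A minimal sketch of those system-level adjustments (the values and schema name are examples only, not recommendations; every workload needs its own settings):

ALTER SYSTEM SET optimizer_index_cost_adj = 20 SCOPE=BOTH;
ALTER SYSTEM SET optimizer_index_caching  = 85 SCOPE=BOTH;

-- refresh the statistics the CBO uses to cost its plans
EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'APP', cascade => TRUE);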

Skeptic comment:

Would you please explain why I have a database running at a BCHR of 99.9 and performance is still abysmal?

A high buffer cache hit ratio with poor performance is often due to too many logical I/Os (high consistent gets from sub-optimal SQL) or high library cache contention (excessive parsing).

Up to the point where the marginal benefit of adding blocks to the buffer declines (the second derivative of the 1/x curve of buffer utilization), tuning with BCHR makes sense. Beyond the point where the working set is cached, the marginal benefit from RAM data buffers declines:

http://www.dba-oracle.com/art_builder_buffers.htm
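To see where a "perfect" BCHR is hiding work, look at the statements doing the most logical I/O and at library cache reloads; a quick sketch:

SELECT *
  FROM (SELECT sql_id, executions, buffer_gets
          FROM v$sql
         ORDER BY buffer_gets DESC)
 WHERE ROWNUM <= 20;

SELECT namespace, gethitratio, reloads
  FROM v$librarycache;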

Skeptic comment:

Throwing hardware at an Oracle performance problem addresses the symptom, and not the root cause of the performance problem.

True, but I've never had a problem throwing hardware (cache, faster CPU) at a messed-up system. My clients often demand it because they don't want to pay a fortune for SQL tuning. In many cases it's faster and cheaper. It makes sense.

There are many system-level "silver bullets" to Oracle tuning, and IMHO you would be foolish not to try them. For example, if you could tune 500 SQL statements by adding an index or building an MV, why not? Consulting clients demand that you tune the whole system before tuning individual SQL statements:

http://www.orafaq.com/articles/archives/000027.htm
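For instance, a single summary materialized view with query rewrite enabled can transparently repair whole families of aggregate queries (the table and column names below are hypothetical):

CREATE MATERIALIZED VIEW sales_by_region_mv
  BUILD IMMEDIATE
  REFRESH ON DEMAND
  ENABLE QUERY REWRITE
AS
SELECT region_id, SUM(amount) AS total_sales, COUNT(*) AS order_count
  FROM sales
 GROUP BY region_id;

-- query_rewrite_enabled must be TRUE for the optimizer to use the MV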

However, this will be a moot issue in 24 months when SSD makes those stupid magnetic platters obsolete.


Market Survey of SSD vendors for Oracle:

There are many vendors who offer rack-mount solid-state disks that work with Oracle databases, and the competitive market ensures that product offerings will continuously improve while prices fall. SearchStorage notes that SSD will soon replace platter disks and that hundreds of SSD vendors may enter the market:

"The number of vendors in this category could rise to several hundred in the next 3 years as enterprise users become more familiar with the benefits of this type of storage."

As of January 2015, many of the major hardware vendors (including Sun and EMC) are replacing slow disks with RAM-based disks, and Sun announced that all of their large servers will offer SSD.

Here are the major SSD vendors for Oracle databases:

2008 Rack-Mount SSD Performance Statistics

SearchStorage has done a comprehensive survey of rack-mount SSD vendors; the table below shows the fastest rack-mount SSD devices:

Manufacturer | Model | Technology | Interface | Performance metrics and notes
IBM | RamSan-400 | RAM SSD | Fibre Channel, InfiniBand | 3,000MB/s random sustained external throughput, 400,000 random IOPS
Violin Memory | Violin 1010 | RAM SSD | PCIe | 1,400MB/s read, 1,00MB/s write with ×4 PCIe, 3 microseconds latency
Solid Access Technologies | USSD 200FC | RAM SSD | Fibre Channel, SAS, SCSI | 391MB/s random sustained read or write per port (full duplex is 719MB/s); with 8 x 4Gbps FC ports, aggregated throughput is approx. 2,000MB/s, 320,000 IOPS
Curtis | HyperXCLR R1000 | RAM SSD | Fibre Channel | 197MB/s sustained R/W transfer rate, 35,000 IOPS

Choosing the right SSD for Oracle

When evaluating SSD for Oracle databases you need to consider performance (throughput and response time), reliability (Mean Time Between Failures) and TCO (total cost of ownership). Most SSD vendors will provide a test RAM disk array for benchmark testing so that you can choose the vendor who offers the best price/performance ratio.

Burleson Consulting does not partner with any SSD vendors, and we provide independent advice in this constantly-changing market. BC was one of the earliest adopters of SSD for Oracle, deploying SSD on Oracle databases since 2005, and we have experienced SSD experts to help any Oracle shop evaluate whether SSD is right for your application. BC experts can also help you choose the SSD that is best for your database. Just call 800-766-1884 or e-mail us for SSD support details.

DRAM SSD vs. Flash SSD

With all the talk about the Oracle “flash cache”, it is important to note that there are two types of SSD, and only DRAM SSD is suitable for Oracle database storage. Flash-type SSD suffers from serious shortcomings, namely a degradation of access speed over time. At first, Flash SSD is 5 times faster than a platter disk, but after some use the average write time becomes far slower than that of a hard drive. For Oracle, only rack-mounted DRAM SSD is acceptable for good performance:

Storage type | Avg. read speed | Avg. write speed
Platter disk | 10.0 ms | 7.0 ms
DRAM SSD | 0.4 ms | 0.4 ms
Flash SSD | 1.7 ms | 94.5 ms
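To put those latencies in throughput terms: a 10.0 ms average read allows roughly 1/0.010 = 100 random reads per second from a single platter disk, while 0.4 ms allows roughly 1/0.0004 = 2,500, about a 25x difference before any striping or caching is considered.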

 

 
