Is there any limit to the speed of Oracle? With Oracle announcing a new record of one million transactions per minute, many believe that there is nothing Oracle cannot handle. The fastest Oracle table insert rate I've ever seen was 400,000 rows per second, about 24 million rows per minute, using super-fast RAM disk (SSD), but Greg Rahn of Oracle notes SQL insert rates of upwards of 6 million rows per second using Exadata storage:
"One of the faster bulk (parallel nologging direct path from external
table using direct path compression) load rates I've seen is just over
7.7 billion rows in under 20 minutes which equates to around 385,000,000
per minute or about 6,416,666 per second.
All the CPUs are running at around 99% user CPU during that load. That
was loading to spinning rust (Exadata Storage). It would be even faster
had compression not been used. That was on a HP Oracle DB Machine (64
Intel Harpertown CPU cores)."
However, what if we have a requirement for a system that must accept high-volume data loads at a sustained rate of:
500,000 rows per second
50 megabytes per second
Is this possible? Using the right tricks, you can make Oracle load data at remarkable speed, but special knowledge and techniques are required.
Oracle provides us with
many choices for data loading, some way faster than others:
Batch Data Loading
If you are loading your data from flat files there
are many products and Oracle tools to improve your load speed:
Oracle10g Data Pump (available January 2004) - With Data Pump Import, a single stream of data load is about 15-45 times faster than original Import. This is because original Import uses only conventional-mode inserts, whereas Data Pump Import uses the direct path method of loading.
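For illustration, a parallel Data Pump import might be invoked like this (the credentials, directory object, and dump file name are hypothetical):

    impdp scott/tiger directory=dp_dir dumpfile=sales.dmp logfile=sales_imp.log parallel=4

The parallel parameter lets Data Pump spread the work across multiple worker processes.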
Oracle SQL*Loader - Oracle SQL*Loader has dozens of options, including direct-path loads, unrecoverable loads and more, to get super-fast loads. Here are tips for getting high-speed loads with SQL*Loader.
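As a sketch, a direct-path, parallel SQL*Loader run might look like this (the control file, data file, and table names are hypothetical):

    sqlldr userid=scott/tiger control=sales.ctl direct=true parallel=true skip_index_maintenance=true

    -- sales.ctl: a minimal direct-path control file
    OPTIONS (DIRECT=TRUE, ROWS=100000)
    UNRECOVERABLE
    LOAD DATA
    INFILE 'sales.dat'
    APPEND
    INTO TABLE sales
    FIELDS TERMINATED BY ','
    (sale_id, cust_id, sale_date DATE 'YYYY-MM-DD', amount)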
Oracle import Utility - Oracle has numerous options to improve data load speed with its import utility.
BMC Fast Import for Oracle - Claims to be 2 to 5 times faster than the Oracle import utility.
CoSORT FAst extraCT (FACT) for Oracle - Claims to make bulk loads up to 90% faster when CoSORT pre-sorts the load file on the table's index key. This also improves the clustering_factor and improves run-time SQL access speeds by reducing logical I/O.
Tips for super fast data loading
Don't use standard SQL inserts - They are far slower than other approaches. If you must use SQL inserts, make sure to use the APPEND hint to bypass the freelists and raise the high-water mark for the table. Note: INSERT APPEND supports only the subquery syntax of the INSERT statement, not the VALUES clause. You are far better off using PL/SQL with the bulk insert features (up to 100x faster), as in the sketch below.
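Both techniques are sketched here, assuming hypothetical sales and sales_staging tables with matching columns:

    -- Direct-path insert: APPEND works only with a subquery, never VALUES
    INSERT /*+ APPEND */ INTO sales
    SELECT * FROM sales_staging;
    COMMIT;

    -- PL/SQL bulk insert: fetch with BULK COLLECT, insert with FORALL
    DECLARE
       TYPE sales_tab IS TABLE OF sales%ROWTYPE;
       l_rows sales_tab;
    BEGIN
       SELECT * BULK COLLECT INTO l_rows FROM sales_staging;
       FORALL i IN 1 .. l_rows.COUNT
          INSERT INTO sales VALUES l_rows(i);
       COMMIT;
    END;
    /

For very large loads, fetch with a LIMIT clause in batches rather than collecting every row into memory at once.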
Partition - Load the data in a separate
partition, using transportable tablespaces.
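A common pattern is to load into a standalone table and then swap it into the target partition; a sketch with hypothetical partition and table names:

    -- Swap the freshly loaded table into the target partition
    ALTER TABLE sales
       EXCHANGE PARTITION sales_q1
       WITH TABLE sales_load_q1
       INCLUDING INDEXES
       WITHOUT VALIDATION;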
Use SSD RAM Disk - Especially for
the insert partition, undo and redo. You can move the partition to
standard disk later.
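Once the load completes, the hot partition can be relocated off the SSD; a sketch with hypothetical names:

    ALTER TABLE sales MOVE PARTITION sales_q1
       TABLESPACE std_disk_ts NOLOGGING PARALLEL 8;

    -- Local index partitions go unusable after a move and must be rebuilt
    ALTER INDEX sales_idx1 REBUILD PARTITION sales_q1;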
Use parallel DML - Parallelize the data
loads according to the number of processors and disk layout. Try to
saturate your processors with parallel processes.
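A sketch of a parallel direct-path insert (a degree of 8 is an arbitrary example; match it to your processor count and disk layout):

    ALTER SESSION ENABLE PARALLEL DML;

    INSERT /*+ APPEND PARALLEL(sales, 8) */ INTO sales
    SELECT /*+ PARALLEL(s, 8) */ *
      FROM sales_staging s;
    COMMIT;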
Disable constraints and indexes - Disable
during load and re-enable in parallel following the load.
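For example (the constraint and index names are hypothetical):

    ALTER TABLE sales DISABLE CONSTRAINT sales_cust_fk;
    ALTER INDEX sales_idx1 UNUSABLE;
    ALTER SESSION SET skip_unusable_indexes = TRUE;

    -- ... run the data load ...

    ALTER INDEX sales_idx1 REBUILD PARALLEL 8 NOLOGGING;
    ALTER TABLE sales ENABLE CONSTRAINT sales_cust_fk;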
Use multiple freelists or freelist groups for target tables - Avoid using bitmap freelists (ASSM, automatic segment space management) for super high-volume loads.
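Freelists require a tablespace with manual segment space management; a sketch with hypothetical names and sizes:

    CREATE TABLESPACE load_ts
       DATAFILE '/u01/oradata/load_ts01.dbf' SIZE 10G
       SEGMENT SPACE MANAGEMENT MANUAL;

    CREATE TABLE sales (sale_id NUMBER, cust_id NUMBER, amount NUMBER)
       TABLESPACE load_ts
       STORAGE (FREELISTS 8 FREELIST GROUPS 2);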
Pre-sort the data in index key order -
This will make subsequent SQL run far faster for index range scans.
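You can check the effect after the load; a clustering_factor close to the number of table blocks (rather than the number of rows) indicates well-clustered data:

    SELECT index_name, clustering_factor
      FROM user_indexes
     WHERE table_name = 'SALES';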
Size your log_buffer properly - If you see waits associated with the log_buffer size (the "log buffer space" wait event), try increasing it to 10m.
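For example:

    ALTER SYSTEM SET log_buffer = 10M SCOPE=SPFILE;
    -- log_buffer is a static parameter; restart the instance to apply it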
RAM Disk - Place undo tablespace and online redo logs on solid-state disk (RAM SAN).
Use SAME RAID - Avoid RAID 5 and use the Oracle Stripe and Mirror Everywhere (SAME) approach (RAID 1+0, also called RAID 10).
Use a small db_cache_size - If loading with DML, a small data cache will minimize DBWR work during asynchronous buffer cleanouts. In Oracle you can use the alter system set db_cache_size command to temporarily reduce the data buffer cache.
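For example (512m is an arbitrary figure; pick a value appropriate for your system, and restore the original size after the load):

    ALTER SYSTEM SET db_cache_size = 512M;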
Watch your commit frequency - At each commit, Oracle releases locks and undo segments, and dirty blocks in the db_cache_size region are marked as eligible for writing; the DBWR process then asynchronously writes all dirty blocks to disk, performing an expensive full scan of the RAM data buffer (db_cache_size). Benchmarks suggest that you should commit as infrequently as possible and use very large undo segments to avoid an ORA-01555 (snapshot too old) error.
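To support infrequent commits, make sure undo is generously sized; a sketch with hypothetical file names and sizes:

    CREATE UNDO TABLESPACE undo_big
       DATAFILE '/u01/oradata/undo_big01.dbf' SIZE 32G;

    ALTER SYSTEM SET undo_tablespace = undo_big;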
Use a large blocksize - Data loads onto 32k blocksizes will run far faster because Oracle will be able to insert more rows into an empty block before a write.
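A non-default blocksize needs its own buffer cache before the 32k tablespace can be created; a sketch with hypothetical names and sizes:

    ALTER SYSTEM SET db_32k_cache_size = 256M;

    CREATE TABLESPACE load_32k
       DATAFILE '/u01/oradata/load_32k01.dbf' SIZE 10G
       BLOCKSIZE 32K;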
For more information and details, I'm available to
assist with your high-speed loading needs.