Using Export-Import to Reorganize Oracle Tables

Don Burleson


The Oracle export and import utilities can also be used for table reorganization. The export-import approach is relatively stable and has the advantage of being a proven method for reorganizing Oracle tables. We also have the option of specifying a very large data buffer, and with release 7.3 and above of Oracle we can use the DIRECT option to improve export and import performance.

As Oracle tables grow into the multi-gigabyte range, few Oracle professionals know how to use the Oracle export utility on very large tables. One of the real drawbacks of the Oracle export utility is the set of restrictions imposed by the UNIX operating system.

As you probably know, some UNIX systems limit an individual file to two gigabytes. Fortunately, with a few tricks, very large Oracle tables can still be exported, and with the help of the UNIX split and compress commands we can easily export a ten-gigabyte Oracle table. An example export parameter file is shown below.
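A minimal export parameter file for this technique might look like the following; the table name is illustrative, and the key point is that FILE points at the named pipe rather than a real dump file:

```
FILE=compress_pipe
TABLES=customer
DIRECT=Y
RECORDLENGTH=65535
```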



Notice that the output of this export is directed to a named pipe called COMPRESS_PIPE. The first step in running a split, compressed export is to create named pipes for the compress and the split.

rm -f compress_pipe
rm -f split_pipe

mknod compress_pipe p
mknod split_pipe p

chmod g+w compress_pipe
chmod g+w split_pipe

Creating a compressed, split pipe for an export.

Now that the pipes are in place, we can submit our export using the parameter file shown above.  Note that the export job must be submitted from the directory that contains the pipes, which is also the directory that will hold the exported dump files.

nohup split -b 500m < split_pipe > /tmp/exp_tab 2>&1 &

nohup compress < compress_pipe > split_pipe &

nohup exp parfile=export_tab.par file=compress_pipe > exp_tab.list 2>&1 &

The script commands to submit a split compressed export.

Note that the UNIX split command will place the output from the export into files named xaa, xab, xac, and so on, each a 500-megabyte chunk.

Now that we have the export in manageable pieces, we can import it by reversing the process, piping the dump through the cat and uncompress UNIX commands. Below, the first statement concatenates the chunks back together into SPLIT_PIPE.

The second command uncompresses the data from SPLIT_PIPE and places the uncompressed, re-joined stream into EXPORT_PIPE, a third named pipe created the same way as the others. The third statement runs the import utility using EXPORT_PIPE as its input file.

mknod export_pipe p

nohup cat xaa xab xac xad > split_pipe &
nohup uncompress -c < split_pipe > export_pipe &
nohup imp file=export_pipe &

The script to import from a split, compressed export.
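The pipe-and-split plumbing above can be rehearsed end-to-end without a database. The sketch below substitutes ordinary data for the exp utility and gzip for the classic compress (which is absent on many modern systems); the sizes and file names are illustrative:

```shell
#!/bin/sh
# Rehearse the split/compress named-pipe technique with ordinary data.
set -e
workdir=$(mktemp -d)
cd "$workdir"

mknod compress_pipe p
mknod split_pipe p

# Reader end: split the compressed stream into 1 KB chunks (xaa, xab, ...).
split -b 1k < split_pipe &
# Middle stage: compress whatever arrives on compress_pipe.
gzip -c < compress_pipe > split_pipe &

# Writer end (stands in for exp): push 4096 bytes of data into the pipe.
dd if=/dev/zero bs=1024 count=4 2>/dev/null | tr '\0' 'A' > compress_pipe
wait

# Reverse the process: concatenate the chunks and uncompress.
cat xa? | gzip -dc | wc -c
```

The same blocking semantics that make this work for a toy stream make it work for a multi-gigabyte export: each stage simply blocks until its neighbor opens the pipe, so no intermediate file ever has to hold the full dump.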

There are several other techniques you can use to improve the speed of the import utility.  These include:

  • Use a large BUFFER size in your parameter file. This reduces I/O by cutting the number of times that import must read the export file for data. Several megabytes is usually enough, but if you have the memory, consider an even larger buffer size. Again, check for paging and swapping at the operating system level to see whether the buffer setting is too high.

  • Always use INDEXES=N. It is almost always faster to build the indexes after the table data has been loaded.

  • Use COMMIT=N in your parameter file. This will cause import to commit after each object (table), not after each buffer. This is why one large rollback segment is needed.

  • Use one large, dedicated rollback segment for the import. To do this, take all other rollback segments offline. One rollback segment, approximately 50% of the size of the largest table being imported, should be large enough.

  • Put the database in NOARCHIVELOG mode until the import is complete. This will reduce the overhead of creating and managing archive logs.  However, a cold backup must be taken immediately after doing unrecoverable operations to ensure roll-forward capability.
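Pulling these recommendations together, an import parameter file along these lines might look like the following; the file name and buffer value are illustrative:

```
FILE=export_pipe
BUFFER=10000000
COMMIT=N
INDEXES=N
FULL=Y
```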

Burleson Consulting

The Oracle of Database Support

Copyright © 1996 -  2020

All rights reserved by Burleson

Oracle ® is the registered trademark of Oracle Corporation.