Oracle expdp tips

Oracle Database Tips by Donald Burleson. Updated April 12, 2015.


The executable program for Oracle Data Pump Export is named expdp and is located in the $ORACLE_HOME/bin directory. The expdp executable can read a list of directives from a parameter file, as specified by the parfile option.
 
Below is an example of an export parameter file: export_options.par.

SCHEMAS=SCOTT
DIRECTORY=datapump
DUMPFILE=scott.dmp
LOGFILE=scott.log

Using this parameter file, the export is invoked from the command line as follows:

expdp parfile=export_options.par
 
You can add a WHERE clause to your export syntax to extract a subset of your production rows. In this example, we restrict the export to rows added in the last month:

expdp scott/tiger tables=tab1, tab2 query="where mymonth > sysdate - 31"

You can generate export commands for all tables in your schema by building the syntax with a query against user_tables, as in the sketch below.
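
Here is a minimal SQL*Plus sketch of that approach; the credentials, directory name, and generated file names are illustrative only:

-- generate one expdp command per table in the current schema
set pagesize 0 linesize 200 feedback off trimspool on
spool run_exports.sql
select 'expdp scott/tiger tables=' || table_name ||
       ' directory=datapump dumpfile=' || lower(table_name) || '.dmp'
from user_tables;
spool off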

All about expdp

Data Pump Export (expdp) is a modern, server-based, and highly scalable data unloading utility. On typical multi-processor servers with good disk I/O subsystems, unloading hundreds of gigabytes to terabytes is both reliable and reasonably fast. And even though the dump files remain Oracle proprietary, there are easily identifiable uses of XML within those files. Thus, uncompressed export files are semi-readable within a text editor and, as before, can be scanned with operating system commands such as strings on UNIX.
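
For instance, assuming the scott.dmp file produced by the parameter file example above, a quick peek on UNIX or Linux might look like this:

# scan an uncompressed dump file for readable XML and DDL fragments
# (the file lives wherever the DATAPUMP directory object points)
strings scott.dmp | head -50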

As of Oracle 11g, the older client-based export utility (exp) is deprecated and no longer supported for general use. Data Pump Export is now the primary supported export method.

A good place to start is by identifying the most frequently used Data Pump Export command-line parameters:

ATTACH  [SCHEMA.]JOB_NAME
   Name of an already existing and executing job to connect to. Requires the EXP_FULL_DATABASE privilege for other schemas.

COMPRESSION  METADATA_ONLY | ALL | NONE | DATA_ONLY
   Compress the dump file contents for the specified criteria.

CONTENT  ALL | METADATA_ONLY | DATA_ONLY
   Filter the export of dump file contents to the specified criteria.

DIRECTORY  DATA_PUMP_DIR | DIRECTORY_NAME
   Name of a directory object pointing to a valid server directory.

DUMPFILE  [DIRECTORY_NAME:]FILE_NAME [, ...]
   The name (and optionally the directory) of the export dump file.

ESTIMATE  BLOCKS | STATISTICS
   Method export uses to calculate the disk space required for the data.

ESTIMATE_ONLY  N | Y
   Do not export, but rather just estimate the disk space required.

EXCLUDE  OBJECT_TYPE[:NAME_FILTER_EXPRESSION] [, ...]
   Database object types, as a whole or by object name filter, to specifically exclude from the export. Also see my expdp exclude notes.

FILESIZE  INTEGER [B | K | M | G]
   The maximum file size permitted for any export dump file.

FULL  N | Y
   Whether or not to perform a full database export. Requires the EXP_FULL_DATABASE privilege.

INCLUDE  OBJECT_TYPE[:NAME_FILTER_EXPRESSION] [, ...]
   Database object types, as a whole or by object name filter, to specifically include in the export. Also see my expdp include notes.

JOB_NAME  SYS_EXPORT_<mode>_NN | JOB_NAME
   Name by which the export job can be referenced (e.g. by ATTACH).

LOGFILE  EXPORT.LOG | [DIRECTORY_NAME:]FILE_NAME
   The name (and optionally the directory) of the export log file.

NOLOGFILE  N | Y
   Whether or not to suppress creation of the export log file.

PARALLEL  1 | INTEGER
   The maximum number of concurrent threads for the export.

PARFILE  [DIRECTORY_SPECIFICATION]FILE_NAME
   Name of the operating system specific parameter file.

QUERY  [[SCHEMA.]TABLE_NAME:]FILTER_EXPRESSION
   Data filter condition applied to all tables, or by schema and object name filters, during the export.

REUSE_DUMPFILES  N | Y
   Whether or not to overwrite pre-existing export dump files.

SAMPLE  [[SCHEMA.]TABLE_NAME:]N, where 0.000001 <= N < 100
   Percentage of data blocks to sample and export.

SCHEMAS  SCHEMA [, ...]
   The schema or schemas to export. Requires the EXP_FULL_DATABASE privilege for other schemas.

STATUS  0 | INTEGER
   The frequency in seconds at which the job displays client feedback.

TABLES  [SCHEMA.]TABLE_NAME[:PARTITION_NAME] [, ...]
   List of tables for a table mode export. Restricted to a single schema.

TABLESPACES  TABLESPACE_NAME [, ...]
   List of tablespaces for a tablespace mode export.

TRANSPORT_FULL_CHECK  N | Y
   Whether or not dependencies are verified between objects inside the transportable tablespace set and those outside the set.

TRANSPORT_TABLESPACES  TABLESPACE_NAME [, ...]
   List of tablespaces for a transportable tablespace mode export (target database version must be >= source version).

Table 6.1: Frequent EXPDP Command Line Parameters

Moreover, to use the Data Pump stop and restart capabilities, run Data Pump in interactive mode so as to get the Export> prompt. The following commands are then quite frequently useful:

CONTINUE_CLIENT
   Connect the client to the currently executing job, restarting the job if it is idle, and resume logging mode (i.e. status output).

EXIT_CLIENT
   Disconnect the client from the currently executing job and terminate the client process, but leave the server job running.

KILL_JOB
   Detach all client processes connected to this Data Pump job and then terminate (i.e. kill) the currently running job.

START_JOB
   Start or resume the current Data Pump job.

STOP_JOB  [=IMMEDIATE]
   Detach all client processes connected to this Data Pump job and then shut down the currently running job in an orderly fashion.

Table 6.2: Interactive EXPDP Commands
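
For example, here is a sketch of stopping a running job and resuming it later. The job name SYS_EXPORT_SCHEMA_01 is the typical system-generated default, so check dba_datapump_jobs for the actual name:

C:\> expdp bert/bert attach=SYS_EXPORT_SCHEMA_01
Export> stop_job=immediate

C:\> expdp bert/bert attach=SYS_EXPORT_SCHEMA_01
Export> start_job
Export> continue_client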

Now let's examine some very common use cases and how Data Pump Export would be used to extract the data.

Export Database Level

This is often referred to as a logical backup, as opposed to a physical backup performed via RMAN, operating system commands, or third-party backup and recovery tools. Historically, one reason people used logical backups was ease of recovery when an object was accidentally dropped. However, with Oracle flashback technology, logical backups have become far less compelling for that purpose.

 

Another reason people performed logical backups was to avoid the perils of the Oracle database migration process when a major new version was released. DBAs would simply export the entire database, perform the upgrade, and then import the database.

 

But over the years the database migration process has greatly improved, so this usage has also declined in importance. Probably the most prevalent reason for doing logical backups today is to provide a simple backup and recovery mechanism, with no additional software costs, for development and test databases.

 

C:\> expdp bert/bert directory=data_pump_dir dumpfile=logical_backup.dmp full=y

 

Remember, the user running the Data Pump export at the database level must have the EXP_FULL_DATABASE privilege for this option to work. Otherwise, the following Oracle errors will be returned:

 

Export: Release 11.1.0.6.0 - Production on Friday, 27 June, 2015 11:20:44

Copyright (c) 2003, 2015, Oracle.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production

With the Partitioning, OLAP, Data Mining and Real Application Testing options

ORA-31631: privileges are required

ORA-39161: Full database jobs require privileges
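
One remedy, assuming a suitably privileged account is available to issue the grant, is to grant the role and retry the export:

SQL> grant exp_full_database to bert;

C:\> expdp bert/bert directory=data_pump_dir dumpfile=logical_backup.dmp full=y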

Export Tablespace Level

While this may seem like a very useful, and therefore common, use case, the reality is that DBAs often find that tablespaces are simply containers with space for object allocation. So it is not uncommon, over time, to find that tablespaces hold an eclectic collection of objects. But for those lucky enough, and smart enough, to have kept some logical rationale to tablespace object placement, here is an example of Data Pump exporting two specific tablespaces in their entirety:

 

C:\> expdp bert/bert directory=data_pump_dir dumpfile=multi_tablespace.dmp tablespaces=users,sysaux

 

One other use for the tablespace-level export would be a tablespace reorganization in which the tablespace is recreated with a different block size: export the tablespace, drop it, recreate it with the new block size, and then import the data, as sketched below. Finally, the tablespace-level export can be used as part of the process to merge tablespaces, but that is pretty rare.
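
Here is a sketch of that reorganization sequence. The APPDATA tablespace and datafile name are hypothetical, and a nonstandard block size requires the matching db_nk_cache_size initialization parameter (here db_16k_cache_size) to be set first:

C:\> expdp bert/bert directory=data_pump_dir dumpfile=appdata_reorg.dmp tablespaces=appdata

SQL> drop tablespace appdata including contents and datafiles;
SQL> create tablespace appdata datafile '/u01/oradata/appdata01.dbf' size 500m blocksize 16k;

C:\> impdp bert/bert directory=data_pump_dir dumpfile=appdata_reorg.dmp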

 

As at the database level, the user running the Data Pump export at the tablespace level must have the EXP_FULL_DATABASE privilege for this option to work if the tablespace contains objects from schemas other than the one running the export. Otherwise, the following Oracle errors will be returned:

 

Export: Release 11.1.0.6.0 - Production on Friday, 27 June, 2015 11:20:44

Copyright (c) 2003, 2015, Oracle.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production

With the Partitioning, OLAP, Data Mining and Real Application Testing options

ORA-31631: privileges are required

ORA-39161: Full database jobs require privileges

Export Schema Level

There are far too many good reasons to perform Data Pump exports at the schema level to explain or justify them all. Suffice it to say that exporting entire schemas is probably one of the most frequently utilized modes. For example, consider a development database where developers are writing code against a collection of related tables.

 

Each time a developer runs some code that has yet to pass unit testing, it is possible that the data's ending state may not be entirely consistent, i.e. a bug in the code may invalidate the data. So the developer needs a way to reset the data between runs. If the DBA makes a schema-level Data Pump export of the base data, then restoring the data is a simple procedure. Here is an example of exporting two specific schemas:

 

C:\> expdp bert/bert directory=data_pump_dir dumpfile=multi_schema.dmp schemas=bert,movies
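
To reset the data between runs, a sketch of the corresponding import follows; the table_exists_action=replace option drops and recreates each table from the dump file, so use it only on expendable test data:

C:\> impdp bert/bert directory=data_pump_dir dumpfile=multi_schema.dmp schemas=bert,movies table_exists_action=replace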

 

Note that when exporting at this level of granularity, the EXCLUDE and INCLUDE options become quite useful. For example, if one wanted to perform that exact same export without corresponding grants, indexes and statistics, here is the additional data pump export syntax required:

 

C:\> expdp bert/bert directory=data_pump_dir dumpfile=schema_exclude.dmp schemas=bert,movies exclude=grant,index,statistics

 

And if the DBA had instead preferred to unload only those tables whose names start with the string 'MOVIE', here is the Data Pump export command:

 

C:\> expdp bert/bert directory=data_pump_dir dumpfile=schema_include.dmp schemas=bert,movies include=table:\"like 'MOVIE%'\"

 

Like the database and tablespace levels, the user running the Data Pump export at the schema level must have the EXP_FULL_DATABASE privilege for this option to work when requesting schemas other than the one running the export.

Export Table Level

Table-level Data Pump export jobs are probably the second most often utilized mode. It is very easy to think in terms of tables when working with data, so table-level mode just seems to be the natural granularity of choice. Consider again the prior example of the developer who needs the ability to refresh tables between code runs.

 

The DBA could either export just the tables that developer needs for that programming task, or better yet, the DBA could permit and instruct the developer to export the tables being worked upon. Either way, the data pump export job would work in table mode and for the tables requested, as shown here.

 

C:\> expdp bert/bert directory=data_pump_dir dumpfile=multi_table.dmp tables=movies.customer,movies.employee

 

Note that a table-level Data Pump export must be sourced from a single schema, or the following error will occur:

 

C:\> expdp bert/bert directory=data_pump_dir dumpfile=multi_table.dmp tables=movies.customer,movies.employee,bert.junk

 

Export: Release 11.1.0.6.0 - Production on Saturday, 28 June, 2015 6:40:21

Copyright (c) 2003, 2015, Oracle.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production

With the Partitioning, OLAP, Data Mining and Real Application Testing options

UDE-00012: table mode exports only allow objects from one schema

Export Data Subsets

This is probably the most powerful and useful aspect of the data export process, and yet it remains highly underutilized. For instance, suppose one wants to extract data from a table while filtering the rows being returned. That is easily accomplished by placing a normal SELECT command's WHERE clause in the query parameter passed to the export process. One could then easily export only those customers who live in Texas, as follows:

 

C:\> expdp bert/bert directory=data_pump_dir dumpfile=just_texas.dmp schemas=movies query=movies.customer:\"where state='TX'\"

 

That seems easy enough, but there is a small catch. The QUERY clause is applied to all the tables in the export set, so every table in the set had better have the columns referenced by that WHERE clause. A common example would be a schema design where each table contains a last modified date column. If the DBA wanted to unload just the records in that schema which had been modified within the past three months, here is the Data Pump export command:

 

C:\> expdp bert/bert directory=data_pump_dir dumpfile=last_mod_date.dmp schemas=movies query=\"where last_mod_date is not null and last_mod_date > SYSDATE-90\"

 

Yet as easy and powerful as this method is, there is another method that can sometimes be exactly what one is looking for: the subset by random sample method. If one wants to export roughly 10% of production data for use in development or test environments, the SAMPLE parameter applies the sampling percentage to each object exported:

 

C:\> expdp bert/bert directory=data_pump_dir dumpfile=sample.dmp schemas=movies sample=10

 

However, there is one major drawback to the sample method: it does not export referentially correct subsets of data. That is because it merely applies a simple algorithm, namely that the percentage represents the probability that a block of rows will be included in the export's sampling of the data. This is applied at the table level across all of its data blocks; the sample method does not honor any referential integrity constraints or foreign keys defined in the data dictionary. So an attempt to export a 10% sample of an entire schema will generally end up with messages like those shown below.

 

Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT

ORA-39083: Object type REF_CONSTRAINT failed to create with error:

ORA-02298: cannot validate (BERT.MOVIETITLE_FK) - parent keys not found

Failing sql is:

ALTER TABLE "BERT"."MOVIETITLE" ADD CONSTRAINT "MOVIETITLE_FK" FOREIGN KEY ("CATEGORYID") REFERENCES "BERT"."MOVIECATEGORY" ("CATE

GORYID") ENABLE

 

However, there are ways to get around this. One way would be to create a SQL*Plus script that generates a parameter file containing a series of query parameters to sample the data while retaining the foreign key relationships, as sketched below. But that is a two-step process: run the script to create the parameter file, then run the Data Pump export, with no easy way to examine the intermediate results in the database and verify their accuracy before attempting the actual data load.
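
As an illustration of that two-step idea, here is a simplified SQL*Plus sketch that writes one QUERY filter per table. It assumes every table in the schema has a last_mod_date column and filters by date rather than sampling by key, so it is only a rough stand-in for the real thing:

-- sketch: generate a Data Pump parameter file with one QUERY filter per table
set pagesize 0 linesize 200 feedback off trimspool on
spool subset.par
select 'schemas=movies' from dual;
select 'directory=data_pump_dir' from dual;
select 'dumpfile=subset.dmp' from dual;
select 'query=movies.' || table_name || ':"where last_mod_date > sysdate-90"'
from dba_tables
where owner = 'MOVIES';
spool off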

 

So instead, the preference is to execute the extract_data_subset.sql SQL*Plus script, shown below, to create the subset of the data in a second schema. Then the data can be readily examined for accuracy, and that schema can finally be exported once it is known to be correct.

 

The extract_data_subset.sql script:

 

set linesize 200
set serveroutput on size 100000

create or replace package subsetdata
as
  procedure xgo (xsource varchar2, xtarget varchar2, xpercent integer);
end;
/
show error

create or replace package body subsetdata
as
  type tnames is table of varchar2(32);
  done_arr tnames := tnames();
  done_cnt integer := 0;

  -- Generate the CREATE TABLE ... AS SELECT for one table. Tables with no
  -- foreign keys get a simple ROWNUM-based sample; tables with foreign keys
  -- are joined to the already-copied parent tables so that the subset stays
  -- referentially intact.
  procedure xsample (xsource varchar2, xtarget varchar2, current_table varchar2, xpercent integer)
  is
    s1   varchar2(256) := 'create table ' || xtarget || '.' || current_table || '
as
select * from ' || xsource || '.' || current_table;
    s2   varchar2(256) := 'where rownum <= (select ceil(' || to_char(xpercent/100) || ' * count(*))
from ' || xsource || '.' || current_table || ');';
    s3   varchar2(256) := 'create table ' || xtarget || '.' || current_table || ' as
select T0.* from   ' || xsource || '.' || current_table || '  T0,';
    s4   varchar2(256) := 'where';
    cnt1 integer       := 0;
    cnt2 integer       := 0;
    i    integer       := 0;
    j    integer       := 0;
    es3  varchar2(1)   := '';
    es4  varchar2(4)   := '';
  begin
    i := 0;
    es3 := ',';
    -- count the foreign keys from this table to other tables in the schema
    select count(*)
    into cnt1
    from dba_constraints fk,
         dba_constraints pk
    where fk.constraint_type = 'R'
      and fk.owner = xsource
      and fk.r_owner = xsource
      and fk.table_name = current_table
      and pk.constraint_type in ('P','U')
      and pk.owner = xsource
      and pk.table_name != current_table
      and fk.r_constraint_name = pk.constraint_name;
    if (cnt1 = 0) then
      -- no parents: emit the simple ROWNUM sample
      dbms_output.put_line(s1);
      dbms_output.put_line(s2);
    else
      -- parents exist: build a join to each already-sampled parent table
      for c1 in (select pk.table_name,
                        fk.constraint_name fk_name,
                        pk.constraint_name pk_name
                 from dba_constraints fk,
                      dba_constraints pk
                 where fk.constraint_type = 'R'
                   and fk.owner = xsource
                   and fk.r_owner = xsource
                   and fk.table_name = current_table
                   and pk.constraint_type in ('P','U')
                   and pk.owner = xsource
                   and pk.table_name != current_table
                   and fk.r_constraint_name = pk.constraint_name
                ) loop
        i := i + 1;
        if (i = cnt1) then
          es3 := '';
        end if;
        s3 := s3 || '
 ' || xtarget || '.' || c1.table_name || '  T' || to_char(i) || es3;
        j := 0;
        es4 := ' and';
        select count(*)
        into cnt2
        from dba_cons_columns fk,
             dba_cons_columns pk
        where fk.constraint_name = c1.fk_name
          and fk.owner = xsource
          and fk.table_name = current_table
          and pk.constraint_name = c1.pk_name
          and pk.owner = xsource
          and pk.table_name != current_table
          and fk.position = pk.position;
        -- add one join predicate per foreign key column
        for c2 in (select fk.column_name fk_col,
                          pk.column_name pk_col
                   from dba_cons_columns fk,
                        dba_cons_columns pk
                   where fk.constraint_name = c1.fk_name
                     and fk.owner = xsource
                     and fk.table_name = current_table
                     and pk.constraint_name = c1.pk_name
                     and pk.owner = xsource
                     and pk.table_name != current_table
                     and fk.position = pk.position
                   order by fk.position
                  ) loop
          j := j + 1;
          if (i = cnt1) and (j = cnt2) then
            es4 := ';';
          end if;
          s4 := s4 || '
 ' || 'T0.' || c2.fk_col || ' = T' || to_char(i) || '.' || c2.pk_col || es4;
        end loop;
      end loop;
      dbms_output.put_line(s3);
      dbms_output.put_line(s4);
    end if;
    -- remember that this table has been processed
    done_arr.extend(1);
    done_cnt := done_cnt + 1;
    done_arr(done_cnt) := current_table;
  end;


  -- Sample the current table, then recurse into each child table once all
  -- of that child's other parent tables have been processed.
  procedure xprocess (xsource varchar2, xtarget varchar2, current_table varchar2, xpercent integer)
  is
    i    integer := 1;
    flg  integer := 1;
    cnt1 integer := 0;
    cnt2 integer := 0;
  begin
    xsample (xsource, xtarget, current_table, xpercent);
    for c1 in (select fk.table_name
               from dba_constraints fk,
                    dba_constraints pk
               where fk.constraint_type = 'R'
                 and fk.owner = xsource
                 and fk.r_owner = xsource
                 and fk.table_name != current_table
                 and pk.constraint_type in ('P','U')
                 and pk.owner = xsource
                 and pk.table_name = current_table
                 and fk.r_constraint_name = pk.constraint_name
              ) loop
      -- count the child's other parents (besides the current table)
      select count(*)
      into cnt1
      from dba_constraints fk,
           dba_constraints pk
      where fk.constraint_type = 'R'
        and fk.owner = xsource
        and fk.r_owner = xsource
        and fk.table_name = c1.table_name
        and pk.constraint_type in ('P','U')
        and pk.owner = xsource
        and pk.table_name != current_table
        and fk.r_constraint_name = pk.constraint_name;
      if (cnt1 > 0) then
        cnt2 := 0;
        flg  := 0;
        -- verify that every other parent has already been processed
        for c2 in (select pk.table_name
                   from dba_constraints fk,
                        dba_constraints pk
                   where fk.constraint_type = 'R'
                     and fk.owner = xsource
                     and fk.r_owner = xsource
                     and fk.table_name = c1.table_name
                     and pk.constraint_type in ('P','U')
                     and pk.owner = xsource
                     and pk.table_name != current_table
                     and fk.r_constraint_name = pk.constraint_name
                  ) loop
          i := 1;
          while (i <= done_cnt) loop
            if (c2.table_name = done_arr(i)) then
              cnt2 := cnt2 + 1;
            end if;
            i := i + 1;
          end loop;
        end loop;
        if (cnt1 = cnt2) then
          flg := 1;
        end if;
      end if;
      if (flg = 1) then
        xprocess (xsource, xtarget, c1.table_name, xpercent);
      end if;
    end loop;
  end;

  -- Entry point: start the walk from the root tables, i.e. those that have
  -- no foreign keys of their own.
  procedure xgo (xsource varchar2, xtarget varchar2, xpercent integer)
  is
  begin
    for c1 in (select table_name
               from dba_tables tab
               where tab.owner = xsource
                 and not exists (select 1
                                 from dba_constraints fk
                                 where fk.constraint_type = 'R'
                                   and fk.owner = xsource
                                   and fk.r_owner = xsource
                                   and fk.table_name = tab.table_name
                                )
              ) loop
      xprocess (xsource, xtarget, c1.table_name, xpercent);
    end loop;
  end;

end;
/
show error

prompt ###
prompt ###  subsetdata.xgo(SOURCE_SCHEMA,TARGET_SCHEMA,PERCENTAGE)
prompt ###
exec subsetdata.xgo('MOVIES','BERT',10)

 

As the last statement in the extract_data_subset.sql script shows, simply call the subsetdata.xgo procedure, specifying the source and target schemas plus the sampling percentage.

 

The extract_data_subset.sql script walks the database's referential integrity dependency tree and maintains it for the data sample being generated. The output shown next is an example of the generated script for copying 10% of the simple MOVIES demo schema to the BERT intermediate schema.

 

The extract_data_subset.sql output:

 

create table BERT.CUSTOMER
as
select * from MOVIES.CUSTOMER
where rownum <= (select ceil(.1 * count(*))
from MOVIES.CUSTOMER);

create table BERT.EMPLOYEE
as
select * from MOVIES.EMPLOYEE
where rownum <= (select ceil(.1 * count(*))
from MOVIES.EMPLOYEE);

create table BERT.MOVIERENTAL as
select T0.* from   MOVIES.MOVIERENTAL  T0,
 BERT.CUSTOMER  T1,
 BERT.EMPLOYEE  T2
where
 T0.CUSTOMERID = T1.CUSTOMERID and
 T0.EMPLOYEEID = T2.EMPLOYEEID;

create table BERT.MOVIECATEGORY
as
select * from MOVIES.MOVIECATEGORY
where rownum <= (select ceil(.1 * count(*))
from MOVIES.MOVIECATEGORY);

create table BERT.MOVIETITLE as
select T0.* from   MOVIES.MOVIETITLE  T0,
 BERT.MOVIECATEGORY  T1
where
 T0.CATEGORYID = T1.CATEGORYID;

create table BERT.MOVIECOPY as
select T0.* from   MOVIES.MOVIECOPY  T0,
 BERT.MOVIETITLE  T1
where
 T0.MOVIEID = T1.MOVIEID;

create table BERT.RENTALITEM as
select T0.* from   MOVIES.RENTALITEM  T0,
 BERT.MOVIERENTAL  T1,
 BERT.MOVIECOPY  T2
where
 T0.RENTALID = T1.RENTALID and
 T0.MOVIECOPYID = T2.MOVIECOPYID;

 

Now the data can be verified as correct using standard SQL SELECT commands, and the subset can then be exported with a schema-level Data Pump export. Although this method requires a little additional database disk space to build the subset, disk space is cheap, and the approach provides a simple way to examine intermediate results. Since it maintains referential integrity, it is clearly superior to the Data Pump Export SAMPLE method.
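
For example, a quick verification and the final unload might look like this; the row-count comparison and dump file name are illustrative only:

SQL> select count(*) from BERT.CUSTOMER;
SQL> select count(*) from MOVIES.CUSTOMER;

C:\> expdp bert/bert directory=data_pump_dir dumpfile=subset.dmp schemas=bert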

 

For more on expdp, see these notes:

 

 

expdp dmp file location

expdp parameters for procedures, functions, and packages

Data Pump Import/export parallel tips

Network_Link Tips

expdp flashback_time tips

expdp include tips

expdp exclude tips

 

 
 
 