You can initiate an export job from your server and have Data Pump
export data from a remote database to dump files located on the
instance from which you initiate the Data Pump export job.
expdp hr/hr DIRECTORY=dpump_dir1
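As a rough sketch (the database link name, dump file, and log file names here are illustrative only), a network-mode export that pulls data from a remote source into dump files on the local instance might be invoked like this, with the NETWORK_LINK parameter naming the link to the remote database:

expdp hr/hr DIRECTORY=dpump_dir1 NETWORK_LINK=source_db_link DUMPFILE=network_export.dmp LOGFILE=network_export.log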
Export and Import Over the Network (by Kim Floss)
Typically, Oracle Data Pump Export results in
a binary format dumpfile that is then loaded (using Oracle Data Pump
Import) into another database. However, in addition to this
file-based Oracle Data Pump Export and Import, Oracle Data Pump
provides a network-based approach for direct loading to or unloading
from one database to another.
Most DBAs are probably familiar with using
the legacy Oracle Export and Import utilities over a network, using
named pipes as the mechanism for passing the data. One problem with that
approach is that named pipes aren't supported on every platform.
Instead of using named pipes, Oracle Data
Pump uses database links for exporting and importing data between
source and target over the network. And, as with the named pipe
mechanism, the network_link feature entirely bypasses dump file
creation, transmitting data from one database to another across the network.
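The link itself is an ordinary database link, created in the database where the job is initiated. A minimal sketch, assuming illustrative credentials and a TNS alias for the source, might be:

CREATE DATABASE LINK source_db_link
  CONNECT TO hr IDENTIFIED BY hr
  USING 'source_tns_alias';

The link name is then what you supply to the NETWORK_LINK command-line parameter or to the remote_link argument of DBMS_DATAPUMP.OPEN.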
This feature should be of keen interest to
any DBAs spending lots of time doing production-to-test refreshes
for development or extracting data from a read-only data warehouse.
(For a read-only database, you must use Oracle Data Pump Export's
network_link mechanism, initiating the process on the target
database, because Oracle Data Pump won't be able to create the
necessary master table on the read-only database.)
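As a sketch of the read-only case (again with illustrative names), the job is started on a writable database, with the link pointing back at the read-only source, so the master table is created locally rather than on the source:

impdp hr/hr DIRECTORY=dpump_dir1 NETWORK_LINK=readonly_src_link TABLES=employees LOGFILE=net_import.log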
Because network bandwidth is the limiting
factor, the network_link capability is best used in conjunction with
the filtering capability, as a means of siphoning off a subset of
data (rather than an entire multigigabyte or -terabyte database)
from one database to another.
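In API terms, that siphoning is done by attaching filters to the job before starting it. The following is a hedged sketch only, assuming a job handle opened as in Listing 2 and purely illustrative table names and predicate:

-- Restrict the table-mode job to two tables...
DBMS_DATAPUMP.METADATA_FILTER(jobhandle, 'NAME_EXPR', 'IN (''ORDERS'', ''ORDER_ITEMS'')');
-- ...and pull only the last 30 days of rows from one of them.
DBMS_DATAPUMP.DATA_FILTER(jobhandle, 'SUBQUERY', 'WHERE order_date > SYSDATE - 30', 'ORDERS');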
- Set up a remote
link from a satellite database (database 2) to the master
database (database 1).
- Create the import
as a SQL script to run weekly from the satellite database to
import selected tables from the source system, filtering and
transforming as needed (see Listing 2).
Code Listing 2: Sample script using the DBMS_DATAPUMP API for import
DECLARE
  ind NUMBER;               -- Loop index number
  jobhandle NUMBER;         -- Data Pump job handle
  percent_done NUMBER;      -- Percentage of job complete
  job_state VARCHAR2(30);   -- Keeps track of job state
  le ku$_LogEntry;          -- Work-in-progress and error messages
  js ku$_JobStatus;         -- Job status from get_status
  jd ku$_JobDesc;           -- Job description from get_status
  sts ku$_Status;           -- Status object returned by get_status
BEGIN
  -- Create a (user-named) Data Pump job to do a "table-level" import
  -- using a network link.
  jobhandle := DBMS_DATAPUMP.OPEN('IMPORT', 'TABLE', 'RM_TECHLAB', 'DEV_REFRESH');

  -- Set parallelism to 1; network import/export always uses 1 anyway, so it
  -- may not be necessary to set this explicitly.
  DBMS_DATAPUMP.SET_PARALLEL(jobhandle, 1);

  -- Remap schema objects from source schema BATCH_JOBS to target schema KFLOSS.
  DBMS_DATAPUMP.METADATA_REMAP(jobhandle, 'REMAP_SCHEMA', 'BATCH_JOBS', 'KFLOSS');

  -- If a table already exists in the destination schema, don't override it
  -- (this is the default behavior).
  DBMS_DATAPUMP.SET_PARAMETER(jobhandle, 'TABLE_EXISTS_ACTION', 'SKIP');

  -- Use statistics (rather than blocks) to estimate time.
  DBMS_DATAPUMP.SET_PARAMETER(jobhandle, 'ESTIMATE', 'STATISTICS');

  -- Start the job. An exception is returned if something is not set up properly.
  DBMS_DATAPUMP.START_JOB(jobhandle);

  -- The import job should now be running. This loop monitors the job until
  -- it completes, displaying progress information if there are problems.
  percent_done := 0;
  job_state := 'UNDEFINED';
  while (job_state != 'COMPLETED') and (job_state != 'STOPPED') loop
    DBMS_DATAPUMP.GET_STATUS(jobhandle,
      DBMS_DATAPUMP.ku$_status_job_error + DBMS_DATAPUMP.ku$_status_job_status +
      DBMS_DATAPUMP.ku$_status_wip, -1, job_state, sts);
    js := sts.job_status;

    -- As the percentage-complete value changes in this loop, the new value displays.
    if js.percent_done != percent_done then
      DBMS_OUTPUT.PUT_LINE('*** Job percent done = ' || to_char(js.percent_done));
      percent_done := js.percent_done;
    end if;

    -- Display any work-in-progress or error messages received for the job.
    if (bitand(sts.mask, dbms_datapump.ku$_status_wip) != 0) then
      le := sts.wip;
    elsif (bitand(sts.mask, dbms_datapump.ku$_status_job_error) != 0) then
      le := sts.error;
    else
      le := null;
    end if;
    if le is not null then
      ind := le.FIRST;
      while ind is not null loop
        DBMS_OUTPUT.PUT_LINE(le(ind).LogText);
        ind := le.NEXT(ind);
      end loop;
    end if;
  end loop;

  -- When the job finishes, display the status before detaching from the job.
  DBMS_OUTPUT.PUT_LINE('Job has completed');
  DBMS_OUTPUT.PUT_LINE('Final job state = ' || job_state);
  DBMS_DATAPUMP.DETACH(jobhandle);
END;
/
When you run the script on the satellite
system, it connects to the source system and brings the data over
directly to the target. The master table is created in the schema of
the user on the initiating system and not in the source database.
This approach works well for read-only databases and also as a means
of refreshing development databases with extracts from a production system.