Question: I need a
v$ view or other method for displaying the history of export
and import jobs on my database. Are there any data
dictionary views that keep a historical record of
export/import activity? Can I use Oracle auditing to
get historical values for Data Pump jobs?
Answer: The data dictionary does not, by default, provide a way to track Data Pump
export/import jobs. However, if you add the undocumented keep_master=y and
metrics=y parameters to your Data Pump expdp or impdp command, the master table
in which Data Pump keeps its metadata is retained in the data dictionary, in a
table such as myowner.sys_export_schema_01.
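Note that while it is not a historical log, the dba_datapump_jobs view lists every job whose master table is still present in the database, including stopped jobs and jobs retained with keep_master=y. A minimal sketch:

select
   owner_name,
   job_name,
   operation,
   job_mode,
   state
from
   dba_datapump_jobs
order by
   owner_name,
   job_name;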
You can also attach to a running job and stop it with DBMS_DATAPUMP; a job
stopped this way keeps its master table:

SET serveroutput on
SET lines 100

DECLARE
   job1 NUMBER;
BEGIN
   job1 := DBMS_DATAPUMP.ATTACH('SYS_EXPORT_SCHEMA_01','MYOWNER');
   DBMS_DATAPUMP.STOP_JOB (job1);
END;
/
Here is an example invocation of a Data Pump import with keep_master=y:

c:\> impdp directory=DATA_PUMP_DIR job_name=sys_export_schema_01 full=y metrics=y keep_master=y
Using metrics=y includes additional information in the Data Pump log file
(named by the logfile parameter) about the number of objects processed and the
time it took to process them:

c:\> impdp directory=DATA_PUMP_DIR logfile=sys_export_schema_01.log full=y metrics=y
Here is a sample query that can be run if you use keep_master=y:

select
   tab_owner,
   tab_name,
   completed_rows,
   tab_size,
   elapsed_time
from
   (select
       base_object_schema,
       object_name,
       (lag(object_name, 1)        over (order by process_order)) as tab_name,
       (lag(base_object_schema, 1) over (order by process_order)) as tab_owner,
       completed_rows,
       (dump_length / (1024 * 1024 * 1024)) || 'GB'                as tab_size,
       elapsed_time
    from
       myowner.sys_export_schema_01
    where
       object_type_path = 'SCHEMA_EXPORT/TABLE/TABLE_DATA'
    and
       process_order >= 1)
where
   object_name is null;
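If you only need job-level totals rather than the per-table detail, the same retained master table can be rolled up. A minimal sketch, using the owner and table name assumed above:

select
   sum(completed_rows)                                 as total_rows,
   round(sum(dump_length) / (1024 * 1024 * 1024), 2)   as total_gb,
   sum(elapsed_time)                                   as total_elapsed_time
from
   myowner.sys_export_schema_01
where
   object_type_path = 'SCHEMA_EXPORT/TABLE/TABLE_DATA';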
At the heart of the Data Pump operation is the Master Table. This table is
created at the beginning of a Data Pump operation and is dropped when the
operation completes successfully. The Master Table is also dropped if the job
is killed using the kill_job interactive command. If a job is stopped using the
stop_job interactive command, or if the job terminates unexpectedly, the Master
Table is retained.
The keep_master parameter can be set to y to retain the Master Table at the end
of a successful job for debugging purposes. The Master Table has the same name
as the Data Pump job, and it contains the following columns:
SQL> desc SYS_ESTIMATE_SCHEMA_01
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------
 PROCESS_ORDER                                      NUMBER
 DUPLICATE                                          NUMBER
 DUMP_FILEID                                        NUMBER
 DUMP_POSITION                                      NUMBER
 DUMP_LENGTH                                        NUMBER
 DUMP_ALLOCATION                                    NUMBER
 COMPLETED_ROWS                                     NUMBER
 ERROR_COUNT                                        NUMBER
 ELAPSED_TIME                                       NUMBER
 OBJECT_TYPE_PATH                                   VARCHAR2(200)
 OBJECT_PATH_SEQNO                                  NUMBER
 OBJECT_TYPE                                        VARCHAR2(30)
 IN_PROGRESS                                        CHAR(1)
 OBJECT_NAME                                        VARCHAR2(500)
 OBJECT_SCHEMA                                      VARCHAR2(30)
 PARTITION_NAME                                     VARCHAR2(30)
 FLAGS                                              NUMBER
 COMPLETION_TIME                                    DATE
 OBJECT_TABLESPACE                                  VARCHAR2(30)
 SIZE_ESTIMATE                                      NUMBER
 OBJECT_ROW                                         NUMBER
 PROCESSING_STATE                                   CHAR(1)
 PROCESSING_STATUS                                  CHAR(1)
 BASE_OBJECT_TYPE                                   VARCHAR2(30)
 BASE_OBJECT_NAME                                   VARCHAR2(30)
 BASE_OBJECT_SCHEMA                                 VARCHAR2(30)
 PARALLELIZATION                                    NUMBER
 UNLOAD_METHOD                                      NUMBER
 GRANULES                                           NUMBER
 SCN                                                NUMBER
 DOMAIN_INDEX                                       VARCHAR2(30)
 DOMAIN_INDEX_SCHEMA                                VARCHAR2(30)
 GRANTOR                                            VARCHAR2(30)
 NAME                                               VARCHAR2(30)
 VALUE_T                                            VARCHAR2(4000)
 VALUE_N                                            NUMBER
 IS_DEFAULT                                         NUMBER
 FILE_TYPE                                          NUMBER
 USER_DIRECTORY                                     VARCHAR2(4000)
 USER_FILE_NAME                                     VARCHAR2(4000)
 FILE_NAME                                          VARCHAR2(4000)
 EXTEND_SIZE                                        NUMBER
 FILE_MAX_SIZE                                      NUMBER
 EXTEND_ACTIVE                                      NUMBER
 OVERFLOW_TO                                        NUMBER
 PROCESS_NAME                                       VARCHAR2(30)
 LAST_UPDATE                                        DATE
 WORK_ITEM                                          VARCHAR2(30)
 NON_TRANSACTIONAL                                  CHAR(1)
 OBJECT_NUMBER                                      NUMBER
 COMPLETED_BYTES                                    NUMBER
 TOTAL_BYTES                                        NUMBER
 METADATA_IO                                        NUMBER
 DATA_IO                                            NUMBER
 CUMULATIVE_TIME                                    NUMBER
 OLD_VALUE                                          VARCHAR2(4000)
 SEED                                               NUMBER
 LAST_FILE                                          NUMBER
 USER_NAME                                          VARCHAR2(30)
 OPERATION                                          VARCHAR2(30)
 JOB_MODE                                           VARCHAR2(30)
 VERSION                                            NUMBER
 DB_VERSION                                         VARCHAR2(30)
 STATE                                              VARCHAR2(30)
 PHASE                                              NUMBER
 GUID                                               RAW(16)
 START_TIME                                         DATE
 BLOCK_SIZE                                         NUMBER
 METADATA_BUFFER_SIZE                               NUMBER
 DATA_BUFFER_SIZE                                   NUMBER
 DEGREE                                             NUMBER
 LANGUAGE                                           VARCHAR2(30)
 PLATFORM                                           VARCHAR2(100)
 ABORT_STEP                                         NUMBER
 INSTANCE                                           VARCHAR2(16)
The Master Table is used to track the detailed progress of a Data Pump job:
* The current set of dump files.
* The current state of every object exported or imported and its location in
the dump file set.
* The job's user-supplied parameters.
* The status of every worker process (see the query sketch after this list).
* The current job status and restart information.
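While a job is running, the control and worker sessions that maintain the Master Table can be listed by joining dba_datapump_sessions to v$session. A minimal monitoring sketch:

select
   s.sid,
   s.serial#,
   d.owner_name,
   d.job_name,
   s.status
from
   dba_datapump_sessions d,
   v$session             s
where
   s.saddr = d.saddr;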
The Master Table is created in the schema of the user running the Data Pump
export or import, and it tracks a large amount of detailed information. As a
result, this table can take up a significant amount of storage space.
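To see how much space a retained master table is consuming, you can check dba_segments. A minimal sketch, using the owner and job name assumed in the earlier examples:

select
   round(bytes/1024/1024) as size_mb
from
   dba_segments
where
   owner        = 'MYOWNER'
and segment_name = 'SYS_EXPORT_SCHEMA_01';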
Tip: If an export or import job is run against a database with a large number
of objects, make sure the schema user has sufficient tablespace quota.
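One way to check the quota before starting a large job is to query dba_ts_quotas. A minimal sketch, where MYOWNER is the assumed schema name:

select
   tablespace_name,
   round(bytes/1024/1024) as used_mb,
   decode(max_bytes, -1, 'UNLIMITED',
          to_char(round(max_bytes/1024/1024))) as quota_mb
from
   dba_ts_quotas
where
   username = 'MYOWNER';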
The Master Table is
the key to Data Pump's restart capability in the event of a
planned or unplanned job stoppage.
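For example, a stopped job can later be restarted by re-attaching to its master table from the command line. A sketch, where the schema, password, and job name are placeholders:

c:\> expdp myowner/password attach=SYS_EXPORT_SCHEMA_01

Export> start_job
Export> continue_client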
===================================================
Another approach is to create a logfile for the Data Pump
job and map the OS export log file as an external table.
These log files can be kept as historical records of
export/import activity and read using SQL:
set oracle_sid=youroraclesid
set uname=yourschema
set upass=yourpassword

rem Derive a date stamp from %date% (format depends on the Windows locale)
set dt=%date:/=%
set exp_logfile=dailyexport_%dt:~3%.log

rem Run the export, writing the log file to the TEST_DIR directory object
expdp %uname%/%upass% schemas=%uname% directory=test_dir dumpfile=test.dmp logfile=%exp_logfile%

rem Generate a script that maps the log file as an external table
(
echo.drop table %uname%.temp_expdp_log;
echo.create table %uname%.temp_expdp_log ^(line varchar2^(400^)^)
echo.  organization external
echo.  ^(type ORACLE_LOADER
echo.   default directory test_dir
echo.   access parameters
echo.   ^(records delimited by newline^)
echo.   location ^('%exp_logfile%'^)
echo.  ^) reject limit unlimited;
echo.exit
) > create_extlog.sql

rem The trailing "exit" lets sqlplus return control to the batch file
sqlplus %uname%/%upass% @create_extlog.sql
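Once the external table is in place, the log can be queried like any other table. For example, to scan a run for errors or the completion message (a sketch using the schema and table names assumed in the script above):

select line
from   yourschema.temp_expdp_log
where  line like '%ORA-%'
   or  line like '%successfully completed%';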