Using Data Pump Import Utility (impdp)
As the name suggests, import (impdp) is the reverse of export:
it moves data from a dump file back into the Oracle database.
Import requires a dump file to be passed to it. All of the
options available with expdp are also available with import, and
it runs in the same modes as export. In addition, there are some
import-specific options, which are covered in this section. For
a complete listing of all the import options and their syntax,
please refer to the Utilities Guide in the Oracle documentation.
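For instance, a minimal schema-mode import might look like the
following (the directory object dp_dir, the dump file name, and the
credentials here are examples only, not taken from a real system):

impdp scott/tiger directory=dp_dir dumpfile=exp_schema.dmp logfile=imp_schema.log

The dumpfile parameter is the mandatory piece; directory tells Data
Pump where to find the file on the database server.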
This section mainly examines and demonstrates the remapping
capabilities of Data Pump import.
Remapping Through Data Pump
At times, you are required to assign objects to a different owner
or move them to different tablespaces. To do so, Data Pump offers
several facilities which help in completing these tasks very
easily. The following remapping options are available in Data Pump:
- remap_data
- remap_datafile
- remap_schema
- remap_table
- remap_tablespace
Using remap_data Parameter
There are many times when a DBA has to transform data before
sending it to someone. This is needed for many reasons; for
example, a transformation may be needed to obscure sensitive data
so that it remains hidden, or some kind of formatting may be
required before the data can be moved. These transformations are
possible using the remap_data switch of export Data Pump, which
was first introduced in Oracle 11g. This switch lets you transform
a table's data when you do an export and also allows you to
reverse the transformation when you do the import.
In the following scenario, a table containing subscribers' credit
card information is exported in a converted format using a special
function. Create two users with default privileges, then create a
table in the first schema and insert three rows:
SQL> create user usr1 identified by usr1;
User created.

SQL> grant connect, resource to usr1;
Grant succeeded.

SQL> create user usr2 identified by usr2;
User created.

SQL> grant connect, resource to usr2;
Grant succeeded.

SQL> conn usr1/usr1
Connected.

SQL> create table tbl_test (id number, credit_card number);
Table created.

SQL> insert into tbl_test values(1,3432654);
1 row created.

SQL> insert into tbl_test values(2,6345324);
1 row created.

SQL> insert into tbl_test values(3,7998347);
1 row created.

SQL> commit;
Commit complete.
Create a package and a function which will be used to change
the values of the table before export:
SQL> create or replace package pkg_test as
  2  function f_test (credit_card in number) return number;
  3  end;
  4  /
Package created.

SQL> create or replace package body pkg_test as
  2  function f_test (credit_card in number) return number
  3  as
  4  changed_value number;
  5  begin
  6  changed_value:=round(credit_card*2/3);
  7  return changed_value;
  8  end;
  9  end;
 10  /
Package body created.
Export the first table using the remap function as follows:
expdp usr1/usr1 directory=dp_dir dumpfile=test.dmp logfile=test.log tables=usr1.tbl_test remap_data=usr1.tbl_test.credit_card:usr1.pkg_test.f_test
When this command runs, the f_test function is called against each
row of the table tbl_test, and a dump file is created containing
the changed values.
Now import the dump file to the second schema and see the
difference:
impdp usr2/usr2 directory=dp_dir dumpfile=test.dmp logfile=imp.log remap_schema=usr1:usr2
Connect to SQL*Plus with SYS user and query both tables:
SQL> select * from usr1.tbl_test;

        ID CREDIT_CARD
---------- -----------
         1     3432654
         2     6345324
         3     7998347

SQL> select * from usr2.tbl_test;

        ID CREDIT_CARD
---------- -----------
         1     2288436
         2     4230216
         3     5332231
As you can see, the values are different. The first table shows
the original values, while the second table was loaded from the
dump file, in which the first table's data had already been
remapped by the special function.
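The reversal mentioned earlier works the same way: remap_data can
also be specified on the import side, so the function is applied as
the rows are loaded. The following is a sketch only; f_restore is a
hypothetical inverse function (for example, one returning
round(credit_card*3/2)) that would have to exist in the target
database before the import:

impdp usr2/usr2 directory=dp_dir dumpfile=test.dmp logfile=imp_rev.log remap_schema=usr1:usr2 remap_data=usr1.tbl_test.credit_card:usr2.pkg_test.f_restore

Because f_test rounds its result, such a reversal may not recover
the original value exactly for every row.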
Using remap_datafile Parameter
If you move data between two databases which have different
directory structures, Data Pump throws an error while creating a
tablespace whose datafile points to a directory that does not
exist on the target. For this, you can use the remap_datafile
parameter, which helps you change the datafile name during import.
Now see how it works. In the following example, the database is
moved from Linux to Windows. First of all, take an export dump of
the first database (Linux) using the following command:
expdp full=y dumpfile=dir_test:test.dmp logfile=test.log
Next, copy the dump file to the second machine (Windows), create a
parameter file, and extract all the SQL commands from the dump
file using the sqlfile parameter.
Parameter file:
dumpfile=dir_tmp:test.dmp
logfile=dir_tmp:test.log
sqlfile=dir_tmp:test.sql
full=y
Execute the following command:
C:\>impdp parfile=c:\tmp\test.par
For testing purposes, after about 10 seconds press Ctrl+C and use
the kill_job command to terminate the job. Check the SQL file.
Here is the relevant portion:
create tablespace "users" datafile
'/u01/oracle/product/10.2.0/db_1/oradata/test/users01.dbf' SIZE 5242880
Since you are importing the dump file to the Windows OS, you
need to change the datafile name. Use the remap_datafile parameter
and check the SQL file again.
Add the following line to the parameter file:
remap_datafile=\"/u01/oracle/product/10.2.0/db_1/oradata/test/users01.dbf\":\"c:\tmp\users01.dbf\"
Import the dump file:
C:\>impdp parfile=c:\tmp\test.par
Check the SQL file:
create tablespace "users" datafile
'c:\tmp\users01.dbf' SIZE 5242880
As you see, Data Pump automatically changed the datafile name
for the specified file.
Using remap_table to Rename Table Names
One of the Oracle 11g new features is the parameter remap_table,
which is used to rename a table during the import job. The syntax
of this parameter is as follows:
remap_table=schema.table_name:new_table
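For example, to load the tbl_test table from the earlier dump file
under a different name, an import might look like the following
(the directory and dump file names carry over from the remap_data
example above; the new table name is an example only):

impdp usr1/usr1 directory=dp_dir dumpfile=test.dmp logfile=imp_tbl.log remap_table=usr1.tbl_test:tbl_test_copy

The data is created in a table named tbl_test_copy in the target
schema instead of tbl_test.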
Using remap_schema Parameter to Import Objects to a Different
Schema
Many times you are required to move a schema from one database to
another. Most likely, this happens when you have a schema
developed in a test box, with all the required objects, that needs
to be imported into the production environment. Sometimes objects
also need to be moved from one schema to another within the same
database. At times like this, the remap_schema option comes in
really handy. The following example shows how to move a schema
from 10gR2 to 11gR2.
NOTE: Please note that whenever you are doing an export and
import between two different versions, the export should be
run with the binary of the lower version. Using the wrong
binary is a common mistake, and the export then does not
work.
The export of the schema is done first on the 10g database, using
10g's expdp executable:

expdp test/test directory=dp dumpfile=exp10.dmp
The next task is to import it into 11g using the remap_schema
option. Before doing so, create a user in the 11g release. Now see
everything in action:
SQL> create user test_expdp identified by test;
User created.

SQL> grant dba to test_expdp;
Grant succeeded.
impdp system/oracle directory=dp dumpfile=exp10.dmp remap_schema=test:test_expdp
<....output trimmed ....>
job "system"."sys_import_full_01" completed with 1 error(s) at 15:45:17
You can see that even though the command completed, one error was
still reported: the target schema already exists. In 11.2, when
the target schema named in remap_schema does not exist at the time
of the import, Oracle creates it automatically. If you do the same
in previous versions, this throws an error, and explicit creation
of the schema is required. So now, try the same with a completely
new user which does not yet exist:
impdp system/oracle directory=dp dumpfile=exp10.dmp remap_schema=test:test_expdp1
import: release 11.2.0.1.0 - production on fri may 28 15:47:34 2015
<....output trimmed ....>
job "system"."sys_import_full_01" successfully completed at 15:47:48
You can see that the job completed successfully without any error
this time.
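Of the options listed at the start of this section, only
remap_tablespace has not been shown; it works in the same spirit,
moving imported objects into a different tablespace. As a sketch
(the tablespace names here are examples only):

impdp system/oracle directory=dp dumpfile=exp10.dmp remap_schema=test:test_expdp remap_tablespace=users:test_data

Objects that were stored in the users tablespace in the source
database are created in the test_data tablespace in the target.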