Archiving Oracle Data in easily restorable non-Oracle structures

Oracle Tips by John Adolph Palinski



Increasingly, we are asked to store archived data for lengthy periods of time. The data may consist of customer billings, dam water-flow readings, or cost data. In some cases this data must be retained for regulatory and auditing purposes. The archived data is seldom, if ever, accessed, but it must still be available for years. In many organizations it resides on Oracle instances and servers that may never actually be accessed.

Does it make sense to continue storing this information in an Oracle instance, or should it be placed on other media such as DVD or CD? Given the low cost of DVDs and CDs, an organization can save considerable money by decommissioning the Oracle instance and freeing the server and disk space for other duties. This article describes a simple non-Oracle application for creating an external-media data repository that can be used to restore the original Oracle database or to load the data into another product such as SQL Server.

Oracle already provides facilities for moving schemas to external files: the Export and Import utilities and their 10g replacement, Data Pump. These facilities should certainly be used to create external files. However, they are dependent on Oracle; if the database is upgraded, the DBA may have difficulty reloading the schemas under a new Oracle version.
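
For comparison, a schema-level export with those tools might look like the following. This is only a sketch; the system/manager credentials, the SCOTT schema, and the DATA_PUMP_DIR directory object are placeholders that must exist on your own instance.

exp system/manager owner=SCOTT file=scott.dmp log=scott_exp.log

expdp system/manager schemas=SCOTT directory=DATA_PUMP_DIR dumpfile=scott_dp.dmp logfile=scott_expdp.log

Either command produces a dump file that only Oracle's own Import or Data Pump Import utility can read, which is exactly the dependency this article works around.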

For this reason, it is good practice, and good insurance, to also move the data into comma-separated-value files. This is a database-independent format that allows restoration into virtually any database. Of course, placing the data into flat files normally requires the DBA to manually recreate the schema and the SQL*Loader files. This article is the first of two that describe an application that eliminates that manual work; a sketch of the generated output follows the list below. The application performs the following for a target Oracle schema:


- Creates data definition language scripts for the recreation of the target schema, tablespaces, tables, and indexes.
- Creates SQL statements for the extraction of data into comma-separated-value files.
- Creates the SQL*Loader scripts for the repopulation of the target schema.
- Recreates and populates the target schema.
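
For instance, for a hypothetical two-column table named EMP (EMPNO number, HIREDATE date) in a SCOTT schema, the generated extraction script would look roughly like the following; the table, columns, and paths are illustrative only.

set linesize 4000
set head off
set truncate on
set trims on
set pages 0
set feed off
set term off
set doc off
set echo off
spool /u01/temp/archive_files/EMP.csv
select
 '~+'||EMPNO||'~+'
||','||
 '~+'||to_char(HIREDATE,'DD-MON-YYYY')||'~+'
from SCOTT.EMP;
spool off
exit;

Each spooled row then looks like ~+7369~+,~+17-DEC-1980~+, ready to be read back in by the matching SQL*Loader control file.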


This first article presents the SQL statement that performs the first three tasks. The second article describes a Unix script that controls the overall operation. I believe you can use these scripts to place archived Oracle data onto external media and easily restore the schema.

The application is too complex to describe fully here, but it is commented throughout. In general, it does the following:


- Identifies all tables and indexes in the target schema.
- Identifies the tablespaces and data files used by those tables and indexes.
- Creates a script file to recreate the tablespaces and data files. The restored schema, tablespaces, and data files will have the same names as the originals, except for an '_rs' suffix. The '_rs' is added so that the schema can be restored on the same instance as the target schema.
- Creates a Select statement for each table. The Select statement spools table records as comma-separated-value rows.
- Creates a script to recreate the user (schema) ID. This schema will have the same password as the target schema ID. It will also have "quota unlimited" privileges on the tablespaces.
- Creates a script file that resets the restored schema owner's privileges.
- Creates a script file for each table in the target schema.
- Creates a script file that recreates the indexes.

The files created by the listing are placed in two different directories, designated on lines 13 and 14. The Create_archive_files_directory variable identifies the operating system location of the files used to create the comma-separated-value files. The Dvd_directory variable identifies the operating system location of the files that will be archived on the DVD. At the completion of the application you will have the following:

Create_ts.ddl
Create_user.ddl
Create_user_privs.ddl
.SQL files used to recreate each table
.CTL files used by SQL*Loader to insert rows into the restored Oracle tables
Create_indices.ddl


Launch the anonymous block procedure that creates the scripts using the following command:

sqlplus oracle_id/password @generic_archive.sql01 schema_name

Oracle_id is an ID that has privileges to read the data dictionary; the script files are built from data dictionary settings. Generic_archive.sql01 is Listing 1 saved as a file (be sure to remove the line numbers). Schema_name is the target schema name.
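
For example, archiving a hypothetical SCOTT schema with the SYSTEM account (the credentials here are placeholders) would look like this:

sqlplus system/manager @generic_archive.sql01 SCOTT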


Before executing the script, two things must be done:

1. Set the operating system target directories on lines 13 and 14.
2. This application uses the Utl_file package. The target directories must be defined in the Utl_file_dir parameter in the pfile (see the sketch following this list).
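
A sketch of step 2, assuming the directory paths on lines 13 and 14 are kept as shipped; Utl_file_dir is a static parameter, so the instance must be bounced after the change (with an spfile, the equivalent is ALTER SYSTEM ... SCOPE=SPFILE followed by a restart):

# init.ora (pfile) entries - multiple utl_file_dir lines must be adjacent
utl_file_dir = /u01/temp/create_archive_files
utl_file_dir = /u01/temp/archive_files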

Listing 1 - generic_archive.sql01 anonymous block procedure that creates schema extraction and restoration files

1 declare
2 /* The schema_name variable on line 6 contains the actual schema name
3 It is a bind variable that receives its value from the calling
4 script. The local variable Oname receives the schema name
5 from the schema_name variable. Oname is used throughout the application */
6 oname dba_tab_columns.owner%type := '&1';
7 /* Files created by this script are placed in two locations.
8 Files needed to create the archive files are placed in the
9 Create_archive_files_directory location. These files consist of the
10 Select statements that create the comma-separated-value files.
11 Files included in the actual DVD are placed in the DVD_directory
12 location. */
13 create_archive_files_directory varchar2(60):= '/u01/temp/create_archive_files/';
14 dvd_directory varchar2(60):= '/u01/temp/archive_files/';
15 index_type varchar2(10);
16 cursor b is select table_name
17 from dba_tables
18 where owner like oname;
19 b_var b%rowtype;
20 /* Cursor ts identifies the tablespaces and data files used by
21 the target schema */
22 cursor ts is select distinct tablespace_name, file_name
23 from dba_data_files
24 join dba_tables using (tablespace_name)
25 where dba_tables.owner = oname
26 union
27 select distinct tablespace_name, file_name
28 from dba_data_files
29 join dba_indexes using (tablespace_name)
30 where dba_indexes.owner = oname
31 order by tablespace_name;
32 ts_var ts%rowtype;
33 old_ts dba_data_files.tablespace_name%type;
34 /* Cursor account extracts the schema owner information.
35 Most notably, the current password is extracted for reuse */
36 cursor acct is select *
37 from dba_users
38 where username = oname;
39 acct_var acct%rowtype;
40 /* Cursor privs identifies the roles and privileges granted
41 to the target schema */
42 cursor privs is select granted_role, privilege
43 from (select granted_role
44 from dba_role_privs
45 start with grantee = oname
46 connect by prior granted_role = grantee)
47 join dba_sys_privs on (granted_role = grantee);
48 privs_var privs%rowtype;
49 /* Cursor indices identifies the indexes owned by the schema. */
50 cursor indices is select *
51 from dba_indexes
52 where owner = oname;
53 indices_var indices%rowtype;
54 /* Cursor index_cols identifies the indexed table, columns, and column
55 position */
56 cursor index_cols is select *
57 from dba_ind_columns
58 where index_owner = oname
59 and index_name = indices_var.index_name
60 order by column_position;
61 index_cols_var index_cols%rowtype;
62 file_idc utl_file.file_type;
63 file_idd utl_file.file_type;
64 file_ide utl_file.file_type;
65 file_idf utl_file.file_type;
66 file_idg utl_file.file_type;
67 /* The following procedure creates a DDL table restoration script,
68 SQL*Loader restoration file, and a Select statement
69 that produces the comma-separated-value file. This procedure
70 is executed once later in the application for each table in
71 the target schema*/
72 procedure create_ddl_loader_csv_files (tname in dba_tab_columns.table_name%type)
73 is
74 file_id utl_file.file_type;
75 file_ida utl_file.file_type;
76 file_idb utl_file.file_type;
77 cursor a is select *
78 from dba_tab_columns
79 where table_name = tname
80 and owner = oname;
81 a_var a%rowtype;
82 concate_oper varchar2(24);
83 counter number := 0;
84 begin
85 file_idg := utl_file.fopen(create_archive_files_directory,
86 'owner.txt', 'W');
87 utl_file.put_line (file_idg, oname);
88 utl_file.fclose(file_idg);
89 file_id := utl_file.fopen(create_archive_files_directory,
90 'create_'||tname||'flat_file.sql', 'W');
91 file_ida := utl_file.fopen(dvd_directory, 'create_table_'
92 ||tname||'.sql', 'W');
93 file_idb := utl_file.fopen(dvd_directory, tname
94 ||'loader_file.ctl', 'W');
95 utl_file.put_line (file_ida, 'create table '||oname||
96 '_rs.'||tname||' (');
97 utl_file.put_line (file_id, 'set linesize 4000');
98 utl_file.put_line (file_id, 'set head off');
99 utl_file.put_line (file_id, 'set truncate on');
100 utl_file.put_line (file_id, 'set trims on');
101 utl_file.put_line (file_id, 'set pages 0');
102 utl_file.put_line (file_id, 'set feed off');
103 utl_file.put_line (file_id, 'set term off');
104 utl_file.put_line (file_id, 'set doc off');
105 utl_file.put_line (file_id, 'set echo off');
106 utl_file.put_line (file_id, 'spool '||dvd_directory||
107 tname||'.csv');
108 utl_file.put_line (file_id, 'select ');
109 utl_file.put_line (file_idb, 'load data');
110 utl_file.put_line (file_idb, 'infile '||''''||tname||'.csv''');
111 utl_file.put_line (file_idb, 'badfile '||''''||tname||'.bad''');
112 utl_file.put_line (file_idb, 'discardfile '||''''||tname||'.dis''');
113 utl_file.put_line (file_idb, 'into table '||oname||'_rs.'||tname);
114 utl_file.put_line (file_idb, 'fields terminated by ","
115 optionally enclosed by '||'''~+''');
116 utl_file.put_line (file_idb, 'trailing nullcols ');
117 utl_file.put_line (file_idb, '(');
118 open a;
119 fetch a into a_var;
120 while a%found
121 loop
122 if a_var.data_type != 'DATE' then
123 utl_file.put_line (file_id, ' ''~+''
124 ||'||a_var.column_name||'||''~+''');
125 else
126 utl_file.put_line (file_id, ' ''~+''||to_char('||a_var.column_name
127 ||',''DD-MON-YYYY'')||''~+''');
128 end if;
129 if a_var.data_type = 'DATE' then
130 utl_file.put_line (file_ida, a_var.column_name||' '||a_var.data_type);
131 elsif a_var.data_type = 'NUMBER' then
132 if a_var.data_precision is not null then
133 utl_file.put_line (file_ida, a_var.column_name||' '
134 ||a_var.data_type||' ('||a_var.data_precision||
135 ','||a_var.data_scale||')');
136 else
137 utl_file.put_line (file_ida, a_var.column_name||' '||a_var.data_type);
138 end if;
135 else
136 utl_file.put_line (file_ida, a_var.column_name||' '||a_var.data_type
137 ||' ('||a_var.data_length||')');
138 end if;
139 if a_var.nullable = 'N' then
140 utl_file.put_line (file_ida, ' not null ');
141 end if;
142 utl_file.put_line (file_idb, a_var.column_name);
143 fetch a into a_var;
144 if a%found then
145 utl_file.put_line (file_id, '||'',''||');
146 utl_file.put_line (file_ida, ',');
147 utl_file.put_line (file_idb, ',');
148 end if;
149 end loop;
150 close a;
151 utl_file.put_line (file_id, 'from '||oname||'.'||tname||';');
152 utl_file.put_line (file_id, 'spool off');
153 utl_file.put_line (file_ida, ');');
154 utl_file.put_line (file_idb, ')');
155 utl_file.put_line (file_id, 'exit;');
156 utl_file.fclose(file_id);
157 utl_file.put_line (file_ida, 'exit;');
158 utl_file.fclose(file_ida);
159 utl_file.fclose(file_idb);
160 end;
161 /* Completion of the Create_ddl_loader_csv_files procedure
162 Beginning of the actual executable steps*/
163 begin
164 /* Section 1 - Script to create the User, Tablespace and data files*/
165 open ts;
166 fetch ts into ts_var;
167 open acct;
168 fetch acct into acct_var;
169 close acct;
170 file_idc := utl_file.fopen(dvd_directory, 'create_ts.ddl', 'W');
171 file_idd := utl_file.fopen(dvd_directory, 'create_user.ddl', 'W');
172 utl_file.put_line(file_idd, 'create user '||oname||'_rs');
173 utl_file.put_line(file_idd, ' identified by values '''||acct_var.password||'''');
174 utl_file.put_line(file_idd, 'default tablespace '||ts_var.tablespace_name||'_rs');
175 utl_file.put_line(file_idd, 'temporary tablespace temp; ');
176 for z in (select tablespace_name
177 from dba_tables
178 where owner = oname
179 union
180 select tablespace_name
181 from dba_indexes
182 where owner = oname)
183 loop
184 utl_file.put_line(file_idd, 'alter user '||oname||'_rs quota unlimited on '
185 ||z.tablespace_name||'_rs;');
186 end loop;
187 utl_file.put_line (file_idd, 'exit;');
188 utl_file.fclose(file_idd);
188 utl_file.put_line(file_idc, 'create tablespace '||ts_var.tablespace_name||'_rs');
190 utl_file.put_line(file_idc, 'datafile '''||replace(ts_var.file_name, '.dbf',
191 '_rs.dbf')||''''||' size 1000m ');
192 utl_file.put_line(file_idc, 'extent management local uniform size 64k;');
193 old_ts := ts_var.tablespace_name;
194 fetch ts into ts_var;
195 while ts%found
196 loop
197 if ts_var.tablespace_name = old_ts then
198 utl_file.put_line(file_idc, 'alter tablespace '||ts_var.tablespace_name||'_rs');
197 utl_file.put_line(file_idc, 'add datafile '''||
198 replace(ts_var.file_name, '.dbf', '_rs.dbf')||
199 ''' size 1000m;');
200 else
201 utl_file.put_line(file_idc, 'create tablespace '
202 ||ts_var.tablespace_name||'_rs');
203 utl_file.put_line(file_idc, 'datafile '''||
204 replace(ts_var.file_name, '.dbf', '_rs.dbf')
205 ||''''||' size 1000m ');
206 utl_file.put_line(file_idc, 'extent management local uniform size 64k;');
207 old_ts := ts_var.tablespace_name;
208 end if;
209 fetch ts into ts_var;
210 end loop;
211 close ts;
212 utl_file.put_line(file_idc, 'exit; ');
213 utl_file.fclose(file_idc);
214 /* This script creates a user privilege file */
215 file_ide := utl_file.fopen(dvd_directory,
216 'create_user_privs.ddl', 'W');
217 utl_file.put_line(file_ide, 'create role '||oname||'_rs_role;');
218 utl_file.put_line(file_ide, 'grant '||oname||'_rs_role to '||oname||'_rs;');
219 open privs;
220 fetch privs into privs_var;
221 while privs%found
222 loop
223 utl_file.put_line(file_ide, 'grant '||privs_var.privilege||' to '||oname||'_rs_role;');
224 fetch privs into privs_var;
225 end loop;
226 close privs;
227 utl_file.put_line(file_ide, 'exit;');
228 utl_file.fclose(file_ide);
229 /* The following section creates the table DDL scripts, the SQL*Loader load files, and the
230 scripts to create the flat files*/
231 open b;
232 fetch b into b_var;
233 while b%found
234 loop
235 create_ddl_loader_csv_files(b_var.table_name);
236 fetch b into b_var;
237 end loop;
238 close b;
239 /* This script creates the indices */
240 file_idf := utl_file.fopen(dvd_directory, 'create_indices.ddl', 'W');
241 open indices;
242 fetch indices into indices_var;
243 while indices%found
244 loop
245 if indices_var.uniqueness = 'UNIQUE' then
246 index_type := 'UNIQUE';
247 else
248 index_type := null;
249 end if;
250 utl_file.put_line(file_idf, 'create '||index_type||' index '
251 ||oname||'_rs.'||indices_var.index_name||' ');
252 utl_file.put_line(file_idf, 'on '||indices_var.table_owner||
253 '_rs.'||indices_var.table_name||' (');
254 open index_cols;
255 fetch index_cols into index_cols_var;
256 while index_cols%found
257 loop
258 utl_file.put_line(file_idf, index_cols_var.column_name);
259 fetch index_cols into index_cols_var;
260 if index_cols%found then
261 utl_file.put_line(file_idf, ', ');
262 end if;
263 end loop;
264 close index_cols;
265 utl_file.put_line(file_idf, ')');
266 utl_file.put_line(file_idf, 'logging ');
267 utl_file.put_line(file_idf, 'tablespace '||indices_var.tablespace_name||'_rs');
268 utl_file.put_line(file_idf, ' pctfree '||indices_var.pct_free);
269 utl_file.put_line(file_idf, ' initrans '||indices_var.ini_trans);
270 utl_file.put_line(file_idf, ' maxtrans '||indices_var.max_trans);
271 utl_file.put_line(file_idf, ' storage (');
272 utl_file.put_line(file_idf, ' initial '||(indices_var.initial_extent/1024)||'K');
273 utl_file.put_line(file_idf, ' minextents '||indices_var.min_extents);
274 utl_file.put_line(file_idf, ' maxextents '||indices_var.max_extents);
275 utl_file.put_line(file_idf, ' pctincrease '||nvl(indices_var.pct_increase,'0'));
276 utl_file.put_line(file_idf, ' freelists '||indices_var.freelists);
277 utl_file.put_line(file_idf, ' freelist groups '||indices_var.freelist_groups);
278 utl_file.put_line(file_idf, ' buffer_pool default) ');
279 utl_file.put_line(file_idf, ' noparallel;');
280 fetch indices into indices_var;
281 end loop;
282 close indices;
283 utl_file.put_line(file_idf, 'exit; ');
284 utl_file.fclose(file_idf);
285 end;
286 /
287 exit;

In the second article of this series you will see how to incorporate the Generic_archive.sql01 script into a command file that creates both the restoration commands and the actual comma-separated-value files. You will also be presented with a command file that restores the schema.

 




The first article in this series presented an anonymous PL/SQL code block called generic_archive.sql01 that creates the data definition language scripts for the target schema, tablespaces, tables, and indexes; the SQL statements that extract the data into comma-separated-value (CSV) files; and the SQL*Loader scripts that repopulate the schema. In this article you will be presented with a series of command files that do the following:

- Create the actual archive file
- Unpack the archive file
- Restore the schema


It is not necessary to understand every component, but each is discussed in case you want to understand how the application works.

Listing 1 details a Unix shell file that executes the anonymous code block discussed in the first article. Comparable Windows, Linux, or other operating system files can be created. The shell file is executed with the following command:

./generic_archive.cmd schema_name

Schema_name represents the target schema name; enter the actual schema name in its place. The schema name is the Oracle user ID that owns the tables and indexes to be archived. The first shell file command launches SQL*Plus and executes the anonymous block discussed in the previous article. The script uses externally identified (operating system) security (/); the / can be replaced with another Oracle ID and password, and that ID must have access privileges to the target schema. The ${1} variable contains the schema name passed to the shell file on the operating system command line; the value is passed on to the Generic_archive.sql01 anonymous code block.
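
For example, if operating system authentication is not configured, the first command in the shell file could name an account explicitly (system/manager is only a placeholder; any ID with access to the data dictionary and the target schema will do):

sqlplus system/manager @generic_archive.sql01 ${1}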

The Generic_archive.sql01 script creates the schema restoration and loading scripts that will be contained in the archive file. It also creates a series of scripts that generate the CSV files. The initial For loop identifies each CSV-generation script, launches SQL*Plus, and executes the script, spooling the Oracle table data into a CSV-formatted data file.

The two cp (copy) commands move several files into the Archive_files directory. The Sqlldrfile.par file, depicted in Listing 2, contains the parameters needed by SQL*Loader. The second cp command moves two files, Restore_ts.cmd and Restore.cmd, into the Archive_files directory; they are discussed later.

The second For loop compresses (gzips) the archive files, and the tar command places all of the zipped archive files into a single tar file; an rm command then removes the intermediate .gz files. The final two cp commands move an Install.cmd file and a Readme.doc file into the Archive_files directory. The former unpacks the archive files prior to restoration; the latter describes how to restore the schema. At the completion of the Listing 1 script the Archive_files directory will contain three files that can be moved to external media: Archive.tar, Install.cmd, and Readme.doc.

Listing 1 - generic_archive.cmd Unix command file that creates a zipped archive file of the target schema

#!/bin/ksh
######################################
#
# Name: generic_archive.cmd
# Purpose: Creates flat files and reload programs for a target schema
#
# Usage: ./generic_archive.cmd schema_owner_name
# Written: John Palinski
# Last Modified 4/16/2005
######################################

# Launches the Generic_archive.sql01 file, creating the schema restoration and loading scripts and the
# files that create the comma-separated-value files

sqlplus / @generic_archive.sql01 ${1}

# The For loop identifies the CSV file generation files created by the previous statement. The SQL*Plus
# statement within the loop creates the archive CSV file.

for FILENAME in `ls create_archive_files/*.sql`
do
echo $FILENAME
sqlplus / @${FILENAME}
done

# The CP (copy) commands move needed files into the archive directory

cp sqlldrfile.par archive_files/.
cp restore* archive_files/.

# The following For loop compresses or zips the archive files

for FILENAME in `ls archive_files/*`
do
gzip $FILENAME
done

# The following statement places the archive files into a single Tar file

tar -cf ./archive_files/archive.tar ./archive_files/*

# The rm command removes the intermediate .gz files now that they are inside the tar file.
# The following cp commands move the install.cmd file (which unpacks the archived tar file)
# and the readme.doc file into the archive directory

rm archive_files/*.gz

cp install.cmd archive_files/.

cp readme.doc archive_files/.

exit


The Sqlldrfile.par file depicted in Listing 2 resides in the set of archive files. The file contains a single parameter, the Oracle ID and password. The '/' tells Oracle to authenticate the user based on operating system security. This setting can be replaced with a specific Oracle ID and password, as shown in the sketch below Listing 2.

Listing 2 - Sqlldrfile.par file

USERID=/
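
If operating system authentication is not in use, the same file can carry explicit credentials instead, for example (placeholder account):

USERID=system/manager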

Archive Restoration

The first step in restoration is to review the Readme.doc shown in Listing 3. This document describes the steps needed to unpack and restore the database. There are three steps. These are:


1. Execute the Install.cmd command file. This file, shown in Listing 4, unpacks the tar file. Note: it will untar and unzip the files, which will be placed in a subdirectory of the current directory called archive_files.
2. Modify and execute the Restore_ts.cmd file. This file contains DDL commands to recreate the tablespaces. Tablespaces consist of a set of data files located on a hard drive. Since the physical file locations in all likelihood do not match those used for the original schema, the DBA must manually specify the data file locations. The actual modifications are made in the Create_ts.ddl file, not the Restore_ts.cmd file.
3. Execute the Restore.cmd file. This file does not require any modification.


Listing 3 - Readme.doc file describing restoration steps

 



DVD Archive Restoration Instructions
Written by John Palinski 3/24/05
Last Modified by John Palinski 3/24/05

The archive files are comma-delimited-value files. These instructions discuss how to restore the database using these files. The two sets of archive files give the DBA two restoration methods.

The purpose of these instructions is to describe how to restore the schema using the scripts contained on the DVD.

There are several things that must be done.

1. Review the Create_ts.ddl file. It creates the needed tablespaces. The datafile locations reflect where the files resided at the time the data was moved to the DVD. You must review these locations and change them appropriately.

2. Log on to an operating system account that can connect to the target Oracle instance using operating system authentication (i.e., sqlplus /).

3. The Oracle ID must have SYSDBA privileges.

You must have the proper permissions to execute the scripts:

a. You must have operating system permission to execute the scripts on the server.
b. You must be able to log on to Oracle with SYSDBA privileges.

Restoration Steps

1. Unpack and unzip the tar file. Be sure that you have the Unix permission to execute the script. Execute the following:

./install.cmd

2. Set up your Unix environment so that the target Oracle instance is called when executing sqlplus /

3. Edit the Create_ts.ddl file. This file creates the tablespaces and datafiles. Edit the file to make sure the datafile locations are correct.

4. Create the tablespaces. Be sure that you have the Unix permission to execute the script. Execute the following:

./restore_ts.cmd

The above script executes the create_ts.ddl script.

5. Create the user id, schema, indexes, and load the tables. Be sure that you have the Unix permission to execute the script. Execute the following:

./restore.cmd

6. When step 5 completes, the schema has been restored. Remember that the user ID and tablespaces will have a suffix of "_RS".

The first restoration task is to unpack the archive file. The Install.cmd file performs this function. The file is one of the three archived files. The actual file is displayed in Listing 4.

Listing 4 - Install.cmd file used to unpack the archive file

#!/bin/ksh
######################################
#
# Name: install.cmd
# Purpose: Untars and unzips the archive tar files. After this procedure
# you can begin actual installation
#
# Usage: ./install.cmd
#
######################################
# The following command extracts the zipped files from the tar file

tar -xf *.tar

# The following For loop unzips each of the zipped archive files

for FILENAME in `ls archive_files/*.gz`
do
echo $FILENAME
gunzip ${FILENAME}
done
exit


The Restore_ts.cmd command file launches the Create_ts.ddl script in SQL*Plus. The Create_ts.ddl file contains Create and Alter Tablespace commands. Before the schema can be restored, you must create tablespaces to hold the schema. Since tablespaces consist of data files located on the operating system, you must edit the Create_ts.ddl file and ensure that the proper data file locations are entered. Execute the Restore_ts.cmd script after making the modifications.
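
For instance, a generated datafile clause such as the first line below would be edited to point at a file system that actually exists on the restoration server; both paths here are only examples:

-- as generated from the original instance
datafile '/u01/oradata/prod/users_rs.dbf' size 1000m

-- edited for the restoration server
datafile '/u05/oradata/archive/users_rs.dbf' size 1000m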

Listing 5 - The Restore_ts.cmd file used to create the tablespaces

#!/bin/ksh
######################################
#
# Name: restore_ts.cmd
# Purpose: Restores the archived schema tablespace
# This script should not be executed until the Create_ts.ddl script has
# been reviewed. Make sure the data file locations are correct.
#
# Usage: ./restore_ts.cmd
#
######################################
sqlplus / @create_ts.ddl
exit


The final step is to execute the Restore.cmd command file. This file launches SQL*Plus and SQL*Loader. It creates the schema or user id (create_user.ddl), grants privileges to the user (create_user_privs.ddl), creates the tables (first For Loop), populates the tables (second For Loop), and creates the indexes (create_indices.ddl).
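
As an illustration, for a table named EMP the second For loop resolves to a call like the following; the control file name follows the <table_name>loader_file.ctl pattern used by the generator in the first article:

sqlldr parfile=sqlldrfile.par control=EMPloader_file.ctl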

Listing 6 - The Restore.cmd file used to recreate the schema id, tables, and indexes

#!/bin/ksh
######################################
#
# Name: restore.cmd
# Purpose: Recreates and populates the archived schema
# This application follows the restore_ts.cmd application that sets up the
# target tablespaces. This application does the following:
#
# - Creates the schema id
# - Grants the appropriate privileges to the schema
# - Creates the tables on the schema
# - Populates the tables with the archived data
# - Indexes the tables
#
# Usage: ./restore.cmd
#
######################################
sqlplus / @create_user.ddl
sqlplus / @create_user_privs.ddl
for FILENAME in `ls create_table*.sql`
do
echo $FILENAME
sqlplus / @${FILENAME}
done
for FILENAME in `ls *.ctl`
do
echo $FILENAME
sqlldr parfile=sqlldrfile.par control=$FILENAME
done
sqlplus / @create_indices.ddl

exit


The archived schema is restored upon the completion of the Restore.cmd command file. You should find these scripts an effective tool for moving data into non-Oracle files.

Setup

Follow these instructions to set up the application.

1. Create an operating system directory.
2. Move the Generic_archive.sql01 file detailed in the first article of this series into this directory.
3. Move the Generic_archive.cmd file (Listing 1) into this directory.
4. Move the Sqlldrfile.par file (Listing 2) into this directory.
5. Move the Readme.doc file (Listing 3) into this directory.
6. Move the Install.cmd file (Listing 4) into this directory.
7. Move the Restore_ts.cmd file (Listing 5) into this directory.
8. Move the Restore.cmd file (Listing 6) into this directory.
9. Create two sub-directories below the current directory. Call these directories archive_files and create_archive_files.
10. Change the Oracle Utl_file_dir parameter to permit the Utl_file package to write into the two directories (archive_files and create_archive_files).
11. Modify the generic_archive.sql01 file. Assign the file paths of the archive_files and create_archive_files directories to the Dvd_directory and Create_archive_files_directory variables.
12. Make sure that you have execute privileges on the generic_archive.cmd, install.cmd, restore_ts.cmd, and restore.cmd files (chmod 775 *.cmd). A sketch of these setup commands follows this list.
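
A minimal sketch of steps 1, 9, and 12, assuming a working directory of /u01/temp/archive_app (the path is a placeholder) and that the scripts and parameter file have already been copied in:

mkdir /u01/temp/archive_app
cd /u01/temp/archive_app
mkdir archive_files create_archive_files
chmod 775 *.cmd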

Archive Execution Rehash

Follow these steps to create the archive:

1. Navigate to the operating system directory containing the generic_archive.cmd file.
2. Execute the generic_archive.cmd file. Be sure to put the target schema name at the end (./generic_archive.cmd TARGET_SCHEMA_NAME).


Restoration Rehash

Follow these steps to restore the schema:

1. Load the archive.tar, install.cmd, and readme.doc files into a directory. Review the readme.doc.
2. Unpack the tar file by executing the following: ./install.cmd
3. The files will be unpacked into a subdirectory called archive_files. Navigate into this directory.
4. Edit the create_ts.ddl file. Make sure the data file locations are correct.
5. Create the tablespaces. Launch the restore_ts.cmd script (./restore_ts.cmd).
6. Restore the schema. Launch restore.cmd (./restore.cmd).
 

 

 