This is an excerpt from the book Oracle Grid & Real Application Clusters.
In a BFT, there is only one datafile, and it always has a relative file number of 1024. A BFT rowid has a three-piece format, OOOOOOLLLLLLLLLRRR, in which LLLLLLLLL denotes the encoded block number: the FFF (relative file number) and BBBBBB (block number) fields of the smallfile rowid are concatenated to form it. For BFTs, the only supported way of getting the components of extended rowids is to use the dbms_rowid package.
The following is an example of how to use the dbms_rowid package to retrieve rowid information:
FROM hr.employees WHERE rownum < 3;
BIGFILE_ROWID SMALLFILE_ROWID FIRST_NAME
------------- --------------- --------------------
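Where only individual rowid components are needed, dbms_rowid exposes them directly. The sketch below is illustrative; the ts_type_in argument ('BIGFILE' or 'SMALLFILE') tells the package how to decode the block number:

```sql
-- Decode the block number of each row's rowid as a bigfile-format rowid,
-- and the relative file number as a smallfile-format rowid;
-- hr.employees is the sample schema table used above.
SELECT first_name,
       dbms_rowid.rowid_block_number(rowid, 'BIGFILE')   AS bft_block,
       dbms_rowid.rowid_relative_fno(rowid, 'SMALLFILE') AS sft_file
FROM   hr.employees
WHERE  rownum < 3;
```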
Data Dictionary Views
A new column, bigfile, has been added to both the dba_tablespaces and v$tablespace views to indicate whether a particular tablespace is bigfile or smallfile:
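For example, the flag can be queried as follows (a minimal sketch):

```sql
-- YES for a bigfile tablespace, NO for a smallfile tablespace
SELECT tablespace_name, bigfile
FROM   dba_tablespaces;

-- The same flag is exposed in v$tablespace
SELECT name, bigfile
FROM   v$tablespace;
```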
The following creates a database with bigfile as the default tablespace type:
CREATE DATABASE
SET DEFAULT BIGFILE TABLESPACE
DATAFILE '/u02/oradata/grid/system01.dbf' SIZE 500M
SYSAUX DATAFILE '/u02/oradata/grid/sysaux01.dbf' SIZE 500M
DEFAULT TEMPORARY TABLESPACE tbs01
TEMPFILE '/u02/oradata/grid/temp01.dbf' SIZE 1024M
UNDO TABLESPACE undo01
DATAFILE '/u02/oradata/grid/undo01.dbf' SIZE 1024M;
The following shows how to move data between smallfile and bigfile tablespaces:
ALTER TABLE employee MOVE TABLESPACE bigfile_tbs;
The following shows how to create a BFT and change its size:
CREATE BIGFILE TABLESPACE user_tbs
DATAFILE '/u02/oradata/grid/user_tbs01.dbf' SIZE 1024M;
ALTER TABLESPACE user_tbs RESIZE 10G;
In previous releases of the Oracle server, K and M were used to specify storage sizes. In this DDL statement, a user can also specify sizes in gigabytes and terabytes, using G and T respectively.
The following example uses the DBVERIFY utility with a bigfile. With SFTs, multiple instances of DBVERIFY can be run in parallel on multiple datafiles to speed up integrity checking for a tablespace. The same parallelism can be achieved with BFTs by starting multiple instances of DBVERIFY on non-overlapping block ranges of the single big file:
$dbv FILE=bigfile01.dbf START=1 END=10000
$dbv FILE=bigfile01.dbf START=10001
Note: START = Start Block; END = End Block
Data Pump Utilities
Data Pump is a new utility in Oracle 10g that enables very high-speed loading and unloading of data and metadata from one database to another. The Data Pump Export and Import utilities are restartable, and expdp and impdp have a similar look and feel to the original Export and Import utilities, exp and imp. However, they use a new mechanism that provides a dramatic increase in performance, as well as new features and functions.
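As a hedged sketch, a schema-mode export and re-import with the command-line clients might look like this (the credentials, directory object, and file names are illustrative assumptions):

```shell
# Export the HR schema to a dump file in an existing directory object
expdp hr/hr_pwd SCHEMAS=hr DIRECTORY=data_pump_dir DUMPFILE=hr.dmp LOGFILE=hr_exp.log

# Re-import it; REMAP_SCHEMA changes the owning schema at import time
impdp system/sys_pwd DIRECTORY=data_pump_dir DUMPFILE=hr.dmp REMAP_SCHEMA=hr:hr_copy
```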
Data Pump Overview
The Oracle Data Pump utility's API is implemented through a PL/SQL package, dbms_datapump. Data Pump makes use of the direct path load and external table mechanisms for data movement.
The major functional components of Data Pump are as follows:
* dbms_datapump: This package
embodies the API for high-speed export and import utilities for bulk
data and metadata movement.
* Direct Path API: This supports a stream interface in addition to its existing column array interface. Row data is read from or written to dump file sets as a Direct Path API stream. The Direct Path API also minimizes data conversion and parsing at both unload and load time.
* dbms_metadata: This package is used by the worker processes to load and unload all metadata, that is, database object definitions. The metadata is extracted and written to the dump file set as XML documents rather than as SQL Data Definition Language (DDL), as in the original export utility. The XML format brings great flexibility when creating the DDL at import time. For example, the DBA can easily change an object's ownership and default tablespace during an import operation. However, the XML format takes up more dump file space.
* External Table API: Data Pump uses the Direct Path API to load and unload when a table's structure allows it. However, when the Direct Path API cannot be used, Data Pump uses the External Table API to move data. The Oracle External Table API has two access drivers. The oracle_loader access driver gives external tables read-only access to SQL*Loader-compatible files. The oracle_datapump access driver gives external tables both read and write access to dump file sets.
The Direct Path API is much
faster than the External Table API. However, Direct Path does
not support the following structures for loading and unloading data:
* Partitioned tables with a global index, during a single-partition load.
* LOB columns on which a domain index exists.
* Clustered tables.
* Tables with active triggers.
* Tables with fine-grained access control enabled in insert and select modes.
* Tables that contain BFILE or opaque type columns.
* Structures on which a referential integrity constraint is present.
* Tables that contain VARRAY columns with an embedded opaque type.
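The dbms_datapump package described above can be driven directly from PL/SQL. The following is a minimal sketch of a schema-mode export job; the directory object and file names are illustrative assumptions:

```sql
DECLARE
  h NUMBER;  -- job handle returned by dbms_datapump.open
BEGIN
  -- Open a schema-mode export job
  h := dbms_datapump.open(operation => 'EXPORT', job_mode => 'SCHEMA');

  -- Write the dump file to an existing directory object (assumed here)
  dbms_datapump.add_file(handle    => h,
                         filename  => 'hr_exp.dmp',
                         directory => 'DATA_PUMP_DIR');

  -- Restrict the job to the HR schema
  dbms_datapump.metadata_filter(handle => h,
                                name   => 'SCHEMA_EXPR',
                                value  => 'IN (''HR'')');

  -- Start the job and detach; the job keeps running in the server
  dbms_datapump.start_job(h);
  dbms_datapump.detach(h);
END;
/
```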
Figure 17.1 shows a broad outline of the utilities and components of Data Pump.
Figure 17.1: Data Pump General