Oracle LogMiner auditing tips
Oracle Database Tips by Donald Burleson
Where can I find tips for using Oracle LogMiner for auditing?
Answer: Oracle LogMiner is a redo log query engine
that is provided with the Oracle database server. LogMiner was introduced in
Oracle 8.1, and later releases of Oracle improved on the performance of the
utility. In Oracle 9i Release 2, the LogMiner engine is the core of the
log apply service for the logical standby database.
The LogMiner utility started as a very simple tool for reading Oracle's archived
redo logs, and it has evolved over the past several years into a very useful
auditing and recovery tool.
- Fine-grained recovery - Oracle LogMiner allows you
to perform fine-grained recovery at the transaction level.
- Auditing - The redo logs provide a complete record
of all database updates, and the archives can provide a detailed audit trail.
In many industries, Oracle professionals are required to provide
complete audit trails for all update transactions within their systems. The
Oracle archived redo logs are a very valuable tool for doing system auditing
because they contain all of the "after" images for every change made within
the Oracle database.
From an auditing perspective, Oracle LogMiner has many significant
enhancements that allow the Oracle professional to very quickly go into their
system and see what changes were made to which tables, who made the changes,
and when the changes were made.
Of course, LogMiner does not help with auditing the viewing of
information (SELECT activity), since the redo logs contain only information
on DDL and DML.
The most important enhancement to Oracle LogMiner is the flexible GUI
interface. This GUI makes it quite easy to track DDL
changes, and to audit specific changes to tables on a user-by-user basis.
Several new features have been added to LogMiner in Oracle to make it
more flexible and robust:
- Track DDL Statements
- Track Data Dictionary Access
- DDL Dictionary Tracking
- Track Dictionary Staleness
- Identify Log Corruption
- Show Committed Data Only
We also see the following enhancements, in addition to a new LogMiner Viewer
GUI that supplements the command-line interface:
- The ability to translate DML associated with Index Clusters
- The ability to mine for changes by value
- Support for chained and migrated rows
- Support for direct path inserts
- Ability to use an online dictionary
- Ability to extract the data dictionary into the redo log files to
seamlessly integrate DDL changes
- DDL statement tracking
- Ability to skip log corruptions
- Ability to specify that only committed transactions be displayed
- Generate SQL_REDO and SQL_UNDO with primary key information to help
the DBA undo changes
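Several of the enhancements above can be seen in a minimal LogMiner session. The sketch below (log file path is hypothetical) adds an archived log, starts LogMiner with the online catalog while showing only committed transactions and skipping corrupt blocks, and then pulls the SQL_REDO and SQL_UNDO columns that let the DBA replay or reverse changes:

```sql
-- Add an archived redo log to the mining session (path is hypothetical):
EXECUTE dbms_logmnr.add_logfile( -
  logfilename => '/u01/arch/arch_1_100.arc', -
  options     => dbms_logmnr.new);

-- Start LogMiner using the online catalog, showing only committed
-- transactions and skipping any corrupt log blocks:
EXECUTE dbms_logmnr.start_logmnr( -
  options => dbms_logmnr.dict_from_online_catalog + -
             dbms_logmnr.committed_data_only + -
             dbms_logmnr.skip_corruption);

-- SQL_REDO shows the change; SQL_UNDO shows how to reverse it:
SELECT scn, username, sql_redo, sql_undo
  FROM v$logmnr_contents;

-- Always close the session when finished:
EXECUTE dbms_logmnr.end_logmnr;
```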
Understanding Log Miner Technology
LogMiner uses redo entries embedded in the online redo log
files or archived redo log files, together with data dictionary information,
to build the SQL statements. LogMiner exposes the contents of the redo log
files in the fixed view v$logmnr_contents. The SQL interface can be used to
query this view in order to see the SQL statements executed on a database at
any point in time.
Together, the SQL_REDO column and the SCN in
v$logmnr_contents can provide relevant information on the activity within
the database. In addition, this view contains the segment name and owner,
which are useful in further identifying the objects being altered.
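For auditing, those columns can be combined in a single query. The sketch below is a hypothetical audit query (the schema and table names are illustrative, and it assumes a LogMiner session has already been started):

```sql
-- Who changed the SCOTT.EMP table, when, and what did they run?
SELECT scn,
       timestamp,
       username,
       seg_owner,
       seg_name,
       operation,
       sql_redo
  FROM v$logmnr_contents
 WHERE seg_owner = 'SCOTT'
   AND seg_name  = 'EMP'
 ORDER BY scn;
```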
The redo log file of an Oracle database contains change
vectors in the form of internally generated numerical identifiers. Moreover,
the old and new values of columns are represented as hex bytes. These
internally generated identifiers or hex bytes are not easy to understand.
In order to generate SQL statements in a readable external
format, the LogMiner engine uses data dictionary information. This data
dictionary information can be extracted to an external operating system file
or into the redo logs using the
dbms_logmnr_d.build procedure. As an alternative, the data dictionary
can be read directly from the online catalog. In the latter case, the
LogMiner engine can only reconstruct SQL statements using the current
structure of the objects within the database.
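The three dictionary options described above can be sketched as follows (the file name and directory are hypothetical, and the flat-file form requires the directory to be accessible to the database, e.g. via UTL_FILE_DIR):

```sql
-- 1. Extract the dictionary to an external flat file:
EXECUTE dbms_logmnr_d.build( -
  dictionary_filename => 'dictionary.ora', -
  dictionary_location => '/u01/logmnr');

-- 2. Or write the dictionary into the redo logs themselves:
EXECUTE dbms_logmnr_d.build( -
  options => dbms_logmnr_d.store_in_redo_logs);

-- 3. Or skip extraction and read the online catalog at start time
--    (limited to the current structure of the objects):
EXECUTE dbms_logmnr.start_logmnr( -
  options => dbms_logmnr.dict_from_online_catalog);
```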
In the SQL apply mode of a Data Guard environment, the
dictionary is built using the dbms_logstdby.build procedure, which stores
the dictionary information in the redo log files. Extracting the
dictionary information from the redo log files makes the mapping of
internal identifiers more efficient.
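On the primary database, that dictionary build is a short exercise; note that supplemental logging must be enabled first so the redo stream carries enough information for SQL apply:

```sql
-- Enable supplemental logging on the primary:
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE INDEX) COLUMNS;

-- Build the LogMiner dictionary into the redo stream:
EXECUTE dbms_logstdby.build;
```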
For the best results, a separate tablespace should be
used to hold the objects required by the LogMiner utility. By default,
these objects are created in the SYSTEM tablespace, which may slow down
the overall response time of the primary database.
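Moving the LogMiner objects out of SYSTEM can be sketched as below; the tablespace name and datafile path here are hypothetical:

```sql
-- Create a dedicated tablespace for LogMiner's internal objects:
CREATE TABLESPACE logmnr_ts
  DATAFILE '/u01/oradata/logmnr01.dbf' SIZE 100M;

-- Relocate the LogMiner objects into it:
EXECUTE dbms_logmnr_d.set_tablespace('logmnr_ts');
```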
Also, see my notes on LogMiner:
- ORA-01341: LogMiner out-of-memory error
- Data Guard Physical RAC with LogMiner
- Oracle Data Guard LogMiner Dictionary
- Oracle10g Data Guard New LogMiner Features
- Oracle LogMiner Switchover Failover
- Oracle Data Guard Tablespace LogMiner Objects
- Oracle Data Guard LogMiner Oracle Net
- Oracle Data Guard Backup Primary Database LogMiner Dictionary
- Oracle Data Guard Apply Service for LogMiner
- RAC Grid LogMiner Log Apply Services
- RAC LogMiner v$logstdby SQL Apply
- Oracle Data Guard with LogMiner
- Oracle Data Guard Logical LogMiner Dictionary
- Oracle Auditing
- Oracle LogMiner Audit with HIPAA
- Replication & LogMiner Training Course
- Oracle Data Guard