Recovering Data Using Flashback

Oracle Forensics tips by Paul Wright

Recovering data using Flashback can be done in a number of ways: using the last DROP statement, an SCN, or an actual timestamp.

FLASHBACK TABLE SQUIRRELPATCH TO TIMESTAMP TO_TIMESTAMP ('21/03/2006 18:51:06', 'dd/mm/yyyy hh24:mi:ss');
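The other two variants mentioned above can be sketched as follows (the SCN value is illustrative; note that flashing back to an SCN or timestamp requires row movement to be enabled on the table):

```sql
FLASHBACK TABLE SQUIRRELPATCH TO BEFORE DROP;  -- undo the most recent DROP via the recycle bin
FLASHBACK TABLE SQUIRRELPATCH TO SCN 509304;   -- rewind the table to a specific SCN
```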

Using a Flashback AS OF query it is quite easy to select a version of the data at a certain time, as long as it was not too long ago. This is very powerful for a forensic investigator who needs to see a version of the data as of a specific time. The most convenient way to recover data to a recent previous state is using Flashback and the Oracle recycle bin.
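An AS OF query against the example table would look like this (timestamp reused from the FLASHBACK TABLE example above):

```sql
SELECT * FROM SQUIRRELPATCH AS OF TIMESTAMP
  TO_TIMESTAMP('21/03/2006 18:51:06', 'dd/mm/yyyy hh24:mi:ss');
```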

There are problems with accuracy, though. Oracle does not actually record a full timeline: every 5 minutes a new SCN-to-time mapping is added and the oldest is discarded, so flashing back by timestamp is accurate to approximately 5 minutes. 10g is still more accurate in its time keeping than previous versions, which means that the major source of inaccuracy may well be the computer hardware and networked time synchronization issues involving protocols such as NTP. See section 6.7 for more detail on time inaccuracies and their influence on Oracle Forensics.

Also, Oracle can only flash back as far into the past as the remaining undo segments allow, which is controlled by the undo retention period and is usually about 5 days. The following query should help in ascertaining the oldest time that can be the target for a flashback.
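A minimal sketch of such a query, assuming the limit is driven by the UNDO_RETENTION parameter (expressed in seconds):

```sql
SELECT SYSDATE - (TO_NUMBER(value) / 86400) AS oldest_flashback_time
  FROM v$parameter
 WHERE name = 'undo_retention';
```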


For recovery to a previous state older than this we need to use LogMiner (see later). Relational schemas tend towards keeping a single row for each instance of a thing, e.g. a single row for an employee in an employees table. This is good for organizing sets of data but not as useful for organizing information about each tuple over time. For instance, if an employee left the company and then returned, the old and new rows for that employee might cause problems.

Data may be truly deleted by a user with the keyword PURGE, as below.

DROP TABLE test PURGE;                        -- really delete table test, bypassing the recycle bin
PURGE RECYCLEBIN;                             -- purge the current user's recycle bin
PURGE TABLE test;                             -- delete table test from the recycle bin
PURGE TABLE "BIN$04LhcpndanfgMAAAAAANPw==$0"; -- purge by recycle-bin name
PURGE INDEX in_test3_03;                      -- indexes can be purged too
PURGE TABLESPACE users;                       -- purge by tablespace
PURGE TABLESPACE users USER scott;            -- purge one user's objects within a tablespace
PURGE DBA_RECYCLEBIN;                         -- purge all objects in all recycle bins

Flashback and LogMiner are dependent on the online redo logs and archived redo logs, so attention should be paid to securing these resources, and they should be backed up as part of an incident handling process.

LogMiner can be used to view and recover deleted historical data from the archived redo logs quite effectively.

Using LogMiner to query archived redo logs:

The concise order of events to run LogMiner is as follows:

1. Switch on supplemental logging (optional)
2. Specify the redo log file(s) and the path to them
3. Allocate a dictionary
4. Start LogMiner
5. Read the data about past state and recover the database
6. Stop LogMiner

In more detail, the above steps are implemented as follows.

1.  Supplemental logging should be enabled in order to use LogMiner, which can be done with the following command.
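The standard command for enabling minimal supplemental logging is:

```sql
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
```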

And then check it has worked with the following query.
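A query such as this confirms it; v$database reports YES once minimal supplemental logging is on:

```sql
SELECT supplemental_log_data_min FROM v$database;
```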

2.  Specify the location of the online redo logs.

SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME =>
'/export/home/u01/app/oracle/oradata/sales/redo01.log', OPTIONS =>
DBMS_LOGMNR.NEW);

PL/SQL procedure successfully completed.

SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME =>
'/export/home/u01/app/oracle/oradata/sales/redo02.log', OPTIONS =>
DBMS_LOGMNR.ADDFILE);

PL/SQL procedure successfully completed.

SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME =>
'/export/home/u01/app/oracle/oradata/sales/redo03.log', OPTIONS =>
DBMS_LOGMNR.ADDFILE);

PL/SQL procedure successfully completed.

I issued each of these three commands on a single line; in SQL*Plus a trailing hyphen ("-") allows a command to continue over multiple lines. Next we need the command that tells LogMiner to take its dictionary directly from the online database.

3.  Start LogMiner with the online data dictionary catalogue.
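The documented call for this, using the online catalogue option, is:

```sql
EXECUTE DBMS_LOGMNR.START_LOGMNR (OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
```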

PL/SQL procedure successfully completed

This means that in this case LogMiner will only work correctly when the database is started and open, as we are using the source DB's online dictionary. The problem with using the online catalogue is that only the current version of the schema can be queried, as old schema versions are lost. Therefore, if using LogMiner in production circumstances, it is advisable to back up the versions of the schema, either in an accompanying flat text file or in the redo logs themselves. LogMiner is now started and ready to query.

4.  Example query run upon the LogMiner view - v$logmnr_contents

This is an example query on the v$logmnr_contents view which represents all the data LogMiner is able to extract from the redo logs.

SQL> select scn,timestamp,username,table_name,operation from v$logmnr_contents;

509304 04-JAN-2005 14:00:57 WRH$_SQLBIND INSERT
509304 04-JAN-2005 14:00:57 WRH$_SQLBIND UPDATE
509304 04-JAN-2005 14:00:57 INTERNAL
509304 04-JAN-2005 14:00:57 WRH$_SQLBIND INSERT
509304 04-JAN-2005 14:00:57 WRH$_SQLBIND UPDATE
509304 04-JAN-2005 14:00:57 INTERNAL
509304 04-JAN-2005 14:00:57 WRH$_SQLBIND INSERT
509304 04-JAN-2005 14:00:57 WRH$_SQLBIND UPDATE

5.  End the LogMiner session
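The session is closed with the documented call:

```sql
EXECUTE DBMS_LOGMNR.END_LOGMNR;
```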


LogMiner is very useful for querying previous versions of data, but it does not actually show the actions the user took to produce those states. However, this information can be gained from DB_EXTENDED audit recorded in the redo logs, as will be shown later.

This is an excerpt from the book "Oracle Forensics: Oracle Security Best Practices", by Paul M. Wright, the father of Oracle Forensics.


