Setting up a Standby Database for a RAC Cluster to a Single-Instance (One Node)

Oracle RAC Cluster Tips by Burleson Consulting

This is an excerpt from the bestselling book Oracle Grid & Real Application Clusters.  To get immediate access to the code depot of working RAC scripts, buy it directly from the publisher and save more than 30%.

Creating a standby database on a single instance closely follows the normal standby creation process. The steps to create a physical standby database for a RAC environment are similar to those employed for a regular instance:

1. Take a full backup, either hot or cold, of the Oracle Database 10g RAC database. Alternatively, if the database is smaller than 10 GB, the datafiles can be placed in backup mode and transferred using FTP.
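As a sketch, a full backup could also be taken with RMAN; the disk path and format strings below are assumed examples, not from the book:

```sql
-- Connect RMAN to the primary RAC database as target, then:
-- (the destination path and format strings are assumed examples)
BACKUP DATABASE FORMAT '/u01/backup/df_%U';
BACKUP ARCHIVELOG ALL FORMAT '/u01/backup/al_%U';
```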

2. Create a standby control file from any instance in the RAC system:
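The command itself is elided in this excerpt; the standard SQL*Plus statement looks like the following, where the output path is an assumed example:

```sql
-- Run from any open instance of the primary RAC database;
-- the destination path is an assumed example.
ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/u01/backup/standby.ctl';
```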


3. Restore the database to the standby node and place the standby control file in the appropriate location.

4. Copy any required archive logs to the new node.

5. Set the log_archive_dest_n parameter on each RAC instance to the proper standby configuration.

6. Consult MOSC Note 180031.1, Creating a Data Guard Configuration, for the appropriate network settings, such as tnsnames.ora and listener.ora.

7. Configure the log transport services by setting the log_archive_dest_n parameter on all primary instances. Every instance must archive its logs to the service of the single-node standby system. The standby system collects the archive logs into a pool and, on the basis of the SCN, determines which archive log from which thread is the next one to apply. Therefore, the next free log_archive_dest_n parameter must be set to the standby service. Other settings, such as which process performs the transfer and the transfer type (SYNC or ASYNC), depend on the preferred protection mode. MOSC Note 68537.1, init.ora Parameter log_archive_dest_n Reference Note, provides additional information.
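As a sketch, the primary-side settings for shipping to a single-node standby might resemble the following; the service name stdby and the local path are assumed examples:

```
# In each primary instance's SPFILE/PFILE
# (stdby is an assumed TNS alias for the standby system):
LOG_ARCHIVE_DEST_1=(location=/u01/arch)
LOG_ARCHIVE_DEST_2=(SERVICE=stdby LGWR ASYNC)
LOG_ARCHIVE_FORMAT=arc_%t_%s
```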

8. Now perform STARTUP MOUNT on the standby database and place it in managed recovery mode.
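In Oracle 10g syntax, the sequence is typically:

```sql
-- On the standby instance (SQL*Plus):
STARTUP MOUNT;  -- with a standby control file, this mounts the standby
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
```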

Configuration When the Standby Database is Also a Cluster (RAC) System

The standby system may also be created in a second RAC environment. This provides greater resilience, scalability, and performance. If two identical systems are used, there should be no performance or availability degradation in the case of a switchover or failover. Normally, the standby system has the same number of instances or nodes as the primary system; however, this is not required.

Essentially, the process is the same as a normal standby database creation. A full backup is taken from the primary database, and the standby control file is created from any of the instances. Next, the standby system is prepared for the database: create the RAW devices if OCFS is not being used, create the logical links, configure the hardware, and install the Oracle software on each node. At this point, the backup of the primary instance can be restored, including the newly created standby control file. Next, the appropriate parameters are set correctly in the PFILE or SPFILE for each instance. Follow the process from the preceding section, Setting Up a Standby Database for a RAC Cluster to a Single-Instance (One Node), and also configure the network parameters, such as tnsnames.ora and listener.ora, to match the system's particular requirements and settings.

The greatest difference between the multi-node and single-node setups is that with the multi-node setup there are now multiple standby instances, and only one of them can perform the recovery.

Every primary instance transports its archive logs to a corresponding standby instance. The standby instance receiving the logs then forwards them to the instance performing the recovery. This is configured with the log_archive_dest_n parameter. It is also suggested that standby redo logs be created on the standby database for each standby instance. For clarification, consider the following example:

There is a primary RAC database with two nodes and two instances, AULT1 and AULT2, and a standby RAC environment with two nodes and two instances, AULT3 and AULT4. The primary RAC database runs in archive log mode. Each primary instance archives its redo logs on a formatted partition of the shared disk if a cluster file system (CFS) is supported for the platform; otherwise, the archive logs are stored in each node's private disk area. Different naming formats must be used for the archive logs to prevent overwriting. At a minimum, %t should appear in log_archive_format to prevent overwriting; %t represents the thread number the log comes from.

Thus, the following settings are used:

Instance AULT1:

LOG_ARCHIVE_DEST_1=(location=/usr/backup/ault_rac1/archives1)
LOG_ARCHIVE_FORMAT=arc_%s_%t

Instance AULT2:

LOG_ARCHIVE_DEST_1=(location=/usr/backup/ault_rac2/archives2)
LOG_ARCHIVE_FORMAT=arc_%s_%t

Next, the log transport for each primary RAC instance is added to the corresponding standby RAC instance. As standby redo logs have already been created with maximum performance as the goal, these are the settings:

Instance AULT1:

LOG_ARCHIVE_DEST_1=(location=/usr/backup/ault_rac1/archives1)
LOG_ARCHIVE_DEST_2=(SERVICE=ault3 LGWR ...)

Instance AULT2:

LOG_ARCHIVE_DEST_1=(location=/usr/backup/ault_rac2/archives2)
LOG_ARCHIVE_DEST_2=(SERVICE=ault4 LGWR ...)

Either AULT3 or AULT4 can serve as the recovering instance; in this example, AULT4 is designated. This means that the archive logs from instance AULT3 have to be transferred to instance AULT4. Instance AULT4 also performs the archiving-to-disk process, again on the shared disk that is available to both standby instances. The resulting settings for instances AULT3 and AULT4 are:

Instance AULT3:
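The setting is elided in this excerpt; a plausible sketch, assuming a TNS alias ault4 for the recovering instance, is:

```
# Forward archive logs received on AULT3 to the recovering
# instance AULT4 (service name assumed):
LOG_ARCHIVE_DEST_1=(SERVICE=ault4)
```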


Instance AULT4:
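This setting is also elided; a sketch, with an assumed shared-disk path visible to both standby instances, is:

```
# AULT4 performs recovery and archives to the shared disk
# (the path is an assumed example):
LOG_ARCHIVE_DEST_1=(location=/usr/backup/ault_stby/archives)
```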


Once this is complete, the STARTUP NOMOUNT and ALTER DATABASE MOUNT STANDBY DATABASE commands can be issued on all instances, and the recovering instance can be placed in recovery mode.


This is an excerpt from the bestselling book Oracle Grid & Real Application Clusters, Rampant TechPress, by Mike Ault and Madhu Tumma.


