Minimizing downtime for Oracle
Oracle Database Tips by Donald Burleson
Non-Grid databases with continuous availability requirements
struggle to perform database upgrades without taking the
database down. In Oracle 10g RAC and Grid, Oracle offers a
"rolling upgrade," whereby each node can be upgraded, one at a
time, without any downtime. Non-RAC databases, however,
require a small amount of downtime during release upgrades.
Here are two approaches for minimizing downtime during an Oracle
release upgrade, with details from their authors.
Transportable tablespace minimum downtime upgrade
Here is a great approach to minimizing downtime during an Oracle
upgrade. If you are staying on the same server, you can upgrade
from 9i to 10g with about five minutes of downtime by using
transportable tablespaces. The only requirement is that all of
your tablespaces are locally managed. The idea is simple:
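Before starting, it is worth confirming that every application tablespace really is locally managed. A query along these lines against the standard dictionary view DBA_TABLESPACES will show the extent management of each tablespace; any dictionary-managed tablespaces it reports would need to be converted first:

```sql
-- List each tablespace and how its extents are managed;
-- any row showing DICTIONARY must be migrated to LOCAL
-- before a transportable-tablespace upgrade.
SELECT tablespace_name, extent_management
FROM   dba_tablespaces
ORDER  BY tablespace_name;
```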
1) Install Oracle 10g to a separate
Oracle Home on the same server
2) Create a 10g database with only the base tablespaces:
SYSTEM, SYSAUX, UNDO, and TEMP
3) On the 9i database, put all your tablespaces into read
only mode (write downtime begins)
4) Perform a transportable tablespace export of all
non-system tablespaces (as a SYSDBA user)
5) Shut down the 9i database (true downtime begins)
6) Start up the 10g database
7) Perform a transportable tablespace import into the 10g
database (end true downtime)
8) Make all your tablespaces read/write (end write downtime)
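The read-only, export, and import steps above can be sketched with commands like these. This is a sketch only: the tablespace names, dump file name, and datafile paths are hypothetical, and the exp/imp TRANSPORT_TABLESPACE flag shown is the original export/import mechanism for transporting tablespaces between 9i and 10g.

```
# -- On the 9i database: write downtime begins --
sqlplus "/ as sysdba" <<EOF
ALTER TABLESPACE users    READ ONLY;
ALTER TABLESPACE app_data READ ONLY;
EOF

# Export the tablespace metadata (run as SYSDBA; names are examples)
exp \"/ as sysdba\" transport_tablespace=y \
    tablespaces=users,app_data file=tts.dmp

# Shut down 9i, start the 10g instance, then plug the datafiles in:
imp \"/ as sysdba\" transport_tablespace=y file=tts.dmp \
    datafiles='/u02/oradata/users01.dbf','/u02/oradata/app_data01.dbf'

# -- On the 10g database: write downtime ends --
sqlplus "/ as sysdba" <<EOF
ALTER TABLESPACE users    READ WRITE;
ALTER TABLESPACE app_data READ WRITE;
EOF
```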
Duplicate Disk minimum downtime upgrade approach
Herod T offered this excellent advice:
I have done this for one of our 24*7 systems, and this was my
approach; others will have different, possibly better, approaches.
We went from 8 to 10g.
1.) We rented hardware similar to what the production DB was
running on, and we bought a duplicate set of drives.
2.) We got a copy of the production DB running on the rental and
made sure it was all good (days of work). We made sure that the
mount points, etc., were all the same as on prod: as exact a
duplicate as possible.
3.) During scheduled downtime, we unmounted the drives (external
bays) from the prod machine, put those drives in place on the
rental, and brought everything back up. Total downtime was 19
minutes, and it worked like a charm. We then put the new drives
on the prod box.
4.) We upgraded the prod database on the prod machine, tested,
and did our work; this took weeks while everything ran on the
rental. We built refresh scripts that converted data that needed
converting, and we kept the 10g DB as refreshed as possible from
the prod DB (only about one hour behind) using manual scripts.
5.) We scheduled downtime on the rental, refreshed the data, and
remounted the drives; it took under an hour. We brought the prod
machine back up and decommissioned the rental as soon as we got
user sign-off a day or two later.
We also took the time to add two CPUs and some RAM to the
production machine while the rental was running. It was just
barely over an hour of total downtime, spread weeks apart.
Management was happy, and since we scheduled the work for 3 a.m.
our time, none of the users really cared. For those who did, we
apologized for the scheduled downtime.
It was costly, as we had to pay for the rental and an entire
duplicate set of disks, but we used the extra disks on other
servers once we were done with them. This was on HP-UX.
If you like Oracle tuning, you may enjoy my new book "Oracle
Tuning: The Definitive Reference", over 900 pages
of BC's favorite tuning tips & scripts.
You can buy it direct from the publisher for 30% off and get
instant access to the code depot of Oracle tuning scripts.