Oracle RAC on a Linux PC tips
Oracle Database Tips by Donald Burleson, December 29, 2015
For complete details on installing RAC on a personal computer, see the book
Personal Oracle RAC Clusters. You can buy it directly from Rampant at 30% off
and get instant access to the code depot of RAC PC scripts.
by Donald Burleson
Some time ago I decided I was going to build myself a Linux Oracle cluster. I
did some research on the internet and discovered a couple of articles about
building my own Oracle cluster: one by John Smiley, and another by Jeffrey
Hunter. Both articles used Linux and iSCSI. Jeffrey Hunter also had a similar
article using FireWire in place of iSCSI.
My objectives were as follows:
1. Learn about using the Linux operating system.
2. Learn more about Oracle RAC setup.
3. Learn more about open source software in general.
4. Learn about ISCSI storage.
5. Minimize costs while doing (1) through (4) (keep it under $1000).
Fortunately, being a semi-competent computer hobbyist, I had a lot of hardware
lying around the house, including monitors, hard drives, routers, switches, and
10/100 network cards. The only hardware I had to buy was two new motherboards,
memory for them, cases with power supplies, and gigabit network cards. I
eventually bought some additional hard drives, but I could have made do without
them. My total cost for the project was about $1,200 (I failed objective 5).
Challenge number one was setting up an iSCSI storage server. Some research on
the internet turned up an OS called Openfiler (www.openfiler.org), an open
source, single-purpose operating system for sharing storage over multiple
protocols, including iSCSI as well as Samba and NFS. This is the software
recommended in the Hunter iSCSI article. I took an old motherboard I had and
installed a 10 GB HDD to hold the software, then attached an external FireWire
drive and a second internal IDE drive. The Openfiler documentation indicated
that Openfiler would not share the OS drive (drive 0), and I quickly confirmed
that this was the case. However, I had no problems at all getting Openfiler to
recognize the second internal drive or the external drive. I was also able to
partition them as described in the Hunter and Smiley articles without
difficulty.
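As a quick sanity check before carving up the drives in the Openfiler web
interface, something like the following, run on the Openfiler box, confirms
which disks the kernel actually sees (the device names in the comment are only
examples, not values from my setup):

# List every disk and partition the kernel has detected
fdisk -l
cat /proc/partitions
# The OS drive (drive 0) is not offered for sharing; the second internal IDE
# drive and the external FireWire drive should both show up here, typically as
# something like /dev/hdb and /dev/sda.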
The only problem I had with Openfiler was that the particular version I used set
itself up by default to download and install updates from the internet. The
first time it did this, it broke its own installation and I had to reinstall it.
I turned automatic updates off the second time.
The configuration for the Openfiler server is as follows (I had purchased this
hardware from Tigerdirect a year before because it was on sale, and it only
cost $99 with all the rebates):
AMD Duron motherboard
2.6 GHz Celeron CPU
512 MB RAM
Liteon DVD reader
10 GB HDD (drive 0, OS)
80 GB HDD (drive 1 on second IDE channel)
250 GB Maxtor FireWire drive
The next step was to set up the Linux servers. I had found two AMD 64-bit
motherboards with fan and CPU for less than $100 each from Tigerdirect, so I
did not consider it blowing my budget to go with 64-bit motherboards. The cases
with power supplies were $29 each from Tigerdirect. I also invested in a KVM
switch, $30 on eBay. I also had to purchase memory, since Oracle will not
install with less than 1 GB of RAM. This was my single largest investment:
1.5 GB of RAM for each server, which came to $300 total from Tigerdirect. I had
a 60 GB and an 80 GB HDD lying around the house that I used for the internal
drives on the two servers, so there was no additional cost for storage. The
motherboards also had built-in graphics and sound, so no additional money had
to be spent there.
The initial hardware configuration for each server was as follows:
RAC1:
60 GB HDD
Liteon DVD reader
AMD 64-bit motherboard, with factory graphics and sound
Realtek 10/100 wired network card (on motherboard)
D-Link DWL-G530 wired gigabit network card
1.5 GB RAM
RAC2:
80 GB HDD
Liteon DVD reader
AMD 64-bit motherboard, with factory graphics and sound
Realtek 10/100 wired network card (on motherboard)
D-Link DWL-G530 wired gigabit network card
1.5 GB RAM
The Realtek NICs were wired to a 10/100 hub, which was bridged to a wireless
router using a D-Link DWL-521 access point. The wireless router connects to the
internet via a cable modem. The gigabit cards were wired to a gigabit switch
along with the Openfiler server.
So, with the server hardware set up, it was time to move to software. I am a DBA
who came up on the database side instead of the OS admin side, so I knew that
the Linux work would be the most challenging. I had read through the Hunter and
Smiley articles, and everything looked to be pretty straightforward. I was
wrong about that.
I started with 64-bit Fedora Core 6 (FC6-64). It installed smoothly. However, I
quickly discovered that it did not have drivers for the D-Link gigabit cards.
After much research on the internet, I figured out how to link the NIC driver
modules into the kernel. The full documentation for doing this was not in any
single place, and even those sites that thought they had the full instructions
were missing some key steps (placing files in appropriate locations, for
example). So, after about a month of working on this in my spare time, I was
finally able to get the new kernel compiled and linked with the D-Link gigabit
network modules. Unfortunately, by the time I was able to link the drivers in,
I had lost track of all the steps I took to get the software working, so
reproducing them here would be almost impossible. I then proceeded to the next
step.
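For anyone attempting the same thing, the simpler path (when it works) is to
build the vendor driver as an out-of-tree module against the running kernel,
rather than recompiling the whole kernel as I ended up doing. A generic outline
looks roughly like this; the source directory and module name are placeholders,
not my actual driver:

# Build the vendor driver source against the running kernel's headers
cd /usr/src/dlink-driver            # placeholder: unpacked vendor driver source
make clean && make

# Install the module where modprobe can find it and rebuild the module index
cp dlinkmod.ko /lib/modules/$(uname -r)/kernel/drivers/net/   # placeholder name
depmod -a

# Load the module and map it to the interface so it comes up at boot
modprobe dlinkmod
echo "alias eth1 dlinkmod" >> /etc/modprobe.conf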
There were two possibilities for the next step: I could either install Oracle
or set up the iSCSI shares. I decided to go with the iSCSI setup, since that
appeared to be the more difficult issue. I discovered that there is an iSCSI
driver supplied with FC6-64. I activated it, set it up for discovery, and
eventually managed to get the drives mapped. I had to log into the drives from
my rc.local file because I could never get the auto-login to work when iSCSI
was started. The rc.local file is executed at boot, after the network is
started, on Fedora Linux releases (similar to the Windows autoexec.bat).
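As a rough sketch of what those manual logins in rc.local might look like with
the open-iscsi tools (the exact initiator package and commands differ between
Fedora releases, and the target IQN below is a placeholder, not my real target
name):

# Discover the targets exported by the Openfiler box over the private network
iscsiadm -m discovery -t sendtargets -p openfiler-priv:3260

# Log in to the discovered target (the IQN shown is a placeholder)
iscsiadm -m node -T iqn.2006-01.com.openfiler:racdata -p openfiler-priv:3260 --login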
The next step was to set up the clustered file system. I was able to start
OCFS2Console, which came with FC6. I was also able to mount the drives using
the default options and propagate the cluster configuration. At that point, I
thought I was ready to install Oracle Clusterware. I was still missing a
crucial step here, but I did not realize it.
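For reference, the crucial pieces at this stage are the o2cb cluster stack and
the cluster layout file that OCFS2Console propagates to every node,
/etc/ocfs2/cluster.conf. For a two-node cluster like this one, that file ends
up looking roughly like the sketch below (IP addresses masked the same way as
elsewhere in this article, and 7777 is the usual default port):

# Configure and start the OCFS2 cluster stack on each node
/etc/init.d/o2cb configure      # answer yes to load the driver on boot
/etc/init.d/o2cb start

# /etc/ocfs2/cluster.conf, roughly as propagated by OCFS2Console
node:
        ip_port = 7777
        ip_address = xxx.xxx.2.110
        number = 0
        name = rac1
        cluster = ocfs2
node:
        ip_port = 7777
        ip_address = xxx.xxx.2.120
        number = 1
        name = rac2
        cluster = ocfs2
cluster:
        node_count = 2
        name = ocfs2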
When I went to install Oracle Clusterware, I quickly determined that it would
not install on FC6, and while I felt I could spoof Oracle using the
instructions I found for FC5, I decided to switch to Fedora Core 5, 64-bit
(FC5-64), because there is much more documentation on Oracle and FC5.
So, I installed FC5-64. The installation went smoothly, and this time, instead
of once again struggling with the D-Link gigabit cards, I went ahead and bought
Intel EtherPro cards to replace them. The Intel cards were on sale at
Tigerdirect for about $20 each, and FC5-64 recognized them immediately. I once
again struggled with the iSCSI setup, but this time the auto-login worked, and
I was ready to go to work with OCFS2. However, I had read a bit further by this
point and determined that each data drive, as well as the voting and OCR disks,
would need to be mounted with the datavolume option to enable direct I/O
(O_DIRECT) on the drives. Direct I/O was supposed to have replaced raw devices
in Fedora 4, and in Fedora 5 and later, raw devices are unavailable. Guess
what: OCFS2 did not recognize the datavolume option, and when I tried to switch
to raw devices, they were indeed unavailable in FC5.
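To make the roadblock concrete: the Oracle-on-OCFS2 guides call for mounting
the shared volumes with something like the command below (the device name and
mount point are placeholders), and it was precisely this datavolume option that
the OCFS2 tools shipped with FC5 refused to accept.

# Mount an OCFS2 volume for Oracle files with direct I/O forced on
# (device name and mount point are placeholders)
mount -t ocfs2 -o datavolume,nointr /dev/sdb1 /u02/oradata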
Perhaps someone more experienced in Linux might have been able to find the
appropriate software to enable raw devices in FC5 and proceed from that point,
or even modify OCFS2 to enable direct I/O. However, this was a roadblock I
could not bypass; my Linux knowledge was just plain insufficient to resolve the
issue. So, I did what any good geek would do: I decided to try something else.
Enter SUSE 9. SUSE 9 was a good choice because there was a complete set of
instructions for that software version, as well as a large installed base. SUSE
9 installed smoothly, and iSCSI set up easily. However, there was an
insurmountable problem: the keyboard driver in SUSE 9 would not work correctly
through the KVM switch, and I didn't have room at my desk for multiple
keyboards. The keyboard would work fine initially, but when I opened a terminal
window in the user interface, the key repeat was far too fast. I would barely
tap a key, and six copies of the letter would show up. Once again, someone who
knows the software better may have been able to fix this problem; for me, it
made the software almost impossible to use. So, on to openSUSE 10.2.
openSUSE 10.2 was probably the simplest to install. It recognized all my
hardware, it ran smoothly, and it had raw device support. The only installation
problem openSUSE had was with my network card setup. For some reason, it did
not seem designed to expect multiple NICs on different networks. It kept trying
to use the gateway from my internet subnet on my private network: it
automatically carried the gateway and nameserver settings over to the private
network, and if I removed them there, it removed them from the public network
settings as well. Eventually, I managed to just live with it by copying any
files I needed from the internet via my dual-NIC Windows XP computer. I also
managed to get iSCSI working, though I had to go back to manual logins in the
after.local file, because the default local boot file runs before runlevel 5 on
SUSE. Once again, OCFS2 did not support the datavolume option, but I was
prepared for that, since openSUSE 10.2 fully supports raw devices. The raw
devices appeared to set up properly, but for some reason Oracle Universal
Installer did not recognize them as shared between both RAC systems, even
though they had the same name and owner on both. I never did resolve this
problem, nor could anyone on any of the Linux internet support forums help. It
may well have been something simple, but for me it was an insurmountable
obstacle, which forced me to break down and switch to RHEL-4 Unbreakable Linux.
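For what it is worth, the gateway problem described above can at least be
inspected and temporarily worked around from the command line; eth1 below is an
assumption for the private gigabit interface, not necessarily how openSUSE
named it.

# Show the routing table; a default route via the private NIC is the tell-tale sign
route -n

# Temporarily drop the unwanted default route from the private interface
# (eth1 is assumed to be the private gigabit NIC)
route del default dev eth1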
OK, I am forced to admit that Unbreakable Linux is a nice product. It installed
smoothly, all my hardware was recognized, it's easy to use, and the desktop is
clean. It recognized both network cards immediately. The first issue I had was
that the default installation did not install libaio, the library for
asynchronous I/O. Considering that clustered Oracle requires it, and
Unbreakable Linux is an Oracle product, I cannot figure out why it wasn't
installed by default. In addition, on my first installation attempt, it
installed the SMP kernel on one RAC machine but not on the other. I never
figured out why, since I chose the same options on each. I got around these
problems by doing a full installation (install everything) on each machine,
which put the same kernel and all the necessary libraries on each box. I turned
on the iSCSI initiator (iscsi-sfnet), ran OCFS2Console, and this time it
recognized the datavolume option. From there, it was just a matter of setting
up fstab to mount the volumes correctly (they had been partitioned under the
earlier OS versions), installing the clusterware, and installing Oracle. And
voila, a personal Oracle RAC. I also bought one year of internet-only support
for Unbreakable Linux, which cost me another $99.
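As a sketch of what those fstab entries look like (the devices and mount points
below are placeholders rather than my actual values), the OCFS2 volumes holding
Oracle files are mounted with the datavolume and nointr options, and _netdev
keeps the mounts from being attempted before the iSCSI storage is reachable:

# Confirm the asynchronous I/O library is installed
rpm -q libaio

# Example /etc/fstab entries for the shared OCFS2 volumes
# (device names and mount points are placeholders)
/dev/sdb1   /u02/oradata    ocfs2   _netdev,datavolume,nointr   0 0
/dev/sdc1   /u02/ocr_vote   ocfs2   _netdev,datavolume,nointr   0 0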
So, after this eight-month voyage of discovery, I can say that I achieved all
of my goals except for the final one: keeping the budget under $1,000. Of
course, what really killed the budget was when I determined that I needed to
upgrade my personal machine to the fastest 64-bit AMD motherboard available,
along with 4 GB of RAM and a 128 MB video card. All of this was necessary to
properly manage the two inexpensive RAC servers from my personal computer.
FYI, this was my final network setup:
RAC1:
Private IP: xxx.xxx.2.110, Name: rac1-priv
Public IP: xxx.xxx.1.110, Name: rac1
Virtual IP: xxx.xxx.1.151, Name: rac1-vip
RAC2:
Private IP: xxx.xxx.2.120, Name: rac2-priv
Public IP: xxx.xxx.1.120, Name: rac2
Virtual IP: xxx.xxx.1.152, Name: rac2-vip
Openfiler:
Private IP: xxx.xxx.2.199, Name: openfiler-priv
Public IP: xxx.xxx.1.199, Name: openfiler
(no virtual IP)
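For completeness, a matching set of /etc/hosts entries for this naming scheme
would look like the following; the first two octets are masked here just as
they are above, so substitute your own subnets.

# /etc/hosts (the same on rac1, rac2, and the management PC)
xxx.xxx.1.110   rac1
xxx.xxx.2.110   rac1-priv
xxx.xxx.1.151   rac1-vip
xxx.xxx.1.120   rac2
xxx.xxx.2.120   rac2-priv
xxx.xxx.1.152   rac2-vip
xxx.xxx.1.199   openfiler
xxx.xxx.2.199   openfiler-priv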
Note:
After I successfully installed on Red Hat, I went back to openSUSE 10.2 and
managed to get everything to work with raw devices, though I did have to create
a post-boot script with some built-in delays to mount the iSCSI volumes and
start Oracle CRS.
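A sketch of such a post-boot script is shown below; the sleep times, device
names, and target IQN are placeholders rather than my actual values. The idea
is simply to wait for the iSCSI session and device nodes to appear before
binding the raw devices and starting CRS.

#!/bin/sh
# Post-boot script (called from after.local): wait for the iSCSI storage,
# bind the raw devices, then start Oracle CRS. The delays, device names,
# and target IQN below are placeholders.
sleep 30        # give the network and the iSCSI target time to come up
iscsiadm -m node -T iqn.2006-01.com.openfiler:racdata -p openfiler-priv:3260 --login
sleep 15        # wait for the SCSI device nodes to appear

# Bind the shared partitions to raw devices for the OCR and voting disk
raw /dev/raw/raw1 /dev/sdb1
raw /dev/raw/raw2 /dev/sdb2
chown oracle:oinstall /dev/raw/raw1 /dev/raw/raw2
chmod 660 /dev/raw/raw1 /dev/raw/raw2

# Start Oracle Clusterware
/etc/init.d/init.crs start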
Additional Note:
I have since done a similar install of Oracle 11g on Windows 2003 Server. I
will be writing that one up soon.