With today's cheap hardware costs, it is safe to assume that a
virtual server host will have multiple processors and plenty of
memory, most likely more than 4 GB. And since the virtual server
host is, so to speak, the center of the universe, it cannot
offer its clients anything that it cannot itself do or access.
So you need to go with 64-bit Linux unless you face a hardware
driver issue that prevents this choice.
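A quick sanity check before going further, using the standard
uname utility:
- uname -m (x86_64 indicates a 64-bit kernel; i686 or i386
  indicates a 32-bit one)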
There is one more small tweak that should be applied to any
64-bit Linux server with significant memory, and that is the use
of huge pages. This Linux 2.6 kernel feature simply uses pages
larger than the default 4 KB page to reduce virtual memory
management overhead when working with lots of memory. Here are
the documented huge page sizes by platform:
Hardware Platform             | Kernel 2.4 | Kernel 2.6
Linux x86 (IA32)              | 4 MB       | 4 MB
Linux x86-64 (AMD64, EM64T)   | 2 MB       | 2 MB
Linux Itanium (IA64)          | 256 MB     | 256 MB
IBM Power Based Linux (PPC64) | NA         | 16 MB
IBM zSeries Based Linux       | NA         | NA
IBM S/390 Based Linux         | NA         | NA
The process to enable huge pages is as follows (a worked example
appears after the list):
- X = the huge page size, from: grep Hugepagesize /proc/meminfo
- Y = largest total (in MB) of all client SGAs * 1024
- Z = number of huge pages needed = Y / X
- Set the huge page pool size by editing /etc/sysctl.conf and
  adding: vm.nr_hugepages = Z
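As a quick worked sketch of that arithmetic (the 8192 MB of
combined client SGAs and the 2048 kB huge page size are assumed
example values, not figures from this book):
# X: huge page size in kB, as reported by the kernel
grep Hugepagesize /proc/meminfo   # e.g. "Hugepagesize: 2048 kB"
# Y: assumed 8192 MB of total client SGAs, converted to kB
#    Y = 8192 * 1024 = 8388608
# Z: number of huge pages = Y / X = 8388608 / 2048 = 4096
# Set the pool size persistently in /etc/sysctl.conf:
#    vm.nr_hugepages = 4096
# then load the new setting (memory permitting) without a reboot:
sysctl -p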
To improve I/O for file system requests made by the hosted
clients and/or their databases, Linux offers a little-known and
seldom-used option that can yield 50 to 150 percent performance
improvements in standard database benchmarks such as the TPC-C,
simply by changing the /etc/fstab entries for the Oracle data
file mount points, as shown in the example below.
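The option in question is the noatime mount flag. A typical
entry might look like the following; the device name, mount
point, and file system type are illustrative placeholders, and
only the added noatime option is the point:
# /etc/fstab entry for an Oracle data file mount point (example)
/dev/sdb1   /u02/oradata   ext3   defaults,noatime   1 2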
What this does is tell the operating system that it is not
necessary to update the last access time for directories and
files under that mount point, which translates into radically
reduced total I/O. Since the host file system is simply a
mechanism for providing abstracted storage to its clients, why
spend I/O resources updating time attributes for files and
directories, especially when it is unlikely you will ever access
them from the host for any reason other than backups?
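If you want the change to take effect without a reboot, the
mount point can also be remounted in place (the path is the same
illustrative placeholder used above):
mount -o remount,noatime /u02/oradata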
The last item to consider for optimizing Linux on your virtual
server host is to compile a monolithic kernel, which is nothing
more than a kernel built with only the features you must have
and none of those that are unnecessary. However, you had better
be comfortable with compiling, linking and installing a new
kernel lest you goof up your Linux install. (Not really, but I
want readers to be very sure before going this route!) It
consists of just the following rather easy steps:
- cd /usr/src/linux or /usr/src/kernels/xxx, where xxx is the
  kernel source version
- make mrproper (simply cleans up under that directory tree)
- make config, or make xconfig if the Linux install supports
  X-Windows
- answer all the questions about what to compile into the
  resulting kernel and what to leave out, thus reducing its
  size, memory footprint and complexity
- make dep; make clean; make bzImage
- cp /usr/src/linux/arch/i386/boot/bzImage
  /boot/vmlinuz-kernel.version.number
- cp /usr/src/linux/System.map
  /boot/System.map-kernel.version.number
- edit /boot/grub/grub.conf so that the boot entry points to
  the new kernel version binaries
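A GRUB (legacy) boot stanza for the new kernel might look
something like the following; the title, kernel version,
partition, and root device are placeholders to adapt to your
own layout:
title Custom Monolithic Kernel
        root (hd0,0)
        kernel /vmlinuz-kernel.version.number ro root=/dev/sda2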
The only other item you might consider for optimization is to
recompile and relink the C runtime library with more aggressive
compiler optimization flags. But this step requires perfect
execution, since any failure along the way means a total
reinstall of the Linux OS. Therefore, I generally advise most
people not to attempt this step, although it has been known to
yield significant performance improvements for those who can
successfully complete it.
This is an excerpt from
Oracle on VMWare:
Expert tips for database virtualization
by Rampant TechPress.