A Comparison of Solid State Drives to Serial ATA (SATA)
Drive Arrays for Use with Oracle
by
Mike Ault
One of the more interesting projects I participated in this
year involved the comparison of a 64-gigabyte solid-state
drive (SSD) from Texas Memory Systems to a serial ATA (SATA)
array. The SATA array contained seven 7,500 RPM, 200-gigabyte
drives configured as RAID 5. I had read about the
performance possibilities of the SSD and was anxious to prove
or disprove them for myself.
Logically, if you utilize a "memory" disk you rid yourself of
the physical limitations of disk drive technology, basically
trading in the spinning rust. In a disk drive, you
are battling physics to squeeze more performance out of your
array; there are certain physical properties that just can't
be altered on a whim (to quote Scotty from Star Trek, "Ya
cannot change the laws of physics, Captain!"). Therefore, when
you use standard disk technology, you are limited by the disk
rotational speed and the actuator arm latency. Combined, these
disk latencies result in the difference between "linear access
rate" and "actual" or "random access rate"; usually the
difference is as much as 40 to 50 percent of the linear access
rate. That is, if you can get 100 meg/second of linear access,
you will only get 50 to 60 meg/second of random access.
Therefore we must limit ourselves to the non-linear access
rate for most Oracle applications.
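That arithmetic can be sketched directly; a minimal Python illustration, using the text's 100 meg/second example figure:

```python
def random_access_rate(linear_mb_s, loss_fraction):
    """Effective random-access throughput after seek/rotational losses."""
    return linear_mb_s * (1.0 - loss_fraction)

# The 40-50 percent penalty discussed above: 100 meg/second of
# linear access drops to 50-60 meg/second of random access.
for loss in (0.40, 0.50):
    print(f"{loss:.0%} penalty: {random_access_rate(100, loss):.0f} MB/s random")
```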
A
memory-based array has no rotational or actuator arm latency;
linear or non-linear access is identical. Disk access rates
are generally in the range of 0.5 milliseconds to 10
milliseconds. Typical average latencies for mechanical disks
are seven to nine milliseconds. Typical access rates for memory
are measured not in milliseconds, but in nanoseconds (one
billionth of a second versus one thousandth of a second) so we
are immediately talking a factor of one hundred thousand.
However, due to access speeds of various interfaces, and
processing overhead of the CPU and IO processor, this usually
drops to a fraction of that. For example, a disk-based sort is
usually only 17 to 37 times slower than a memory-based sort.
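To put those orders of magnitude in perspective, a small sketch (the 10 ms and 100 ns figures are representative assumptions, not measurements from this test):

```python
DISK_ACCESS_S = 10e-3  # ~10 ms per random disk access (upper end of range above)
RAM_ACCESS_S = 100e-9  # ~100 ns per memory access (representative value)

ratio = DISK_ACCESS_S / RAM_ACCESS_S
print(f"memory is roughly {ratio:,.0f}x faster per access")

# Interface and CPU/IO-processor overhead erode most of this raw
# advantage, which is why a disk-based sort ends up only 17 to 37
# times slower rather than 100,000 times slower.
```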
All of that being said, what did I find in my tests? The tests
involved setting up a 20-gigabyte TPCH database (the data
volume excludes indexes, support tablespaces, and logs) and
then running the 22 basic TPCH-generated queries. Of course,
the time to load the database tables using SQL*Loader and the
time required to build the indexes to support the queries were
also measured.
The testing was started with a 2-disk SCSI array using 10K RPM
Cheetah technology drives. But a third of the way through the
second run of the queries, the drive array crashed hard and we
had to use an available ATA array. It should be noted that the
load times and index build times, as well as query times, were
equivalent on the SCSI and ATA systems.
The average time to load a table differed by around 30 percent
between the SSD and ATA technologies. That is, the SSD-based
system loaded 30 percent faster than the ATA array. The index
builds also showed nearly the same results, 30 percent faster
for the SSD array. While not as spectacular as I hoped, a 30
percent timesaving could be fairly substantial in most
environments. With trepidation, I moved to the SQL SELECT
testing.
The first system to be tested was the SSD array. The TPCH qgen
program was used to create the basic query set; the queries
were placed into a single file, and the SET TIMING ON command was
added to capture the execution time of each SQL statement in
the spool file used to log each run. I loaded up the
script and executed it in the SQL*Plus environment. I have to
admit I was a bit disappointed when the first query took 9
minutes to run. I finished up the seven different runs of the
base queries using different configurations (took a total of
three days, part time) and then cranked up the ATA array. The
query results are shown in the following table:
After the first SQL run on the ATA array, I felt much better.
After 30 hours, I had to stop the first SQL query. Not the
first run, just the first query! In fact, to run the entire
series of 22 SQL statements on the ATA array took several days
for each run. That is correct, one run of the test SQL on the
ATA array took longer than all seven runs on the SSD array.
The results for the ATA runs are shown in the next table.
Notice that the scale on the vertical axis of the graph is
logarithmic; this means that each factor of 10 occupies the same
height, so 1 to 10 takes up the same vertical
distance as 10 to 100 and 100 to 1,000, and so on. This use of
the logarithmic scale was required to keep the poorest of the
results from completely masking the reasonable results:

Note that any query that took longer than 30 hours (on the
average) was stopped and the test run resumed on the next
query. I have no idea if some of them would have ever come
back with results on the ATA array.
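The per-query times behind these results were captured by spooling SQL*Plus output with SET TIMING ON; a minimal sketch of pulling them back out of a spool file (assuming SQL*Plus's usual "Elapsed: hh:mi:ss.ff" line format):

```python
import re

# SQL*Plus "set timing on" writes lines like "Elapsed: 00:09:02.41"
ELAPSED = re.compile(r"Elapsed:\s+(\d+):(\d+):(\d+)\.(\d+)")

def elapsed_seconds(spool_text):
    """Extract each query's elapsed time, in seconds, from a spool file."""
    return [int(h) * 3600 + int(m) * 60 + int(s) + float("0." + f)
            for h, m, s, f in ELAPSED.findall(spool_text)]

sample = "22 rows selected.\n\nElapsed: 00:09:02.41\n"
print([round(t, 2) for t in elapsed_seconds(sample)])  # [542.41]
```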
Overall, based on elapsed time, the SSD array performed the
queries 276 times faster than the ATA array. Note that this
was using the 30-hour limitation on the queries; had I waited
for those long-running queries to complete, the difference might
have been much greater.
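The 276x figure is a ratio of total elapsed times, with each stopped query counted at exactly 30 hours; the per-query numbers in this sketch are invented for illustration:

```python
CAP_S = 30 * 3600  # queries were stopped after 30 hours

def overall_speedup(ssd_times_s, ata_times_s):
    """Ratio of total elapsed times, capping each ATA query at 30 hours."""
    ata_total = sum(min(t, CAP_S) for t in ata_times_s)
    return ata_total / sum(ssd_times_s)

# Invented per-query times (seconds); the real test ran 22 queries.
ssd = [540, 120, 300]
ata = [40_000, 200_000, 90_000]  # the 200,000 s query counts as 108,000 s
print(round(overall_speedup(ssd, ata), 1))  # 247.9
```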
The SSD drives were able to complete the entire run of
example TPCH queries with no errors (after some initial
tweaking of the temporary tablespace size). With identical
temporary tablespace sizes and all other settings the same
(database- and OS-wise), the scripts had to be modified to
release temporary segments by logging the database user out
and back in. If the user wasn't logged off and logged
back in during the run, numerous errors occurred, usually
ORA-1336 errors and forced disconnection of the user.
In summary, the SSD technology is not a silver bullet solution
for database performance problems. If you are CPU bound, the
SSD drives may even make performance worse. You need to
carefully evaluate your system to determine its wait profile
and then only consider SSD technology if the majority of your
wait activity is I/O related. However, I suspect that as
memory becomes cheaper and cheaper (a gig costs about $1000.00
at the time of this writing) disks will only be used to
provide a place to store the contents of SSD arrays upon power
failures. Indeed, with the promise of quantum technology
memory (memory that uses the quantum state of the electron to
store information, promising a factor-of-20 increase in memory
storage capacity) right around the proverbial corner,
disks may soon go the way of punch cards, paper tape,
and teletype terminals.
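One way to make that wait-profile check concrete is to compute the I/O-related fraction of total wait time from STATSPACK or v$system_event data. The event names below are real Oracle wait events, but the timings are invented for illustration:

```python
# Wait events considered I/O-related for this rough check; SSD is only
# worth considering when these dominate the wait profile.
IO_EVENTS = {"db file sequential read", "db file scattered read",
             "log file sync", "log file parallel write"}

def io_wait_fraction(wait_times):
    """wait_times: {event_name: time_waited}; returns I/O share of total."""
    total = sum(wait_times.values())
    io = sum(t for e, t in wait_times.items() if e in IO_EVENTS)
    return io / total if total else 0.0

waits = {  # invented numbers for illustration
    "db file sequential read": 5200,
    "db file scattered read": 2100,
    "CPU time": 1500,
    "latch free": 200,
}
print(f"{io_wait_fraction(waits):.0%} of timed activity is I/O")
```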

Market Survey of SSD vendors for Oracle:
There are many vendors who offer rack-mount solid-state disk that
work with Oracle databases, and the competitive market ensures that
product offerings will continuously improve while prices fall.
SearchStorage notes that SSD will soon replace platter disks and that
hundreds of SSD vendors may enter the market:
"The number of vendors in this category could rise to several
hundred in the next 3 years as enterprise users become more familiar
with the benefits of this type of storage."
As of January 2015, many of the major hardware vendors (including Sun and
EMC) are replacing slow disks with RAM-based disks, and
Sun announced that all
of their large servers will offer SSD.
Here are the major SSD vendors for Oracle databases
(vendors are listed alphabetically):
2008 rack mount SSD Performance Statistics
SearchStorage has done a comprehensive survey of rack-mount SSD
vendors; the following table shows the
fastest rack-mount SSD devices:
manufacturer | model | technology | interface | performance metrics and notes
IBM | RamSan-400 | RAM SSD | Fibre Channel, InfiniBand | 3,000MB/s random sustained external throughput, 400,000 random IOPS
Violin Memory | Violin 1010 | RAM SSD | PCIe | 1,400MB/s read, 1,000MB/s write with x4 PCIe, 3 microseconds latency
Solid Access Technologies | USSD 200FC | RAM SSD | Fibre Channel, SAS, SCSI | 391MB/s random sustained read or write per port (full duplex is 719MB/s); with 8 x 4Gbps FC ports, aggregated throughput is approx. 2,000MB/s, 320,000 IOPS
Curtis | HyperXCLR R1000 | RAM SSD | Fibre Channel | 197MB/s sustained R/W transfer rate, 35,000 IOPS
Choosing the right SSD for Oracle
When evaluating SSD for Oracle databases you need
to consider performance (throughput and response time), reliability
(mean time between failures), and TCO (total cost of ownership).
Most SSD vendors will provide a test RAM disk array for benchmark
testing so that you can choose the vendor who offers the best
price/performance ratio.
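One simple way to frame that price/performance comparison during a benchmark evaluation (both the prices and throughput figures here are hypothetical):

```python
def dollars_per_mb_s(price_usd, sustained_mb_s):
    """Price/performance: dollars per MB/s of throughput (lower is better)."""
    return price_usd / sustained_mb_s

# Invented quotes for two hypothetical evaluation units.
quotes = {"vendor_a": dollars_per_mb_s(250_000, 3_000),
          "vendor_b": dollars_per_mb_s(120_000, 1_400)}
best = min(quotes, key=quotes.get)
print(best, round(quotes[best], 2))  # vendor_a 83.33
```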
Burleson Consulting does not partner with any SSD vendors and we
provide independent advice in this constantly changing market. BC
was one of the earliest adopters of SSD for Oracle; we have been
deploying SSD on Oracle databases since 2005, and we have experienced
SSD experts to help any Oracle shop evaluate whether SSD
is right for your application. BC experts can also help you choose
the SSD that is best for your database. Just
call 800-766-1884 or e-mail for SSD support details.
DRAM SSD vs. Flash SSD
With all
the talk about the Oracle flash cache, it is important to note that there
are two types of SSD, and only DRAM SSD is suitable for Oracle database
storage. The flash type SSD suffers from serious shortcomings, namely
a degradation of access speed over time. At first, Flash SSD is 5
times faster than a platter disk, but after some usage the average read time
becomes far slower than a hard drive. For Oracle, only rack-mounted
DRAM SSD is acceptable for good performance:
device | avg. read speed | avg. write speed
Platter disk | 10.0 ms | 7.0 ms
DRAM SSD | 0.4 ms | 0.4 ms
Flash SSD | 1.7 ms | 94.5 ms
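Using the averages in the table above, a quick sketch of effective per-operation latency under a mixed workload (the 90 percent read mix is an assumption; the latencies are the table's figures):

```python
# Average latencies in ms from the table above: (read_ms, write_ms).
DEVICES = {"platter disk": (10.0, 7.0),
           "DRAM SSD": (0.4, 0.4),
           "flash SSD": (1.7, 94.5)}

def effective_latency_ms(device, read_fraction=0.9):
    """Weighted average latency for a given read/write mix."""
    read_ms, write_ms = DEVICES[device]
    return read_fraction * read_ms + (1.0 - read_fraction) * write_ms

# At a 90/10 mix, flash's 94.5 ms write average already makes it
# slower per operation than a platter disk, as the text warns.
for dev in DEVICES:
    print(dev, round(effective_latency_ms(dev), 2), "ms")
```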