I spent the weekend running benchmarks on some hardware that is destined to sit in one of our customers' datacenters (running our software, of course!)
The particular hardware is a PowerEdge R710, configured as follows:

Dual quad-core Xeon L5520 @ 2.27 GHz
24 GB RAM
8x 15K SAS drives in RAID10


The server also had some external storage connected to it in the form of a PowerVault MD1000 with 15 500 GB SATA-II drives.
The MD1000 is connected to the server via a PERC6E controller (a rebranded LSI 1078-based RAID card). My tests centered on this device, specifically on finding the best way to configure it for maximum performance.

I tested the following combinations:

HW RAID10 with 512K stripe
HW mirroring with LVM striping (tested 64K and 512K stripes)
HW mirroring with SW RAID0 (tested 64K and 512K stripes)
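For the two hybrid configurations, the mirrors live on the PERC and the striping is layered on top in the OS. Here's a minimal sketch of that layering; the device names and the mirror count are assumptions for illustration, not a record of my exact setup.

    # Assumes the PERC6E already exposes seven RAID1 virtual disks to
    # the OS as /dev/sdb .. /dev/sdh (created in the controller BIOS).
    # Device names and mirror count are illustrative.

    # Option 1: SW RAID0 across the mirrors, 512K chunk
    mdadm --create /dev/md0 --level=0 --chunk=512 --raid-devices=7 \
        /dev/sd[b-h]

    # Option 2: LVM striping across the same mirrors, 512K stripe size
    pvcreate /dev/sd[b-h]
    vgcreate vg_bench /dev/sd[b-h]
    lvcreate -i 7 -I 512 -l 100%FREE -n lv_bench vg_bench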

In my testing, the wider 512K stripe beat out the narrower 64K stripe, at least for the test workload, so those are the only numbers I'll be sharing here (unless someone asks for the 64K results).

Filesystem: ext3
Mount options: noatime,commit=60,nobh
FS options: -E stride=128 (the 512K stripe divided by the 4K filesystem block size)
Testing tool: iozone
Block size: 16KB
Number of Workers: 8
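For anyone who wants to reproduce the setup, here's a rough sketch of the commands involved. The target device, mount point, and iozone file size are assumptions on my part; only the options listed above are from the actual runs.

    # Illustrative only: device, mount point, and file size are assumptions.
    mkfs.ext3 -E stride=128 /dev/md0
    mount -o noatime,commit=60,nobh /dev/md0 /mnt/bench

    # iozone in throughput mode: 8 workers, 16 KB records. Pick -s so the
    # aggregate file size comfortably exceeds the 24 GB of RAM, otherwise
    # the page cache will inflate the numbers.
    iozone -t 8 -r 16k -s 4g \
        -F /mnt/bench/f1 /mnt/bench/f2 /mnt/bench/f3 /mnt/bench/f4 \
           /mnt/bench/f5 /mnt/bench/f6 /mnt/bench/f7 /mnt/bench/f8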

Results (iozone reports throughput in KB/s):

Test              HW RAID10   HW RAID1 + SW RAID0   HW RAID1 + LVM stripe
Initial write        307354                265198                  317807
Rewrite              208220                281627                  270980
Read                 108436                418292                  419086
Re-read              108399                417891                  422016
Reverse read          13119                 77717                   83142
Stride read            3285                 12666                   14910
Random read           10546                 11962                   11989
Mixed workload       241289                226480                  224055
Random write         234513                212766                  212362


So as you can see, HW RAID10 (at least on the PERC6E) has really poor read performance, which doesn't make sense to me: in RAID10, reads can be serviced by all of the drives in the array, so I'm a bit baffled by this. From now on, though, I'll be using this combination of HW mirroring plus either LVM striping or SW RAID0, as it really is the more performant setup.
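If anyone wants to dig into the oddity, a quick raw-device read can rule out filesystem effects; the device name below is an assumption, and direct I/O keeps the page cache out of the picture.

    # Sequential read straight off the HW RAID10 virtual disk (name is
    # an assumption); iflag=direct bypasses the page cache.
    dd if=/dev/sdb of=/dev/null bs=1M count=4096 iflag=direct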

-Errol