I would steer as far away from software RAID as possible if you're looking for performance, especially write performance. That might matter less with software RAID 0, but it seems to me that software RAID would also make recovering your data that much harder if something goes wrong.
You may want to have two RAID sets anyway, one set up for performance and the other for capacity. You could shove all your non-performance-sensitive stuff into one huge RAID 5 and put your few performance-critical things into a RAID 10 array or something.
I would invest in a couple of battery-backup units for the RAID cache (little modules that cost about $120 each). That's what we do when performance and data integrity are both important. In that case, you might also want to disable the drive cache in the controller settings but keep the controller cache enabled. You don't really need to worry too much about this if you have a UPS, but if ensuring every write is somewhere safe before the application is told it's written is absolutely essential for your data integrity requirements, then you should do it. (And if it's that important, you'll also need to do some tweaking on your Virtual Iron machine and the guest operating systems.) But remember, you should always have a plan for what to do if your RAID system fails completely (tape backup, or a physically isolated and time-delayed disk backup, for example).
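The "every write is somewhere safe before the application is told it's written" idea is the same guarantee you ask for at the application level with an explicit fsync. Here's a minimal Python sketch of the pattern (the filename and record are just illustrations, not anything from a real setup):

```python
import os

def durable_write(path, data):
    """Write data and don't return until it has been pushed toward
    stable storage.

    With the drive's volatile write cache disabled (or a battery-backed
    controller cache in front of it), fsync() is what lets you safely
    acknowledge the write to the caller.
    """
    with open(path, "wb") as f:
        f.write(data)
        f.flush()             # flush Python's userspace buffer to the OS
        os.fsync(f.fileno())  # ask the OS to push it past its page cache

durable_write("journal.bin", b"committed record\n")
```

If the drive cache is left enabled and lies about completion, fsync() returning is no longer a real guarantee, which is exactly why the controller setting matters.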
I'm using Virtual Iron with Open-E and just configured a SAN based on 7200 RPM 1TB drives. We are using 12 drives in a RAID 10 on a 12 port Areca controller. We can more than saturate a gigabit link. The biggest reason we went with that is because the array can rebuild without any downtime. And we are mirroring this to another identical SAN box offsite.
Remember to regularly scrub your RAID arrays! Usually you have to manually initiate it from the RAID card's web GUI, but on the Areca 1680 series you can schedule RAID set scrubbing.
Scrubbing is ESPECIALLY important for RAID 5 arrays, otherwise you very likely will have a double-drive failure, which is, of course, fatal for RAID 5.
Robotbeat - does RAID scrubbing affect performance at all? Should I only do it on a weekend, or can I run it anytime? I have an Areca RAID controller as well. I never even knew about this.
It greatly affects performance, so only run it on the weekend. You don't have to unmount anything. I didn't really appreciate the need for it until I started thinking about the Bit Error Rate (nonrecoverable read errors per bits read) ratings of different hard drives. I ran the "Volume Set Functions -> Check Volume Set" RAID scrubbing utility on a couple of Areca RAID systems we have. They are 10-drive RAID 6 sets (8+2), so rebuilding is usually pretty safe, but I figured I should try it anyway. No errors were found, which means either that there really were no errors on any of the drives or that the verify operation corrected whatever corruption it found. Both of these systems are about two years old.
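To put rough numbers on that Bit Error Rate point: consumer drives are often rated at about one unrecoverable read error per 10^14 bits read, and a rebuild has to read every surviving drive in full. A back-of-the-envelope sketch (the drive sizes and URE rate here are assumed spec-sheet values, not measurements from my arrays):

```python
import math

def p_ure_during_rebuild(surviving_drives, drive_bytes, ber=1e-14):
    """Probability of hitting at least one unrecoverable read error
    while reading every surviving drive in full during a rebuild.

    Models each bit read as an independent trial with error rate
    `ber` -- a crude model, but fine for a back-of-the-envelope
    argument.
    """
    bits_read = surviving_drives * drive_bytes * 8
    # 1 - (1 - ber)^bits_read, computed stably via log1p/expm1
    return -math.expm1(bits_read * math.log1p(-ber))

# Rebuilding a 12-drive RAID 5 of 1TB disks: the 11 survivors are
# read end to end, and the array cannot tolerate a single URE.
print(f"{p_ure_during_rebuild(11, 1e12):.0%}")
```

At those assumed numbers the rebuild has better-than-even odds of tripping a URE, which is why a scrub (catching latent errors while redundancy still exists) matters so much more for RAID 5 than for RAID 6.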
The Areca RAID manual suggests running the volume set verify once a week. In the older non-SAS RAID cards, you have to manually do this check, but in the 1680 series SAS RAID cards, there's a new option under "Volume Set Functions" to "Schedule Volume Check." I haven't started this on our systems in the field that have this newer card, yet.
The only downside of verifying so often is that it might wear out your disks a little faster than otherwise, although your data is safer. On my 10-drive RAID 6 sets with 500GB Seagate Barracudas, a verify took about 3 hours at about 500MB/s aggregate (including the parity data), which works out to about 50MB/s per drive, about as fast as those drives can realistically sustain.
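That 3-hour figure falls straight out of the drive count and sustained rate, since a verify reads the full raw capacity of every member drive in parallel. A quick sanity check using the numbers above:

```python
def verify_hours(drives, drive_gb, per_drive_mb_s):
    """Time for a full verify, in hours: every member drive is read
    end to end in parallel, data and parity alike."""
    total_mb = drives * drive_gb * 1000       # full raw capacity, parity included
    aggregate_mb_s = drives * per_drive_mb_s  # all members read simultaneously
    return total_mb / aggregate_mb_s / 3600

# 10 x 500GB drives at ~50MB/s sustained each
print(round(verify_hours(10, 500, 50), 1))  # about 2.8 hours
```

Note the array size cancels out of the duration: a verify takes as long as one end-to-end read of a single drive, so bigger drives (not more of them) are what stretch the window.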
Do all RAID cards have the option for RAID scrubbing? I have an HP server with an HP P800 controller in it, and I don't think it has a GUI or web GUI to do anything with.
One thing that I am still uncertain of and would like some info on: which RAID levels need scrubbing? Does it apply to all of them? I have never looked into it in depth, and currently my Google-fu is failing me for info on scrubbing any RAID level other than 5/6.
Well... You may have errors on any sort of RAID array, or on JBOD disks. It's just that with RAID 0 or JBOD there's nothing you can do about them besides going to a backup. I'm pretty sure you can do a volume set verify on any RAID level that has redundancy (RAID 1, 3, 4, 5, 6, 10, etc.).