
Thread: RAID Level recommendation?

  1. #1

    RAID Level recommendation?

    Hi Guys,

    I was wondering what you think would be the best way to configure a server with two Intel SRCS28X RAID controllers (8 ports each) and 16x 500GB drives, which will be running Open-E on top.

    Storage capacity is not that important to me, as anything over 2TB will be more than enough; the idea of having so many drives is to improve performance without compromising redundancy...

    Notes:
    - The SRCS28X cards have a 2TB limit per logical drive.
    - I have 2x spare drives on my desk so having hot spares is not that important.

    These are the options:
    1 - 1x RAID 50 on each controller, made of 2x 3-drive RAID 5 with a RAID 0 on top, plus two hot spares (this is due to the 2TB limit; a RAID 50 built from 4-drive RAID 5s would be too big). Then RAID 0 on top of the two RAID 50s using Open-E software RAID.

    2 - 1x RAID 10 on each controller, made of 4x 2-drive RAID 1 with a RAID 0 on top. Then RAID 0 on top of the two RAID 10s using Open-E software RAID.

    3 - 4x RAID 1 on each controller, 2 drives each. Then RAID 0 on top of the eight RAID 1s using Open-E software RAID.

    All 3 configurations end up with 4TB of usable space and are more or less equally redundant and fast... but which one would be better? In case of going with RAID 10, do you think there would be an advantage to the intermediate hardware RAID 0 in config 2 over config 3, or would it cause trouble instead?
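
    In case it helps, here is the capacity math behind those three options as a quick sketch (just Python arithmetic using the 500GB drive size and the layouts listed above, nothing controller-specific):

        # Usable capacity per controller for the three layouts
        # (16x 500GB drives, 8 per controller; decimal GB).
        DRIVE_GB = 500

        def raid5(n):              # usable capacity of one n-drive RAID 5 set
            return (n - 1) * DRIVE_GB

        def raid1():               # usable capacity of one 2-drive mirror
            return DRIVE_GB

        per_controller = {
            "1: RAID 50 (2x 3-drive RAID 5, plus 2 hot spares)": 2 * raid5(3),
            "2: RAID 10 (4x 2-drive RAID 1)": 4 * raid1(),
            "3: 4x plain RAID 1": 4 * raid1(),
        }

        for name, gb in per_controller.items():
            # Open-E software RAID 0 then stripes the two controllers together
            print(f"Option {name} -> {2 * gb / 1000:.0f} TB usable")
        # all three print 4 TB, matching the claim above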

    By the way, this is to hold 10 servers running Virtual Iron with different functions, connected over FC.

    Thanks in advance!

  2. #2


    Well, if you are running lots of virtual machines and don't care too much about wasting capacity, I would focus on separating your sequential-access LUNs from your random-access LUNs. Each sequential-access-pattern LUN should get its own small RAID set, probably just RAID 1 unless you need more than about 50MB/s (if you need higher performance, you'll need to use a RAID 10 or something).

    Then, if you have a few virtual machines that have a mostly random access pattern, put them all in one big RAID set. Use RAID 10 if you can afford the space, but with many of the hardware RAID controllers (esp. Areca and other cards that use the IOP chip), RAID 5, RAID 50, or even RAID 6 aren't really going to cost you too much performance.

  3. #3


    Quote Originally Posted by Robotbeat
    Well, if you are running lots of virtual machines and don't care too much about wasting capacity, I would focus on separating your sequential-access LUNs from your random-access LUNs. Each sequential-access-pattern LUN should get its own small RAID set, probably just RAID 1 unless you need more than about 50MB/s (if you need higher performance, you'll need to use a RAID 10 or something).

    Then, if you have a few virtual machines that have a mostly random access pattern, put them all in one big RAID set. Use RAID 10 if you can afford the space, but with many of the hardware RAID controllers (esp. Areca and other cards that use the IOP chip), RAID 5, RAID 50, or even RAID 6 aren't really going to cost you too much performance.
    Hi Robotbeat, thanks a lot for the advice!

    I'm learning as I go, and you seem to have a lot of experience here, so please correct me as necessary...

    This storage server and the VM servers are going to be used to consolidate all our internal IT servers... so a couple of domain controllers, one file server (around 80GB of user data), one Exchange server, one database server with a couple of small databases, one BES, and 4 or 5 other application servers... So if I understand the concept right, most of them, if not all, should have a random access pattern.

    So that is why I was going with the one big RAID set as you suggest. This is my logic: if I am going to use one big RAID set for all VMs, I need it to be very fast, and the only way to combine the performance of both RAID controllers is to use Open-E software RAID 0 on top of whatever hardware RAID level I decide to use on them. That is why I proposed those 3 configs... either a 50 on each controller, a 10 on each controller, or just a bunch of 1s...

    I am leaning towards option 2 or 3, but this is where I don't have a clue which would be better... a software 0 over a hardware 10, or a software 0 over several hardware 1s.

    BTW, the controllers I have (Intel SRCS28X, which is the same as the LSI MegaRAID 300-8X) do not support RAID 6, so that is not an option.

  4. #4


    I did find an Intel performance-optimization document for your controller:
    http://support.intel.com/support/mot.../cs-020782.htm

    I would try to steer as far away from software RAIDs as possible if you're looking for performance, especially for write performance. This might not apply as much when you're using software RAID 0, but it seems to me that it would make recovering your data that much harder if something goes wrong.

    You may want to have two RAID sets anyway, one set up for performance and the other for capacity. You could shove all your non-performance-sensitive stuff into one huge RAID 5 and put your few performance-critical things into a RAID 10 array or something.

    I would invest in a couple of battery-backup units for the RAID cache (little modules that cost about $120 each). That's what we do when performance and data integrity are both important. In that case, you might also want to disable the drive cache in the controller settings, but keep the controller cache enabled. You don't really need to worry too much about this if you have a UPS, but if ensuring every write is somewhere safe before the application is told it is written is absolutely essential for your data integrity requirements, then you should do it (also, if this is that important, you need to do some tweaking on your Virtual Iron machine and the guest operating systems, too). But remember, you should always have a plan for what to do in case your RAID system completely fails (like a tape backup, or a physically isolated and time-delayed disk backup).

  5. #5


    I'm using Virtual Iron with Open-E and just configured a SAN based on 7200 RPM 1TB drives. We are using 12 drives in a RAID 10 on a 12-port Areca controller. We can more than saturate a gigabit link. The biggest reason we went with that is that the array can rebuild without any downtime. And we are mirroring this to another identical SAN box offsite.

  6. #6


    Remember to regularly scrub your RAID arrays! Usually you have to manually initiate it from the RAID card's web GUI, but on the Areca 1680 series you can schedule RAID set scrubbing.

    Scrubbing is ESPECIALLY important for RAID 5 arrays; otherwise a latent unreadable sector can sit undetected until a drive fails and then surface during the rebuild, effectively giving you a double-drive failure, which is, of course, fatal for RAID 5.

    Areca recommends scrubbing at least once a week.

  7. #7


    Robotbeat - does RAID scrubbing affect performance at all? Should I only do it on a weekend, or can I run it anytime? I have an Areca RAID controller as well. I never even knew about this.

  8. #8


    It greatly affects performance, so only run it on the weekend. You don't have to unmount anything. I didn't really realize how important this is until I started thinking about the Bit Error Rate / Non-recoverable Read Errors per Bits Read ratings of different hard drives. I ran the "Volume Set Functions -> Check Volume Set" RAID scrubbing utility on a couple of Areca RAID systems we have. They are 10-drive RAID 6 sets (8+2), so rebuilding is usually pretty safe, but I figured I should try it anyway. No errors were found. That means either there were no errors on any of the drives, or the verify operation was able to correct any data corruption it found. Both of these systems are about two years old.
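
    To put a rough number on that bit-error-rate point, here is the kind of back-of-the-envelope estimate I mean. The assumptions (a 1-in-10^14 URE rating, which is a typical consumer-drive figure, 500GB drives, and 8 data drives read end to end) are mine, so check your own drives' datasheets:

        # Chance that reading an array end to end hits at least one
        # unrecoverable read error (URE), given a per-bit error rating.
        URE_PER_BIT = 1 / 1e14        # 1 URE per 10^14 bits read (consumer-class rating)
        DRIVE_BYTES = 500e9           # 500GB drive
        DATA_DRIVES = 8               # e.g. the data portion of an 8+2 RAID 6

        bits_read = DRIVE_BYTES * 8 * DATA_DRIVES
        p_clean = (1 - URE_PER_BIT) ** bits_read
        print(f"P(at least one URE) ~ {1 - p_clean:.0%}")   # roughly 27% for these numbers

    With a 1-in-10^15 enterprise-class rating the same read comes out closer to 3%, which is exactly why regular scrubbing (finding those bad sectors while the redundancy is still intact) matters.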

    The Areca RAID manual suggests running the volume set verify once a week. On the older non-SAS RAID cards you have to run this check manually, but on the 1680-series SAS RAID cards there's a new option under "Volume Set Functions" to "Schedule Volume Check." I haven't enabled this on our systems in the field that have the newer card yet.

    The only downside of verifying so often is that it might wear out your disks a little faster than otherwise, although your data is safer. On my 10-drive RAID 6 sets with 500GB Seagate Barracudas, verifying took about 3 hours at about 500MB/s (including the parity data), which is about 50MB/s per drive, about as fast as these drives can realistically sustain.
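
    Those figures hang together, by the way: reading every drive end to end in parallel at about 50MB/s per drive works out to almost exactly the 3 hours I saw (rough estimate, decimal units assumed):

        # Rough scrub-time estimate: each drive is read end to end in parallel.
        DRIVE_GB = 500
        PER_DRIVE_MB_S = 50
        seconds = DRIVE_GB * 1000 / PER_DRIVE_MB_S
        print(f"~{seconds / 3600:.1f} hours")   # ~2.8 hours, close to the ~3 hours observed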

  9. #9


    Do all RAID cards have an option for RAID scrubbing? I have an HP server with an HP P800 controller in it, and I don't think it has a GUI or web GUI to do anything from.

  10. #10


    One thing I am still uncertain of and would like some info on: which RAID levels need scrubbing? Does it apply to all RAID levels? I have never looked into it in depth, and currently my Google-fu is failing me when it comes to finding info on scrubbing any RAID level other than 5/6.
