
Thread: RAID5 or RAID6

  1. #1
    Join Date
    Jul 2008
    Vienna, Austria

    RAID5 or RAID6


    This is not a specific Open-E/DSS question, but I think there are some people here who can help ...

    We are evaluating a DSS with an Areca RAID controller ARC-1261ML (16x SATA II). Which is the better choice: one large RAID6 (15 HDDs + 1 hot spare) or two RAID5 arrays (one of 8 drives, one of 7, plus 1 hot spare)?

    Which configuration offers better performance, better reliability, and a better rebuild time in case of a failure?
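A quick sketch of the capacity side of the question (my assumptions, not from the thread: 1 TB drives, decimal TB, and the standard parity overhead of one drive for RAID5 and two for RAID6):

```python
# Rough capacity/fault-tolerance comparison of the two proposed layouts
# for a 16-bay chassis. DRIVE_TB is an illustrative assumption.

DRIVE_TB = 1.0

def raid5_usable(n):
    """RAID5 stores one drive's worth of parity."""
    return (n - 1) * DRIVE_TB

def raid6_usable(n):
    """RAID6 stores two drives' worth of parity."""
    return (n - 2) * DRIVE_TB

# Option A: one RAID6 across 15 drives + 1 hot spare.
# Survives any two simultaneous drive failures.
option_a = raid6_usable(15)

# Option B: RAID5 of 8 + RAID5 of 7 + 1 shared hot spare.
# Each array survives only one failure until the spare has rebuilt.
option_b = raid5_usable(8) + raid5_usable(7)

print(option_a, option_b)  # both come out to 13.0 TB usable
```

Under these assumptions the usable capacity is identical, so the decision comes down to fault tolerance and rebuild behaviour rather than space.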



  2. #2


    Our 24-disk "one large RAID6" rebuilds in anywhere between 14 h and 53 h, depending on your settings for write cache and background task priority.

    As a RAID6 it performs quite well, although there are some remaining issues with NFS mounting and web administration; those are unrelated to the choice of RAID5 vs. RAID6 (small or large) - at least I hope.

    Perhaps other colleagues could post their configurations and rebuild times, so we get an overview of the different setups.


  3. #3
    Join Date
    Jan 2008


    G'day Lukas,
    I am partway through testing an Adaptec 5405 with a 12x Seagate ES.2 1 TB disk configuration.
    Preliminary observations so far:
    The more disks in a RAID, the longer the rebuild time, and therefore the greater the chance of total failure if a second drive dies or you hit an unrecoverable bit error during the rebuild.
    Because of this I would not use anything other than RAID6. The risk that a second error while a RAID5 is rebuilding will result in lost data is just too high.
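The bit-error risk during a RAID5 rebuild can be roughed out like this. A sketch only: the 1-error-per-1e14-bits figure is a common spec for desktop-class SATA drives, not something stated in this thread, and real failure behaviour is more complicated than independent bit errors.

```python
# Probability of hitting at least one unrecoverable read error (URE)
# while reading every surviving drive end-to-end during a rebuild.
# Assumption: a URE rate of 1 error per 1e14 bits read (typical SATA spec).

BITS_PER_TB = 8e12  # decimal TB -> bits

def p_ure_during_rebuild(surviving_drives, drive_tb, ber=1e-14):
    bits_read = surviving_drives * drive_tb * BITS_PER_TB
    return 1 - (1 - ber) ** bits_read

# Rebuilding a 12-drive RAID5 of 1 TB disks means reading 11 drives in full.
p = p_ure_during_rebuild(11, 1.0)
print(f"{p:.1%}")
```

With these assumptions the chance of an error somewhere during the rebuild is well over half, which is the intuition behind preferring RAID6 on large SATA arrays: the second parity block lets the rebuild survive that error.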

    Our rebuild times are:
    RAID5  x  3 = 10 h, est. throughput =  415 MB/s
    RAID6  x  4 = 10 h, est. throughput =  424 MB/s
    RAID6  x  5 =  9 h, est. throughput =  583 MB/s
    RAID6  x  5 = 11 h, est. throughput =  489 MB/s
    RAID6  x  6 = 10 h, est. throughput =  888 MB/s
    RAID6  x 11 = 17 h, est. throughput = 1118 MB/s
    RAID60 x 10 = 11 h, est. throughput =  953 MB/s
    RAID10 x  4 =  4 h, est. throughput = 1016 MB/s
    Notes: the "x N" figure is the number of 1 TB drives, and the estimated throughput is the array size in MB divided by the time to rebuild.
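The throughput figure described in the notes can be sketched as below. The exact accounting (total vs. usable array size, decimal vs. binary megabytes, which time unit the divisor uses) isn't stated in the post, so the numbers here are illustrative rather than a reproduction of the table:

```python
# Estimated rebuild throughput = array size in MB / rebuild time,
# per the notes above. Assumptions: decimal units, time in seconds.

def est_throughput_mb_s(array_size_tb, rebuild_hours):
    array_mb = array_size_tb * 1_000_000  # decimal TB -> MB
    return array_mb / (rebuild_hours * 3600)

# e.g. a hypothetical 15 TB array rebuilt in 17 h:
print(round(est_throughput_mb_s(15, 17)))  # -> 245 (MB/s)
```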

    Data testing of the RAID6 x 5 disk arrays (hdparm and a mounted iSCSI volume) gives a write figure of 260 MB/s.

    Further good info is at Storage Advisors and here

    While we still have real-world throughput testing to complete, I think our final configuration will be a RAID6 x 4 and a RAID6 x 6, with one hot spare and one "simple" disk for temp files etc.
    As RAID6 "loses" capacity on arrays with an odd number of disks, we will go for a 6+4 (5.4 TB) rather than a 5+5 (4.5 TB) configuration.
    I feel more comfortable having two arrays than one large one, and it reduces the risk of failure.

    Also, just while I think of it: the Online Capacity Expansion (OCE) feature of a lot of controllers has its disadvantages. On the Adaptec, going from a 5-disk to a 6-disk RAID6 takes about 4 days, and while you can still use the volumes, throughput drops to about 10% (approx. 20 MB/s) during the process.

  4. #4
    Join Date
    Jan 2008


    Also, in case anyone wonders about the anomalous rebuild speed of the second RAID6 x 5: that was because we ran two rebuilds at once to see what effect it had. Note also that disk cache is off and rebuild rate is set to High for all tests.

  5. #5


    Hi there!

    We have the following storage servers in our environment:

    Type 1: Xeon with RAM etc. and an Areca 1680 with 16 x 300 GB SAS 15k
    Type 2: Xeon with RAM etc. and an Areca 1680 with 16 x 1 TB SATA 5.4-7.2k (green)

    We are running RAID6 with NO hot spare, which gives us 1/16 better performance and 1/16 more capacity than a configuration with a hot spare.
    We keep several disks of each type in stock and can react within 10 minutes in case of a failure.
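The capacity side of that tradeoff can be sketched as follows (my assumption for illustration: 1 TB drives in the 16-bay chassis; the post's "1/16" is relative to the raw bay count, while relative to usable space the gain works out a little larger):

```python
# Usable capacity with vs. without a hot spare in a 16-bay RAID6 setup.
# DRIVE_TB = 1.0 is an illustrative assumption, not from the post.

def raid6_usable(drives, drive_tb=1.0):
    """RAID6 reserves two drives' worth of space for parity."""
    return (drives - 2) * drive_tb

with_spare = raid6_usable(15)     # 13 TB usable, one bay held as spare
without_spare = raid6_usable(16)  # 14 TB usable, no spare standing by

extra = (without_spare - with_spare) / with_spare
print(f"{extra:.1%} more usable capacity without the spare")
```

The cost, of course, is that without a spare the rebuild cannot start until someone physically swaps a disk, which is why keeping replacements in stock and a short reaction time matters in this setup.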

    Our rebuild times are around 12 hours on SAS and between 36 and 50 hours on SATA.

