
Thread: Software Raid Super Slow or other

  1. #1

    Default Software Raid Super Slow or other

    How much slower should software RAID be than a hardware RAID controller, on average?

    I am creating a replication data server and I can't believe how slow it is to copy files to the drives. I also have a virtual machine on a RAID 10 (4 drives, 7200 rpm SATA) and it runs super slow.

    I have jumbo frames enabled on the Ethernet cards, with bonding enabled (2 Intel PRO/1000 NICs).

    Is there something I should check for, or is it normal that it is that slow?

    Server Specs
    Dual dual-core Opterons, 2 GHz each
    12 GB ECC DDR400 RAM
    Onboard gigabit Ethernet
    PCI-X Intel PRO/1000 MT dual-port NIC (bonded)
    4x Seagate 1 TB drives, RAID 10
    4x WD Caviar 80 GB drives, RAID 10

    I am running software RAID because I am out of expansion slots and the board has only PCI slots, no PCI-X. The Intel NIC can be used in either a PCI or a PCI-X slot.

  2. #2
    Join Date
    Aug 2009
    Location
    Lincoln, UK
    Posts
    42

    Question Curious

    Hi there,

    Perhaps I have misread your question, but I don't believe Open-E supports software RAID 10 yet, unless this has changed in the latest release?

    Do you actually mean that your primary server uses hardware RAID10 and you are performing replication to a software RAID1 or RAID0 array you have on your secondary system?

    Four SATA disks in RAID 0 will easily saturate a single channel of gigabit Ethernet. I have practical results showing that 150 MB/s write and 125 MB/s read are achievable with low-cost hardware.

    Do beware that bonding only helps when you have multiple clients accessing the SAN at the same time; it doesn't increase the bandwidth for any single TCP connection, such as the one created for volume replication. Therefore your theoretical maximum transfer rate will be that of a single channel of gigabit Ethernet (~113-125 MB/s).

    Do also note that regular PCI slots have a maximum throughput of around 133 MB/s (32 bits * 33 MHz / 8 bits per byte). Therefore if you hang your NICs and SATA controller off the same PCI bus, you will have to share that bandwidth out amongst all the attached hardware. PCI-X is a much better choice since its throughput is much higher.
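
    To put rough numbers on that, here is a back-of-the-envelope sketch (nominal bus widths and clock rates only, not measurements from your particular board):

    # Rough theoretical peak throughput for the buses and links discussed above.
    # Nominal figures only; real transfers will be lower due to protocol overhead.

    def bus_throughput_mb_s(width_bits: int, clock_mhz: float) -> float:
        """Peak throughput in MB/s for a parallel bus: width * clock / 8."""
        return width_bits * clock_mhz / 8

    pci = bus_throughput_mb_s(32, 33)      # conventional PCI: ~133 MB/s
    pci_x = bus_throughput_mb_s(64, 133)   # PCI-X 133: ~1064 MB/s
    gig_e = 1000 / 8                       # gigabit Ethernet line rate: 125 MB/s

    print(f"PCI 32-bit/33 MHz   : {pci:.0f} MB/s (shared by every device on the bus)")
    print(f"PCI-X 64-bit/133 MHz: {pci_x:.0f} MB/s")
    print(f"Gigabit Ethernet    : {gig_e:.0f} MB/s raw line rate")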

    I would connect your primary and secondary servers together with a single NIC at each end and a crossover cable between them. This will give you a baseline for performance comparisons; you can then add the complication of bonding later if your system really needs it.
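
    For that baseline, a quick single-stream test like the sketch below can help. The port number, transfer size and file name are arbitrary placeholders; save it as, say, nettest.py on both boxes, run "python3 nettest.py server" on one end of the crossover link and "python3 nettest.py client <other-ip>" on the other.

    # Minimal single-TCP-stream throughput test. One connection only, so it also
    # illustrates the ceiling a bonded link has for volume replication traffic.
    import socket, sys, time

    PORT = 5001                    # arbitrary test port (placeholder)
    CHUNK = 1024 * 1024            # 1 MiB per send/recv
    TOTAL = 1024 * 1024 * 1024     # push 1 GiB in total

    def server():
        with socket.create_server(("", PORT)) as srv:
            conn, _ = srv.accept()
            received, start = 0, time.time()
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                received += len(data)
            secs = time.time() - start
            print(f"received {received / 1e6:.0f} MB in {secs:.1f} s "
                  f"= {received / 1e6 / secs:.1f} MB/s")

    def client(host):
        payload = b"\0" * CHUNK
        start = time.time()
        with socket.create_connection((host, PORT)) as conn:
            sent = 0
            while sent < TOTAL:
                conn.sendall(payload)
                sent += len(payload)
        secs = time.time() - start
        print(f"sent {sent / 1e6:.0f} MB in {secs:.1f} s = {sent / 1e6 / secs:.1f} MB/s")

    if __name__ == "__main__":
        server() if sys.argv[1] == "server" else client(sys.argv[2])

    Something near 110-120 MB/s means the NICs and the link are fine and the bottleneck is elsewhere (disks, bus, or replication overhead).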

    Also, I wouldn't enable JUMBO frames until you have this problem resolved.

    Are you mixing NIC types in your BOND?

    Best regards

    TFZ
    If it can go wrong, it generally will!

  3. #3

    Default

    I installed the x64 version of Openfiler 2.3 and created a software RAID 5 array with four 1 TB Hitachi drives, and I have a gigabit Ethernet connection throughout my house. However, when I try to transfer a couple of gigs of movies to my new NAS, the transfer speed from my Vista box is about 10 MB/s. I know there's some overhead for a software NAS, but this seems pretty excessive.

    I have my volume shared out through SMB/CIFS, the filesystem is XFS, and I've got a good processor in there (AMD X2 5050e), so I'm not sure what the problem is. When I look at my Openfiler system, it looks like everything's being cached in memory.
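
    One way to narrow that down is to measure what the array itself can do for sequential writes, independent of SMB/CIFS and the network. A rough sketch follows; the test path is an assumption, so point it at a directory on the RAID 5 volume.

    # Rough sequential-write test run locally on the NAS box: writes 1 GiB to the
    # array, forces it to disk, and reports MB/s. If this is fast but SMB copies
    # are ~10 MB/s, the problem is in the network/SMB path rather than the RAID.
    import os, time

    TEST_FILE = "/mnt/raid5/write_test.bin"   # path on the RAID 5 volume (assumption)
    CHUNK = 4 * 1024 * 1024                   # 4 MiB per write
    TOTAL = 1024 * 1024 * 1024                # 1 GiB total

    buf = os.urandom(CHUNK)
    start = time.time()
    with open(TEST_FILE, "wb") as f:
        written = 0
        while written < TOTAL:
            f.write(buf)
            written += len(buf)
        f.flush()
        os.fsync(f.fileno())                  # make sure the data really hits the disks
    secs = time.time() - start
    print(f"wrote {written / 1e6:.0f} MB in {secs:.1f} s = {written / 1e6 / secs:.1f} MB/s")
    os.remove(TEST_FILE)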


