
Thread: Expected speed for DSS setup

  1. #1

Expected speed for DSS setup


Is there any tool within DSS that allows me to check the array performance?

    What transfer rates should I expect from a DSS using:

    a 3ware 9650SE SATA-II RAID controller
    with 3 Hitachi HDS721010KLA330 drives in RAID 5?

    How can I test this?

  2. #2


Try the server statistics under Status, Hardware.
    Your performance will vary depending on your configuration: CPU, memory, network.

  3. #3


Why don't you try IOmeter? It's a free tool!

    Or have you tried it already and didn't like the results?

  4. #4


I hope you get more speed than I do. I have an Areca 1261ML with 5 x 1000 GB drives in RAID 5. Write speed is 7-8 MB/s at best.

    I'm very frustrated. I have already adjusted the NIC driver settings (speed=1000 and neg=32).

    My server is a Supermicro with a Supermicro X7DWE mainboard.

    There is a bottleneck in my system, but I don't know where it is.

    Kurt H

  5. #5


I was looking for a value measured inside the box, independent of the network and other issues, something similar to hdparm in a Unix shell.

    I want to know how far this array can go in a local read.

    I could boot a Linux live CD and run the test there, but I wanted that value from within the DSS environment.
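Lacking a built-in benchmark, a rough stand-in for hdparm that needs only dd might look like the following. This is just a sketch: the path is hypothetical and should point at the array you want to measure.

```shell
# Rough hdparm-style local read check using only dd.
# TESTFILE is a hypothetical path; put it on the array under test.
TESTFILE=/tmp/readtest.bin

# Lay down 256 MB of test data, flushed to disk before dd reports.
dd if=/dev/zero of="$TESTFILE" bs=1M count=256 conv=fsync

# Read it back. Caveat: unless the page cache is dropped first
# (echo 3 > /proc/sys/vm/drop_caches, as root), this mostly measures
# RAM rather than the disks, which is exactly why hdparm -t exists.
dd if="$TESTFILE" of=/dev/null bs=1M

rm -f "$TESTFILE"
```

Recent versions of GNU dd print the throughput directly; on older builds, wrap the command in `time` and divide bytes by seconds.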

  6. #6


I tried iozone over NFS over GbE and got about 90 MB/s on the best reads.

    I also ran:

    # time dd if=/dev/zero of=file4 bs=1024 count=10000000
    10000000+0 records in
    10000000+0 records out

    real 2m45.938s
    user 0m5.430s
    sys 0m31.710s

which indicates over 100 MB/s over NFS.

    But from the disk and controller specs I expected above 500 MB/s.
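For what it's worth, 10,000,000 blocks of 1 KiB is about 9.8 GiB, and 9.8 GiB in 2m46s works out to roughly 60 MB/s rather than 100. A 1 KiB block size also spends much of its time in per-call overhead, so a larger block size gives a more representative figure. A sketch (the path is hypothetical; in practice it would sit on the NFS mount being tested):

```shell
# Larger blocks cut per-syscall overhead; conv=fsync makes dd flush
# to disk before it reports, so the page cache does not inflate the
# result. /tmp/ddtest.bin is a hypothetical path; adjust it to the
# mount you want to measure.
dd if=/dev/zero of=/tmp/ddtest.bin bs=1M count=256 conv=fsync
rm -f /tmp/ddtest.bin
```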

  7. #7


Are you using a 1G network? Because if so, you are getting good performance.
    If not, give us more details: what adapter, firmware, network?

    You will only see 500 MB/s and up on 10G networks; gigabit Ethernet tops out around 125 MB/s on the wire.

  8. #8


Yes Joey, I'm using a 1G network.

    The DSS server has an
    Intel Corporation 80003ES2LAN Gigabit Ethernet Controller (Copper) (rev 01)
    connected to a 3Com 4200G series switch.
    The controller on the server side is an Intel eepro 1000.

    But when I first posted I was referring to a way of measuring the internal RAID speed.
    The statistics graphs don't help me much in this case. I guess Open-E could include the output of some benchmark tool (even hdparm) in future releases.

  9. #9


I don't know the Areca controllers,
    but all my servers are Supermicro and I'm happy with them.

    I'm not running the NSS on a Supermicro, though, as I bought a ready-to-run solution and it was not based on Supermicro.

    Have you tried software RAID, just to rule out controller issues?
    If you believe it is the NIC, try benchmarking it with netcat.

    Something like
    cat /dev/zero | nc otherhost 5555
    and on the other host
    nc -l -p 5555 > file

    I believe this tests the NIC throughput independent of the disk performance.

    You can also run hdparm -t /dev/sda (or whatever your device is) to check raw disk read performance.
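One way to sanity-check that netcat pipeline is to run it against loopback first: if the local run is fast but the host-to-host run is slow, the network, not the disks, is the suspect. This is only a sketch; port 5555 is arbitrary, and nc option syntax differs between the traditional and OpenBSD variants.

```shell
# Loopback throughput sanity check (port 5555 is arbitrary).
# nc flags differ between netcat variants; -q 1 (quit 1 s after
# stdin EOF) is supported by the common Debian/Ubuntu builds.
nc -l -p 5555 > /tmp/ncsink.bin &
sleep 1
dd if=/dev/zero bs=1M count=64 | nc -q 1 127.0.0.1 5555
wait
wc -c < /tmp/ncsink.bin
rm -f /tmp/ncsink.bin
```

The dd sender gives you a transfer rate on stderr; comparing loopback against the real link isolates the NIC and switch from everything else.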

  10. #10


    Quote Originally Posted by HangaS
I tried iozone over NFS over GbE and got about 90 MB/s on the best reads.
    ... which indicates over 100 MB/s over NFS.

    But from the disk and controller specs I expected above 500 MB/s.
Megabits (Mb/s) or megabytes (MB/s)?

If the Hitachi 7K1000 can do about 80 MB/s, then 100 MB/s from RAID 5 is not too shabby.
    So I am not sure why you thought you would get even 500 Mbit/s.

    I agree, though, that it would be nice to have a utility on the DSS that could test the raw I/O performance.

    What are the servers you are connecting to the NAS with?

If you are running Windows, try using the Intel NAS performance tester.

And if you want to test your GbE network, take a look at netspeed.exe from OptimumX
    (assuming you have another Windows box to test against).

So far, almost all of our speed issues have come down to the server NICs not auto-negotiating properly.

    Rgds Ben
