
Thread: Are my iSCSI performance results normal?

  1. #1

    Quote Originally Posted by zeki893
    How did you resolve your driver issue?

Is sda the USB module?


hdparm -t /dev/sda

/dev/sda:
 Timing buffered disk reads:  58 MB in 3.02 seconds = 19.20 MB/sec

hdparm -t /dev/sdb

/dev/sdb:
 Timing buffered disk reads: 498 MB in 3.00 seconds = 165.88 MB/sec

hdparm -t /dev/sdc

/dev/sdc:
 Timing buffered disk reads: 174 MB in 3.01 seconds = 57.82 MB/sec

We created a support request for it and received a custom update to try. This seemed to work well until we updated to the current version; speeds are now back to <10 Mb/s.

Re your hdparm figures: are your two MD1000's identical, or do they have different numbers of drives? I would have expected your figures to be almost the same (assuming there was nothing else happening on the arrays at the same time).

    Rgds Ben

  2. #2


Where can I see which devices these names correspond to?
I actually have another local RAID5 volume, so sdb, I think, might be the local RAID5 volume on a PERC 5 with 6x1TB drives, and sdc the 2xMD1000 30x1TB RAID6 set.

I have made some changes since my last post, though: I now have each MD1000 as a 15x1TB RAID5, instead of one 30x1TB RAID6.

  3. #3


Here is an updated hdparm:

hdparm -t /dev/sda

/dev/sda:
 Timing buffered disk reads:  58 MB in 3.03 seconds = 19.16 MB/sec

hdparm -t /dev/sdb

/dev/sdb:
 Timing buffered disk reads: 492 MB in 3.00 seconds = 163.82 MB/sec

hdparm -t /dev/sdc

/dev/sdc:
 Timing buffered disk reads: 304 MB in 3.01 seconds = 101.09 MB/sec

hdparm -t /dev/sdd

/dev/sdd:
 Timing buffered disk reads: 264 MB in 3.00 seconds = 87.86 MB/sec
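
For what it's worth, the hdparm man page recommends repeating this test two or three times on an otherwise idle system to get stable figures. A throwaway loop like this (just a convenience sketch) saves the retyping:

for dev in /dev/sdc /dev/sdd; do
    # three buffered-read passes per device; eyeball or average the results
    for i in 1 2 3; do hdparm -t $dev; done
done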

  4. #4

    Quote Originally Posted by zeki893
    /dev/sdb: Timing buffered disk reads: 492 MB in 3.00 seconds = 163.82 MB/sec
    /dev/sdc: Timing buffered disk reads: 304 MB in 3.01 seconds = 101.09 MB/sec
    /dev/sdd: Timing buffered disk reads: 264 MB in 3.00 seconds = 87.86 MB/sec
G'day,
Well, I've not used the MD1000's, but I am surprised there is a 15% difference between the two, though at least they are close. I am also surprised by the gap between the RAID5 and RAID6 figures: logic would say the added spindles should have given the 30x1TB RAID6 set an edge, although perhaps there is an issue with the split across the two MD1000's.
Anyway, the performance here does seem low; I would have expected better.
A post here implies speeds of 600 MB/s are possible once the kernel's Read Ahead is tweaked.
Now, before I continue I want to say that I HAVE NOT tried this, and I am NOT a storage guru!
But in the Console Tools there is a Read Ahead option, which I believe to be the same setting. Setting it to 8192 or 16384 may improve things?
But I would get confirmation from an expert before you put it into production....
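
If you do get shell access, the setting behind that option is (I believe) the block-device read-ahead, which the standard blockdev tool can inspect and change. A generic Linux sketch, not DSS-specific advice:

blockdev --getra /dev/sdc           # show current read-ahead, in 512-byte sectors
blockdev --setra 16384 /dev/sdc     # set it to 16384 sectors

Note that --setra counts 512-byte sectors, so 16384 here means 8 MB of read-ahead, not 16 KB.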

Re the naming, I can't help much here; I'm not sure how DSS assigns device names or how you can confirm them.
But I think it's pretty clear at least that sdc/sdd are the MD1000's, since sdb's figures have been consistent.
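
That said, if you can get to a shell, generic Linux tools will usually show which physical device sits behind each name. A sketch, assuming these tools are present on the DSS box:

cat /proc/scsi/scsi          # vendor/model of each SCSI device, in probe order
ls -l /dev/disk/by-path/     # kernel names mapped to controller/enclosure paths
lsscsi                       # one line per device with its SCSI address, if installed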

    Rgds Ben.

  5. #5


Is there any update on this topic?
