
Thread: Are my iSCSI performance results normal?

  1. #1

    Default Are my iSCSI performance results normal?

    Here is my setup:
    Dell 2950 with a PERC 6/E
    2 Dell MD1000s with 30x Seagate 1TB drives, RAID 6, 512k chunk size

    On Open-E it is a 1TB iSCSI file I/O initialized target with write-back (WB) cache.

    I've been running some benchmarks; here are my results:

    /dev/sdb1:
    Timing cached reads: 3392 MB in 2.00 seconds = 1696.67 MB/sec
    Timing buffered disk reads: 80 MB in 3.14 seconds = 25.45 MB/sec
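
    (That output is the form hdparm's -tT test prints, cached plus buffered reads, run here against the iSCSI disk as seen on the initiator, i.e. something like:)

    hdparm -tT /dev/sdb1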



    bonnie++ results for do246-322.krypt.com (file size 8000M, 1024 files):

    Sequential Output:  Per Char 39062 K/sec (91% CPU), Block 83639 K/sec (32% CPU), Rewrite 22921 K/sec (10% CPU)
    Sequential Input:   Per Char 19826 K/sec (46% CPU), Block 42551 K/sec (6% CPU)
    Random Seeks:       2629.9 /sec (6% CPU)
    Sequential Create:  Create 24834 /sec (68% CPU), Read 292277 /sec (99% CPU), Delete 4471 /sec (11% CPU)
    Random Create:      Create 24385 /sec (68% CPU), Read 96204 /sec (86% CPU), Delete 2363 /sec (7% CPU)
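
    (For anyone wanting to run the same kind of test, something along these lines produces that report format; the test directory is just an example, and -s 8000 matches the 8000M test size above:)

    bonnie++ -d /mnt/iscsi-test -s 8000 -u root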

  2. #2

    Default

    My bonnie++ results:
    http://98.126.34.42/disk.htm

  3. #3

    Default

    I forgot to mention I'm using balance-rr bonding with 2x Intel Corporation PRO/1000 PT Dual Port Server Adapter and 2x Broadcom Corporation NetXtreme II BCM5708 Gigabit Ethernet (rev 12).

    On the router I have a NetIron MLX configured with link aggregation across the 4 ports.
    I see about 200-300 Mbps on each port when I run bonnie++, with a peak around 750 Mbps total, so I think that translates to about 93 MB/s.
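
    (Quick sanity check on that conversion: 750 Mbit/s / 8 bits per byte = ~94 MB/s, so ~93 MB/s lines up, before allowing for TCP/iSCSI overhead.)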

    I ran HD Tach and I'm averaging about 45 MB/s.

    When I have bonnie++ running against one target and HD Tach against another, I see HD Tach's average drop to 10-20 MB/s.

    So are the results I'm getting normal? Is this the hardware limit of my setup?

    I just want to make sure I'm not underperforming. If I am, what can I do to improve performance?

  4. #4
    Join Date
    Jan 2008
    Posts
    86

    Default Balance Mode

    Quote Originally Posted by zeki893
    I forgot to mention I'm using balance-rr bonding with 2x Intel Corporation PRO/1000 PT Dual Port Server Adapter and 2x Broadcom Corporation NetXtreme II BCM5708 Gigabit Ethernet (rev 12).

    On the router I have a NetIron MLX configured with link aggregation across the 4 ports.
    I see about 200-300 Mbps on each port when I run bonnie++, with a peak around 750 Mbps total, so I think that translates to about 93 MB/s.

    I ran HD Tach and I'm averaging about 45 MB/s.

    When I have bonnie++ running against one target and HD Tach against another, I see HD Tach's average drop to 10-20 MB/s.

    So are the results I'm getting normal? Is this the hardware limit of my setup?

    I just want to make sure I'm not underperforming. If I am, what can I do to improve performance?
    G'day,
    One way to review the "raw" speed of your array is to download the logs and look for tests.log; in there you should see an hdparm test. What is the result?
    On one system we have 12x 1TB Seagates on an Adaptec 5405, all in a RAID 6, and the raw speed was 135.01 MB/sec, yet the network speed was <10 MB/s (the fault was with the Intel drivers in DSS).
    Also, as far as I know balance-rr is switch-independent (as opposed to 802.3ad), so I don't know the effect if you also bond the switch ports.
    Also, when you combine cards with different chipsets into a bonded interface you can get compatibility issues there too (fault tolerance works, but not load balancing).
    Try putting all the traffic through a single card and see if it makes a difference; as we had a problem with the Intel PRO/1000s, try using just one and compare with the Broadcom.
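
    (If you can get to a shell or pull it from the logs, the kernel's view of the bond, i.e. mode, slaves and link state, lives under /proc/net/bonding; this assumes the bond interface is named bond0:)

    cat /proc/net/bonding/bond0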

    Rgds Ben

  5. #5

    Default

    How did you resolve your driver issue?

    Is sda the USB module?


    hdparm -t /dev/sda
    *-----------------------------------------------------------------------------*


    /dev/sda:
    Timing buffered disk reads: 58 MB in 3.02 seconds = 19.20 MB/sec

    *-----------------------------------------------------------------------------*
    hdparm -t /dev/sdb
    *-----------------------------------------------------------------------------*


    /dev/sdb:
    Timing buffered disk reads: 498 MB in 3.00 seconds = 165.88 MB/sec

    *-----------------------------------------------------------------------------*
    hdparm -t /dev/sdc
    *-----------------------------------------------------------------------------*


    /dev/sdc:
    Timing buffered disk reads: 174 MB in 3.01 seconds = 57.82 MB/sec

    *

  6. #6

    Default

    Dear zeki893,

    Yes, sda is the USB module. So your read performance is 165.88 MB/s:

    /dev/sdb:
    Timing buffered disk reads: 498 MB in 3.00 seconds = 165.88 MB/sec


    Regards,
    SJ

  7. #7
    Join Date
    Jan 2008
    Posts
    86

    Default

    Quote Originally Posted by zeki893
    How did you resolve your driver issue?

    Is sda the USB module?

    /dev/sda: Timing buffered disk reads: 58 MB in 3.02 seconds = 19.20 MB/sec
    /dev/sdb: Timing buffered disk reads: 498 MB in 3.00 seconds = 165.88 MB/sec
    /dev/sdc: Timing buffered disk reads: 174 MB in 3.01 seconds = 57.82 MB/sec
    We created a support request for it and received a custom update to try. This seemed to work well until we updated to the current version, and speeds are now back to <10 MB/s.

    RE your hdparm: are your two MD1000s identical, or do they have different numbers of drives? I would have expected your figures to be almost the same (assuming there was nothing else happening on the arrays at the same time).

    Rgds Ben

  8. #8

    Default

    Where can I see which device these names correspond to?
    I actually have another local RAID 5 volume,
    so sdb I think might be the local RAID 5 volume on a PERC 5 with 6x 1TB drives,
    and sdc is the 2x MD1000 30x 1TB drives in RAID 6.

    I have made some changes since my last post, though: each MD1000 is now a RAID 5 of 15x 1TB, instead of one 30x 1TB RAID 6.

  9. #9

    Default

    Here is an updated hdparm:

    hdparm -t /dev/sda
    *-----------------------------------------------------------------------------*


    /dev/sda:
    Timing buffered disk reads: 58 MB in 3.03 seconds = 19.16 MB/sec

    *-----------------------------------------------------------------------------*


    hdparm -t /dev/sdb
    *-----------------------------------------------------------------------------*


    /dev/sdb:
    Timing buffered disk reads: 492 MB in 3.00 seconds = 163.82 MB/sec

    *-----------------------------------------------------------------------------*


    hdparm -t /dev/sdc
    *-----------------------------------------------------------------------------*


    /dev/sdc:
    Timing buffered disk reads: 304 MB in 3.01 seconds = 101.09 MB/sec

    *-----------------------------------------------------------------------------*


    hdparm -t /dev/sdd
    *-----------------------------------------------------------------------------*


    /dev/sdd:
    Timing buffered disk reads: 264 MB in 3.00 seconds = 87.86 MB/sec

    *

  10. #10
    Join Date
    Jan 2008
    Posts
    86

    Default

    Quote Originally Posted by zeki893
    /dev/sdb: Timing buffered disk reads: 492 MB in 3.00 seconds = 163.82 MB/sec
    /dev/sdc: Timing buffered disk reads: 304 MB in 3.01 seconds = 101.09 MB/sec
    /dev/sdd: Timing buffered disk reads: 264 MB in 3.00 seconds = 87.86 MB/sec
    *
    G'day,
    Well, I've not used the MD1000s, but I am surprised there is a 15% difference between the two; at least they are close. I am also surprised at such a difference in performance between the RAID 5 and RAID 6 layouts: logic would say the added spindles should have given the 30x 1TB RAID 6 an edge, although perhaps there is an issue with the split across the two MD1000s.
    Anyway, the performance here does seem low; I would have expected better.
    A post here implies speeds of 600 MB/s are possible once the kernel's read-ahead is tweaked.
    Now, before I continue I wish to say that I HAVE NOT tried this, and I am NOT a storage guru!
    But in the Console Tools there is a Read Ahead option, which I believe to be the same setting, so setting it to 8192 or 16384 may improve things.
    But I would get confirmation from an expert before you put it into production....
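
    (For reference, on a plain Linux box that kernel read-ahead can be checked and set per block device with blockdev; on DSS the Console Tools option is the supported way, and the device name here is just an example:)

    blockdev --getra /dev/sdc
    blockdev --setra 16384 /dev/sdc

    blockdev works in 512-byte sectors, so 16384 corresponds to an 8 MB read-ahead.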

    RE the naming, I can't help here; I'm not sure how DSS decides or how you can confirm it.
    But I think it's pretty clear at least that sdc/sdd are the MD1000s, since sdb has been consistent.
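
    (On an ordinary Linux system you could confirm the mapping by listing the by-path links, which show which controller port each sdX sits behind; I don't know whether DSS exposes this, so treat it as a pointer only:)

    ls -l /dev/disk/by-path/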

    Rgds Ben.
