
Thread: Slow speed with Open-E and vSphere ESXi 4

  1. #1

    Slow speed with Open-E and vSphere ESXi 4

    Hi,

    I've opened a ticket with Open-E already, but I thought I would post this here as well:

    I'm getting really slow IOMeter scores from VMs running on ESXi 4: around 8 MB/s with my synchronous mirror task running (set up as per the PDF how-to) and 22 MB/s when it is stopped.

    I've tweaked both VMware's and Open-E's iSCSI parameters (burst length, etc.), and it appears to have no effect.

    The network is fine; iPerf between VMs on different ESXi hosts shows 940 Mbit/s, so I'd expect IOMeter speeds of at least around 80 MB/s.
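Quick back-of-envelope behind that expectation (a sketch only; the ~12% allowance for TCP/IP and iSCSI header overhead is an assumption, not a measurement):

```python
# Rough ceiling for iSCSI throughput over a gigabit link.
# The overhead fraction is an illustrative estimate, not a measured value.
def link_ceiling_mb_s(mbits_per_s, overhead=0.12):
    """Convert a link speed in Mbit/s to an approximate usable MB/s."""
    raw_mb_s = mbits_per_s / 8          # bits -> bytes
    return raw_mb_s * (1 - overhead)    # subtract protocol overhead

print(round(link_ceiling_mb_s(940)))    # 103 -> so ~80-100 MB/s is a fair target
```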

    I've tried bonded NICs, and I've done the MPIO setup in VMware as per the other how-to.

    I haven't tried jumbo frames or flow control because, AFAIK, those are small tweaks with only a small effect, and I need to boost the speed to 4× what it is now.

    I don't think it is a physical disk problem: I have a RAID 5 array of IBM X25 SSDs, which should be smoking fast. I'm going to do some testing on a single SSD with the different I/O types (file/block) to see if that makes any difference.
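One caveat on RAID 5, though: parity makes small random writes expensive even on SSDs. A rough model (the per-disk IOPS figure below is a made-up placeholder, not a measurement of the X25):

```python
# Rough RAID 5 small-write model: every random write below a full stripe turns
# into four disk operations (read old data, read old parity, write data, write parity).
# per_disk_iops below is an illustrative placeholder value.
def raid5_random_write_iops(per_disk_iops, n_disks):
    return per_disk_iops * n_disks / 4   # 4-operation write penalty

def raid0_random_write_iops(per_disk_iops, n_disks):
    return per_disk_iops * n_disks       # no parity, no penalty

print(raid5_random_write_iops(3000, 4))  # 3000.0 -> a quarter of the raw capacity
print(raid0_random_write_iops(3000, 4))  # 12000
```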

    Any suggestions greatly appreciated. I needed this up and running in an acceptable configuration last week. Unfortunately, a problem with my second ESXi host (which turned out to be just bad BIOS settings) had already set me back a week...

    I'm going to do some more testing.

  2. #2

    OK, I did some more detailed IOMeter tests with specific reads/writes/sizes/randomness on both my RAID 5 array and a simple file I/O volume.

    Here are the numbers for the RAID 5 array (256 KB stripe size):
    Sequential access:
    Transfer Size, Write Speed (MB/s), Read Speed (MB/s)
    512 B, 6, 20
    4 KB, 44, 80
    16 KB, 108, 112
    32 KB, 113, 112
    128 KB, 115, 115
    256 KB, 112, 112

    Random access:
    Transfer Size, Write Speed (MB/s), Read Speed (MB/s)
    512 B, 1.4, 20
    4 KB, 10.8, 80
    16 KB, 36.5, 111
    32 KB, 55.8, 112
    128 KB, 64, 115
    256 KB, 51, 111
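To see where the array is seek-bound versus bandwidth-bound, the table figures can be converted to IOPS (a sketch assuming IOMeter is reporting decimal megabytes; if it uses binary megabytes the numbers shift by ~5%):

```python
# Convert MB/s at a given transfer size into I/O operations per second.
# Values below are taken from the random-access table above.
def iops(mb_per_s, transfer_bytes):
    return mb_per_s * 1_000_000 / transfer_bytes

print(round(iops(1.4, 512)))        # 2734 -> tiny random writes are IOPS-limited
print(round(iops(51, 256 * 1024)))  # 195  -> large transfers are bandwidth-limited
```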

    Previously I was running the "all in one" test and measuring Windows file-copy speeds, so I didn't see that in some cases I am getting acceptable speeds. Now I want to know how to translate these numbers into fast file-copy speeds in my Windows 2003 guest OS. Is this a VMware tuning issue? Does it have anything to do with my stripe size? Or with the VMFS partition properties (block size, etc.)?

    Thanks for any suggestions

    Hugh

  3. #3

    Try tweaking the stripe size; it may increase performance.
    How much RAM do you have?

  4. #4

    OK, I figured out how to get fast speeds with Open-E and VMware vSphere ESXi 4 and thought I would share.

    I'm not going to go over the basic setup; this assumes you can set up your volume/LUN and connect ESXi to it using two paths.

    Step 1: configure the Open-E iSCSI target TCP/IP settings.
    Most of this I got from http://forum.open-e.com/showthread.php?t=1542
    On the Open-E box, hit Ctrl-Alt-W, log in to the console, and go to tuning options > iSCSI daemon options > target options > <your iSCSI target>.
    Here are the settings I made:
    MaxRecvDataSegmentLength=65536
    MaxBurstLength=1047552 (the recommended 16776192 knocked out my connection)
    MaxXmitDataSegmentLength=65536
    FirstBurstLength=523776
    MaxOutstandingR2T=8
    InitialR2T=No
    ImmediateData=Yes

    Step 2: configure the VMware iSCSI initiator TCP/IP settings.
    Once you have connected VMware to the Open-E LUN, edit /etc/vmware/vmkiscsid/iscsid.conf with vi (from the console, via SSH, or via esxcli).
    Here are the settings I made. Some of these can be set in the VI Client under the iSCSI storage adapter's properties, advanced settings; I made those first, then checked this file to make sure they were all set:
    discovery.sendtargets.iscsi.MaxRecvDataSegmentLength = 65536
    node.session.iscsi.FirstBurstLength = 523776
    node.session.iscsi.MaxBurstLength = 1047552
    node.conn[0].iscsi.MaxRecvDataSegmentLength = 65536
    node.session.iscsi.InitialR2T = No
    node.session.iscsi.ImmediateData = Yes
    node.conn[0].tcp.window_size = 65536
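As an illustrative cross-check (key names taken from the settings quoted above): an iSCSI session negotiates most of these parameters down to the lower of what the two sides offer, so the initiator values should mirror the target-side ones, or you silently cap throughput at the smaller value:

```python
# Illustrative consistency check between the Open-E target settings (Step 1)
# and the ESXi iscsid.conf entries (Step 2). Values are from this post.
target = {
    "MaxRecvDataSegmentLength": 65536,
    "FirstBurstLength": 523776,
    "MaxBurstLength": 1047552,
}
initiator = {
    "node.conn[0].iscsi.MaxRecvDataSegmentLength": 65536,
    "node.session.iscsi.FirstBurstLength": 523776,
    "node.session.iscsi.MaxBurstLength": 1047552,
}
for key, value in initiator.items():
    name = key.rsplit(".", 1)[-1]   # strip the iscsid.conf prefix
    if target[name] != value:
        print(f"mismatch on {name}: target={target[name]}, initiator={value}")
    else:
        print(f"{name} agrees: {value}")
```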

    Step 3: configure Multipathing to use Round Robin
    http://www.kb.open-e.com/download/92/

    Step 4: fix IOPS
    At this point I was getting pretty good speeds, about 135 MB/s, but I was supposed to have 2× gigabit Ethernet bandwidth, with two paths configured in VMware and two NICs bonded in Open-E... Then I found this thread: http://communities.vmware.com/thread...5&tstart=0
    And in there someone links to this article:
    http://virtualgeek.typepad.com/virtu...e-vsphere.html
    The key part of the article (for me, anyway) was this:
    You can reduce the number of commands issued down a particular path before moving on to the next path all the way to 1, thus ensuring that each subsequent command is sent down a different path. In a Dell/EqualLogic configuration, Eric has recommended a value of 3.

    You can make this change by using this command:

    esxcli --server <servername> nmp roundrobin setconfig --device <lun ID> --iops <IOOperationLimit_value> --type iops
    So I ran that (the LUN ID was the identifier starting with eui.0000...), setting my IOPS to 1 (from the default of 1000), and got up to 225 MB/s reading and writing sequentially. Random writes are still a bit slow at 58 MB/s, but maybe that is RAID 5 related.
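The effect of that IOPS limit can be sketched with a toy model (my own simplification, not VMware's actual path scheduler): the limit is how many commands go down one path before round robin rotates to the next, so with the default of 1000 a short burst of I/O never leaves the first path.

```python
# Toy model of round-robin multipathing with an IOPS switch threshold.
# Counts how many commands each path carries for a given limit.
def path_usage(num_commands, iops_limit, num_paths=2):
    counts = [0] * num_paths
    path, sent_on_path = 0, 0
    for _ in range(num_commands):
        counts[path] += 1
        sent_on_path += 1
        if sent_on_path >= iops_limit:      # rotate to the next path
            path = (path + 1) % num_paths
            sent_on_path = 0
    return counts

print(path_usage(500, 1000))  # [500, 0]   -> second link sits idle
print(path_usage(500, 1))     # [250, 250] -> both links loaded evenly
```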

    Now when I turn replication on, both sequential and random writes drop to 10 MB/s, so I still need to sort that out, but hopefully this is helpful for anyone having a hard time setting up ESXi with Open-E.

    Cheers,

    Hugh

  5. #5

    There is a PDF, I believe on Open-E's website, that also has the IOPS / startup script edits for /etc/rc.local.

    I have a question, though... Where did you come up with this:

    MaxBurstLength=1047552

    I've seen only one post on the net with this setting, and it's in these forums. Everywhere else I read that this number "should be set to multiples of PAGE_SIZE", which is 4 KB (4096).
    See the FAQ here: http://www.forum.open-e.com/faq.php?...q=headerdigest


    I just really want to make sure this number has some meaning rather than being something just tossed out there, or even a typo.

    I'm also guessing that you set FirstBurstLength to half the value of MaxBurstLength. What is the reasoning for that? I'm just trying to see why these settings are what they are.

    Thanks!

    - D2G

  6. #6

    OK, my conversion may be wrong. Should the page size be 512 instead of 4096? Then the math would work.
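The arithmetic does work for 512: none of the burst values quoted in this thread are multiples of 4096, but all are exact multiples of 512, and FirstBurstLength is exactly half of MaxBurstLength. This can be checked directly:

```python
# Divisibility check on the burst values from posts #4 and #5.
values = {
    "MaxBurstLength": 1047552,
    "FirstBurstLength": 523776,
    "recommended MaxBurstLength": 16776192,
}
for name, v in values.items():
    print(f"{name}: multiple of 4096? {v % 4096 == 0}, multiple of 512? {v % 512 == 0}")
# 1047552 = 2046 * 512, and 523776 * 2 = 1047552 (the halving D2G guessed at).
```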

  7. #7

    I have been working with the Open-E trial version for about two weeks now, and I am unable to get better write performance than 20 MB/s from a VM running IOMeter with a 64 KB sequential write pattern. Reads with the same settings are fine, around 100 MB/s.

    There is no other traffic on the switch, and the ESX servers are otherwise empty; all tuning options seem to have no effect on the write numbers. Even configuring the underlying drives as RAID 0 brings very little improvement. The underlying hardware is a Tyan 2882 dual-CPU board with both CPUs populated, 8 GB of RAM, two 3ware 9550SXU-9LP controllers, and 16 enterprise SATA drives from WD and Seagate.

    Is this really all the performance I can expect from the system, or is something just wrong with my setup?

    Thanks for your input.

  8. #8

    What version of DSS V6 are you running?

    Do you have the 3ware controller set to "Performance", using the write-back cache?

    Do you have the Write Back set for the LUN in the Target?

    We see that you sent in a support ticket, but you did not provide any logs to help us help you. Please send in the log file from the GUI.
    All the best,

    Todd Maxwell


    Follow the red "E"
    Facebook | Twitter | YouTube

  9. #9

    I am running version dss6.0up45_b4622.oe_i, and I do have the 3ware cards set to Performance. But how do I set write back for the LUN? It is set on the card; is that what you mean?

    I also sent in the log files; it took me a bit, since I completely wiped the drive configuration to start over with RAID 10.

    Thanks for the Response.

  10. #10

    For the iSCSI Target / LUN set the WB "Write Back" option.

    Check that your RAID 10 is set to the following on your controller, as this is what I know they recommend.


    Stripe Size: 256 KB
    Read Cache: Intelligent (default; good for streaming) or Basic (good for random small-block I/O)
    Write Cache: On (default)
    Disk Cache: Enabled (default)
    StorSave: Perform
    Autoverify: Off
    Rapid Recovery: Disable
    Rebuild Mode: Low Latency
    Verify Mode: Low Latency

    For LSI on a RAID 10 (also for RAID 0 and 1):

    Stripe Size: 256 KB
    Read Policy: Always Read Ahead
    Write Policy: Write Thru (for streaming/sequential performance) or Write Back (for transactional/random workloads)
    IO Policy: Direct IO
    Disk Cache Policy: Enabled
    All the best,

    Todd Maxwell


    Follow the red "E"
    Facebook | Twitter | YouTube
