
Thread: Slow speed with Open-E and vSphere ESXi 4

  1. #1

Slow speed with Open-E and vSphere ESXi 4


    I've opened a ticket with Open-E already, but I thought I would post this here as well:

I'm getting really slow IOMeter scores from VMs running on ESXi 4: around 8 MB/s with my synchronous mirror task running (set up as per the PDF how-to) and 22 MB/s when it is stopped.

I've tweaked both VMware's and Open-E's iSCSI parameters (burst length, etc.) and it appears to have no effect.

The network is fine; iPerf between VMs on different ESXi hosts shows 940 Mbit/s, so I'd expect IOMeter speeds of at least around 80 MB/s.
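The arithmetic behind that expectation, as a quick sketch (the 940 Mbit/s figure is the iPerf result above):

```python
# Rough ceiling check: convert the measured iPerf rate to bytes to see
# what the iSCSI traffic has to fit under.
iperf_mbit = 940                 # measured between VMs on different hosts
wire_mb_per_s = iperf_mbit / 8   # raw TCP payload ceiling in MB/s
print(wire_mb_per_s)             # 117.5, so ~80 MB/s is a conservative floor
```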

    I've tried with bonded NICs, I've done the MPIO setup in VMWare as per the other How-To.

    I haven't tried anything with jumbo frames or flow control because AFAIK those are small tweaks which will only have a small effect and I need to boost the speed to 4X what it is now.

I don't think it is a physical disk problem; I have a RAID 5 array of Intel X25 SSDs, which should be smoking fast. I'm going to do some testing on a single SSD with the different I/O types (file/block) to see if that makes any difference.

Any suggestions greatly appreciated. I need this up and running in an acceptable configuration last week. Unfortunately there was a problem with my second ESXi host (it turned out just to be bad BIOS settings) which already set me back a week...

I'm going to do some more testing and post the results.

  2. #2


Ok, I did some more detailed tests with IOMeter with specific reads/writes/sizes/randomness on both my RAID 5 array and a simple File I/O volume.

Here are the numbers for the RAID 5 array (256K stripe size):
    Sequential access:
Transfer Size, Write Speed (MB/s), Read Speed (MB/s)
    512b, 6, 20
    4k, 44, 80
    16k, 108, 112
    32k, 113, 112
    128k, 115, 115
    256k, 112, 112

    Random access:
Transfer Size, Write Speed (MB/s), Read Speed (MB/s)
    512b, 1.4, 20
    4k, 10.8, 80
    16k, 36.5, 111
    32k, 55.8, 112
    128k, 64, 115
    256k, 51, 111
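As a side note, the MB/s figures above can be converted to IOPS to compare against drive specs; a quick sketch using values from the random-access table:

```python
# Convert a throughput figure into I/Os per second for a given transfer
# size: IOPS = (MB/s * 1024 KB/MB) / transfer size in KB.
def iops(mb_per_s, xfer_kb):
    return mb_per_s * 1024 / xfer_kb

rnd_4k_write = iops(10.8, 4)     # 4k random writes from the table
rnd_128k_write = iops(64, 128)   # 128k random writes from the table
print(round(rnd_4k_write))       # 2765
print(round(rnd_128k_write))     # 512
```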

Previously I was running the "all in one" test and measuring Windows file copy speeds, so I didn't see that in some cases I am getting acceptable speeds. But now I want to know how to get these speeds to translate into fast file copy speeds in my Windows 2003 guest OS. Is this now a VMware tuning issue? Does it have anything to do with my stripe size? Or the VMFS partition properties (block size, etc.)?

    Thanks for any suggestions


  3. #3


Try tweaking the stripe size, it may increase performance.
How much RAM have you got?

  4. #4


OK, I figured out how to get fast speeds with Open-E and VMware vSphere ESXi 4 and thought I would share:

I'm not going to go over the basic setup, but this assumes you can set up your volume/LUN and connect ESXi to it using two paths.

Step 1: configure the Open-E iSCSI target TCP/IP settings
Most of this I got from a forum suggestion.
On the Open-E box, hit Ctrl-Alt-W, log in to the console, go to tuning options, iSCSI daemon options, target options, <your iSCSI target>.
Here is the setting I made:
MaxBurstLength=1047552 (the recommended 16776192 knocked out my connection)

Step 2: configure the VMware iSCSI initiator TCP/IP settings
Once you have connected VMware to the Open-E LUN, edit /etc/vmware/vmkiscsid/iscsid.conf with vi (from the console, via esxcli, or over SSH).
Here are the settings I made. Some of these you can set in the VI Client under properties of the iSCSI storage adapter, advanced settings; I made those first, then checked this file to make sure they were all set:
discovery.sendtargets.iscsi.MaxRecvDataSegmentLength = 65536
node.session.iscsi.FirstBurstLength = 523776
node.session.iscsi.MaxBurstLength = 1047552
node.conn[0].iscsi.MaxRecvDataSegmentLength = 65536
node.session.iscsi.InitialR2T = No
node.session.iscsi.ImmediateData = Yes
node.conn[0].tcp.window_size = 65536
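If you would rather script that edit than do it by hand in vi, here is a minimal sketch. The `apply_settings` helper is my own illustration, not part of ESXi; the keys and values are the ones listed above, and the sample text stands in for the real /etc/vmware/vmkiscsid/iscsid.conf contents.

```python
# Sketch: rewrite "key = value" lines in an iscsid.conf-style text,
# appending any keys that are not present yet.
SETTINGS = {
    "discovery.sendtargets.iscsi.MaxRecvDataSegmentLength": "65536",
    "node.session.iscsi.FirstBurstLength": "523776",
    "node.session.iscsi.MaxBurstLength": "1047552",
    "node.conn[0].iscsi.MaxRecvDataSegmentLength": "65536",
    "node.session.iscsi.InitialR2T": "No",
    "node.session.iscsi.ImmediateData": "Yes",
    "node.conn[0].tcp.window_size": "65536",
}

def apply_settings(text: str, settings: dict) -> str:
    """Replace existing 'key = value' lines; append keys not found."""
    out, seen = [], set()
    for line in text.splitlines():
        key = line.split("=", 1)[0].strip()
        if key in settings:
            out.append(f"{key} = {settings[key]}")
            seen.add(key)
        else:
            out.append(line)
    for key, value in settings.items():
        if key not in seen:
            out.append(f"{key} = {value}")
    return "\n".join(out) + "\n"

# stand-in for the real config file contents
sample = ("node.session.iscsi.MaxBurstLength = 262144\n"
          "node.session.iscsi.InitialR2T = Yes\n")
print(apply_settings(sample, SETTINGS))
```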

    Step 3: configure Multipathing to use Round Robin

    Step 4: fix IOPS
At this point I was getting pretty good speeds, about 135 MB/s, but I was supposed to have 2x GbE speed, with two paths configured in VMware and two NICs bonded in Open-E... Then I found a forum thread in which someone links to an article. The key part of the article (for me anyway) was this:
You can reduce the number of commands issued down a particular path before moving on to the next path all the way to 1, thus ensuring that each subsequent command is sent down a different path. In a Dell/EqualLogic configuration, Eric has recommended a value of 3.

    You can make this change by using this command:

    esxcli --server <servername> nmp roundrobin setconfig --device <lun ID> --iops <IOOperationLimit_value> --type iops
So I ran that (the LUN ID is the identifier that starts with eui.0000...), setting my IOPS to 1 (from the default 1000), and got up to 225 MB/s reading and writing sequentially. Random writes are still a bit slow, 58 MB/s, but maybe that is RAID 5 related.
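That 225 MB/s lines up with what two 1 Gb paths can carry; a quick check against the iPerf number from earlier in the thread:

```python
# With an IOPS limit of 1, round robin alternates every command, so the
# ceiling is roughly twice the single-link payload rate measured with iPerf.
single_link = 940 / 8        # MB/s over one GbE link
two_paths = 2 * single_link
print(two_paths)             # 235.0, so 225 MB/s observed is close to line rate
```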

Now when I turn replication on, both sequential and random writes drop to 10 MB/s, so I still need to sort that out, but hopefully this is helpful for anyone having a hard time setting up ESXi with Open-E.



  5. #5


There is a PDF, I believe on Open-E's website, that has the IOPS / startup script edits for /etc/rc.local too.

I have a question though... where did you come up with MaxBurstLength=1047552?

I've seen only one post on the net with this setting, and it's in these forums. Everywhere else I read that this number "should be set to multiples of PAGE_SIZE", which is 4 kB (4096). See the FAQ.

    I just really want to make sure this number has some meaning rather than being something just tossed out there or even a typo.

I'm also guessing that you set FirstBurstLength to half the value of MaxBurstLength. What is the reasoning for that? I'm just trying to see why these settings are what they are.


    - D2G

  6. #6


OK, my conversion may be wrong. Should the page size be 512 instead of 4096? Then the math would work.
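The division does work out for 512; a quick check using the value from post #4:

```python
# 1047552 divides evenly into 512-byte units but not into 4096-byte pages,
# which is why the "multiple of PAGE_SIZE" rule only holds if the unit is 512.
max_burst = 1047552
print(max_burst % 512)     # 0
print(max_burst % 4096)    # 3072
print(max_burst // 512)    # 2046 units of 512 bytes
```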

  7. #7


TBH, this is the first time I've ever played around with TCP settings outside of Windows.

I got that number from the forum suggestion, as you said. I had the same problem as that poster: if I used the 16XXXXX value, it knocked out the connection to my target, so I used the suggested value instead.

  8. #8


Update: I've done more IOMeter testing with RAID 10 arrays (using six Intel X25-M SSDs) at 512k, 256k and 128k stripe sizes. The best results were at a 128K stripe size.

In IOMeter, I got 200+ MB/s transfer speeds for 16k and 32k, both random and sequential. I used 8 workers and 64 outstanding operations.

I pretty much always got terrible results the smaller the transfer size, is that normal? Like 6 MB/s writing (both random and sequential) at 512b.

Everything with the VMs seems pretty fast now. I can copy a large file from one folder to another at about 35 MB/s, but any time I try to clone a VM it is incredibly slow, like 1 MB/s. I haven't had the patience to let it complete yet. Anyone have any suggestions?

  9. #9


    I'm getting great speeds with IOMeter right now with the following settings:
    MaxRecvDataSegmentLength 65536
    MaxBurstLength 524288
    MaxXmitDataSegmentLength 65536
    FirstBurstLength 262144
    MaxOutstandingR2T 8
    InitialR2T No
    ImmediateData Yes
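These values follow the pattern questioned earlier in the thread; a quick check:

```python
# FirstBurstLength is half of MaxBurstLength, and this time every value is
# a clean multiple of a 4096-byte page.
settings = {
    "MaxRecvDataSegmentLength": 65536,
    "MaxBurstLength": 524288,
    "MaxXmitDataSegmentLength": 65536,
    "FirstBurstLength": 262144,
}
print(settings["FirstBurstLength"] * 2 == settings["MaxBurstLength"])  # True
print(all(v % 4096 == 0 for v in settings.values()))                   # True
```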

My VMware paths are configured for round robin, with an IOPS limit of 1.

The speeds I'm getting (all MB/s):
Size   Seq Write   Seq Read   Seq 50%   Rnd Write   Rnd Read
512b   6.9         35         16.8      15          35
4K     99          174        147       108         174
16k    222         223        300       222         222
32K    224         208        307       222         210

So I'm totally happy with that so far, but I still max out at 1.5 MB/s read and 1.5 MB/s write when cloning a machine. I've checked the switches for collisions and such and can't find anything, and the 50/50 speeds look nice and fast, so I have no idea what is going on there.

  10. #10


Fixed the table header so it is a bit more readable:
Size | Seq Write | Seq Read | Seq 50% | Rnd Write | Rnd Read
