
Thread: VMware vSphere Recommended settings.

  1. #1
    Join Date
    Jan 2008
    Posts
    86

    Default VMware vSphere Recommended settings.

    G'day All,
    We are in the process of updating clients from ESX 3.5 to vSphere 4.0.
    So far we have used local storage for anything critical, but our next batch of clients needs the central storage provided by DSS.

    I've read through the posts here and on the VMware forums, and it seems there are a number of settings that need tweaking. The main post is now running to 8 pages, and quite a few of the replies refer to V6 beta versions, so I thought a summary list would be helpful.

    So, is anyone running a fairly standard config of 2 or 3 vSphere servers connecting to a shared iSCSI VMFS on DSS v6 (Adaptec SAS) over a GbE LAN (no jumbo frames)? The VMs range from file storage and email to terminal servers, so a real mix, with about 50 staff.
    I'm not too concerned if I/O throughput is reduced; I just really do not want to see any timeouts or lost LUNs, and would rather start from a position of strength and improve performance than struggle "trying" things out....

    These are the settings/values that have been listed. Are these still the recommended values, or should we only set them if we run into issues?

    MaxRecvDataSegmentLength = 65536
    MaxXmitDataSegmentLength = 65536
    MaxBurstLength = 16776192
    MaxOutstandingR2T = 8
    InitialR2T = No
    ImmediateData = Yes
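
    For reference, I gather these map onto an IET-style /etc/ietd.conf roughly like the sketch below (just an illustration; on DSS v6 they are set through the web GUI rather than by editing a file, and the target name and LUN path here are placeholders):

    Target iqn.2009-01.com.example:dss.lun0
        Lun 0 Path=/dev/vg00/lv00,Type=blockio
        MaxRecvDataSegmentLength 65536
        MaxXmitDataSegmentLength 65536
        MaxBurstLength 16776192
        MaxOutstandingR2T 8
        InitialR2T No
        ImmediateData Yes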

    What about iETD vs SCST?

    Writeback caching? (Note: I turn it off for replication by default anyway)

    Bonding?


    Rgds Ben

  2. #2

    Default

    Push.
    Also interested.

    No reply since June?
    Great...


    Regards
    Ralph

  3. #3
    Join Date
    Jan 2008
    Posts
    86

    Default

    G'day Ralph,
    Yep, I was surprised as well; normally this forum is pretty good that way. There are a couple of KB entries, but nothing that really ties it all together.
    We are in the process of doing the testing now. I'll update this thread if I find anything that works (or doesn't) for us.

    Rgds Ben.

  4. #4

    Default

    Ben, thanks for the update.
    Looking forward.... :-)

    Regards,
    Ralph

  5. #5

    Default

    Hi, just wanted to jump in on this thread... I've got two vSphere 4.0 hosts and two DSS v6 boxes running in an iSCSI failover configuration. I had used the defaults out of the box, but this weekend I played around with the iSCSI target tweaks listed above.

    My experience was that the tweaks worked, with the exception of MaxBurstLength = 16776192. When I set that, I lost contact with my LUNs; inside my VMs, basically all disk access froze. Luckily I still had console access, and when I set MaxBurstLength back to the default of 262144 and reset the iSCSI connections, my disks came back.

    I tried a MaxBurstLength of 1047552 and that worked. I then tried 2097152 and the LUNs froze again, so I set it back to 1047552 and ran Sunday and today with that value.
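
    For anyone comparing those numbers, here is roughly what they work out to (assuming they are plain byte counts):

        262144   = 256 KiB          (default)
        1047552  = 1 MiB  - 1 KiB   (worked)
        2097152  = 2 MiB            (froze the LUNs)
        16776192 = 16 MiB - 1 KiB   (froze the LUNs)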

    The changes that I saw were marginal. I ran a 32k 100% write test with Iometer: with the default settings I got 170 IOPS, and with the new tweaks I got 195 IOPS, so roughly 15% better in my unscientific testing.
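
    If anyone wants to try something similar from a Linux VM, an fio run along these lines should be a rough stand-in for that Iometer profile (fio instead of Iometer is my own substitution, and the test file path is a placeholder):

        fio --name=write32k --filename=/mnt/test/fio.dat --size=4g \
            --rw=write --bs=32k --ioengine=libaio --iodepth=16 \
            --direct=1 --runtime=60 --time_based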

    Has anyone else tried the MaxBurstLength setting, and what's your experience with it?

  6. #6

    Default

    Sorry, just to answer the other questions:

    What about iETD vs SCST? SCST

    Writeback caching? Off due to the iSCSI failover replication

    Bonding? Yes, using bonding on both the iSCSI channel and the replication channel (and management, for what it's worth). I have two built-in gig ports and a quad-port Intel NIC, so three pairs of network interfaces; the rough layout is sketched below.
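
    Just to illustrate the layout (the interface names are placeholders, and on DSS the bonds are actually created in the web GUI, not by hand):

        bond0 = eth0 + eth2  ->  iSCSI target traffic to the vSphere hosts
        bond1 = eth1 + eth3  ->  volume replication between the two DSS boxes
        bond2 = eth4 + eth5  ->  management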
