
Thread: DRBD tuning with LSI & XenServer cluster

  1. #1

    DRBD tuning with LSI & XenServer cluster

    Hi there,

    can anybody recommend settings for the DRBD tuning options in Open-E, as shown at http://www.drbd.org/users-guide/s-th...t-tuning.html?

    We have
    * 2 x TAROX ParX iSCSI Server
    * first node:
    - LSI MegaRAID SAS 84016E
    - 6 x 476940MB SATA ST3500514NS disks + 4 x 953869MB SATA ST31000524NS disks as RAID 6
    - 1 x Intel Corporation Gigabit ET Dual Port Server Adapter
    - 1 x Intel Corporation 80003ES2LAN Gigabit Ethernet Controller (Copper) (rev 01)
    - 4GB RAM
    * second node:
    - LSI Logic / Symbios Logic LSI MegaSAS 9260 (rev 05)
    - 6 x 953869MB SATA ST31000524NS disks as RAID 6
    - 1 x Intel Corporation Gigabit ET Dual Port Server Adapter
    - 1 x Intel Corporation 82575EB Gigabit Network Connection (rev 02)
    - 4GB RAM

    Setup:
    - 1 NIC is for Management LAN traffic
    - 1 NIC for direct replication
    - 2 NICs for MPIO iSCSI traffic in a fully meshed switching network through HP 4204vl switches
    - 2TB Block I/O iSCSI volume used as XenServer SR
    - synchronous replication
    - iSCSI failover

    My questions:
    Do you have any recommendations on the settings that are in the tuning guide?
    - max-buffers & max-epoch-size are often set to 8000 for "high-performance" RAID controllers. Do you think the LSI controllers are in that league?
    - unplug-watermark depends heavily on the behaviour of the SCSI controller. Any experiences with LSIs?
    - sndbuf-size should be increased from the default of 128K to at least 512K, but I have read about values up to 2M. Any experiences? Does sndbuf-size = 0 (auto-sizing) work well? (See the sketch below for how these options fit together.)
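
    For reference, here is a sketch of how I understand these options would sit together in a drbd.conf net section. The resource name r0 is a placeholder, and the values are simply the ones discussed above, not a tested recommendation:

        resource r0 {
          net {
            # often raised to 8000 for "high-performance" RAID controllers
            max-buffers      8000;
            max-epoch-size   8000;
            # highly controller-dependent; must lie between 16 and max-buffers
            unplug-watermark 8000;
            # default is 128K; 0 would enable auto-sizing
            sndbuf-size      512k;
          }
        }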

    Additionally, I am thinking about setting the MTU of the replication interfaces to 9000 (jumbo frames), but in a first test this actually seemed to perform worse.
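
    In case it is relevant: on a plain Linux box I would verify that jumbo frames work end-to-end roughly like this (the interface name and peer IP are placeholders for our setup):

        # set the MTU on the dedicated replication interface (name assumed)
        ip link set dev eth1 mtu 9000
        # verify end-to-end: 8972 bytes payload + 20 (IP) + 8 (ICMP) = 9000,
        # and -M do forbids fragmentation
        ping -M do -s 8972 10.0.0.2

    If that ping fails, some hop (NIC or switch port) is not passing jumbo frames, which might explain the drop I saw.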


    I appreciate any kind of help, input, recommendations, and shared experiences.

    Thanks,

    Matthias

  2. #2


    For LAN replication over GbE or better, the Open-E defaults are already optimized. HOWEVER, you may not have enough memory. IMO, at least 8GB is a good starting point depending on your workload, ESPECIALLY when replication is involved.

    Jumbo frames aren't just about throughput; they also reduce CPU utilization, since larger frames mean fewer interrupts are generated. What kind of performance degradation did you see when you enabled them?

  3. #3


    Hi enealDC,
    good hint on the memory, but at the moment less than 1 GB of the 4 GB is in use, so I don't think there is a bottleneck in this area.

    With jumbo frames I get a decrease in throughput of about 15%, and we already have really little throughput: on the replication interface (which is dedicated!) I only see a maximum of 100 Mbit/s in peaks.
    So the performance of the iSCSI disks using the storage is between 4000 and 7000 KB/sec.
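
    To rule out the link itself, a raw TCP test with iperf between the two replication interfaces should show whether the network or the arrays are the limit; roughly (the peer IP is a placeholder):

        # on the peer node
        iperf -s
        # on this node, against the peer's replication address
        iperf -c 10.0.0.2 -t 30

    If that already tops out near 100 Mbit/s, the replication link (speed/duplex, cabling, switch port) is the problem rather than DRBD or the RAID arrays.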

    Matthias
