
Thread: MPIO + VMware 5.1 performance

  1. #1

    MPIO + VMware 5.1 performance

    Hi All,

    I'm currently in the process of moving from Openfiler to Open-E and am trialing the product to get an idea of how to set it up.

    Our solution includes 3x hosts, each with 3x 1GbE NICs for MPIO, and one storage server with 6x 1GbE NICs bonded into 3 interfaces. Everything is VLANed off so the traffic is correctly segmented, and the iSCSI traffic runs on its own switch.

    My issue at the moment is that, no matter what I do (so far at least), the performance on my VMs is only around 100 MB/s. I can clearly see that MPIO is working by monitoring the host with esxtop and watching the interfaces: the traffic is spread over them evenly, but it only sits at around 300-400 Mbit/s across the two interfaces (I'm only testing with two paths).

    VMware seems to be set up correctly, as it was working fine with Openfiler.

    I've also got the MTU set to 9000 on Open-E, and VMware is set to MPIO round robin with the IOPS limit set to 1.
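
    In case it helps, the round robin / IOPS=1 setting was applied per LUN with something along these lines (the naa ID below is just a placeholder for the actual device):

    # set the path selection policy for the LUN to Round Robin
    esxcli storage nmp device set --device naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_RR
    # switch paths after every single I/O instead of the default 1000
    esxcli storage nmp psp roundrobin deviceconfig set --device naa.xxxxxxxxxxxxxxxx --type iops --iops 1
    # confirm the settings took effect
    esxcli storage nmp device list --device naa.xxxxxxxxxxxxxxxx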

    So I was wondering: is there a way to monitor the spread of traffic from the Open-E server without an external tool? And is there anything I need to do to allow MPIO to work correctly with Open-E? I've watched the videos and have copied the setup.

    Any help would be appreciated.

    Thanks,

    Ben

  2. #2


    Ben,

    Not sure if this is what you are looking for, but if you look in Status > Statistics > Network you will find info on traffic across the various network interfaces.

    Hope this helps.

    Todd

  3. #3


    What is the quantity and type of disk that you are using?
    Before you get into testing with ESX, I'd test performance using Ubuntu or another recent Linux distro.
    Here is a suitable multipath.conf entry for Open-E devices:

    device {
        # Open-E DSS presents its iSCSI LUNs with SCST_FIO / SCST_BIO vendor strings
        vendor "SCST_[FB]IO"
        product ".*"
        # put all paths in one group so they are used round robin (active/active)
        path_grouping_policy multibus
        failback immediate
        getuid_callout "/sbin/scsi_id -g -u -s /block/%n"
        # switch to the next path after 100 I/Os
        rr_min_io 100
        # queue I/O rather than failing it when all paths are down
        no_path_retry queue
        path_checker tur
        features "1 queue_if_no_path"
    }
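
    Once that device section is in place, a quick sanity check from the Linux box would be something along these lines (the portal IP is just an example):

    # discover and log in to the Open-E target (substitute your portal IP)
    iscsiadm -m discovery -t sendtargets -p 192.168.10.220
    iscsiadm -m node --login
    # reload multipathd so it picks up the new device section (service name varies by distro)
    service multipath-tools reload
    # verify that all expected paths show up under one map
    multipath -ll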

    I'd use fio for testing sequential I/O and take the virtualization layer out of the picture. If you have at least 8 disks on the back end, you should be getting between 250 and 3xx MB/s of sequential throughput with iSCSI MPIO properly configured.
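
    As a rough starting point, something like this run against the multipath device would do (the device name is a placeholder; adjust block size and runtime to taste):

    # sequential read test straight against the multipath device, bypassing the page cache
    fio --name=seqread --filename=/dev/mapper/mpatha --rw=read --bs=1M \
        --direct=1 --ioengine=libaio --iodepth=32 --runtime=60 --time_based --group_reporting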
