
Thread: Throughput

  1. #1

    Default Throughput

    Hi,

    How do I measure throughput between Open-E and XenServer to check that LACP is working correctly? I configured a bond (802.3ad) interface on Open-E and on XenServer (active-active), but inside a virtual machine a disk read/write test showed only:
    read - 121 MB/s
    write - 85.8 MB/s
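As a quick sanity check, those figures can be converted to link utilization; a minimal sketch (assuming decimal units, 1 MB = 10^6 bytes):

```python
# Convert observed MB/s to Gbit/s to see how much of the bonded link is used.
# Assumes decimal units: 1 MB = 10^6 bytes, 1 Gbit = 10^9 bits.

def mb_per_s_to_gbit_per_s(mb_per_s):
    return mb_per_s * 8 / 1000.0

print(mb_per_s_to_gbit_per_s(121))   # read: ~0.97 Gbit/s, i.e. about one 1GbE link
print(mb_per_s_to_gbit_per_s(85.8))  # write: ~0.69 Gbit/s
```

The read figure sitting right at one 1GbE link's worth of bandwidth is the classic symptom of a single TCP stream on an LACP bond.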

  2. #2
    Join Date
    Oct 2010
    Location
    GA
    Posts
    935

    Default

    For proper LACP bonding, both ends need to support the 802.3ad protocol. And even then, speeds are not simply added together: for example, a 2-NIC LAG will not provide 2 Gbit/s for a single I/O stream, but more likely 1 Gbit/s RX and 1 Gbit/s TX (provided there are multiple TCP connections).
    A better configuration for Xen would be MPIO, which can provide additional bandwidth as well as redundancy.
    This can help: http://www.ieee802.org/3/hssg/public...er_01_0407.pdf
    Last edited by Gr-R; 09-25-2013 at 04:35 PM.
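    To illustrate the single-stream limit described above, here is a toy sketch (not the actual Linux bonding driver code, which XORs MAC/IP/port fields) of how an 802.3ad-style layer3+4 transmit hash pins each TCP flow to one slave NIC:

```python
# Toy model of an 802.3ad-style transmit hash (simplified; illustrative only).

def xmit_slave(src_ip, dst_ip, src_port, dst_port, n_slaves):
    # A flow's 4-tuple always hashes to the same slave, so every packet of
    # one TCP connection leaves through the same NIC -- one connection can
    # never exceed a single link's speed.
    flow = (src_ip, dst_ip, src_port, dst_port)
    return hash(flow) % n_slaves

# Two different connections may land on different NICs...
a = xmit_slave("10.0.0.1", "10.0.0.2", 40000, 3260, 2)
b = xmit_slave("10.0.0.1", "10.0.0.2", 40001, 3260, 2)
# ...but the same connection always uses the same one.
assert a == xmit_slave("10.0.0.1", "10.0.0.2", 40000, 3260, 2)
```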

  3. #3

    Default

    Hi Gr-R,

    By MPIO, do you mean multipathing, without any port link aggregation?

  4. #4

    Default

    That's correct: MPIO is the better choice, since it provides multiple streams for I/O where a bond only provides a single stream. MPIO also provides path failover.
    All the best,

    Todd Maxwell



  5. #5
    Join Date
    Oct 2010
    Location
    GA
    Posts
    935

    Default

    Yes, MPIO will give you the aggregation you are looking for as well as redundancy. This can be done with multiple single paths, or multiple bonds.
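    For reference, on a Linux initiator MPIO is typically set up through dm-multipath; a minimal /etc/multipath.conf sketch (the vendor/product strings below are placeholders, not Open-E's actual values -- match whatever your DSS target actually reports):

```
defaults {
    user_friendly_names yes
}
devices {
    device {
        vendor  "VENDOR"              # placeholder: match the target's reported vendor
        product "PRODUCT"             # placeholder: match the reported product
        path_grouping_policy multibus # spread I/O over all paths (aggregation)
        path_checker tur              # periodic path health check (failover)
    }
}
```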

  6. #6

    Default

    Hi,
    I read this article http://blog.open-e.com/bonding-versus-mpio-explained/ and, if I understood correctly, when I have 2x 1GbE NICs on the server and the DSS storage and run multipathing on my XenServer, I should get ~2 Gbit/s of bandwidth and a theoretical throughput of 200 MB/s, correct?

  7. #7
    Join Date
    Oct 2010
    Location
    GA
    Posts
    935

    Default

    Quote Originally Posted by forall View Post
    Hi,
    I read this article http://blog.open-e.com/bonding-versus-mpio-explained/ and, if I understood correctly, when I have 2x 1GbE NICs on the server and the DSS storage and run multipathing on my XenServer, I should get ~2 Gbit/s of bandwidth and a theoretical throughput of 200 MB/s, correct?
    This is correct.
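    The arithmetic behind that figure, as a quick sketch (the ~20% overhead factor is an assumption for iSCSI/TCP framing, not an exact number):

```python
# Two 1 GbE paths under MPIO: raw line rate vs. a rough usable estimate.
links = 2
link_gbit = 1.0                           # 1 GbE per NIC
raw_mb_s = links * link_gbit * 1000 / 8   # 250.0 MB/s raw (decimal units)
usable_mb_s = raw_mb_s * 0.8              # assumed ~20% protocol overhead
print(raw_mb_s, usable_mb_s)              # 250.0 200.0
```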
