
Thread: Channel bonding using 802.3ad

  1. #1
    Join Date
    Nov 2008
    Location
    Hamburg, Germany
    Posts
    102

    Default Channel bonding using 802.3ad

    Hi all,

    I have set up channel bonding using LACP/802.3ad. While the setup itself seems to work, I don't get any speed bump. All transfers still max out at 1 Gbit/s, i.e. about 114 MB/sec.

    Shouldn't LACP provide higher throughput as well?
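
    For reference, on a generic Linux host the negotiated LACP state of a bond can be checked like this (bond0 is an assumed interface name; the Open-E console itself does not expose a shell):

    # Shows the bonding mode, the LACP partner details and the active slaves
    cat /proc/net/bonding/bond0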

    Cheers,
    budy
    There's no OS like OS X!

  2. #2

    Default

    Hi Budy,

    Try sending lots of data; with bonding you may not notice any improvement until a lot of traffic is in flight.

    Also make sure the switch supports the 802.3ad bond.
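
    For example, if the switch is a Cisco IOS model (just an illustration - the port numbers below are made up), the LACP trunk towards the storage box would look roughly like this:

    interface range GigabitEthernet0/1 - 2
     channel-group 1 mode active
    ! "mode active" negotiates LACP; "mode on" would create a static trunk instead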

  3. #3

    Default

    Bonding will not increase the speed of a single connection but widens the pipe. So when multiple connections are active concurrently, the bonded NICs will not be as congested as they would be without bonding.
    All the best,

    Todd Maxwell


    Follow the red "E"
    Facebook | Twitter | YouTube

  4. #4
    Join Date
    Nov 2008
    Location
    Hamburg, Germany
    Posts
    102

    Default

    Hmm, I did send lots of data. Actually, I ran two of these in parallel:

    # writes 16384 x 64 KB = 1 GiB of zeros
    dd if=/dev/zero of=/volume/testfile bs=64k count=16384

    The switch supports LACP, and I have verified that the trunks were configured correctly, but the overall throughput didn't exceed 114 MB/sec.
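
    On a Cisco IOS switch, for instance (an assumption about the hardware), the bundling state and the hash policy could be checked like this:

    show etherchannel summary
    show etherchannel load-balance

    The first shows whether the ports carry the "P" (bundled) flag; the second shows whether the hash uses source/destination MAC or IP.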

    On the other end I have an Apple Xserve that is also set up with a LACP bond - unfortunately it's the only component in the puzzle that I am not sure about. But as I said, the switch reports that two trunks are configured: my Open-E box and my Xserve.

    But from your reply I assume that LACP should provide a speed increase, so I will check my setup again.

    Thanks,
    budy
    There's no OS like OS X!

  5. #5
    Join Date
    Nov 2008
    Location
    Hamburg, Germany
    Posts
    102

    Default

    Okay, so it seems that channel bonding won't increase the speed of a single connection, will it?
    So, in other words, is it possible to have a single transfer exceed the 1 Gbit/s limit at all?

    I want to back up from my iSCSI snapshots to two LTO-4 drives, and each drive can take up to 120 MB/sec - together that is about 240 MB/sec, more than a single 1 GbE link (~114 MB/sec) can deliver. So is there any solution to this at all, or would I need to throw in a 10GbE card?

    Cheers,
    budy
    There's no OS like OS X!

  6. #6

    Default

    If you have a 10GbE NIC, your transfers should be much higher than with a 1GbE one. If you are using the 1GbE NICs, you can also try the Jumbo Frames option from the console screen: press CTRL + ALT + W, then go to Tuning options, where you will see the Jumbo Frames configuration settings.

    Refer to the specifications of your NIC; the frame size is typically 9000 bytes but can also be 9014. Keep in mind that the host and the switch must be configured for the same frame size as well.
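
    As a sketch of the host side on Linux (eth0, the 9000-byte MTU and the target address are assumptions; the switch ports must allow jumbo frames too):

    ip link set dev eth0 mtu 9000
    # Verify end to end: 8972 = 9000 - 20 (IP header) - 8 (ICMP header); -M do forbids fragmentation
    ping -M do -s 8972 192.168.0.10
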
    All the best,

    Todd Maxwell


    Follow the red "E"
    Facebook | Twitter | YouTube

  7. #7
    Join Date
    Nov 2008
    Location
    Hamburg, Germany
    Posts
    102

    Default

    Hi Todd,

    I see, thanks.

    Is there any bonding mode at all that could speed up transfers between two hosts? Not necessarily through a higher link rate, but by exploiting the extra aggregate bandwidth we have at hand? I think that if any mode can do this, balance-rr would be the one to go with.
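
    For the record, on a stock Linux bonding driver balance-rr is selected at module load time; a minimal sketch (the interface names are assumptions, and Open-E's own UI may not expose this mode):

    # Round-robin striping of packets across the slaves
    modprobe bonding mode=balance-rr miimon=100
    ifenslave bond0 eth0 eth1

    Note that balance-rr typically requires a static (non-LACP) trunk on the switch side, and out-of-order delivery often limits the gain for a single TCP stream.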

    Thanks,
    budy
    There's no OS like OS X!

  8. #8

    Default

    Compared with what a 1GbE NIC can do versus a 10GbE NIC, a bond will only widen the pipe, not raise the per-connection speed. If you are using iSCSI with the Microsoft initiator you can use the MPIO option. The link below provides more information on this subject.

    How to configure MPIO with Microsoft Initiator?

    http://kb.open-e.com/entry/57/

    Download the PDF below.
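
    As a rough sketch of the Windows side (Server 2008-era tools; the exact flags vary by Windows version, and the device string below is the identifier Microsoft's documentation uses for iSCSI-attached disks):

    rem Claim iSCSI-attached disks for MPIO (a reboot may be requested), then list them
    mpclaim -n -i -d "MSFT2005iSCSIBusType_0x9"
    mpclaim -s -d
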
    All the best,

    Todd Maxwell


    Follow the red "E"
    Facebook | Twitter | YouTube

  9. #9
    Join Date
    Aug 2008
    Posts
    236

    Default

    LACP and Cisco PAgP use an XOR of the source/destination MAC or IP addresses to determine how traffic is balanced across the links. You will generally only see the benefits when you have many traffic flows between many sources and destinations; any single flow always hashes onto the same physical link.
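
    To make that concrete, here is a toy version of the Linux driver's layer2 hash policy, which XORs the last octet of the two MAC addresses (the octet values below are made up):

    src=0x5e; dst=0x2b            # last octets of source and destination MACs
    echo $(( (src ^ dst) % 2 ))   # index into a 2-link bond: the same pair always picks the same link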

  10. #10
    Join Date
    Nov 2008
    Location
    Hamburg, Germany
    Posts
    102

    Default

    Okay, I have read that GlobalSAN is supposed to be able to do MPIO as well, although it is not mentioned as a feature in the manual, but I'll give it a try.

    If that doesn't work out, I will have to go hunting for some 10GbE cards - fortunately there are a couple available for OS X. As for Open-E, I guess the Intel PRO/10GbE will be supported?

    Cheers,
    budy
    There's no OS like OS X!
