
Thread: Bonding 802.3ad: better performance?



  1. #1
    Join Date: Oct 2008 · Posts: 69

    I'll try another switch then.

    I don't want to connect ESX directly to Open-E, because I have two Open-E servers and I want to use IP failover and HA.

    Here are some benchmarks from CrystalDiskMark 2.2 in a Windows Server 2008 VM on my ESXi (MPIO over 2 paths in round-robin mode on the VMware side, balance-alb on the Open-E side). Jumbo frames are enabled everywhere (switch, Open-E, VMware).

    Without synchronous replication tasks (the link between the Open-E boxes is only 1 Gbit):

    Sequential Read : 101.344 MB/s
    Sequential Write : 97.572 MB/s
    Random Read 512KB : 95.121 MB/s
    Random Write 512KB : 86.132 MB/s
    Random Read 4KB : 14.091 MB/s
    Random Write 4KB : 8.158 MB/s

    Test Size : 100 MB

    With synchronous replication tasks (the link between the Open-E boxes is only 1 Gbit):

    Sequential Read : 102.383 MB/s
    Sequential Write : 80.400 MB/s
    Random Read 512KB : 97.688 MB/s
    Random Write 512KB : 67.150 MB/s
    Random Read 4KB : 13.703 MB/s
    Random Write 4KB : 3.384 MB/s

    Test Size : 100 MB
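    [Editorial aside: a rough back-of-the-envelope check, not part of the original post, suggests those ~101 MB/s sequential numbers are already close to the practical ceiling of a single 1 Gbit/s link, so the bond is not adding anything for one session. The overhead figures below are approximations:]

```python
# Rough payload ceiling of one 1 Gbit/s link with 9000-byte jumbo frames.
LINK_BPS = 1_000_000_000           # 1 Gbit/s
RAW_MBPS = LINK_BPS / 8 / 1e6      # 125.0 MB/s raw on the wire

MTU = 9000                          # jumbo frame: IP + TCP headers + payload
IP_TCP = 40                         # IPv4 + TCP headers, no options
WIRE_OVERHEAD = 14 + 4 + 8 + 12     # Ethernet header + FCS + preamble + inter-frame gap

efficiency = (MTU - IP_TCP) / (MTU + WIRE_OVERHEAD)
ceiling = RAW_MBPS * efficiency     # ~124 MB/s before TCP/iSCSI protocol effects

print(round(ceiling, 1))
```

    Real iSCSI traffic loses a few more MB/s to PDU headers, ACKs and latency, so ~100 MB/s per link is about what one should expect.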


    I tried other bonding modes on Open-E without success:
    - balance-rr gives better write performance, but only 50 MB/s read...
    - 802.3ad is not working

    For me a good setup would be something like:
    - ESXi using MPIO, with 2 NICs connected to an iSCSI switch
    - Open-E connected to the iSCSI switch with 2 NICs in 802.3ad mode
    - Replication between the Open-E boxes on a dedicated LAN with 2 NICs in 802.3ad mode (maybe 2 direct cables in 802.3ad?)

    Is this possible?

  2. #2
    Join Date: Nov 2008 · Location: Hamburg, Germany · Posts: 102

    802.3ad bonding has been discussed several times on this forum. As nsc said, you will not get any performance boost for a single host connection via 802.3ad, since the switch will route all the traffic from one IP on the left side to the other IP on the right side through the same link. If the link carrying the actual traffic fails, it will re-route to the remaining link(s).

    If you need higher single-connection speeds, you will have to go with 10GbE. Otherwise you will need multiple servers (likely with odd and even IPs) connected to the switch and your Open-E DSS to gain an overall throughput boost. One connection cannot exceed 1 Gbit/s with 802.3ad bonding.
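    [Editorial aside: the reason a single connection is pinned to one link is that 802.3ad chooses the outgoing port with a deterministic hash of the frame's addresses. A simplified model of the Linux bonding driver's default layer2 transmit hash policy (the real driver XORs the last octets of the source and destination MACs) makes the point:]

```python
def layer2_slave(src_mac: str, dst_mac: str, n_slaves: int) -> int:
    """Simplified model of Linux bonding xmit_hash_policy=layer2:
    XOR the last octet of source and destination MAC, modulo slave count."""
    last = lambda mac: int(mac.split(":")[-1], 16)
    return (last(src_mac) ^ last(dst_mac)) % n_slaves

# A single ESXi <-> DSS MAC pair always hashes to the same slave, so one
# iSCSI session can never use more than one link's worth of bandwidth.
# (Example MAC addresses are made up for illustration.)
esx, dss = "00:11:22:33:44:55", "00:aa:bb:cc:dd:ee"
print(layer2_slave(esx, dss, 2))   # same slave for every frame of this flow
```

    Only traffic between different MAC (or, with other hash policies, IP/port) pairs spreads across the slaves, which is why budy's multiple-servers suggestion is the way to use the aggregate bandwidth.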

    Cheers,
    budy
    There's no OS like OS X!

  3. #3
    Join Date: Oct 2008 · Posts: 69

    Hi budy,

    Do you have any info about the cost of a 10GbE switch and three 10GbE network cards?

    Both compatible with Open-E, of course ;-)

    Thanks

  4. #4

    Hi,
    we have successfully set up our systems with a 10GbE CX4 network:
    - dual-port Intel 10GbE CX4 PCI-E v2 ~ 600 €
    - 6-port HP 10GbE CX4 switch ~ 4000 €

    CX4 is currently the cheapest way to implement 10GbE.

    Greetings,
    roger

  5. #5
    Join Date: Oct 2008 · Posts: 69

    Thanks a lot for this info

  6. #6

    Quote Originally Posted by nsc
    I'll try another switch then. [...] Is this possible?

    100 MB/s is low for round-robin MPIO; I'm getting around 170-180 MB/s read and about 100 MB/s write (with replication over 1 Gbit). My setup: Xen, 2 Gbit switches and 2 DSS boxes.
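    [Editorial aside: a toy model, not how ESX or Xen actually schedule I/O, of why per-command round-robin MPIO can beat bonding here. Each path is an independent iSCSI session with its own MAC/IP pair, so the 802.3ad single-flow limit does not apply and both links carry payload at once:]

```python
# Toy model: spread 8 outstanding iSCSI reads round-robin over 2 paths.
PATHS = 2
ios = [f"io{i}" for i in range(8)]
per_path = {p: [] for p in range(PATHS)}
for i, io in enumerate(ios):
    per_path[i % PATHS].append(io)      # alternate commands across sessions

# With an even split, aggregate throughput approaches PATHS x single-link rate.
single_link_mbps = 110                   # rough practical 1 Gbit/s payload rate
aggregate = single_link_mbps * PATHS
print(per_path, aggregate)
```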

  7. #7
    Join Date: Oct 2008 · Posts: 69

    What is your bond config in Open-E?

    I tried balance-rr, balance-alb and 802.3ad.

    With balance-rr I get better write but lower read...

    Maybe Xen does better...

  8. #8
    Join Date: Feb 2009 · Posts: 142

    We have had really good luck with Supermicro dual-port 10GbE low-profile cards: about $450.00 for the model AOC-STG-I2. It is based on an Intel 10GbE chip and fully supported by Open-E. We haven't found a less expensive supported dual-port 10GbE card.

  9. #9
    Join Date: Oct 2008 · Posts: 69

    But CX4 is limited to 15 m, right? That looks like it will be too short for me (my two SANs are not on the same floor).

    10GBase-T does 100 m over copper, but I can't find a 10GBase-T switch (8 ports?) and 10GBase-T NICs compatible with Open-E.

  10. #10

    You can buy a CX4 extender from Intel; with Intel Connects Cables you can go up to 100 m.
