
Thread: Bonding 802.3ad better performance?

  1. #11
    Join Date: Oct 2008
    Posts: 69

    But CX4 is limited to 15m, right? That looks like it will be too short for me (my two SANs are not on the same floor).

    With 10GBase-T it's 100m over copper, but I can't find a 10GBase-T switch (8 ports?) and a 10GBase-T NIC compatible with Open-E.

  2. #12

    You can buy a CX4 extender from Intel.
    With the Intel connect cable you can go up to 100m.

  3. #13

    Quote Originally Posted by nsc
    What is your bond config in Open-E?

    I tried balance-rr, balance-alb & 802.3ad.

    In balance-rr I get better write but lower read...

    Maybe Xen does better...
    In my case replication goes through a 2x GbE balance-rr bond.

    Bonding shouldn't impact read performance at all: only writes are replicated to the slave, while reads are served directly from your master box. So if you are not getting better than 1 GbE read performance, something is wrong with your ESX - DSS setup.

    And by the way, bonding will _not_ increase your replication/write throughput - for that you will need to move to InfiniBand or 10 GbE.
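
    For reference, a minimal sketch of what the bond modes discussed in this thread look like on a plain Linux box (DSS itself configures this through its GUI; the interface names and IP below are placeholders):

        # Load the bonding driver and pick a mode via sysfs (classic approach):
        #   balance-rr  - stripes packets across both links; can raise single-stream
        #                 write throughput but may reorder packets
        #   802.3ad     - LACP; each flow sticks to one link, so a single iSCSI
        #                 session never exceeds 1 GbE
        #   balance-alb - adaptive load balancing, no special switch support needed
        modprobe bonding
        echo balance-rr > /sys/class/net/bond0/bonding/mode
        echo 100 > /sys/class/net/bond0/bonding/miimon
        ifconfig bond0 10.0.0.1 netmask 255.255.255.0 up
        ifenslave bond0 eth3 eth5
        cat /proc/net/bonding/bond0    # verify mode and slave state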

  4. #14
    Join Date: Oct 2008
    Posts: 69

    I agree that replication should not impact read performance.

    My ESXi 4 uses MPIO with VMware Round Robin: I see activity on both of my NICs for iSCSI traffic (but maybe it's not really working...).

    So what is your teaming mode in DSS to get 170 MB/s in read?

    I tried balance-rr (lower read performance) and balance-alb.

    What is yours?

  5. #15

    Quote Originally Posted by nsc
    I agree that replication should not impact read performance.

    My ESXi 4 uses MPIO with VMware Round Robin: I see activity on both of my NICs for iSCSI traffic (but maybe it's not really working...).

    So what is your teaming mode in DSS to get 170 MB/s in read?

    I tried balance-rr (lower read performance) and balance-alb.

    What is yours?
    Bonding between client and DSS? What for? MPIO does that for me.
    For example: DSS with 2x GbE ports,
    port A with 10.0.0.1/24 connected to switch A,
    port B with 10.0.1.1/24 connected to switch B.
    The client (Xen in this case) also has 2 GbE ports, connected to switch A and switch B, so each port is separate from the other and sits in its own network. For the 'balancing' I set up an iSCSI connection to each network (10.0.0.1 and 10.0.1.1) and put MPIO over them to balance the I/O requests (change path every 4 I/O requests) - this way I'm getting about 170 MB/s read. A sketch of that client-side setup follows.
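
    On a Linux client like the Xen box described above, that setup could be sketched with open-iscsi plus dm-multipath roughly as follows (the target IQN and WWID are placeholders; the "change path every 4 I/O requests" knob corresponds to rr_min_io here):

        # Log in to the same target once per network (one session per NIC/subnet)
        iscsiadm -m discovery -t sendtargets -p 10.0.0.1
        iscsiadm -m discovery -t sendtargets -p 10.0.1.1
        iscsiadm -m node -T iqn.2010-01.example:dss.target0 -p 10.0.0.1 --login
        iscsiadm -m node -T iqn.2010-01.example:dss.target0 -p 10.0.1.1 --login

        # /etc/multipath.conf - use both sessions at once, switching path every 4 I/Os
        multipaths {
            multipath {
                wwid                 360014051234567890abcdef000000000  # placeholder
                path_grouping_policy multibus
                rr_min_io            4
            }
        }

        # After restarting multipathd, check that both paths are active
        multipath -ll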

  6. #16
    Join Date: Oct 2008
    Posts: 69

    OK red, now I understand.

    I want to have HA Open-E, so I have only one virtual IP for the link between client & Open-E... I can't use MPIO with a single IP for the link... (Or maybe DSS V6 can give me more than one virtual IP?)

    I'll try your setup anyway.

    But with your setup you don't have automatic failover for your SAN, right?

  7. #17
    Join Date: Jul 2008
    Location: Austria, Vienna
    Posts: 137

    Quote Originally Posted by nsc
    OK red, now I understand.

    I want to have HA Open-E, so I have only one virtual IP for the link between client & Open-E... I can't use MPIO with a single IP for the link... (Or maybe DSS V6 can give me more than one virtual IP?)
    Yes, you can have more than one vIP - I use MPIO with 2 vIPs.
    regards,
    Lukas

    descience.NET
    Dr. Lukas Pfeiffer
    A-1140 Wien
    Austria
    www.dotnethost.at

    DSS v6 b4550 iSCSI autofailover with Windows 2008 R2 failover cluster (still having some issues with autofailover).

    2 DSS: 3HE Supermicro X7DBE BIOS 2.1a, Areca ARC-1261ML FW 1.48, 8x WD RE3 1TB, 1x Intel PRO1000MT Dualport, 1x Intel PRO1000PT Dualport.

    2 Windows Nodes: Intel SR1500 + Intel SR1560, Dual XEON E54xx, 32 GB RAM, 6 NICs. Windows Server 2008 R2.

  8. #18
    Join Date: Oct 2008
    Posts: 69

    I removed my bond for iSCSI (eth3 / eth5):

    on eth3: 10.0.0.1, jumbo frames (MTU 9000)
    on eth5: 10.0.0.3, jumbo frames (MTU 9000)

    On the ESXi host I added 10.0.0.3 for iSCSI.

    Now I have my 4 paths to access my iSCSI volume.

    VMware Round Robin is active.

    No performance improvement with CrystalDiskMark.

    I'm using Open-E V5 (not V6); maybe it's a limitation of the IET / SCST option for iSCSI?

    I'm planning to move to V6...
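
    A quick way to confirm from the ESXi 4.x CLI that Round Robin is really the active path selection policy and that all four paths show up (a sketch using the classic esxcli nmp namespace; the naa ID is a placeholder and the syntax should be double-checked against your ESXi build):

        # List devices with their Path Selection Policy and working paths
        # (the LUN should show VMW_PSP_RR and four working paths)
        esxcli nmp device list

        # If the LUN is still on Fixed/MRU, switch it to Round Robin
        esxcli nmp device setpolicy --device naa.6001405xxxxxxxxxxxxxxxxx --psp VMW_PSP_RR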

  9. #19

    I had to tweak the MPIO settings to get any increase in performance at all - 4 I/Os per path instead of the default 1000.

  10. #20
    Join Date: Oct 2008
    Posts: 69

    I can't find this kind of option in VMware.

    According to a blog post, I just have to activate the Round Robin option.

    I'll wait for V6 then...
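
    For what it's worth, the "I/Os per path" setting the previous poster mentions also exists on the ESXi side: the Round Robin PSP switches paths every 1000 I/Os by default, and on ESXi 4.x it can be lowered from the CLI (a sketch; the naa ID is a placeholder and the exact syntax should be checked against your build):

        # Lower the Round Robin switching threshold from the default 1000 I/Os to 4
        esxcli nmp roundrobin setconfig --device naa.6001405xxxxxxxxxxxxxxxxx --type "iops" --iops 4

        # Confirm the new setting
        esxcli nmp roundrobin getconfig --device naa.6001405xxxxxxxxxxxxxxxxx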
