
Thread: Bonding 802.3ad: better performance?

  1. #1
    Join Date
    Oct 2008
    Posts
    69

    I agree that replication must not impact read performance.

    My ESXi 4 host is set up for MPIO with VMware Round Robin: I see activity on both of my Ethernet ports for iSCSI traffic (but maybe it's not actually working...).

    So what teaming mode do you use in DSS to get 170 MB/s on reads?

    I tried balance-rr (lower read performance) and balance-alb.

    Which mode do you use?
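    For reference, the teaming modes named above correspond to Linux bonding driver modes (DSS itself configures them through its GUI); a sketch of the equivalent kernel module options, for illustration only:

        # Linux bonding driver modes, expressed as module options
        modprobe bonding mode=balance-rr miimon=100               # mode 0: round-robin
        modprobe bonding mode=802.3ad miimon=100 lacp_rate=fast   # mode 4: LACP (needs switch support)
        modprobe bonding mode=balance-alb miimon=100              # mode 6: adaptive load balancing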

  2. #2

    Quote Originally Posted by nsc
    So what teaming mode do you use in DSS to get 170 MB/s on reads?
    Bonding between the client and the DSS? What for? MPIO does that for me.
    For example: a DSS with 2x GbE ports,
    port A with 10.0.0.1/24 connected to switch A,
    port B with 10.0.1.1/24 connected to switch B.
    The client (Xen in this case) also has 2 GbE ports, connected to switch A and switch B, so each port is separate from the other and sits in its own network. For the 'balancing' I set up an iSCSI connection to each network (10.0.0.1 and 10.0.1.1) and put MPIO over them to balance the I/O requests (switching path every 4 I/O requests). This way I get about 170 MB/s on reads.
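    A minimal sketch of that layout on a Linux client, assuming open-iscsi and dm-multipath (the target IQN is a placeholder):

        # Log in to the same target over both portals (one session per network)
        iscsiadm -m discovery -t sendtargets -p 10.0.0.1
        iscsiadm -m node -T iqn.2009-01.example:dss.target0 -p 10.0.0.1 --login
        iscsiadm -m node -T iqn.2009-01.example:dss.target0 -p 10.0.1.1 --login

        # /etc/multipath.conf - round-robin over both sessions,
        # switching path every 4 I/O requests (older defaults were 1000)
        defaults {
            path_grouping_policy multibus
            rr_min_io            4
        }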

  3. #3
    Join Date
    Oct 2008
    Posts
    69

    OK red, now I understand.

    I want HA Open-E, so I only have one virtual IP for the link between the client and Open-E... I can't use MPIO on a single IP for that link... (Or maybe DSS V6 can give me more than one virtual IP?)

    I'll try your setup anyway.

    But with your setup you don't have automatic failover for your SAN, right?

  4. #4
    Join Date
    Jul 2008
    Location
    austria, vienna
    Posts
    137

    Quote Originally Posted by nsc
    Or maybe DSS V6 can give me more than one virtual IP?
    Yes, you can have more than one vIP - I use MPIO with 2 vIPs.
    regards,
    Lukas

    descience.NET
    Dr. Lukas Pfeiffer
    A-1140 Wien
    Austria
    www.dotnethost.at

    DSS v6 b4550 iSCSI autofailover with Windows 2008 R2 failover cluster (still having some issues with autofailover).

    2 DSS: 3HE Supermicro X7DBE BIOS 2.1a, Areca ARC-1261ML FW 1.48, 8x WD RE3 1TB, 1x Intel PRO1000MT Dualport, 1x Intel PRO1000PT Dualport.

    2 Windows Nodes: Intel SR1500 + Intel SR1560, Dual XEON E54xx, 32 GB RAM, 6 NICs. Windows Server 2008 R2.

  5. #5
    Join Date
    Oct 2008
    Posts
    69

    I removed my bond for iSCSI (eth3 / eth5):

    on eth3: 10.0.0.1, jumbo frames (MTU 9000)
    on eth5: 10.0.0.3, jumbo frames (MTU 9000)

    On the ESXi host I added 10.0.0.3 for iSCSI.

    Now I have my 4 paths to the iSCSI volume.

    VMware Round Robin is active.

    No performance improvement with CrystalDiskMark.

    I'm using Open-E V5 (not V6); maybe it's a limitation of the IET / SCST option for iSCSI?

    I'm planning to move to V6...
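    To check that all four paths are actually seen and which policy is active, the ESXi host's CLI can list them; a sketch, assuming ESX/ESXi 4.x commands:

        # List all paths and their state
        esxcfg-mpath -l

        # Show each device with its current path selection policy
        esxcli nmp device list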

  6. #6

    I had to tweak the MPIO settings to get any increase in performance at all: 4 I/Os per path instead of the default 1000.

  7. #7
    Join Date
    Oct 2008
    Posts
    69

    I can't find that kind of option in VMware.

    According to a blog post, I just have to activate the Round Robin option.

    I'll wait for V6 then...
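    For reference: the per-path I/O limit isn't exposed in the vSphere 4 GUI, but it can be changed from the CLI. A sketch, assuming ESXi 4.x syntax and a placeholder device ID:

        # Switch paths every 4 I/Os instead of the default 1000
        esxcli nmp roundrobin setconfig --device naa.600... --type iops --iops 4

    (In ESXi 5 and later the same setting moved to "esxcli storage nmp psp roundrobin deviceconfig set".)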
