
Thread: Bonding 802.3ad better performance?

  1. #21
    Join Date
    Oct 2007
    Location
    Toronto, Canada
    Posts
    108


    Quote Originally Posted by rogerk
    you can buy an extender for CX4 from Intel
    with the Intel connect cable you can go up to 100 m
    To be clear, it is not an "extender" but rather ultra-high-quality cables with an amplifier built into the CX4 connector, which allows the signal distance to be extended to 100 m.

  2. #22

    Old thread revived

    I have a similar issue.
    I have a Supermicro server with a 3ware 9650SE SATA-II PCIe RAID controller and 8 x 1 TB SATA drives set up in RAID 6, plus 2 onboard 1 Gb interfaces and 2 dual-port 1 Gb cards.
    eth0: MGMT
    eth1: Internet facing interface

    Bond0: 172.16.31.200 (eth2-5 bonded in 802.3ad)
    eth2: 192.168.1.1
    eth3: 192.168.2.1
    eth4: 192.168.3.1
    eth5: 192.168.4.1

    I then have 2 Supermicro 1U servers with 4 interfaces each; they are as follows:
    eth0: MGMT
    eth1: Internet and Dot1q trunking
    eth2 & eth3 are set up in a bond and connected back to the iSCSI target at IP address 172.16.31.200
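
    For reference, this is roughly what such an 802.3ad bond looks like when built by hand on a plain Linux box. This is only a minimal sketch using the generic sysfs/ifenslave interface; on Open-E DSS or XenServer the bond is normally created through the GUI instead, and the 172.16.31.201 address is just a made-up example on the storage subnet. The switch ports also have to be configured as a matching LACP group.
    Code:
    # load the bonding driver (creates bond0 by default)
    modprobe bonding

    # select 802.3ad (LACP) mode and link monitoring while bond0 is still
    # down and has no slaves
    echo 802.3ad > /sys/class/net/bond0/bonding/mode
    echo 100 > /sys/class/net/bond0/bonding/miimon

    # address the bond (example IP), bring it up, then enslave the two NICs
    ip addr add 172.16.31.201/24 dev bond0
    ip link set bond0 up
    ifenslave bond0 eth2 eth3

    # verify that the LACP aggregator negotiated with the switch
    cat /proc/net/bonding/bond0
    Keep in mind that a single iSCSI TCP session is hashed onto one physical link, so a bond like this gives redundancy and aggregate throughput across many sessions rather than 2 Gbit for a single stream.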

    I have a Linux and a Windows VM on each XenServer; the Windows system, running CrystalDiskMark 3.0, gets the following results:
    Read (MB/s):
    Seq: 94.95
    512K: 89.05
    4K: 16.53
    4K QD32: 67.09

    Write (MB/s):
    Seq: 75.67
    512K: 43.64
    4K: 3.130
    4K QD32: 3.461

    Everything runs fine, though these slowish speeds are a bit of a concern. My real issue is that every now and then sites on the Linux VMs fail to load for 10-15 seconds, and then they all load fine. During that time, if I have a file open in vi or nano on the CLI and try to save it, it also waits 10-15 seconds before the file saves and I get my bash prompt back; at that same moment all the sites start loading.

    I know this is not an internet problem, as the hosts are all pingable at the time and I can still press Enter and get a new line at my bash prompt; it's just that reads/writes seem to hang for 10-20 seconds from time to time.
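
    Next time it happens I could probably try to catch it on the XenServer host with something generic like this (assuming the usual syslog and sysstat tools are present; nothing Open-E-specific):
    Code:
    # watch for iSCSI/SCSI errors or session resets while the stall is happening
    tail -f /var/log/messages | grep -i -E 'iscsi|scsi|reset'

    # watch per-device latency; a jump in await during the stall points at the
    # storage path rather than the network
    iostat -x 2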

    I read in this thread that some people use something called MPIO (which I had never seen or heard of) and connect to the individual interface IPs rather than a single bond0 IP address.
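
    The rough idea, as far as I can tell, is that instead of one bonded IP the initiator logs in to the same target through each storage NIC separately, and dm-multipath merges the resulting paths into one device and load-balances across them. A sketch with plain open-iscsi/dm-multipath follows; the IQN is made up and the portal addresses are only examples (the per-NIC addresses above, which would have to be un-bonded on the target for this), and it is not XenServer- or Open-E-specific.
    Code:
    # discover the target through two separate portals (example addresses)
    iscsiadm -m discovery -t sendtargets -p 192.168.1.1:3260
    iscsiadm -m discovery -t sendtargets -p 192.168.2.1:3260

    # log in over both portals -> two sessions to the same LUN (example IQN)
    iscsiadm -m node -T iqn.2010-01.com.example:lun0 -p 192.168.1.1:3260 --login
    iscsiadm -m node -T iqn.2010-01.com.example:lun0 -p 192.168.2.1:3260 --login

    # dm-multipath then presents a single /dev/mapper device and spreads
    # I/O across both paths
    multipath -ll
    Unlike a bond, each session gets its own path, so a single LUN can use both links at once, which appears to be why people report better throughput with it.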

    As this is a production system in a datacenter, I'm reluctant to change too much in case I break it. Can someone with knowledge of iSCSI with Open-E and XenServer please make suggestions?

    Thanks in advance
    Barry Murphy

  3. #23


    I have seen iSCSI initiators drop their connections because of slow response times from the target. Also, if you are doing any replication, what is the speed of the volume replication link? (Please don't share this link with other services.)

    Or the RAID controller could be doing something on the back end, like a RAID health check, a disk verify, or a BBU check; that slows performance, and if everyone is hammering the array at the same time it could cause a delay. We would have to see the logs, or check the 3ware logs.
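
    If you can get at the 3ware CLI (tw_cli) on that box, something along these lines will show whether a verify, rebuild or BBU test lines up with the stalls. This is only a sketch, and the controller ID /c0 is an assumption, so use whatever "tw_cli show" lists:
    Code:
    # list controllers, then unit / drive / BBU status for controller 0
    tw_cli show
    tw_cli /c0 show

    # controller event (AEN) history, with timestamps to compare against the stalls
    tw_cli /c0 show alarms

    # battery backup unit details, including the last capacity test
    tw_cli /c0/bbu show all
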
    All the best,

    Todd Maxwell


    Follow the red "E"
    Facebook | Twitter | YouTube

  4. #24
    Join Date
    Oct 2008
    Posts
    69


    Hi,

    I found my old post; I finally found the trick in VMware that I never posted back here...

    In fact, you just have to change the round-robin policy (the IOPS option). I assume that you already have MPIO working (multiple connections from one host to one LUN).
    First, activate Round Robin in VMware (Manage Paths on the datastore), then connect to the ESX host over SSH.

    List the VMFS disks on the ESX host:
    Code:
    ls /vmfs/devices/disks/eui*
    /vmfs/devices/disks/eui.65304753474c3552
    /vmfs/devices/disks/eui.65304753474c3552:1
    Get the current config:
    Code:
    /usr/sbin/esxcli nmp roundrobin getconfig --device eui.65304753474c3552
    Byte Limit: 10485760
    Device: eui.65304753474c3552
    I/O Operation Limit: 1000
    Limit Type: Default
    Use Active Unoptimized Paths: false
    Modify the IOPS option:
    Code:
    /usr/sbin/esxcli nmp roundrobin setconfig --type "iops" --iops=1 --device eui.65304753474c3552
    Verify the option:
    Code:
    /usr/sbin/esxcli nmp roundrobin getconfig --device eui.65304753474c3552
    Byte Limit: 10485760
    Device: eui.65304753474c3552
    I/O Operation Limit: 1
    Limit Type: Default
    Use Active Unoptimized Paths: false
    With this config applied on all my hosts for all my datastores, I get 170-180 MB/s read performance with 2 x 1 Gbit.

    Now my last question: can I have the same config with iSCSI failover? I would just need 2 virtual IPs.

    Thanks

    nsc

  5. #25


    That is correct: you will need 2 VIPs for it to work with MPIO.
    All the best,

    Todd Maxwell


    Follow the red "E"
    Facebook | Twitter | YouTube

  6. #26


    I also found some useful information in here; thanks, all.

  7. #27


    Hello, do you have any info about the cost of a 10 GbE switch and 3 x 10 GbE network cards?

    Both compatible with Open-E, of course ;-)


  8. #28
    Join Date
    Aug 2010
    Posts
    404


    We support many brands of NICs and switches, such as:
    Chelsio Communications, Intel, Myricom, NetEffect, Neterion, NetXen, SMC Networks, ... and many more.

    Please visit our website at:
    http://www.open-e.com/service-and-su...tibility-list/

    to find the compatibility list (if you have a brand/model that is not listed, please ask our support team for help with it).

    For prices, you can check the many websites that sell switches/NICs online, contact a reseller near you, or perhaps one of our forum readers can help with that.
