
Thread: Question about bonding

  1. #1


    Quote Originally Posted by cphastings
    Hey webguyz - are you running File or Block I/O under Hyper-V? I am running Block, but I was curious what you were doing.
    I am curious about this as well, as I am running Hyper-V.

  2. #2
    Join Date
    Feb 2009
    Posts
    142


    Doing File I/O, but without write caching because of replication for autofailover. Took a performance hit when I had to turn it off. I tried it with Block I/O and couldn't see much difference in performance. Nothing scientific, just using HD Tune Pro, and the results were close between File I/O and Block I/O. I had write cache on the volumes at the time I was testing.

  3. #3


    Hey dweberwr

    Link aggregation is still bonding; I have also heard it called a LAG (link aggregation group).
    So depending on who you are talking to, when you add 2 or more NIC ports together, it's still bonding.

    802.3ad is a mode of bonding. It has been known to provide the best performance with DSS, but all of this depends on your network.

    Other bonding modes are:
    active-backup
    broadcast
    balance-xor
    balance-tlb
    balance-rr

    I agree with webguyz about bonding: it does not make the NICs faster, it increases the aggregate throughput.
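For reference, the Linux bonding driver identifies these modes by number as well as by name. A quick Python sketch of the mapping (taken from the kernel's bonding documentation; the dict name is just for illustration):

```python
# Linux bonding driver modes: numeric ID -> mode name,
# per the kernel's bonding documentation.
BOND_MODES = {
    0: "balance-rr",
    1: "active-backup",
    2: "balance-xor",
    3: "broadcast",
    4: "802.3ad",      # LACP, the mode discussed in this thread
    5: "balance-tlb",
    6: "balance-alb",
}

for num, name in sorted(BOND_MODES.items()):
    print(f"mode {num}: {name}")
```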

  4. #4


    So in my scenario then, where I want to have all 8 gigabit ports bonded using 802.3ad/LACP, will that create a single logical interface that gives me the most throughput?

  5. #5
    Join Date
    Feb 2009
    Posts
    142


    Let's try explaining it another way.

    Let's say one of your servers is talking to the DSS and no others are. The very most throughput you will get is 1 gigabit. Say a second server starts talking as well; now you have 2 NICs talking, but each only gives you a max of 1 gigabit. A third server starts talking, and now you have 3 gigabits of data flowing, but the most any one of them will give you is 1 gigabit. Think of a 6-lane highway as opposed to a 2-lane highway: all the cars go the same speed (55 mph, or 1 gigabit), but more of them can get from point A to point B simultaneously than on the 2-lane highway, where cars still only go 55 mph (or 1 gigabit) but fewer of them can get from A to B at the same time.
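The highway analogy boils down to simple arithmetic: each flow is pinned to one link, so per-flow speed is capped while aggregate throughput grows with the number of talkers, up to the number of links. A minimal Python sketch of this model (the function name and the ideal-case assumption of perfect flow distribution are mine):

```python
def aggregate_throughput_gbit(n_flows, n_links, link_speed_gbit=1.0):
    """Idealized bond throughput: each flow rides exactly one link,
    so no single flow ever exceeds one link's speed, and the total
    tops out once every link carries a flow."""
    return min(n_flows, n_links) * link_speed_gbit

# One server talking to the DSS over a 3-link bond: still only 1 gigabit.
print(aggregate_throughput_gbit(1, 3))  # 1.0
# Three servers: 3 gigabits aggregate, but each flow is still capped at 1.
print(aggregate_throughput_gbit(3, 3))  # 3.0
# A 4th and 5th server add nothing; the links are saturated.
print(aggregate_throughput_gbit(5, 3))  # 3.0
```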

  6. #6
    Join Date
    Aug 2008
    Posts
    236


    Quote Originally Posted by dweberwr
    So in my scenario then, where I want to have all 8 gigabit ports bonded using 802.3ad/LACP, will that create a single logical interface that gives me the most throughput?
    Any type of bonding (LACP, or 802.3ad specifically) uses some kind of XOR-based hash, which works best when you have:

    lots of sources and lots of destinations
    lots of sources and a single destination
    lots of different types of traffic

    The algorithm (depending on the switch) uses the least significant bits of either the source/destination MAC or the source/destination IP to determine which interface to use. Some can even use TCP ports.
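That slave-selection hash can be sketched in a few lines of Python. This is a simplified illustration of the Linux "layer2" xmit hash policy (XOR of the low MAC octets, modulo the slave count); the MAC addresses and slave count below are made-up values:

```python
def xmit_hash_layer2(src_mac: bytes, dst_mac: bytes, n_slaves: int) -> int:
    """Simplified 'layer2' transmit hash: XOR the least significant
    octet of the source and destination MACs, then take the result
    modulo the number of slave interfaces in the bond."""
    return (src_mac[-1] ^ dst_mac[-1]) % n_slaves

src = bytes.fromhex("001122334455")
# Different destinations can land on different slaves of a 4-port bond...
print(xmit_hash_layer2(src, bytes.fromhex("00aabbccdd01"), 4))  # slave 0
print(xmit_hash_layer2(src, bytes.fromhex("00aabbccdd02"), 4))  # slave 3
# ...but any single src/dst pair always maps to the same slave,
# which is why one flow never exceeds one link's speed.
```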

    If your goal is to increase I/Os and throughput, I'd suggest you look into MPIO. I have several VI clusters, and this has worked very well for us. We see roughly a 30-40% improvement across two interfaces.

  7. #7


    Quote Originally Posted by enealDC
    Any type of bonding (LACP, or 802.3ad specifically) uses some kind of XOR-based hash, which works best when you have:

    lots of sources and lots of destinations
    lots of sources and a single destination
    lots of different types of traffic

    The algorithm (depending on the switch) uses the least significant bits of either the source/destination MAC or the source/destination IP to determine which interface to use. Some can even use TCP ports.

    If your goal is to increase I/Os and throughput, I'd suggest you look into MPIO. I have several VI clusters, and this has worked very well for us. We see roughly a 30-40% improvement across two interfaces.
    The main function of this SAN will be to host virtual machines on 3 or 4 host servers. It will also be used for disk-to-disk backups and long-term file storage. My primary concern is having enough performance to run the virtual machines, about 25-30 when all is said and done. So what should I do to get the best performance?

  8. #8


    I should clarify my last question: what specific settings in DSS do I want, then, to provide the best throughput and performance? I was thinking about it last night at home and finally understood some of the concepts and examples that were mentioned here.
