
Thread: Question about bonding

  1. #1

    Default Question about bonding

    I am hoping somebody can clear up my confusion. I am new to iSCSI and to understanding the whole deal about bonding NICs, so bear with me in my explanation. I am looking at building an iSCSI SAN box with 8 GbE ports on it. I will have at least 4 servers connecting to this SAN, maybe 2 more. My thought was to put a quad-port card in each of the servers, connected to a dedicated gigabit switch for the SAN. What I wanted to do was bond all 8 ports on the SAN and have each server's quad card bonded as well. At this time, I am not doing high availability or clustering. Each server will be connecting to its own target. Will this work to get the maximum performance out of the SAN and servers?

  2. #2
    Join Date
    Aug 2008
    Posts
    236

    Default

    You have some flexibility here.
    A couple of things you should know. Bonds provide great fault tolerance, but they are poor at adding performance for any single initiator. I would suggest you look into multipathing your targets across two or more interfaces instead.
    IMO you need two switches, and you don't really need more than two interfaces per initiator.
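
    For what it's worth, on a Linux initiator the usual way to multipath is to log into the same target once per portal IP and let device-mapper multipath merge the sessions into one device. A rough sketch, where the 10.0.0.x / 10.0.1.x addresses and the IQN are just placeholders for your own:

        # discover the target on each portal, then log in over both paths
        iscsiadm -m discovery -t sendtargets -p 10.0.0.1
        iscsiadm -m discovery -t sendtargets -p 10.0.1.1
        iscsiadm -m node -T iqn.2009-01.com.example:target0 -p 10.0.0.1 --login
        iscsiadm -m node -T iqn.2009-01.com.example:target0 -p 10.0.1.1 --login
        # verify both paths show up under one multipath device
        multipath -ll

    On Windows initiators the equivalent is the Microsoft iSCSI Initiator with the MPIO feature enabled and one session added per NIC.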

  3. #3
    Join Date
    Feb 2009
    Posts
    142

    Default

    The hardest thing to wrap your head around with bonding is that it doesn't multiply your bandwidth. If you have 4 gigabit cards bonded together you will NOT get 4 gigabits of bandwidth to a single client. You will get load balancing across all 4 cards, but any one connection will never see more than 1 gigabit; you just get simultaneous traffic across all 4 channels.

    Multipathing allows simultaneous access across 2 channels (or more) if you use the Least Queue Depth method of MPIO, so it appears you're getting more bandwidth, but you're really splitting the load.

    We have dual 10G cards in our 2 DSS boxes and just a single dual-port Ethernet card dedicated to iSCSI in each server doing MPIO. We have 6 servers right now, never really strain anything, and get good performance under Hyper-V.
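
    If anyone wants the same behaviour from a Linux initiator, the rough equivalent of the Least Queue policy is the queue-length path selector in /etc/multipath.conf (newer multipath-tools only; older versions just have round-robin). This is only a sketch, not our exact config:

        # /etc/multipath.conf -- put all paths in one group and send I/O to the least-busy one
        defaults {
            path_grouping_policy    multibus
            path_selector           "queue-length 0"
        }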

  4. #4

    Default

    Hey webguyz - are you running File or Block I/O under Hyper-V? I am running Block but I was curious what you were doing.

  5. #5

    Default

    Ok, perhaps bonding wasn't the correct word. It seems like every manufacturer has a different definition of bonding... So what I am really looking for is link aggregation. That would be 802.3ad?

  6. #6

    Default

    Quote Originally Posted by cphastings
    Hey webguyz - are you running File or Block I/O under Hyper-V? I am running Block but I was curious what you were doing.
    I am curious about this as well, as I am running Hyper-V.

  7. #7
    Join Date
    Feb 2009
    Posts
    142

    Default

    Doing File I/O but without write caching, because of the replication needed for autofailover. Took a performance hit when I had to turn it off. I tried it with Block I/O and couldn't see that much difference in performance. Nothing scientific, just using HD Tune Pro, and the results were close between File I/O and Block I/O. I had write cache on the volumes at the time I was testing.

  8. #8

    Default

    Hey dweberwr

    Link aggregation is still bonding; I have also heard it called a LAG (link aggregation group).
    So depending on who you are talking to, when you add 2 or more NIC ports together it's still bonding.

    802.3ad is a mode of bonding; it has been known to provide the best performance with DSS, but all of this depends on your network.

    The other bonding modes are:
    active-backup
    broadcast
    balance-xor
    balance-tlb
    balance-rr

    I agree with webguyz about bonding: it does not make the individual NICs any faster, it increases the aggregate throughput. There is a rough example of setting 802.3ad up by hand below.
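
    This is roughly what an 802.3ad bond looks like when built by hand on a plain Linux box (on the DSS you set it up from the web console, so treat this purely as an illustration; eth0-eth3 and the address are placeholders). The switch ports also have to be configured as an LACP group, or the bond will not come up properly:

        # load the bonding driver in 802.3ad (LACP) mode
        modprobe bonding mode=802.3ad miimon=100
        # bring up the bond interface and enslave the physical NICs
        ifconfig bond0 10.0.0.10 netmask 255.255.255.0 up
        ifenslave bond0 eth0 eth1 eth2 eth3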

  9. #9

    Default

    So in my scenario then, where I want to have all 8 gig ports "bonded" using 802.3ad/LACP, will that create a single logical interface that gives me the most throughput?

  10. #10
    Join Date
    Feb 2009
    Posts
    142

    Default

    Let's try explaining it another way.

    Let's say one of your servers is talking to the DSS and no others are. The very most throughput you will get is 1 gigabit. Say a second server starts talking as well; now you have 2 NICs talking, but each still only gives you a max of 1 gigabit. A 3rd server starts talking and now you have 3 gigabits of data flowing, but the most any one of them will give you is 1 gigabit. Think of a 2-lane highway as opposed to a 6-lane highway. All the cars go the same speed (55 mph, or 1 gigabit), but on the 6-lane highway more of them can get from point A to point B simultaneously, whereas on the 2-lane highway the cars still only go 55 mph (1 gigabit) and fewer of them can get from A to B at the same time.
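
    To put rough numbers on it: 1 gigabit is about 125 MB/s on paper, and something like 100-115 MB/s in practice once TCP and iSCSI overhead are taken out. So with all 8 ports bonded on the DSS, one server talking alone still tops out around 110 MB/s; four servers talking at once can push roughly 4 x 110 = ~440 MB/s in aggregate into the box; and only with around 8 busy initiators would you get anywhere near the bond's total capacity.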
