
Thread: Question about bonding

  1. #11
    Join Date
    Aug 2008
    Posts
    236

    Default

    Quote Originally Posted by dweberwr
    So in my scenario then, where I want to have all 8 gig ports bonded using 802.3ad/LACP, will that create a single logical interface to give me the most throughput?
    Any type of bonding (LACP, or 802.3ad specifically) will use some kind of XOR hash that only works well when you have:

    lots of sources and lots of destinations
    lots of sources and a single destination
    lots of different types of traffic

    The algorithm (depending on the switch) uses the least significant bits of either the source/destination MAC or the source/destination IP to determine which interface to use. Some switches can even use TCP ports.

    If your goal is to increase I/Os and throughput, I'd suggest you look into MPIO. I have several VI clusters and this has worked very well for us. We see roughly a 30-40% improvement across two interfaces.
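
    To make that concrete, here is a rough Python sketch (my own illustration, with made-up MACs; the exact formula varies by switch and driver) of how that XOR hash pins each conversation to one member port:

        # Illustration only: a rough model of the XOR-based hash a switch or a
        # bonding driver uses to pin each conversation to one physical port.
        # MAC/IP values are made up; the exact formula varies by vendor/driver.

        def layer2_hash(src_mac, dst_mac, n_ports):
            # "layer2"-style policy: XOR the MACs, take the result modulo port count.
            src = int(src_mac.replace(":", ""), 16)
            dst = int(dst_mac.replace(":", ""), 16)
            return (src ^ dst) % n_ports

        def layer2_3_hash(src_mac, dst_mac, src_ip, dst_ip, n_ports):
            # "layer2+3"-style policy: also mix in the last octet of each IP.
            mac_part = int(src_mac.replace(":", ""), 16) ^ int(dst_mac.replace(":", ""), 16)
            ip_part = int(src_ip.split(".")[-1]) ^ int(dst_ip.split(".")[-1])
            return (mac_part ^ ip_part) % n_ports

        san_mac = "00:1b:21:aa:bb:01"

        # One host talking to the SAN always lands on the same member port,
        # so a single session never gets more than one port's worth of bandwidth:
        host_mac = "00:1b:21:cc:dd:02"
        print(layer2_hash(host_mac, san_mac, 8))   # same port for every packet

        # Many hosts spread across the members, which is where bonding shines:
        hosts = ["00:1b:21:cc:dd:%02x" % i for i in range(1, 9)]
        print([layer2_hash(h, san_mac, 8) for h in hosts])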

  2. #12

    Default

    Quote Originally Posted by enealDC
    Any type of bonding (LACP, or 802.3ad specifically) will use some kind of XOR hash that only works well when you have:

    lots of sources and lots of destinations
    lots of sources and a single destination
    lots of different types of traffic

    The algorithm (depending on the switch) uses the least significant bits of either the source/destination MAC or the source/destination IP to determine which interface to use. Some switches can even use TCP ports.

    If your goal is to increase I/Os and throughput, I'd suggest you look into MPIO. I have several VI clusters and this has worked very well for us. We see roughly a 30-40% improvement across two interfaces.
    The main function of this SAN will be to host virtual machines on 3 or 4 host servers. It will also be used for disk-to-disk backups and long-term file storage. My primary concern is having enough performance to run the virtual machines, about 25-30 when all is said and done. So what should I do to get the best performance?

  3. #13

    Default

    I should clarify my last question: what specific setting in DSS do I want, then, to provide the best throughput and performance? I was thinking about it last night at home and finally understood some of the concepts and examples that were mentioned here.

  4. #14
    Join Date
    Feb 2009
    Posts
    142

    Default

    If it were me, I would probably try creating two 802.3ad bonds of 4 gigabit ports each to create two channels, so I could use MPIO on the servers.

    On the servers, use MPIO. I'm not sure whether MPIO will work with bonded NICs at the server end; we just have dual gig cards, so MPIO is simple.
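
    To put rough numbers on that (my own back-of-the-envelope sketch, not a measurement): each MPIO path is still pinned to a single gigabit member by the bond's hash, so one server tops out around 2 Gb/s, while the DSS side can feed several servers at once.

        # Rough model of the layout above: bond0 and bond1 on the DSS
        # (4 x 1 Gb each), each server running MPIO with one gigabit path to
        # each bond. Names and numbers are illustrative assumptions.

        PORT_GBPS = 1            # every physical port in this sketch is 1 Gb
        MEMBERS_PER_BOND = 4
        BONDS = ("bond0", "bond1")

        def server_ceiling(paths):
            # Each MPIO path rides one 1 Gb member, so a server's best case is
            # simply (number of paths) x 1 Gb.
            return paths * PORT_GBPS

        def dss_ceiling():
            # The DSS side can serve several servers at once, up to the sum of
            # all bond members.
            return len(BONDS) * MEMBERS_PER_BOND * PORT_GBPS

        print(f"per-server ceiling: ~{server_ceiling(len(BONDS))} Gb/s")   # ~2 Gb/s
        print(f"DSS-side ceiling:   ~{dss_ceiling()} Gb/s")                # ~8 Gb/s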

  5. #15

    Default

    Quote Originally Posted by webguyz
    If it were me, I would probably try creating two 802.3ad bonds of 4 gigabit ports each to create two channels, so I could use MPIO on the servers.

    On the servers, use MPIO. I'm not sure whether MPIO will work with bonded NICs at the server end; we just have dual gig cards, so MPIO is simple.
    So then in this example, essentially having two 4 Gb bonds on the DSS box, am I limited in how many servers can connect back to the box? I'm guessing not, since it sounds like they are just two separate channels, if you will.

  6. #16

    Default

    Hey dweberwr

    The best thing to do is to run some performance tests with different settings (bonds).
    Depending on your network (switch, NICs, routers, etc.) your results will vary.
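
    If it helps, even a throw-away probe like the Python sketch below is enough to compare bond settings (my own example, not an Open-E tool; a dedicated tool such as iperf will give better numbers). Run it with no arguments on one box to act as the sink, and with that box's IP as the argument on the other to push data for ten seconds:

        # Minimal TCP throughput probe for comparing bond settings.
        # Port, chunk size and duration are arbitrary placeholder values.
        import socket, sys, time

        PORT = 5201
        CHUNK = b"\0" * 65536
        DURATION = 10  # seconds per test run

        def serve():
            # Sink: accept one connection and discard whatever arrives.
            with socket.socket() as srv:
                srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
                srv.bind(("", PORT))
                srv.listen(1)
                conn, _addr = srv.accept()
                with conn:
                    while conn.recv(65536):
                        pass

        def blast(host):
            # Source: push data for DURATION seconds, then report the rate.
            with socket.create_connection((host, PORT)) as s:
                sent, start = 0, time.time()
                while time.time() - start < DURATION:
                    s.sendall(CHUNK)
                    sent += len(CHUNK)
            secs = time.time() - start
            print(f"{sent / secs / 1e6 * 8:.0f} Mbit/s over {secs:.1f} s")

        if __name__ == "__main__":
            serve() if len(sys.argv) == 1 else blast(sys.argv[1])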

  7. #17

    Default

    I don't actually have a DSS box yet; I'm going to eventually. I'm in the process of trying to get all my ducks in a row before it gets here.

  8. #18
    Join Date
    Feb 2009
    Posts
    142

    Default

    Quote Originally Posted by dweberwr
    I don't actually have a DSS box yet; I'm going to eventually. I'm in the process of trying to get all my ducks in a row before it gets here.
    In that case I would suggest a dual-port 10G card in the DSS instead of messing around with all those bonded ports. Supermicro has a nice one for less than $500. I think the quad-port gigabit Intels are around $400 each.

  9. #19

    Default

    Quote Originally Posted by webguyz
    In that case I would suggest a dual-port 10G card in the DSS instead of messing around with all those bonded ports. Supermicro has a nice one for less than $500. I think the quad-port gigabit Intels are around $400 each.
    Do you mean 10G on just the DSS or on the servers too? I can't afford a switch with nothing but 10G ports on it.

  10. #20
    Join Date
    Feb 2009
    Posts
    142

    Default

    I would put the dual 10G card in the DSS and use MPIO with dual gigabit adapters in the servers.

    A Dell 6224 with four 10G CX4 uplink ports costs $1,800.00 ($1,600.00 with just 2 CX4 ports), and you can use the 24 gigabit ports for 12 MPIO server links. I would dedicate this switch to doing iSCSI only.

    Couple this with the SuperMicro AOC-STG-i2 dual 10G card (less than $500.00) and you have a cost-effective iSCSI SAN infrastructure that should be able to support several virtualized servers and up to 2 DSSes if you're doing failover.

    SANs are not cheap, but ultimately they are the way to go.
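
    For anyone pricing this out, the arithmetic behind those numbers looks roughly like this (prices are the ones quoted above; the bandwidth figures are my own rough assumptions):

        # Back-of-the-envelope numbers for the layout above. Prices are the
        # ones quoted in this post; everything else is a rough assumption.
        switch_cost = 1800          # Dell 6224 with four 10G CX4 uplinks
        nic_cost    = 500           # SuperMicro AOC-STG-i2 dual-port 10G, roughly

        gig_ports      = 24         # gigabit ports on the switch
        links_per_host = 2          # dual gigabit MPIO links per server
        servers        = gig_ports // links_per_host
        print("servers supported:", servers)                        # 12

        # Worst case: every server saturating both links at once. In practice
        # VM hosts rarely all peak together, so the dual 10G uplink holds up.
        peak_demand = servers * links_per_host * 1   # Gb/s
        dss_uplink  = 2 * 10                         # dual-port 10G card
        print(f"worst-case demand ~{peak_demand} Gb/s vs ~{dss_uplink} Gb/s into the DSS")

        print("fabric cost (one DSS, no failover): ~$", switch_cost + nic_cost)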
