
Thread: Question about bonding

  1. #1


    Quote Originally Posted by enealDC
    Any type of bonding (LACP, i.e. 802.3ad specifically) will use some type of XOR hash that only works well when you have:

    lots of sources and lots of destinations
    lots of sources and a single destination
    lots of different types of traffic

    The algorithm (depending on the switch) uses the least significant bits of either the source/destination MAC or the source/destination IP to determine which interface to use. Some can even use TCP ports.

    If your goal is to increase I/Os and throughput, I'd suggest you look into MPIO. I have several VI clusters and this has worked very well for us. We see roughly a 30 - 40% improvement across two interfaces.
    The main function of this SAN will be to host virtual machines on 3 or 4 host servers. It will also be used for disk-to-disk backups and long-term file storage. My primary concern is having enough performance to run the virtual machines, about 25-30 when all is said and done. So what should I do to get the best performance?
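To make the hashing point in the quote above concrete, here is a minimal Python sketch (hypothetical illustration, not DSS or switch code) of the common layer-2 transmit-hash policy: XOR the last byte of the source and destination MAC addresses, then take the result modulo the number of links in the bond. One source/destination pair therefore always lands on the same physical link, which is why a single iSCSI session over a bond never exceeds one gigabit.

```python
def select_slave(src_mac: str, dst_mac: str, num_slaves: int) -> int:
    """Pick a bond slave the way a typical layer-2 xmit hash does:
    XOR the least significant byte of the source and destination MACs,
    modulo the number of slaves in the bond."""
    src = int(src_mac.split(":")[-1], 16)  # last byte of source MAC
    dst = int(dst_mac.split(":")[-1], 16)  # last byte of destination MAC
    return (src ^ dst) % num_slaves

# One host talking to one SAN target always hashes to the same link:
print(select_slave("00:11:22:33:44:01", "00:aa:bb:cc:dd:02", 4))

# Many hosts (different source MACs) spread across the four links:
for last in ("01", "02", "03", "04"):
    print(select_slave(f"00:11:22:33:44:{last}", "00:aa:bb:cc:dd:02", 4))
```

This is why bonding shines with "lots of sources" but does nothing for a single host-to-SAN flow.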

  2. #2


    I should clarify my last question: what specific setting in DSS do I want, then, to provide the best throughput and performance? I was thinking about it last night at home and finally understood some of the concepts and examples that were mentioned here.

  3. #3


    If it were me, I would probably try creating two 802.3ad bonds of 4 gigabit ports each to create 2 channels, so I could use MPIO on the servers.

    On the servers, use MPIO. I'm not sure whether MPIO will work with bonded NICs at the server end. We just have dual gig cards, so MPIO is simple.
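As a toy illustration of why MPIO helps a single host where a bond does not (a hypothetical model, not the actual DSS or initiator implementation): MPIO establishes a separate iSCSI session per NIC and a round-robin path policy alternates I/Os across them, so one host can drive both links at once instead of being pinned to one link by the bonding hash.

```python
from itertools import cycle

def mpio_round_robin(num_paths: int, num_ios: int) -> dict:
    """Distribute I/Os across paths the way a round-robin MPIO policy does:
    each successive I/O goes to the next available path in turn."""
    counts = {p: 0 for p in range(num_paths)}
    for _, path in zip(range(num_ios), cycle(range(num_paths))):
        counts[path] += 1
    return counts

# With two paths, each carries half the I/Os, roughly doubling the
# bandwidth available to a single host:
print(mpio_round_robin(2, 1000))
```

This matches the quoted experience above of a 30-40% real-world improvement across two interfaces: the split is even, but protocol and path-switch overhead keep the gain below the theoretical 2x.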

  4. #4


    Quote Originally Posted by webguyz
    If it were me, I would probably try creating two 802.3ad bonds of 4 gigabit ports each to create 2 channels, so I could use MPIO on the servers.

    On the servers, use MPIO. I'm not sure whether MPIO will work with bonded NICs at the server end. We just have dual gig cards, so MPIO is simple.
    So then in this example, essentially having two 4 Gb channels on the DSS box, am I limited in how many servers can connect back to the box? I'm guessing not, since it sounds like they are just two separate channels, if you will.

  5. #5


    Hey dweberwr,

    The best thing to do is to run some performance tests with different settings (bonds).
    Depending on your network (switch, NICs, routers, etc.), your results will vary.

  6. #6


    I don't actually have a DSS box yet; I'm in the process of getting all my ducks in a row before it gets here.

  7. #7


    Quote Originally Posted by dweberwr
    I don't actually have a DSS box yet; I'm in the process of getting all my ducks in a row before it gets here.
    In that case I would suggest a dual-port 10G card in the DSS instead of messing around with all those bonded ports. Supermicro has a nice one for less than $500. I think the quad-port gigabit Intels are around $400 each.
