
Thread: Correct MPIO from host to failover IP

  1. #1
    Join Date
    Jan 2011
    Location
    Calgary
    Posts
    11

    Default Correct MPIO from host to failover IP

    Hello!
    We have been using Open-E DSS V6 at a few of our client sites already.
    The problem we are trying to solve is how to correctly utilize MPIO from the host side to get better throughput and redundancy to SANs configured in a failover pair.
    Scenario:
    2 x Open-E SAN systems in failover, connected to two switches (non-stacked, but linked to each other via a 1Gb connection) via dual 10Gb connections (bonded on the SANs).
    1 x MS Windows 2008 R2 Hyper-V host with two dual-port 1Gb NICs intended for iSCSI connectivity (so four 1Gb connections going to the same switches, two into each switch).
    2 x switches dedicated to iSCSI, capable of mixed 10Gb/1Gb connectivity - not stacked.

    Before anyone starts posting links to the MPIO how-to for Windows Server 2008, let's clear the field by saying the SANs present only one iSCSI target to the 'front-end' - one LUN only.
    That document clearly outlines creating multiple connections to multiple iSCSI targets - not just one.
    So, any advice on how to efficiently connect four 1Gb NICs from one Hyper-V host to the same iSCSI target (failover IP) on the SAN?
    So far we have been unsuccessful at arriving at a good solution and are worried that in the end we can only utilize one NIC connection from the host to the SAN.

    Please advise!
    To summarize the problem:
    How do we correctly configure the iSCSI connection to the Open-E SANs (in failover, with one target and one LUN) from multiple NICs on the host such that all host NICs are actually used optimally?

    I am eternally grateful for any constructive input on this!

    Darko

  2. #2
    Join Date
    Aug 2008
    Posts
    236

    Default

    To take advantage of iSCSI MPIO you need multiple distinct physical paths (e.g. NICs) that are IP'd in different subnets.
    I don't think you detailed the number of connections on your SAN host, but if it's more than one, just IP them uniquely (again, in different subnets) and add both portals when configuring the Windows iSCSI initiator. You'll then see the LUN show up twice in Windows Disk Management *IF* multipathing has not been configured.
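
    For reference, a minimal sketch of that from the Windows 2008 R2 command line (the portal addresses and target IQN below are made-up placeholders - substitute your own; the iSCSI Initiator GUI does the same job):

    :: Claim iSCSI-attached disks for MPIO (the MPIO feature must be installed; this prompts for a reboot)
    mpclaim -r -i -d "MSFT2005iSCSIBusType_0x9"

    :: Add both target portals (one per subnet/VIP) to the initiator
    iscsicli QAddTargetPortal 10.0.1.10
    iscsicli QAddTargetPortal 10.0.2.10

    :: Discover the target and log in. QLoginTarget uses default path settings;
    :: binding each session to a specific initiator NIC is easiest via the
    :: iSCSI Initiator GUI (Connect -> Advanced -> Initiator IP / Target portal IP).
    iscsicli ListTargets
    iscsicli QLoginTarget iqn.2011-01.com.open-e:example-target

    :: Verify the disk shows up once, with multiple paths
    mpclaim -s -d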

  3. #3
    Join Date
    Aug 2010
    Posts
    404

    Default

    Quote Originally Posted by enealDC
    To take advantage of iSCSI MPIO you need multiple distinct physical paths (e.g. NICs) that are IP'd in different subnets.
    I don't think you detailed the number of connections on your SAN host, but if it's more than one, just IP them uniquely (again, in different subnets) and add both portals when configuring the Windows iSCSI initiator. You'll then see the LUN show up twice in Windows Disk Management *IF* multipathing has not been configured.

    You can find more details about MPIO at:
    http://kb.open-e.com/How-to-configur...tiator_57.html

    And
    http://www.open-e.com/library/webcasts-and-videos/

  4. #4
    Join Date
    Jan 2011
    Location
    Calgary
    Posts
    11

    Default

    So, from what I understand after watching the videos, I need to create more than one virtual IP on the SAN side.
    My host has four available 1Gb connections to reach the SANs.
    My SANs each have:
    2 x 10Gb connections - for iSCSI; so far I had them bonded into one balance-rr bond
    - now I am thinking I need to break that so each one gets its own VIP

    2 x 10Gb - for volume replication, bonded in balance-rr
    1 x 1Gb NIC for LAN access and failover comm.
    1 x 1Gb NIC for failover comm (X-over cable)

    I can connect only two 10Gb NICs per SAN to the switches, as the switches each have only two 10Gb modules. This lets me connect two SANs to two switches with a total of four 10Gb connections - I won't be able to change that.

    Any idea?

  5. #5
    Join Date
    Aug 2008
    Posts
    236

    Default

    The max number of paths on your initiator(s) will be determined by the maximum number of paths you have available on the target.
    If you've dedicated two 10GbE NICs for iSCSI, then I'd break the rr bond and have a virtual IP for each NIC. So you'll need a total of 6 IPs across two subnets, two of the IPs being the VIPs.
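
    To make that concrete, an addressing plan along these lines is what I mean (the subnets, addresses and interface names are just examples):

    Subnet A, e.g. 10.0.1.0/24:  SAN1 eth2 = 10.0.1.1   SAN2 eth2 = 10.0.1.2   VIP1 = 10.0.1.10
    Subnet B, e.g. 10.0.2.0/24:  SAN1 eth3 = 10.0.2.1   SAN2 eth3 = 10.0.2.2   VIP2 = 10.0.2.10
    Host initiator NICs:         two addressed in 10.0.1.0/24, two in 10.0.2.0/24

    The initiator only ever connects to the two VIPs; on failover those move to the surviving node, so the paths stay up.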

  6. #6
    Join Date
    Jan 2011
    Location
    Calgary
    Posts
    11

    Default

    Hi!
    Thanks for the advice.
    I managed to get two VIPs on the SAN and two distinct connections to it from the host with MPIO successfully.
    Now I have two 1Gb NICs on the host left over, not being used, and have tried teaming adapters so that I get two teams on the host connecting to the two VIPs with MPIO..
    however, that config is really buggy and the host server has issues running it.

    Any idea whether teaming is not recommended for host-side iSCSI connectivity to the SANs, or am I just not configuring this right?

    I would like to be able to use all 4 NICs from the host to the two VIPs on the SANs.

    Thanks for all your help,

    D

  7. #7
    Join Date
    Aug 2008
    Posts
    236

    Default

    Teaming/bonding really only works to provide redundancy UNLESS you have a very large number of client devices OR unless you IP each host so that the XOR operation results in a different path being chosen.
    I would suggest that you go ahead and use all 4 NICs as 4 distinct paths. In theory, you will then have an aggregate of roughly 400 MB/s of bandwidth (as long as your backend storage and replication can sustain it). In this scenario, if you have at least two nodes with two NICs each, you can have two dedicated paths per host.
    If you don't want to go this route, the alternative is to just set up each NIC pair as an active/passive bond.
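
    If it helps, once all four sessions are up, something along these lines checks the paths and switches the load-balance policy to Round Robin (just a sketch - the MPIO disk number and policy value may differ on your box, so check mpclaim's help output first):

    :: List iSCSI sessions - you should see four, one per initiator NIC
    iscsicli SessionList

    :: Show MPIO-claimed disks and their current load-balance policy
    mpclaim -s -d

    :: Set MPIO disk 0 to Round Robin (policy 2 in mpclaim's numbering)
    mpclaim -l -d 0 2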
