Originally Posted by enealDC
You can find more details about MPIO at:
http://kb.open-e.com/How-to-configur...tiator_57.html
And
http://www.open-e.com/library/webcasts-and-videos/
So, from what I understand after watching the videos, I need to create more than one virtual IP on the SAN side.
My host has four available 1Gb connections to reach the SANs.
My SANs each have:
2 x 10Gb connections - for iSCSI; so far I had them bonded into one balance-rr bond, but now I am thinking I need to break that so each NIC gets its own VIP
2 x 10Gb connections - for volume replication, bonded in balance-rr
1 x 1Gb NIC for LAN access and failover comm.
1 x 1Gb NIC for failover comm (X-over cable)
I can connect only two 10Gb NICs per SAN to the switches, since the switches each have only two 10Gb modules; that gives me two SANs on two switches with a total of four 10Gb connections, and I won't be able to change that.
Any idea?
The max number of paths on your initiator(s) will be determined by the maximum number of paths you have available on the target.
If you've dedicated two 10GbE NICs to iSCSI, then I'd break the rr bond and have a virtual IP for each NIC. So you'll need a total of six IPs across two subnets, two of them being the VIPs.
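For illustration, a hypothetical layout of those six IPs (the subnets and addresses below are just placeholders, not anything from your setup) could look like this:

  iSCSI subnet A (10.10.10.0/24): SAN-1 nic0 = 10.10.10.1, SAN-2 nic0 = 10.10.10.2, VIP-A = 10.10.10.100
  iSCSI subnet B (10.10.20.0/24): SAN-1 nic1 = 10.10.20.1, SAN-2 nic1 = 10.10.20.2, VIP-B = 10.10.20.100

Each VIP floats to the surviving node on failover, and the initiator only ever logs in through the two VIPs.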
Hi!
Thanks for the advice.
I managed to get two VIPs on the SAN and two distinct connections to them from the host with MPIO successfully.
Now, I have two 1Gb NICs on the host left over and unused, and I have tried to team adapters so that I get two teams on the host connecting to the two VIPs with MPIO;
however, that config is really buggy and the host server has issues running it.
Any idea if teaming is not recommended for host-side connectivity to SANs via iSCSI, or am I just not configuring this right?
I would like to be able to use 4 NICs from the host to reach the two VIPs on the SANs.
Thanks for all your help,
D
Teaming/Bonding really only works to provide redundancy UNLESS you have a very large number of client devices OR unless you IP each host so that the XOR operation results in a different path being chosen.
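To see why, here is a very simplified sketch (plain bash arithmetic, made-up last octets) of the XOR hash a Linux balance-xor style bond uses to pick a slave; the real policy also mixes in MAC/port data depending on xmit_hash_policy:

  # simplified view of the transmit hash: (src XOR dst) mod number-of-slaves
  # target VIP ends in .100; two hosts ending in .10 and .12 both land on the same slave
  echo $(( (10 ^ 100) % 2 ))   # host .10 -> slave 0
  echo $(( (12 ^ 100) % 2 ))   # host .12 -> slave 0  (no extra throughput for this pair)
  echo $(( (11 ^ 100) % 2 ))   # host .11 -> slave 1  (re-addressing one host moves it to the other link)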
I would suggest that you go ahead and use all 4 NICs as 4 distinct paths. In theory, you will then have an aggregate of roughly 400 MB/s of bandwidth from the four 1Gb links (so long as your backend storage and replication can sustain it). In this scenario, if you have at least two nodes with two NICs each, you can have two dedicated paths per host.
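If the host side happens to be a Linux initiator (the thread doesn't say what OS it runs), the four distinct paths would look roughly like this with open-iscsi and dm-multipath; the NIC names and VIP addresses below are placeholders:

  # bind one iSCSI interface to each of the four host NICs
  for nic in eth0 eth1 eth2 eth3; do
      iscsiadm -m iface -I iface-$nic -o new
      iscsiadm -m iface -I iface-$nic -o update -n iface.net_ifacename -v $nic
  done

  # discover the target through each VIP on the matching pair of interfaces
  iscsiadm -m discovery -t sendtargets -p 10.10.10.100 -I iface-eth0 -I iface-eth1
  iscsiadm -m discovery -t sendtargets -p 10.10.20.100 -I iface-eth2 -I iface-eth3

  # log in on every discovered path, then check that dm-multipath sees four paths
  iscsiadm -m node --login
  multipath -ll

With path_grouping_policy set to multibus in /etc/multipath.conf, I/O is then round-robined across all four paths at once instead of using them purely for failover.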
If you don't want to go this route, then the alternative is to just set up each NIC pair as an active/passive bond.
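Again assuming a Linux host and RHEL-style network scripts (purely as an illustration; device names and the address are placeholders), an active/passive pair would be something like:

  # /etc/sysconfig/network-scripts/ifcfg-bond0
  DEVICE=bond0
  IPADDR=10.10.10.50
  NETMASK=255.255.255.0
  ONBOOT=yes
  BONDING_OPTS="mode=active-backup miimon=100"

  # /etc/sysconfig/network-scripts/ifcfg-eth0  (repeat for eth1)
  DEVICE=eth0
  MASTER=bond0
  SLAVE=yes
  ONBOOT=yes

mode=active-backup gives you failover only and adds no throughput, which is why the four-distinct-paths MPIO approach above is usually the better fit here.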