-
Bonding question
I would just like some clarity. In the active-active iSCSI setup the recommendation is to bond NICs; does balance-alb give both bandwidth aggregation and failover?
I have two dual-port 10GbE cards in each server for iSCSI and one 10GbE card for replication (direct connection). On node A, can I bond the ports on card one and plug them into trunk ports (LAG group) on switch one, then bond the ports on card two and plug them into trunk ports (LAG group) on switch two, and do the same for node B? Or do I need to plug one port from each card into each switch and trunk the two switches together with a LAG group?
For the ESXi servers we will use MPIO.
Thanks
-
To be honest I would not do any bonding (yes, I know the guide you're looking at, but there is another that uses no bonds). I would use MPIO and dedicate the 1 x 10GbE NIC to volume replication and the other 2 x 10GbE NICs to the virtual IPs (keep them on separate networks or you will have routing issues).
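On the ESXi side, MPIO against those virtual IPs boils down to binding one vmkernel port per subnet to the software iSCSI adapter and using the Round Robin path policy. A rough sketch (adapter name vmhba33, vmkernel ports vmk1/vmk2, and the naa device ID are all placeholders; check yours with `esxcli iscsi adapter list` and `esxcli storage core device list`):

```shell
# Bind two vmkernel ports, one on each iSCSI subnet, to the
# software iSCSI adapter so each becomes an independent path.
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk1
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk2

# Use the Round Robin path selection policy on the iSCSI LUN
# so I/O is spread across both paths (device ID is a placeholder).
esxcli storage nmp device set --device naa.6001405xxxxxxxxx --psp VMW_PSP_RR
```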
-
OK, so I will have four 10GbE virtual IPs and a direct 10GbE link for replication. I will put them all on different subnets. What about using VLANs?
-
Hi Todd,
So each of my servers has 6 x 10GbE ports.
I would have 4 x 10GbE ports on each server going to 2 x 10GbE switches, giving me four virtual IP addresses: 2 VIPs used for resources on node A and 2 VIPs for resources on node B. Is that correct? Then I will enable MPIO on the VMware hosts, which have 4 x 10GbE ports.
I would then have 1 x 10GbE port on each server for replication (direct link).
Does this sound correct?
Gavin
-
With 4 x 10G ports available, I would use two for multipathing to VMware.
The other two I would use as an rr-bond for replication.
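For reference, a balance-rr (round-robin) bond of two ports for the replication link looks roughly like this on a generic Linux box with iproute2; on an Open-E appliance the same thing is done through its GUI, and the interface names and address here are just placeholders:

```shell
# Create a round-robin bond and enslave the two replication ports.
ip link add bond1 type bond mode balance-rr
ip link set eth2 down && ip link set eth2 master bond1
ip link set eth3 down && ip link set eth3 master bond1

# Address the bond on its own subnet (direct link, placeholder address).
ip addr add 10.10.10.1/24 dev bond1
ip link set bond1 up
```

balance-rr is the only bonding mode that can stripe a single TCP stream across both links, which is why it suits a point-to-point replication connection.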
Sven
-
This is correct - I see your OECE paid off :) We should make this a test question on the exam.
-
Yes, but I have 6 x 10GbE ports, so I am using 4 for VMware multipath and 2 for replication RR :cool:
-
With six 10G links, you should use 3 for VMware and 3 as rr-bond for replication.
-
Hi,
What I was hoping to do is this: I have 4 VMware hosts split into two clusters (2 hosts each). So I want to configure the Open-E active-active setup to use 2 x 10GbE ports for cluster1 iSCSI targets, 2 x 10GbE ports for cluster2 iSCSI targets, and 2 x 10GbE ports for replication. Does this make sense?
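For example, the port layout I have in mind (subnets are just illustrative):

```
Node A / Node B (6 x 10GbE each):
  ports 1-2: cluster1 iSCSI targets/VIPs  (e.g. 10.0.1.0/24 and 10.0.2.0/24, one subnet per port)
  ports 3-4: cluster2 iSCSI targets/VIPs  (e.g. 10.0.3.0/24 and 10.0.4.0/24)
  ports 5-6: replication rr-bond          (direct link, e.g. 10.0.5.0/24)
```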
Gavin