We have two identical servers running DSS V6.
Both servers work in a failover configuration as iSCSI storage for our VMware cluster, and the storage must be available at all times.

DSS1 as passive:
eth1:192.168.0.129 management
eth2:192.168.101.129 MPIO1 (virtual 20.0.1.1)
eth3:192.168.102.129 MPIO2 (virtual 20.0.2.1)
eth4:192.168.110.129 direct link to DSS2

DSS2 as active:
eth1:192.168.0.128 management
eth2:192.168.101.128 MPIO1 (virtual 20.0.1.1)
eth3:192.168.102.128 MPIO2 (virtual 20.0.2.1)
eth4:192.168.110.128 direct link to DSS1

VMhost1:
vSwitch1: eth1
vmk0:192.168.101.3 Ping Node
vmk1: 20.0.1.3 MPIO1 (iSCSI 20.0.1.1)
vSwitch2: eth2
vmk0:192.168.102.3 Ping Node
vmk1: 20.0.2.3 MPIO2 (iSCSI 20.0.2.1)

VMhost2:
vSwitch1: eth1
vmk0:192.168.101.4 Ping Node
vmk1: 20.0.1.4 MPIO1 (iSCSI 20.0.1.1)
vSwitch2: eth2
vmk0:192.168.102.4 Ping Node
vmk1: 20.0.2.4 MPIO2 (iSCSI 20.0.2.1)

VMhost3:
vSwitch1: eth1
vmk0:192.168.101.5 Ping Node
vmk1: 20.0.1.5 MPIO1 (iSCSI 20.0.1.1)
vSwitch2: eth2
vmk0:192.168.102.5 Ping Node
vmk1: 20.0.2.5 MPIO2 (iSCSI 20.0.2.1)
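
Before touching anything, it may help to record a reachability baseline of this layout and rerun it after every step. Below is a minimal sketch (not part of the original setup, just an illustration): plain Python 3 with only the standard library, assumed to run on a Linux admin machine that can reach both the 20.0.x.x MPIO subnets and the 192.168.x.x ping-node/management subnets. It checks TCP port 3260 on the virtual iSCSI portals and pings the ping-node and management addresses listed above.

```python
#!/usr/bin/env python3
# One-shot reachability baseline for the layout above.
# Assumptions: Linux "ping" syntax, and routes to the 20.0.x.x and
# 192.168.x.x subnets from the machine running this script.
import socket
import subprocess

ISCSI_PORTALS = ["20.0.1.1", "20.0.2.1"]             # virtual MPIO portal IPs
PING_NODES    = ["192.168.101.3", "192.168.102.3",
                 "192.168.101.4", "192.168.102.4",
                 "192.168.101.5", "192.168.102.5"]
MANAGEMENT    = ["192.168.0.128", "192.168.0.129"]   # DSS2 (active) / DSS1 (passive)

def tcp_open(host, port=3260, timeout=2.0):
    """True if a TCP connection to host:port succeeds (iSCSI portal listening)."""
    try:
        with socket.create_connection((host, port), timeout):
            return True
    except OSError:
        return False

def icmp_ok(host):
    """True if a single ICMP echo gets a reply (Linux ping flags)."""
    return subprocess.call(["ping", "-c", "1", "-W", "2", host],
                           stdout=subprocess.DEVNULL,
                           stderr=subprocess.DEVNULL) == 0

if __name__ == "__main__":
    for ip in ISCSI_PORTALS:
        print("portal %s:3260  %s" % (ip, "UP" if tcp_open(ip) else "DOWN"))
    for ip in PING_NODES + MANAGEMENT:
        print("host   %s  %s" % (ip, "up" if icmp_ok(ip) else "DOWN"))
```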

We are planning to replace the DSS servers' eth4 1 GbE cards (used as the direct link for failover / volume replication) with two 10 GbE network cards (Intel AT2).

So, how do we carry out the whole procedure without disconnecting the storage from the VMware hosts?

My idea is:
step 1: On the primary DSS server
- manual failover,
- power off and add the 10 GbE card,
- power on (with the old direct connection still on eth4),
- synchronization & switch back to active.
The whole time, the VM hosts keep seeing the virtual IP addresses (a small watcher for this is sketched below).
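
To confirm that claim while the manual failover and the power-off are in progress, a watcher like the one below could be left running. This is only a sketch under the same assumptions as the baseline script (Python 3, run from a machine that can reach the 20.0.x.x portal addresses); it tries a TCP connection to port 3260 on each virtual portal once a second and flags the moment one stops answering.

```python
#!/usr/bin/env python3
# Watch the virtual iSCSI portals (20.0.1.1 / 20.0.2.1) during the manual
# failover and the power-off of the primary DSS. Stop with Ctrl+C.
import socket
import time

PORTALS = ["20.0.1.1", "20.0.2.1"]   # the virtual IPs the VMware hosts log in to

def portal_up(ip, port=3260, timeout=1.0):
    try:
        with socket.create_connection((ip, port), timeout):
            return True
    except OSError:
        return False

while True:
    stamp = time.strftime("%H:%M:%S")
    states = [(ip, portal_up(ip)) for ip in PORTALS]
    print(stamp, "  ".join("%s %s" % (ip, "up" if ok else "DOWN") for ip, ok in states))
    if not all(ok for _, ok in states):
        print(stamp, "WARNING: a portal stopped answering - check the failover state")
    time.sleep(1)
```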

step 2: On the secondary DSS server
- power off and add the 10 GbE card,
- power on (with the old direct connection still on eth4),
- synchronization. (I still don't know exactly how to do this part. Does any webinar or "HowTo" show the procedure? I only remember testing failover with the primary server.)

step 3: Prepare the DSSs & VM hosts and stop failover (see the portal check sketched after this list):
- on the primary DSS server, add the Ping Node addresses to Target Allow Access,
- on all VM hosts, add the real eth addresses of the primary DSS server to all iSCSI paths,
- on the primary DSS, stop the Failover Manager. The virtual IP addresses are now disabled and the VM hosts connect through the real eth addresses,
- disable the failover functionality on both DSSs.
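
Before stopping the Failover Manager it seems worth verifying that the primary server's real addresses (the DSS2 ones from the layout above) actually answer on the iSCSI port, since those will be the only paths left once the virtual IPs go away. The sketch below is an assumption-laden illustration, not a guarantee: a TCP connect only shows that a portal is listening, it does not prove that Target Allow Access admits the hosts.

```python
#!/usr/bin/env python3
# Confirm that the primary DSS (DSS2) answers on TCP 3260 on its real
# addresses before the Failover Manager is stopped and the virtual IPs vanish.
import socket
import sys

REAL_PORTALS = ["192.168.101.128", "192.168.102.128"]   # DSS2 eth2 / eth3

def tcp_open(host, port=3260, timeout=2.0):
    try:
        with socket.create_connection((host, port), timeout):
            return True
    except OSError:
        return False

missing = [ip for ip in REAL_PORTALS if not tcp_open(ip)]
if missing:
    print("NOT safe to stop the Failover Manager, no iSCSI answer from:", ", ".join(missing))
    sys.exit(1)
print("Both real portals answer on 3260 - the VM hosts have a fallback path.")
```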

step 4: Prepare the DSSs for failover again:
- remove the "old" direct-connected cards from the Auxiliary connection,
- configure the 10 GbE cards as a direct connection to each other,
- set up everything needed to start the Failover Manager,
- start failover,
- test the iSCSI paths on the VM hosts – the virtual IPs are alive again (the gate check below can confirm this),
- delete the paths to the real addresses of the primary DSS,
- delete the Ping Node addresses from Target Allow Access.
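
For the "virtual IPs are alive again" test, the same kind of probe can act as a gate before the fallback paths are removed: only proceed with deleting the real-address paths if both virtual portals answer. Again just a minimal sketch, with the same assumptions as the earlier scripts.

```python
#!/usr/bin/env python3
# Gate check for step 4: exit 0 only if both virtual portals answer on 3260,
# so the real-address fallback paths are not deleted too early.
import socket
import sys

VIRTUAL_PORTALS = ["20.0.1.1", "20.0.2.1"]

def tcp_open(host, port=3260, timeout=2.0):
    try:
        with socket.create_connection((host, port), timeout):
            return True
    except OSError:
        return False

down = [ip for ip in VIRTUAL_PORTALS if not tcp_open(ip)]
if down:
    print("Keep the fallback paths - virtual portal(s) not answering:", ", ".join(down))
    sys.exit(1)
print("Both virtual portals are back - safe to remove the real-address paths.")
```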

Is there a less complicated procedure? Switching off the whole system would kill my ego, so that is NOT the solution.