That was my doubt, too. That is exactly why I suggested such a long procedure. Let me explain it in other words:

1. Manual failover, then power off and physically install the new card in the Primary node. After the machine restarts, the old setup/configuration is still in place; only the new eth interface has been added, and it is unconnected and passive. Now we can sync the volumes from the Secondary to the Primary server and fail them back. All this time, iSCSI remains active on the virtual IP addresses.

2. Power off the Secondary node; it is "virtually offline" anyway. After installing the new card and switching the machine on, the old setup/configuration is still in place, so we can rejoin our failover system. (I suppose that pressing Start in the Failover box on the Secondary node is enough?)

3. We can't change any network settings for interfaces configured in failover, so we must stop failover to move volume replication to the other interfaces (10GbE).
But deactivating the failover service will deactivate the virtual IP address, and connections to that address will be lost.
So we have to add new round-robin paths on all VMhosts, for all iSCSI devices, that communicate with the physical addresses of the Primary node. Of course, the addresses of the VMhosts must be added to Target Allow Access on the Primary server (if the allow list is not already empty). After all these settings, we should inspect the VMhosts to confirm that the new paths are active alongside the paths using the virtual IP addresses.
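Assuming the VMhosts are ESXi, a sketch of how these extra paths could be added might look like the following. The adapter name (vmhba64) and the address (192.168.0.220) are placeholders for the software iSCSI adapter and the Primary node's physical iSCSI IP; substitute your own values.

```shell
# Run on each VMhost (ESXi shell). Adapter and address are assumptions.

# Add the Primary node's physical address as a send-targets discovery entry
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=192.168.0.220:3260

# Rescan so the new paths appear
esxcli storage core adapter rescan --adapter=vmhba64

# Verify that paths via both the virtual IP and the physical IP are active
esxcli storage core path list
```

With round-robin (VMW_PSP_RR) already set on the devices, the new paths should be used alongside the existing virtual-IP paths, so stopping the virtual IPs later does not drop I/O.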
At this point we can stop the failover services. The virtual IP addresses are disabled, but the connections from the VMhosts remain established via the real eth addresses of the Primary server.

4. On both servers, disable the failover functionality, set up the new eth interfaces with the provided addresses, and connect them directly. (We can use the old interfaces, which are now free, as a third MPIO link.)
Then both servers are configured for failover again.
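Once failover is running again and the virtual IPs are back, a quick check on each ESXi VMhost could look like this (same placeholder adapter name and address as in step 3; whether you keep or remove the temporary physical-IP paths is your choice):

```shell
# Run on each VMhost (ESXi shell). Adapter and address are assumptions.

# Confirm that paths via the virtual IPs are active again
esxcli storage core path list

# Optionally remove the temporary discovery entry added in step 3,
# then rescan to drop the physical-address paths
esxcli iscsi adapter discovery sendtarget remove --adapter=vmhba64 --address=192.168.0.220:3260
esxcli storage core adapter rescan --adapter=vmhba64
```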