
Thread: Change 1GbE to 10GbE without breaking the storage connection

  1. #1
    Join Date
    Mar 2008
    Location
    Ljubljana
    Posts
    41

    Question: Change 1GbE to 10GbE without breaking the storage connection

    We have two identical servers running DSS V6.
    Both work in a failover configuration as iSCSI storage for our VMware cluster, and the storage must stay available at all times.

    DSS1 as passive:
    eth1:192.168.0.129 management
    eth2:192.168.101.129 MPIO1 (virtual 20.0.1.1)
    eth3:192.168.102.129 MPIO2 (virtual 20.0.2.1)
    eth4:192.168.110.129 direct link to DSS2

    DSS2 as active:
    eth1:192.168.0.128 management
    eth2:192.168.101.128 MPIO1 (virtual 20.0.1.1)
    eth3:192.168.102.128 MPIO2 (virtual 20.0.2.1)
    eth4:192.168.110.128 direct link to DSS1

    VMhost1:
    vSwitch1: eth1
    vmk0:192.168.101.3 Ping Node
    vmk1: 20.0.1.3 MPIO1 (iSCSI 20.0.1.1)
    vSwitch2: eth2
    vmk0:192.168.102.3 Ping Node
    vmk1: 20.0.2.3 MPIO2 (iSCSI 20.0.2.1)

    VMhost2:
    vSwitch1: eth1
    vmk0:192.168.101.4 Ping Node
    vmk1: 20.0.1.4 MPIO1 (iSCSI 20.0.1.1)
    vSwitch2: eth2
    vmk0:192.168.102.4 Ping Node
    vmk1: 20.0.2.4 MPIO2 (iSCSI 20.0.2.1)

    VMhost3:
    vSwitch1: eth1
    vmk0:192.168.101.5 Ping Node
    vmk1: 20.0.1.5 MPIO1 (iSCSI 20.0.1.1)
    vSwitch2: eth2
    vmk0:192.168.102.5 Ping Node
    vmk1: 20.0.2.5 MPIO2 (iSCSI 20.0.2.1)
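
    Before starting, it may be worth verifying that both virtual iSCSI portals from the layout above actually answer. A minimal sketch, assuming Python is available on a machine that can reach the 20.0.1.x and 20.0.2.x storage networks and that the targets listen on the default iSCSI TCP port 3260:

    Code:
    import socket

    # Virtual iSCSI portal addresses taken from the layout above.
    PORTALS = ["20.0.1.1", "20.0.2.1"]

    def portal_up(ip, port=3260, timeout=3):
        """Return True if a TCP connection to the portal succeeds."""
        try:
            with socket.create_connection((ip, port), timeout=timeout):
                return True
        except OSError:
            return False

    for ip in PORTALS:
        print("%s:3260 %s" % (ip, "reachable" if portal_up(ip) else "NOT reachable"))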

    We are planning to replace the DSS servers' eth4 1GbE cards (the direct link used for failover / volume replication) with two 10GbE network cards (Intel AT2).

    So, how do we carry out the whole procedure without disconnecting the storage from the VMware hosts?

    My idea is:
    step 1: On primary DSS server
    - manual Failover,
    - power off and add the 10GbE card,
    - power on (with the old direct connection on eth4),
    - synchronization & switch back to active.
    The VMhosts keep seeing the virtual IP addresses the whole time (the sketch below can be used to watch them).
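
    A rough way to confirm that the hosts really keep seeing the virtual IPs during the manual failover in step 1 is to watch the portals continuously and log only state changes. A sketch under the same assumptions as the reachability check above (run it from a machine on the storage networks, stop it with Ctrl-C):

    Code:
    import datetime
    import socket
    import time

    PORTALS = ["20.0.1.1", "20.0.2.1"]   # virtual iSCSI portals from the layout above

    def up(ip, port=3260, timeout=2):
        try:
            with socket.create_connection((ip, port), timeout=timeout):
                return True
        except OSError:
            return False

    # Poll once a second and print only state changes, so an outage window is easy to spot.
    state = {ip: None for ip in PORTALS}
    while True:
        for ip in PORTALS:
            now = up(ip)
            if now != state[ip]:
                stamp = datetime.datetime.now().isoformat()
                print("%s %s -> %s" % (stamp, ip, "UP" if now else "DOWN"))
                state[ip] = now
        time.sleep(1)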

    step 2: On secondary DSS server
    - power off and add the 10GbE card,
    - power on (with the old direct connection on eth4),
    - synchronization. (I still don't know exactly how to do this – does any webinar or “HowTo” show the procedure? I only remember testing failover with the primary server.)

    step 3: Prepare DSSs & VMs and stop Failover:
    - on the primary DSS server, add the Ping Node addresses to Target Allow Access,
    - on all VMhosts, add the real eth addresses of the primary DSS server as extra iSCSI paths (see the sketch after this list),
    - on the primary DSS, stop the Failover Manager. The virtual IP addresses are now disabled and the VMhosts connect over the real eth addresses,
    - disable Failover functionality on both DSSs.
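
    For the path additions in step 3, a small generator keeps the typing consistent across the three hosts. The esxcli syntax shown is the ESXi 5.x-style send-target discovery command, the adapter name vmhba33 is only a placeholder, and older ESX versions use a different ("esxcli swiscsi") syntax – so treat this as a template rather than a recipe:

    Code:
    # Real (non-virtual) iSCSI addresses of the primary DSS node, from the layout above.
    PRIMARY_REAL = ["192.168.101.128", "192.168.102.128"]
    VMHOSTS = ["VMhost1", "VMhost2", "VMhost3"]
    ADAPTER = "vmhba33"   # placeholder: check the software iSCSI adapter name on each host

    for host in VMHOSTS:
        print("# run on %s" % host)
        for ip in PRIMARY_REAL:
            # ESXi 5.x-style command; ESX 4.x uses "esxcli swiscsi" instead.
            print("esxcli iscsi adapter discovery sendtarget add "
                  "--adapter=%s --address=%s:3260" % (ADAPTER, ip))
        print()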

    step 4: Prepare DSSs to Failover:
    - remove the “old” directly connected cards from the Auxiliary connection,
    - configure the 10GbE cards as the new direct link to each other (the link can be sanity-checked beforehand with the sketch after this list),
    - go through the failover configuration again so the Failover manager can be started,
    - start Failover,
    - test the iSCSI paths on the VMhosts – the virtual IPs are alive again,
    - delete the paths to the primary DSS's real addresses,
    - delete the Ping Node addresses from Target Allow Access.
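
    Before trusting the new direct link with volume replication, the two Intel AT2 cards and the cable can be sanity-checked in a pair of ordinary Linux machines or test VMs (DSS itself gives you no shell). A minimal single-stream TCP test – it will not saturate 10GbE, but it quickly shows whether the link runs well above 1 Gbit/s:

    Code:
    # usage: python linktest.py server        (one side)
    #        python linktest.py client <ip>   (other side)
    import socket
    import sys
    import time

    PORT = 5001          # arbitrary test port
    CHUNK = 1 << 20      # 1 MiB send/receive buffer
    SECONDS = 10         # how long the client transmits

    def server():
        srv = socket.socket()
        srv.bind(("", PORT))
        srv.listen(1)
        conn, _addr = srv.accept()
        total, start = 0, time.time()
        while True:
            data = conn.recv(CHUNK)
            if not data:
                break
            total += len(data)
        secs = time.time() - start
        print("received %.1f MiB in %.1f s = %.2f Gbit/s"
              % (total / 2.0 ** 20, secs, total * 8 / secs / 1e9))

    def client(ip):
        conn = socket.create_connection((ip, PORT))
        buf = b"\0" * CHUNK
        end = time.time() + SECONDS
        while time.time() < end:
            conn.sendall(buf)
        conn.close()

    if __name__ == "__main__":
        if sys.argv[1] == "server":
            server()
        else:
            client(sys.argv[2])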

    Is there a less complicated procedure? Switching off the whole system would kill my ego, so that is NOT the solution.

  2. #2

    Testing the change from 1GbE to 10GbE

    Perhaps you could test a similar scenario in VMware: create a vSwitch with no physical NICs on it and rehearse the procedure there.
    One thing that would worry me about this is preventing the NICs from changing from eth0 to eth1 when you replace the 1GbE card with a 10GbE one.
    In short, I don't know if it would work, but you could test it this way.
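
    On the interface-renaming worry: if the rehearsal is done on an ordinary Linux VM (or any box where you have a shell), the name-to-MAC mapping can be snapshotted before and after the card swap and compared. DSS V6 itself is a closed appliance, so there the MACs shown in the web GUI would have to serve the same purpose. A small sketch:

    Code:
    # Print the interface-name -> MAC mapping; run once before the card swap,
    # save the output, run again afterwards and diff the two.
    import glob
    import os

    for path in sorted(glob.glob("/sys/class/net/*/address")):
        name = os.path.basename(os.path.dirname(path))
        with open(path) as f:
            mac = f.read().strip()
        print("%-8s %s" % (name, mac))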

  3. #3
    Join Date
    Mar 2008
    Location
    Ljubljana
    Posts
    41


    That was my doubt, too. That is exactly why I suggested such a long procedure. Let me explain it in other words:

    1. Manual failover, then power off the Primary Node and physically install the new card. After the machine restarts, the old setup/configuration is still there; only a new eth interface has been added, unconnected and passive. Now we can sync the volumes from the Secondary back to the Primary server and fail back. iSCSI stays active on the virtual IP addresses the whole time.

    2. Power off the Secondary Node; it is “virtually offline” anyway. After installing the new card and switching the machine on, the old setup/configuration is still there, so it can rejoin our failover system. (I suppose that pressing Start in the Failover box on the Secondary Node is enough?)

    3. We can't change any network settings on interfaces that are configured for failover, so we must stop failover in order to move volume replication to the other (10GbE) interfaces.
    But deactivating the failover service will also deactivate the virtual IP addresses, and connections to those addresses will be lost.
    So we first have to add new Round Robin paths for all iSCSI devices on all VMhosts that talk to the physical addresses of the Primary Node. Of course, the VMhosts' addresses must be added to Target Allow Access on the Primary server (if that list is not already empty). Once everything is set, we should check on the VMhosts that the new paths are active alongside the paths using the virtual IP addresses.
    At this point we can stop the failover service. The virtual IP addresses are disabled, but the connection to the VMhosts stays up over the real eth addresses of the Primary server.
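
    The inspection mentioned above can be scripted in the same spirit as the earlier checks: before stopping failover, every portal – virtual and real – should answer on the iSCSI port. A sketch, assuming the default port 3260, a machine that can reach both storage subnets, and that the target also listens on the physical addresses while failover is still running (which is what this step relies on):

    Code:
    import socket

    # Virtual portals plus the real addresses of the Primary Node, from the layout above.
    PORTALS = {
        "MPIO1 virtual": "20.0.1.1",
        "MPIO2 virtual": "20.0.2.1",
        "Primary eth2":  "192.168.101.128",
        "Primary eth3":  "192.168.102.128",
    }

    for name, ip in PORTALS.items():
        try:
            socket.create_connection((ip, 3260), timeout=3).close()
            print("%-15s %-16s OK" % (name, ip))
        except OSError as err:
            print("%-15s %-16s FAILED (%s)" % (name, ip, err))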

    4. On both servers disable the failover functionality, set up the new eth interfaces with the intended addresses and connect them directly. (We can use the old interfaces, which are now free, as a third MPIO link – see the sketch below.)
    Then both servers are configured for failover again.
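
    When laying out the final addressing (the new 10GbE direct link plus, optionally, the freed 1GbE ports as a third MPIO link), it is easy to end up with overlapping subnets that push traffic out of the wrong NIC. A quick consistency check with the Python standard library – the 192.168.103.0/24 subnet for the hypothetical MPIO3 link is only an example, and reusing the old 192.168.110.x subnet for the 10GbE link is likewise an assumption; the rest comes from the layout in the first post:

    Code:
    import ipaddress
    from itertools import combinations

    # DSS1 / DSS2 addresses per role; MPIO3 is hypothetical, the others are from post #1.
    PLAN = {
        "management":      ["192.168.0.129/24",   "192.168.0.128/24"],
        "MPIO1":           ["192.168.101.129/24", "192.168.101.128/24"],
        "MPIO2":           ["192.168.102.129/24", "192.168.102.128/24"],
        "10GbE direct":    ["192.168.110.129/24", "192.168.110.128/24"],
        "MPIO3 (example)": ["192.168.103.129/24", "192.168.103.128/24"],
    }

    networks = {}
    for role, addrs in PLAN.items():
        ifaces = [ipaddress.ip_interface(a) for a in addrs]
        # Both ends of a link must live in the same subnet.
        assert len({i.network for i in ifaces}) == 1, "%s spans two subnets" % role
        networks[role] = ifaces[0].network

    # No two roles may share or overlap a subnet.
    for (r1, n1), (r2, n2) in combinations(networks.items(), 2):
        assert not n1.overlaps(n2), "%s and %s overlap" % (r1, r2)

    print("address plan looks consistent")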
