
Thread: Migration Windows Storage Server to DSS

  1. #1
    Join Date
    Aug 2011
    Location
    Germany
    Posts
    22

    Default Migration Windows Storage Server to DSS

    Hello,

Current environment: Windows Storage Server (WSS), attached via iSCSI to VMware ESX.
I want to migrate to an active/passive DSS V6 iSCSI storage. The primary node will be a new server; the secondary node will be the Windows Storage Server (reinstalled with Open-E).

    My migration plan:

    1. Install DSS V6 on the new server and attach it to ESX as a standalone iSCSI target.
    2. Move all virtual machines from the WSS iSCSI storage to the DSS iSCSI storage.
    3. Remove the WSS iSCSI target from ESX.
    4. Reinstall the WSS box with DSS.
    5. Start volume replication and iSCSI failover.

    Is this ok?
    Any other ideas?
    Do you see any problems with starting iSCSI failover on a running standalone DSS?



    Before I do this in our productive environment I will run a test. But first I'd like to know whether this is a good approach or if there is a better one.

    Maybe I will use V7 active/active - I don't know yet.



    Best regards,
    Manuel
    There are only 10 types of people in the world:
    Those who understand binary, and those who don't

  2. #2
    Join Date
    Oct 2010
    Location
    GA
    Posts
    935

    Default

    Your plan will work, with one issue that may come up:
    when starting failover, you will need to change the iSCSI path to the Virtual IP you set up in failover.
    Follow the setup video on our site for ESX and failover:
    http://www.open-e.com/service-and-su...ts-and-videos/

    Also, you will get better I/O with MPIO.
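As a rough sketch of the path change mentioned above: on a recent ESXi host, repointing iSCSI discovery from the standalone node's address to the failover Virtual IP would look roughly like this (the adapter name and all IPs are hypothetical examples, not values from this thread; older ESX 4.x hosts use the vicfg/esxcfg equivalents instead):

```shell
# Sketch only -- vmhba33 and the IPs are placeholder examples.
# Remove the old per-node send target, point discovery at the
# failover Virtual IP, then rescan the adapter.
esxcli iscsi adapter discovery sendtarget remove -A vmhba33 -a 192.168.10.11:3260
esxcli iscsi adapter discovery sendtarget add    -A vmhba33 -a 192.168.10.100:3260
esxcli storage core adapter rescan --adapter vmhba33
```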

  3. #3
    Join Date
    Aug 2011
    Location
    Germany
    Posts
    22

    Default

    Hi,

    I will use two NICs as a balance-rr bond on the DSS side and two NICs with teaming on the ESX side. All connections are 1 Gbit.
    With this setup I'm running up to 10 VMs, including AD, file server, ..., for a maximum of 100 users without I/O problems.

    What do you think: how big would the difference be with MPIO?


    best regards, Manuel

  4. #4
    Join Date
    Oct 2010
    Location
    GA
    Posts
    935

    Default

    To get aggregate throughput with a balance-rr bond, you would need a switch in between that can do port trunking.
    With MPIO, a switch is not required for aggregation.
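For reference, enabling round-robin multipathing for a LUN on a recent ESXi host looks roughly like this (the naa.* device ID is a hypothetical placeholder, not a real LUN from this setup):

```shell
# Sketch -- the naa.* device ID is a placeholder.
# Set the path selection policy to round-robin so ESXi spreads I/O
# across both 1 Gbit paths without any switch-side trunking.
esxcli storage nmp device set --device naa.600000000000000000000001 --psp VMW_PSP_RR
# Verify the active policy afterwards:
esxcli storage nmp device list --device naa.600000000000000000000001
```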

  5. #5
    Join Date
    Aug 2011
    Location
    Germany
    Posts
    22

    Default

    Yes, I know, and I'm using link aggregation.
    If I switch to MPIO with two servers, I cannot configure the Failover Manager, can I?
    I think I will stay with the balance-rr bond because it has worked in the past.

    One last question:
    After I set up the second server, I start the volume replication. As soon as I press START it goes to Consistent. Why?
    The same happens when I start the Failover Manager: it goes to Consistent, too. Where can I check the initial replication?
    Or is the 1 Gbit replication connection so fast that I just don't see it? (I have ~5 GB of data on the test server.)
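For a rough sanity check of my own: 5 GB over a saturated 1 Gbit/s link should only take about 40 seconds, ignoring protocol overhead, so the initial sync could easily finish before I look:

```shell
# Back-of-the-envelope: seconds to copy N GB over a ~1 Gbit/s link,
# ignoring overhead. 1 GB = 8 Gbit, moved at ~1 Gbit/s.
data_gb=5
echo $(( data_gb * 8 ))   # -> 40 (seconds for the 5 GB test volume)
```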



    Manuel

  6. #6
    Join Date
    Aug 2011
    Location
    Germany
    Posts
    22

    Default

    Looks like the replication really is fast enough and I just didn't see it... ;-)

    The migration in my test lab ran without downtime. Let's see how it goes with the productive system, with 485 GB of data on it.
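By the same arithmetic as for the test volume, the initial sync of the productive 485 GB should take roughly an hour on a saturated 1 Gbit/s link, ignoring overhead:

```shell
# 485 GB over ~1 Gbit/s, ignoring protocol overhead.
data_gb=485
secs=$(( data_gb * 8 ))
echo "$secs seconds (~$(( secs / 60 )) minutes)"   # -> 3880 seconds (~64 minutes)
```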

    Manuel
