
Thread: How to reactivate secondary without outage: Secondary failed with three failed hard drives

  1. #1

    How to reactivate secondary without outage: Secondary failed with three failed hard drives

    Hello,

    after a planned power outage and a restart of an Open-E DSS6 cluster (with primary & secondary RAID), the primary came up with no problems; the secondary, however, initially had two failed hard drives, and during the resync (at 94%) a third one died, so the RAID6 entered the failed state. The primary is working fine.
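
    For context: RAID6 keeps two parity blocks per stripe, so it survives any two failed disks, but a third failure before the resync finishes takes the whole array down. A quick Python sketch of that rule, purely as my own illustration (not Open-E code):

    Code:
    def raid6_state(total_disks: int, failed_disks: int) -> str:
        """Array state of a RAID6 set with the given number of failed members."""
        if failed_disks > total_disks:
            raise ValueError("cannot fail more disks than the array has")
        if failed_disks == 0:
            return "optimal"
        if failed_disks <= 2:  # dual parity tolerates exactly two failures
            return "degraded (rebuild possible)"
        return "failed (unreadable without a replica)"

    # Our case (8 disks is just an example; the real array size isn't stated):
    print(raid6_state(total_disks=8, failed_disks=2))  # degraded (rebuild possible)
    print(raid6_state(total_disks=8, failed_disks=3))  # failed (unreadable without a replica)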

    The first thing, of course, is to replace the disks, then rebuild the RAID6 volume set, define a volume group and its logical volume drives just as they were before, and finally define the volume replication tasks. All this should not be a problem, I guess.
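
    For reference, I believe the generic Linux equivalents of those rebuild steps look roughly like the sketch below; on DSS 6 all of this is done through the web GUI, so the shell access, device names, array size and volume size here are assumptions for illustration only.

    Code:
    # Hypothetical rebuild sequence using generic Linux tools (mdadm + LVM).
    # DSS 6 drives these steps from its GUI; every name/size below is invented.
    import subprocess

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # 1. Recreate the RAID6 set on the replaced disks.
    run(["mdadm", "--create", "/dev/md0", "--level=6", "--raid-devices=8",
         "/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde",
         "/dev/sdf", "/dev/sdg", "/dev/sdh", "/dev/sdi"])

    # 2. Recreate the volume group on top of the array.
    run(["vgcreate", "vg00", "/dev/md0"])

    # 3. Recreate the logical volumes with the same names and sizes as before,
    #    so the replication tasks can pair them with the primary's volumes.
    run(["lvcreate", "-L", "500G", "-n", "lv00", "vg00"])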

    Next I would try to re-activate and re-integrate the "repaired" secondary passive RAID simply by clicking "Start" in the Setup -> Failover menu of the secondary system.

    Is this the right way to go, and should it work? Or is it impossible to bring back a failed secondary without restarting failover as a whole on both primary and secondary (causing a loss of all iSCSI connections)?

    Anything else to watch out for?

    Thanks for any hints
    Rainer

  2. #2

    You will need to make sure that the replication task is deleted and recreated and that it is synced from the Primary side. You will need to start the Failover again, though, so be aware of this. In DSS V7 you don't have to, and you can hot-add to the cluster without any downtime.
    All the best,

    Todd Maxwell

  3. #3

    Quote Originally Posted by To-M
    You will need to make sure that the replication task is deleted and recreated and that it is synced from the Primary side. You will need to start the Failover again, though, so be aware of this. In DSS V7 you don't have to, and you can hot-add to the cluster without any downtime.
    Thank you very much for your answer.

    You recommended deleting the replication tasks. Did you mean any task that might still exist on the secondary, broken Open-E box, or also tasks on the healthy primary?
    If I have to delete replication tasks on the primary as well, how can I recreate them? As far as I remember, these tasks were auto-created when the logical volumes were defined, but on the primary I cannot delete and recreate volumes, since that would delete all my data.

  4. #4

    If the 2nd node has to be completely rebuilt, then you will need to delete the replication tasks from the Primary and, once the Secondary is ready, reconnect and recreate the tasks. Deleting and recreating the tasks does not delete the data; the data resides on the logical volumes / volume group / RAID array.
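
    To picture why: a replication task is only metadata pairing a source volume with a destination; it references the logical volume, it does not contain the data. A toy sketch of that layering (my illustration, not DSS internals):

    Code:
    # Toy model: deleting/recreating a replication task never touches the data.
    class LogicalVolume:
        def __init__(self, name, data):
            self.name = name
            self.data = data  # lives on the volume group / RAID array underneath

    class ReplicationTask:
        def __init__(self, source, destination):
            self.source = source        # a reference, not a copy
            self.destination = destination

    lv = LogicalVolume("lv00", b"production data")
    task = ReplicationTask(lv, "secondary:lv00")
    del task                                      # delete the task
    assert lv.data == b"production data"          # the data is untouched
    task = ReplicationTask(lv, "secondary:lv00")  # recreate the task
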
    All the best,

    Todd Maxwell

  5. #5

    Hello Todd,

    thank you very much for your help. The system is now up and running again, and right now it's syncing from the primary.

    Have a nice day
    Rainer
