I have a two-node Active-Active cluster in production.

Open-E v7.0up10.9101.7637

First node named "dss1"
Second node named "dss2"

Network interfaces on both nodes:
- management GUI = eth0
- Storage Client Access = bond0
- Storage Client Access = bond1
- Volume Replication = bond2

Logical Volumes:
dss1: lv0000, lv0001
dss2: lv0002, lv0003, lv0004

iSCSI targets:
dss1: dss1.target0
dss2: dss2.target0

Replication source settings:
lv0000 dss1=source dss2=destination
lv0001 dss1=source dss2=destination
lv0002 dss1=destination dss2=source
lv0003 dss1=destination dss2=source
lv0004 dss1=destination dss2=source

Replication tasks:
dss1: "VM-Data1" for lv0000, Status = Running
"VM-File-Data" for lv0001, Status = Running
"VM-Arh-Data_reverse" was automatically created by the system for reverse lv0002 replication from dss2, Status = Stopped
"VM-Sql-Data_reverse" was automatically created by the system for reverse lv0003 replication from dss2, Status = Stopped
"VM-Data2_reverse" was automatically created by the system for reverse lv0004 replication from dss2, Status = Stopped

dss2: "VM-Data1_reverse" was automatically created by the system for reverse lv0000 replication from dss1, Status = Stopped
"VM-File-Data_reverse" was automatically created by the system for reverse lv0001 replication from dss1, Status = Stopped
"VM-Arh-Data" for lv0002, Status = Running
"VM-Sql-Data" for lv0003, Status = Running
"VM-Data2" for lv0004, Status = Running

!!!!!

A few days ago the "dss1" node failed, and its resources came under the control of the "dss2" node. Now "dss2" hosts:
- Virtual IPs: all of them
- iSCSI targets: dss1.target0 (lv0000, lv0001) and dss2.target0 (lv0002, lv0003, lv0004)
- Replication tasks: "VM-Data1_reverse", "VM-File-Data_reverse", "VM-Arh-Data", "VM-Sql-Data" and "VM-Data2", all with Status = Running

On the failed node I had to replace the motherboard and the disk controller.
After that, I reinstalled Open-E, repeating the original configuration:
- Open-E version, build and licences
- Network connections to the same ports of the same switches
- Network settings
- Host name
- Volume group
- Logical volumes (the same names, types and sizes). The only difference from the initial configuration is that I set up lv0000 and lv0001 on "dss1" as "destination" for the replication tasks, because I'm afraid the data would otherwise replicate in the wrong direction.

Now I want to return "dss1" to the cluster.
I can ping all IP addresses from one node to the other.
Host binding from "dss1" to "dss1" is reachable.
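
Since ICMP ping alone may not prove that the host-binding/replication service itself is reachable, I also want to check plain TCP connectivity between the nodes. Below is a minimal sketch of such a check in Python; the addresses and the port are placeholders (not my real bond2 addresses), and the port has to be replaced with whatever port the binding/replication traffic actually uses in DSS V7 - I am not certain which one that is.

import socket

# Placeholder peer addresses and port - NOT my real values.
# Replace with the actual bond2 (volume replication) addresses of dss1/dss2
# and the TCP port used by the host-binding / replication service.
PEERS = {
    "dss1-bond2": "192.168.2.1",
    "dss2-bond2": "192.168.2.2",
}
PORT = 3260        # placeholder port, replace with the real service port
TIMEOUT = 3        # seconds

def tcp_reachable(address, port):
    """Return True if a TCP connection to address:port can be opened."""
    try:
        with socket.create_connection((address, port), timeout=TIMEOUT):
            return True
    except OSError:
        return False

for name, address in PEERS.items():
    state = "open" if tcp_reachable(address, PORT) else "unreachable"
    print("%s (%s:%d): %s" % (name, address, PORT, state))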

BUT

1) I cannot set up host binding from "dss1" to "dss2". The error is "Too many bound hosts on remote host."
2) None of the replication tasks are working on node "dss2". When I tried to manually restart the replication task "VM-Arh-Data", it failed with the error "Status: Error 32: Cannot find mirror server logical volumes" (see the size-comparison sketch after this list).
3) Every 10 minutes the "dss2" node writes an error to the log: "Connection to host 'dss1-host' lost. Please check communication route between local computer and host 'dss1-host'", although pings in all directions for all IP addresses succeed.
4) I have some recommendations from the support team:
1. recreate the RAID. = done
2. re-install the Open-E DSS (activate the license, apply small updates if needed). = done
3. the old configuration should be applied automatically on the new Open-E; if not, create logical volumes of exactly the same size as on the primary node. = done
4. create volume replication tasks; note which node should be configured as source and which as destination, then start the replication tasks. = cannot do because of the error
5. if possible, wait until the data is consistent. = cannot do
6. verify the failover configuration; if it's OK, start the failover on the "primary node". = cannot do

BUT the replication tasks are not working, neither automatically nor manually.
5) I have the saved settings of the "dss1" node in a .cnf file. Can I use them?
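
Regarding the "Error 32: Cannot find mirror server logical volumes" message above, one thing I intend to double-check is that the logical volume names and sizes on the rebuilt "dss1" really match "dss2" exactly, since support stressed that the sizes must be identical. The sketch below only illustrates that comparison; the two dictionaries are hypothetical, hand-typed listings (as read from each node's GUI), not the output of any Open-E command, and the sizes shown are made up.

# Hypothetical, hand-typed listings of logical volumes and their sizes (GiB),
# copied from the GUI of each node. Replace with the real values.
dss1_volumes = {
    "lv0000": 500.00,
    "lv0001": 750.00,
    "lv0002": 300.00,
    "lv0003": 400.00,
    "lv0004": 600.00,
}
dss2_volumes = {
    "lv0000": 500.00,
    "lv0001": 750.00,
    "lv0002": 300.00,
    "lv0003": 400.00,
    "lv0004": 600.00,
}

# Report volumes missing on either node and volumes whose sizes differ.
for name in sorted(set(dss1_volumes) | set(dss2_volumes)):
    size1 = dss1_volumes.get(name)
    size2 = dss2_volumes.get(name)
    if size1 is None or size2 is None:
        print("%s: missing on %s" % (name, "dss1" if size1 is None else "dss2"))
    elif size1 != size2:
        print("%s: size mismatch (dss1=%s, dss2=%s)" % (name, size1, size2))
    else:
        print("%s: OK (%s)" % (name, size1))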

Does anyone know how to return a failed node to the cluster?