
Thread: iScsi Auto Failover

  1. #41
    Join Date: Oct 2008 | Posts: 69

    Hi,

    I read this How-To and I have one question: which IP should be used for the iSCSI initiator? The one from bond0 or eth2?

    I'll implement this solution soon for a VMware Server on Ubuntu 8.04 and two Open-E DSS units (currently I have bond0 for iSCSI and bond1 for public access on my SAN & server).


    Thanks a lot for this great update.

    NSC

  2. #42


    Dear nsc,

    Use the bond IP address.

  3. #43
    Join Date: Aug 2008 | Posts: 236

    I'm having a problem with failover. Everything seems to be working, it's just that it refuses to use the auxiliary NIC, and as a result the failover status is "degraded". Here is my NIC layout:

    First NIC: management
    Second NIC: crossover for replication
    Third NIC: unused
    Fourth NIC: unused
    Fifth NIC: iSCSI path A
    Sixth NIC: iSCSI path B


    I'm using MPIO and do not want to use bonded interfaces. So is it a requirement that all interfaces for failover have to be on the same subnet?

  4. #44


    The first NIC is for management
    Second NIC is Cross over for replication
    Third NIC Unused
    Fourth NIC Unused
    Fifth NIC iSCSI-Path A
    Sixth NIC iSCSI-Path B

    We recommend that the replication link be bonded - so please bond NIC 2 with one of the unused NICs. We considered this during development, and a bonded replication link is the only configuration we support; an unbonded replication link is unsupported. This is for your benefit.

    We have not tested MPIO with regard to the virtual IP address, but I believe you can use the other unused NIC (4) to set the virtual IP address and use it with MPIO from the host side. An example NIC configuration for MPIO would be as follows:

    On the server, both network adapters must be configured in different subnets, for example:
    - adapter no. 1: IP address 192.168.2.220, netmask 255.255.255.0 (virtual IP address)
    - adapter no. 2: IP address 192.168.3.220, netmask 255.255.255.0 (virtual IP address)

    and the client network adapters:
    - adapter no. 1: IP address 192.168.2.221, netmask 255.255.255.0
    - adapter no. 2: IP address 192.168.3.221, netmask 255.255.255.0
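The two-subnet layout above can be sanity-checked with a short Python sketch using the standard `ipaddress` module (the addresses are the hypothetical ones from the example; this is not part of DSS itself):

```python
import ipaddress

# Hypothetical MPIO layout from the example above: each server adapter must
# sit in a different subnet, and each client adapter must share a subnet
# with its server-side counterpart.
server = [("192.168.2.220", "255.255.255.0"), ("192.168.3.220", "255.255.255.0")]
client = [("192.168.2.221", "255.255.255.0"), ("192.168.3.221", "255.255.255.0")]

def network(ip, mask):
    # strict=False lets us pass a host address instead of a network address
    return ipaddress.ip_network(f"{ip}/{mask}", strict=False)

server_nets = [network(ip, mask) for ip, mask in server]
client_nets = [network(ip, mask) for ip, mask in client]

# The two iSCSI paths must not share a subnet ...
assert len(set(server_nets)) == len(server_nets), "server paths share a subnet"
# ... and each client adapter must live in its path's subnet.
for s, c in zip(server_nets, client_nets):
    assert s == c, f"client subnet {c} does not match server subnet {s}"

print("MPIO subnet layout OK")
```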
    All the best,

    Todd Maxwell


    Follow the red "E"
    Facebook | Twitter | YouTube

  5. #45
    Join Date: Aug 2008 | Posts: 236

    Thanks very much. Our MPIO is on the host side.

    What about this question: do all NICs for failover have to be on the same subnet? Or should all NICs be able to reach the ping node?

  6. #46


    Dear enealDC,

    The virtual IP and the ping node should be on the same subnet, and the replication link should be on its own subnet.

    Regards,
    SJ
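That rule can be expressed as a small Python check; all addresses below are hypothetical, chosen only to illustrate it:

```python
import ipaddress

def same_subnet(ip_a, ip_b, mask):
    """True if both hosts fall inside the same network for the given mask."""
    net_a = ipaddress.ip_network(f"{ip_a}/{mask}", strict=False)
    net_b = ipaddress.ip_network(f"{ip_b}/{mask}", strict=False)
    return net_a == net_b

# Hypothetical layout: virtual IP and ping node share a subnet,
# while the replication link lives in its own private subnet.
virtual_ip  = "192.168.2.220"
ping_node   = "192.168.2.1"
replication = "10.0.0.1"
mask = "255.255.255.0"

assert same_subnet(virtual_ip, ping_node, mask)        # ping node reachable
assert not same_subnet(virtual_ip, replication, mask)  # replication isolated
print("failover subnet rule satisfied")
```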

  7. #47


    Quote Originally Posted by To-M
    1. Set up volume replication and start the replication task on the primary system.
    2. On both systems, create a new target with exactly the same name and assign the LUN by clicking on the "+" button.
    3. In the GUI menu, SETUP -> Network, configure bonding using eth0 and eth1.
    4. Connect the cable from eth0 to the first network switch and the cable from eth1 to the second one.
    5. Do the same as above on the secondary system.
    6. On both systems, configure the virtual IP on the bond and select it as an auxiliary interface.
    7. Select eth2 as the auxiliary on both systems (there must be 2 auxiliary interfaces selected on every system).
    8. In Function: Failover configuration: on the primary, enable primary mode and on the secondary, enable the secondary accordingly.
    9. Select one failover task and click on Apply in function failover tasks on primary system.
    10. Click on Start, in Function: failover manager on the primary system.
    11. Check the status in Function: Failover status; all must be OK and in Task status, the destination volume must be consistent.
    12. Log on to the mirror target with the iSCSI initiator, using the virtual IP.
    13. Create a partition and format the iSCSI disk.
    14. Test the Failover function by clicking on Manual Failover in Function: Failover Manager on the primary system.
    15. Afterwards, the secondary system must show "active" mode in the node status in Function: Failover status.
    16. In order to test Failback, please click on "Sync volumes" in Function: Failover manager on the secondary system.
    17. Once both systems are in sync, the Failback button will be activated. Please check Task Status in Function: Failover status on the secondary system; it must show both volumes as consistent.
    18. Click on the Failback button in Function: Failover manager on the secondary system.
    19. Afterwards, the primary system is back in active mode and ready for failover.
    Hello,

    I used the above guide to set up auto failover, but every time I try to enable a task on the primary node it says: "Most probably there has been an error when connecting to the other node, or one of the selected tasks does not have a counterpart (a reverse task) on the other node. Please check the secondary node address and make sure you have configured appropriate reverse tasks."

    eth4+eth5 are in bonding mode, and eth3 is the auxiliary interface for replication and heartbeat. The bond (eth4+eth5) has a public IP address, and eth3 has a private one (10.0.0.x). Pinging between both servers using the private address works, and the replication works fine as well. What could be the problem?

    Thanks
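For reference, the host-side part of the quoted procedure (steps 12 and 13: logging on to the mirror target via the virtual IP, then partitioning and formatting the disk) can be sketched with open-iscsi on a Linux host. The virtual IP, target name, and device below are hypothetical placeholders; the commands are only assembled and printed here, not executed:

```python
# Sketch: assemble the command lines for steps 12-13 of the quoted procedure.
# VIRTUAL_IP, TARGET and DEVICE are hypothetical placeholders - substitute
# your own values before running anything.
VIRTUAL_IP = "192.168.2.220"
TARGET = "iqn.2008-10.com.example:mirror0"
DEVICE = "/dev/sdb"   # whichever block device the iSCSI session exposes

commands = [
    # step 12: discover and log on to the mirror target via the virtual IP
    ["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", VIRTUAL_IP],
    ["iscsiadm", "-m", "node", "-T", TARGET, "-p", VIRTUAL_IP, "--login"],
    # step 13: partition and format the iSCSI disk
    ["parted", "-s", DEVICE, "mklabel", "msdos", "mkpart", "primary", "0%", "100%"],
    ["mkfs.ext3", DEVICE + "1"],
]

for argv in commands:
    print(" ".join(argv))
    # to actually run each step: subprocess.run(argv, check=True)
```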

  8. #48
    Join Date: Aug 2008 | Posts: 236

    I'm still struggling to understand how the auxiliary interfaces should be configured.
    Someone please help me here. Should the auxiliary interfaces be in the same subnet? Because otherwise they will not both be able to reach the ping node. And if you configure both interfaces in the same subnet, you get a warning!

  9. #49
    Join Date: Aug 2008 | Posts: 236

    Also - here is an additional question. In light of the 4TB limitation for replication, is Open-E actively working on a solution for failing over multiple iSCSI volumes? I don't mean to state the obvious, but these limitations really make this feature hard to use. Let's say I have a 16TB DSS license and I need to provide redundancy for that entire 16TB. I have to create four 4TB volumes, but then I have to choose which single volume to provide redundancy for. That's a really tough choice to make. I don't want to undermine the achievement of failover - we are very thankful for it - but I do want to urge you not to rest, because we really need to tie everything together. I never cared about the 4TB restriction for replication until I realized that I could only provide redundancy for 4TB in a system that holds 6TB, for example...

  10. #50


    We will have multi-iSCSI-volume replication in the next release, during the week of the 17th.

    We do recognize the 4TB limitation and we are looking into it. Please allow the developers more time to research this; hopefully it can be addressed without additional cost.
    All the best,

    Todd Maxwell


