
Thread: Implications of Using Multiple LVs in iSCSI Failover Configuration

  1. #1

Implications of Using Multiple LVs in iSCSI Failover Configuration

    Hi,

I'm setting up replication between Open-E servers for 7 LVs on some very large disk arrays. Config is 2x bonded GbE for replication, 2x bonded GbE for iSCSI, 1x GbE for the ping node, and 1x GbE for management.

    I'm nervous about the warning I get when I select 7x LVs in the iSCSI failover setup:
    "You have chosen more than one replication task. This can potentially result in significantly longer switching times for failover and failback events. It is most recommended to perform tests of failover and failback in this configuration before using the system in a production environment. Do you want to continue?"

    What are the implications of this? Did I miss a best practice somewhere?

Also, sign me up for another vote for being able to add a new LV to the replication without losing access to the virtual address (and therefore the iSCSI LUN).

    TIA
    Dave

  2. #2


There are no real implications; we post this warning so that you know failover might take longer than with just 1 task, since with many tasks there are more checks and balances. And we know about adding other volumes to an existing task - this is another feature that will be added; please give our engineers more time.
    All the best,

    Todd Maxwell


    Follow the red "E"
    Facebook | Twitter | YouTube

  3. #3


    Quote Originally Posted by To-M
There are no real implications; we post this warning so that you know failover might take longer than with just 1 task, since with many tasks there are more checks and balances
I see. Thanks for the clarification.

    Quote Originally Posted by To-M
and we know about adding other volumes to an existing task. This is another feature that will be added; please give our engineers more time
    No worries, just wanted to throw my vote in as well.

    Thanks for the (as always) quick response!
    Dave

  4. #4


Thanks - it was not pointed at you but at others. Trust me, we want this more than you can imagine and hope to have it by summer or sooner. Get ready, some nice surprises are coming soon - can't say more, but you'll see.
    All the best,

    Todd Maxwell


    Follow the red "E"
    Facebook | Twitter | YouTube

  5. #5
    Join Date
    Aug 2008
    Posts
    236


I support a customer with an HA setup using multiple LVs that provide the backing store for their Hyper-V R2 cluster. Failover times are acceptable, and our last test cycles revealed no downtime during failover operations.
As with all things, your mileage will vary, but I think you'll be OK.

  6. #6


Thanks @enealDC and @To-M.

One more question: I've seen documentation indicating that a bonded interface will not provide better performance for replication. I understand that a bond only multiplies bandwidth across multiple connections; does each replication task not create a new connection? Am I going to see better performance with 7 jobs across 2x GbE, or should I get a 1x 10GbE interface to speed this up?

    TIA
    Dave

  7. #7
    Join Date
    Aug 2008
    Posts
    236


You'll certainly want to go with 10GbE. Don't try using multiple NICs. It's just not a very scalable method, IMO.

  8. #8


    Quote Originally Posted by enealDC
You'll certainly want to go with 10GbE. Don't try using multiple NICs. It's just not a very scalable method, IMO.
Thanks -- I'll wait to drive a Ferrari too... Just kidding.

    I agree with your assessment but the information I'm trying to get at is whether or not bonded NICs will increase performance at all in the case of multi-LV replication.

Basically -- is replication one job (i.e., one TCP stream) or multiple jobs (i.e., N TCP streams)? If multiple, then we can take advantage of multiple NICs, even if that's not an ideal solution. If one, then you're right that 10GbE is the only option...

    Thanks again
    Dave

  9. #9
    Join Date
    Aug 2008
    Posts
    236


Each replication job is independent of the others, so yes, it is multiple streams.
A dedicated 10GbE link between two DSS servers is so affordable nowadays that I can finally start recommending it. A CX4 10GbE adapter will probably cost a wee bit more than, say, a 4-port copper NIC, and many Supermicro motherboards have 10GbE built onto the platform. It's better for overall cable management, IRQ management, and performance.
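Since each job is its own TCP stream, here's a rough back-of-envelope sketch (purely illustrative, not an Open-E tool) of why a bond only helps once you have multiple tasks, assuming the bond's hash spreads streams evenly across its links:

```python
def bonded_throughput(num_streams, links=2, link_gbps=1.0):
    """Best-case aggregate throughput (Gb/s) for independent TCP streams
    hashed across a bond: each stream rides exactly one physical link,
    so the bond only helps once streams >= links."""
    if num_streams <= 0:
        return 0.0
    return min(num_streams, links) * link_gbps

print(bonded_throughput(1))                           # 1 task on 2x 1 GbE: 1.0
print(bonded_throughput(7))                           # 7 tasks on 2x 1 GbE: 2.0
print(bonded_throughput(7, links=1, link_gbps=10.0))  # 7 tasks on 1x 10GbE: 10.0
```

So 7 tasks can keep both bonded GbE links busy (~2 Gb/s best case), but a single 10GbE link still wins by a wide margin, which is why it's the cleaner recommendation.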
