
Thread: Volume Replication

  1. #1

    Volume Replication

    Hello,

    I have two servers in a failover configuration:

    1 NIC for volume replication
    4 NICs in bond0
    1 NIC for web access

    With IOMeter I get a write throughput of 110 MB/s, but when I perform a manual failover so that only one server is active, I get 220 MB/s.

    Is the replication bandwidth limited to one NIC? Is it possible to add a second NIC for replication to increase the replication bandwidth?

  2. #2

    Currently all replication tasks use only one NIC, but you can create a bond and run the replication over it.
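
    For reference, DSS sets the bond up through its GUI, but on a plain Linux box the equivalent of a two-NIC balance-rr bond would look roughly like the sketch below. The interface names eth1/eth2 and the address are placeholders, not taken from this setup:

        # Hypothetical plain-Linux equivalent of the bond the DSS GUI creates;
        # eth1/eth2 and the IP address are placeholders.
        ip link add bond0 type bond mode balance-rr miimon 100
        ip link set eth1 down && ip link set eth1 master bond0
        ip link set eth2 down && ip link set eth2 master bond0
        ip addr add 192.168.100.1/24 dev bond0
        ip link set bond0 up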

  3. #3

    Bond for Volume Replication

    Thank you for the Tip.

    I have created a balance-rr bond with two NICs for the volume replication, but this does not increase the throughput; the write speed with active replication is still 110 MB/s.

  4. #4

    When creating a volume replication task you can set the option "Bandwidth for SyncSource (MB)". By default it is set to 40 MB. To change this value you have to delete the task and create it again.
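
    For what it's worth, DSS replication is DRBD-based under the hood (see the "DRBD tuning" console option below), and this setting appears to map to DRBD's syncer rate. Assuming that mapping holds (it is my guess, not something documented here), the plain DRBD 8.x equivalent would be something like:

        resource r0 {
          syncer {
            rate 40M;   # "Bandwidth for SyncSource (MB)" -- caps the resync rate
          }
        }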

  5. #5

    Bandwidth for SyncSource

    There is a knowledgebase article, "Why Open-E request to specify bandwidth value for volume replication?": http://kb.open-e.com/Why-Open-E-requ...ation_172.html

    "Once the destination volume show consistent, replication will use maximum possible bandwidth."

    So the "Bandwidth for SyncSource" option should not limit the bandwidth once the volume is consistent.

  6. #6

    hmm, I didn't know that.

    OK, could you try one more thing? Shut down the secondary machine and see if the speed also increases to 220 MB/s.

  7. #7

    When the secondary machine is shut down, the IOMeter speed also increases from 110 to 220 MB/s. Apparently it is only the replication that slows down the throughput.

  8. #8

    OK, you can try changing some tuning options. In the console, go to Hardware Configuration (ALT + CTRL + W) and choose "DRBD tuning".
    You can find some info about the available options here: http://www.drbd.org/users-guide/re-drbdconf.html
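
    As a rough illustration of what those options correspond to in a standard DRBD 8.x drbd.conf (the values below are placeholders to show the syntax, not recommendations for this setup):

        resource r0 {
          net {
            max-buffers    8000;   # buffers DRBD may allocate for incoming data
            sndbuf-size    512k;   # TCP send buffer on the replication link
          }
          syncer {
            al-extents 3389;       # larger activity log, fewer metadata writes
          }
        }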

    I learned today that balance-rr and replication don't always work together as expected. The perfect solution would be 10 Gb Ethernet cards, but that is probably not an option?

  9. #9

    I took a look at the website www.drbd.org and at the options under "DRBD tuning". I think this is a difficult topic, and it would take a lot of time to evaluate and test. Currently I don't have the time, because the system must go into production. But we are planning a 10 Gb connection for replication between the servers in the next months.

    Thank you very much for your help!

  10. #10

    Quote Originally Posted by salmon
    I learned today that balance-rr and replication don't always work together as expected. The perfect solution would be 10 Gb Ethernet cards, but that is probably not an option?
    Well, I just learned that too from the Open-E guys, but bonding will not help at all.
    And it makes complete sense: in bonding mode the source selects one path, let's say per session. So if the session does not change (and it does not while the sync is running; the same applies to iSCSI), bonding will not give you any performance boost at all.
    Have a look at the following Open-E blog article; it made complete sense to me:

    http://blog.open-e.com/bonding-versus-mpio-explained/
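
    To make the "one path per session" point concrete: in the hash-based bonding modes, the outgoing slave is a pure function of the flow, so a single DRBD or iSCSI TCP connection always lands on the same NIC. Roughly (this is a simplification of the kernel's transmit hash, not its literal code):

        # xmit_hash_policy=layer3+4, simplified:
        #   slave = hash(src_ip, dst_ip, src_port, dst_port) % num_slaves
        # One replication session keeps the same 4-tuple, so it always maps to
        # the same slave and is capped at a single NIC's line rate. MPIO opens
        # a separate session per path, which is why it scales where bonding doesn't.
        cat /sys/class/net/bond0/bonding/xmit_hash_policy   # show the active policy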

    Bye,

    Matthias
