
Thread: ISCSI Active Fail over Performance

  1. #1

    Default ISCSI Active Fail over Performance

    My performance from each of my SAN boxes is about 75-80% of the NIC's read or write throughput, but when I enable failover it drops to about 15% utilization of a 10 Gbit NIC. Is there a way to boost failover performance, and is there a way to see how fast everything is working between the servers? The boxes are 20-disk 15k RPM SAS with Areca cards.

    I also read that Hyper-V failover clustering isn't supported yet. Is there a timetable for this being implemented? I'm using two Open-E DSS units for failover and two HA VM servers. I didn't realize performance would drop when I enabled failover; right now protection is more important, but it would be nice to know when support is coming.

    Last thing is that I can't seem to manage the Areca cards from the web interface. Is there a fix for the Areca 1880 RAID cards?

    Thanks for any help. I'm really liking Open-E and hoping I can boost my failover performance to at least 50% of a 10 Gb NIC.

  2. #2


    The fix for Hyper-V with iSCSI Failover has an ETA of the end of June this year. Your VM HA setup with Auto Failover should not cause any issues; try using a static connection when setting up the ESX initiator. For the Areca 1880, you should be able to open its web page to configure it when you go to the GUI under Setup > HW RAID. Can you tell me whether the page tries to open, or whether there is some other effect?

    Try these settings for DRBD tuning (which is the Volume Replication) and the other target settings.

    To tune replication:

    In the console, go to Hardware Configuration (CTRL + ALT + W) and choose "DRBD tuning".
    Current settings:
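    To give a feel for what this menu is tuning (the numbers below are illustrative assumptions, not Open-E recommendations -- DSS exposes these knobs through its console menu, not a config file), the equivalent DRBD 8.x settings in drbd.conf terms look like this:

    ```
    # drbd.conf excerpt -- illustrative sketch only; all values are assumptions.
    resource r0 {
      syncer {
        rate 100M;           # cap resync bandwidth so it doesn't starve live I/O
        al-extents 257;      # activity-log extents; more = fewer metadata updates
      }
      net {
        max-buffers    8000; # DRBD buffer pages on the receiving side
        max-epoch-size 8000; # write requests allowed per replication epoch
        sndbuf-size    512k; # TCP send buffer for the replication link
      }
    }
    ```

    Raising the buffer and epoch sizes is what usually helps replicated-write throughput on fast links; the resync rate cap only affects background resyncs, not normal mirrored writes.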


    Adjust the target values as follows:
    1. From the console, press CTRL + ALT + W.
    2. Select Tuning options -> iSCSI daemon options -> Target options.
    3. Select the target in question.
    4. Change the MaxRecvDataSegmentLength and MaxXmitDataSegmentLength values to the
    maximum required data size (check with the initiator to match).
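    On a Linux initiator, the matching side of step 4 can be sketched with open-iscsi; the 262144 value below is an illustrative assumption and must agree with whatever you set on the DSS target:

    ```
    # /etc/iscsi/iscsid.conf excerpt (open-iscsi) -- sketch only; the segment
    # sizes are assumptions and must match the values set on the target.
    node.conn[0].iscsi.MaxRecvDataSegmentLength = 262144
    node.conn[0].iscsi.MaxXmitDataSegmentLength = 262144
    ```

    VMware and Microsoft initiators expose equivalent parameters through their own configuration interfaces; the key point is that both ends negotiate down to the smaller value, so a mismatch silently limits you.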

    For VMware, others have reported that the values below work best:


    In the case of VMware and Microsoft initiators, these adjustments can make a
    performance difference as well.

    The target values that seem to work best are here:

    Jumbo frames can also be used, provided your hardware supports them.
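    Assuming a Linux host and an interface named eth1 (both assumptions, as are the addresses), enabling and verifying jumbo frames might look like:

    ```shell
    # Set MTU 9000 on the storage NIC; every device in the path
    # (both NICs and the switch ports between them) must support it.
    ip link set dev eth1 mtu 9000

    # Verify end to end with fragmentation forbidden (-M do):
    # 8972 = 9000 - 20 (IP header) - 8 (ICMP header).
    ping -M do -s 8972 -c 3 192.168.10.10
    ```

    If the ping fails while a default-size ping succeeds, some hop in the path is still at MTU 1500.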
    All the best,

    Todd Maxwell


  3. #3


    Chrome gives me this error after it pops a new tab: Error 312 (net::ERR_UNSAFE_PORT): Unknown error.

  4. #4


    Try another browser (Firefox, IE, Safari, etc.), as there is a bug in the latest Chrome browser.

  5. #5


    IE and Firefox are the same, except with no error, just "could not connect" screens.

  6. #6


    Also, I went to change the settings you described, and it said it would turn off failover and I'd lose all settings tied to it. That doesn't include my original iSCSI partition, correct? And I guess that means I must set up the failover system again?

  7. #7


    You *shouldn't* lose any configuration settings or data.

  8. #8


    Alright, I implemented the changes you recommended. I got a little scared since it did wipe out a few settings when I did it, but anyway, I'm getting about 32-35% utilization now, which is a lot better than before. I suppose I might need to tune a little on the Hyper-V side as well. Thanks for the help; if you think of anything else, tell me and I shall try it.
