
Thread: Adding new iSCSI Volumes

  1. #1

    Adding new iSCSI Volumes

    Hi,

    When adding a new iSCSI volume, I get a warning message saying "These changes require resetting iSCSI connections..." etc. Does this mean I have to stop all access to the existing iSCSI volumes and/or shut down all VMs that use them? Surely resetting iSCSI sessions while data is being accessed is a bad idea and could lead to data corruption?

    I don't remember seeing this with the 1TB Free version. Was this added when iSCSI failover was introduced?

  2. #2

    Hi hfourie,

    The message states that some iSCSI initiators may not recognize the new volume and that resetting the iSCSI connections may be required. In this case it is better to stop data transfer first and then reset the iSCSI connections.
    We are working on improving this, but there is no ETA yet for when that will be released.
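
    For example, on a Linux host with the open-iscsi initiator the sessions could be listed and logged out cleanly before accepting the reset, and logged back in afterwards; the target IQN and portal address below are only placeholders:

    # show the iSCSI sessions currently open on this initiator
    iscsiadm -m session

    # log out of the affected target cleanly before the reset (placeholder IQN/portal)
    iscsiadm -m node -T iqn.2009-01.com.example:target0 -p 192.168.0.220 -u

    # log back in once the new volume has been created
    iscsiadm -m node -T iqn.2009-01.com.example:target0 -p 192.168.0.220 -l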

  3. #3

    When I create a new iSCSI volume, I get:

    "These changes require resetting iSCSI connections.

    WARNING! RESETTING ISCSI CONNECTIONS WILL CAUSE A SHORT OUTAGE FOR ALL ISCSI CONNECTIONS. SOME APPLICATIONS WILL HAVE SERIOUS PROBLEMS WITH ISCSI RESETS!

    Press OK to reset iSCSI connections or CANCEL to abort,"

    If you click Cancel, no volume is created. If you click OK, all iSCSI sessions are reset, possibly corrupting a file system.

    So does this mean that every time I want to create a new iSCSI volume, I have to stop all application and server access to the SAN?

  4. #4

    Same problem

    This is exactly what I have with my DSS here:

    There are around 15 iSCSI targets running. I just create a new iSCSI _volume_ (not even assigning a target yet!), I get the warning about the connection reset, and then the current iSCSI initiators run into lots of problems like

    Feb 10 14:01:47 vms-4035 vmkernel: 9:02:24:49.578 cpu4:1104)iSCSI: session 0xb71
    2370 to iqn.shoe-2013.vmachines-sb dropped

    This was a hang of the VMware ESX iSCSI connection to the target where all the virtual machines reside, and I had to reboot. I have run into this problem several times.

    It is absolutely unacceptable that the creation of a volume causes running iSCSI connections to fail. Storage software designed for terabytes of space must be able to handle this without timeouts and the like! I can't stop my whole company every time I need another 100 GB of space.

    Regards,

    Robert

  5. #5

    Hmm, I do think your initiators are to blame then. I am also using multiple targets, and my initiators don't seem to have any trouble with that.

    In fact, I would expect an iSCSI initiator to be able to cope with short "outages" and to reconnect to the targets on its own.
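
    For example, on a Linux initiator running open-iscsi that reconnect behaviour is governed by the timeouts in /etc/iscsi/iscsid.conf; with settings along these lines (shown only as an illustration) a session rides out a short reset before any I/O errors are passed up to the applications:

    # seconds to wait for the session to come back before failing I/O to the applications
    node.session.timeo.replacement_timeout = 120

    # iSCSI NOP-Out "ping" interval and timeout used to detect a dead connection
    node.conn[0].timeo.noop_out_interval = 5
    node.conn[0].timeo.noop_out_timeout = 5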

    Cheers,
    budy
    There's no OS like OS X!

  6. #6

    Yes, I have the same problem as the_nipper. It is unacceptable to have to shut everything down. When my VM reconnects to the drive, it comes back in read-only mode and requires a restart to become writable again. I have moved to NFS/NAS because of this.
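
    (For what it's worth, on a Linux guest a filesystem that has dropped to read-only after such an outage can sometimes be recovered without a full restart, though not if its journal was aborted. A rough sketch, with /data standing in for whatever mount point is affected:)

    # check the kernel log for the I/O errors that forced the read-only remount
    dmesg | tail

    # once the iSCSI session is back, try remounting read-write
    mount -o remount,rw /data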

  7. #7

    Does anyone have a definitive answer to this one?

    I'm dealing with the same issue here. I can't imagine I have to shut down my 19 VLs just to create a new one.

    I put in a question via the support desk. I'll keep you posted.

  8. #8

    Yeah, this is a huge problem. It has to be fixed, and we need a firm ETA for the fix. Put this at the top of your queue, because it puts a huge limit on the kinds of environments this can be deployed in, but I'm sure you already know that. We all need a fix for this as soon as possible.

    We're having this problem with Fibre Channel.

  9. #9

    There has to be a way of fixing this, because EMC and the like don't have this problem.

  10. #10

    BTW, this is not an issue if you publish all of your logical volumes to the default group in Fibre Channel. In that case, you have to do LUN masking on the INITIATOR side instead of on the Open-E TARGET side. If you have QLogic initiator HBAs in Windows, use the QLconfig utility to select the LUNs that each initiator should use. This is a pain because you have to do it on every initiator machine, but it should work.

    Also, you can use the QLDirect driver for Fibre Channel path failover (like MPIO, but active-passive rather than active-active). As far as I know, this is the only way to do Fibre Channel path failover in Windows 2003 or earlier. (Windows 2008 has a generic Fibre Channel MPIO driver, I think.)
