
Thread: Adding new iSCSI Volumes

  1. #1

    Same problem

    This is exactly what I have with my DSS here:

    There are around 15 iSCSI targets running. As soon as I create a new iSCSI _volume_ (without even assigning a target yet), I get warnings about connection resets, and the existing iSCSI initiators run into lots of problems like

    Feb 10 14:01:47 vms-4035 vmkernel: 9:02:24:49.578 cpu4:1104)iSCSI: session 0xb712370 to iqn.shoe-2013.vmachines-sb dropped

    This hung the VMware ESX iSCSI connection on which all the virtual machines reside, and I had to reboot. I have run into this problem several times.

    It is absolutely unacceptable that creating a target volume causes running iSCSI connections to fail. Storage software designed for terabytes of space must be able to handle this without timeouts. I can't stop my whole company every time I need another 100 GB of space.

    Regards,

    Robert

  2. #2
    Join Date
    Nov 2008
    Location
    Hamburg, Germany
    Posts
    102


    Hmm, I think your initiators are to blame then. I am also using multiple targets, and my initiators don't seem to have any trouble with that.

    In fact I would expect an iSCSI initiator to be able to cope with short "outages" and to reconnect to the targets on its own.
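    For what it's worth, on a Linux initiator running open-iscsi this reconnect behaviour is tunable in /etc/iscsi/iscsid.conf. The parameter names below are from open-iscsi; the values are only illustrative, not recommendations:

    ```ini
    # How long (in seconds) the initiator waits for a dropped session to
    # re-establish before failing I/O up to the block layer.
    node.session.timeo.replacement_timeout = 120

    # Retry the login a few times instead of giving up on the first
    # connection reset.
    node.session.initial_login_retry_max = 8

    # NOP-Out pings detect a dead connection and trigger a reconnect.
    node.conn[0].timeo.noop_out_interval = 5
    node.conn[0].timeo.noop_out_timeout = 5
    ```

    With a generous replacement_timeout, a short outage on the target side is ridden out by the initiator instead of being surfaced as I/O errors.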

    Cheers,
    budy
    There's no OS like OS X!

  3. #3


    Yes, I have the same problem as the_nipper. It is unacceptable to have to shut everything down. When my VM reconnects to the drive, it comes back in read-only mode and requires a restart to become writable again. I have moved to NFS or NAS because of this.

  4. #4


    Does anyone have a definitive answer to this one?

    I'm dealing with the same issue here. I can't imagine having to shut down my 19 VLs just to create a new one.

    I put in a question via the support desk. I'll keep you posted.

  5. #5


    Yeah, this is a huge problem, and it has to be fixed. We need a firm ETA on this. Please put it at the top of your queue: it severely limits the kinds of environments this product can be deployed in, but I'm sure you already know that. We all need a fix as soon as possible.

    We're having this problem with fibre channel.

  6. #6


    There has to be a way of fixing this, because EMC and the like don't have this problem.

  7. #7


    By the way, this is not an issue if you publish all of your logical volumes to the default group in fibre channel. In that case you do the LUN masking on the INITIATOR side instead of on the Open-E TARGET side. If you have QLogic initiator HBAs in Windows, use the QLconfig utility to select the LUNs that each initiator should use. This is a pain because you have to do it on every initiator machine, but it should work.

    You can also use the QLDirect driver for fibre channel path failover (like MPIO, but active-passive rather than active-active). I think this is the only way to do fibre channel path failover in Windows 2003 or earlier. (Windows 2008 has a generic fibre channel MPIO driver, I believe.)
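    For comparison, on a Linux host the same kind of active-passive path failover is handled by dm-multipath. A minimal illustrative /etc/multipath.conf is sketched below; the WWID and alias are made up, so substitute your own (from `multipath -ll`):

    ```
    defaults {
        # Queue I/O instead of failing it while all paths are down,
        # so a short target outage is not surfaced as errors.
        no_path_retry    12
        polling_interval 5
    }

    multipaths {
        multipath {
            # Placeholder WWID; look up the real one with `multipath -ll`.
            wwid  3600a0b800012345600000000deadbeef
            alias open_e_lun0
            # "failover" groups paths active/passive, like QLDirect.
            path_grouping_policy failover
        }
    }
    ```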
