-
Adding new iSCSI Volumes
Hi,
When adding a new iSCSI volume, I get a warning message saying "These changes require resetting iSCSI connections... etc." Does this mean I have to stop all access to any iSCSI volumes and/or shut down all VMs using iSCSI volumes? Surely resetting iSCSI sessions while data is being accessed is not a good thing, and could possibly corrupt data?
I don't remember seeing this with the 1 TB free version. Was this added when iSCSI failover was introduced?
-
Hi hfourie,
The message states that some iSCSI initiators may not recognize the new volume, and that resetting the iSCSI connection may be required. In this case it would be better to stop data transfer and then reset the iSCSI connections.
We are working on improving this, but there is no ETA for when the fix will be released.
-
When I create a new iSCSI volume, I get:
"These changes require resetting iSCSI connections.
WARNING! RESETTING ISCSI CONNECTIONS WILL CAUSE A SHORT OUTAGE FOR ALL ISCSI CONNECTIONS. SOME APPLICATIONS WILL HAVE SERIOUS PROBLEMS WITH ISCSI RESETS!
Press OK to reset iSCSI connections or CANCEL to abort."
If you click Cancel, no volume is created. If you click OK, all iSCSI sessions are reset, possibly corrupting a file system.
So does this mean that every time I want to create a new iSCSI volume, I have to stop all application and server access to the SAN?
-
Same problem
This is exactly what I have with my DSS here:
There are around 15 iSCSI targets running. If I just create a new iSCSI _volume_ (not even assigning it to a target yet!), I get the warning about resetting connections, and then the current iSCSI initiators get lots of problems like:
Feb 10 14:01:47 vms-4035 vmkernel: 9:02:24:49.578 cpu4:1104)iSCSI: session 0xb712370 to iqn.shoe-2013.vmachines-sb dropped
This hung the VMware ESX iSCSI connection to the datastore where all virtual machines reside, and I had to reboot. I have run into this problem several times.
It is absolutely unacceptable that the creation of a target volume causes running iSCSI connections to fail. Storage software designed for terabytes of space must be able to handle this without timeouts. I can't stop my company every time I need another 100 GB of space.
Regards,
Robert
-
Hmm, I do think that your initiators are to blame then. I am also using multiple targets, and my initiators don't seem to have trouble with that.
In fact, I would expect an iSCSI initiator to be able to cope with short "outages" and to reconnect to its targets on its own.
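Something like this conceptual sketch is what I'd expect an initiator to do internally (Python, made-up names, not any real initiator's code):

```python
import time

def reconnect_with_backoff(connect, max_attempts=5, base_delay=1.0):
    """Retry a dropped iSCSI session with exponential backoff.

    `connect` is a caller-supplied callable that raises on failure;
    a real initiator would re-login to the target portal here.
    """
    for attempt in range(max_attempts):
        try:
            return connect()          # e.g. re-login to the target
        except ConnectionError:
            delay = base_delay * (2 ** attempt)
            time.sleep(delay)         # wait before the next attempt
    raise ConnectionError("target never came back - report an I/O error")
```

The point being: a short reset should cost a few seconds of retries, not a hung datastore.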
Cheers,
budy
-
Yes, I have the same problem as the_nipper. It is unacceptable to have to shut everything down. When my VM reconnects to the drive, the file system comes back read-only and requires a restart to become writable again. I have moved to NFS or NAS because of this. :(
-
Does anyone have a definitive answer to this one?
I'm dealing with the same issue here. I can't imagine I have to take my 19 volumes offline in order to create a new one.
I put in a question via the support desk. I'll keep you posted.
-
Yeah, this is a huge problem. It has to be fixed, and we need a firm ETA. Put this at the top of your queue, because it puts a huge limit on the types of environments this can be deployed in, but I'm sure you already know that. We all need a fix for this as soon as possible.
We're having this problem with Fibre Channel.
-
There has to be a way of fixing this, because EMC and the like don't have this problem.
-
BTW, this is not an issue if you publish all of your logical volumes to the default group in Fibre Channel. In that case, you have to do LUN masking on the INITIATOR side instead of on the Open-E TARGET side. If you have QLogic initiator HBAs in Windows, use the QLconfig utility to select the LUNs that should be used on each initiator. This is a pain because you have to do it on every initiator machine, but it should work.
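Conceptually, initiator-side masking is just a per-host allow-list applied against everything the target presents. A rough Python illustration (the names are mine; this is not what QLconfig actually does under the hood):

```python
# All LUNs the target presents to the default group, keyed by LUN id.
presented_luns = {0: "quorum", 1: "vm-datastore", 2: "sql-data", 3: "scratch"}

# Per-initiator allow-lists, maintained by hand on every host (the painful part).
allowed = {
    "host-a": {0, 1},
    "host-b": {2, 3},
}

def visible_luns(host):
    """Return only the LUNs this host is allowed to use."""
    return {lun: name for lun, name in presented_luns.items()
            if lun in allowed.get(host, set())}

print(visible_luns("host-a"))   # {0: 'quorum', 1: 'vm-datastore'}
```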
Also, you can use the QLDirect driver for Fibre Channel path failover (like MPIO, but active-passive rather than active-active). I think this is the only way to do Fibre Channel path failover in Windows 2003 or earlier. (Windows 2008 has a generic Fibre Channel MPIO driver, I think.)
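Active-passive just means one path carries all I/O and the standby sits idle until the active one fails, roughly like this (a conceptual sketch of the idea, not the QLDirect driver itself):

```python
def send_io(paths, io):
    """Try the active (first) path; fail over to standby paths only on error.

    Unlike active-active MPIO, I/O is never spread across paths - a
    standby path is used only after the active one has failed.
    """
    last_error = None
    for path in paths:                 # paths[0] is active, the rest standby
        try:
            return path.write(io)
        except ConnectionError as e:
            last_error = e             # active path failed: fail over
    raise last_error                   # every path failed
```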
-
Maybe in the next release?
Maybe this will be fixed in v6. But if so, that sure sounds like the M$ way of doing things to keep people buying the next version of the software. And then v6 will have some serious limitations/bugs that won't be fixed till v7, ...
-
Has this issue been fixed in DSS v6? I've looked through the V6 threads and haven't seen anything pertaining to this.
-
Yes, we are coming out with a fix for this issue in this month's release for V6.
-
Will it only affect those who are not using replication?
-
The fix is part of the new SCST target, for which we are creating an update covering all iSCSI functions and FC targets.
-
In V6, is it yet possible to expand a volume that is in replication without knocking other replicating volumes offline? Or is it possible to add/remove replicated volumes without disturbing replicating volumes that are in use? :confused:
Thanks!
-
If the volumes are replicating, you will need to stop the replication task. I believe the engineers are looking into this feature for the future (no ETA!!). The process would be to increase the destination volume first and then the source, for the obvious reason that the destination must always be at least as large as the source. For now you have to stop both tasks and increase the size on both ends (keeping the sizes equal on both ends, of course).
"" Or is it possible to add/remove replicated volumes without disturbing replicating volumes that are in use?""
With manual failover, yes; with Auto Failover, no.
-
Thanks, Todd.
"If the volumes are replicating you will need to stop the replication task. I believe in the future engineers are looking into this feature (not ETA!!) but the process would be that you would have to increase the Destination volume first then the Source for obvious reasons but for now you have to stop both and increase the size for both (being equal on both ends of course)."
- So to adjust the size of a volume in V6 (unlike V5), I would only have to turn off the replication task associated with the volume? No effect on the virtual IP, and no need to turn off failover to make the adjustments?
Thanks for the answers; I just want to make sure that what I need is available in the latest release before I purchase the upgrade.
-
I should have made this clearer - you are correct. The full process is:
1. Stop the Auto Failover service on both nodes.
2. Stop the replication task on both nodes.
3. Remove replication from the volume (via the action pull-down button in the Volume Manager).
4. Expand the volume on both ends.
5. Reverse the process to turn replication and failover back on.
I need to make a video of this - just need a little more time - maybe in the next two weeks I will have it done. In the meantime, the sketch below lays out the order of operations.
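Here is the same order of operations as a Python sketch - every call is just a stand-in for a manual step in the DSS web GUI, not a real API:

```python
def gui_step(desc):
    """Stand-in for a manual step in the DSS web GUI."""
    print("GUI step:", desc)

def expand_replicated_volume(new_size_gb):
    # 1. Stop the Auto Failover service on both nodes.
    gui_step("stop Failover Manager on primary and secondary")
    # 2. Stop the replication task on both nodes.
    gui_step("stop replication task on both nodes")
    # 3. Remove replication from the volume (Volume Manager -> action menu).
    gui_step("remove replication from the volume on both nodes")
    # 4. Expand the volume on both ends - destination first, because the
    #    destination must never be smaller than the source.
    gui_step(f"grow destination volume to {new_size_gb} GB")
    gui_step(f"grow source volume to {new_size_gb} GB")
    # 5. Reverse the process: replication first, then failover.
    gui_step("re-add replication and restart the replication task")
    gui_step("restart Failover Manager on both nodes")

expand_replicated_volume(500)
```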
-
Thanks, Todd.
So once I turn auto failover off, that will break/disable the virtual IP, causing each volume that is connected to a server to "break" until the VIP is re-enabled by turning failover back on?
-
Actually, you are stopping the Failover Manager from the GUI (SETUP -> network -> iSCSI Failover -> Function: Failover Manager); that is what stops or starts auto failover. The VIP should still be on, unless the ping node is pointing to the other server. By the way, for anyone wondering, a UPS makes a good ping node.
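To make the ping node idea concrete: it lets a node tell "my peer died" apart from "my own network is down". Conceptually the decision looks something like this (my own illustration, not the actual Failover Manager code):

```python
def should_take_over(peer_alive, ping_node_alive):
    """Decide whether the secondary should claim the virtual IP.

    peer_alive:      did the heartbeat to the other DSS node succeed?
    ping_node_alive: can we still reach the always-on ping node (e.g. a UPS)?
    """
    if peer_alive:
        return False   # primary is fine, do nothing
    if not ping_node_alive:
        return False   # our own network is down - don't grab the VIP
    return True        # peer is gone but the network is up: take over

# A UPS makes a good ping node precisely because it is always powered on.
print(should_take_over(peer_alive=False, ping_node_alive=True))   # True
```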