Quote:
Originally Posted by enealDC
I've done some testing, and on certain hardware you can get better performance using a combination of RAID1 and LVM striping than you can out of hardware RAID10.
Since the LVM options let you control not only the number of stripes but also the stripe width, can you expose these options?

We have this in our backlog as a feature request, but I don't have an ETA as to when this will be released.
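For context, on a plain Linux box this request maps to the --stripes (-i) and --stripesize (-I) options of lvcreate. A minimal sketch, assuming two mdadm RAID1 mirrors at /dev/md0 and /dev/md1 (device names, volume group and sizes are placeholders, not anything DSS exposes today):

    # Two RAID1 mirrors become physical volumes in one volume group
    pvcreate /dev/md0 /dev/md1
    vgcreate vg_data /dev/md0 /dev/md1
    # Stripe the logical volume across both mirrors: 2 stripes, 256 KiB stripe size
    lvcreate -n lv_data -L 500G -i 2 -I 256 vg_data

Exposing just those two lvcreate parameters in the volume manager GUI would cover the request.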
An option for changing the I/O scheduler per block device would be nice. Not sure if this is already possible; I couldn't find it. I'm also not sure how to check the currently selected scheduler; AFAIK this is normally CFQ by default.
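For reference, on a stock Linux kernel the scheduler can be checked and switched per device through sysfs (sda is just an example device, and the change does not persist across reboots):

    # The active scheduler is shown in brackets
    cat /sys/block/sda/queue/scheduler
    # e.g.: noop anticipatory deadline [cfq]
    # Switch this device to the deadline scheduler
    echo deadline > /sys/block/sda/queue/scheduler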
Once the Volume Replication task is scheduled (if this is what you are talking about), it can be viewed under Status > Tasks. It can only be stopped, not changed, unless you want to stop it and replicate the volume to a different system with a destination volume of the same size.
Hardware VSS provider functionality for use with Microsoft DPM to protect Hyper-V guests would really lift Open-E to the enterprise level we hoped to achieve with it.
- dedicated block cache device
http://www.facebook.com/note.php?note_id=388112370932
http://bcache.evilpiepirate.org/
It would be nice to have it, hot-configurable between write-through (wt) and write-back (wb) mode.
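For reference, this is roughly how bcache is driven on a plain Linux system; device names are placeholders and the cache-set UUID comes from bcache-super-show, so treat it as a sketch rather than anything DSS provides:

    # /dev/sdb = backing (slow) device, /dev/sdc = SSD cache device
    make-bcache -B /dev/sdb
    make-bcache -C /dev/sdc
    # Attach the cache set to the backing device (UUID from: bcache-super-show /dev/sdc)
    echo <cache-set-uuid> > /sys/block/bcache0/bcache/attach
    # Hot-switch between write-through and write-back at runtime
    echo writethrough > /sys/block/bcache0/bcache/cache_mode
    echo writeback > /sys/block/bcache0/bcache/cache_mode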
It is unfortunate that the developer/project has gone idle, especially when the code is only described as "ready for non-critical uses", which is hardly how I would describe a SAN/NAS...
Quote:
Originally Posted by red
Replication over FC, is this in the plans?

None for this year, though this could change depending on demand.
Any data deduplication feature to be available soon?
Yes, we are looking into this, but there is no set date; we hope to have some idea of when by late summer.
Any news on hardware VSS support (iSCSI target hardware provider) for use with DPM/Hyper-V?
Are you looking to just OFFLOAD, or something else?
The reason is that you can start a consistent snapshot with our API function.
For SAN, please continue reading below.
DSS V6 provides the option to start a DSS snapshot from a script, so if you have a database running on a DSS V6 volume you can trigger a snapshot on DSS V6 from the script and then run the backup.
We have a step-by-step guide for this on our website; see the link below.
http://www.open-e.com/library/how-to-resources/
Open-E Snapshots:
2010.12 Remote Snapshot Control with API of DSS V6
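As a rough illustration only (the snapshot command below is a hypothetical placeholder, not the real API call; the actual calls are documented in the how-to above), a backup wrapper might be structured like this:

    #!/bin/sh
    # Flush pending writes / quiesce the application first
    sync
    # Placeholder for the real DSS V6 remote API snapshot call described in the guide
    dss_api_snapshot_create lv0000 snap00000   # hypothetical command and names
    # Then run the backup job against the snapshot target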
Here is the definition of the VSS provider:
The VSS provider is the component that creates and maintains the shadow copies. This can occur in the software or in the hardware. The Windows operating system includes a VSS provider that uses copy-on-write. If you use a storage area network (SAN), it is important that you install the VSS hardware provider for the SAN, if one is provided. A hardware provider offloads the task of creating and maintaining a shadow copy from the host operating system.
We can ONLY offload the software provider if we do it on our end. Is this what you are asking for? If so, we will look into providing it for you.
If this is for NAS, it is not something we do.
I would also like to see a VSS provider.
If you run the DSS as storage for a Hyper-V cluster and want to take snapshots of the virtual machines stored on the DSS, your host machines need the hardware VSS provider so that backup jobs can run in parallel.
I don't want to snapshot the entire DSS, just the VMs that are stored on it.
It is possible to do this without the hardware VSS provider, but it means taking the backups one at a time. We have 10 VMs stored on our DSS, so this can take 14+ hours and requires the SAN volume to be put into redirected access mode.
With redirected access switched on, one host accesses the storage directly and the other hosts go through that host to reach the VHD files. This has a huge performance impact.
I feel that DSS V6 can't be used in enterprise environments to host Hyper-V failover clusters, because the lack of a hardware VSS provider prevents proper backups.
TechNet info on CSV cluster backups with a hardware VSS provider:
http://blogs.technet.com/b/asim_mitr...v-cluster.aspx
The ability to lower the I/O priority when initializing a File I/O volume, or to cancel the initialization.
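For comparison, on a generic Linux system the equivalent knob would be ionice on whatever process does the zeroing; the process name below is only a guess at how the initialization runs, so treat this as a sketch:

    # Find the initialization process (assumed here to be a dd writing zeros)
    pgrep -f "dd if=/dev/zero"
    # Put it in the idle I/O class so it only uses otherwise-unused bandwidth
    ionice -c 3 -p <pid>
    # Or keep it in the best-effort class at the lowest priority
    ionice -c 2 -n 7 -p <pid>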
Join the vSphere 5 storage API programs.
Focusing development on the VMware storage APIs would be something I would like to see.
I'm going to say they will be a major consideration in the future!
-VMware vStorage APIs for Array Integration (VAAI)
-VMware vStorage APIs for Storage Awareness (VASA)
Better SMB/CIFS support for Windows.
I want to use the full set of Windows rights (Change, Traverse, etc.) and not only the three basic rights: read, write and execute...
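For reference, on a stock Samba setup the richer Windows ACLs are usually surfaced with options like the ones below (share name and path are placeholders; whether and how DSS exposes these is exactly what is being requested):

    [share]
        path = /srv/share
        # Store full NT ACLs in extended attributes and honour inheritance
        vfs objects = acl_xattr
        map acl inherit = yes
        store dos attributes = yes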
Wondering if it could be done:
Linux supports growing software RAID arrays with mdadm. I can't find it in Open-E; there is only a feature for adding a spare disk, not for growing the s/w array with that disk. I could create another RAID group and merge the two with the volume manager, but then I lose space to an extra parity disk.
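On plain Linux the grow operation is just two mdadm calls; a minimal sketch, assuming a 3-to-4 disk reshape and placeholder device names:

    # Add the new disk, then reshape the array onto it
    mdadm --add /dev/md0 /dev/sdd
    mdadm --grow /dev/md0 --raid-devices=4
    # Watch the reshape, then grow whatever sits on top (here an LVM PV)
    cat /proc/mdstat
    pvresize /dev/md0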
Linux supports multipathing (failover, multibus, etc.) with FC initiators and dm-multipath. I wonder if it could be added, as many FC connections go through a fabric with multiple paths for redundancy and HA.
With a DS4800 connected to the DSS as a target over two paths, I see one DS4800 LUN plus a 0 GB LUN labelled "not supported". When I fail one of the two redundant controllers, the 0 GB drive disappears. When I un-fail that controller, I sometimes get a message from the DS4800 that the DSS is using the wrong path, through the bad controller, i.e. the non-preferred path.
I think the FC side really needs tuning.
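For reference, on a standard Linux install this is handled by a small /etc/multipath.conf; the DS4800 product string below is an assumption and should be checked against the output of multipath -ll, so treat it as a sketch:

    defaults {
        user_friendly_names yes
    }
    devices {
        device {
            vendor                "IBM"
            product               "1815*"          # DS4800 (assumed id, verify with multipath -ll)
            path_grouping_policy  group_by_prio
            failback              immediate
        }
    }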
Another idea of mine is to add FlashCache support. SSDs are much faster, so caching data from SAS, SATA or FC disks on SSD drives should give a performance jump for demanding setups.
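For reference, this is roughly how Facebook's flashcache is assembled on a plain Linux box (device names are placeholders; check the flashcache docs for the exact argument order before relying on this sketch):

    # Build a write-back cache device named "cachedev" from an SSD and a slow disk
    flashcache_create -p back cachedev /dev/sdc /dev/sdb
    # The combined device then appears under device-mapper
    ls -l /dev/mapper/cachedev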
When I physically remove a drive from a s/w RAID array and then put the disk back in, it gets a "part of s/w raid" label, and the only option is to clean the disk and reboot the DSS, which is bad... I should be able to put that drive back into the same array without manual changes on the DSS; mdadm supports such a thing.
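On the command line this is a plain mdadm re-add; a minimal sketch with placeholder devices (the internal write-intent bitmap is what makes the resync after re-adding cheap):

    # Drop the pulled disk from the array, then put the same disk back in
    mdadm --manage /dev/md0 --fail /dev/sdc --remove /dev/sdc
    mdadm --manage /dev/md0 --re-add /dev/sdc
    # With a write-intent bitmap only the blocks changed in between are resynced
    mdadm --grow /dev/md0 --bitmap=internal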
You may ask who uses s/w RAID, but with FC disks it is the best and cheapest way to go.
Also, think about giving people access to the root account over SSH, with a shell prompt and the ability to run their own Linux commands.
I am really interested in this software; I have been testing it for two days so far.
I would like a VIP with a single node running active auto-failover SSD iSCSI. This would mean you can start/boot each node and the service fails over without losing the connection to the virtual IP... it would be very useful for all systems in a cluster.
It would be perfect if we could get FC to auto-failover, as this is a weakness of Open-E compared to some of the other storage products.
Any news on hardware VSS support for use with DPM/Hyper-V/CSVs?
This topic is covered in this thread:
http://forum.open-e.com/showthread.p...2490#post12490