
Thread: The feature request thread!!!

  1. #11

    Is teaming still giving problems with Hyper-V R2? I finally have a chance to set up teaming in order to get the minimum 1 GB/s of bandwidth out of my SSD array... setup in progress...

  2. #12

    The allocation of I/O resources to volume initialization is something that really needs to be addressed.

  3. #13

    Quote Originally Posted by Robotbeat
    Another feature request (in response to a call I just had with a customer of ours):
    the ability to lower the I/O priority of a File I/O volume initialization, or to cancel the initialization.
    What were the particulars of the I/O issue? We had not noticed this as a significant problem until we initialized a 500 GB volume on a 3.0 TB SATA array that hosted other live volumes. This was on an Intel SSR212MC2 with dual quad-core processors and 8 GB of memory. The CPU, network I/O, etc. were all fine, but the I/O dedicated to the volume initialization seemed way out of line. Until we get a handle on what is causing it, we won't be doing that again without taking servers offline first.
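
    For what it's worth, if the initialization is just a dd-style zero-fill (as suggested later in this thread), the requested behavior can be approximated on a stock Linux box by running the writer in the idle I/O scheduling class. A minimal Python sketch; ionice and dd are real tools, but the backing-file path and volume size here are made up for illustration:

        import subprocess

        # Run the zero-fill under "ionice -c 3" (idle I/O class) so it only
        # consumes disk time that production volumes are not using.
        subprocess.run(
            ["ionice", "-c", "3",
             "dd", "if=/dev/zero",
             "of=/volumes/lv0000/file_io.img",  # hypothetical backing file
             "bs=1M", "count=512000"],          # ~500 GiB in 1 MiB blocks
            check=True,
        )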

  4. #14

    Same problem here with initializing a File I/O volume that was created after we had production sites running. The production servers started timing out so badly that we had to stop the initialization. Now we just use File I/O volumes without initializing them first, and they seem to be working OK <fingers crossed>. I never could get a straight answer on the downside of not initializing a File I/O volume. I was told that EMC forces everybody to initialize their File I/O volumes, so it must be good. But I bet that when EMC users initialize a File I/O volume, they don't kill access to their production sites at the same time.

  5. #15

    It would be awesome, if it's possible, to integrate Adaptec Storage Manager into DSS v6, so that Adaptec H/W RAID controllers could be managed remotely from within the Open-E software.

    Is there any way to do this?

  6. #16

    The RAID card would have to have a web-based admin program, as Areca and 3ware cards do. Adaptec does not have that.

  7. #17

    My short list

    Friendly log messages
    Many-way LAN/WAN replication/failover
    The ability to give "friendly names" to interfaces, disk units, and volumes
    Network connection specifics (link type, link speed) in the web GUI

  8. #18

    Active-active iSCSI cluster (e.g. a kind of RAID across two machines), and storage virtualization.
    Regards,
    Lukas

    descience.NET
    Dr. Lukas Pfeiffer
    A-1140 Wien
    Austria
    www.dotnethost.at

    DSS v6 b4550 iSCSI autofailover with Windows 2008 R2 failover cluster (still having some issues with autofailover).

    2 DSS: 3U Supermicro X7DBE BIOS 2.1a, Areca ARC-1261ML FW 1.48, 8x WD RE3 1TB, 1x Intel PRO/1000 MT dual-port, 1x Intel PRO/1000 PT dual-port.

    2 Windows Nodes: Intel SR1500 + Intel SR1560, Dual XEON E54xx, 32 GB RAM, 6 NICs. Windows Server 2008 R2.

  9. #19

    The allocation of I/O resources to volume initialization is something that really needs to be addressed.
    Apparently, the latest release candidate (currently in beta on the FTP server) takes care of this. I haven't tried it out yet, but it should soon make its way to a full release, available via the online software update.

    Active-active replication is possible, but it would require something like a SAN filesystem anyway, so for the most part it wouldn't be that useful (active-active replication is available in DRBD, which Open-E uses).

    Another nice feature that just came out in DRBD is the ability to replicate over low-latency InfiniBand (not just IP over InfiniBand), using the Sockets Direct Protocol. This is really neat, and it is something other SAN providers are working on for next-generation, very-high-performance systems. We don't have any customers that need something like this, but it would be a good way to all but completely mitigate the performance hit from replication.
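
    For a rough idea of how both of these map onto DRBD, here is a minimal sketch of a resource definition in DRBD 8.3-style syntax. The hostnames, IPs, and device paths are made up; "allow-two-primaries" is what enables active-active operation, and the "sdp" address family selects the Sockets Direct Protocol:

        resource r0 {
            protocol C;
            net {
                allow-two-primaries;   # active-active: both nodes may be Primary
                                       # (needs a cluster filesystem on top)
            }
            on alpha {                 # hypothetical hostname
                device    /dev/drbd0;
                disk      /dev/sdb1;   # hypothetical backing disk
                address   sdp 10.1.1.31:7789;  # replicate over InfiniBand via SDP
                meta-disk internal;
            }
            on bravo {
                device    /dev/drbd0;
                disk      /dev/sdb1;
                address   sdp 10.1.1.32:7789;
                meta-disk internal;
            }
        }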

  10. #20

    Quote Originally Posted by webguyz
    Same problem here with initializing a File I/O volume that was created after we had production sites running. The production servers started timing out so badly that we had to stop the initialization. Now we just use File I/O volumes without initializing them first, and they seem to be working OK <fingers crossed>. I never could get a straight answer on the downside of not initializing a File I/O volume. I was told that EMC forces everybody to initialize their File I/O volumes, so it must be good. But I bet that when EMC users initialize a File I/O volume, they don't kill access to their production sites at the same time.
    My advice: use Block I/O, or don't initialize File I/O volumes.
    The initialization process simply uses dd to write zeros over the blocks.
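
    For the curious, a minimal Python sketch of what that dd-style zero-fill amounts to; the backing-file path and size here are hypothetical:

        # Equivalent of `dd if=/dev/zero of=<backing file> bs=1M`: overwrite
        # the file with zeros, one 1 MiB chunk at a time.
        CHUNK = 1024 * 1024

        def zero_fill(path, size_bytes):
            zeros = bytes(CHUNK)
            with open(path, "wb") as f:
                remaining = size_bytes
                while remaining > 0:
                    n = min(CHUNK, remaining)
                    f.write(zeros[:n])
                    remaining -= n

        # e.g. a hypothetical 500 GB backing file:
        # zero_fill("/volumes/lv0000/file_io.img", 500 * 10**9)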

    I've never gotten an answer from Open-E on what will happen if you don't init the volume.
