Is teaming still giving problems with Hyper-V R2? I finally have a chance to set up teaming in order to reach the minimum 1 GB/s bandwidth from my SSD array... setting up in progress...
The allocation of IO resources to volume initialization is something that really needs to be addressed.
Originally Posted by Robotbeat: "What were the particulars of the IO issue?"
We had not noticed this as a significant issue until we initialized a 500 GB volume on a 3.0 TB SATA array that hosted other live volumes. This was on an Intel SSR212MC2 with dual quad-core processors and 8 GB of memory. CPU, network IO, etc. were all fine, but the IO dedicated to the volume initialization appeared to be way out of proportion. We won't be doing that again without taking servers offline first, at least until we get a handle on what is causing it.
Same problem here with initializing a File I/O volume that was created after we had production sites running. The production servers started timing out so badly that we had to stop the initialization. Now we just use File I/O volumes without initializing them first, and they seem to be working OK <fingers crossed>. We never could get a straight answer on the downside of not initializing a File I/O volume. We were told that EMC forces everybody to initialize their File I/O volumes, so it must be good. But I bet that when EMC users initialize a File I/O volume, they don't kill access to their production sites at the same time.
It would be awesome, if it's possible, to integrate Adaptec Storage Manager into DSS v6, so that Adaptec hardware RAID controllers could be managed remotely from within the Open-E software.
Is there any way to do this?
The RAID card would have to have a web-based admin program like the Areca or 3ware cards do. Adaptec's do not.
Friendly log messages
Many-way, LAN/WAN replication/failover
The ability to give "friendly names" to interfaces, disk units, volumes
Network information connection specifics (link type, link speed) in the web GUI
Active-active iSCSI cluster (i.e. a kind of RAID across two machines), storage virtualization.
regards,
Lukas
descience.NET
Dr. Lukas Pfeiffer
A-1140 Wien
Austria
www.dotnethost.at
DSS v6 b4550 iSCSI autofailover with Windows 2008 R2 failover cluster (still having some issues with autofailover).
2 DSS: 3HE Supermicro X7DBE BIOS 2.1a, Areca ARC-1261ML FW 1.48, 8x WD RE3 1TB, 1x Intel PRO1000MT Dualport, 1x Intel PRO1000PT Dualport.
2 Windows Nodes: Intel SR1500 + Intel SR1560, Dual XEON E54xx, 32 GB RAM, 6 NICs. Windows Server 2008 R2.
Apparently, the latest release candidate takes care of this on the FTP server (it's in beta now). I haven't tried it, but it should soon make its way to a full release, available via software update (online update).
Active-active replication is possible, but it would require something like a SAN filesystem anyway, so for the most part it wouldn't be that useful (active-active replication is available in drbd, which Open-E uses).
Another nice feature that just came out in drbd is the ability to replicate over low-latency InfiniBand (not just IP-over-InfiniBand), using the Sockets Direct Protocol. This is really neat, and it is something other SAN providers are working on for next-generation very-high-performance systems. We don't have any customers that need something like this, but it would be a good way to all but completely mitigate the performance hit from replication.
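For reference, both drbd features mentioned above are just configuration switches in stock drbd (8.3.x syntax); the resource name, node names, device paths, and addresses below are made-up placeholders, and DSS itself may not expose these knobs:

```
resource r0 {
  net {
    # dual-primary mode, the basis for an active-active setup
    allow-two-primaries;
  }
  on node-a {
    device    /dev/drbd0;
    disk      /dev/vg0/lv0;
    # "sdp" instead of the default IPv4 address family selects the
    # Sockets Direct Protocol for replication over InfiniBand
    address   sdp 10.0.0.1:7788;
    meta-disk internal;
  }
  on node-b {
    device    /dev/drbd0;
    disk      /dev/vg0/lv0;
    address   sdp 10.0.0.2:7788;
    meta-disk internal;
  }
}
```

Even with allow-two-primaries set, both nodes can only be promoted safely under a cluster-aware filesystem, which is the SAN-filesystem caveat mentioned above.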
My advice (in reply to webguyz): use block I/O, or don't initialize File I/O volumes.
The initialization process simply uses dd to write zeros out to the blocks.
I've never gotten an answer from Open-E on what will happen if you don't init the volume.
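A minimal sketch of what that initialization amounts to, assuming a Linux shell and using a small scratch file in place of the real logical volume (the paths, sizes, and the ionice idea below are illustrative assumptions, not anything DSS itself exposes):

```shell
# Hypothetical stand-in for the volume device; on a real box this would
# be a logical volume device node, not a file in /tmp.
VOL=/tmp/demo_volume

# Zero-fill the "volume" -- this is essentially all the initialization does.
dd if=/dev/zero of="$VOL" bs=1M count=8 conv=fsync 2>/dev/null

# To keep the zeroing from starving production I/O, it could in principle
# be run in the idle I/O scheduling class (assumes the CFQ scheduler), e.g.:
#   ionice -c3 dd if=/dev/zero of="$VOL" bs=1M

# Verify every byte is zero: deleting all NUL bytes should leave nothing.
[ "$(tr -d '\0' < "$VOL" | wc -c)" -eq 0 ] && echo "all zeros"
```

This also hints at why a foreground zero-fill saturates the array: dd issues large sequential writes as fast as the device accepts them, with no throttling relative to other workloads.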