Ticket #1020209
Thanx!
Type: Posts; User: pfeifferl
I sent logs to support, but they didn't find an error ...
I use balanced-rr as bond without switch (direct connection NIC to NIC).
The reboot occurred on a long-running system; I never had this...
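For reference, a balance-rr bond for a direct NIC-to-NIC link would look roughly like this on a plain Linux box. This is only a sketch of the generic Linux equivalent — DSS v6 configures bonding through its own console/GUI — and the interface names eth0/eth1/bond0 are assumptions:

```shell
# Hypothetical sketch: round-robin bond for a direct NIC-to-NIC link
# (eth0/eth1/bond0 are assumed names; DSS handles this via its own GUI)
modprobe bonding
ip link add bond0 type bond mode balance-rr miimon 100
ip link set eth0 down && ip link set eth0 master bond0
ip link set eth1 down && ip link set eth1 master bond0
ip link set bond0 up
cat /proc/net/bonding/bond0    # verify mode and slave link status
```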
Hi,
our 2-node DSS v6 failover cluster is having trouble again (no RAID/HDD issues this time). Today DSS2 (sec. node) suddenly rebooted and hung at 27% of the boot-up sequence, booted again, ...
Then I...
Do you use the current up50 version? Do you have volumes greater than 2 TB? >> http://forum.open-e.com/showthread.php?t=2164
The next release will fix that.
Is it possible to do block level volume replication to multiple sites? e.g. replication for iSCSI failover + additional replication to third DSS box for backup?
If not, is it on the roadmap?
thanks, but the source volume must be a NAS volume, right? I need it for an iSCSI volume ...
Is it possible to do a volume replication of an LV to another LV on the same DSS?
I want to duplicate data from one RAID volume A to another RAID volume B without interrupting the iSCSI connection to...
Hello,
we have a production DSS cluster, each node with 8 HDDs on an Areca RAID controller (RAID 5 and 6). We use Western Digital RE3 1 TB HDDs (WD1002FBYS-01A6B0, F/W 03.00C05, prod. date Sep 2008 and Feb...
ok, the same bad experience ... :(
Do you have more details about your defective HDDs? RE3 or RE4? WD1002FBYS-01A6B0 or -02A6B0? What F/W - C05 or C06? Production date (2008/2009)?
Areca...
today another HDD (WD1002FBYS-01A6B0) crashed with a timeout on the sec. DSS :mad: ... I hate WD disks ...
yes, the next release should fix this issue!
I read in the release notes of the upcoming release (up55; in the engineering phase) that there is a fix for trouble with SCSI-3 PR on LUNs bigger than 2 TB - will that fix this issue??
ok, thanx!
What's the default on DSS: flow control enabled or disabled?
We also use two DSS v6 up40 as iSCSI SAN for our Hyper-V R2 cluster (specs: see signature).
Performance and reliability are good, but we have two issues:
1. DSS failover breaks Hyper-V VMs on...
up40 build 4550 is running (old SCST 1.x), up50 build 4786 has issues (new SCST 2.x) ...
The cluster validation log says:
Failed to read drive layout of Cluster disk 1 from node cluster02, status 170
Cluster Disk 1 does not support Persistent Reservations. Some storage devices...
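As a side note for debugging this kind of failure, SCSI-3 persistent reservation state on a LUN can also be inspected from a Linux initiator with sg_persist (part of sg3_utils). A minimal sketch — the device name /dev/sdb is an assumption:

```shell
# Hypothetical sketch: inspect SCSI-3 PR state of an iSCSI LUN from a
# Linux initiator with sg3_utils (device /dev/sdb is an assumption)
sg_persist --in --read-keys /dev/sdb             # registered PR keys
sg_persist --in --read-reservation /dev/sdb      # current reservation holder
sg_persist --in --report-capabilities /dev/sdb   # PR features the target claims
```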
Warning!
I updated to up50 tonight and our Windows 2008 R2 cluster had big troubles with CSV (redirected I/O), because iSCSI persistent reservation has an issue in this release ...
I will contact...
Hi,
where can I check/edit the INTEL NIC driver settings and options - especially the flow control option (enable/disable)??
I think I have trouble with enabled flow control (dropped packets)...
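On a generic Linux box, flow control on Intel NICs is usually checked and toggled with ethtool; whether the DSS console exposes this is a separate question. A sketch, assuming the interface is named eth0:

```shell
# Hypothetical sketch: check and disable Ethernet flow control (pause
# frames) with ethtool; the interface name eth0 is an assumption
ethtool -a eth0                            # show autoneg/rx/tx pause settings
ethtool -A eth0 rx off tx off              # disable flow control both ways
ethtool -S eth0 | grep -i -E 'pause|drop'  # watch pause/drop counters
```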
more answers from support:
The system logs are available to download from our Web GUI: Status -> Hardware -> Function: Logs.
Sometimes a temporary system log is kept on the DOM (...
>> http://kb.open-e.com/WARNING-Low-space_117.html
2010/10/13 03:20:01 WARNING: Low space! ( < 2 MB ). Please contact with the support.
What's that??
Hi,
Is there a feature on the Open-E roadmap to keep the virtual IP on a DSS failover cluster up & running even if the failover service is stopped?
I think this is important, because at the...
I checked both defective drives with the WD Data LifeGuard Diagnostics tool and both drives are healthy and error-free ...!? Why did they fail in the RAID? Should I return the drives to WD anyway (RMA)?
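Besides the vendor tool, smartctl from smartmontools gives a second opinion on a suspect drive before deciding on an RMA. A sketch, assuming the drive shows up as /dev/sda:

```shell
# Hypothetical sketch: second opinion on a suspect drive with
# smartmontools (device name /dev/sda is an assumption)
smartctl -H /dev/sda     # overall SMART health verdict
smartctl -A /dev/sda     # attributes: watch Reallocated_Sector_Ct,
                         # Current_Pending_Sector, UDMA_CRC_Error_Count
smartctl -t long /dev/sda   # start an offline long self-test
```

Timeouts in a RAID set with otherwise "healthy" drives can also come from long in-drive error recovery (TLER/ERC behavior), which a surface scan won't show.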
RAID rebuild on prim.. node succeeded in about 4 hours :) .
Does anyone else have troubles with WD RE3 HDDs (WD1002FBYS-01A6B0)?? Our defective HDDs have production dates Sep 2008 and Feb 2009 ...
thanx for the quick reply! ok, I will fail over to the sec. DSS first, then run the rebuild on the primary via the RAID controller console. We don't use Adaptec, we use Areca, and I hope that there are no such...