After reboot we get the message: "error: no system volume found! Services can't work w/o system volume."
Selftest OK.
From the CLI the units look OK as before, but the shares are not accessible anymore. The NAS settings in the web GUI are not accessible either: "No system volume found".
BIOS settings checked. We tried another Open-E NAS R3 module with less licensed storage, with the same result.
Are you using snapshots on this LV? Try removing them via Extended tools on the console: press CTRL + ALT + X, then select "remove snapshots". Then restart the system.
Can you check the RAID health from the RAID controller's BIOS to see whether there are any signs of corruption on the disks or the array?
Also try "Logical Volume Restore" in Extended tools; this will restart the system as well.
Hi,
after reboot we get the message: "error: no system volume found! Services can't work w/o system volume."
Selftest OK.
We have been trying to get the problem solved with European support for 1.5 weeks now, and have had several remote service sessions with no success. We cannot access our data.
A certain part (40 GB) of our 5.4 TB array could not be backed up before the error occurred.
All the hardware has been checked more than once, and even flashed with the newest updates.
The controller manager says the array is "optimal" (verified twice).
The configuration is very simple: no snapshots, no backup tasks, just AD activated to serve our Windows environment.
First question: what the hell is stored in this system volume (4 GB)? Could it be just a few log files?
Second question: why can't the Open-E support repair it?
I have checked the case, and it looks as though there are sector errors in the logs. You might want to contact the engineers from the RAID controller vendor and see if they can do a deeper scan. It could also be the result of a broken RAID set, and we are not able to fix that on the controller end; the RAID manufacturer might be able to assist you with this part.
The "System volume not found" error is reported when we are not able to see the assigned volume group.
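For reference, on a plain Linux root shell the visibility of a volume group can be checked with the standard LVM2 tools. This is only a sketch: the VG name vg00 is an assumption, so substitute the name shown in your own vgdisplay output, and note that on the appliance itself these tools may only be reachable through the support console.

```shell
# Checklist of standard LVM2 commands for diagnosing a missing volume group.
# '|| true' keeps the checklist going even when a command fails, since a
# failure here is itself useful diagnostic output.
if command -v vgscan >/dev/null 2>&1; then
  vgscan            || true   # rescan all block devices for volume groups
  pvdisplay         || true   # does the disk still carry LVM physical-volume metadata?
  vgdisplay vg00    || true   # is the volume group itself visible? (vg00 is assumed)
  vgchange -ay vg00 || true   # try to activate its logical volumes
  lvm_checked=yes
else
  echo "LVM2 tools not installed on this host"
  lvm_checked=no
fi
```

If pvdisplay no longer lists the disk at all, the LVM metadata at the start of the device is likely damaged, which matches the sector errors quoted below.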
2009/01/08 07:46:46|end_request: I/O error, dev sdb, sector 2107736
2009/01/08 07:48:07|end_request: I/O error, dev sdb, sector 2107752
2009/01/08 18:51:29|end_request: I/O error, dev sdb, sector 72480
2009/01/08 20:06:43|end_request: I/O error, dev sdb, sector 2556244408
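The kernel-log lines above all follow the same "end_request: I/O error … sector N" pattern, so the affected sectors can be collected with a one-liner. A sketch using standard awk, with the sample lines from the log inlined:

```shell
# Sample lines copied from the kernel log quoted above.
log='2009/01/08 07:46:46|end_request: I/O error, dev sdb, sector 2107736
2009/01/08 07:48:07|end_request: I/O error, dev sdb, sector 2107752
2009/01/08 18:51:29|end_request: I/O error, dev sdb, sector 72480
2009/01/08 20:06:43|end_request: I/O error, dev sdb, sector 2556244408'

# Extract the failing sector numbers, sorted numerically and de-duplicated.
sectors=$(printf '%s\n' "$log" |
  awk -F'sector ' '/end_request: I\/O error/ {print $2}' |
  sort -nu)
printf '%s\n' "$sectors"
```

Sectors both near the start of the disk (72480) and deep into it (2556244408) are failing, which points at trouble in the disk/RAID layer rather than one isolated bad spot.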
Hi, I have the same problem with the R3 NAS. After a rebuild completed successfully and a reboot, I see the following message on startup:
No system volume found after reboot
I have tried to recover the file system, without results. Then I also tried to restore the LV volume group, but this function doesn't seem to work very well: it shows me two recovery points, yet both are corrupted, or in any case it is impossible to recover the LV VG from them.
What I don't understand is why the correct size and the previously existing system volume are still visible in the volume manager, yet the volume is unavailable.
Is there a safe way to recover my data (which is very, very important)????
Try to see if you can set the default volume group in the Extended tools on the console screen (CTRL + ALT + X, then select "default volume group"). If you can't download the logs to send to support@open-e.com, add another single drive just to create a volume group, then send the logs.
I checked the logs that you sent in, and the engineers are still reviewing the case. From what we have discussed, there might be some issues with the RAID set, so try to do a health check from the controller side. Here is what we are seeing from the logs: we only see the volume group on sdb (1797.4 GB).
Run a RAID health check, then try the PV Resize and set the default volume group via CTRL + ALT + X from the console.
Are there any logs from the controller?
fdisk -l
*-----------------------------------------------------------------------------*
Disk /dev/sdb: 1797.4 GB, 1797427036160 bytes
255 heads, 63 sectors/track, 218524 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
--- Volume group ---
VG Name vg+vg00
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 41
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 5
Open LV 5
Max PV 0
Cur PV 1
Act PV 1
VG Size 1.63 TB
PE Size 32.00 MB
Total PE 53567
Alloc PE / Size 53567 / 1.63 TB
Free PE / Size 0 / 0
VG UUID dssx3Q-27wx-PLol-e5s7-sUv7-2qMV-22lZvj
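As a sanity check, the numbers in the two listings above are internally consistent. A sketch with the values copied from the fdisk and vgdisplay output, assuming 512-byte sectors:

```shell
# /dev/sdb size reported by fdisk, and the highest failing sector from the logs.
disk_bytes=1797427036160
total_sectors=$((disk_bytes / 512))      # 3510599680 sectors
worst_sector=2556244408

echo "total sectors: $total_sectors"
if [ "$worst_sector" -lt "$total_sectors" ]; then
  # The failing sector is inside the disk, so the I/O errors look like real
  # media/RAID trouble, not requests past the end of the device.
  echo "worst failing sector lies within the disk"
fi

# VG Size should equal Total PE x PE Size: 53567 x 32 MiB.
total_pe=53567
pe_mib=32
vg_mib=$((total_pe * pe_mib))
echo "VG size: $vg_mib MiB (~$((vg_mib / 1024)) GiB, i.e. about 1.63 TiB)"
```

So the LVM metadata that is visible still describes a coherent 1.63 TB volume group; the problem is reading it, not its bookkeeping.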
The RAID is healthy. I do not know if there are any controller logs, and if so, where to find them; I use an Adaptec 5805.
I tried resetting all possible settings to factory defaults using CTRL + ALT + X, to no avail. I noticed that the default volume group did not disappear after the reset; I am not sure if that is OK.
I have a feeling that somewhere in the system there is a record of an old array, and it is trying to mount it as /dev/sda, and the new one as /dev/sdb.