Further testing...
Booting with just one disk, I managed to get OpenE online. Then I added the disks one by one, rescanning each time. All the disks were detected (almost) correctly, but the arrays remained in degraded mode, and I could find no indication of whether they were resyncing or not. Sitting and waiting did not help; they remained in degraded mode.
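(If Open-E's software RAID is standard Linux md underneath, which is an assumption on my part, the resync state ought to be visible from a shell with cat /proc/mdstat, which shows a progress bar and percentage for any array that is actually rebuilding. Whether the appliance gives you a shell to run it from, I haven't confirmed.)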
Interestingly, prior to this the devices were listed as S000, S001, S002 and S003. When bringing them on "hot", they were listed as S000, S002, S003 and S004.
Tomorrow I will be scrapping the lot and starting from scratch with the latest OpenE version from the site, as I am a couple of minor revisions behind. If I can't trust the darned thing to boot up when a disk fails then it's not much cop! There must be something amiss. Email alerts also failed, even though I did receive the test email on initial setup. That one may well be junk-filter related, though, and requires further investigation.
RAID 0
RAID 0 (striped disks) distributes data across multiple disks in a way that gives improved speed at any given instant. If one disk fails, however, all of the data on the array will be lost, as there is neither parity nor mirroring. In this regard the name is something of a misnomer, in that RAID 0 is non-redundant. A RAID 0 array requires a minimum of two drives. A RAID 0 configuration can be applied to a single drive provided that the RAID controller is hardware rather than software (i.e. OS-based arrays) and allows for such a configuration. This allows a single drive to be added to a controller that already contains another RAID configuration when the user does not wish to add the drive to the existing array. In this case the controller would be set up as RAID-only (as opposed to SCSI in a non-RAID configuration), which requires that each individual drive be part of some sort of RAID array.
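To picture the striping, here is a toy sketch in Python (purely illustrative, nothing to do with how any real controller or Open-E lays data out):

def raid0_locate(logical_block, num_disks):
    # Round-robin striping: logical blocks are dealt out across the member
    # disks, so every disk ends up holding a share of every file.
    return logical_block % num_disks, logical_block // num_disks

for lb in range(6):
    disk, offset = raid0_locate(lb, num_disks=2)
    print(f"logical block {lb} -> disk {disk}, offset {offset}")

With two disks, blocks 0, 2 and 4 land on disk 0 while 1, 3 and 5 land on disk 1; lose either disk and every file on the array is missing half its blocks.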
Did you try to do your settings through Console mode, using SSH software such as PuTTY?
You can connect using the username "cli"; for the password, use whatever you entered in the SSH software's password option.
Try pressing CTRL+ALT+W or CTRL+ALT+X and make your changes from there.
You can also press F1 for shortcut help.
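For example, from PuTTY or any other ssh client, the connection would look something like ssh cli@192.168.0.220 (that IP address is just a placeholder for your server's).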
Also, you said "When I remove one of the disks from the machine and boot it up...." and you are using RAID0 or RAID1, and both of these RAID levels need at least 2 HDDs!
So your array is built from 2 HDDs, and when you remove one you destroy the architecture that the RAID works with (as far as I know),
so it's not an Open-E error, because you are doing something fully unexpected! Or am I wrong?
It's like trying to run a computer without RAM or a CPU.
With RAID1 (mirror), if one disk fails the other should run the array in degraded mode. As there was no critical data on the array, I was in the process of testing various scenarios, one of which was an outright, total failure of a disk, simulated by booting the machine without the drive.
That's when I ran into OpenE not booting with the "Unknown Operand" error, and so far I have failed both to resolve it and to figure out what's causing it. The testing continues.
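As an aside, for anyone following along: the mirroring logic is easy to picture with a toy Python sketch (illustrative only, not Open-E's actual implementation). Every write is duplicated to both members, so a read survives losing either one.

disks = [{}, {}]                  # two mirrored members, mapping block -> data
failed = [False, False]

def mirror_write(block, data):
    for i, d in enumerate(disks):
        if not failed[i]:
            d[block] = data       # duplicate the write to each healthy member

def mirror_read(block):
    for i, d in enumerate(disks):
        if not failed[i]:
            return d[block]       # any surviving member can serve the read
    raise IOError("all members failed, data lost")

mirror_write(0, "payload")
failed[1] = True                  # simulate one disk dying outright
print(mirror_read(0))             # prints 'payload': the survivor carries on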
What is happening is that Open-E writes a serial number to the disk to keep track of it.
When you broke the RAID 1 by removing the disk, the software was expecting a new disk drive to be added back into the array;
by adding the just-removed drive you are confusing the software.
To keep your testing going, reformat the drive so that there is nothing on it
(I know there is no data, but the serial number is still there).
You can use the console utility in CTRL-ALT-X, "delete contents of unit";
this will wipe the drive clean.
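(If you would rather do it by hand from a Linux box, and assuming Open-E's software RAID is standard Linux md underneath, which I believe it is, the rough equivalent would be mdadm --examine /dev/sdb to show the RAID metadata written to a member disk and mdadm --zero-superblock /dev/sdb to wipe it; /dev/sdb is just a placeholder for your member disk.)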
Also, I think the software RAID does not support hot swapping of drives
(at least it did not when I tested this).
Thank you for the feedback Symm, some updates are warranted I think!
- I am installing onto a 4Gb USB pen, "FAT" filesystem (FAT16) with the default chunk size. A friend of mine who runs OpenE from a USB pen uses a 2Gb drive, and when you do an OpenE install to a HDD it creates a 2Gb partition. I wondered if this was significant.
- Initially, I powered down the machine, then removed the disk. It was not a "hot" removal. OpenE then failed to boot up, when you would _expect_ it to boot and simply indicate that the affected array was now in degraded mode. Instead, it crashed on boot with an Unknown Operand error. Had that been a real scenario, with a disk losing power or plain old dying outright, I would have been locked out of my storage server and in a right mess! Lucky for me that I discovered this in testing, not in deployment.
- Installing to a HDD has so far been successful, with the system booting up with 1 disk removed from a 4-disk array. This points toward a problem with either the USB pen in use, the machine's USB bus, or the fact that I'm using a 4Gb partition on the USB pen instead of 2Gb.