Q: Install to Supermicro -> Kernel panic after first boot - LSI RAID card
Hi, I am trying to find out if there is a straightforward way to adjust the boot options, to see if I can get a stable working config on this hardware:
Supermicro X10SDV-TLN4F mainboard with a Xeon CPU
LSI 9271-4i with 2 x SSD in a RAID1 volume and 2 x SATA drives in a second RAID1 volume
The installer boots fine and lets me select either volume as the target for Open-E DSS V7. I select the first RAID1 volume; the install proceeds normally, completes, and asks for a reboot when done.
It reboots and first shows the list of default IPs assigned to the host's interfaces. Then, after a few seconds, I end up with a recurring kernel-dump-style message that repeats every ~30 seconds, and that is about the end of the story.
Please see the attached picture/link for what the panic screen on the console looks like.
-- Is the LSI RAID card a known problem?
-- The motherboard / chipset?
-- Something else?
I see RIP messages from the RAID controller. What happens when you remove the controller from the board and start again? Place one of the SSDs on the motherboard's onboard SATA, install DSS on that as a test, and run without the controller to see if the issue still reproduces. Also, are the controller firmware and the motherboard BIOS up to date?
Hi Todd, thanks for the reply! I am afraid this is a very controlled environment, i.e. I don't have physical access to the server and the physical config is not in my control. If this does not work as it stands, I will simply use a different host / rent a different config from the hosting provider. As such, I don't think I can do the test process you recommend. Alas.
I only now (doh) read the HCL and confirmed that the LSI 9271-4i does not appear to be specifically listed as supported, so out of the gate I am guessing this might be the baseline problem.
My plan-B scenario, which I think I can try, even though it sounds a bit insane:
-- install Proxmox (a KVM-based virtualization environment) on the bare metal
-- install Open-E as a KVM-based VM inside Proxmox, using the standard Linux VirtIO disk and NIC devices (rough sketch below)
-- I am guessing (hoping?!) that Open-E has good support for Linux VirtIO devices, which would let me work around this hardware limitation.
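For reference, roughly what I have in mind on the Proxmox side (untested; the VM ID, memory/CPU sizing, storage name and ISO path are just placeholders for my environment):

    # Create the VM with a VirtIO disk and NIC, attach the DSS installer ISO,
    # and boot from the CD first (VM ID 100, "local-lvm" and the ISO path are placeholders)
    qm create 100 --name dss-v7 --memory 8192 --cores 4 --ostype l26 \
        --net0 virtio,bridge=vmbr0 \
        --virtio0 local-lvm:64 \
        --ide2 local:iso/dss-v7.iso,media=cdrom \
        --boot dcn
    qm start 100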
It also gives me the added benefit that I can run a bcache SSD caching layer under the datastore that holds the VM image for the Open-E VM, and possibly get better performance than I would have managed running Open-E directly on bare metal. :-)
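For the bcache part, something along these lines is what I would try (device names are placeholders; /dev/sdb would be the HDD RAID volume holding the datastore and /dev/sdc the SSD used as cache):

    # Turn the HDD volume into a bcache backing device and the SSD into a cache device
    make-bcache -B /dev/sdb
    make-bcache -C /dev/sdc
    # Find the cache set UUID and attach it to the backing device
    bcache-super-show /dev/sdc | grep cset.uuid
    echo <cset-uuid> > /sys/block/bcache0/bcache/attach
    # /dev/bcache0 is then formatted and mounted as the Proxmox datastore for the VM image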
I will follow up on this thread a bit later (in the next few days) if I make any further progress.
I will send you a new release candidate (check your inbox) that has newer LSI drivers (Dell OEMs LSI) and that I would like you to try. Let me know if it works for you.
Hi, I don't have physical access to the server; it is a leased "server as a service" in a remote datacentre. So I think I am stuck; I believe this hardware won't work.
Oops, yes, I certainly do have remote IPMI / remote admin / remote virtual CD access. I am downloading the ISO now and will follow up on the thread after I have tested this installer.
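(For anyone else in the same situation: on these Supermicro boards the installer ISO can be attached as a virtual CD through the IPMI web interface, or, if I remember the syntax right, with Supermicro's SMCIPMITool; the IP, credentials and ISO path below are placeholders.)

    # Mount the installer ISO as virtual media on the BMC, then watch the install over iKVM
    SMCIPMITool 192.0.2.10 ADMIN PASSWORD vmwa dev2iso /path/to/dss-v7.iso
    # Detach the virtual CD again once the install is done
    SMCIPMITool 192.0.2.10 ADMIN PASSWORD vmwa dev2stop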
Hi Todd, just to confirm: I re-installed using the new ISO and this time things appear to be working perfectly. There are no more kernel crash messages on the console. I was able to configure the IP address on the server's public interface, get in via the WebUI, and activate a trial subscription, and things look awesome. The next step is to set up my second node, configure replication, and see how a 2-node HA NFS cluster works in this environment. :-)
Hello,
We have exactly the same problem. Can you please provide us with the needed ISO ASAP? We need to install for a client and cannot continue.
Thanks!
Regards,
Arjan