-
Hi Symm,
Eth2 is connected directly to a host server with a crossover cable. As it happens, this server is out of use at the moment and shut down; to further rule out problems with this connection I've disconnected the cable, but performance is still bad.
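For what it's worth, the kernel's per-interface counters would show whether that link was dropping frames before you unplugged it. A minimal sketch, assuming a Linux host (shown against the loopback interface lo here; substitute eth2 on the actual box):

```shell
# Print the header rows plus one interface's row from the kernel's counter
# table. The errs/drop columns growing over time point at a bad cable or a
# duplex mismatch; "lo:" here stands in for eth2 on the real host.
awk 'NR<=2 || $1 ~ /^lo:/' /proc/net/dev
```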
Thanks,
Jon
-
Just shut down the SAN and run a consistency check on the RAID and also memtest.
Both have come back clean...
I'll post on Monday with a performance update.
Cheers,
Jon
-
I've seen some tweaks in other posts for iSCSI; have you tried those?
It was something like this:
MaxRecvDataSegmentLength=65536
MaxBurstLength=16776192
MaxXmitDataSegmentLength=65536
I've also seen posts stating that the initiator and target need to be set to the same values.
From what you've posted from dmesg, I would suspect a network issue.
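On an open-iscsi initiator those live in /etc/iscsi/iscsid.conf; the names below are the open-iscsi spellings of the parameters above. A sketch, assuming a Linux initiator (the target side has its own equivalent settings):

```
# /etc/iscsi/iscsid.conf -- these are offered values; the target can
# negotiate them down, which is why a mismatch between initiator and
# target settings silently loses the tweak.
node.conn[0].iscsi.MaxRecvDataSegmentLength = 65536
node.conn[0].iscsi.MaxXmitDataSegmentLength = 65536
node.session.iscsi.MaxBurstLength = 16776192
```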
-
Each host server is connected to the SAN with a separate x-over connection, which put me off the idea of a network problem.
Performance is good again now; the SAN has been back in production for three days and is running great. It seems that the RAID parity was corrupt, and the consistency check has rectified this. I've scheduled one to run every weekend to avoid this happening again.
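If the scheduling ever needs to live outside the controller's own scheduler, a weekly cron job on a management host is one way to drive it. A sketch only: check_consistency.sh is a hypothetical wrapper around whatever CLI command kicks off the check on your controller.

```
# /etc/crontab -- run at 02:00 every Sunday (fields: min hour dom month dow)
0 2 * * 0  root  /usr/local/sbin/check_consistency.sh
```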
I still don't feel 100% secure that the SAN is OK, so I'm going to keep a close eye on it. I'll post back if I get any new info.
Thanks for all your help guys,
Jon
-
Just a quick question to everyone: I can't seem to use the megaraid program to get to the RAID for things like logs etc. I've read that you need to use the highest eth number for access; I've tried eth0 and eth3, but neither seems to work.
Does anyone know if a resolution to this issue is on the cards? It'd be great if I could access the RAID logs.
Cheers,
Jon
-
Have you tried to access the megaraid from the console?
-
Yes, I can access the CLI, and I've figured out the command to save the latest 100 log entries to a file... although I've no idea where this file is going. I don't seem to be able to get proper access to the file system to save it anywhere useful, like a USB drive or the RAID :confused:
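If you can get any shell on the box, searching by modification time right after running the export usually turns the file up. A sketch demonstrated on a scratch directory (on the SAN you'd point find at /tmp and /var, or at / as a last resort; raid_events.log is a made-up name):

```shell
# Recreate the situation: a file written somewhere, location unknown.
mkdir -p /tmp/log_hunt_demo
touch /tmp/log_hunt_demo/raid_events.log
# List regular files modified in the last 10 minutes under the search root.
find /tmp/log_hunt_demo -type f -mmin -10
```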
-
So it seems I was wrong...
After a week of good performance, the "load" on the SAN has gone up to a constant 6-8, causing poor performance on all of our virtual servers again.
It seems the consistency check made no difference after all; with memory also ruled out, I'm now a bit lost as to what to try next.
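One thing that helps narrow a load figure like that down: on Linux, load counts both runnable processes and processes stuck in uninterruptible I/O wait, so compare it to the core count before blaming the CPU. A quick sketch:

```shell
# A 1-minute load persistently above the core count means work is queuing;
# whether it's CPU or disk then shows up in vmstat's "b" (blocked on I/O)
# column, or in %iowait from iostat -x (sysstat package).
cores=$(getconf _NPROCESSORS_ONLN)
load1=$(cut -d' ' -f1 /proc/loadavg)
echo "1-min load: ${load1}, cores: ${cores}"
```

On the figures in the post (a constant 6-8), anything with fewer than six cores is saturated, and the next step is finding out whether the queued work is compute or disk wait.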
Anyone got any ideas?
Thanks for the help guys,
Jon