I moved the most important VMs to an additional SSD RAID 5 array and updated DSS V6 to "6.0up99.8101.8328 64bit", but even after that small update I still ended up with an old LSI driver version in V6 (from 2011 or so).
Is it now safe to re-enable CacheCade after updating the LSI firmware to the most current level?
Btw., if you confirm that the S200 box from xtivate.de is not certified, I will ask the guy why they sold me an S200 and 200.
We have hundreds of different builds in testing, so I am not sure about the build you have ("8328"), as we try to stay with the official releases; currently "7337" is still the latest build for V6. The V7 release that came out has the latest drivers; a small update for V6 would have to be created.

So far I have not seen issues with the latest firmware from LSI on either V6 or V7, so I would assume they have resolved it.

I did not see xtivate.de listed as one of the manufacturers that have completed certification with us: http://www.open-e.com/partners/certified-systems/
LSI have released a new firmware, 23.18.0.0014 (MR 5.8 P1), on Nov 26, 2013, and the release notes include the following fixes:
SCGCQ00525647 (DFCT) - CacheCade 2.0 with MR 5.7 fw performance is much lower than MR 5.6
SCGCQ00552847 (DFCT) - Data Integrity issue found with CacheCade when CacheCade disassociation is initiated after running IO
I note they have also removed MR 5.6 & 5.7 from their download archive (the problem one and the one preceding it). I resolved my problem by rolling back to MR 5.6, which seems to be stable with CacheCade write cache on.
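For anyone comparing versions before rolling forward or back, the installed firmware package can be checked with MegaCLI before and after flashing. A minimal sketch, assuming MegaCli64 is in the PATH and the controller is adapter 0 (the .rom filename is a placeholder, not a real download):

```shell
# Show the installed firmware package (look for the "FW Package Build" line).
MegaCli64 -AdpAllInfo -a0 | grep -i "FW Package Build"

# Flash a downloaded firmware image (placeholder filename; use the image
# that matches your controller model).
MegaCli64 -AdpFwFlash -f mr2108fw.rom -a0
```

Verifying the "FW Package Build" string against the release notes is the easiest way to confirm whether you are actually on MR 5.6, 5.7, or 5.8.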
Is anyone brave enough to try the new version and see if the problem is resolved in this? Not sure if I want to risk it on production servers after all the problems this caused the last time I upgraded.
We also have the problems described here: Open-E DSS V7, A-A failover, iSCSI, LSI with CacheCade and MR 5.8. Massive data corruption, correlating with write IO.
We'd like to set CacheCade to read-only as a first workaround. Any hint on how to do this?
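In case it helps: with MegaCLI, one apparent route (a sketch, not verified on DSS; the adapter, VD, and enclosure:slot numbers below are placeholders) is to delete the write-back CacheCade virtual drive and recreate it in write-through (WT) mode, which leaves it acting as a read cache only:

```shell
# List virtual drives to identify the CacheCade VD and the data VD it serves.
MegaCli64 -LDInfo -LAll -aAll

# Delete the existing CacheCade virtual drive (assumed here to be L1 on adapter 0).
MegaCli64 -CfgCacheCadeDel -L1 -a0

# Recreate it from the same SSDs in write-through mode and re-assign it to the
# data VD (the enclosure:slot pairs and target -L0 are placeholders).
MegaCli64 -CfgCacheCadeAdd -r0 -Physdrv[252:4,252:5] WT -assign -L0 -a0
```

Since deleting a write-back CacheCade device while it may still hold dirty data is risky, this should only be attempted with IO quiesced; treat it as a direction to verify against the LSI MegaCLI documentation rather than a tested recipe.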