Todd -
What I will try is to switch over to IETd tonight (actually in the next couple of hours). I will force a failover to SAN2, enable IETd on SAN1, fail back, and then enable IETd on SAN2. The rough steps I have in mind are sketched below.
Hopefully this will be a seamless transition. Once it's moved over, do you want me to just let it run, or try those settings you suggested yesterday on the target side?
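For reference, here is roughly what I plan to run on each node after it gives up the targets. This is only a sketch assuming the stock IET init script name (iscsi-target) and config path (/etc/ietd.conf) on RHEL 5, and that the old target service just needs to be stopped and disabled; the actual failover itself is handled the usual way, not by these commands:

    # after forcing the failover away from this node
    chkconfig iscsi-target on          # enable IETd at boot (stock IET init script name)
    /etc/init.d/iscsi-target start     # start IETd; it reads its targets from /etc/ietd.conf
    # fail back, then repeat the same steps on the other SAN node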
As far as breaking the bond goes, that will be the tricky part, and I would like to save it for last.
Also, I have been doing some basic disk testing with CrystalDiskMark (Windows) and hdparm (Linux). The volume group on the external PERC controller (RAID 6, 8x 450 GB 15K SAS) averages ~110 MB/s sequential read and ~65 MB/s sequential write in Windows, and a buffered disk read of ~180 MB/s in Red Hat Enterprise Linux 5 x64 (hdparm -tT /dev/sda).
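On the Linux side these are the kinds of commands I'm using; the dd line is just a suggestion for how I could sanity-check the write numbers outside CrystalDiskMark (the /dev/sda device and /mnt/test path are placeholders for whatever volume is being tested):

    # buffered and cached read timing (this is what produced the ~180 MB/s figure)
    hdparm -tT /dev/sda

    # rough sequential write check, bypassing the page cache
    dd if=/dev/zero of=/mnt/test/ddtest.bin bs=1M count=4096 oflag=direct
    rm /mnt/test/ddtest.bin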
Now, the internal PERC controller (RAID 10, 6x 450 GB 15K SAS) gets a sluggish ~50 MB/s sequential read and ~30 MB/s sequential write, dipping as low as 5 MB/s. If I do a Storage vMotion to the external PERC, it gets a little better at ~95 MB/s sequential read and ~33 MB/s sequential write. I'm confused... all the controllers have the same setup except the PERC 5/E in SAN2 (refer to my diagram for reference).
Any thoughts? I thought I had some consistent results, but a couple of the servers that were dropping are on the external array, and it doesn't matter where I move them. If I've confused you, let me know and I'll try to explain it better.
Thanks again for all your help. I will send an update on the IETd switchover later tonight...