I have been testing various block and frame size settings to see what performs best, and I have run into write performance issues.
The setup:
DSS 3511 with a Qlogic QLE2462
XenServer 5 (Update 3) with a Qlogic QLE2462
Qlogic 5600 switch in between
I created a 10GB volume on DSS for each block size (512, 1024, 2048 and 4096). I attached each as a Storage Resource to XenServer.
One at a time, I created a 5GB partition on each and attached it to a Win2k3 VM. The only one Win2k3 could see properly was the one with the 512 block size; all the others did not show the proper partition size and would not initialize. I also changed the FC frame size settings on both ends (512, 1024, 2048), but it made no difference.
I then proceeded to run READ tests. The best numbers were with a 512 frame size. Then I started WRITE testing, and this is where my trouble began: I would get a quick burst, then the test would lock up.
The next thing I did was a simple file copy: I copied 2GB of data to the partition. The copy went in short bursts and took forever to complete.
For comparison I attached the XenServer to a volume on our SANMelody SAN and ran the same set of tests. While the performance was less than what I was getting with DSS, the READ and WRITE operations worked perfectly.
So my conclusion, since I am using the exact same software and hardware for testing, is that something is wrong on the DSS side.
One other note -- I found that if I create an FC volume on DSS and attach it to the XenServer, then detach it from the XenServer and change something on DSS, like deleting a volume and creating another, I have to reboot both Xen and DSS for Xen to see the updated volumes correctly.
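For what it's worth, a full reboot may not be necessary after changing volumes on the array; on the XenServer side the FC bus can usually be rescanned from the console (dom0). A minimal sketch, assuming root on dom0 -- the host numbers and the SR UUID are placeholders, and note that this picks up newly presented LUNs, while entries for deleted volumes may still need to be cleaned up by hand:

```shell
# Ask each FC HBA to rescan its bus for newly presented LUNs.
# Host numbers vary -- list them with: ls /sys/class/scsi_host
echo "- - -" > /sys/class/scsi_host/host0/scan
echo "- - -" > /sys/class/scsi_host/host1/scan

# Then have XenServer re-probe the storage repository.
# <sr-uuid> is a placeholder; find the real UUID with: xe sr-list
xe sr-scan uuid=<sr-uuid>
```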
Any suggestions on how to fix the WRITE issue??
Mike
Are you using the "default" block size when you format the drives as NTFS? When I tested FC volumes with different block sizes set on the DSS interface, I manually set the block size in Windows as I was formatting them as NTFS. This was also with Windows 2003. Have you done anything like that?
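For reference, the NTFS allocation unit can be set explicitly at format time so it matches the block size chosen on the DSS side. A sketch from a Windows 2003 command prompt -- the drive letter is hypothetical, and /A should match the DSS block size you are testing:

```bat
REM Quick-format the attached volume as NTFS with a 4 KB allocation
REM unit, to line up with a 4096-byte block size on the DSS volume.
format E: /FS:NTFS /A:4096 /Q
```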
What RAID level are you using?
Can you try enabling cache and read-ahead on your RAID controller?
How much RAM do you have?
Is it locking up, or is the write fast in the beginning and then slowing down?
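One way to tell those apart might be to time a sequential write from the XenServer console (dom0), taking the Windows guest out of the loop entirely. A minimal sketch -- the target path is a placeholder and defaults to /tmp here only so the commands run as-is; point it at a file on the DSS-backed storage to test the real path:

```shell
# Hypothetical sanity check: write 64 MB sequentially and let dd report
# the rate. TARGET is a placeholder for a file on the DSS-backed volume.
TARGET="${TARGET:-/tmp/ddtest.bin}"
dd if=/dev/zero of="$TARGET" bs=1M count=64
# On the real LUN, add oflag=direct to bypass the dom0 page cache.
rm -f "$TARGET"
```

If dom0 shows the same burst-then-stall pattern, the problem is below the VM layer.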
Forgot to ask, or maybe you already tested this, but what about connecting directly without the switch? I know with the Datacore you didn't have to do this, but just a thought.