What build of V6 are you using? The latest is build 5087.
What firmware are you using on the RAID card? (The latest DSS build supports the latest firmware.)
Have any adjustments been made to the SCST parameters on each target?
Are you using jumbo frames? Do you have hardware that supports jumbo frames? (See the quick check at the end of this post.)
Have you tried these tests without the bond, or with fewer ports in the bond? Bonding doesn't always provide a performance boost.
Are you using write caching on the RAID?
The reason I mention these things is that they will all have an impact on performance. You should investigate making some adjustments in these areas.
Searching this forum and the Open-E knowledge base can also help: http://kb.open-e.com/
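On the jumbo frame question, a quick way to confirm that 9000-byte frames actually make it end to end is a non-fragmenting ping from the initiator to the DSS box. This is only a sketch - the interface name and the target address are examples, so substitute your own:

# confirm the MTU currently in effect on the storage NIC
ip link show eth2 | grep mtu
# 8972 bytes of payload + 28 bytes of IP/ICMP headers = 9000; -M do forbids fragmentation
ping -M do -s 8972 -c 4 <IP of the DSS box>

If any NIC or switch port in the path is not passing 9000-byte frames, the ping will fail rather than silently fragmenting.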
1./ 6.0up35.8101.4452 64-bit - the last time I tried to run an update, it didn't report any updates available.
2./ Whatever firmware was available about a year ago.
3./ I did change values on the targets and on the DSSv6 system via the CLI, but it made no difference. I can't remember exactly what they were, but I did a lot of tweaking about 6 months ago. The system then had to be left alone because too much critical stuff was running on it, so I'm giving it another look now.
4./ Yes, jumbo frames are enabled on the XenServers, using the following for bond0 and the eth2 & eth3 physical interfaces:
[root@Unix01 ~]# xe pif-param-set uuid=39e0a99f-216c-f887-bf79-069c185d0792 other-config:mtu=9000
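(To double-check that the setting has actually taken effect, the MTU can be read back on the host, roughly like this, using the same PIF UUID as above; I believe the PIF has to be unplugged/re-plugged or the host rebooted before other-config:mtu is applied:)

xe pif-param-list uuid=39e0a99f-216c-f887-bf79-069c185d0792 | grep -i mtu
ip link show eth2 | grep mtu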
5./ Yes, I tested without the bond, and also by removing all interfaces but one per system. I also tried replacing the Linksys switch with a Juniper 2200.
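(For what it's worth, when the bond is in place its mode and slave state can be checked from dom0 - a sketch only, assuming the hosts use the Linux bonding driver rather than the vSwitch and that the bond really is bond0:)

xe bond-list
cat /proc/net/bonding/bond0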
6./ Yes, write cache is enabled. One thing I was unsure of: if I go into hardware RAID in the web browser it shows it's set up, but if I then go to software RAID it looks like there is software RAID as well... I'm not sure whether that is just how DSS presents things, or whether the person who set it up made a mess and layered software RAID on top of the hardware RAID.
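(One way to settle that would be to check for md devices from a shell on the DSS box, assuming you can get console/shell access to it - again only a sketch:)

cat /proc/mdstat

If it lists active md arrays, software RAID really is running on top of the hardware RAID; if it only shows "unused devices", the volumes are sitting directly on the hardware controller.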
I get about 60 MB/s read and about 50 MB/s write... If I try the same test from a system with local disks I get about 90 MB/s, and from SAS disks I get about 128 MB/s.
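(In case it helps to reproduce the comparison, a simple sequential dd test keeps things consistent between the iSCSI LUN and the local disks - a sketch only; the mount point is an example, and the direct flags bypass the page cache so RAM doesn't inflate the numbers:)

# sequential write of a 4 GB file
dd if=/dev/zero of=/mnt/test/ddtest bs=1M count=4096 oflag=direct
# sequential read of the same file
dd if=/mnt/test/ddtest of=/dev/null bs=1M iflag=direct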