Surely I should get better speeds than this. I'm running XenServer 5.5 connecting back to an Open-E box that has 8 disks in RAID 6, exported over iSCSI. The tests below were run inside a VM.
vdb1:/home/icepick# time dd if=/dev/zero of=/tmp/1 count=500000
500000+0 records in
500000+0 records out
256000000 bytes (256 MB) copied, 5.75789 s, 44.5 MB/s
real 0m5.836s
user 0m0.360s
sys 0m5.108s
vdb1:/home/icepick# time dd if=/dev/zero of=/tmp/1 count=5000000
5000000+0 records in
5000000+0 records out
2560000000 bytes (2.6 GB) copied, 62.6363 s, 40.9 MB/s
real 1m2.717s
user 0m3.064s
sys 0m43.471s
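A note on the runs above: dd defaults to a 512-byte block size, so a lot of the time goes to per-syscall overhead rather than the storage path (visible in the high sys figures). A sketch of a more representative write test, using 1 MB blocks and forcing the data to disk before dd reports; the scratch path /tmp/ddtest is an assumption:

```shell
# Write 256 MiB in 1 MB blocks. conv=fdatasync makes dd flush the data
# to the device before printing its throughput figure, so the number
# reflects the iSCSI/disk path rather than the guest page cache.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=256 conv=fdatasync
```

With the default 512-byte blocks you are timing 500,000 tiny write() calls; with bs=1M the same volume of data goes out in 256 calls.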
The Open-E box has 4 GigE interfaces running in a bond, and all disks are connected to a 3ware card.
What build of V6 are you using? The latest is build 5087.
What firmware are you using on the RAID card? (latest DSS supports the latest firmware)
Have any adjustments been made to the SCST parameters, at each target?
Are you using jumbo frames? Do you have hardware that supports jumbo frames?
Have you tried these tests without the bond, or with fewer ports in the bond? Bonding doesn't always provide a performance boost.
Are you using write caching on the RAID?
The reason I mention these things is that they all have an impact on performance, so it's worth investigating adjustments in each of these areas. Searching this forum and the Open-E knowledge base can also help: http://kb.open-e.com/
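On the jumbo frames point, it's worth verifying them end to end, not just on the NIC: every hop (NIC, switch, target) has to pass 9000-byte frames. A sketch of such a check; the interface name eth2 and the address 192.168.0.10 are placeholders for your storage NIC and the DSS box's iSCSI IP:

```shell
IFACE=eth2            # placeholder: the storage-facing NIC
TARGET=192.168.0.10   # placeholder: the DSS box's iSCSI address

# Confirm the MTU actually took effect on the interface.
ip link show "$IFACE" | grep -o 'mtu [0-9]*'

# End-to-end check: 8972 data bytes + 28 bytes of ICMP/IP headers make
# a 9000-byte frame, and -M do forbids fragmentation, so this ping only
# succeeds if jumbo frames pass through every hop on the path.
if ping -c 3 -M do -s 8972 "$TARGET"; then
    echo "jumbo frames pass end to end"
else
    echo "jumbo frames NOT passing end to end"
fi
```

If the ping fails while the NIC MTU reads 9000, the switch or the target side is the usual culprit.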
1./ 6.0up35.8101.4452 64bit - the last time I tried to do an update it didn't say there were any updates.
2./ Whatever firmware was available about a year ago.
3./ I did change values on the targets and on the DSS V6 system in the CLI, but it made no difference. I can't remember what they were - I did a lot of tweaking about 6 months ago, then the system had to be left alone because too much critical stuff was running on it, so I'm giving it another look now.
4./ Yes, jumbo frames are enabled on the XenServers using the following for bond0 and the eth2 & eth3 physical interfaces:
[root@Unix01 ~]# xe pif-param-set uuid=39e0a99f-216c-f887-bf79-069c185d0792 other-config:mtu=9000
5./ Yes, I tested without the bond, and also by removing all interfaces but one per system. I also tried replacing the Linksys switch with a Juniper 2200.
6./ Yes, write cache is enabled. One thing I was unsure of: if I go into hardware RAID in the web browser it shows it's set up, but if I go to software RAID it seems there is software RAID too... I'm not sure if this is just how DSS presents things, or if the person who set it up mistakenly layered software RAID on top of the hardware RAID.
I get about 60 MB/s read and about 50 MB/s write. If I try this from a system with local disks I get about 90 MB/s, and from SAS disks I get 128 MB/s.
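For the read numbers it's worth making sure the guest page cache isn't inflating the figure. A sketch of a read test that writes a file, drops the cache (root only; silently skipped otherwise), and reads it back; the path /tmp/ddtest and the 256 MB size are assumptions:

```shell
# Create a 256 MiB test file and force it out to the storage.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=256 conv=fdatasync
sync

# Drop the page cache so the read below has to come from the iSCSI
# path, not RAM. Requires root; harmlessly skipped if not.
sh -c 'echo 3 > /proc/sys/vm/drop_caches' 2>/dev/null || true

# Timed read back; dd prints the throughput figure.
dd if=/tmp/ddtest of=/dev/null bs=1M
rm /tmp/ddtest
```

If the cached and uncached read figures differ wildly, earlier numbers may have been measuring memory rather than the array.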