iSCSI target.
Also turn off the bond.
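If you're not sure whether your storage NICs are bonded, a rough check from the XenServer console could look like the lines below (the xe CLI is assumed to be available; <bond-uuid> is a placeholder, and obviously double-check what you're destroying first):
# List any NIC bonds and the PIFs behind them
xe bond-list
xe pif-list params=uuid,device,MTU,network-name-label
# If the storage NICs are bonded, remove the bond so each NIC can carry a separate iSCSI path
xe bond-destroy uuid=<bond-uuid>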
Open a support case so we can see your system logs.
What model disk controller? Do you have Jumbo Frames enabled? Did you tweak the iSCSI target settings as described in various threads in this forum? I'm getting between 52 MB/s and 89 MB/s, depending on traffic, using your command line in a CentOS 5 VM with an LSI 9260 disk controller, RAID 10 SATA drives, Jumbo Frames, tweaked target settings, and MPIO (not bonded). I have about 32 VMs active under XenServer 6.0.
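If you want to confirm MPIO is actually giving you multiple paths (rather than one bonded link), a quick sanity check from the XenServer console could be something like this, assuming the standard open-iscsi and device-mapper-multipath tools that ship with XenServer:
# Show active iSCSI sessions -- with MPIO you should see one session per storage NIC / target portal
iscsiadm -m session
# Show the multipath map for the LUN; each path should be listed and active
multipath -ll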
Thanks for the answers. I contacted support and made the following changes:
MaxRecvDataSegmentLength=262144
MaxBurstLength=16776192
MaxXmitDataSegmentLength=262144
FirstBurstLength=65536
DataDigest=None
MaxOutstandingR2T=8
InitialR2T=No
ImmediateData=Yes
HeaderDigest=None
WThreads=8
I also enabled jumbo frames (MTU 9000). My disks are SEAGATE ST31000424SS (00069WK36GJR).
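For comparison, on the initiator side (XenServer uses open-iscsi) the roughly corresponding parameters live in /etc/iscsi/iscsid.conf. A sketch with the same values would look like the lines below; the exact parameter set varies by open-iscsi version, and target-only settings such as WThreads have no initiator counterpart, so treat this as illustrative only:
# /etc/iscsi/iscsid.conf -- initiator-side equivalents of the target tweaks above
node.session.iscsi.InitialR2T = No
node.session.iscsi.ImmediateData = Yes
node.session.iscsi.FirstBurstLength = 65536
node.session.iscsi.MaxBurstLength = 16776192
node.conn[0].iscsi.MaxRecvDataSegmentLength = 262144
node.conn[0].iscsi.HeaderDigest = None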
Do you have any additional XenServer configuration?
What version do you have, paid or free?
What build of XenServer are you running? Is it the latest? Your Xen build might have an issue with your NIC drivers; did you check for that?
Are you using any special switch?
We're using a Dell 6248 with Gigabit ports. If you're using Jumbo Frames, you need to make sure they're supported by your switch and that your packets are not getting fragmented. Use a test like this from your XenServers to your DSS server's iSCSI IPs: ping -M do -s 8972 -c 10 10.10.10.1 (where 10.10.10.1 is your DSS IP address). If the packets are fragmenting, the output will tell you; if they are OK, you will get 10 normal-looking ping responses.
Your XenServer NICs have to have MTU=9000 set, and the same on the DSS side. On the Dell 6248 we have to set each port we want to run Jumbo Frames on to an MTU of 9016, which kicks it into Jumbo Frame mode. Check the docs for your switch to see if there is anything you have to do to enable Jumbo Frames.
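One way to check (and, if needed, set) the MTU on the XenServer side is sketched below, assuming the xe CLI and with <storage-network-uuid> as a placeholder for your storage network; the change may need the PIF replugged or a host reboot to take effect:
# Check the current MTU on the physical NICs and their networks
xe pif-list params=uuid,device,MTU,network-name-label
ip link show
# Set MTU=9000 on the storage network object so the attached PIFs pick it up
xe network-param-set uuid=<storage-network-uuid> MTU=9000
# Re-run the non-fragmenting ping test afterwards
ping -M do -s 8972 -c 10 10.10.10.1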