-
Slow.......
Hello,
We have a XenServer 6.2 setup with all patches applied, redundant switches, an active/active Open-E cluster, and a 4x1Gbit multipath configuration.
(XenServer 4x1Gbit > 2 switches, 2x2 paths > Open-E node with 4x1Gbit NICs)
In multipath.conf we have:
device {
    vendor "SCST_FIO|SCST_BIO"
    product "*"
    path_selector "round-robin 0"
    path_grouping_policy multibus
    rr_min_io 100
}
In the statistics of the two Open-E nodes we see between 100-180Mbit of traffic per network card, and each node has four of them. We also experience slow performance during backup windows.
Shouldn't the Open-E nodes' statistics show 4 cards each carrying 800-900Mbit of traffic? It feels as if the XenServer 6.2 hosts are pushing only about 1Gbit total spread over the 4 links (4 cards on Open-E times ~200Mbit max = ~800Mbit).
Can someone please enlighten us?
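One way to see what each XenServer NIC is actually carrying is to sample the byte counters in /proc/net/dev during heavy I/O. A minimal sketch (eth0-eth3 are example names; substitute your storage NICs):

```shell
# Sample per-NIC RX byte counters twice, 5 seconds apart, and print Mbit/s per interface.
# eth0..eth3 are example interface names -- adjust to your storage NICs.
cat /proc/net/dev > /tmp/dev.1
sleep 5
cat /proc/net/dev > /tmp/dev.2
for nic in eth0 eth1 eth2 eth3; do
    b1=$(awk -v n="$nic:" '$1 == n {print $2}' /tmp/dev.1)
    b2=$(awk -v n="$nic:" '$1 == n {print $2}' /tmp/dev.2)
    [ -n "$b1" ] && echo "$nic rx: $(( (b2 - b1) * 8 / 5 / 1000000 )) Mbit/s"
done
```

If round-robin is really engaging, all four interfaces should show similar rates under load; one busy NIC and three near-idle ones means I/O is effectively going down a single path.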
As an addition: we never saw more than 100MB/s disk throughput in an uncached dd test.
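For reference, an uncached sequential read against the multipathed device can be done with direct I/O so the host page cache is bypassed. A sketch (the `/dev/mapper` name is a placeholder; take the real map name from `multipath -ll`, and only read from it, never write):

```shell
# Uncached sequential READ from the multipathed iSCSI device (read-only, safe).
# "36001405example" is a placeholder map name -- substitute your own from multipath -ll.
dd if=/dev/mapper/36001405example of=/dev/null bs=1M count=2048 iflag=direct
```

Note that with `rr_min_io 100`, the round-robin selector sends 100 consecutive I/Os down one path before switching, so a single dd stream will often show roughly single-link throughput; running several dd streams in parallel, or lowering `rr_min_io` (e.g. to 8-16), is a common way to check whether the aggregate bandwidth is actually available.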
-
You might try tuning the targets themselves:
1. From the console, press CTRL+ALT+W.
2. Select Tuning options -> iSCSI daemon options -> Target options.
3. Select the target in question.
4. Change the values to the maximum required data size (check the initiator side so the values match; on Linux with open-iscsi these live in /etc/iscsi/iscsid.conf).
MaxRecvDataSegmentLength=262144
MaxBurstLength=1048576
MaxXmitDataSegmentLength=262144
FirstBurstLength=65536
DataDigest=None
MaxOutstandingR2T=8
InitialR2T=No
ImmediateData=Yes
HeaderDigest=None
Wthreads=8 (deprecated)
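On a Linux initiator running open-iscsi, the matching negotiation parameters would be set in /etc/iscsi/iscsid.conf; a sketch with the values above (restart iscsid and re-login the sessions after changing them):

```
# /etc/iscsi/iscsid.conf -- initiator-side values matching the target settings above
node.session.iscsi.InitialR2T = No
node.session.iscsi.ImmediateData = Yes
node.session.iscsi.FirstBurstLength = 65536
node.session.iscsi.MaxBurstLength = 1048576
node.conn[0].iscsi.MaxRecvDataSegmentLength = 262144
node.conn[0].iscsi.MaxXmitDataSegmentLength = 262144
node.conn[0].iscsi.HeaderDigest = None
node.conn[0].iscsi.DataDigest = None
```

The negotiated result of each login is the minimum of what both sides offer, so raising only the target side has no effect if the initiator still advertises smaller values.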
Also for best performance, attach only a single LUN per target.