Flow control won't help. Jumbo frames may increase throughput for some workloads (a quick way to measure that is sketched below).
What version of Open-E are you using?
Are you using SW RAID or HW RAID?
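Before turning on jumbo frames everywhere, it can be worth measuring. Below is a minimal raw-TCP throughput probe in Python, offered as a sketch only: run it once at the default MTU and again after setting MTU 9000 on both NICs and the switch, then compare. The HOST/PORT values and transfer sizes are placeholders, not anything from this thread.

```python
# Minimal TCP throughput probe: run "python probe.py server" on the target,
# then "python probe.py" on the initiator. HOST, PORT, CHUNK, and TOTAL are
# arbitrary values chosen for this sketch.
import socket
import sys
import time

HOST, PORT = "192.168.1.10", 5001   # hypothetical storage-network address
CHUNK = 1 << 20                      # 1 MiB per send
TOTAL = 1 << 30                      # move 1 GiB per run

def serve():
    with socket.create_server((HOST, PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            while conn.recv(CHUNK):   # drain everything the sender pushes
                pass

def send():
    payload = b"\0" * CHUNK
    sent = 0
    start = time.monotonic()
    with socket.create_connection((HOST, PORT)) as s:
        while sent < TOTAL:
            s.sendall(payload)
            sent += CHUNK
    secs = time.monotonic() - start
    print(f"{sent / secs / 1e6:.1f} MB/sec over {secs:.1f}s")

if __name__ == "__main__":
    serve() if sys.argv[1:] == ["server"] else send()
```

If the printed MB/sec doesn't move between MTU 1500 and MTU 9000, jumbo frames aren't the bottleneck for that workload.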
Using version 6.0up13.8101.4023, 64-bit.
Using HW RAID.
Is that graph normal, though?
And why are they getting different speeds?
Thanks for the reply.
Anyone there?
Bump.
My guess is that you might be seeing some sort of interaction with Open-E's cache scheduling. With write-back caching, it should queue up writes in RAM, then flush them periodically using disk-ordered writes. The more RAM you have, the more efficient this will be.
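To make that write-back description concrete, here is a toy Python sketch of the behaviour: writes land in a RAM-resident dirty map, and a flush walks the dirty blocks in ascending block order so the disk sees near-sequential I/O. This is purely illustrative, not Open-E's actual implementation; the class names and parameters are invented.

```python
class BlockDevice:
    """Stand-in backing store that just counts physical writes (illustrative)."""
    def __init__(self):
        self.writes = 0
    def write(self, block, data):
        self.writes += 1

class WriteBackCache:
    def __init__(self, backing, cache_blocks=1024):
        self.backing = backing            # anything with a write(block, data) method
        self.cache_blocks = cache_blocks  # more RAM -> fewer, larger flushes
        self.dirty = {}                   # block number -> most recent data

    def write(self, block, data):
        self.dirty[block] = data          # rewriting a hot block is absorbed in RAM
        if len(self.dirty) >= self.cache_blocks:
            self.flush()                  # cache full: forced, less efficient flush

    def flush(self):
        # Walk dirty blocks in ascending order so the disk sees sequential I/O.
        for block in sorted(self.dirty):
            self.backing.write(block, self.dirty[block])
        self.dirty.clear()

disk = BlockDevice()
cache = WriteBackCache(disk, cache_blocks=8)
for i in [5, 1, 5, 3, 5, 7, 2, 5, 6, 4]:    # 10 logical writes, block 5 rewritten often
    cache.write(i, b"x")
cache.flush()
print(disk.writes)                           # 7 physical writes, issued in disk order
```

The point of the model: the larger the cache, the more rewrites of hot blocks get absorbed before any physical write happens, and the more sequential each flush becomes.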
I'm currently testing Open-E with VMware and getting up to 975MB/sec throughput. That is for data in Open-E's cache; my physical disk throughput is limited by the 3Gb/s SAS interface to my MD1000.
Here's my configuration:
Dell MD1000 storage enclosures
Dell 1950 server for Open-E
32GB RAM
PERC 6/E with 512MB cache
Fujitsu XG700-CX4 switch
Supermicro AOC-STG-I2 10GbE NICs
Jumbo frames
VMware iSCSI
Prior to jumbo frames, I was getting 750MB/sec.
Over 1Gb Ethernet, my throughput is 110MB/sec.
The fact that you are stuck around 25MB/sec makes me suspect that you are bottlenecked by a plain PCI interface (roughly 250MB/sec at best, shared across the bus). Try PCIe or PCI-X instead. I'm not sure what interface the 2850's motherboard connections use, but my 1950's onboard NICs and SAS interfaces are on fast PCIe links. Or perhaps you are using a PCI add-in card?
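The bus arithmetic behind that suspicion is easy to sanity-check. Here is a rough sketch using nominal ceilings for the interfaces mentioned in this thread; these are back-of-the-envelope figures, and real throughput lands below them after protocol overhead:

```python
# Nominal throughput ceilings for the links and buses discussed above.
GBIT = 1000 / 8            # 1 Gbit/s expressed in MB/sec (decimal)

ceilings = {
    "1GbE":               1 * GBIT,        # 125 MB/sec raw; ~110 MB/sec in practice
    "10GbE":             10 * GBIT,        # 1250 MB/sec raw
    "SAS 3Gb/s lane":     3 * GBIT * 0.8,  # 8b/10b encoding leaves ~300 MB/sec
    "PCI 32-bit/33MHz": 133,               # shared by every device on the bus
    "PCI 64-bit/33MHz": 266,
    "PCIe 1.0 x1":      250,               # per lane, per direction
    "PCIe 1.0 x8":  8 * 250,               # what a 10GbE card typically wants
}

for name, mb in ceilings.items():
    print(f"{name:>18}: {mb:6.0f} MB/sec ceiling")
```

At those ceilings, 110MB/sec is simply a saturated 1GbE link and 975MB/sec requires 10GbE sitting on a wide PCIe slot, while a 25MB/sec plateau sits far below all of them, which is why a slow or shared legacy bus is worth ruling out first.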