Hello!
I'm having some problems with I/O performance on our Hyper-V cluster.
We have 3 hosts, each running 3 guests, connected to our DSS V6.
Each host has a direct 1Gbps connection (crossover cable) to a NIC on the DSS. The DSS has 4GB of RAM (although on the Open-E status page it doesn't look like this is all being used) and 16 x 1TB SATA drives connected to an LSI 3Gbps RAID card in RAID 6. The VG is exposed as a single block I/O LUN.
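As a rough sanity check on what that array can deliver, here's the back-of-envelope I've been doing in Python (the per-spindle IOPS figure and the RAID 6 write penalty are textbook estimates, not measurements from our kit):

spindles = 16
iops_per_spindle = 75            # typical estimate for a 7.2k RPM SATA drive
raid6_write_penalty = 6          # read data + P + Q, then write data + P + Q

raw_iops = spindles * iops_per_spindle              # ~1200
random_write_iops = raw_iops / raid6_write_penalty  # ~200

print(f"raw IOPS: {raw_iops}, random-write ceiling: {random_write_iops:.0f}")

If those estimates are in the right ballpark, nine guests are sharing roughly 200 random-write IOPS, so queueing on the spindles alone could push latencies well past 0.020 sec before the network even comes into it.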
When I look at my guest servers in Perfmon, I'm seeing around 0.250 sec/read (and the same for writes). I understand that anything higher than 0.020 is regarded as poor performance.
The guest servers do feel pretty slow, which is what led me to investigate.
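In case it matters how I'm collecting the numbers: I normally read the counters in Perfmon, but the quick Python wrapper below around Windows' typeperf should show the same thing and is easier to leave running (the counter paths assume an English locale):

import subprocess

# Sample average disk latency once per second, 30 samples.
counters = [
    r"\LogicalDisk(_Total)\Avg. Disk sec/Read",
    r"\LogicalDisk(_Total)\Avg. Disk sec/Write",
]
result = subprocess.run(
    ["typeperf", *counters, "-si", "1", "-sc", "30"],
    capture_output=True, text=True,
)
print(result.stdout)  # CSV output: timestamp, sec/read, sec/write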
Would switching the DSS to a single 10Gbps NIC and putting in a 10Gbps switch for the hosts be the only thing that could solve this?
Or can anyone offer any tuning or config advice that would avoid the huge investment in 10Gbps network equipment?
Many thanks for all of your help,
Jonathon
P.S. More info is available on request... just let me know what you need (although it's a live system, so I can't interrupt services).