We have just moved to ESX 4 and not found any problems as yet.
2 ESX servers running 5 VMs and counting.
The only niggle I can see is that large file copies from a VM guest to an iSCSI target are not so fast.
We're just setting up a new iSCSI SAN with Open-E DSS. We're evaluating different iSCSI software options and want to use Open-E DSS.
I have a few questions. Our setup:
vSphere 4 Server with 1x Quad Port Intel
Open-E DSS Server with 1x Quad Port Intel (latest update, installed today)
Dell 6224 Switch
I just set up an LACP group (channel-group 1 mode auto) for the ESX 4 host and another LACP group (channel-group 2 mode auto) for the DSS.
In DSS I chose "New 802.3ad" for the bonding mode.
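For reference, a sketch of what the switch side might look like. The port ranges and group numbers below are made-up examples (adjust to your cabling), and the exact range syntax can differ by firmware revision; on the PowerConnect 6224, "mode auto" is the LACP setting.

```
! Hypothetical port assignments - adjust to your cabling.
! On the PowerConnect 6224, "channel-group N mode auto" negotiates LACP.
! LAG to the ESX host:
interface range ethernet 1/g1-1/g4
channel-group 1 mode auto
exit
! LAG to the DSS box:
interface range ethernet 1/g5-1/g8
channel-group 2 mode auto
exit
```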
How do I configure the vSwitch? Is there a screenshot of this somewhere? I tried "Route based on IP hash", but the performance is very bad, something like 3 MB/s read.
Can you provide some more detailed information about how your networking is set up? For example, do you have a vSwitch set up just for the VMkernel iSCSI port? Does it use two physical ports of your quad-port NIC? Are you using two ports on your DSS, or more?
My recommendation would be to start out with nothing fancy: no bonding on the VMware host, the DSS, or the switch. Just establish one path from VMware through the switch to the DSS and make sure you have reliable iSCSI traffic flowing through it. You should be able to get raw I/O of at least 80-100 MB/s on a gigabit port (the theoretical maximum is 125 MB/s). Even if you bond two ports, you probably won't ever get over 120 MB/s from one VMware iSCSI initiator to one DSS, since the packets still go over only one physical connection at a time. Bonding connections works great for creating redundant paths but won't give you more bandwidth. (Networking experts, please correct me if I'm wrong on this.)
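For what it's worth, the 125 MB/s figure is just the gigabit line rate divided by 8 bits per byte; real iSCSI throughput comes in lower once Ethernet/TCP/iSCSI framing overhead is subtracted. A quick sanity check of the arithmetic:

```shell
# Gigabit Ethernet line rate in bits per second.
line_rate_bits=1000000000
# 8 bits per byte, 10^6 bytes per MB -> theoretical ceiling in MB/s.
echo $(( line_rate_bits / 8 / 1000000 ))   # prints 125
```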
So, once you've got your iSCSI pipe working, you can then set up your bonds.
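One simple way to check that single path is a raw sequential read from the ESX console, bypassing the guest entirely. This is just a sketch: the device path below is a placeholder, so list your real paths with esxcfg-mpath -l first.

```
# Placeholder LUN path - substitute the device shown by esxcfg-mpath -l.
# Reads 1 GB sequentially from the raw iSCSI LUN and reports throughput.
dd if=/vmfs/devices/disks/<your-vmhba-path> of=/dev/null bs=1M count=1024
```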
We've definitely set up a vSwitch just for the iSCSI traffic. We're using 4 NICs for iSCSI on the ESX, 4 NICs for iSCSI on the DSS, and 1 for management on both the DSS and the ESX.
How do we get more than the theoretical maximum of 125 MB/s into one VM? MPIO? How do we set up ESX for that: is every iSCSI NIC its own vSwitch? I tried that already and got 3 MB/s read.
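Nobody in the thread confirms this for the setup above, but the usual ESX 4 approach to MPIO with the software initiator is VMkernel port binding plus a round-robin path policy. A sketch with placeholder names (vmk1/vmk2, vmhba33, and the device ID are examples, not this environment's actual names):

```
# Bind each iSCSI VMkernel port (vmk1, vmk2 here) to the software
# iSCSI adapter (vmhba33 here); run on the ESX console.
esxcli swiscsi nic add -n vmk1 -d vmhba33
esxcli swiscsi nic add -n vmk2 -d vmhba33
esxcli swiscsi nic list -d vmhba33
# Switch the path selection policy for a LUN to round robin:
esxcli nmp device setpolicy --device <naa.id> --psp VMW_PSP_RR
```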
We seem to be experiencing the same issues at a customer's site. The vSphere 4.0 and DSS V5 combination is common among the other replies here. I have opened a support request for this.
Our reseller provided us with DSS V5 even though we got the unit in August, when DSS V6 should have been released. Is there any way we can get the upgrade from DSS V5?
DSS V6 works very well with vSphere 4. I have been running solid on DSS V6 for 2-3 weeks now, which I was never able to do with DSS V5.
I still get connectivity drops on various LUNs under really high I/O with V6, but they recover within seconds. This is a major improvement over V5, where the LUNs actually had to be dropped and recreated for the ESX to pick them up again.
SCST seems to be where the problem was fixed; IET (the iSCSI Enterprise Target) seemed to be the root of the problem in V5.
This is correct: the new DSS V6 uses SCST, while DSS V5 uses IET and was certified with VMware 3.5. We are certifying DSS V6 with VMware 4.0, but not DSS V5. I would contact your reseller to inquire about the upgrade to DSS V6 to get SCST.