OK, I'll give you a bit more information on my set-up and hopefully it will enlighten you all!
I have 3 x ESX hosts, each with 6 NICs: 2 dedicated to LAN traffic and 4 to iSCSI. My DSS boxes have the same configuration, although only 1 NIC is currently connected to the LAN, for management purposes.
iSCSI traffic is kept completely separate from the LAN. At present one of the DSS boxes (just the one while I'm testing!) is connected to the iSCSI switch via 4 NICs in an 802.3ad bond, and 4 ports on the switch are aggregated together for this purpose.
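(For anyone curious about the bond itself: DSS sets this up through its web GUI, but the equivalent 802.3ad/LACP bond on a plain Linux box would look roughly like the below. The NIC names and IP are just placeholders, and the matching switch ports have to be configured for LACP or the bond negotiates down to a single link.)

    # load the bonding driver in 802.3ad mode (mode 4), checking link state every 100ms
    modprobe bonding mode=802.3ad miimon=100
    # give the bond its iSCSI-side address and bring it up (placeholder IP)
    ifconfig bond0 192.168.100.10 netmask 255.255.255.0 up
    # enslave the four iSCSI NICs (placeholder interface names)
    ifenslave bond0 eth2 eth3 eth4 eth5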
On the ESX hosts I have 4 VMkernel ports, each with a dedicated NIC, physically connected to the iSCSI switch and using MPIO with the Round Robin policy and IOPS=1.
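(In case it helps anyone reproduce the setup, the per-LUN commands look something like the below; this is ESX/ESXi 4.x syntax, the naa ID is a placeholder, and newer builds nest these under esxcli storage nmp.)

    # set the path selection policy for the LUN to Round Robin
    esxcli nmp device setpolicy --device naa.xxxxxxxx --psp VMW_PSP_RR
    # make Round Robin switch paths after every 1 I/O instead of the default 1000
    esxcli nmp roundrobin setconfig --device naa.xxxxxxxx --type "iops" --iops 1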
I do have a second iSCSI switch but haven't introduced it yet; I'm happy to split the VMkernel traffic across the two (i.e. 2 NICs per iSCSI switch) if it will boost performance.
I am currently getting around 200MB/s throughput using a 100% read test in IOmeter.
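For context, my rough maths on the ceiling: a GbE link carries 125MB/s raw, realistically ~110-115MB/s of iSCSI payload once TCP/IP and iSCSI overheads are taken out, so 4 links should top out somewhere around 440-460MB/s. 200MB/s therefore works out to roughly two saturated links' worth.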
I guess I'm wondering: can I ever expect to better this, or is it as good as it's going to get performance-wise?