I'm in the middle of a trial and seeing a huge performance issue that seems to be limiting throughput to 100 Mbps.
Initially I tried enabling jumbo frames (on ESXi, the switches, and in the Open-E tuning options). I had also set up a balance-rr bond, but to simplify things I reverted to a standard MTU and a single NIC, and I am seeing the same results.
I currently have 2 iSCSI vmkernel ports pointing at the Open-E server with Round Robin as the path policy, and I/O is evenly distributed across each NIC.
All of the network ports link at 1000 Mbps except the main management interface, which is 100 Mbps, but we are not connecting to that IP for iSCSI.
Any hints, tips or suggestions would be appreciated.
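For reference, here is how I have been checking the path distribution and link speeds from the ESXi shell (the device identifier below is a placeholder, not my actual LUN):

```shell
# List all NICs and confirm their negotiated link speeds:
esxcli network nic list

# List multipath devices and their path selection policies:
esxcli storage nmp device list

# Show path details for one LUN (naa.xxx is a placeholder):
esxcli storage nmp device list -d naa.6001405xxxxxxxxx
```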
Try MPIO with 2 NICs on the DSS side and your 2 dedicated vmkernel NICs. You're correct to keep the MTU at the default (jumbo frames off) for now. Also try making the following changes on the DSS side, but stop the VMs and reconnect from the ESX host once you make them.
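On the ESXi side, the MPIO setup can be sketched with esxcli roughly as below. The adapter and vmkernel names (vmhba33, vmk1, vmk2) and the device identifier are assumptions for illustration; substitute your own.

```shell
# Bind both dedicated iSCSI vmkernel ports to the software
# iSCSI adapter (vmhba33 is a placeholder):
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk1
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk2

# Set the path selection policy to Round Robin for the
# Open-E LUN (device name is a placeholder):
esxcli storage nmp device set --device naa.6001405xxxxxxxxx --psp VMW_PSP_RR

# Optionally lower the Round Robin IOPS limit so I/O alternates
# between the two paths more often (the default is 1000):
esxcli storage nmp psp roundrobin deviceconfig set \
    --device naa.6001405xxxxxxxxx --type iops --iops 1
```

Lowering the IOPS limit is optional and worth testing with your workload; it helps some sequential workloads saturate both links but is not a universal win.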
1. From the console, press CTRL+ALT+W.
2. Select Tuning options -> iSCSI daemon options -> Target options.
3. Select the target in question.
4. Change the values to the maximum required data size (check with the initiator to match).
Doing this will reset the iSCSI connections at each edit, so please pause any hosts connected
to the LUNs. These adjustments need to be made on each node in a failover configuration, and at the initiators as well.
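To match those target values on the initiator side, you can inspect and adjust the software iSCSI adapter's parameters with esxcli. A minimal sketch, assuming vmhba33 as the adapter name and an example value (not a recommendation):

```shell
# Show the initiator's iSCSI parameters, including
# MaxRecvDataSegLen, so you can match them to the DSS target:
esxcli iscsi adapter param get --adapter vmhba33

# If needed, set the initiator value to match the target
# (262144 is only an example value):
esxcli iscsi adapter param set --adapter vmhba33 \
    --key MaxRecvDataSegLen --value 262144
```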