I have two servers connected through a switch to an OpenE box on a dedicated iSCSI LAN, both at Gigabit.
One is a Windows Server 2008 machine, and I get good performance when copying files to the iSCSI volume (80-90 MB/s write).
The second is a Windows 2003 SBS machine, and its performance is really bad.
Both iSCSI targets have the same options on OpenE (C+A+W, iSCSI daemon options at the maximum possible values); jumbo frames are OFF.
When I copy data to my OpenE over SMB using the iSCSI LAN from the Windows 2003 machine, performance is really good, so it looks like only the iSCSI traffic from Windows 2003 is slow.
That also depends on the switches and how they cope with bonding. There are several posts about this "issue". Bonding was originally designed for fault-tolerance, not speed.
I couldn't get more performance out of a bond either, so I decided in the end to wire the server to my DSS through dedicated links, or in my case: run multiple links in different networks from my server to my DSS.
I am seriously starting to think about getting a pile of 10 GbE gear.
That's exactly the same setup as mine. I am running one of my DSS boxes on a Dell PE1750 with 2x onboard GbE and a PCI-X quad-port GbE card.
I do have an 802.3ad bond, and I have just configured a further NIC to connect the 3rd iSCSI volume to one of my Xserves, which also has 4 GbE ports.
Up to now, I have been connecting my 2 iSCSI volumes through the bond and never got more than 130 MB/s; now the 3rd iSCSI volume is connected through the additional link, and we'll see tomorrow how it works out when the next full backup is due.
Bonding only gives more potential bandwidth if you have many hosts and the result of the XOR function works in your favor. You are going to have to resort to using multiple links or higher-bandwidth links...
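To illustrate why a single initiator never sees more than one link's worth of bandwidth: the Linux bonding driver's default "layer2" transmit hash picks the slave link roughly from the XOR of source and destination MAC addresses, modulo the number of links. This is a simplified sketch of that idea (the MAC addresses here are made up for illustration), not the driver's exact code:

```python
def layer2_hash(src_mac: str, dst_mac: str, num_links: int) -> int:
    """Simplified layer2-style transmit hash: XOR the MACs,
    take the low byte, and map it onto one of the bonded links."""
    src = int(src_mac.replace(":", ""), 16)
    dst = int(dst_mac.replace(":", ""), 16)
    return ((src ^ dst) & 0xFF) % num_links

# Hypothetical MACs for a server and a DSS box:
server = "00:1e:c9:aa:bb:01"
dss    = "00:1e:c9:aa:bb:02"

# The hash depends only on the two MACs, so every packet of this
# server<->DSS conversation lands on the same physical link:
print(layer2_hash(server, dss, 2))  # → 1, and always the same value
```

Since the inputs never change for a given host pair, a 2-link 802.3ad bond still caps a single server at ~1 Gbit; only traffic from many different hosts spreads across the links, which is why dedicated per-volume links (or 10 GbE) are the way out.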