That also depends on the switches and how they cope with bonding. There are several posts about this "issue". Bonding was originally designed for fault tolerance, not speed.
I couldn't get more performance out of a bond either, so I decided instead to wire the server through dedicated links to my DSS, or in my case: run multiple links in different networks from my server to my DSS.
I am starting to seriously think about getting a pile of 10 GbE gear.
That's exactly the same setup as mine. I am running one of my DSS on a Dell PE1750 with 2x onboard GbE and a PCI-X QuadPort GbE.
I do have an 802.3ad bond, and I have just configured a further NIC to connect the 3rd iSCSI volume to one of my Xserves, which also has 4 GbE ports.
Up to now, I have been connecting my 2 iSCSI volumes through the bond and never got more than 130 MB/sec, which is barely above what a single GbE link can deliver (~125 MB/sec). Now I have the 3rd iSCSI volume connected through the additional link, and we'll see tomorrow how it works out when the next full backup is due.
Bonding only gives you more potential bandwidth if you have many hosts and the result of the XOR hash works in your favor. You are going to have to resort to multiple dedicated links or higher-bandwidth links...
Afaik, the problem is that both nodes would have to support that. XOR will not give any speed improvement, as one can easily grasp from the equation provided in the online help.
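To illustrate the point: the Linux bonding documentation describes the default layer2 transmit hash as (source MAC XOR destination MAC) modulo slave count. A minimal sketch (with hypothetical MAC addresses) of why a single server-to-DSS pair never spreads across the slaves:

```python
def layer2_slave(src_mac: str, dst_mac: str, slave_count: int) -> int:
    """Pick the slave NIC index the way the layer2 XOR hash policy does:
    (source MAC XOR destination MAC) modulo slave count."""
    src = int(src_mac.replace(":", ""), 16)
    dst = int(dst_mac.replace(":", ""), 16)
    return (src ^ dst) % slave_count

# One server talking to one DSS: the MAC pair never changes, so every
# frame hashes to the same slave and the bond is no faster than one link.
server_mac = "00:1e:c9:aa:bb:01"   # hypothetical
dss_mac    = "00:1e:c9:aa:bb:02"   # hypothetical
print(layer2_slave(server_mac, dss_mac, 4))  # always the same index
```

Only with many different host pairs do the hashes scatter across the slaves, which is the "works in your favor" case mentioned above.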
I think for a speed bump one would have to choose balance-rr, but I couldn't convince OS X to support that. OS X by default uses XOR itself.
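For comparison, a toy sketch of how balance-rr schedules packets: it simply rotates through the slaves packet by packet, which is why it can stripe even a single flow across all links (at the cost of possible out-of-order delivery), where the XOR hash cannot:

```python
from itertools import cycle

def rr_schedule(packet_count: int, slave_count: int) -> list[int]:
    """Round-robin slave assignment, packet by packet (balance-rr style)."""
    slaves = cycle(range(slave_count))
    return [next(slaves) for _ in range(packet_count)]

print(rr_schedule(8, 4))  # [0, 1, 2, 3, 0, 1, 2, 3]
```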
I have now prepared my multiple-link scenario and I will run some tests tomorrow.