That also depends on the switches and how they cope with bonding; there are several posts about this "issue". Bonding was originally designed for fault tolerance, not speed, and with 802.3ad each TCP session gets hashed onto a single slave link, so one iSCSI connection never runs faster than one physical port.
I couldn't get more performance out of a bond either, so I decided to wire the server to my DSS through dedicated links instead, or in my case: multiple links, each in its own network, from my server to my DSS.
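In case it helps, this is roughly what that looks like on a Debian-style Linux initiator; the interface names, addresses and the jumbo-frame setting are just placeholders, adjust them to your own network (on an Xserve you would do the equivalent in the Network preference pane):

    # /etc/network/interfaces -- two dedicated iSCSI links, each in its own subnet
    auto eth1
    iface eth1 inet static
        address 10.0.1.10
        netmask 255.255.255.0
        mtu 9000          # jumbo frames, only if the DSS and the switch ports support them

    auto eth2
    iface eth2 inet static
        address 10.0.2.10
        netmask 255.255.255.0
        mtu 9000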
I am seriously starting to think about getting a pile of 10 GbE gear.
That's exactly the same setup as mine. I am running one of my DSS boxes on a Dell PE1750 with 2x onboard GbE and a PCI-X quad-port GbE card.
I do have an 802.3ad bond, and I have just configured a further NIC for connecting the 3rd iSCSI volume to one of my Xserves, which also has 4 GbE ports.
Up to now I have been connecting my 2 iSCSI volumes through the bond and never got more than 130 MB/sec. Now the 3rd iSCSI volume is connected through the additional dedicated link, and we'll see tomorrow how it works out, when the next full backup is due.
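For reference, an 802.3ad bond like the one above would look roughly like this on a Debian-style Linux box with ifenslave installed; the DSS and the Xserve have their own configuration frontends for this, so treat the names and addresses as placeholders:

    # /etc/network/interfaces -- 802.3ad (LACP) bond; the switch ports must be set up for LACP too
    auto bond0
    iface bond0 inet static
        address 10.0.0.10
        netmask 255.255.255.0
        bond-slaves eth1 eth2
        bond-mode 802.3ad
        bond-miimon 100
        bond-xmit-hash-policy layer3+4   # still per-flow: a single iSCSI session stays on one slave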