If you place a single gigabit card in both the primary and secondary, then with a fast enough disk array at each end, you will easily saturate the gigabit ethernet connection.
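To put some rough numbers on that (a back-of-envelope Python sketch; the overhead and array figures are my own assumptions, not measurements):

# Usable throughput of a single gigabit link, roughly.
line_rate_bps = 1_000_000_000          # raw GigE line rate
efficiency = 0.93                      # assumed TCP/IP + Ethernet framing efficiency
usable_mb_s = line_rate_bps * efficiency / 8 / 1_000_000
print(f"usable link: ~{usable_mb_s:.0f} MB/s")      # ~116 MB/s

# Even a modest 4-disk RAID10 can stream faster than that (250 MB/s is
# an assumed, illustrative figure), so the link is the bottleneck.
array_mb_s = 250
print("link saturated:", array_mb_s > usable_mb_s)  # True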
I have two DSS servers (one FULL, one running a trial at the moment), with different hardware RAID controllers in each:
Primary:
Single Dual-Core Intel 2.3GHz CPU, 4GB RAM
4 x 300GB SAS 15Krpm + PERC6 H/W RAID10 forming a single 550GB volume.
2 x 1TB SATA 7200rpm + Software RAID1 forming a single 1TB volume.
Secondary:
Dual Single-Core AMD 2.4GHz CPU, 2GB RAM
4 x 1TB SATA 7200rpm + 3Ware 9550 SATA2 RAID1 forming two 1TB volumes.
Replicating two volumes on the primary to two volumes on the secondary array, I saturate the gigabit link between them in Asynchronous Replication mode, with each task capped at 60MB/s during the initial synchronisation. Two tasks at 60MB/s is 120MB/s combined, which is right at the limit of what a gigabit link can actually carry once protocol overhead is taken off.
So this should give you some confidence that great performance is possible with relatively cheap hardware.
If you have the finances to go to 10Gbit ethernet then sure, go ahead, but it's unlikely you will ever need this much bandwidth in your replication channel... unless you are Google.
Bonding will only give you redundancy, not extra throughput, unless you have more than one replication task running at any one time: each task is a single network stream, and in the usual bonding modes any single stream travels over just one of the bond's links (see the sketch below).
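Here is a rough illustration of why (plain Python, nothing to do with DSS internals; the hash and the header fields are stand-ins for whatever policy the bonding driver actually uses):

import zlib

slaves = ["eth0", "eth1"]

def pick_slave(src_ip, dst_ip, src_port, dst_port):
    # Link aggregation hashes each flow's header fields and sends the
    # whole flow down ONE slave link.
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    return slaves[zlib.crc32(key) % len(slaves)]

# One replication task = one TCP stream = one physical link:
print(pick_slave("10.0.0.1", "10.0.0.2", 51000, 3260))
# A second concurrent task is a separate flow, so it may land on the
# other link, and only then does the bond add any bandwidth:
print(pick_slave("10.0.0.1", "10.0.0.2", 51001, 3260))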
We use Asynchronous Replication since we are not using any form of failover. This has the added advantage of allowing us to keep the iSCSI LUNs we have in "Write Back" cache mode. When failover mode is used, the "Write Back" cache is disabled and Synchronous Replication must be used instead.
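In simplified terms the two write paths look something like this (my own toy model in Python, not Open-E's code):

class Volume:
    def __init__(self):
        self.blocks = []
    def write(self, block):
        self.blocks.append(block)

def sync_write(block, local, remote):
    local.write(block)
    remote.write(block)        # the ack waits on the secondary, which
    return "ack"               # is why "Write Back" caching has to go

def async_write(block, local, backlog):
    local.write(block)         # acked as soon as it is on the primary,
    backlog.append(block)      # so "Write Back" mode stays safe; the
    return "ack"               # secondary catches up in the background

primary, backlog = Volume(), []
print(async_write(b"data", primary, backlog))   # "ack" before the secondary sees it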
Best regards
TFZ
If it can go wrong, it generally will!