The hardest thing to wrap your head around with bonding is that it doesn't multiply your bandwidth. If you have 4 gigabit cards bonded together you will NOT get 4 gigabit of throughput on a single connection. You get load balancing across all 4 cards, so any one stream will never go faster than 1 gigabit, but you can have simultaneous traffic across all 4 links.
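Here's a rough Python sketch of why that happens, assuming a layer3+4-style transmit hash like most bonding drivers use (this is illustrative only, not the actual Linux bonding code, and the IPs/ports are made up):

import zlib
from collections import Counter

def pick_slave(src_ip, dst_ip, src_port, dst_port, num_slaves):
    """Transmit hash: a given flow always maps to the same slave NIC."""
    key = f"{src_ip}-{dst_ip}-{src_port}-{dst_port}".encode()
    return zlib.crc32(key) % num_slaves

# Eight separate TCP flows from one host to the SAN (hypothetical addresses).
flows = [("10.0.0.5", "10.0.0.10", 49152 + i, 3260) for i in range(8)]

# Each flow is pinned to one of the 4 bonded NICs, so any single stream tops
# out at 1 gigabit; only multiple simultaneous flows spread across the links.
print(Counter(pick_slave(*flow, num_slaves=4) for flow in flows))

One flow always hashes to the same NIC, which is exactly why a lone file copy never sees more than 1 gigabit no matter how many cards are in the bond.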
Multipathing allows simultaneous access across 2 (or more) channels if you use the Least Queue Depth policy in MPIO, so it appears you're getting more bandwidth, but really you're splitting the load across the paths.
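A toy model of how Least Queue Depth splits the load (again just a sketch, not Microsoft's MPIO implementation; submit_io, complete_io, path_A and path_B are names I made up):

# Each new I/O goes to whichever path has the fewest outstanding requests,
# so both iSCSI sessions carry traffic at the same time.
outstanding = {"path_A": 0, "path_B": 0}

def submit_io(io_id):
    path = min(outstanding, key=outstanding.get)  # least-loaded path wins
    outstanding[path] += 1
    return path

def complete_io(path):
    outstanding[path] -= 1

for i in range(6):
    print(f"I/O {i} -> {submit_io(i)}")
    if i % 3 == 2:               # pretend path_A drains a bit faster
        complete_io("path_A")

The I/Os alternate between the two paths, which is why MPIO looks like extra bandwidth under load even though each individual request still travels over one link.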
We have dual 10G cards in our 2 DSSs and just a single dual-port Ethernet card dedicated to iSCSI in each server doing MPIO. We have 6 servers right now, never really strain anything, and get good performance under Hyper-V.