thanks - it was not pointed at you but at others. Trust me, we want this more than you can imagine and hope to have it by summer or sooner. Get ready, some nice surprises are coming soon. Can't say more, but you'll see.
I support a customer with an HA setup using multiple LVs as the backing store for their Hyper-V R2 cluster. Failover times are acceptable, and our last test cycles revealed no downtime during failover operations.
As with all things, your mileage will vary, but I think you'll be ok.
thanks @enealDC and @To-M.
One more question: I've seen documentation indicating that a bonded interface will not provide better performance with replication. I understand that a bond only multiplies bandwidth across multiple connections; does each replication task not create its own connection? Am I going to see better performance with 7x jobs across 2x GbE, or should I get a 1x 10GbE interface to speed this up?
TIA
Dave
You'll certainly want to go with 10GbE. Don't try using multiple NICs; it's just not a very scalable method, IMO.
Thanks -- I'll wait to drive a Ferrari too.... Just kidding.
I agree with your assessment, but the information I'm trying to get at is whether or not bonded NICs will increase performance at all in the case of multi-LV replication.
Basically -- is replication handled by one job (i.e., one TCP stream) or by multiple jobs (i.e., N TCP streams)? If multiple, then we can take advantage of multiple NICs, even if that's not an ideal solution. If one, then you're right that 10GbE is the only option...
Thanks again
Dave
Each replication job is independent of the others, so yes, it is multiple streams.
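Since each job is its own TCP connection, a bond in a hashing mode (e.g. balance-xor or 802.3ad with xmit_hash_policy=layer3+4) can spread the jobs across the slaves. Here's a rough Python sketch of the layer3+4 formula from the Linux bonding documentation, using made-up IPs and ports for 7 hypothetical replication jobs, just to show how the streams would land on 2 slaves:

```python
# Rough sketch of the Linux bonding "layer3+4" transmit hash
# (see Documentation/networking/bonding.txt). For unfragmented
# TCP/UDP the documented formula is:
#   ((src_port XOR dst_port) XOR ((src_ip XOR dst_ip) AND 0xffff)) mod slave_count
# All addresses and ports below are made up for illustration.

import ipaddress

def layer3_4_hash(src_ip: str, dst_ip: str, sport: int, dport: int, slaves: int) -> int:
    """Return the bond slave index this flow would be assigned to."""
    ip_xor = int(ipaddress.ip_address(src_ip)) ^ int(ipaddress.ip_address(dst_ip))
    return ((sport ^ dport) ^ (ip_xor & 0xFFFF)) % slaves

# 7 hypothetical replication jobs: same endpoints, distinct source ports.
DST = "10.0.0.2"   # assumed replication target
DPORT = 3260       # assumed destination port, illustrative only
jobs = [("10.0.0.1", DST, 40000 + i, DPORT) for i in range(7)]

for i, (sip, dip, sp, dp) in enumerate(jobs, 1):
    slave = layer3_4_hash(sip, dip, sp, dp, slaves=2)
    print(f"job {i}: {sip}:{sp} -> {dip}:{dp}  => slave eth{slave}")
```

The takeaway: with distinct source ports, the 7 jobs hash across both slaves, so a bond does help in this scenario. But any single job is still capped at one link's speed, which is why 10GbE remains the cleaner fix.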
A dedicated 10GbE link between two DSS servers is so affordable nowadays that I can finally start recommending it. A CX4 10GbE adapter will probably cost a wee bit more than, say, a 4-port copper NIC, and many Supermicro mobos have 10GbE built onto the platform. It's better for overall cable management, IRQ management, and performance.