After some research I decided to go with Open-E to build a redundant SAN for a small cloud environment.
I have a setup in mind, but I'm looking for confirmation that it makes sense and will work with Open-E before I make the purchase.
Goal: a fully redundant setup for a vSphere 5 environment.
Initial setup: 3 ESXi hosts and 2 SANs using iSCSI
First I wanted to connect the ESXi hosts to a switch with 2 x 1 Gbit links each and use MPIO.
But I would also need sufficient bandwidth for the replication between the two SANs.
Looking at some recent hardware, I've noticed that a setup with 10 GbE NICs wouldn't cost much more than several decent Gbit NICs.
There are some Supermicro servers that come with 10GBASE-T on board as well, and for the additional cards I would go for Intel X540-T2 (10GBASE-T) NICs.
Proposed wiring:
- Each SAN: 4 x 10GBASE-T NICs
- Each ESXi host: 2 x 10GBASE-T NICs (+ 2 x Gbit links for the regular network connections)
Each ESXi host would have one cable to the primary SAN and one cable to the secondary SAN.
For the replication, a direct connection between the two SANs.
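On the ESXi side, the two iSCSI paths above would be configured roughly like this. A hedged sketch for ESXi 5, not to be run as-is: the adapter name (vmhba33), vmkernel ports (vmk1/vmk2), target IPs, and the device ID are all example values you would replace with your own.

```shell
# Enable the software iSCSI initiator
esxcli iscsi software set --enabled=true

# Bind one vmkernel port per 10GBASE-T NIC to the iSCSI adapter,
# so each SAN is reached over its own physical path
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1   # cable to primary SAN
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2   # cable to secondary SAN

# Point the initiator at both SAN targets (example addresses)
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=10.0.10.1:3260
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=10.0.20.1:3260

# Use round-robin path selection on the shared LUN (example device ID)
esxcli storage nmp device set --device=naa.600140500000001 --psp=VMW_PSP_RR
```

Whether both paths should be bound to one adapter this way depends on how the Open-E failover presents the target (e.g. via a virtual IP), so treat this as the general shape rather than a final recipe.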
Some questions:
1) Is the proposed wiring OK to make it fully redundant?
2) I assume 10GBASE-T won't be a problem? (I prefer it over SFP+, and it seems more future-proof.)
3) How should I set up the ping nodes for this setup?
4) If I add extra ESXi hosts later, I would need an additional 10GBASE-T NIC in each SAN. Is there a limit on the number of NICs I can use in Open-E?
Using MPIO with multiple 1 Gbit links is a good way to go.
A 10 GbE connection for the replication is also advised.
In the future, when adding more ESXi hosts, you can easily add another 10 GbE link and use that instead of the MPIO connections.
There is no limit on the NIC configuration from our side; it's limited by the hardware more than anything.
Ping nodes should be equipment with 100% uptime that can be reached from each DSS V6: http://blog.open-e.com/ping-node-explained/
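To make the ping-node idea concrete, here is a minimal sketch of the decision it enables (this is an illustration, not Open-E's actual implementation): each node pings an always-up third device, and only takes over the peer's resources if that device is still reachable, so a node that lost its own network does not grab the storage. The address in PING_NODE is an example placeholder.

```shell
#!/bin/sh
# Hedged sketch of ping-node logic; PING_NODE is an example address
# (use a device with 100% uptime, e.g. your core switch or router).
PING_NODE="${PING_NODE:-192.0.2.1}"

# decide takes ping's exit status: 0 means the ping node answered
decide() {
    if [ "$1" -eq 0 ]; then
        echo "takeover"       # our network is fine, the peer really failed
    else
        echo "stay-passive"   # we may be isolated, do not grab resources
    fi
}

# In practice the status would come from something like:
#   ping -c 1 -W 1 "$PING_NODE" >/dev/null 2>&1; decide $?
decide 0   # prints "takeover"
decide 1   # prints "stay-passive"
```

This is why the ping node must itself be highly available: if it goes down, both SAN heads see an unreachable ping node and neither will take over.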