I forgot to add that the DSS box's IP is 10.0.0.2.
Server One has IPs of 10.0.1.1, 10.0.2.1, 10.0.3.1, 10.0.4.1.
Server Two is 10.0.1.2, 10.0.2.2, 10.0.3.2, 10.0.4.2.
In my simple mind, it would stand to reason that the DSS must have the same number of connections (network cards) as the servers have, i.e. four cards in the DSS with IPs 10.0.1.10, 10.0.2.10, 10.0.3.10, 10.0.4.10 — unless you are somehow achieving this by splitting up the 10 Gb connection into 4 VLANs?
With our MPIO implementation we effectively have 2 parallel physical connections between each server and the DSS. This logically scales to many more connections very easily and doesn't require a switch to be installed in between, which has the advantages of saving you money and removing a single point of failure (the switch) from the storage sub-system.
Cheers
TFZ
If it can go wrong, it generally will!
No VLANs. For another $500 I can add a second 10 Gb connection to my DSS box, but I've already got several thousand into this and don't really want to put more into it if I can help it. For the number of servers I have that are going to connect to this, direct connections to the DSS box were cost prohibitive. It was cheaper to buy the switch.
This morning I realized that I wasn't comparing apples to apples when I was testing speeds, either. I was using a 10 GB file on one server and a 120 GB file on the other. Once I used 10 GB files on both servers, I noticed the speeds were actually pretty close to the same. However, it isn't consistent: one minute I can copy the file in about 15 seconds, the next it takes a couple of minutes.
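To keep the comparison apples to apples, it helps to turn each copy into a number rather than a gut feel. This is just a rough sketch (the file paths are hypothetical, and it measures a plain file copy, not raw iSCSI throughput):

```python
import os
import shutil
import time

def timed_copy(src, dst):
    """Copy src to dst and return the observed throughput in MB/s."""
    start = time.perf_counter()
    shutil.copyfile(src, dst)
    elapsed = time.perf_counter() - start
    return os.path.getsize(src) / 1e6 / elapsed
```

Running the same-sized file from each server and comparing the MB/s figures (rather than wall-clock impressions) makes run-to-run inconsistency much easier to spot.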
I do have a ticket open with open-e so I am going to have them give my system a good one over and make sure it is configured right.
If you have any form of write cache enabled, then beware that what you may measure is the rate at which you can write to DSS's cache rather than to the disks. Once that cache fills up, you then start measuring the disk performance (i.e. the write rate drops to whatever your RAID5 array can achieve).
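The cache effect above can be put into rough numbers. This is a simplified model, not Open-E's actual caching behavior; the rates and cache size are made-up illustration values:

```python
def avg_write_rate(file_gb, cache_gb, wire_mbps, disk_mbps):
    """Average observed write rate (MB/s), assuming the cache absorbs
    the first cache_gb at wire speed and the remainder drains at the
    slower sustained disk rate. Purely illustrative model."""
    fast = min(file_gb, cache_gb) * 1000       # MB written at wire speed
    slow = max(file_gb - cache_gb, 0) * 1000   # MB written at disk speed
    total_time = fast / wire_mbps + slow / disk_mbps
    return (fast + slow) / total_time

# e.g. a 10 GB file, 4 GB cache, 1000 MB/s wire, 200 MB/s RAID5 sustained:
# the first 4 GB flies, the last 6 GB crawls, averaging ~294 MB/s
```

This also explains why a file that fits entirely in cache appears to copy at full wire speed while a larger one suddenly doesn't.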
It may be far more reliable to measure the disk performance using a tool like Iometer or HD Tune, as these tools generate the test data on the fly rather than reading a local file from what may be a slow local disk.
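The same "generate traffic on the fly" idea can be sketched without any extra tooling: write blocks built in memory so a slow local source disk never limits the measurement. A minimal sketch (the target path is hypothetical — point it at the iSCSI-mounted volume):

```python
import os
import time

def synthetic_write_rate(path, total_mb=1024, block_mb=4):
    """Write pseudo-random in-memory blocks to path and return MB/s.
    No source file is read, so only the write side is measured."""
    block = os.urandom(block_mb * 1024 * 1024)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(total_mb // block_mb):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())  # force data out of the OS page cache
    return total_mb / (time.perf_counter() - start)
```

Note the fsync: without it you would mostly be timing the local OS page cache, the same trap as the DSS write cache described above.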
I'm sure you can still use MPIO, but you may need to segment your 10 Gb Ethernet into VLANs that are then split out to physical ports on the switch. I've never tried this and have no idea whether or not DSS supports this approach.
Good luck
TFZ
If it can go wrong, it generally will!