But CX4 is limited to 15 m, right? Looks like it will be too short for me (my two SANs are not on the same floor).
10GBase-T does 100 m over copper, but I can't find a 10GBase-T switch (8 ports?) and a 10GBase-T NIC compatible with Open-E.
You can buy a CX4 extender from Intel; with an Intel Connect cable you can go up to 100 m.
In my case, replication goes through a 2x gigabit balance-rr bond.

Originally Posted by nsc
Bonding shouldn't impact read performance at all: only writes are replicated to the slave, while read operations are served directly from your master box. So if you are not getting better than 1 GbE read performance, there is something wrong with your ESX / DSS setup.
And by the way, bonding will _not_ increase your replication/write throughput - for that you will need to move to InfiniBand or 10 GbE.
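A rough sanity check on those ceilings (a sketch; the 0.9 protocol-efficiency factor is my assumption, not a measured value):

```python
# Back-of-the-envelope iSCSI payload ceilings per link speed.
# The 0.9 efficiency factor for TCP/iSCSI overhead is an assumption.
GBIT = 1_000_000_000  # bits per second

def usable_mb_per_s(link_gbit, links=1, efficiency=0.9):
    """Approximate usable payload bandwidth in MB/s."""
    return link_gbit * links * GBIT * efficiency / 8 / 1_000_000

print(usable_mb_per_s(1))      # single GbE: ~112 MB/s
print(usable_mb_per_s(1, 2))   # 2x GbE via MPIO: ~225 MB/s
print(usable_mb_per_s(10))     # 10 GbE: ~1125 MB/s
```

So a single GbE link tops out around 112 MB/s of payload, which is why any read figure well above that needs a second path or a faster link.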
I agree that replication should not impact read performance.
My ESXi 4 is using MPIO with VMware Round Robin: I see activity on both of my NICs for iSCSI traffic (but maybe it's not working...).
So what is your teaming mode in DSS to get 170 MB/s in read?
I tried Balance-RR (lower read performance) and Balance-ALB.
What is yours?
Bonding between client and DSS? What for? MPIO does it for me.

Originally Posted by nsc
For example: DSS with 2x GbE ports,
port A with 10.0.0.1/24 connected to switch A,
port B with 10.0.1.1/24 connected to switch B.
Now the client (Xen in this case) also has 2 GbE ports, connected to switch A and switch B, so each port is separate from the other and sits in its own network. For the 'balancing' I set up an iSCSI connection to each network (10.0.0.1 and 10.0.1.1) and put MPIO over them to balance the IO requests (change path every 4 IO requests) - this way I'm getting about 170 MB/s read.
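The two-subnet layout above could be sketched like this on a generic Linux (Xen dom0) initiator, assuming open-iscsi and dm-multipath; only the portal IPs come from the post, the target IQN is a hypothetical placeholder:

```shell
# Discover and log in to the same target over both subnets.
iscsiadm -m discovery -t sendtargets -p 10.0.0.1
iscsiadm -m discovery -t sendtargets -p 10.0.1.1
iscsiadm -m node -T iqn.2009-01.example:dss.target0 -p 10.0.0.1 --login
iscsiadm -m node -T iqn.2009-01.example:dss.target0 -p 10.0.1.1 --login

# dm-multipath then aggregates both sessions into one block device
# and can switch path every N IOs (rr_min_io in /etc/multipath.conf).
multipath -ll
```

Each session rides its own switch and subnet, so the balancing happens at the SCSI layer instead of the Ethernet layer.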
OK red, now I understand.
I want to have HA on Open-E, so I have only one virtual IP for the link between client & Open-E... I can't use MPIO on that single IP... (Or maybe DSS V6 can give me more than one virtual IP?)
I'll try your setup anyway.
But with your setup you don't have automatic failover for your SAN, right?
Yes, you can have more than one vIP - I use MPIO with 2 vIPs.

Originally Posted by nsc
regards,
Lukas
descience.NET
Dr. Lukas Pfeiffer
A-1140 Wien
Austria
www.dotnethost.at
DSS v6 b4550 iSCSI autofailover with Windows 2008 R2 failover cluster (still having some issues with autofailover).
2 DSS: 3HE Supermicro X7DBE BIOS 2.1a, Areca ARC-1261ML FW 1.48, 8x WD RE3 1TB, 1x Intel PRO1000MT Dualport, 1x Intel PRO1000PT Dualport.
2 Windows Nodes: Intel SR1500 + Intel SR1560, Dual XEON E54xx, 32 GB RAM, 6 NICs. Windows Server 2008 R2.
I removed my bond for iSCSI (eth3 / eth5):
on eth3: 10.0.0.1, jumbo frames (MTU 9000)
on eth5: 10.0.0.3, jumbo frames (MTU 9000)
On the ESXi host I added 10.0.0.3 for iSCSI.
Now I have my 4 paths to access my iSCSI volume.
VMware Round Robin is active.
But no performance improvement in CrystalDiskMark.
I'm using Open-E V5 (not V6) - maybe it's a limitation of the IET / SCST option for iSCSI?
I'm planning to move to V6...
I had to tweak MPIO settings to get any increase in performance at all - 4 IOs per path instead of the default 1000.
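On a Linux initiator that tweak would live in /etc/multipath.conf - a sketch, assuming dm-multipath (the parameter is rr_min_io on older releases, rr_min_io_rq on newer ones):

```
# /etc/multipath.conf fragment - switch path every 4 IOs instead of 1000.
defaults {
    path_grouping_policy multibus
    rr_min_io 4
}
```

Small values switch paths often enough for a single benchmark stream to spread across both links; the default of 1000 effectively pins each stream to one path.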
I don't find this kind of option in VMware.
Following a blog post, I just have to activate the Round Robin option.
I'll wait for V6 then...
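For what it's worth, ESXi 4.x does appear to expose a per-path IO count, just not in the GUI - a CLI sketch to verify on your build (naa.xxxx is a placeholder for your LUN's device ID):

```shell
# Set Round Robin to switch path every 4 IOs instead of the default 1000.
esxcli nmp roundrobin setconfig --device naa.xxxx --type iops --iops 4

# Check the current Round Robin configuration for that device.
esxcli nmp roundrobin getconfig --device naa.xxxx
```

The effect is the same as the dm-multipath rr_min_io tweak: without it, a single benchmark stream tends to stay on one path and never exceeds single-link throughput.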