Hi,

Setup: DSS V6 (build 12 or 13, doesn't matter)

2 Supermicro servers (quad core, 4 GB RAM)
3ware 9650S controllers (latest firmware)
Intel SSDs in RAID 10 (256K stripe set)
Dual-port Intel server NIC, PCI-X (have also tried the onboard Intel NIC)
HP 2510G switch with jumbo frames enabled

PRI (virtual IP) NIC on DSS has MTU 9000
SEC (replication) NIC on DSS has MTU 1500 (when I set this to any other value, strange things happen?!)
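For reference, this is roughly how I double-check what MTU each interface actually ended up with on a Linux box (a minimal sketch that reads sysfs, so Linux only; it just lists whatever interfaces the box has):

```python
import os

def interface_mtus(sys_net="/sys/class/net"):
    """Read the current MTU of every network interface from sysfs (Linux)."""
    mtus = {}
    for iface in os.listdir(sys_net):
        path = os.path.join(sys_net, iface, "mtu")
        try:
            with open(path) as f:
                mtus[iface] = int(f.read().strip())
        except OSError:
            pass  # interface went away or has no mtu attribute
    return mtus

if __name__ == "__main__":
    for iface, mtu in sorted(interface_mtus().items()):
        print(f"{iface}: {mtu}")
```

This shows the MTU the kernel is really using, which is not always what the GUI claims after a change.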

VMware ESX 3.5, jumbo frames enabled
Testing from inside a 2003 server (the only server running against that storage)

Tested with both block and file I/O; it doesn't matter.

When everything is set up with both servers online, the write speed is very low: from transfer sizes 1K up to 32K it's like 25% speed or worse, and 40-50% from 32K up to 8MB.
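To make the comparison reproducible, the kind of sweep I'm running can be sketched like this (a minimal local-disk version writing to a temp file; the sizes mirror my 1K-8MB transfer sizes, and real iSCSI numbers will of course differ):

```python
import os
import tempfile
import time

def write_throughput(block_size, total_bytes=8 * 1024 * 1024):
    """Write total_bytes in block_size chunks, fsync, and return MB/s."""
    buf = b"\0" * block_size
    fd, path = tempfile.mkstemp()
    try:
        start = time.perf_counter()
        written = 0
        while written < total_bytes:
            written += os.write(fd, buf)
        os.fsync(fd)  # make sure the data is really on disk before timing stops
        elapsed = time.perf_counter() - start
    finally:
        os.close(fd)
        os.remove(path)
    return (written / (1024 * 1024)) / elapsed

if __name__ == "__main__":
    # Sweep the same transfer sizes as the test: 1K up to 8MB
    size = 1024
    while size <= 8 * 1024 * 1024:
        print(f"{size // 1024:>5} KB blocks: {write_throughput(size):8.1f} MB/s")
        size *= 2
```

Running the same sweep in single-server mode and with both nodes online makes the drop-off per block size easy to compare.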

When I take down the passive server and run only against the primary, performance is OK; not amazing, but OK.

Same if I do a failover, run against the secondary server and turn off the primary. So the servers individually perform OK.

They run against the virtual IP.

Replication speed between the hosts is 60 to 100 MB/s, so that is OK.

(I have tried a lot of different things, with and without jumbo frames, but in single-server mode jumbo performs much better.)

I mean, the disks aren't the problem; we're talking monster speed on board, and random IOPS is over 2000 (not cached). One of these monster disks performs like 20 striped 7.2K SATA disks, and I have them in RAID 10.

I don't need more than 100 MB/s; IOPS is what matters, so going to 10G NICs is not my solution. I just want this setup to perform the way it does in single-server mode.

So what is wrong here?
Why do I get strange behavior when I try to increase the MTU on the replication interface?
Even with MTU 1500 it should still perform nearly as well.
Another strange thing is that the stats page for the NIC is pretty weird; it can't go over 200 Mbit!?
But I can see that the replication speed is a lot better than that...
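As a sanity check on those counters, timing a plain TCP transfer yourself shows what the stack can actually push, independent of what the stats page claims. A loopback sketch (point the client at the replication IP instead of 127.0.0.1 to test the real link):

```python
import socket
import threading
import time

def sink(server_sock, total):
    """Accept one connection and read until `total` bytes have arrived."""
    conn, _ = server_sock.accept()
    received = 0
    while received < total:
        data = conn.recv(1 << 16)
        if not data:
            break
        received += len(data)
    conn.close()

def measure_tcp_mbs(host="127.0.0.1", total=64 * 1024 * 1024):
    """Send `total` bytes to a local sink and return the rate in MB/s."""
    srv = socket.socket()
    srv.bind((host, 0))  # any free port
    srv.listen(1)
    port = srv.getsockname()[1]
    t = threading.Thread(target=sink, args=(srv, total))
    t.start()

    buf = b"\0" * (1 << 16)
    cli = socket.create_connection((host, port))
    start = time.perf_counter()
    sent = 0
    while sent < total:
        sent += cli.send(buf)
    cli.close()
    t.join()  # wait until the sink has drained everything
    srv.close()
    elapsed = time.perf_counter() - start
    return (sent / (1024 * 1024)) / elapsed

if __name__ == "__main__":
    print(f"loopback TCP: {measure_tcp_mbs():.0f} MB/s")
```

If a measurement like this over the replication interface shows well above 200 Mbit while the stats page stays pinned at 200, the counter is lying, not the link.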

Any suggestions, please!