Thanks Tom, i have no error in Areca management.
i'm creating a support ticket right now.
We are looking at them now. There are some dropped packets on eth1, and there is an MTU/jumbo-frame mismatch (MTU 1500 vs. 9000) on your bond1. You also need to update both of your DSS systems to the latest version; you are running a very old one.
Now, the logs show that lv0003 is not open but only active (lvs reports: lv0003 vg00 -wi-a- 1000.00G). The attribute string should read "-wi-ao": the "a" means the LV is active, and the "o" means the device is open (in use). Try stopping and restarting the LUN from the Target.
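The attribute check Tom describes can be scripted. A minimal sketch, parsing a sample lvs-style line from this thread instead of calling lvs directly (the field positions in the attr string are the standard LVM ones: 5th character = state, 6th = open):

```shell
#!/bin/sh
# Sample lvs output line from the thread (note: no 'o' flag, so not open)
line="lv0003 vg00 -wi-a- 1000.00G"

attr=$(printf '%s\n' "$line" | awk '{print $3}')
state=$(printf '%s\n' "$attr" | cut -c5)   # 5th char: 'a' = active
open=$(printf '%s\n' "$attr" | cut -c6)    # 6th char: 'o' = open (in use)

if [ "$state" = "a" ] && [ "$open" = "o" ]; then
  echo "LV is active and open"
else
  echo "LV is active but not open"
fi
```

On a live system you would feed it the real output, e.g. `lvs --noheadings vg00/lv0003`.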
Version: 6.0up55.8101.5087
eth1: too many iterations (6) in nv_nic_irq.
eth1: too many iterations (6) in nv_nic_irq.
bond1 Link encap:Ethernet HWaddr 02:54:48:5C:5F:7F
inet addr:192.168.0.1 Bcast:192.168.0.255 Mask:255.255.255.0
UP BROADCAST RUNNING MASTER MULTICAST MTU:9000 Metric:1
RX packets:29394787420 errors:811 dropped:0 overruns:0 frame:811
TX packets:38990567435 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:67970103894290 (61.8 TiB) TX bytes:258399518745771 (235.0 TiB)
eth0 Link encap:Ethernet HWaddr 02:54:48:5C:5F:7F
UP BROADCAST SLAVE MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)
Interrupt:251
eth1 Link encap:Ethernet HWaddr 02:54:48:5C:5F:7F
UP BROADCAST RUNNING SLAVE MULTICAST MTU:9000 Metric:1
RX packets:29394787420 errors:811 dropped:0 overruns:0 frame:811
TX packets:38990567435 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:67970103894290 (61.8 TiB) TX bytes:258399518745771 (235.0 TiB)
Interrupt:250 Base address:0x8000
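The mismatch Tom flagged is visible in the output above: bond1 and eth1 run at MTU 9000 while slave eth0 is still at 1500. A quick sketch of a check for this, using the name:MTU pairs from the posted ifconfig output as sample data:

```shell
#!/bin/sh
# Flag slave NICs whose MTU differs from the bond MTU.
# Values are taken from the ifconfig output above (bond1 = 9000).
bond_mtu=9000
mismatches=""

for pair in eth0:1500 eth1:9000; do
  nic=${pair%%:*}
  mtu=${pair##*:}
  if [ "$mtu" -ne "$bond_mtu" ]; then
    mismatches="$mismatches $nic"
    echo "MISMATCH: $nic MTU $mtu != bond MTU $bond_mtu"
  fi
done
```

On a live box the per-interface MTUs could come from `ip link show` instead of hard-coded pairs.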
Hi Tom,
Regarding the MTU: I can't change it. Once the bond is up, the console management only lets me change the MTU of the bond, not the MTU of each attached NIC:
http://img810.imageshack.us/img810/2701/bondmtu.png
For my LUN, everything looks OK:
http://img51.imageshack.us/img51/3735/lunx.png
So I just stopped/started the replication task:
http://imageshack.us/photo/my-images...plication.png/
As for the update: I knew I was running an older version, but after 6 months without any errors or problems, I'm not a big fan of updating something that works ;-)
We'll see after the replication finishes.
So you created the bond without first setting both NICs to the jumbo-frame value. You will have to schedule some downtime to break the bond, set the other NIC to the proper jumbo-frame MTU, and re-create the bond.
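At the Linux level, the downtime procedure Tom describes would look roughly like the sketch below, using standard iproute2 commands. On a DSS appliance this is normally done through the console/GUI, not a shell; the interface names (bond1, eth0, eth1) are the ones from this thread, and the exact steps on DSS may differ:

```shell
# Break the bond, fix the slave MTUs, and re-create it (illustrative only)
ip link set bond1 down
ip link set eth0 down nomaster      # detach eth0 from the bond
ip link set eth1 down nomaster      # detach eth1 from the bond
ip link set eth0 mtu 9000           # match the jumbo-frame MTU on both NICs
ip link set eth1 mtu 9000
ip link set eth0 master bond1 up    # re-attach the slaves
ip link set eth1 master bond1 up
ip link set bond1 mtu 9000 up
```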
Thanks for the info about MTU on Bond.
After restarting the replication task, lv0003 went down and nothing worked anymore.
I had to hard-reset my two ESX hosts after a proper restart of Open-E.
But the data stored in the VMDKs on lv0003 is corrupted.
I think the hard reset was too hard...
Did you check your RAID controller health, and also your RAID array (storage disks) health?