Hi,
we have two DSS boxes (each with 14 GBit ports, two dual-core Opterons, 4 GB RAM, 24 x 150 GB WD Raptors, and an Areca 1280) and are mostly using NAS with volume replication. The DSS servers run in an active/passive combination, i.e. the production servers only connect to the first/primary DSS.
The second DSS is only a replication target. We have configured several snapshot tasks (three at the moment) and the corresponding NAS resources (NFS here) on the second DSS. If we activate the snapshots manually, we see the correct NFS exports:
s027:~# showmount -e 10.254.151.12
Export list for 10.254.151.12:
/snapnfsvm02 *
/snapnfsvm01 *
but if we schedule the task instead, we get something like:
Export list for 10.254.150.12:
/RAMDISK/volumes/d64hoz-KI12-jVMn-HiBr-07wb-KQYd-nCJitZ *
/RAMDISK/volumes/d64g1l-PWa7-q8ba-bke3-ZiqL-BUhe-D7u0a2 *
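For monitoring, something like the following sketch could help spot when a scheduled snapshot comes up exported under the raw volume-ID path rather than the configured share name. This is just an assumption-laden helper, not a fix: the `/RAMDISK/volumes/` prefix is taken from the output above and may differ on other DSS versions, and the parsing assumes the `showmount -e` format shown here.

```python
# Sketch: parse `showmount -e` output and flag exports that appear
# under the auto-generated RAMDISK volume-ID paths instead of the
# configured NFS share names. The prefix below is an assumption
# based on the output shown above.
RAMDISK_PREFIX = "/RAMDISK/volumes/"

def check_exports(showmount_output: str) -> list:
    """Return the export paths that look like raw volume-ID paths."""
    bad = []
    for line in showmount_output.splitlines():
        line = line.strip()
        # Skip the "Export list for ..." header and empty lines.
        if not line or line.startswith("Export list"):
            continue
        export_path = line.split()[0]  # export path is the first column
        if export_path.startswith(RAMDISK_PREFIX):
            bad.append(export_path)
    return bad
```

One could feed it the output of `showmount -e <ip>` (e.g. via `subprocess.run`) from a cron job on a client and alert when the list is non-empty.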
Is this a bug, or are we making a mistake somewhere? How can we get the correct exports with scheduled snapshot tasks?
Regards,
heimic