I have built a new system and the load right now is very small, but I'm having some serious performance issues.

I have a single Linux box hooked up to the Open-E using NFS. It is connected to 3 different shares. Every so often (I can't find a pattern), the NFS mount seems to lock up/hang for about a minute, meaning any read requests for that mount are queued. For example, I'm even unable to perform 'ls -la' on that directory until it comes back again.
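
To get timestamps for when the hangs happen, I've been thinking of running a small watchdog loop like the one below (just a rough sketch: /data/data1 is one of my mounts, and the 5-second threshold is arbitrary; the check runs in the background so a stuck ls can't wedge the loop itself):

while true; do
    # launch the check in the background so a blocked ls can't hang this loop
    ls /data/data1 > /dev/null 2>&1 &
    pid=$!
    sleep 5
    # if the ls process is still alive after 5s, the mount is (probably) hung
    if kill -0 "$pid" 2>/dev/null; then
        echo "$(date) HANG: ls still blocked after 5s"
    else
        echo "$(date) ok"
    fi
done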

Nothing is reported in /var/log/messages on the Linux box to suggest the connection was lost. Running a ping reports no errors; all replies come back in around 0.2 ms.

I mount the shares in fstab using default values as follows:

10.20.20.160:/data1 /data/data1 nfs defaults 0 0
10.20.20.160:/system /data/system nfs defaults 0 0
10.20.20.160:/logs /data/logs nfs defaults 0 0


These defaults give these settings:

[root@cnbflbs21 data1]# cat /proc/mounts | grep nfs
sunrpc /var/lib/nfs/rpc_pipefs rpc_pipefs rw 0 0
10.20.20.160:/data1 /data/data1 nfs rw,vers=3,rsize=524288,wsize=524288,hard,proto=tcp,timeo=600,retrans=2,sec=sys,addr=10.20.20.160 0 0
10.20.20.160:/system /data/system nfs rw,vers=3,rsize=524288,wsize=524288,hard,proto=tcp,timeo=600,retrans=2,sec=sys,addr=10.20.20.160 0 0
10.20.20.160:/logs /data/logs nfs rw,vers=3,rsize=524288,wsize=524288,hard,proto=tcp,timeo=600,retrans=2,sec=sys,addr=10.20.20.160 0 0
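
Given the hard,timeo=600,retrans=2 settings above, I assume a hang like this would show up as RPC retransmissions, so I can also grab the client RPC counters before and after a hang (nfsstat comes with nfs-utils; the /proc file is the raw source if nfsstat isn't installed):

nfsstat -rc             # client RPC stats; a climbing "retrans" counter during a hang would point at the network or the server
cat /proc/net/rpc/nfs   # raw client-side counters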


Other info:
open-e = 5.0.DB49000000.3278
linux = CentOS 5.4 x64 (it is running as a guest on ESX 4)
RAID = RAID6, 3ware 9650SE, 15 x 1TB drives + 1 hot spare
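
If it would help to narrow things down, I'm happy to experiment with explicit mount options instead of the defaults; for example, something like the line below (the intr flag and the smaller rsize/wsize are just guesses to try on CentOS 5, not values I've verified):

10.20.20.160:/data1 /data/data1 nfs rw,hard,intr,tcp,nfsvers=3,timeo=600,retrans=2,rsize=32768,wsize=32768 0 0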