We are going to do a demo for a customer in the morning about using the DSS mainly as a NAS system, with manual failover (when will NAS autofailover be available, even in beta? In a month? A year? Maybe for some protocols but not others?). The customer has backup software that stores its data deduplicated: it chops each file into small chunks (around 16 kB), checksums each chunk, stores the checksums in a database, and writes each chunk as its own file on a fileserver/NAS somewhere, which adds up to millions of little files. These files might even all end up in the same directory. How well does the DSS handle this situation? Is there anything I should keep in mind while I'm talking to the customer?
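To make the access pattern concrete, here is a rough sketch of what that kind of backup software does (my own illustration, not the customer's actual code; the paths, the SQLite database and names like STORE_DIR are just placeholders): each file is split into fixed-size chunks, each chunk is hashed, the hash goes into a database, and the chunk itself lands on the NAS as a tiny file.

import hashlib
import os
import sqlite3

CHUNK_SIZE = 16 * 1024          # ~16 kB chunks, per the description above
STORE_DIR = "/mnt/nas/chunks"   # hypothetical NAS mount point

def store_file(path, db):
    # Split one source file into chunks, record each chunk's checksum in the
    # database, and write the chunk as a small file named after its digest.
    with open(path, "rb") as f:
        while True:
            chunk = f.read(CHUNK_SIZE)
            if not chunk:
                break
            digest = hashlib.sha1(chunk).hexdigest()
            db.execute("INSERT OR IGNORE INTO chunks(digest) VALUES (?)", (digest,))
            chunk_path = os.path.join(STORE_DIR, digest)
            # Deduplication: only write the chunk if we have not seen it before.
            if not os.path.exists(chunk_path):
                with open(chunk_path, "wb") as out:
                    out.write(chunk)

db = sqlite3.connect("chunks.db")
db.execute("CREATE TABLE IF NOT EXISTS chunks (digest TEXT PRIMARY KEY)")
store_file("/data/backup-image.bin", db)
db.commit()

So the DSS volume ends up seeing lots of small-file creates, lookups and reads rather than large streaming writes, and that is the workload I'm worried about.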

I assume that DSS uses XFS for the NAS volumes. Is there any way to tune the block size (or anything else) to optimize for small files? Obviously the hardware RAID configuration has to be chosen appropriately, but what else?
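For a sense of scale, here is the back-of-the-envelope arithmetic I've been doing (all of these numbers are my own assumptions, e.g. 10 million chunk files and a 256-byte inode, which I believe is the traditional XFS default; nothing here is specific to how DSS actually formats its volumes):

n_files = 10 * 1000 * 1000      # assumed: "millions of little files"
chunk_size = 16 * 1024          # bytes of payload per chunk file
inode_size = 256                # bytes; I believe the traditional XFS default
dirent_cost = 48                # rough guess at per-entry directory overhead

data_gib   = n_files * chunk_size  / 2.0**30
inode_gib  = n_files * inode_size  / 2.0**30
dirent_gib = n_files * dirent_cost / 2.0**30

print("payload data      : %7.1f GiB" % data_gib)
print("inode metadata    : %7.2f GiB" % inode_gib)
print("directory entries : %7.2f GiB" % dirent_gib)

Even if the metadata totals look small next to the payload data, that is still on the order of ten million inode allocations and directory insertions, which is why I'm asking about tuning.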

Also, as a separate question: is there a list somewhere of the advantages/disadvantages of using the 64-bit kernel vs. the 32-bit one? We might want to use more than 4 GB of memory, and I think the default 32-bit kernel doesn't really take advantage of more than 4 GB. Anything else?
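For what it's worth, this is the kind of quick check I would run to see what a given kernel actually reports (assuming you can get to a Linux shell with Python somewhere, e.g. on a test box, since the DSS appliance itself may not expose one):

import platform

# Report the kernel version and architecture (e.g. i686 vs. x86_64),
# then how much RAM the running kernel actually sees.
print("kernel release : %s" % platform.release())
print("architecture   : %s" % platform.machine())

with open("/proc/meminfo") as f:
    for line in f:
        if line.startswith("MemTotal"):
            kb = int(line.split()[1])
            print("MemTotal       : %.1f GB" % (kb / 1024.0 / 1024.0))
            break

On a 32-bit kernel without PAE/highmem support I would expect MemTotal to top out around 4 GB even with more installed, which is the core of my concern.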