Best strategy for saving a huge number of small files
Hello,
we are using an Open-E Storage Server with a dual Xeon 5140 (dual-core), 4 GB RAM and 5 TB of SATA disk drives in RAID 6. This server will be used for a custom application that produces a huge amount of small files (1,000,000 20 KB files in 2 hours).
I made some tests: putting all files in the same directory slows the system down once there are more than a few thousand files in it. My next step was creating 1,000 subfolders:
0/0/0
0/0/1
...
9/9/9
Based on the last digits of the file ID, I save each file to the corresponding folder. But after a while, maybe one hour and about 400,000 files, everything gets slow again...
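The bucketing scheme I'm using looks roughly like this (a minimal sketch; `BASE_DIR`, `bucket_path` and `store` are made-up names for illustration, and the layout follows the 0/0/0 ... 9/9/9 structure above):

```python
import os

BASE_DIR = "/mnt/storage"  # hypothetical mount point of the Open-E volume

def bucket_path(file_id: int) -> str:
    """Map a numeric file ID to a 3-level folder from its last three digits."""
    digits = f"{file_id % 1000:03d}"  # last three digits, zero-padded
    return os.path.join(BASE_DIR, digits[0], digits[1], digits[2])

def store(file_id: int, data: bytes) -> str:
    """Write the file into its bucket, creating the folder on demand."""
    folder = bucket_path(file_id)
    os.makedirs(folder, exist_ok=True)  # creates 0/0/0 ... 9/9/9 as needed
    path = os.path.join(folder, f"{file_id}.dat")
    with open(path, "wb") as f:
        f.write(data)
    return path
```

With 1,000,000 files spread over 1,000 buckets this still leaves about 1,000 files per directory, which is why going to 10,000 buckets (four digits) would cut that to ~100 each.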
Maybe I should create 10,000 subfolders? What are your tips?
Btw. is there any option to tune the file system for small files? Can I set options like noatime?
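Disabling atime updates is done with mount options; an /etc/fstab entry might look like this (device name and mount point are placeholders, and whether Open-E exposes this setting in its interface is a separate question):

```shell
# noatime/nodiratime avoid a metadata write on every file/directory read,
# which matters when you touch millions of small files
/dev/sdb1  /mnt/storage  xfs  noatime,nodiratime  0  0
```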
Yeah, I think it's XFS, which supports variable block sizes (that can be good for small files). Does Open-E have any tuning parameters that help optimize for many small files? We have a customer who's looking to use the Open-E box as a storage pool for lots (millions) of small (~16 KB) files.