Spec'ing a new box for Open-E. I have 24 hot-swap drives and am trying to decide if it's better to do:
1) one huge RAID 6 array plus hot spares
2) four RAID 10 arrays plus hot spares
I keep thinking a huge array is going to have performance issues once you start connecting more than a few clients, but it would be nice to carve out chunks of disk space as needed from one big array.
A smaller set of RAID 10 arrays requires more deliberate planning and brings worries about running out of disk space, but performance would probably be better. I could allocate targets as needed depending on how busy each RAID 10 set is, but again, that's more hands-on.
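For what it's worth, here's a quick back-of-envelope comparison of usable capacity for the two layouts. This is just a sketch: the 600 GB drive size and the exact spare/set counts are assumptions to make the math concrete, not anything from your spec.

```python
# Back-of-envelope usable capacity for a 24-bay box.
# Drive size and layout details below are assumptions; adjust to taste.
DRIVE_GB = 600

# Option 1: one 22-drive RAID 6 plus 2 hot spares.
# RAID 6 gives n - 2 drives' worth of usable space.
raid6_drives = 22
raid6_usable = (raid6_drives - 2) * DRIVE_GB

# Option 2: four 4-drive RAID 10 sets (16 drives), rest as spares/growth.
# RAID 10 gives n / 2 drives' worth of usable space per set.
raid10_sets, drives_per_set = 4, 4
raid10_usable = raid10_sets * (drives_per_set // 2) * DRIVE_GB

print(f"RAID 6 : {raid6_usable:,} GB usable")   # 12,000 GB
print(f"RAID 10: {raid10_usable:,} GB usable")  #  4,800 GB
```

The big array comes out roughly 2.5x ahead on space, which is exactly the "running out of disk" worry with the RAID 10 route.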
I was wondering what others do when contemplating setting up a SAN box. I'm also curious what hardware you decided on. I was looking at a 3ware or Areca controller with 15k SAS drives.
I don't think there is a general answer to this. It always depends on what you plan to do with your storage; without knowing that, any suggestion would be a complete guess.
And I think that's why no answer has been posted to this question yet.
We are setting up mainly file server storage, presented as iSCSI volumes. My RAIDs are almost always external Infortrend units with 16 drives in a RAID 6 configuration.
But when I am setting up storage for our Oracle database cluster, I tend to go with RAID 10 instead, because in that setup I favor lower latency over higher bulk throughput.
As for the hardware, I've had good luck with Supermicro motherboards.
As for the RAID controller, either the 3ware or the Areca will work well; you may want to go with the one you are more comfortable with.
My environment is all Hyper-V virtualized servers, mostly web servers plus utility servers like mail and customers' small SQL databases. Most servers use about 50-80 GB of disk space.
I've been leaning toward 4-drive RAID 10 sets, monitoring them so that once one set is busy enough, the next set takes the next targets, and so on. With a 16-drive array, all the drives are constantly running full bore, and you can't really tune for a single busy web server that does tons of little writes to its log files. Also, rebuild times take forever with a RAID 6 array, though you do get that second parity drive, which is a plus.
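If the boxes end up on Linux, here's a minimal sketch of that "is this set busy?" check, reading utilization straight from /proc/diskstats. The device names are assumptions (md0-md3 here; with a 3ware or Areca hardware controller they'd show up as sdb, sdc, and so on), and `iostat -x` gives you the same %util number with less work; this just shows where it comes from.

```python
#!/usr/bin/env python3
"""Rough per-array busyness check via /proc/diskstats (Linux only)."""
import time

DEVICES = ["md0", "md1", "md2", "md3"]  # assumed names for the four RAID 10 sets
INTERVAL = 5  # sampling window in seconds

def io_ms():
    """Return {device: milliseconds spent doing I/O} from /proc/diskstats."""
    stats = {}
    with open("/proc/diskstats") as f:
        for line in f:
            parts = line.split()
            if parts[2] in DEVICES:
                stats[parts[2]] = int(parts[12])  # stat field 10: time doing I/O (ms)
    return stats

before = io_ms()
time.sleep(INTERVAL)
after = io_ms()

for dev in DEVICES:
    if dev in before and dev in after:
        busy_pct = (after[dev] - before[dev]) / (INTERVAL * 1000) * 100
        print(f"{dev}: {busy_pct:5.1f}% busy")
```

Wire something like that into your monitoring and you have a simple rule for when to start placing new targets on the next set.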
I guess the question becomes: do I want performance and more hands-on tuning with RAID 10 sets, or a large RAID 6 array with less tuning, easier setup, and extra redundancy, but with a performance penalty on writes and rebuilds?
I don't know if you're running Windows Server or Linux, but the Linux drive caches really do a great job when it comes to those little log files. YMMV, of course.
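You can see the effect for yourself with a quick test: time a bunch of small log-style appends with the page cache absorbing them versus forcing each one to disk with fsync. A sketch only; the filename, line content, and counts are arbitrary, and the absolute numbers will vary wildly by box, but the ratio makes the point.

```python
#!/usr/bin/env python3
"""Demo: OS write cache vs. fsync for many small log-style writes."""
import os
import time

N, LINE = 1_000, b"GET /index.html 200 0.003s\n"

def bench(sync):
    start = time.time()
    with open("test.log", "wb") as f:
        for _ in range(N):
            f.write(LINE)
            if sync:
                f.flush()
                os.fsync(f.fileno())  # force each write all the way to disk
    os.remove("test.log")
    return time.time() - start

print(f"cached writes:  {bench(sync=False):.3f}s")
print(f"fsync'd writes: {bench(sync=True):.3f}s")
```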
As for the rebuild and performance penalties of RAID 6: my units (and again, I am talking about external RAID systems) rebuild a 16-drive, 7 TB RAID 6 in roughly 4 hours, so I really can't see the problem there. Raw read/write performance is 175 MB/s vs. 160 MB/s over FC, which seems reasonable as well.
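For perspective, the per-drive rebuild rate that implies is nothing exotic, assuming the 7 TB is usable capacity spread over 14 data drives in the 16-drive set:

```python
# Sanity check on the rebuild figure (assumes 7 TB usable over 14 data drives).
usable_tb, data_drives, hours = 7, 14, 4
per_drive_gb = usable_tb * 1000 / data_drives     # ~500 GB to reconstruct
rate_mb_s = per_drive_gb * 1000 / (hours * 3600)  # ~35 MB/s sustained
print(f"~{per_drive_gb:.0f} GB per drive at ~{rate_mb_s:.0f} MB/s")
```

Roughly 35 MB/s sustained per drive, which any modern spindle can manage.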
In the end I think you have already made up your mind, so why not just go for it? Obviously nobody screamed at you for suggesting something silly or dumb.
When it comes to tuning and setup, nobody will ever guarantee that a particular suggested drive/volume layout will work or meet your demands.