My environment is all Hyper-V virtualized servers, mostly web servers along with utility servers like mail and customers' small SQL databases. Most servers use about 50-80 GB of disk space.
I've been leaning toward four-drive RAID 10 sets, monitoring each one so that when a set gets busy enough, the next targets go on the next RAID 10 set, and so on (a sketch of that kind of check is below). With a single 16-drive array, all the drives run full bore constantly, and you can't really tune around one busy web server that does tons of little writes to its log files. Rebuild times also take forever with a RAID 6 array, though the second parity drive would be a plus.
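For what it's worth, here's a minimal sketch of the busy-ness check I have in mind, assuming a Linux box where psutil exposes a per-disk busy_time counter; the device names (md0/md1) and the 80% threshold are placeholders, not anything measured:

```python
import time
import psutil

SAMPLE_SECONDS = 10     # measurement window
BUSY_THRESHOLD = 0.80   # hypothetical cutoff: above this, place new VMs on the next set

def disk_utilization(device, interval=SAMPLE_SECONDS):
    """Approximate utilization as busy milliseconds per elapsed millisecond."""
    before = psutil.disk_io_counters(perdisk=True)[device]
    time.sleep(interval)
    after = psutil.disk_io_counters(perdisk=True)[device]
    busy_ms = after.busy_time - before.busy_time  # busy_time is Linux-only
    return busy_ms / (interval * 1000.0)

# 'md0' and 'md1' are placeholder names for the RAID 10 sets
for dev in ("md0", "md1"):
    util = disk_utilization(dev)
    status = "busy, use the next set" if util > BUSY_THRESHOLD else "ok"
    print(f"{dev}: {util:.0%} busy ({status})")
```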
I guess the question becomes: do I want performance and more hands-on tuning with RAID 10 sets, or a large RAID 6 array that's easier to set up, needs less tuning, and has extra safety, but carries a performance penalty on writes and rebuilds?
I don't know if you're running Windows Server or Linux, but the Linux page cache really does a great job with those little log files, though YMMV of course.
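To illustrate the point about the cache absorbing tiny log writes, here's a rough sketch; the line count and log path are just made up for the demo. Buffered appends that the page cache batches up are far cheaper than forcing each line to disk:

```python
import os
import time

LOG_PATH = "/tmp/cache_demo.log"      # placeholder path
LINES = 5000
LINE = b"GET /index.html 200 1234\n"  # a typical small access-log entry

def timed_writes(sync_each_line):
    start = time.perf_counter()
    with open(LOG_PATH, "wb") as f:
        for _ in range(LINES):
            f.write(LINE)
            if sync_each_line:
                f.flush()
                os.fsync(f.fileno())  # force the write through the page cache
    return time.perf_counter() - start

buffered = timed_writes(sync_each_line=False)  # page cache coalesces these
synced = timed_writes(sync_each_line=True)     # each line hits the disk
print(f"buffered: {buffered:.3f}s  fsync-per-line: {synced:.3f}s")
os.remove(LOG_PATH)
```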
As for the rebuild and performance penalties on RAID 6: my units (and again, I am talking about external RAID systems) rebuilt a 16-drive, 7 TB RAID 6 array in roughly 4 hours, so I really can't see the problem there. Raw read/write performance is 175 MB/s vs. 160 MB/s over FC, which seems reasonable as well.
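A quick back-of-the-envelope check on those numbers, assuming the 7 TB is usable capacity spread over the 14 data drives (16 minus two parity): the replacement drive only needs a fairly modest sustained write rate for a 4-hour rebuild to be plausible.

```python
# Sanity check on the RAID 6 rebuild time, under the stated assumptions.
ARRAY_TB = 7.0     # usable capacity from the post
DRIVES = 16
PARITY = 2         # RAID 6
REBUILD_HOURS = 4.0

drive_tb = ARRAY_TB / (DRIVES - PARITY)  # ~0.5 TB per drive
drive_mb = drive_tb * 1_000_000          # decimal MB, as drive vendors count
rate_mb_s = drive_mb / (REBUILD_HOURS * 3600)

print(f"per-drive size: {drive_tb:.2f} TB")
print(f"required sustained rebuild write rate: {rate_mb_s:.0f} MB/s")
# ~35 MB/s, well within what a single drive can sustain
```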
In the end I think you've already made up your mind, so why don't you just go for it? Obviously nobody has screamed at you for suggesting something silly or dumb.
When it comes to tuning and setup, nobody will ever guarantee that a particular suggested drive/volume layout will work or meet your demands.