G'day Lukas,
I am partway through testing an Adaptec 5405 with 12 x Seagate ES.2 1TB disks, but my preliminary observations are:
The more disks in a RAID, the longer the rebuild time, and therefore the greater the chance of a total failure if a second drive dies or you hit an unrecoverable read error ("bit error") during the rebuild.
Because of this I would not use anything other than RAID6; the risk that a second error during a RAID5 rebuild results in lost data is just too high.
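To put a rough number on that bit-error risk, here's a sketch (my own back-of-envelope, not from the testing above): the ES.2 is specced at roughly one unrecoverable read error per 10^15 bits, and a rebuild has to read every surviving disk end-to-end, so the chance of tripping over at least one URE mid-rebuild can be estimated like this (function name and the 1e-15 figure are my assumptions):

```python
import math

# Rough sketch: chance of at least one unrecoverable read error (URE)
# during a rebuild, which must read every surviving disk end-to-end.
# The 1e-15 errors-per-bit rate is the quoted ES.2 spec (assumption).
def p_ure_during_rebuild(surviving_disks, disk_tb=1.0, ber=1e-15):
    bits_read = surviving_disks * disk_tb * 1e12 * 8  # TB -> bits
    # log1p/expm1 keep the tiny per-bit probability numerically stable
    return -math.expm1(bits_read * math.log1p(-ber))

# 11 surviving 1TB disks (a 12-disk RAID5 with one dead) read ~88 Tbit:
print(p_ure_during_rebuild(11))  # ~0.084, i.e. roughly a 1-in-12 chance
```

A RAID5 rebuild dies on that one error; a RAID6 rebuild shrugs it off, which is the whole argument.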

Our rebuild times are:
Raid5  x 3  = 10hrs, Est Throughput = 415Mb/s
Raid6  x 4  = 10hrs, Est Throughput = 424Mb/s
Raid6  x 5  = 9hrs,  Est Throughput = 583Mb/s
Raid6  x 5  = 11hrs, Est Throughput = 489Mb/s
Raid6  x 6  = 10hrs, Est Throughput = 888Mb/s
Raid6  x 11 = 17hrs, Est Throughput = 1118Mb/s
Raid60 x 10 = 11hrs, Est Throughput = 953Mb/s
Raid10 x 4  = 4hrs,  Est Throughput = 1016Mb/s
Notes: the "x ??" figure is the number of 1TB drives, and Est Throughput is the array size in MB divided by the time to rebuild.
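For reference, the arithmetic behind the Est Throughput column works out roughly like this (a sketch only; the usable-capacity rules per RAID level, including treating RAID60 as two RAID6 spans, are my assumptions, and since the times above are rounded to whole hours the results only roughly match the table):

```python
# Sketch of the Est Throughput arithmetic: usable array size divided
# by the rebuild time. Capacity rules per level are assumptions.
def usable_tb(level, disks, disk_tb=1.0):
    data_disks = {
        "raid5": disks - 1,
        "raid6": disks - 2,
        "raid10": disks // 2,
        "raid60": disks - 4,   # two RAID6 spans, 2 parity disks each
    }[level]
    return data_disks * disk_tb

def est_throughput_mbit_s(level, disks, hours):
    mb = usable_tb(level, disks) * 1e6    # capacity in MB
    mb_per_s = mb / (hours * 3600)        # rebuild rate in MB/s
    return mb_per_s * 8                   # megabits per second

# e.g. a 6-disk RAID6 rebuilt in 10 hours:
print(round(est_throughput_mbit_s("raid6", 6, 10)))  # 889
```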

Data testing of the RAID6 x 5 arrays (hdparm and a mounted iSCSI volume) gives a write figure of 260Mb/s.

Further good info is at Storage Advisors and here.

While we still have real-world throughput testing to complete, I think our final configuration will be a RAID6 x 4 and a RAID6 x 6, with one hot spare and one "simple" disk for temp files etc.
As RAID6 "loses" capacity on arrays with an odd number of disks, we will go for a 6-4 (5.4TB) rather than a 5-5 (4.5TB) configuration.
I also feel more comfortable having two arrays rather than one large one, as it reduces the risk of losing everything at once.
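A toy model of why (my own sketch: it assumes simultaneous random disk failures and ignores rebuild windows entirely): with one 10-disk RAID6, any three dead disks lose everything, whereas with a 6+4 split most triple failures straddle the two arrays and lose nothing.

```python
from itertools import combinations
from collections import Counter

# Toy model: expected fraction of usable capacity lost when n_failures
# disks die simultaneously, chosen uniformly at random. Each group is
# a RAID6 array (tolerates 2 failed members, usable = size - 2).
# Ignores rebuild timing entirely - rough comparison numbers only.
def expected_loss(group_sizes, n_failures):
    disks = [g for g, size in enumerate(group_sizes) for _ in range(size)]
    usable = [size - 2 for size in group_sizes]
    total_usable = sum(usable)
    combos = list(combinations(range(len(disks)), n_failures))
    lost = 0.0
    for combo in combos:
        hits = Counter(disks[i] for i in combo)
        lost += sum(usable[g] for g, n in hits.items() if n > 2) / total_usable
    return lost / len(combos)

print(expected_loss([10], 3))    # 1.0: any 3 failures kill a single 10-disk RAID6
print(expected_loss([6, 4], 3))  # ~0.12: most triples straddle the two arrays
```

And even when the smaller array does die, only part of the data goes with it, which is the other half of the comfort factor.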

Also, just while I think of it, the Online Capacity Expansion (OCE) feature of a lot of controllers has its disadvantages. On the Adaptec, going from a 5-disk to a 6-disk RAID6 takes about 4 days, and while you can still use the volumes during the process, throughput drops to about 10% (approx 20Mb/s).