I searched around the web and found people arguing about which stripe size gives the best performance... but most of them are Windows users.
Since Open-E is Linux-based, does the choice of stripe size on a RAID 0 setup make any difference there?
Some claimed the stripe size depends on the number of SSDs connected,
some said that because of the SSD page size it should be 64 KB, full stop,
and some said that because you want the data striped across all drives, the bigger the stripe size, the better the performance...
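To make that last claim concrete, here's a small Python sketch (my own illustration, not Open-E or any vendor's code) showing which drives a single I/O request touches in a RAID 0 array for a given stripe (chunk) size. It shows why "bigger stripe = better" isn't universal: once the stripe is larger than the typical request, each request lands on a single drive and you lose per-request parallelism, though small stripes cost more per-drive seeks for large requests.

```python
# Illustration only: map a RAID 0 request onto drives by chunk index.
def drives_touched(offset_kib, length_kib, stripe_kib, num_drives):
    """Return the set of drive indices a single request spans."""
    first_chunk = offset_kib // stripe_kib
    last_chunk = (offset_kib + length_kib - 1) // stripe_kib
    # In RAID 0, consecutive chunks rotate round-robin across the drives.
    return {chunk % num_drives for chunk in range(first_chunk, last_chunk + 1)}

# A 64 KiB request on a 4-drive array:
print(drives_touched(0, 64, 16, 4))   # 16 KiB stripe: four chunks, all 4 drives
print(drives_touched(0, 64, 64, 4))   # 64 KiB stripe: one chunk, 1 drive
print(drives_touched(0, 64, 256, 4))  # 256 KiB stripe: fits in one chunk, 1 drive
```

So whether a large stripe helps depends heavily on the workload's request size and queue depth, which is probably why the VM-heavy testing below behaves differently from the rules of thumb.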
The workload is VMs, VMs & more VMs... for a Hyper-V R2 setup over an iSCSI SAN.
I did some testing on Windows Server 2008 R2 x64, and the bigger stripe sizes did not perform well. But I'm not sure about the characteristics of a Linux-based product like Open-E.
My concern here is: does Open-E use a particular block size for data transfer, or can it be controlled by the user?
And when working with a hardware RAID controller like Adaptec, does what we configure at the Open-E level matter at all, or is it all left to the controller?
Last but not least, what is the impact when Intel SSDs are used? Some engineers claimed 256 KB is optimal, but in my testing smaller sizes seemed to perform better. Again, that was on Windows Server 2008 R2; I'm not sure about Open-E and Linux, as I haven't had time to test every combination so far.
I'll see if I can post my results and findings here before anyone else shares theirs.