Okay, I'll try to do some testing to find out which is better.
Can To-M or anyone else give me a simpler explanation of the difference between Block and File I/O and what circumstances might call for the use of either of them?
I have used Block I/O for everything so far and did not even know that File I/O existed until reading the various posts here regarding VMware.
So if someone could enlighten me a little that would be awesome.
In Block I/O, only the cache on the devices themselves is used.
File I/O uses both the file system cache and the device cache. On a 32-bit kernel the usable cache is by default limited to about 1 GB; note that there is no such limitation on a 64-bit kernel.
In Block I/O, the storage performance seen at the iSCSI initiator is essentially the raw performance of the RAID. Only a few MB are used for cache.
In File I/O, the storage performance seen at the iSCSI initiator can be much faster than the raw performance of the RAID. All free memory is used as cache, so read and write access to the RAID is optimized.
There are many topics on the forum about the performance difference between the two. Several users have posted performance figures with File I/O, and depending on what you are doing they will tell you that Block I/O can be faster, but only with fewer concurrent tasks.
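To see the caching effect described above for yourself, here is a small sketch (just an illustration on a plain Linux box, not Open-E-specific) contrasting a buffered write, which returns as soon as the data lands in the OS page cache (the File I/O model), with a write that is forced down to the device before returning (closer to the Block I/O model). The `/tmp/cache-demo` path is only a placeholder:

```shell
# Buffered write: data lands in RAM (page cache) and dd returns
# immediately -- the kernel flushes to disk later. This is why File
# I/O can report initiator speeds above the raw RAID speed.
dd if=/dev/zero of=/tmp/cache-demo bs=1M count=256

# Flushed write: conv=fdatasync makes dd wait until the data is on
# the device, so the reported rate reflects real storage speed,
# roughly what you see with Block I/O and its few MB of cache.
dd if=/dev/zero of=/tmp/cache-demo bs=1M count=256 conv=fdatasync

rm /tmp/cache-demo
```

On most systems the first command reports a much higher MB/s figure than the second, because it is mostly measuring RAM rather than the disks.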
What tests besides iometer might you want me to run To-M or anyone else?
I am running a dual quad-core Intel server with 16 GB of RAM for Hyper-V.
My virtual machines are on a 4 x 750 GB drive RAID 10 array running on an Areca 1160 PCI SATA II controller. My Open-E box is a dual-core Xeon with 2 GB of RAM, with two NICs bonded in balance-rr, running in 32-bit mode.
Currently the LUN I am using is running Block I/O.
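Besides iometer, a quick sequential-throughput test with dd from a Linux initiator is one option. This is only a sketch; the mount point `/mnt/iscsi` is a placeholder for wherever the LUN is mounted, and the file size should be well above the target's RAM (2 GB here) if you want to measure past the cache:

```shell
# Sequential write: conv=fdatasync forces the data to the device
# before dd exits, so the rate reflects the storage, not the cache.
dd if=/dev/zero of=/mnt/iscsi/testfile bs=1M count=4096 conv=fdatasync

# Sequential read: drop the local page cache first (needs root) so
# the read really comes from the LUN rather than from initiator RAM.
sync && echo 3 > /proc/sys/vm/drop_caches
dd if=/mnt/iscsi/testfile of=/dev/null bs=1M

rm /mnt/iscsi/testfile
```

Running the write test once against a Block I/O LUN and once against a File I/O LUN should show the caching difference discussed above fairly clearly.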