Okay, I'll try to do some testing to find out which is better.
Can To-M or anyone else give me a simpler explanation of the difference between Block and File I/O and what circumstances might call for the use of either of them?
I have used Block I/O for everything so far and did not even know that File I/O existed until reading the various posts here regarding VMware.
So if someone could enlighten me a little that would be awesome.
With Block I/O, only the cache assigned to the devices is used.
File I/O uses both the file system cache and the device cache. On a 32-bit kernel the device cache is limited by default to about 1 GB; there is no such limitation on a 64-bit kernel.
With Block I/O, the storage performance seen at the iSCSI initiator is essentially the raw performance of the RAID. Only a few MB are used for cache.
With File I/O, the storage performance seen at the iSCSI initiator can be much faster than the raw performance of the RAID. All free memory is used as cache, so read and write access to the RAID is optimized.
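If it helps to picture the effect, here is a quick Python sketch (my own rough illustration, not Open-E code) that times plain buffered writes against writes forced to disk with fsync(); the path and sizes are placeholders, and the fsync() case is only a loose stand-in for an uncached Block I/O path:

```python
#!/usr/bin/env python3
# Rough illustration of why cached (File I/O-like) writes look faster at the
# initiator than uncached (Block I/O-like) writes. Placeholder path and sizes;
# fsync() is only a loose stand-in for bypassing the file system cache.
import os
import time

PATH = "/tmp/cache_demo.bin"   # placeholder test file
CHUNK = 1024 * 1024            # 1 MiB per write
COUNT = 512                    # 512 MiB total

def timed_write_mib_s(force_to_disk: bool) -> float:
    buf = os.urandom(CHUNK)
    start = time.time()
    with open(PATH, "wb") as f:
        for _ in range(COUNT):
            f.write(buf)
        if force_to_disk:
            f.flush()
            os.fsync(f.fileno())   # push the data out of the page cache
    return (CHUNK * COUNT) / (time.time() - start) / 2**20

print(f"cached writes (File I/O-like):  {timed_write_mib_s(False):8.1f} MiB/s")
print(f"synced writes (Block I/O-like): {timed_write_mib_s(True):8.1f} MiB/s")
os.remove(PATH)
```

On most boxes the first figure is several times the second, which is roughly the kind of gap you can see at the initiator as long as the working set fits in RAM.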
There are many topics on the forum about the performance difference between the two. Several users have reported their performance figures with File I/O, and depending on what you are doing, Block I/O can be faster, but only with fewer concurrent tasks.
What tests besides Iometer would you like me to run, To-M, or anyone else?
I am running a dual quad-core Intel server with 16 GB of RAM for Hyper-V.
My virtual machines are on a 4 x 750 GB RAID 10 array on an Areca 1160 PCI SATA II controller. My Open-E box is a dual-core Xeon with 2 GB of RAM and two NICs bonded in balance-rr, running in 32-bit mode.
Currently the LUN I am using is running Block I/O.
I have one request: what is the difference between File I/O and Block I/O in practice? Can you test it using Iometer with a 2K access specification at 50% random?
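In case it helps while waiting for proper Iometer runs, here is a rough Python stand-in for a 2K, 50% random read pattern; the path, file size and operation count are placeholders I made up, and it is no substitute for the real Iometer access spec:

```python
#!/usr/bin/env python3
# Very rough stand-in for an Iometer "2K, 50% random" read pattern, for a
# quick sanity check without Iometer. Path, file size and operation count are
# placeholders; the test file must already exist and be at least FILE_SIZE
# bytes, and client-side caching will inflate the numbers if it fits in RAM.
import os
import random
import time

PATH = "/mnt/iscsi/testfile.bin"   # placeholder file on the iSCSI LUN
FILE_SIZE = 2 * 2**30              # 2 GiB; file must already be this big
IO_SIZE = 2048                     # 2 KB transfers
OPS = 20000
RANDOM_PCT = 0.5                   # 50% random, 50% sequential offsets

random.seed(1)
offset = 0
start = time.time()
with open(PATH, "rb") as f:
    for _ in range(OPS):
        if random.random() < RANDOM_PCT:
            offset = random.randrange(0, FILE_SIZE - IO_SIZE, IO_SIZE)
        else:
            offset = (offset + IO_SIZE) % (FILE_SIZE - IO_SIZE)
        f.seek(offset)
        f.read(IO_SIZE)
elapsed = time.time() - start
print(f"{OPS / elapsed:.0f} IOPS, {OPS * IO_SIZE / elapsed / 2**20:.2f} MiB/s")
```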
Not sure if this adds any value, but after lots of testing with Iometer in both File I/O and Block I/O modes, I came to the conclusion that the figures you get depend heavily on the IO profile used for testing, and on your _estimation_ of how close your real-world situation is to that profile.
I used the VMware forum's Open Perf Test Iometer config file and compared different RAID levels, SATA vs SAS, File vs Block I/O, and memory sizes. An advantage of File I/O comes from caching random writes, which would kill a (say) SATA RAID 5 Block I/O setup, but only when the Iometer test file is not much bigger than the memory size of your box. From memory, the Open Perf Test uses a 4 GB test file, which means that with one client and 4 GB or more of memory, all or most of the test file is cached! Is this real world? I'm not convinced, though maybe it does represent the working set size of a typical ESX workload. I normally do almost all of my Iometer testing with one or more 20 GB test files, and with (say) 4 GB of RAM, small random writes under File I/O mode are actually slower than with Block I/O.
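The arithmetic behind that is easy to sanity-check with a few lines of Python; the cache-hit and disk-miss latencies below are invented placeholders, it is only the ratio of file size to cache size that matters:

```python
#!/usr/bin/env python3
# Back-of-the-envelope check on how much of an Iometer test file fits in the
# storage box's cache. My own arithmetic only; the hit/miss latencies are
# invented placeholders, the point is the ratio of file size to cache size.

def effective_latency_ms(file_gb, cache_gb, hit_ms=0.1, miss_ms=8.0):
    """Blend cache-hit and disk-miss latency for uniform random I/O."""
    hit_ratio = min(1.0, cache_gb / file_gb)
    return hit_ratio * hit_ms + (1.0 - hit_ratio) * miss_ms

# Open Perf Test style: 4 GB test file, 4 GB (or more) of free RAM
print(f"4 GB file,  4 GB cache: ~{effective_latency_ms(4, 4):.1f} ms per I/O")
# My usual case: 20 GB test file, roughly 4 GB of RAM available for cache
print(f"20 GB file, 4 GB cache: ~{effective_latency_ms(20, 4):.1f} ms per I/O")
```

With the 4 GB file everything is a cache hit; with the 20 GB file most of the I/Os go to the spindles, which is why the numbers fall apart.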
My conclusion after all this testing is that _no one_ (that I have seen, anyway) has published a really decent Iometer config that correctly reflects the "typical" workload ESX servers experience in real life. E.g. an ESX server running (say) 8 VMs with some mixture of file & print (large & small, sequential & random IO), an email server (transactional plus general IO), a database server (transactional), an application server, and so on, and then correctly estimating/matching memory size to load (for Open-E's File I/O, or the caching modes of other brands of storage units). Just to recap, the Open Perf Test config does run through different IO profiles, which is great, but it gives no help in determining an overall performance metric for us simple folk.
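For what it is worth, one way to get an overall number out of the per-profile results is a simple weighted blend, something like the Python below; the profiles, weights and example throughputs are all invented for illustration, so substitute your own estimates and measurements:

```python
#!/usr/bin/env python3
# One way to boil several per-profile Iometer results down to a single number:
# weight each profile by how much of your real workload you think it
# represents. Profiles, weights and throughputs are invented placeholders;
# substitute your own measurements.

# (profile name, weight = estimated share of the real workload)
WORKLOAD_MIX = [
    ("file & print, 64K sequential",       0.25),
    ("file & print, 4K random",            0.25),
    ("email, 8K random transactional",     0.30),
    ("database log, 64K sequential write", 0.20),
]

# Example per-profile results in MB/s (placeholders, not real measurements).
results_mbs = {
    "file & print, 64K sequential":       180.0,
    "file & print, 4K random":             22.0,
    "email, 8K random transactional":      15.0,
    "database log, 64K sequential write":  95.0,
}

score = sum(weight * results_mbs[name] for name, weight in WORKLOAD_MIX)
print(f"weighted overall throughput: {score:.1f} MB/s")
```

The hard part, of course, is picking honest weights, which is exactly the "how close is my real workload to this profile" estimation problem again.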
Happy to share findings or contribute to testing efforts in an attempt to get a better real-world result. Sorry if I sound cynical!