We have been following the discussion on BlockIO vs. FileIO for iSCSI volumes under ESX, and have seen that some users get very different Iometer results from ours when comparing the two modes!
We wish to create an Iometer test configuration that simulates real-world ESX loads and is therefore more useful for VM capacity planning. We assume a broad mix of common applications (e.g. some database, email, file and print, middleware, etc.).
Right now we have two identical DSS units (10 x 146 GB SAS drives in RAID 50 in each); one is configured for BlockIO and the other for FileIO. However, we cannot get FileIO to perform anywhere close to BlockIO (e.g. with 4 kB random writes, 3 workers, and 3 iSCSI LUNs/volumes). That suggests our Iometer tests are not very useful; maybe we need to use far more workers and iSCSI volumes?
I would recommend 4K transfers, roughly 2/3 read and 1/3 write, and about 80% random. The biggest thing, though: you have to disable Windows caching to get accurate results. Why? As far as I know, ESX does not cache SAN data, including iSCSI. I am not certain about this, but until I accounted for it, my Iometer results were the opposite of what we were actually experiencing on ESX.
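For reference, an access specification matching that mix (4K, 67% read, 80% random) can be defined directly in an Iometer .icf configuration file rather than clicked together in the GUI each time. The fragment below is only a sketch; the exact field layout and comment lines vary between Iometer versions, so it is safest to save a configuration from your own Iometer build and edit the values there:

```
'ACCESS SPECIFICATIONS =========================================================
'Access specification name,default assignment
	4K; 67% Read; 80% random,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
	4096,100,67,80,0,1,0,0
'END access specifications
```

The fields on the value line correspond to a 4096-byte transfer size applied to 100% of the I/O, 67% reads, and 80% random access, with no delay, a burst length of 1, and default alignment.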
On page one you will find an .icf file for Iometer; with this preconfigured test you can compare your I/O against various other storage systems, which are listed in some Excel sheets.