Thread: Hyper-V I/O Choices

  1. #1
    Join Date
    Jan 2008
    Posts
    82

    cphastings.. you are the MAN!!!

    Thanks for helping us here...

    I have one request: what is the difference between File I/O and Block I/O? Can you test both in IOmeter with a 2K access specification at 50% random?

    Thanks!!!

  2. #2
    Join Date
    Sep 2007
    Posts
    80

    Not sure if this adds any value, but after lots of testing with IOmeter in both File-IO and Block-IO modes, I came to the conclusion that the figures you get depend heavily on the IO profile used for testing, and on our/your _estimation_ of how closely your real-world situation matches that profile.

    I used the iometer config file from the VMware forum's Open Performance Test and compared different RAID levels, SATA vs SAS, File-IO vs Block-IO, and memory sizes. The advantage of File-IO comes from caching random writes, which would otherwise kill a (say) SATA RAID 5 Block-IO setup, but only when the iometer test file is not much bigger than the memory in your box. From memory, the Open Performance Test uses a 4GB test file, which means that with one client and 4GB or more of RAM, most or all of the test file is cached! Is this real world? I'm not convinced, though maybe it does represent the working-set size of a typical ESX workload. I normally do almost all of my iometer testing with one or more 20GB test files, and with (say) 4GB of RAM, small random writes under File-IO mode are actually slower than under Block-IO.
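
    To put some rough numbers on that RAM-vs-test-file-size point, here is a back-of-the-envelope sketch in Python (purely illustrative figures, assuming uniformly random access over the whole test file and that roughly all of the box's RAM is usable as cache):

    Code:
    # Rough estimate of how much of an iometer test file can be served from RAM,
    # assuming uniformly random access and that (roughly) all memory on the
    # storage box is available as cache. The figures are illustrative only.
    def cache_hit_fraction(ram_gb, test_file_gb):
        """Fraction of random accesses expected to hit the cache."""
        return min(1.0, ram_gb / test_file_gb)

    for ram, test_file in [(4, 4), (4, 20), (8, 20)]:
        hit = cache_hit_fraction(ram, test_file)
        print(f"{ram} GB RAM, {test_file} GB test file -> ~{hit:.0%} of random IO cached")

    With the 4GB Open Performance Test file and 4GB of RAM you are at ~100% cached, while a 20GB test file on the same box drops to ~20%, which is why the two test-file sizes tell such different stories.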

    My conclusion after all this testing is that _no one_ (that I have seen, anyway) has published a really decent iometer config that correctly reflects the "typical" workload an ESX server experiences in real life, e.g. an ESX server running (say) 8 VMs covering a mix of file & print (large & small, sequential & random IO), an email server (transactional plus general IO), a database server (transactional), an application server, etc., and then correctly estimating/matching RAM size to that load (for Open-E's File-IO, or the equivalent caching modes on other brands of storage units). To recap, the Open Performance Test config does run through different IO profiles, which is great, but gives no help in determining an over-all performance metric for us simple folk.
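
    One crude way to get an over-all number out of a config like that would be to weight each access specification's result by how much of your real workload you reckon it represents. A minimal sketch in Python (the specification names, weights, and IOPS values are placeholders, not measurements):

    Code:
    # Combine per-access-spec iometer results into one workload-weighted figure.
    # All names and numbers below are made-up placeholders for illustration.
    def weighted_score(results, weights):
        """Weighted average of per-profile results; weights need not sum to 1."""
        total = sum(weights.values())
        return sum(weights[name] * results[name] for name in results) / total

    example = weighted_score(
        results={"4K 75% read random": 3000.0, "64K sequential read": 800.0},
        weights={"4K 75% read random": 0.7, "64K sequential read": 0.3},
    )
    print(f"workload-weighted IOPS: {example:.0f}")

    The hard part, of course, is still coming up with weights that honestly reflect your own mix of VMs.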

    Happy to share findings or contribute to testing efforts in an attempt to get a better real-world result. Sorry if I sound cynical!
