
Thread: Hyper-V I/O Choices

  1. #1

    Default Hyper-V I/O Choices

    Hi Everyone,

    I am using Microsoft's Hyper-V Virtualization software with Open-E and it seems to be working great.

    My question is: would File I/O work even better than the Block I/O I am using right now?

    I know that VMware works best with File I/O, but I haven't seen any posts here regarding Hyper-V.

    Any help or guidance would be great!

    Thanks!

  2. #2
    Join Date
    Jan 2008
    Posts
    82

    Default

    Hi cphastings,

    What you are saying is interesting to me, maybe not to others. I would love to know the answer to your question.

    But why don't you create a volume in Block I/O mode and test the performance with IOmeter, then delete it, flip it to File I/O, and test it again?
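
    In case a quick-and-dirty number is enough before IOmeter is set up, something like the rough sketch below would do (Python; the drive letter and sizes are my own assumptions - point it at the volume mapped from the Open-E target and run it once with the LUN in Block I/O mode and once in File I/O mode):

    [code]
    import os, time

    TARGET = r"E:\iotest.bin"        # assumption: a path on the iSCSI-mapped volume
    CHUNK = b"\xAA" * (1024 * 1024)  # 1 MiB per write
    TOTAL_MB = 512

    # Sequential write throughput.
    start = time.time()
    with open(TARGET, "wb") as f:
        for _ in range(TOTAL_MB):
            f.write(CHUNK)
        f.flush()
        os.fsync(f.fileno())         # make sure the data really reached the target
    write_secs = time.time() - start

    # Sequential read throughput (part of this may be served from the client's own cache).
    start = time.time()
    with open(TARGET, "rb") as f:
        while f.read(1024 * 1024):
            pass
    read_secs = time.time() - start

    os.remove(TARGET)
    print("write: %.1f MB/s, read: %.1f MB/s" % (TOTAL_MB / write_secs, TOTAL_MB / read_secs))
    [/code]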

    Can you also let us know how you configured it? Somebody asked about that once!!!

  3. #3

    Default

    Thanks for the update "cphastings"!!

    We have not done any tests with Hyper-V so any info would be GREAT for our guys!
    All the best,

    Todd Maxwell


    Follow the red "E"
    Facebook | Twitter | YouTube

  4. #4

    Default

    Okay, I'll try to do some testing to find out which is better.

    Can To-M or anyone else give me a simple explanation of the difference between Block and File I/O, and of the circumstances that call for using one or the other?

    I have used Block I/O for everything so far and did not even know that File I/O existed until reading the various posts here regarding VMWare.

    So if someone could enlighten me a little that would be awesome.

    Thanks!

  5. #5

    Default

    In Block I/O, only the device cache is used.

    File I/O uses the file system cache as well as the device cache. The default maximum device cache on the 32-bit kernel is currently limited to about 1 GB; note that there is no such limitation on the 64-bit kernel.

    In Block I/O, the storage performance at the iSCSI initiator is essentially the raw performance of the RAID; only a few MB are used for cache.

    In File I/O, the storage performance at the iSCSI initiator can be much faster than the raw performance of the RAID: all free memory is used as cache, so read and write access to the RAID is optimized.
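
    To get a feel for what that cache does, here is a minimal sketch (not Open-E specific; the file name and sizes are made up) comparing buffered writes, which the page cache absorbs, with writes forced to the device via fsync:

    [code]
    import os, time

    TEST_FILE = "cache_demo.bin"   # assumption: any file on the volume under test
    BLOCK = b"\x00" * 4096         # 4 KiB writes
    COUNT = 2000                   # ~8 MiB total, kept small because the synced run is slow

    def timed_write(sync_every_block):
        start = time.time()
        with open(TEST_FILE, "wb") as f:
            for _ in range(COUNT):
                f.write(BLOCK)
                if sync_every_block:
                    f.flush()
                    os.fsync(f.fileno())   # force the write through the cache to the device
        os.remove(TEST_FILE)
        return time.time() - start

    print("buffered (cache absorbs the writes): %.2f s" % timed_write(False))
    print("fsync per block (device speed):      %.2f s" % timed_write(True))
    [/code]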

    There are many threads on the forum about the performance difference between the two. Several users have reported performance figures with File I/O, and depending on what you are doing, Block I/O can be faster, but only with fewer concurrent tasks.

    Read Raudi's comment at the end of his post:

    http://forum.open-e.com/showthread.php?t=607

    "BlockIO might be faster then FileIO, but only when access the RAID with one initiator and one task."
    All the best,

    Todd Maxwell


    Follow the red "E"
    Facebook | Twitter | YouTube

  6. #6

    Default

    What tests besides IOmeter would you like me to run, To-M (or anyone else)?

    I am running a dual quad-core Intel server with 16 GB of RAM for Hyper-V.

    My virtual machines are on a 4 x 750 GB RAID 10 array on an Areca 1160 PCI SATA II controller. My Open-E box is a dual-core Xeon with 2 GB of RAM and two NICs bonded in balance-rr, running in 32-bit mode.

    Currently the LUN I am using is running Block I/O.

  7. #7
    Join Date
    Jan 2008
    Posts
    82

    Default

    cphastings... you are the MAN!!!

    Thanks for helping us out here.

    I have one request: what difference do you see between File I/O and Block I/O? Can you test it with IOmeter using a 2K access specification at 50% random?
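
    If IOmeter is not handy on the test box, a rough stand-in for that access specification could look like the sketch below (Python; the file name, file size and op count are my own assumptions - the test file just has to already exist on the iSCSI volume and be reasonably large):

    [code]
    import os, random, time

    TEST_FILE = r"E:\standin.bin"   # assumption: an existing, large file on the iSCSI volume
    IO_SIZE = 2048                  # 2 KiB per I/O, as in the requested access specification
    OPS = 20000

    size = os.path.getsize(TEST_FILE)
    pos = 0
    start = time.time()
    with open(TEST_FILE, "rb", buffering=0) as f:   # unbuffered, so Python does not read ahead
        for _ in range(OPS):
            if random.random() < 0.5:               # 50% of the I/Os jump to a random offset
                pos = random.randrange(0, size - IO_SIZE)
            f.seek(pos)
            f.read(IO_SIZE)
            pos += IO_SIZE                          # the other 50% continue sequentially
            if pos >= size - IO_SIZE:
                pos = 0
    elapsed = time.time() - start
    print("%.0f IOPS at %d-byte reads, 50%% random" % (OPS / elapsed, IO_SIZE))
    [/code]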

    Thanks!!!

  8. #8
    Join Date
    Sep 2007
    Posts
    80

    Default

    Not sure if this adds any value, but after a lot of testing with IOmeter in both File I/O and Block I/O modes, I came to the conclusion that the figures you get depend heavily on the I/O profile used for testing, and on our/your _estimate_ of how close your real-world situation is to that profile.

    I used the VMware forum's Open Perf Test IOmeter config file and compared different RAID levels, SATA vs SAS, File vs Block I/O, and memory sizes. An advantage of File I/O comes from caching random writes, which would kill a (say) SATA RAID 5 Block I/O setup, but only when the IOmeter test file is not much bigger than the memory in your box. From memory, the Open Perf Test uses a 4 GB test file, which means that with one client and 4 GB or more of RAM, most or all of the test file is cached! Is this real-world? I'm not convinced, though maybe it does represent the working-set size of a typical ESX workload. I normally do almost all of my IOmeter testing with one or more 20 GB test files, and with (say) 4 GB of RAM, small random writes under File I/O mode are actually slower than with Block I/O.
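
    For anyone who wants to see that effect without IOmeter, here is a rough sketch of the same idea (Python; the drive letter, file sizes and op count are my own assumptions - the 2 GB file is meant to fit in the storage box's cache while the 20 GB one is not):

    [code]
    import os, random, time

    IO_SIZE = 4096
    OPS = 5000
    GB = 1024 ** 3

    def random_write_iops(path, file_size):
        # Pre-create a sparse file of the requested size, then do random 4 KiB writes into it.
        with open(path, "wb") as f:
            f.truncate(file_size)
        start = time.time()
        with open(path, "r+b") as f:
            for _ in range(OPS):
                f.seek(random.randrange(0, file_size - IO_SIZE))
                f.write(b"\xFF" * IO_SIZE)
            f.flush()
            os.fsync(f.fileno())
        elapsed = time.time() - start
        os.remove(path)
        return OPS / elapsed

    # A 2 GB working set should fit in the cache; a 20 GB one should not.
    print("2 GB file : %.0f IOPS" % random_write_iops(r"E:\small.bin", 2 * GB))
    print("20 GB file: %.0f IOPS" % random_write_iops(r"E:\large.bin", 20 * GB))
    [/code]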

    My conclusion after all this testing is that _no one_ (that I have seen, anyway) has published a really decent IOmeter config that correctly reflects the "typical" workloads ESX servers experience in real life, e.g. an ESX server running (say) 8 VMs with some mixture of file & print (large & small, sequential & random I/O), an email server (transactional plus general I/O), a database server (transactional), an application server, etc., and then correctly estimating/matching memory size to the load (for Open-E's File I/O, or the caching modes of other brands of storage units). Just to recap, the Open Perf Test config does run through different I/O profiles, which is great, but it gives no help in determining an overall performance metric for us simple folk.

    Happy to share findings or contribute to testing efforts in an attempt to get a better real-world result. Sorry if I sound cynical!
