
Thread: Hyper-V I/O Choices

  1. #1

    Default Hyper-V I/O Choices

    Hi Everyone,

    I am using Microsoft's Hyper-V Virtualization software with Open-E and it seems to be working great.

    My question is: would File I/O work even better than Block I/O is working for me right now?

    I know that VMWare works best with File I/O but I haven't seen any posts regarding Hyper-V.

    Any help or guidance would be great!

    Thanks!

  2. #2
    Join Date
    Jan 2008
    Posts
    82

    Default

    Hi cphastings,

    What you are saying is interesting to me, even if maybe not to others. I would love to know the answer to your question.

    But why don't you create a volume in Block I/O mode and test its performance with IOmeter? Then delete it, switch to File I/O, and test it again.
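
    If scripting the comparison is easier than clicking through IOmeter, a tool like fio can run an equivalent A/B test from the initiator side. This is only a sketch under my own assumptions -- fio itself, the device path, and the run time are not anything the poster mentioned -- so adjust it to your setup:

```ini
; compare.fio -- run the identical workload against the LUN once in
; Block I/O mode and once in File I/O mode, then compare the results
[global]
; windowsaio on Windows initiators, libaio on Linux
ioengine=windowsaio
; direct=1 bypasses the initiator-side cache so you measure the target
direct=1
bs=64k
rw=randrw
runtime=60
time_based

[lun-test]
; placeholder device -- point this at your test volume
filename=\\.\PhysicalDrive1
iodepth=8
```

    Running the same job file against both volume types keeps the workload constant, so any difference you see comes from the target's I/O mode rather than the benchmark settings.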

    Can you let us know how you configured it? Somebody asked that once before, too!

  3. #3

    Default

    Thanks for the update "cphastings"!!

    We have not done any tests with Hyper-V so any info would be GREAT for our guys!
    All the best,

    Todd Maxwell


    Follow the red "E"
    Facebook | Twitter | YouTube

  4. #4

    Default

    Okay, I'll try to do some testing to find out which is better.

    Can To-M or anyone else give me a simpler explanation of the difference between Block and File I/O and what circumstances might call for the use of either of them?

    I have used Block I/O for everything so far and did not even know that File I/O existed until reading the various posts here regarding VMWare.

    So if someone could enlighten me a little that would be awesome.

    Thanks!

  5. #5

    Default

    In Block I/O, only the cache specified for the devices is used.

    File I/O uses both the file system cache and the device cache. With a 32-bit kernel the device cache is currently limited to about 1 GB; note that there is no such limitation with a 64-bit kernel.

    In Block I/O, the storage performance seen at the iSCSI initiator is exactly the performance of the RAID; only a few MB are used for cache.

    In File I/O, the storage performance seen at the iSCSI initiator can be much faster than the raw performance of the RAID: all free memory is used for cache, so read and write access to the RAID is optimized.
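
    The effect described above is essentially the operating system's page cache at work. Here is a rough single-machine illustration (local files and Python are my own stand-in, not the DSS internals): writes that go through the cache return almost immediately, while forcing every write to reach the disk -- roughly what a path without file-system caching must do -- takes the full device time.

```python
import os
import tempfile
import time

def write_mb(path, sync_each_write):
    """Write 32 x 1 MiB blocks. With sync_each_write=True, fsync after
    every write so data must hit the disk instead of the page cache."""
    block = b"x" * (1024 * 1024)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(32):
            f.write(block)
            if sync_each_write:
                f.flush()
                os.fsync(f.fileno())  # force the block out of the cache
    return time.perf_counter() - start

with tempfile.TemporaryDirectory() as d:
    cached = write_mb(os.path.join(d, "cached.bin"), sync_each_write=False)
    synced = write_mb(os.path.join(d, "synced.bin"), sync_each_write=True)
    print(f"through page cache: {cached:.3f}s, fsync per write: {synced:.3f}s")
```

    On most systems the cached run finishes far sooner because the kernel acknowledges the writes from RAM and flushes them to disk later, which is the same reason a File I/O target can look faster than its RAID.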

    There are many topics on the forum about the performance difference between the two. Several users have reported performance figures with File I/O, and, depending on the workload, Block I/O can be faster when there are fewer concurrent tasks.

    Read Raudi's comment at the end of his post:

    http://forum.open-e.com/showthread.php?t=607

    "BlockIO might be faster then FileIO, but only when access the RAID with one initiator and one task."
    All the best,

    Todd Maxwell



  6. #6

    Default

    What tests besides IOmeter would you like me to run, To-M (or anyone else)?

    I am running a dual quad-core Intel server with 16 GB of RAM for Hyper-V.

    My virtual machines are on a 4 x 750 GB drive RAID 10 array on an Areca 1160 PCI SATA II controller. My Open-E box is a dual-core Xeon with 2 GB of RAM, with two NICs bonded in balance-rr, running in 32-bit mode.

    Currently the LUN I am using is running Block I/O.
