
Thread: VMWARE iSCSI vs NAS (NFS)

  1. #1

    Default VMWARE iSCSI vs NAS (NFS)

    Hi everyone,

    I'm trying to figure out the pros and cons of using iSCSI vs. NAS/NFS for ESX .vmdk storage.

    NFS is, of course, file-based.

    What about iSCSI? Normally you would call it block-based, but Open-E recommends setting the tuning options for VM volumes to FILE I/O mode instead of BLOCK I/O. Why is that?

    The performance gap between NFS and iSCSI is huge. Measured with IOMeter against the same physical storage target, from a VM running on ESX, sequential reads in 1 MB chunks reach 80 MB/s on iSCSI but only 10 MB/s on NFS volumes.
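
    For reference, the sequential number can be sanity-checked without IOMeter using a small script like the one below; the test file path is just an example, and the file needs to be much larger than the VM's RAM so caching doesn't inflate the result.

        import os
        import time

        # Example path to a large test file on the datastore-backed disk;
        # create it beforehand (e.g. with dd) and make it bigger than RAM.
        TEST_FILE = "/mnt/test/testfile.bin"
        CHUNK = 1024 * 1024  # 1 MB, same chunk size as the IOMeter run

        def sequential_read_mbps(path):
            size = os.path.getsize(path)
            start = time.time()
            with open(path, "rb", buffering=0) as f:
                while f.read(CHUNK):
                    pass
            return size / (time.time() - start) / (1024 * 1024)

        print("sequential read: %.1f MB/s" % sequential_read_mbps(TEST_FILE))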

    Has anyone experienced problems with NFS as storage for ESX?


    /Frank

  2. #2

    Thumbs up

    Hi,

    We are using DSS with NFS on several blades (2x quad-core CPUs, 8 GB RAM) within a heartbeat cluster across two IBM BladeCenters. We aren't using ESX but VMWare Server on Debian 4.0, so we can customize it better to our needs.

    When we implemented the cluster we ran single- and multi-threaded I/O tests from one blade and from multiple blades at the same time, and compared NFS vs. iSCSI (on an Oracle Cluster File System, OCFS2). iSCSI was multipathed with 2x 1 GBit Ethernet connections on each blade and 8x 1 GBit Ethernet ports on the storage server (24-port Areca controller with 24 WD Raptor 150 GB 10,000 RPM hard disks, 3x Intel quad-port GBit server adapters).

    The single-threaded and multi-threaded performance from one blade was very good on iSCSI (r/w single = 210 MB/s / 180 MB/s); on NFS it was r/w = 110 MB/s / 105 MB/s, because a single filesystem could only use one network path (no multipathing possible). But using iSCSI with OCFS2 from multiple blades at the same time on DSS was very poor: about r/w = 40 MB/s / 30 MB/s with two blades running single-threaded tests, and below 10 MB/s (read and write) with more than three blades operating on one OCFS2 volume via iSCSI on DSS.

    At that point NFS was clearly faster. We mounted the NFS filesystems on each blade through different VLANs (four VLANs in our config) to separate the traffic from the blades to the storage server; the L3 switches have 3 GBit trunks between them. That way we got an aggregated bandwidth of over 200 MB/s with three blades operating at the same time. The DSS storage (with an Areca 1280 and 2 GB of cache) was tested at r/w = 650 MB/s / 800 MB/s, so we think the aggregated performance will increase further with more blades operating on the storage server (300-400 MB/s should be no problem).
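
    For anyone who wants to repeat this, the multi-threaded read tests we ran can be approximated with a small script along these lines; the mount points and file names are only placeholders for our per-VLAN NFS mounts, not our real paths, and each test file should be much larger than the blade's RAM so the client cache doesn't skew the numbers.

        import threading
        import time

        # One test file per NFS mount (one mount per VLAN in our setup);
        # these paths are placeholders, not our real configuration.
        MOUNTS = ["/mnt/nfs-vlan1/test.bin", "/mnt/nfs-vlan2/test.bin"]
        CHUNK = 1024 * 1024  # read in 1 MB chunks

        results = {}  # path -> bytes read

        def read_file(path):
            total = 0
            with open(path, "rb", buffering=0) as f:
                chunk = f.read(CHUNK)
                while chunk:
                    total += len(chunk)
                    chunk = f.read(CHUNK)
            results[path] = total

        threads = [threading.Thread(target=read_file, args=(p,)) for p in MOUNTS]
        start = time.time()
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        elapsed = time.time() - start

        total_bytes = sum(results.values())
        print("aggregate read: %.1f MB/s over %d mounts"
              % (total_bytes / elapsed / (1024 * 1024), len(MOUNTS)))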

    regards
    Michael

  3. #3

    Default

    Quote Originally Posted by streamapps
    ...
    What about iSCSI? Normally you would call it block-based, but Open-E recommends setting the tuning options for VM volumes to FILE I/O mode instead of BLOCK I/O. Why is that?
    VMWare iSCSI performance is poor; a number of sources on the web say as much.

    Open-e recommends File I/O because VMWare recommends File I/O...

    Quote Originally Posted by streamapps
    The performance gap between NFS and iSCSI is huge. Measured with IOMeter against the same physical storage target, from a VM running on ESX, sequential reads in 1 MB chunks reach 80 MB/s on iSCSI but only 10 MB/s on NFS volumes.
    Measuring the performance of sequential operations tells you little about real-world behaviour.

    VMWare will not be accessing the files sequentially. You need to test using random operations, reads and writes, for a better approximation of how VMWare will be accessing the drives.
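
    If IOMeter is giving you grief, a quick random-read script like the one below gives a rougher but more realistic picture; the file path, block size and request count are only examples, and the test file should again be much larger than the guest's RAM.

        import os
        import random
        import time

        TEST_FILE = "/mnt/test/testfile.bin"  # example path to a large test file
        BLOCK = 64 * 1024                     # 64 KB random reads
        REQUESTS = 2000                       # number of random reads to issue

        size = os.path.getsize(TEST_FILE)
        offsets = [random.randrange(0, size - BLOCK) for _ in range(REQUESTS)]

        start = time.time()
        with open(TEST_FILE, "rb", buffering=0) as f:
            for off in offsets:
                f.seek(off)
                f.read(BLOCK)
        elapsed = time.time() - start

        print("random read: %.1f MB/s, %.0f IOPS"
              % (REQUESTS * BLOCK / elapsed / (1024 * 1024), REQUESTS / elapsed))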

    Sean

    P.S. Did you also know that, apparently, IOMeter generates false results on some Linux platforms?
