
Thread: 10GB Performance Disaster

  1. #11 (Join Date: Apr 2009, Posts: 62)

    Block I/O or File I/O?

    Did you apply those tuning options and retest?

  2. #12

    Quote Originally Posted by dweberwr
    My DSS box will be here in a few days (finally!). I am going the 10 GbE route myself on my DSS box and found this thread quite interesting. What I don't understand is why you would NOT enable jumbo frames. The investment in 10 GbE gear is still a significant step up from 1 GbE, so why would you not want to eke out every ounce of performance that 10 GbE has to offer, especially when all it takes is a few keystrokes and mouse clicks?
    Aha well, my DSS can't pass enough data to saturate the link, even with non-jumbo frames.

    The other issue is that I have a tap into the iSCSI VLAN, and I'd rather not have to set jumbo frames on every management box that may need to be on there if I don't have to.

    There's also the ease of migration... with little downtime.

    -D
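
    One way to check whether jumbo frames actually survive end-to-end across the iSCSI VLAN (NICs and switches included) is a do-not-fragment ping sized just under the jumbo MTU. A rough sketch, assuming a Linux host; the target address below is only a placeholder for a DSS portal IP:

        # Do-not-fragment pings sized for a 9000-byte MTU
        # (9000 minus 20 bytes of IP header and 8 bytes of ICMP header).
        # Assumes Linux ping; 192.168.10.20 is a placeholder address.
        import subprocess

        TARGET = "192.168.10.20"   # placeholder: DSS iSCSI portal IP
        PAYLOAD = 9000 - 28        # jumbo MTU minus IP + ICMP headers

        result = subprocess.run(
            ["ping", "-M", "do", "-c", "3", "-s", str(PAYLOAD), TARGET],
            capture_output=True, text=True,
        )
        if result.returncode == 0:
            print("jumbo frames pass end-to-end")
        else:
            print("fragmentation needed or host unreachable -- check MTU on NICs and switches")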

  3. #13

    Quote Originally Posted by 1parkplace
    Block I/O or File I/O?

    Did you apply those tuning options and retest?
    File I/O, of course.

    Yup, those tuning options killed the iSCSI connection. I set them all back and was able to reconnect.

    -D

  4. #14 (Join Date: Apr 2009, Posts: 62)

    Interesting... those are the settings I use and everything works for me. Not sure why they wouldn't work for you. I was originally given those settings by an Open-E tech.

    I switched to Block I/O recently, which fixed a lot of problems for me. This was on advice from Open-E; they said they certified DSS with VMware using Block I/O, not File I/O.

    I deleted all my volumes and recreated them as Block I/O; VMware then saw all my LUNs and performance was still on par.

  5. #15

    Quote Originally Posted by 1parkplace
    Interesting... those are the settings I use and everything works for me. Not sure why they wouldn't work for you. I was originally given those settings by an Open-E tech.

    I switched to Block I/O recently, which fixed a lot of problems for me. This was on advice from Open-E; they said they certified DSS with VMware using Block I/O, not File I/O.

    I deleted all my volumes and recreated them as Block I/O; VMware then saw all my LUNs and performance was still on par.
    Block I/O is better. Fewer caching layers (I think), lower latency. In some circumstances, that means cached I/O can be a little slower, but if you have a fast back-end (a decent RAID card), it works fine. In fact, in some cases, File I/O just gets in the way.
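
    Loosely analogous at the OS level: opening with O_DIRECT skips the page cache, which is roughly the extra caching layer that File I/O adds and Block I/O avoids. A minimal sketch of that distinction only, assuming Linux and a filesystem that supports O_DIRECT (e.g. ext4 or XFS); scratch.bin is a placeholder file, not anything Open-E specific:

        # Cached vs. direct write paths (illustration only, not Open-E code).
        # O_DIRECT bypasses the page cache, so the write goes straight down the
        # storage stack; buffers and sizes must be aligned to the block size.
        import mmap
        import os

        BLOCK = 4096  # placeholder block size

        # Buffered path: the write lands in the page cache first.
        fd_cached = os.open("scratch.bin", os.O_WRONLY | os.O_CREAT, 0o644)
        os.write(fd_cached, b"\0" * BLOCK)
        os.close(fd_cached)

        # Direct path: page cache is skipped; an anonymous mmap gives a
        # page-aligned buffer, which O_DIRECT requires.
        fd_direct = os.open("scratch.bin", os.O_WRONLY | os.O_DIRECT)
        buf = mmap.mmap(-1, BLOCK)
        buf.write(b"\0" * BLOCK)
        os.write(fd_direct, buf)
        os.close(fd_direct)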

  6. #16 (Join Date: Apr 2009, Posts: 62)

    Quote Originally Posted by Robotbeat
    Block I/O is better. Fewer caching layers (I think), lower latency. In some circumstances, that means cached I/O can be a little slower, but if you have a fast back-end (a decent RAID card), it works fine. In fact, in some cases, File I/O just gets in the way.
    I agree. Believe it or not... my machines have a great hardware backend and 32GB of RAM each, but File I/O just couldn't cut it.

    I have a simple setup with 2 ESX servers and 2 DSS servers; each ESX host targets both DSS boxes and has an even split of VMFS volumes on each.

    After switching to Block I/O, Storage vMotion, vMotion, etc. all sped up greatly (10x faster in some cases), and storage moves barely increased the "High Disk Latency" counter on the ESX servers.

    Overall I am much happier using Block I/O.

  7. #17

    Hey 1parkplace,

    Before you changed to Block I/O, did you get any cmd_abort (1143) errors?
    Did going to Block I/O fix this?

  8. #18 (Join Date: Apr 2009, Posts: 62)

    Quote Originally Posted by symm
    Hey 1parkplace,

    Before you changed to Block I/O, did you get any cmd_abort (1143) errors?
    Did going to Block I/O fix this?
    Not 100% sure. I know I used to get those before DSS v6 and also with the NetXen 10 Gb cards.

    The upgrade to v6 and the Intel 10 Gb cards happened at the same time, so I'm not sure which fixed my problems.

    I would assume it would probably remedy that situation though, since it takes caching in RAM out of the equation. As long as you have the spindles to keep up with the data load and your RAID controller is solid, you shouldn't have those problems.

  9. #19

    Quote Originally Posted by 1parkplace
    Not 100% sure. I know I used to get those before DSS v6 and also with the NetXen 10 Gb cards.

    The upgrade to v6 and the Intel 10 Gb cards happened at the same time, so I'm not sure which fixed my problems.

    I would assume it would probably remedy that situation though, since it takes caching in RAM out of the equation. As long as you have the spindles to keep up with the data load and your RAID controller is solid, you shouldn't have those problems.

    Thanks for the answer!
