
Thread: Slow performance (or unexpected rates) with 10Gb and SSD storage arrays

  1. #1


    We have a couple of the following clusters setup:

    Open-E DSS v7 (latest build) in Active/Passive iSCSI for VMware
    Intel R2224GZ4GC
    1 cluster pair uses RMS25CB080 (SAS 6Gb) controllers with 16x 480GB SSDs in each server
    1 cluster pair uses RS3DC080 (SAS 12Gb) controllers with 12x 1TB SSDs in each server
    All servers have 10Gb NICs (Intel X540-based)
    Sync network is a direct-attach connection between the 10Gb NICs

    We can't seem to EVER get over 200 Mbit/s of traffic on the sync network. We've tuned the DRBD settings and tested the raw data rates, and we've set up these same servers with Windows and other operating systems and seen ridiculously high transfer rates. But when we use Open-E, the rates are always slow.
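
    To rule out the link itself, one option (assuming you can get shell access on both nodes, e.g. from a rescue/live Linux environment, since DSS is an appliance) is a raw iperf3 run across the sync NICs; the addresses below are placeholders:

        # on the secondary node, bind the server to its sync-NIC address
        iperf3 -s -B 10.0.0.2

        # on the primary node: 4 parallel streams for 30 seconds
        iperf3 -c 10.0.0.2 -P 4 -t 30

    If that gets close to 10 Gbit/s, the NICs and cabling are fine and the bottleneck is further up the stack (DRBD/iSCSI settings) rather than the hardware.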

    We're about to upgrade another cluster pair from DSS v6 to DSS v7 and move it to 16x 1TB SSDs per server, and I'm worried that we'll run into the same issues.

    Any ideas on what to look for, or do we need to open a support case? We discussed the setup with Intel engineers, and they said we should be getting much better rates than we've been seeing. It's far faster than spindles, but nowhere near where it should be.

    Oh, and I just noticed another post describing something similar:
    http://forum.open-e.com/showthread.p...12-Build-10529

    Thanks!
    MJP Technologies - Intel Technology Provider Platinum Member

  2. #2


    Did you run the System benchmark tests for Reads and Writes from the Console in the Hardware configuration? I would like to see what you're getting from the system. It would also help to see the logs from both systems, so we can check whether the NICs are dropping or corrupting packets on the RX or TX side. Send them in so we can review them.
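
    For anyone with shell access, a quick way to spot RX/TX drops or errors on the sync interfaces (eth2 is just a placeholder name):

        # kernel-level packet/error/drop counters for the interface
        ip -s link show eth2

        # driver-level statistics on the Intel X540 ports
        ethtool -S eth2 | grep -iE 'drop|err|miss'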
    All the best,

    Todd Maxwell



  3. #3


    Quote Originally Posted by To-M:
    Did you run the System benchmark tests for the Read and Writes from the Console in Hardware configuration? ..... Send them in so we can review them.
    I didn't run a Write test before getting the volumes in place, but the Read test was pulling about 400-500 MB/s before I moved the VM load onto it. Now the Read test shows about 250 MB/s.
    (This is only for the newer pair; I haven't checked the older one.)

    Should I open a support case for this to submit the logs?
    MJP Technologies - Intel Technology Provider Platinum Member

  4. #4


    Yes, send in the log files so we can review them and see whether write-back is enabled on the controller.
    All the best,

    Todd Maxwell



  5. #5


    Write-back won't be enabled on the controllers - they're set up for LSI FastPath, which requires:
    Write Policy: Write Through
    IO Policy: Direct IO
    Read Policy: No Read Ahead
    Disk Cache Policy: Disabled

    But even with caching enabled, it's the same... (though we lose performance, since enabling caching disables FastPath)
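
    For reference, a sketch of how those policies can be verified or re-applied from a maintenance shell with MegaCli, assuming an LSI controller at adapter 0 and logical drive 0 (adjust -L/-a for your layout):

        # show the current cache and IO policies for the logical drive
        MegaCli -LDInfo -L0 -a0

        # enforce the FastPath-required policies, one property per call
        MegaCli -LDSetProp WT -L0 -a0            # Write Policy: Write Through
        MegaCli -LDSetProp Direct -L0 -a0        # IO Policy: Direct IO
        MegaCli -LDSetProp NORA -L0 -a0          # Read Policy: No Read Ahead
        MegaCli -LDSetProp -DisDskCache -L0 -a0  # Disk Cache Policy: Disabled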
    MJP Technologies - Intel Technology Provider Platinum Member

  6. #6

    We are seeing a similar issue: when running an Iometer test we get good write performance without replication, but with replication we lose about 50% of that performance. When we test with a Windows product, replication almost maxes out the 10Gb link used for replication.

    We have tried different RAID controllers from LSI and Adaptec, all with SSD cache.

    We have tried the DRBD tuning and jumbo frames, but there was no improvement.
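
    If jumbo frames are in play, it may be worth confirming they actually pass end-to-end on the sync link; a quick check from a Linux shell (9000-byte MTU assumed, peer address is a placeholder):

        # 8972 = 9000 minus 28 bytes of IP/ICMP headers; -M do forbids fragmentation
        ping -M do -s 8972 -c 4 10.0.0.2

    If this fails while a normal ping works, the MTU is not consistent on both ends of the link.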

    Any suggestions would be appreciated, as we want to put this into production but can't until this is resolved.
    Last edited by gharding; 12-22-2014 at 07:11 PM.

  7. #7


    I too would be interested in a fix for this. I have noticed some speed decreases when introducing replication, but have not gotten as far as really benchmarking it.
    Thanks
    Derek

  8. #8


    Send in the log files to support and provide as much detail as you can.
    All the best,

    Todd Maxwell



  9. #9


    I've finally had the chance - I created tickets for 2 different clusters that have speed and performance issues in similar setups.
    MJP Technologies - Intel Technology Provider Platinum Member

  10. #10

    DRBD tuning for 10GbE

    I went off of this for my DRBD tuning options:
    http://kb.open-e.com/Tuning-recommen...1GbE_1603.html

    Just wondering if there are better tweaks for direct-connected 10GbE?
    Thanks
    Derek
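
    For comparison, these are the kinds of values the 10GbE tuning guides for DRBD 8.3-style configurations typically suggest; treat it as a sketch to diff against the KB article linked above rather than Open-E's supported settings (the resource name and numbers are placeholders, and DSS normally manages this configuration itself):

        resource r0 {
          syncer {
            rate        1000M;     # allow resync traffic up to roughly 10GbE line rate
            al-extents  3389;      # larger activity log for SSD-backed volumes
          }
          net {
            max-buffers      8000;
            max-epoch-size   8000;
            sndbuf-size      512k;
            unplug-watermark 16;
          }
          disk {
            no-disk-barrier;       # only reasonable with write-through or a BBU/CacheVault-protected cache
            no-disk-flushes;
          }
        }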
