
Thread: DSS V7 up50 - slow cluster performance with 10GbE


  1. #1
    Join Date
    Oct 2010
    Location
    GA
    Posts
    935

    Default

    What are your RAID cache settings?
    Have you tweaked the initiator advanced properties to match DSS V7?
    ===
    QueuedCommands=8
    DataDigest=None
    MaxOutstandingR2T=32
    InitialR2T=No
    FirstBurstLength=65536
    MaxRecvDataSegmentLength=1048576
    HeaderDigest=None
    MaxXmitDataSegmentLength=1048576
    ImmediateData=Yes
    MaxBurstLength=1048576
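    The names above follow the Windows iSCSI initiator's advanced settings. For a Linux initiator, the closest open-iscsi equivalents go in iscsid.conf; the mapping is not one-to-one (parameter names below are open-iscsi's own, and the values simply mirror the thread's, not an official Open-E recommendation):

    ```
    # /etc/iscsi/iscsid.conf -- approximate open-iscsi equivalents
    node.session.iscsi.InitialR2T = No
    node.session.iscsi.ImmediateData = Yes
    node.session.iscsi.FirstBurstLength = 65536
    node.session.iscsi.MaxBurstLength = 1048576
    node.conn[0].iscsi.MaxRecvDataSegmentLength = 1048576
    node.conn[0].iscsi.HeaderDigest = None
    # roughly corresponds to QueuedCommands
    node.session.queue_depth = 32
    ```

    Settings take effect on the next login, so log the session out and back in (or restart iscsid) after editing.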

  2. #2

    Default

    RAID: HP P420/2GB BBWC, 2x 200GB SSD SmartCache

    After tweaking the initiator it's better, but it could be more, compared to the degraded cluster.
    "1073741824 bytes (1.1 GB) copied, 4.77698 s, 225 MB/s"

  3. #3
    Join Date
    Oct 2010
    Location
    GA
    Posts
    935

    Default

    You may want to check how caching is set on the RAID card. If I recall correctly, you can set different percentages for read and write on the card.

  4. #4

    Default

    Yes, you're right: 70% read / 30% write.
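    On an HP Smart Array controller like the P420, the read/write cache split can be inspected and changed with HP's CLI (hpssacli on that generation; the slot number below is an assumption, check `hpssacli ctrl all show` first):

    ```
    # Show current cache ratio and BBWC status for the controller in slot 0
    hpssacli ctrl slot=0 show detail

    # Example: shift the split toward writes, 25% read / 75% write
    # (write caching only engages while the battery/flash backup is healthy)
    hpssacli ctrl slot=0 modify cacheratio=25/75
    ```

    For iSCSI workloads that are write-heavy on the target side, favoring write cache is a common starting point, but verify with your own benchmarks.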

  5. #5
    Join Date
    Oct 2010
    Location
    GA
    Posts
    935

    Default

    You might try different cache values based on your testing.
    balance-rr is the least sophisticated bonding mode; it should not be the problem.

  6. #6
    Join Date
    Oct 2006
    Posts
    202

    Default

    I have a similar issue, and all my testing points to the replication as the cause of the performance decrease. I have tried making changes to the replication settings but cannot improve performance. I have tested all the 10GbE cards using the network test, and all tests pass. E.g., running IOmeter with replication off / cluster degraded I get RX 1900 MB/s, WR 1800 MB/s; with the cluster normal, RX 1250 MB/s, WR 700 MB/s.

  7. #7
    Join Date
    Oct 2010
    Location
    GA
    Posts
    935

    Default

    Replication can have an impact on performance. This can typically be mitigated by making sure RAID caching is working and the cache is large enough.
    In the current releases, the bandwidth is dynamically allocated for each task. This and other tweaks can be applied to the replication module:
    http://kb.open-e.com/Tuning-recommen...1GbE_1603.html (applies to 10GbE also).

  8. #8

    Default

    Maybe it's a problem with the balance-rr bonding mode?
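    On any Linux-based box the active bonding mode and per-slave health can be checked via procfs (bond0 is an assumed interface name):

    ```
    # Shows "Bonding Mode: load balancing (round-robin)" for balance-rr,
    # plus MII status, speed, and failure counters for each slave NIC.
    cat /proc/net/bonding/bond0
    ```

    One thing to keep in mind: balance-rr stripes individual packets across the slaves, which can reorder TCP segments at 10GbE rates and cost throughput on a single stream. Testing over a single unbonded link, or with 802.3ad (LACP), would isolate whether the bond itself is the bottleneck.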
