
Thread: Open-E DSS performance

  1. #1
    Join Date
    Oct 2010
    Location
    GA
    Posts
    935


    MPIO... how many paths? Over 1GB?

  2. #2
    Join Date
    Feb 2009
    Posts
    142


    I believe the performance issue is on the XenServer side. I have a DSS V6 and share it with both a Xen network and a XenServer 6 network.

    All servers use MPIO (2 x 1G NICs) and the SAN has a 10G card (using block I/O).

    Using your test:
    dd if=/dev/zero of=/var/tmpMnt bs=1024 count=12000000

    On a Linux server under Xen I was getting about 85MB/s (we have SATA drives in the DSS), but on a Linux server under XenServer 6.0 I was only getting about 28MB/s.

    All settings on the DSS had been optimized for each volume as per various posts in this forum. See the various posts about iSCSI performance when googling 'iscsi performance xenserver'.
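
    As a side note, a sequential-write test like the one above tends to understate iSCSI throughput because of the tiny 1KB block size and page-cache effects. A rough sketch of a more representative run, assuming the iSCSI LUN is mounted at /mnt/iscsi-test (a placeholder path, not anything from DSS itself):

    # larger block size, bypass the page cache on writes
    dd if=/dev/zero of=/mnt/iscsi-test/ddtest bs=1M count=12000 oflag=direct

    # read the same file back with direct I/O, after dropping caches
    echo 3 > /proc/sys/vm/drop_caches
    dd if=/mnt/iscsi-test/ddtest of=/dev/null bs=1M iflag=direct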

  3. #3
    Join Date
    Feb 2009
    Posts
    142


    Oops!

    I have to take this back. I didn't realize I had my DSS management IP added to the multipath along with my two iSCSI NICs, so the multipath -ll command showed 3 active connections when there should have been 2. The traffic was going out over my DSS management NIC, which has an MTU of 1500, instead of the two NICs that had jumbo frames enabled and an MTU of 9000.

    I fixed it, re-ran the test, and am now getting the same throughput on the Xen servers as on the XenServer servers, about 85MB/s.
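
    For anyone hitting the same thing, a quick sanity check is to confirm that only the two storage-network portals are logged in and that both storage NICs really have jumbo frames. A sketch of the commands (eth1/eth2 and 192.168.0.10 are placeholder interface names and a placeholder management IP, not the actual DSS setup):

    # only the two iSCSI portals should be listed, not the management IP
    iscsiadm -m session

    # each LUN should show exactly two active paths
    multipath -ll

    # both storage NICs should report mtu 9000
    ip link show eth1 | grep mtu
    ip link show eth2 | grep mtu

    # if the management portal was discovered by accident, drop its node record
    iscsiadm -m node -p 192.168.0.10 -o delete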
