
Thread: Very low performance issue


  1. #1

    Default Very low performance issue

    Hi,
    I still have a huge performance issue with Open-E DSS v6.

    My server configuration:
    Xeon L5520 (quad-core)
    8DDR3 ECC
    5x Crucial RealSSD 256 GiB (290 MiB/s per SSD over SATA2)
    3x Intel e1000 Gigabit Ethernet

    The 3 network cards are bonded (802.3ad) and the SSDs are in software RAID5 (managed by DSS), so I should get a 3 Gbit/s link and very high disk bandwidth (I expect more than 800 MiB/s locally).
    But the problem is that I can't even get more than 100 MiB/s per iSCSI volume! The speed is not stable and varies between 20 MiB/s and 100 MiB/s, even when the client is bonded with 2 or 3 links.

    I tried connecting a client to the SAN directly (without a switch) and I still have the same problem.

    Thanks for any guidance.
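
    (For reference, the bond state can be double-checked from a console roughly like this; a minimal sketch assuming the standard Linux bonding driver and that the interface is really called bond0:)

    # Show the negotiated bonding mode and the state of each slave NIC
    cat /proc/net/bonding/bond0
    # Expect "Bonding Mode: IEEE 802.3ad Dynamic link aggregation" and
    # one "Slave Interface" section per NIC with "MII Status: up".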

  2. #2
    Join Date: Oct 2008
    Posts: 69

    Default

    Hi,

    As far as I know, you can't aggregate bandwidth with bonding: a single iSCSI session still runs over only one 1 Gbit link.
    You have to use MPIO.
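
    A rough outline of MPIO on a Linux initiator, as a sketch only (it assumes open-iscsi and multipath-tools are installed, and the two portal IPs on separate subnets are made-up examples):

    # Discover the target through two different portals / NICs
    iscsiadm -m discovery -t sendtargets -p 192.168.1.10
    iscsiadm -m discovery -t sendtargets -p 192.168.2.10
    # Log in to both portals, creating two independent iSCSI sessions
    iscsiadm -m node -p 192.168.1.10 --login
    iscsiadm -m node -p 192.168.2.10 --login
    # device-mapper multipath then combines the paths into one block device
    multipath -ll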

  3. #3
    Join Date: Aug 2010
    Posts: 404

    Default

    Also, you did not tell us which build of DSS v6 you are running.

  4. #4

    Default

    I ran the tests:
    *-----------------------------------------------------------------------------*
    hdparm -t /dev/sda
    *-----------------------------------------------------------------------------*


    /dev/sda:
    Timing buffered disk reads: 774 MB in 3.00 seconds = 257.96 MB/sec

    (Same results on sdb, sdc, sdd)

    *-----------------------------------------------------------------------------*
    hdparm -t /dev/sde
    *-----------------------------------------------------------------------------*


    /dev/sde:
    Timing buffered disk reads: 400 MB in 3.00 seconds = 133.25 MB/sec

    This is the last SSD; it is a bit slow!

    *-----------------------------------------------------------------------------*
    hdparm -t /dev/sdf
    *-----------------------------------------------------------------------------*


    /dev/sdf:
    Timing buffered disk reads: 274 MB in 3.02 seconds = 90.81 MB/sec

    This is the RAID1 volume of 2x 2.5" HDDs used for the DSS installation.

    *-----------------------------------------------------------------------------*
    hdparm -t /dev/md0
    *-----------------------------------------------------------------------------*


    /dev/md0:
    Timing buffered disk reads: 1884 MB in 3.00 seconds = 627.64 MB/sec

    It's a bit slow for a RAID5 of 5 SSDs; maybe the problem is caused by sde...
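
    (A side note worth checking at this point: whether the array is degraded or resyncing, since that alone would cut RAID5 read speed. A minimal sketch, assuming the usual Linux md tools are reachable from the console:)

    # Show array state and any ongoing rebuild/resync
    cat /proc/mdstat
    mdadm --detail /dev/md0   # check "State", "Failed Devices" and the member disks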

    *-----------------------------------------------------------------------------*
    ethtool bond0
    *-----------------------------------------------------------------------------*

    Settings for bond0:
    No data available

    Is that normal?

    eth0, eth1 and eth2 are all set to 1000 Mbit/s and no errors occurred.


    Version of DSS: 6.0up75.8401.5377 64bit

  5. #5
    Join Date: Aug 2010
    Posts: 404

    Default

    You need a small update for your NIC cards. Since you are running build up75, please open a support ticket and ask me to send you the Small Update files:
    0852-DSS_V6_up75-Intel_1Gbps_NIC_driver_set
    0853-DSS_V6_up75-Intel_10Gbps_NIC_driver_set

    ______________________
    Alaa Souqi

  6. #6

    Default

    I'm still using the Lite version, since I don't know yet whether DSS will work well with my configuration, so I can't open a support ticket.
    What should I do? Pay for support to obtain a patch that may not even solve my problem?
    Thanks anyway

  7. #7
    Pi-L Guest

    Default

    Check the individual cards with ethtool, not the bond, e.g. ethtool eth0, ethtool eth1, etc.

  8. #8
    Pi-L Guest

    Lightbulb

    Since there could be a few things, other than hardware failure, causing the lowered performance, the best thing to do first is to download the logs from DSS (Web GUI -> Status -> Hardware -> Logs) and search the tests.log file for:
    1) hdparm - to check the actual RAID performance
    2) ethtool - to check the actual network card speeds
    3) ifconfig - to see if there is any packet loss (errors in network transmission)
    Please take a look at it, because it may turn out, for example, that the cards are running in 100 Mbps mode for some reason.
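
    For reference, the equivalent live checks from a console look roughly like this (a sketch assuming standard Linux tools; the device and interface names are only examples):

    hdparm -t /dev/md0                # actual RAID read performance
    ethtool eth0 | grep -i speed      # should report Speed: 1000Mb/s
    ifconfig eth0 | grep -i errors    # RX/TX errors and dropped packets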
