
Thread: Poor Performance from DSS

  1. #1
    Join Date
    Jun 2009
    Location
    Livingston, Scotland
    Posts
    5

    Default Poor Performance from DSS

We have been using Open-E DSS for about 18 months now and have been very impressed with the features, but never that impressed with the performance. This has now become a major issue and I would appreciate some advice on how to improve it.

We have 2x SC836TQ-R800B chassis (16-bay SAS/SATA backplane) with SuperMicro X7DBE motherboards, a single Xeon 5410 (2.33GHz), 4GB RAM and 10x Seagate Barracuda 750GB SATA drives in a RAID6 array.
    We are using LSI 84016E RAID controllers on each box with 256MB cache and battery backup.
    Networking is DELL 2724 gigabit web-managed switches using CAT6 cabling.
    This disk space is presented to VMware over iSCSI using the software initiator.
    The ESX hosts are Dell PE2850s with dual 3GHz Xeon CPUs and 16GB RAM, using Intel 8254NXX gigabit Ethernet adapters.
    There are 3 LUNs on each Open-E box (1x 500GB & 2x 1000GB). Each of these has been initialised and is set to use FileIO. We replicate each of these 3 LUNs to the 2nd box and also use iSCSI fail-over. The Open-E boxes use Intel 80003ES2 gigabit NICs with balance-rr bonding.
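
    For reference, the bond mode and per-slave error counters can be confirmed from a shell on the DSS boxes (assuming console/shell access; these are the standard Linux bonding driver paths, nothing Open-E-specific):

    *-----------------------------------------------------------------------------*
    cat /proc/net/bonding/bond0   # should report "Bonding Mode: load balancing (round-robin)" plus each slave's link status
    ifconfig -a                   # per-NIC RX/TX totals, errors, dropped and collision counts
    *-----------------------------------------------------------------------------*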
    We are seeing very poor write performance on any drives shared through the Windows 2003 VMs. Typical figures using CrystalDiskMark are:

    Sequential Read : 37.387 MB/s
    Sequential Write : 1.635 MB/s
    Random Read 512KB : 38.525 MB/s
    Random Write 512KB : 1.279 MB/s
    Random Read 4KB : 7.306 MB/s
    Random Write 4KB : 0.208 MB/s

    I must be doing something wrong here, or the hardware's not up to the job.

    Any advice you may be able to offer in terms of tuning parameters, RAID configuration, network configuration, and any benchmarks for what we should expect from this setup would be appreciated.



    Thanks

    Andrew

  2. #2

    Default

    18 months ago we did not have the Auto Failover. Did you send the logs to support so we can see if there are any areas we can investigate?

    Others have posted their performance figures in the threads below for comparison - there are more, but here are just a few.

    http://forum.open-e.com/showthread.php?t=1392
    http://forum.open-e.com/showthread.php?t=1319 (look at the "SQLIO test in iscsi mode" tests)

    In the test.log, what does hdparm report for the sdX devices?
    Any dropped packets on the NICs? Look in the same log file at the ifconfig -a output.
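
    If you have shell access on the DSS console you can run the same check live - this is just plain ifconfig, nothing Open-E-specific:

    *-----------------------------------------------------------------------------*
    ifconfig -a | grep -E "bond|eth|errors|collisions"   # every interface should show errors:0 dropped:0 ... collisions:0
    *-----------------------------------------------------------------------------*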

    Try changing the following in the iSCSI Target settings from the Console (CTRL + ALT + W - Tuning options):

    MaxRecvDataSegmentLength=262144
    MaxBurstLength=16776192
    MaxXmitDataSegmentLength=262144
    MaxOutstandingR2T=8
    InitialR2T=No
    ImmediateData=Yes
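
    If you want to confirm what actually gets negotiated, and you happen to have a Linux test machine with open-iscsi logged into the target (the VMware software initiator does not expose this as easily), the per-session values can be printed like this - standard open-iscsi, not part of DSS:

    *-----------------------------------------------------------------------------*
    iscsiadm -m session -P 3 | grep -E "MaxRecvDataSegmentLength|MaxBurstLength|FirstBurstLength|InitialR2T|ImmediateData"
    *-----------------------------------------------------------------------------*
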
    All the best,

    Todd Maxwell


    Follow the red "E"
    Facebook | Twitter | YouTube

  3. #3
    Join Date
    Jun 2009
    Location
    Livingston, Scotland
    Posts
    5

    Default

    Todd

    Apologies, you are quite right - we only started using fail-over this year, as of update 3278.

    I have sent out logs to your support guys and hopefully they can spot something I haven't noticed.

    The hdparm stats look pretty slow, so maybe there is something wrong with the underlying disks/array?


    hdparm -t /dev/sda
    *-----------------------------------------------------------------------------*


    /dev/sda:
    Timing buffered disk reads: 58 MB in 3.05 seconds = 19.03 MB/sec

    *-----------------------------------------------------------------------------*
    hdparm -t /dev/sdb
    *-----------------------------------------------------------------------------*


    /dev/sdb:
    Timing buffered disk reads: 172 MB in 3.15 seconds = 54.64 MB/sec

    No dropped packets or collisions on NICs as far as I can see.

    Our 2nd box's hdparm output looks a bit healthier:

    /dev/sda:
    Timing buffered disk reads: 4 MB in 4.03 seconds = 1016.08 kB/sec

    *-----------------------------------------------------------------------------*
    hdparm -t /dev/sdb
    *-----------------------------------------------------------------------------*


    /dev/sdb:
    Timing buffered disk reads: 548 MB in 3.01 seconds = 182.19 MB/sec

    *-----------------------------------------------------------------------------*
    hdparm -t /dev/sdc
    *-----------------------------------------------------------------------------*
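
    To cross-check the hdparm figures, a plain sequential read with dd against the same devices should give roughly the same numbers (read-only, so non-destructive; this assumes shell access on the DSS console or a rescue environment):

    *-----------------------------------------------------------------------------*
    dd if=/dev/sda of=/dev/null bs=1M count=1024   # ~1GB sequential read; dd prints MB/s when it finishes
    dd if=/dev/sdb of=/dev/null bs=1M count=1024
    *-----------------------------------------------------------------------------*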



    Thanks

  4. #4

    Default

    No problem - check the RAID controller to see if the write cache (write-back policy) is enabled.
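
    On an LSI MegaRAID controller like the 84016E this can be checked from the card's BIOS utility, or with the MegaCli command-line tool if it is installed on a host that can see the controller (it is not part of DSS itself) - roughly:

    *-----------------------------------------------------------------------------*
    MegaCli -LDInfo -Lall -aALL | grep -i "cache policy"   # look for WriteBack vs WriteThrough
    MegaCli -LDSetProp WB -Lall -aALL                      # switch the logical drives to write-back (keep the BBU healthy)
    *-----------------------------------------------------------------------------*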
    All the best,

    Todd Maxwell


    Follow the red "E"
    Facebook | Twitter | YouTube

  5. #5
    Join Date
    Apr 2009
    Posts
    62

    Default

    I know you are using a different setup, but we use VMware as well; check out my stats post.

    http://forum.open-e.com/showthread.php?t=1449

    Drew

  6. #6

    Default

    Thanks for sharing such nice information.
