
Thread: Open-E DSS Benchmark with PIC

  1. #1


    Open-E DSS (Target)
    ------------------------
    Server: Industry standard
    RAID Controller: Areca 1160 (PCI-X) 1GB cache
    RAID Mode: 2x RAID10 (8 HDDs per RAID set), write-back cache enabled
    HDD: 16x Barracuda ES.2 SATA 500GB
    FC HBA: 2x QLogic QLA2462 (4Gbps)
    CPU: 2x AMD Opteron 275
    RAM: 16GB
    Open-E Version: 5.0.DB49000000.3278 (64Bit)

    VMware ESX Host (Initiator)
    --------------------------
    Server: FSC RX300 S4
    FC HBA: 2x Emulex LP1150 (4Gbps)
    CPU: 2x Intel E5430
    RAM: 32GB
    VMware ESX Version: 3.5 Update 3 Enterprise

    FC Environment
    ------------------
    FC Switch: 2x EMC 5100
    FC GBICs: 32x 4Gbps



    We currently have 6 Open-E DSS servers and 6 VMware ESX hosts; multipathing is disabled.
    The VMware ESX host I benchmarked from currently runs 10 virtual servers stored across different Open-E DSS servers, and the Open-E DSS storage holding the benchmarked virtual machine currently serves 10 virtual servers.

  2. #2


    @thx0701: You've tested a file with a size of 8MB. This will be cached entirely by the RAID controller, and thus your results are not comparable.

    @Robotbeat: The same goes for you. You should test with files 2 to 4 times larger than your RAID controller's cache to get meaningful results.
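
    A minimal sketch of that sizing rule (mine, not from the thread), in Python: write a file 4 times the size of the controller cache and time it. The 1GB cache size comes from the Areca 1160 spec in post #1; the mount path and block size are assumptions.

        import os
        import time

        CACHE_SIZE = 1 * 1024**3       # Areca 1160 cache (1GB, from post #1)
        TEST_SIZE  = 4 * CACHE_SIZE    # 2-4x the cache, per the advice above
        BLOCK_SIZE = 1024**2           # 1 MiB per write
        PATH = "/mnt/dss/benchfile"    # hypothetical path on the DSS volume

        def sequential_write_mbps(path, total, block):
            """Write `total` bytes in `block`-sized chunks; return MB/s."""
            buf = os.urandom(block)
            start = time.monotonic()
            with open(path, "wb") as f:
                for _ in range(total // block):
                    f.write(buf)
                f.flush()
                os.fsync(f.fileno())   # make sure data actually left the page cache
            return (total / (time.monotonic() - start)) / 1024**2

        if __name__ == "__main__":
            print(f"sequential write: {sequential_write_mbps(PATH, TEST_SIZE, BLOCK_SIZE):.1f} MB/s")

    Note the fsync: without it the OS page cache can hide the array's real speed just like the controller cache does.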

  3. #3


    I shouldn't blame others when I'm making the same mistake myself.
    Now I've used a 500MB file size, running the test 8 times
    (a total of 4GB, which is 4 times the RAID controller's cache).
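
    The same arithmetic as a quick check (file size and run count from this post, the 1GB cache from post #1, decimal units as in the posts):

        file_size = 500 * 10**6           # 500MB per run
        runs = 8
        cache = 1 * 10**9                 # Areca 1160 cache, 1GB
        print(file_size * runs / cache)   # 4.0 -> four times the controller cache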


  4. #4


    Part of the reason I started doing these benchmarks was that I wanted to see how fast the cache is. If you have 100GB of system RAM and a database smaller than 100GB, you can basically cache the whole thing. That's a good idea if performance is far more important than data integrity. Also, failover will eventually support memory-coherent replication instead of waiting for the destination side to write to disk; then you will have data integrity AND performance.
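
    Purely as an illustration of that replication idea (a toy sketch, not Open-E's implementation): the difference is where the write is acknowledged, after the replica holds the block in RAM or only after the replica's synchronous disk write.

        import os

        class Replica:
            """Toy replica: in-memory blocks backed by a file."""
            def __init__(self, path):
                self.path = path
                self.memory = {}

            def receive(self, block_id, data):
                self.memory[block_id] = data   # memory-coherent from here on

            def flush(self, block_id):
                with open(self.path, "ab") as f:
                    f.write(self.memory[block_id])
                    f.flush()
                    os.fsync(f.fileno())       # crash-safe only after this

        def write_ack_on_memory(replica, block_id, data):
            """Fast path: primary unblocks once the replica's RAM has the data."""
            replica.receive(block_id, data)
            return "ack"                       # flush can happen asynchronously

        def write_ack_on_disk(replica, block_id, data):
            """Safe path: primary waits for the replica's disk write to finish."""
            replica.receive(block_id, data)
            replica.flush(block_id)
            return "ack"

    With memory-coherent replication the fast path still leaves a consistent copy in the replica's RAM, which is why the post expects both integrity and performance.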
