
Thread: Open-E DSS Benchmark with PIC


  1. #1


    Yes, the iSCSI LUN is set to Write Back mode, too.
    For heavy-load situations, I strongly recommend an inexpensive FC card.

    I am waiting for my Emulex FC card and Brocade FC switch.

  2. #2


    If you send me an email I can give you access to my systems (2 x DSS, Windows 2003 and VMware). They have the QLogic 4Gb FC HBAs directly connected.
    All the best,

    Todd Maxwell


    Follow the red "E"
    Facebook | Twitter | YouTube

  3. #3


    I've been messing with that system, and here's the batch file I made to do the test (in the same directory as SQLIO.exe):
    Code:
@ECHO off

REM SQLIO parameter file format: <path> <threads> <affinity mask> <size in MB>
REM Outer loop: test file size 10, 40, 160, 640, 2560 MB
set /a mynumber=10
FOR /L %%G IN (1,1,5) DO CALL :in_fora
ECHO Done...
GOTO :eof

:in_fora
set /a myblock=1
REM Redirection goes first: after %mynumber% expands, a digit directly
REM before ">" would be parsed by cmd as a file-handle number.
>mypar.txt ECHO d:\testfile.dat 1 0x0 %mynumber%

REM Middle loop: block size 4, 16, 64, 256, 1024 KB
FOR /L %%H IN (1,1,5) DO CALL :in_forb
set /a mynumber=%mynumber%*4
GOTO :eof

:in_forb
set /a myblock=%myblock%*4
set /a myout=1
REM Inner loop: outstanding I/Os 1, 4, 16, 64, 256
FOR /L %%I IN (1,1,5) DO CALL :in_forc
GOTO :eof

:in_forc
REM -kW write test, -s20 run for 20 seconds, -frandom random I/O,
REM -o outstanding I/Os, -b block size in KB, -BH hardware buffering,
REM -LS system-timer latencies, -F parameter file
sqlio.exe -kW -s20 -frandom -o%myout% -b%myblock% -BH -LS -Fmypar.txt|find /i "IOs/sec">>sqlresults.txt

REM Roughly one-second pause (see note below)
ping -n 2 127.0.0.1>nul

set /a myout=%myout%*4
GOTO :eof
    The "ping -n 2 127.0.0.1>nul" is in there only because there's not a "sleep" command included by default in windows.
    The following batch file is equivalent to the one above, but much longer:
    Code:
>mypar.txt ECHO d:\testfile.dat 1 0x0 10
    
    sqlio.exe -kW -s20 -frandom -o1 -b4 -BH -LS -Fmypar.txt|find /i "IOs/sec">>sqlresults.txt
    ping -n 2 127.0.0.1>nul
    sqlio.exe -kW -s20 -frandom -o4 -b4 -BH -LS -Fmypar.txt|find /i "IOs/sec">>sqlresults.txt
    ping -n 2 127.0.0.1>nul
    sqlio.exe -kW -s20 -frandom -o16 -b4 -BH -LS -Fmypar.txt|find /i "IOs/sec">>sqlresults.txt
    ping -n 2 127.0.0.1>nul
    sqlio.exe -kW -s20 -frandom -o64 -b4 -BH -LS -Fmypar.txt|find /i "IOs/sec">>sqlresults.txt
    ping -n 2 127.0.0.1>nul
    sqlio.exe -kW -s20 -frandom -o256 -b4 -BH -LS -Fmypar.txt|find /i "IOs/sec">>sqlresults.txt
    ping -n 2 127.0.0.1>nul
    
    sqlio.exe -kW -s20 -frandom -o1 -b16 -BH -LS -Fmypar.txt|find /i "IOs/sec">>sqlresults.txt
    ping -n 2 127.0.0.1>nul
    ...
    etc., etc.
    ...
    And here are the results on Todd's Open-E test machine, using a 4Gb FC link on both ends and a 3ware RAID card (only a single drive, but with write cache enabled on both the disk and the controller), with about 512MB of memory on the card and about 2GB of system RAM on the DSS side (though I'm not sure that 32-bit FC works with more than 1GB of cache per volume):



  4. #4


    Open-E DSS (Target)
    ------------------------
    Server: Industry standard
    RAID Controller: Areca 1160 (PCI-X) 1GB cache
    RAID Mode: 2x RAID10 (8 HDDs per RAID set) WB enabled
    HDD: 16x Barracuda ES.2 SATA 500GB
    FC HBA: 2x QLogic QLA2462 (4Gbps)
    CPU: 2x AMD Opteron 275
    RAM: 16GB
    Open-E Version: 5.0.DB49000000.3278 (64Bit)

    VMware ESX Host (Initiator)
    --------------------------
    Server: FSC RX300 S4
    FC HBA: 2x Emulex LP1150 (4Gbps)
    CPU: 2x Intel E5430
    RAM: 32GB
    VMware ESX Version: 3.5 Update 3 Enterprise

    FC Environment
    ------------------
    FC Switch: 2x EMC 5100
    FC GBICs: 32x 4Gb



    We currently have 6 Open-E DSS servers and 6 VMware ESX hosts. Multipathing is disabled.
    The VMware ESX host from which I benchmarked currently runs 10 servers stored on different Open-E DSS servers. The Open-E DSS storage holding the benchmarked virtual machine currently has 10 servers running on it.

  5. #5


    @thx0701: You've tested a file with a size of 8MB. This will be cached by the RAID controller, and thus your results are not comparable.

    @Robotbeat: The same applies to you. You should test with files 2 to 4 times larger than your RAID controller's cache to get meaningful measurement results.
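    To make the sizing concrete, here's a minimal sketch of how the parameter file from post #3 could be generated relative to the controller cache; the 1024MB cache size is an assumption (matching the Areca 1160's 1GB) and should be adjusted to your hardware:
    Code:
REM Hypothetical sizing helper: pick a test file 4x the controller cache.
REM Assumes a 1GB (1024MB) RAID controller cache - adjust to your card.
set /a cachemb=1024
set /a filemb=%cachemb%*4
REM Redirection first, so the trailing digit is not parsed as a file handle.
>mypar.txt ECHO d:\testfile.dat 1 0x0 %filemb%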

  6. #6


    I shouldn't blame others for making the same mistake myself.
    Now I've used a 500MB file size, run 8 times
    (a total of 4GB, which is 4 times the RAID controller's cache).
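    For reference, a run like that could be scripted in the same style as the wrapper from post #3; a minimal sketch, where the -o64/-b4 combination is just one assumed point from the test matrix:
    Code:
REM 8 passes of random writes over a single 500MB test file.
>mypar.txt ECHO d:\testfile.dat 1 0x0 500
FOR /L %%G IN (1,1,8) DO (
    sqlio.exe -kW -s20 -frandom -o64 -b4 -BH -LS -Fmypar.txt|find /i "IOs/sec">>sqlresults.txt
    ping -n 2 127.0.0.1>nul
)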


  7. #7


    Part of the reason I started doing these benchmarks was that I wanted to see how fast the cache was. If you have 100GB of system RAM and a database smaller than 100GB, you can basically cache the whole thing. That's a good idea if performance is far more important than data integrity. Also, eventually the failover will support memory-coherent replication, instead of waiting for the destination side to write to disk. Then you'll have data integrity AND performance.
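    To actually measure cache speed, the same wrapper can be pointed at a file small enough to stay resident in the cache and switched to SQLIO's read mode (-kR); a minimal sketch, with the 100MB file size and flags as assumptions:
    Code:
REM Cache-speed probe: a 100MB file fits entirely inside a 512MB+ cache,
REM so random reads (-kR) should be served from cache rather than disk.
>mypar.txt ECHO d:\testfile.dat 1 0x0 100
sqlio.exe -kR -s20 -frandom -o64 -b4 -BH -LS -Fmypar.txt|find /i "IOs/sec">>sqlresults.txt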
