
Thread: Open-E DSS Benchmark with PIC

  1. #1

    Open-E DSS Benchmark with PIC

    Server (FC target)
    Dell 860 with 6/i RAID controller
    RAID 0: 2x WD Green Power 750GB
    Open-E DSS Version: 5.0up60.7101.3511 32-bit
    FC HBA: QLogic QLA2344, 2Gbps, PCI-X, in target mode



    Client (FC initiator)
    IBM x3200, Q6600, 8GB RAM (no local test storage)
    Windows Server 2003
    FC HBA: Emulex 1050EX, 1 port, 2Gbps, PCIe x4



    The speed is not bad, considering it is only two SATA Green Power HDDs in RAID 0.
    I also have a Dell MD1000 (DAS) and an HP DL180 G5 server.
    I will do more tests and try other solutions (COMSTAR and Openfiler) later.
    Although Open-E DSS costs more money, it is still a very good solution.

    Please check this link for more information

    http://translate.google.com/translat...N&tl=en&swap=1

  2. #2


    Thanks for the tests! Please check out a user called "Robotbeat"; he has other tests running with FC and iSCSI (see the link below). I have asked him to check your postings as well.

    http://forum.open-e.com/showthread.php?t=1309
    All the best,

    Todd Maxwell



  3. #3


    SQLIO test results over 2Gbps FC are below.
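    First, a quick summary of what the switches used in these runs mean, as I understand them from SQLIO's usage text (not the full option list):
    Code:
    REM -kR / -kW    read workload / write workload
    REM -t1          one thread
    REM -s20 / -s30  run for 20 / 30 seconds
    REM -frandom     random I/O; -f64 means a stripe factor of 64 blocks (64 x 2KB = 128KB)
    REM -o512        512 outstanding I/Os per thread
    REM -b1024       I/O size in KB (here 1024KB = 1MB per I/O)
    REM -i64         64 I/Os per run
    REM -BH          use the hardware disk cache, but not the file cache
    REM -BN / -BY    no buffering at all / buffer through both file and disk caches
    REM -LS          record latency using the system timer
    REM -F<file>     take targets from a parameter file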

    sqlio -kR -s20 -frandom -o512 -b1024 -BH -LS testfile.dat
    sqlio v1.5.SG
    using system counter for latency timings, -1894937296 counts per second
    1 thread reading for 20 secs from file testfile.dat
    using 1024KB random IOs
    enabling multiple I/Os per thread with 512 outstanding
    buffering set to use hardware disk cache (but not file cache)
    using current size: 8 MB for file: testfile.dat
    initialization done
    CUMULATIVE DATA:
    throughput metrics:
    IOs/sec: 178.70
    MBs/sec: 178.70
    latency metrics:
    Min_Latency(ms): 147
    Avg_Latency(ms): 2674
    Max_Latency(ms): 3013
    histogram:
    ms: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+
    %: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 100

    sqlio -kW -s20 -frandom -o512 -b1024 -BH -LS testfile.dat
    sqlio v1.5.SG
    using system counter for latency timings, -1894937296 counts per second
    1 thread writing for 20 secs to file testfile.dat
    using 1024KB random IOs
    enabling multiple I/Os per thread with 512 outstanding
    buffering set to use hardware disk cache (but not file cache)
    using current size: 8 MB for file: testfile.dat
    initialization done
    CUMULATIVE DATA:
    throughput metrics:
    IOs/sec: 167.64
    MBs/sec: 167.64
    latency metrics:
    Min_Latency(ms): 145
    Avg_Latency(ms): 2843
    Max_Latency(ms): 3234
    histogram:
    ms: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+
    %: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 100

    sqlio -kW -t1 -s30 -f64 -b2 -i64 -BN testfile.dat
    sqlio v1.5.SG
    1 thread writing for 30 secs to file testfile.dat
    using 2KB IOs over 128KB stripes with 64 IOs per run
    buffering set to not use file nor disk caches (as is SQL Server)
    initialization done
    CUMULATIVE DATA:
    throughput metrics:
    IOs/sec: 1042.90
    MBs/sec: 2.03

    sqlio -kR -t1 -s30 -f64 -b2 -i64 -BY testfile.dat
    sqlio v1.5.SG
    1 thread reading for 30 secs from file testfile.dat
    using 2KB IOs over 128KB stripes with 64 IOs per run
    buffering set to use both file and disk caches
    initialization done
    CUMULATIVE DATA:
    throughput metrics:
    IOs/sec: 284338.93
    MBs/sec: 555.34

    sqlio -kW -t1 -s30 -f64 -b2 -i64 -BY testfile.dat
    sqlio v1.5.SG
    1 thread writing for 30 secs to file testfile.dat
    using 2KB IOs over 128KB stripes with 64 IOs per run
    buffering set to use both file and disk caches
    initialization done
    CUMULATIVE DATA:
    throughput metrics:
    IOs/sec: 271342.86
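    A note on reading these numbers: with -b1024 each I/O is exactly 1MB, so IOs/sec and MBs/sec are the same figure by construction. For the 2KB runs, MBs/sec is roughly IOs/sec x 2 / 1024 (e.g. 1042.90 x 2 / 1024 = 2.04, matching the 2.03 reported above). Also, the -BY results around 555 MB/s are far beyond what a 2Gbps FC link can carry (about 200 MB/s of payload), so those cached runs are really measuring the client's RAM, not the storage.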

  4. #4


    SQLIO tests in iSCSI mode:


    sqlio -kR -s20 -frandom -o512 -b1024 -BH -LS testfile.dat
    sqlio v1.5.SG
    using system counter for latency timings, -1894927296 counts per second
    1 thread reading for 20 secs from file testfile.dat
    using 1024KB random IOs
    enabling multiple I/Os per thread with 512 outstanding
    buffering set to use hardware disk cache (but not file cache)
    using current size: 8 MB for file: testfile.dat
    initialization done
    CUMULATIVE DATA:
    throughput metrics:
    IOs/sec: 110.21
    MBs/sec: 110.21
    latency metrics:
    Min_Latency(ms): 192
    Avg_Latency(ms): 4192
    Max_Latency(ms): 4987
    histogram:
    ms: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+
    %: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 100

    sqlio -kW -s20 -frandom -o512 -b1024 -BH -LS testfile.dat
    sqlio v1.5.SG
    using system counter for latency timings, -1894927296 counts per second
    1 thread writing for 20 secs to file testfile.dat
    using 1024KB random IOs
    enabling multiple I/Os per thread with 512 outstanding
    buffering set to use hardware disk cache (but not file cache)
    using current size: 8 MB for file: testfile.dat
    initialization done
    CUMULATIVE DATA:
    throughput metrics:
    IOs/sec: 64.54
    MBs/sec: 64.54
    latency metrics:
    Min_Latency(ms): 194
    Avg_Latency(ms): 6818
    Max_Latency(ms): 8595
    histogram:
    ms: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+
    %: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 100

    sqlio -kR -t1 -s30 -f64 -b2 -i64 -BN testfile.dat
    sqlio v1.5.SG
    1 thread reading for 30 secs from file testfile.dat
    using 2KB IOs over 128KB stripes with 64 IOs per run
    buffering set to not use file nor disk caches (as is SQL Server)
    size of file testfile.dat needs to be: 8388608 bytes
    current file size: 0 bytes
    need to expand by: 8388608 bytes
    expanding testfile.dat ... done.
    initialization done
    CUMULATIVE DATA:
    throughput metrics:
    IOs/sec: 5825.56
    MBs/sec: 11.37

    sqlio -kW -t1 -s30 -f64 -b2 -i64 -BN testfile.dat
    sqlio v1.5.SG
    1 thread writing for 30 secs to file testfile.dat
    using 2KB IOs over 128KB stripes with 64 IOs per run
    buffering set to not use file nor disk caches (as is SQL Server)
    initialization done
    CUMULATIVE DATA:
    throughput metrics:
    IOs/sec: 922.26
    MBs/sec: 1.80

    sqlio -kR -t1 -s30 -f64 -b2 -i64 -BY testfile.dat
    sqlio v1.5.SG
    1 thread reading for 30 secs from file testfile.dat
    using 2KB IOs over 128KB stripes with 64 IOs per run
    buffering set to use both file and disk caches
    initialization done
    CUMULATIVE DATA:
    throughput metrics:
    IOs/sec: 281238.06
    MBs/sec: 549.29

    sqlio -kW -t1 -s30 -f64 -b2 -i64 -BY testfile.dat
    sqlio v1.5.SG
    1 thread writing for 30 secs to file testfile.dat
    using 2KB IOs over 128KB stripes with 64 IOs per run
    buffering set to use both file and disk caches
    initialization done
    CUMULATIVE DATA:
    throughput metrics:
    IOs/sec: 269913.43
    MBs/sec: 527.17
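    Put side by side with the FC numbers from post #3 (same server, same client, same commands), the large-block random results are:

                           2Gbps FC       iSCSI
    1024KB random read     178.70 MB/s    110.21 MB/s
    1024KB random write    167.64 MB/s     64.54 MB/s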

  5. #5


    iSCSI Target Server
    Dell 860 with 6/i RAID controller
    RAID 0: 2x WD Green Power 750GB
    Open-E DSS Version: 5.0up60.7101.3511 32-bit
    iSCSI Disk: lv0000

    iSCSI Client
    IBM x3200, Q6600, 8GB RAM
    Windows Server 2003



  6. #6


    Thanks for the update, thx0701!

    Was the iSCSI LUN set to Write Back?
    All the best,

    Todd Maxwell



  7. #7


    Yes, the iSCSI LUN is set to Write Back mode too (writes are acknowledged once they reach the controller cache, which helps write throughput but calls for a battery-backed cache).
    Under heavy load, I strongly recommend a cheap FC card instead.

    I am waiting for my Emulex FC card and Brocade FC switch.

  8. #8


    If you send me an email, I can give you access to my systems (2x DSS, Windows 2003, and VMware). They have QLogic 4Gb FC HBAs direct-connected.
    All the best,

    Todd Maxwell



  9. #9


    I've been messing with that system, and here's the batch file I made to run the tests (placed in the same directory as sqlio.exe):
    Code:
    @ECHO OFF

    REM Sweep test file size (10..2560 MB), block size (4..1024 KB) and
    REM outstanding I/Os (1..256), multiplying each by 4 at every step.
    set /a mynumber=10
    FOR /L %%G IN (1,1,5) DO CALL :in_fora
    ECHO Done...
    GOTO :eof

    :in_fora
    set /a myblock=1
    REM Write the SQLIO parameter file: <path> <threads> <mask> <size in MB>.
    REM Put the redirection first: a size ending in a digit, as in
    REM "ECHO ... 10>mypar.txt", would otherwise be parsed as a handle
    REM redirect ("0>") and leave the parameter file empty.
    >mypar.txt ECHO d:\testfile.dat 1 0x0 %mynumber%
    FOR /L %%H IN (1,1,5) DO CALL :in_forb
    set /a mynumber=%mynumber%*4
    GOTO :eof

    :in_forb
    set /a myblock=%myblock%*4
    set /a myout=1
    FOR /L %%I IN (1,1,5) DO CALL :in_forc
    GOTO :eof

    :in_forc
    REM Keep only the throughput line of each run.
    sqlio.exe -kW -s20 -frandom -o%myout% -b%myblock% -BH -LS -Fmypar.txt|find /i "IOs/sec">>sqlresults.txt
    REM Roughly a one-second pause between runs.
    ping -n 2 127.0.0.1>nul
    set /a myout=%myout%*4
    GOTO :eof
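    All told, that is 5 x 5 x 5 = 125 write runs: file sizes of 10, 40, 160, 640 and 2560 MB, block sizes of 4, 16, 64, 256 and 1024 KB, and 1, 4, 16, 64 and 256 outstanding I/Os, with one "IOs/sec" line appended to sqlresults.txt per run.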
    The "ping -n 2 127.0.0.1>nul" is in there only because there's not a "sleep" command included by default in windows.
    This batch file is equivalent, but much longer:
    Code:
    REM Redirect first, as above, so "10>" isn't parsed as a handle redirect.
    >mypar.txt ECHO d:\testfile.dat 1 0x0 10
    
    sqlio.exe -kW -s20 -frandom -o1 -b4 -BH -LS -Fmypar.txt|find /i "IOs/sec">>sqlresults.txt
    ping -n 2 127.0.0.1>nul
    sqlio.exe -kW -s20 -frandom -o4 -b4 -BH -LS -Fmypar.txt|find /i "IOs/sec">>sqlresults.txt
    ping -n 2 127.0.0.1>nul
    sqlio.exe -kW -s20 -frandom -o16 -b4 -BH -LS -Fmypar.txt|find /i "IOs/sec">>sqlresults.txt
    ping -n 2 127.0.0.1>nul
    sqlio.exe -kW -s20 -frandom -o64 -b4 -BH -LS -Fmypar.txt|find /i "IOs/sec">>sqlresults.txt
    ping -n 2 127.0.0.1>nul
    sqlio.exe -kW -s20 -frandom -o256 -b4 -BH -LS -Fmypar.txt|find /i "IOs/sec">>sqlresults.txt
    ping -n 2 127.0.0.1>nul
    
    sqlio.exe -kW -s20 -frandom -o1 -b16 -BH -LS -Fmypar.txt|find /i "IOs/sec">>sqlresults.txt
    ping -n 2 127.0.0.1>nul
    ...
    etc., etc.
    ...
    And here are the results on Open-E Todd's test machine, using a 4Gb FC link on both ends and a 3ware RAID card (only a single drive, but with write cache enabled on both the disk and the controller), with about 512MB of memory on the card and about 2GB of system RAM on the DSS side (though I'm not sure that 32-bit FC works with more than 1GB of cache per volume):



  10. #10


    Open-E DSS (Target)
    ------------------------
    Server: Industry standard
    RAID Controller: Areca 1160 (PCI-X) 1GB cache
    RAID Mode: 2x RAID10 (8 HDDs per RAID set) WB enabled
    HDD: 16x Barracuda ES.2 SATA 500GB
    FC HBA: 2x QLogic QLA2462 (4Gbps)
    CPU: 2x AMD Opteron 275
    RAM: 16GB
    Open-E Version: 5.0.DB49000000.3278 (64-bit)

    VMware ESX Host (Initiator)
    --------------------------
    Server: FSC RX300 S4
    FC HBA: 2x Emulex LP1150 (4Gbps)
    CPU: 2x Intel E5430
    RAM: 32GB
    VMware ESX Version: 3.5 Update 3 Enterprise

    FC Environment
    ------------------
    FC Switch: 2x EMC 5100
    FC GBICs: 32x 4Gbps



    We currently have six Open-E DSS servers and six VMware ESX hosts; multipathing is disabled.
    The VMware ESX host I benchmarked from currently runs 10 servers spread across different Open-E DSS servers, and the Open-E DSS box holding the benchmarked virtual machine currently has 10 servers running on it.
