rogerk, what sort of performance have you been getting with larger test file sizes? Like, twice or three times as big as your RAM?
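(For anyone following along: the test-file size is just the last field in the SQLIO param file, so making it 2-3x RAM is easy. A minimal sketch, assuming a hypothetical 16 GB box and SQLIO's param-file layout of path, thread count, affinity mask, size in MB:

d:\testfile.dat 1 0x0 48000

Run it with e.g. sqlio.exe -kW -s20 -frandom -o64 -b4 -BH -LS -Fparam.txt; at roughly 3x RAM the caches can't absorb the whole working set, so you're seeing the disks themselves.)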
1 thread writing for 20 secs to file d:\testfile.dat
using 4KB random IOs
enabling multiple I/Os per thread with 64 outstanding
buffering set to use hardware disk cache (but not file cache)
size of file d:\testfile.dat needs to be: 41943040000 bytes
current file size: 4194304000 bytes
need to expand by: 37748736000 bytes
expanding d:\testfile.dat ... done.
using specified size: 40000 MB for file: d:\testfile.dat
initialization done
CUMULATIVE DATA:
throughput metrics:
IOs/sec: 147.68
MBs/sec: 0.57
latency metrics:
Min_Latency(ms): 5
Avg_Latency(ms): 369
Max_Latency(ms): 7260
histogram:
ms: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+
%: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 100
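(The exact command line for this run isn't in the log; judging by the settings echoed above and the later runs, it was presumably something like:

sqlio.exe -kW -s20 -frandom -o64 -b4 -BH -LS -Fparam.txt

where -kW is a write test, -s20 runs for 20 seconds, -frandom does random I/O, -o64 keeps 64 I/Os outstanding, -b4 uses 4KB blocks, -BH enables the hardware disk cache but not the file cache, -LS times latency with the system counter, and -Fparam.txt points at a param file naming d:\testfile.dat.)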
This was a 40GB file on our Exchange 2003 server running on Citrix Xen.
What kind of performance would you expect?
regards
roger
Ouch, those results seem fairly bad to me... Especially the latency.
I ran another test on a non-production virtual machine.
Test file: 40GB
C:\Program Files (x86)\SQLIO>sqlio.exe -kW -s20 -frandom -o64 -b4 -LS -Fparam.txt
sqlio v1.5.SG
using system counter for latency timings, 3579545 counts per second
parameter file used: param.txt
file g:\testfile.dat with 1 thread (0) using mask 0x0 (0)
1 thread writing for 20 secs to file g:\testfile.dat
using 4KB random IOs
enabling multiple I/Os per thread with 64 outstanding
size of file g:\testfile.dat needs to be: 41943040000 bytes
current file size: 10485760 bytes
need to expand by: 41932554240 bytes
expanding g:\testfile.dat ... done.
using specified size: 40000 MB for file: g:\testfile.dat
initialization done
CUMULATIVE DATA:
throughput metrics:
IOs/sec: 33119.70
MBs/sec: 129.37
latency metrics:
Min_Latency(ms): 0
Avg_Latency(ms): 1
Max_Latency(ms): 403
histogram:
ms: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+
%: 84 15 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
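(Side note: the "1 thread (0) using mask 0x0 (0)" line above comes from param.txt; SQLIO's param-file format is path, thread count, CPU affinity mask, size in MB. Presumably it contained something like:

g:\testfile.dat 1 0x0 40000

Raising the thread-count field, e.g. to 4, is the usual way to scale up the load; the 40000 is the file size in MB.)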
So I hope this helps.
I think the earlier test was run on the wrong machine; our Exchange has a 300GB database...
That's more like it!
We need to get a 10GbE card to test with...
C:\Program Files (x86)\SQLIO>sqlio.exe -kW -s20 -frandom -o64 -b4 -BH -LS -Fparam.txt
sqlio v1.5.SG
using system counter for latency timings, 3579545 counts per second
parameter file used: param.txt
file g:\testfile.dat with 1 thread (0) using mask 0x0 (0)
1 thread writing for 20 secs to file g:\testfile.dat
using 4KB random IOs
enabling multiple I/Os per thread with 64 outstanding
buffering set to use hardware disk cache (but not file cache)
size of file g:\testfile.dat needs to be: 83886080000 bytes
current file size: 41943040000 bytes
need to expand by: 41943040000 bytes
expanding g:\testfile.dat ... done.
using specified size: 80000 MB for file: g:\testfile.dat
initialization done
CUMULATIVE DATA:
throughput metrics:
IOs/sec: 30826.99
MBs/sec: 120.41
latency metrics:
Min_Latency(ms): 0
Avg_Latency(ms): 1
Max_Latency(ms): 787
histogram:
ms: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+
%: 78 21 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
This was with an 80GB file.
On a non-virtual system this should be about 30% better.
regards
roger
How are reads?
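(For a quick read pass one would presumably just flip -kW to -kR and reuse the already-expanded test file, e.g.:

sqlio.exe -kR -s20 -frandom -o64 -b4 -BH -LS -Fparam.txt

Everything else can stay the same, so the read and write numbers are directly comparable.)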