I just installed a DSS server at a customer's site. It had 20GB of cache (16GB of system cache plus 4GB on the Areca 1680ix card), dual quad-core CPUs (eight cores), and sixteen 1TB SAS drives. It also had a dual-port 4Gb Fibre Channel card in it (for use in target mode).

I ran an SQLIO test from a test system that had a small volume mounted from the DSS. With 4KB blocks, random reads, and 8 outstanding requests, it hit 14,000 IOPS! That works out to about 55 MB/s of random I/O. Not bad at all. Not only that, but it was running over just 1Gb Fibre Channel (that's all the test system had). Granted, it's all coming from cache, but if you're running a small database of about 10GB or less, it could very well all be cached anyway, or at least the part you're actually using could be.
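
For reference, the run was along these lines (the file path, thread count, and duration here are illustrative, not necessarily the exact ones I used; the flags are standard SQLIO switches: -kR for reads, -frandom for random access, -b4 for 4KB blocks, -o8 for eight outstanding requests, -LS for latency stats, and -BN to bypass OS buffering):

    sqlio -kR -frandom -b4 -o8 -t1 -s60 -LS -BN t:\testfile.dat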

You're supposed to benchmark with a test file at least 2-4 times the size of your cache, but I only used a 1GB test file, since I didn't have time to run it again with a bigger one. Still, those are pretty good numbers!
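
If you want to size the test file properly, SQLIO can take a parameter file via -F, where each line gives the path, thread count, usage mask, and file size in MB. Something like this (path and numbers are just a sketch) would create a 40GB file, twice the 20GB of cache:

    t:\testfile.dat 1 0x0 40960

Then you'd run the same command as above with -Fparam.txt instead of naming the file directly.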

Anyways, I'm very happy with this outcome. Also, the average read latency, according to SQLIO, is 0 milliseconds. That's right: zero. Obviously, SQLIO needs an option to display the results in microseconds (or nanoseconds), but it looks pretty good.
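
You can back out the real number, though: with a steady queue, average latency is roughly outstanding requests divided by IOPS (Little's law). Assuming the run kept all 8 requests in flight:

    8 / 14,000 IOPS ≈ 0.57 ms per I/O

That's well under a millisecond, which is why SQLIO's whole-millisecond display shows it as 0.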

Sequential reads are around 96 MB/s, so that's obviously bumping up against the test system's initiator-side 1Gb FC interface (which is sitting on a mere 33MHz/32-bit PCI bus, at that).
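
The sequential test is the same kind of run, just with -fsequential and a bigger block size (64KB here is my usual choice for throughput tests, not necessarily what I ran; path and duration are again illustrative):

    sqlio -kR -fsequential -b64 -o8 -t1 -s60 -BN t:\testfile.dat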

I'm sorry I didn't get the chance to test the system with a test file bigger than the cache and with a 4Gb PCIe FC card on both ends. I should've measured write speeds, too. It would've been very interesting. Next time, I guess!
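
For what it's worth, the write test would just be a flag change: -kW instead of -kR (keeping in mind it will overwrite the test file's contents):

    sqlio -kW -frandom -b4 -o8 -t1 -s60 -LS -BN t:\testfile.dat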