Thanks for that document. I'm going to read it.

In my Iometer testing I've seen a big difference depending on the sort of traffic you send to the SAN. If you send large sequential blocks the speed is high; that's what I did for the figures in my first post (no memory usage change on Open-E). When sending a lot of small random blocks the performance drops to less than 10 MB/s.

And that worries me.
I already have 30 Xen servers running on 7 host systems with local disks. I've measured the IO traffic of those servers and found out that for 95% of the day they are doing virtually no reads, and continuous writes of about 10 kB/s. (I thought it would be the other way around, but it is consistent on all Xen domUs.)

When I scale those figures to 100 Xen domUs I don't know what to expect when I transfer those Xen servers to the SAN...

Does anyone know how to measure the sort of disk IO Linux does? For example: 10 1024-byte random reads, 28 1 MB sequential writes, 15 64 kB random writes, etc.
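To show the kind of breakdown I'm after: a rough sketch of what I could already get from /proc/diskstats, which counts completed requests and sectors per device. Two samples a few seconds apart give the request rate and average request size (though not random vs. sequential; for per-request offsets and sizes something like blktrace would be needed). The field layout below is from the kernel's iostats documentation; the sample interval and device name are just placeholders.

```python
import time

# /proc/diskstats fields (after major, minor, device name):
#   reads_completed reads_merged sectors_read ms_reading
#   writes_completed writes_merged sectors_written ms_writing ...
# One sector = 512 bytes.

def parse_diskstats(text, device):
    """Return (reads, sectors_read, writes, sectors_written) for one device."""
    for line in text.splitlines():
        f = line.split()
        if len(f) >= 11 and f[2] == device:
            return int(f[3]), int(f[5]), int(f[7]), int(f[9])
    raise ValueError("device %s not found" % device)

def avg_request_kb(before, after):
    """Average read/write request size in KiB between two samples."""
    d_reads = after[0] - before[0]
    d_rsect = after[1] - before[1]
    d_writes = after[2] - before[2]
    d_wsect = after[3] - before[3]
    read_kb = (d_rsect * 512 / 1024 / d_reads) if d_reads else 0.0
    write_kb = (d_wsect * 512 / 1024 / d_writes) if d_writes else 0.0
    return read_kb, write_kb

if __name__ == "__main__":
    dev = "sda"  # placeholder: whichever device backs the domUs
    with open("/proc/diskstats") as f:
        before = parse_diskstats(f.read(), dev)
    time.sleep(5)  # sample interval, arbitrary
    with open("/proc/diskstats") as f:
        after = parse_diskstats(f.read(), dev)
    print("avg read/write request size (KiB):", avg_request_kb(before, after))
```

That only answers the "what size" half of the question; classifying requests as random or sequential means logging each request's start sector (blktrace/blkparse can do that) and checking whether consecutive requests are contiguous.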