Okay. So, is there a plan to implement SCSI over RDMA? That, coupled with solid-state disks (SATA/SAS-based with RAM, the new Intel SSDs, or even PCIe-based ones), or even just a server loaded up with RAM (and, with MetaRAM, soon a lot more of it), might push IOPS above 100,000. That would be completely insane. Right now, with a couple of GbE ports teamed via MPIO (I know, not the best), 4GB of system RAM, and a 12-disk RAID 5 array (with 2GB of cache on the RAID controller), we get about 2,400 IOPS with the SQLIO benchmark.
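
For anyone who wants to sanity-check their own numbers without SQLIO, here's a rough Python sketch of a single-threaded random-read test (the file path, 8KB block size, and 20-second run are placeholders I picked, and queue depth 1 won't match SQLIO runs with multiple outstanding I/Os):

    # Crude random-read IOPS check -- not SQLIO, just a stand-in for a sanity test.
    # Assumes Linux and a large pre-created test file (or raw device) passed as argv[1].
    import mmap
    import os
    import random
    import sys
    import time

    path = sys.argv[1]          # e.g. a multi-GB test file sitting on the array
    block = 8 * 1024            # 8KB reads; block size is an arbitrary choice here
    duration = 20               # seconds to run; also arbitrary

    fd = os.open(path, os.O_RDONLY | os.O_DIRECT)   # bypass the page cache
    size = os.lseek(fd, 0, os.SEEK_END)
    buf = mmap.mmap(-1, block)  # anonymous mapping = page-aligned buffer for O_DIRECT

    ops = 0
    end = time.time() + duration
    while time.time() < end:
        # pick a block-aligned offset at random and issue one synchronous read
        offset = random.randrange(0, size // block) * block
        os.preadv(fd, [buf], offset)
        ops += 1

    os.close(fd)
    print("approx random-read IOPS (QD1):", ops // duration)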

I've read that InfiniBand (with RDMA) can, in some cases, deliver three times the IOPS of Fibre Channel. Has anyone used IP over InfiniBand? What sort of IOPS are you getting?

Is Open-E looking at doing any sort of RDMA/iSER work? (Or even FCoE?)

I got to thinking about this because if we're going to be moving to SSDs in the next couple of years, it'd be nice to have a protocol that can take advantage of the low latencies they make possible. And x4 InfiniBand PCIe cards are about the same price (and bandwidth) as 10GbE cards (you can find them pretty easily for under $1000, and for much less than that refurbished, obviously).

Well, whether you use InfiniBand or 10GbE, RDMA (vs. TCP/IP) should get you a lot more IOPS just by stripping out protocol overhead that's unnecessary on a small SAN (especially point-to-point, no switch) where you aren't likely to drop any data.
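
Here's the back-of-envelope version of that argument (the latency figures are guesses on my part, not measurements): each outstanding I/O can complete at most once per round trip, so the IOPS ceiling is roughly queue depth divided by round-trip latency, and the software stack dominates that round trip once spinning disks are out of the picture.

    # IOPS ceiling for a given round-trip latency: iops ~= queue_depth / latency.
    # The latencies below are illustrative guesses, not benchmark results.
    def iops_ceiling(latency_s, queue_depth):
        return queue_depth / latency_s

    transports = [("iSCSI over TCP/IP (guess)", 150e-6),
                  ("iSER/SRP over RDMA (guess)", 30e-6)]

    for name, latency in transports:
        for qd in (1, 8, 32):
            print(f"{name:28s} QD{qd:>2}: ~{iops_ceiling(latency, qd):,.0f} IOPS ceiling")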

Also, what are the highest IOPS that you (i.e., Open-E and any of you customers out there) are getting with Fibre Channel or iSCSI?