I've bought two Open-E DSS Server 16TB licenses, but before installing them on my production servers, I'm evaluating Open-E DSS Lite on my test hardware.
The test machines are Shuttle PCs with a Marvell 88E8056 PCI-E gigabit network controller plus a second NIC, a Realtek 8139. The systems are connected to a 3Com 3CBLSG24 switch that is divided into two VLANs: one for storage (on the Marvell cards) and one for normal access (on the Realtek cards).
I've installed Open-E DSS Lite on one system, 64-bit Debian on a second, and Windows 2003 on a third.
iSCSI itself is working perfectly, with both Open-iSCSI and the Microsoft iSCSI initiator.
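For reference, this is roughly how the Debian box connects with Open-iSCSI (the target IP and IQN below are placeholders, not my actual values):

```shell
# Ask the Open-E box which targets it exposes (IP is a placeholder)
iscsiadm -m discovery -t sendtargets -p 192.168.10.10

# Log in to the discovered target (IQN is a placeholder)
iscsiadm -m node -T iqn.2008-01.com.example:storage.test -p 192.168.10.10 --login

# The new iSCSI disk should then appear, e.g. as /dev/sdb
cat /proc/partitions
```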
BUT the performance seems to be very slow.
On Linux:
/dev/sda (local disk)
Timing buffered disk reads: 222 MB in 3.01 seconds = 73.64 MB/sec
/dev/sdb (iscsi disk on open-e):
Timing buffered disk reads: 76 MB in 3.09 seconds = 24.63 MB/sec
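(Those numbers are from hdparm's buffered-read test, in case anyone wants to compare on their own setup; the device names are from mine:)

```shell
# Buffered sequential read benchmark, run as root
hdparm -t /dev/sda   # local disk
hdparm -t /dev/sdb   # iSCSI disk on the Open-E box
```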
This is really disappointing; it's nowhere near gigabit performance. I've set jumbo frames (MTU 9000) on both the iSCSI target and the client machines, but that doesn't seem to help.
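In case it helps anyone checking the same thing, this is roughly how I set and verified the MTU (the interface name and IP are placeholders for my storage NIC and the Open-E box; the switch ports also need jumbo frames enabled or the ping test will fail):

```shell
# Set the MTU on the storage interface (eth0 is a placeholder)
ifconfig eth0 mtu 9000

# Verify jumbo frames actually survive the whole path:
# -M do sets don't-fragment; -s 8972 = 9000 minus 28 bytes of IP+ICMP headers
ping -M do -s 8972 192.168.10.10
```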
I know this is only a test setup, and it's the lite version of the product. What can I do about the performance? Am I doing something wrong?
Check that you have write-back caching enabled, and also check the cache size.
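One place worth checking is the drives' own write cache, which you can inspect and enable with hdparm (the device name is just an example):

```shell
# Show whether the drive's write cache is on (0 = off, 1 = on)
hdparm -W /dev/sda

# Turn it on
hdparm -W1 /dev/sda
```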
Are you using hardware or software RAID?
What RAID controller and drives are you using?
What type of RAID array do you have set up?
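If it turns out to be Linux software RAID, the array type and current status are easy to check (assuming you can get a shell on the box):

```shell
# Shows each md array, its RAID level, member disks, and sync status
cat /proc/mdstat
```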
Please post the full specs of all systems (server and clients). That kind of all-integrated, single-PCI-bus home-computer architecture will not give you good performance over iSCSI; you need a fast CPU and a fast bus on both the server and the clients. iSCSI needs a lot of power to reach 100 MB/s.