Hello,
I have DSS V6 Lite installed on an HP box (Smart Array 641, 5 SCSI HDDs in RAID 5, dual Xeon 3 GHz, 3 GB RAM).
I have configured a volume group.
Inside the volume group I have an iSCSI volume in File I/O mode (160 GB, initialized) and a NAS volume (100 GB).
This box is connected to my storage LAN using the integrated Broadcom NIC.
The iSCSI volume is used by my VMware cluster (2 hosts) and is VMFS-formatted.
The NAS volume is shared on my network.
If I copy data from a Windows box to the NAS volume, I get about 10 MB/sec.
If I copy data from the ESX hosts to the iSCSI volume (when I migrate a virtual machine, for example), I get only 3 MB/sec.
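To take the client software out of the equation, a raw sequential write with dd from a Linux client gives a comparable number for both paths. This is just a sketch: /tmp stands in for the real target, and you would point of= at the NAS mount (or a file on the iSCSI-backed volume) instead.

```shell
# Sequential write benchmark sketch; /tmp is a placeholder path.
# conv=fdatasync forces the data to disk before dd reports, so the
# page cache does not inflate the throughput figure.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=100 conv=fdatasync
rm -f /tmp/ddtest
```

Run it once against the Samba mount and once against a file on the iSCSI volume; if dd shows the same ~10 vs ~3 MB/sec gap, the problem is below the protocol clients.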
Reading from the iSCSI volume gives the same performance as reading from the SAN volume.
I have tried many combinations of iSCSI target parameters.
I have another Linux box running SCST in fileio mode with the same target parameters as Open-E, and it outperforms Open-E when writing data.
Any ideas?
Make sure the cache setting on the RAID controller is set to Write Back, not Write Through. Check test.log: look for the ifconfig -a output and confirm the NICs are running at 1000. Also check sda or sdb to see what the hdparm speeds are.
Nice try...
The problem is with iSCSI only; with Samba, performance is OK.
In any case, here are the values you asked for:
*-----------------------------------------------------------------------------*
hdparm -t /dev/cciss/c0d1
*-----------------------------------------------------------------------------*
/dev/cciss/c0d1:
Timing buffered disk reads: 148 MB in 3.01 seconds = 49.14 MB/sec
In my own DSS Lite test with a 3ware controller I get higher figures when the cache is enabled.
hdparm -t /dev/sda
*-----------------------------------------------------------------------------*
/dev/sda:
Timing buffered disk reads: 354 MB in 3.00 seconds = 117.99 MB/sec
Also, are these ES (server-grade) drives? Try connecting directly to the server. You might also want to force the speed to 1000 instead of auto-negotiation via the modify driver feature in the Console Tools.
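From a shell, the negotiated link speed can be checked with ethtool before forcing anything. The interface name eth0 below is an assumption; substitute whatever your Broadcom NIC is called.

```shell
# Show the currently negotiated link speed (eth0 is an assumption).
ethtool eth0 2>/dev/null | grep -i speed || echo "ethtool not available here"
# If it reports less than 1000Mb/s, gigabit can be forced (run with care,
# and make sure the switch port matches):
# ethtool -s eth0 speed 1000 duplex full autoneg off
```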
Hello,
The machine is quite old and 50 MB/sec is fine for my purposes.
The problem is that I get that kind of performance when writing with Samba.
When using iSCSI I get only 3 MB/sec!
The problem is not in the network or disk array (Samba performs fast enough); I think the problem is with some parameter in the SCST configuration.
WB Enabled.
Tried different settings:
The defaults
The settings I use on another Linux box with SCST, which shows far better write performance
The settings proposed for vSphere in another post
Write performance is always 3 MB/sec.
About the other Linux box with SCST in fileio: what are its read performance figures, what hardware is it, and was it tested on the same system? Can we get the hdparm results from that box's logs?
Is the Linux box running in 64-bit mode? What are its specs?
I believe this might be an isolated case, as I don't have older systems to test with. Can you test with the DSS V6 trial version?
Also look at the dmesg logs to see if anything shows up that could explain what is going on.
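A quick filter like the following can surface iSCSI/SCST or I/O errors in the kernel ring buffer; the pattern list is only a starting point and no matches is useful information too.

```shell
# Scan the kernel log for target/transport-related errors.
dmesg | grep -iE 'iscsi|scst|i/o error' || echo "no matching kernel messages"
```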
The other Linux box is my production SAN.
It is newer (E200i controller, SAS 15k disks, 4 GB RAM; hdparm shows about 130 MB/sec) and I cannot use it for testing.
It uses fileio on an entire partition of a logical RAID 5 disk.
I am testing the DSS V6 trial (latest build) on an older box to see whether its performance is good enough for production use.
Since write performance with iSCSI is poor, I tested with Samba to make sure my older hardware is OK.