The release notes say nothing about mixed environments; they only mention volumes, not volume groups or units.
At the moment my default iSCSI target type is FileIO, and I have free space in my volume group. If I understand correctly, I can now switch to BlockIO in the console and create a new volume in the free space of the volume group. The result would be that I have my two old iSCSI volumes in FileIO and a new one in BlockIO.
Correct?
I got a demo CD: I used my e-mail address to get an access code, and with this code I can download the CD.
New volumes are created in Block IO, and as stated below, to keep using data from old volumes that were created in File IO, the data will need to be backed up and then restored onto new volumes. You can still use your existing volumes as they were created in File IO, but you will not be able to mix the two types for replication. In the end you would be better off keeping them all either File IO or Block IO.
Release Notes:
NEW:
* IMPORTANT: In versions 1.30 and 1.32, new iSCSI default volume creation is done in block-IO, in contrast to older versions, which used file-IO.
In order to create a new iSCSI volume in the "old" file-IO mode, please switch the default in the console tools menu: ctrl-alt-w --> Tuning options --> iSCSI daemon options --> iSCSI volume type.
Block-IO mode is about 30% faster than file-IO, and the target volume size is exactly equal to the defined iSCSI volume size (in file-IO the target size is slightly smaller than the defined iSCSI volume size).
Additionally, initialization is no longer required, as it was with the old file-IO volumes.
In order to migrate your data from a file-IO volume to a block-IO volume, the data must be backed up from the existing iSCSI volumes and then restored onto newly created iSCSI volumes.
NOTE: Please verify data integrity prior to deleting the old iSCSI volumes.
Volume replication is only possible between two volumes of the same type (e.g. block-IO <-> block-IO or file-IO <-> file-IO). Volume replication between old file-IO and new block-IO volumes is NOT possible.
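The backup-and-restore migration described in the release notes can be sketched as a straight block copy followed by a checksum pass (matching the note about verifying data integrity before deleting the old volume). This is only an illustration: the device paths are hypothetical placeholders, and in practice you would point your normal backup tooling at the initiator-side devices.

```python
import hashlib

# Hypothetical initiator-side device paths -- substitute your own (assumption).
OLD_FILEIO_VOLUME = "/dev/sdb"   # existing file-IO iSCSI volume
NEW_BLOCKIO_VOLUME = "/dev/sdc"  # newly created block-IO iSCSI volume

def copy_volume(src, dst, length=None, chunk=1 << 20):
    """Copy `length` bytes (or until EOF) from src to dst in 1 MiB chunks.
    Returns (bytes copied, MD5 of the copied data) for later verification."""
    digest = hashlib.md5()
    copied = 0
    with open(src, "rb") as fin, open(dst, "r+b") as fout:
        while length is None or copied < length:
            want = chunk if length is None else min(chunk, length - copied)
            data = fin.read(want)
            if not data:
                break
            fout.write(data)
            digest.update(data)
            copied += len(data)
    return copied, digest.hexdigest()

def checksum_volume(path, length, chunk=1 << 20):
    """Re-read `length` bytes from path and return their MD5."""
    digest = hashlib.md5()
    read = 0
    with open(path, "rb") as f:
        while read < length:
            data = f.read(min(chunk, length - read))
            if not data:
                break
            digest.update(data)
            read += len(data)
    return digest.hexdigest()

# Usage against the real devices (not run here):
#   copied, src_md5 = copy_volume(OLD_FILEIO_VOLUME, NEW_BLOCKIO_VOLUME)
#   # Per the release notes: verify integrity before deleting the old volume.
#   assert checksum_volume(NEW_BLOCKIO_VOLUME, copied) == src_md5
```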
Yesterday I got version 2.32 and the small update for the Intel drivers from support to test.
The problem is still the same. I still think it is a cache problem.
What I'm thinking:
With BlockIO a small cache is used, so read and write access is not optimized. With sequential reads or writes from one client, or IOmeter with one worker, there is no problem. But when I use more workers, access the RAID from two clients at the same time, or do random access, performance drops extremely.
With FileIO a large cache (3.8 GB) is used, so writes can be better reordered before being written to disk, leaving more time for reads.
I made a test with IOmeter; this is the profile I used: smalltest.zip
I used the DSS Demo CD and the performance is the same: sequential read with one worker is fast, but with two workers the value drops after a few seconds.
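The one-worker versus two-worker comparison can also be reproduced outside IOmeter with a small concurrent-read harness. This is only a rough stand-in (plain Python threads reading disjoint regions of a local file, with an arbitrary test size), not the smalltest.zip profile itself, and for a meaningful result the test size would have to exceed the target's cache.

```python
import os, tempfile, time
from concurrent.futures import ThreadPoolExecutor

CHUNK = 1 << 20  # 1 MiB reads, similar to a sequential access spec

def sequential_read(path, offset, length):
    """Read `length` bytes starting at `offset` in 1 MiB chunks."""
    read = 0
    with open(path, "rb") as f:
        f.seek(offset)
        while read < length:
            data = f.read(min(CHUNK, length - read))
            if not data:
                break
            read += len(data)
    return read

def run_workers(path, size, workers):
    """Split the file into per-worker regions, read them concurrently,
    and return the aggregate throughput in MB/s."""
    span = size // workers
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(sequential_read, path, i * span, span)
                   for i in range(workers)]
        total = sum(f.result() for f in futures)
    elapsed = time.perf_counter() - start
    return total / elapsed / 1e6

if __name__ == "__main__":
    size = 16 * CHUNK  # small local file for illustration only
    with tempfile.NamedTemporaryFile(delete=False) as tmp:
        tmp.write(os.urandom(size))
        path = tmp.name
    try:
        for workers in (1, 2):
            print(f"{workers} worker(s): {run_workers(path, size, workers):.0f} MB/s")
    finally:
        os.unlink(path)
```

Against an iSCSI-mounted volume (with the file placed on the target) this should show the same pattern: one sequential stream stays fast, while two concurrent streams force the small BlockIO cache to alternate between regions.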