G'day Andy,
This is our desired scenario too.
ESX VMFS-formatted iSCSI targets being constantly replicated (async is fine) to a DRP DSS (in this case over the GB LAN).
So I think it will be possible, but at this stage the great unknown for us is whether it will work in a production environment.
The last thing we want is for it to seem to work, only to find corrupted data when we fail over to the DRP DSS.
G'day Todd,
I just re-read this reply and this is really going to sound stupid, but what are the "obvious" reasons?
I have (perhaps mistakenly) assumed that using the Writeback Cache was OK as long as the server was under UPS protection and running a redundant PSU.
Are there other dangers to using the WB option? And is this specific to replication, or does it apply in general (i.e. should I also have it off on the "Master")?
With replication, does the ECC process only confirm that data has reached the "slave", rather than that it has actually been written to the slave's storage?
Since the WB cache made a big difference for smaller transfers in my testing, we have left it on.
So should we now turn it off during replication and then, as part of the DR procedure, turn it back on before we make the server live?
The reason is that we are not able to replicate the cache, and not all applications will notice this small amount of missing data. We suggest not using the WB cache for this reason.
We will be updating the notes on this topic in a future release.
Yes, but not in all instances: some applications are aware that data was not transferred and will resend what is missing, but this will be a very small amount and in most cases too small to notice. We will be looking into a feature to replicate the cache in the future, but this is a very expensive ($$$) feature; even with our competitors it is extremely expensive.
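To illustrate the point above, here is a minimal sketch (plain Python, purely illustrative, not DSS code; the class, block numbers, and data are made up): the host gets an ACK as soon as a write lands in the write-back cache, but the replication stream only carries what has been destaged to disk, so failing over to the secondary silently drops the cached writes.

# Minimal sketch (not DSS internals) of why data sitting in a write-back
# cache is lost on failover: async replication only copies blocks that
# have reached the primary's disk, never the cache itself.

class WriteBackVolume:
    def __init__(self):
        self.disk = {}    # blocks on stable storage (what replication sees)
        self.cache = {}   # writes already ACKed to the host, not yet flushed

    def write(self, block, data):
        self.cache[block] = data   # acknowledge immediately -> fast small transfers
        return "ACK"

    def flush(self):
        self.disk.update(self.cache)   # destage cache to disk
        self.cache.clear()


def replicate(primary, replica_disk):
    # Async replication copies the primary's on-disk image only.
    replica_disk.update(primary.disk)


primary = WriteBackVolume()
replica = {}

primary.write(1, "old data")
primary.flush()                        # reaches disk, so it will replicate
primary.write(2, "ACKed, cache only")  # host believes this write is safe

replicate(primary, replica)            # primary fails right after this
print(replica)                         # {1: 'old data'} -- block 2 is gone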