Thanks for your comments, sorry for more questions!
... then you will still be able to transfer data to the source system, but all the data written to the source will be resynced once the connection is re-established. If the link is severed, the cluster is in a sort of degraded mode.
Was this doc referring to the asynchronous or the synchronous case?

But if we are talking about the synchronous case, I don't see where the limitation is for the scenario I am considering, and it implies we could use iSCSI auto failover in a situation where the data may not be fully up to date (or perhaps a failover would have to wait until the sync status was "ok").

This won't happen really with merely a slow replication link.
Sorry, but I don't quite understand why this is the case. As WAN links get faster, WANs and LANs start to merge, and I can easily see the situation where people wish to use synchronous replication over links that may not be able to cope with intense writes (eg. 100's of MB/s), but can happily cope with average replication traffic (eg. only a few MB/s). In this situation, and if the model supports it, the source & destination must get a little out of step, which could be a good thing (flexible).

The case for asynchronous replication over a slow WAN link, run (say) on a daily basis, is simple and clear: it's used like an off-site backup, where day-old data is fine.
But when we consider almost continuous replication over a medium-speed link, it's not so clear-cut, as now we are concerned with the bandwidth of the link - maybe fast enough to keep up most of the time, but not all.
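To make that "fast enough on average, but not at peak" idea concrete, here's a toy simulation (all numbers hypothetical, nothing to do with DSS internals): a bounded replication buffer absorbs a write burst and drains at link speed, so the destination lags but nothing stalls as long as the average rate fits the link.

```python
LINK_MBPS = 10     # hypothetical WAN drain rate, MB per second
BUFFER_MB = 4096   # hypothetical replication buffer (the 4GB mentioned below)

def simulate(workload):
    """workload: list of MB written by the source each second.
    Returns per-second buffer occupancy; raises if the buffer fills."""
    backlog = 0
    history = []
    for written in workload:
        backlog += written
        backlog = max(0, backlog - LINK_MBPS)  # link drains the buffer
        if backlog > BUFFER_MB:
            raise RuntimeError("buffer full: source writes would now have to stall")
        history.append(backlog)
    return history

# a 30s burst at 100 MB/s, then quiet: the average fits the link, the peak does not
burst = [100] * 30 + [0] * 300
occupancy = simulate(burst)
print(max(occupancy))   # → 2700  (peak backlog the buffer must hold)
print(occupancy[-1])    # → 0    (backlog fully drained once the burst ends)
```

So the question is really whether the buffer is sized for the worst burst, not for the average rate.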

There is an option to enable an asynchronous (i.e. buffering) protocol that will allow replication over slower (i.e. WAN) links while keeping both the source and destination in a "consistent" (in this case, meaning non-corrupt) state but not necessarily in an up-to-date state.
I think I need to do a little more technical reading on how this system works, as the term "consistent" means different things when going between (say) a file on an NTFS-formatted LUN and the various blocks that make it up. If a single block is changed on the source but not on the destination, then the destination may well be potentially corrupted as far as user data is concerned; but if the replication process has kept track of all changes, then it is still overall "consistent".
Maybe the asynchronous version coordinates the process much better, making it more certain at any point in time what is consistent.
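My (possibly naive) mental model of that "stale but consistent" property: if the replicator applies writes at the destination in exactly the order the source issued them, then after any prefix of the stream the destination equals some past point-in-time image of the source - out of date, but never a corrupt mix of old and new blocks. A toy sketch (not DSS's actual mechanism):

```python
# Toy model: a disk is a dict of block_number -> contents. The replicator
# replays writes strictly in source order, so stopping after any prefix
# leaves the destination equal to a past state of the source.

def apply_in_order(writes, upto):
    """Replay the first `upto` writes; the result is a point-in-time image."""
    disk = {}
    for block, value in writes[:upto]:
        disk[block] = value
    return disk

writes = [(0, "superblock v1"), (5, "data A"),
          (0, "superblock v2"), (5, "data B")]

source = apply_in_order(writes, len(writes))
lagged = apply_in_order(writes, 2)   # replication link fell behind here

# The lagged copy matches what the source looked like after write 2:
# stale, but internally consistent (superblock v1 goes with data A).
print(lagged)   # → {0: 'superblock v1', 5: 'data A'}
```

Applied out of order, the destination could pair superblock v2 with data A - exactly the user-data corruption I was worried about above.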

I would just put a RAID card with lots (like 4GB) of buffer space in the destination side.
When you say "buffer space", do you mean RAID cache RAM? The cards we use have 512MB of cache fixed, so we can't change that. If you mean more RAM in the destination DSS system, that is certainly possible, but AFAIK very little RAM gets used for block-IO volumes (just buffers). Maybe it's possible to replicate a block-IO source volume to a file-IO destination volume, and thus take advantage of cheap DDR2 RAM? (eg. use 4GB in the source & 16GB in the destination).
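As a rough back-of-envelope on what a given amount of buffer actually buys (all figures hypothetical):

```python
def burst_seconds(buffer_mb, burst_mb_s, link_mb_s):
    """How long a write burst the buffer can absorb before it fills,
    given that the link drains it concurrently."""
    net_fill = burst_mb_s - link_mb_s   # MB/s the buffer gains during the burst
    if net_fill <= 0:
        return float("inf")             # link keeps up; buffer never fills
    return buffer_mb / net_fill

# 100 MB/s write burst over a 10 MB/s link:
print(burst_seconds(512, 100, 10))     # 512MB RAID cache: roughly 5.7s of burst
print(burst_seconds(4096, 100, 10))    # 4GB: roughly 45s
print(burst_seconds(16384, 100, 10))   # 16GB of cheap RAM: roughly 3 minutes
```

Which suggests a fixed 512MB card buys very little headroom, while commodity RAM on the destination side changes the picture considerably.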

Iscsi autofailover I don't think works with asynchronous replication.
I do understand that, it's well documented; but again, if synchronous volume replication can accommodate some degree of asynchronicity (not sure if that's a real word!), then there's an in-between option.