G'day Ralph,
Yep, I was surprised as well, normally this forum is pretty good that way. There are a couple of KB entries, but nothing that really ties it all together.
So we are in the process of doing the testing now. I'll update this thread if I find anything that works (or doesn't) for us.
Hi, just wanted to jump in on this thread... I've got two vSphere 4.0 hosts and two DSS v6 boxes running in an iSCSI failover configuration. I had been running the out-of-the-box defaults, but this weekend I played around with the iSCSI target tweaks listed above.
My experience was that the tweaks worked, with the exception of MaxBurstLength = 16776192. When I set that, I lost contact with my LUNs: inside my VMs, basically all disk access froze. Luckily I still had console access, and once I set MaxBurstLength back to the default of 262144 and reset the iSCSI connections, my disks came back.
I tried a MaxBurstLength of 1047552 and that worked. I then tried 2097152 and the LUNs froze again, so I went back to 1047552 and ran Sunday and today with that value.
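For what it's worth, converting the values I tried from bytes to KiB makes the pattern easier to see (plain arithmetic, nothing DSS-specific; the labels are mine):

```python
# MaxBurstLength values (in bytes) from my testing, converted to KiB.
values = {
    "default (worked)": 262144,
    "first try (worked)": 1047552,
    "second try (froze)": 2097152,
    "KB tweak (froze)": 16776192,
}
for label, nbytes in values.items():
    print(f"{label}: {nbytes} B = {nbytes // 1024} KiB")
```

So the default is 256 KiB, the value that worked for me sits just under 1 MiB (1023 KiB), and both values that froze my LUNs are 2 MiB and up.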
The changes that I saw were marginal. I ran a 32k 100% write test using Iometer: with the default settings I got 170 IOPS, and with the new tweaks 195 IOPS, so roughly 15% better in my unscientific testing.
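To put throughput numbers on those IOPS figures (simple arithmetic, assuming Iometer's 32k transfer size means 32 KiB per I/O):

```python
BLOCK_KIB = 32  # Iometer transfer size: 32 KiB per I/O

def throughput_mib_s(iops):
    """Write throughput in MiB/s at the given IOPS and 32 KiB block size."""
    return iops * BLOCK_KIB / 1024

before, after = 170, 195
print(f"default: {throughput_mib_s(before):.2f} MiB/s")  # 5.31
print(f"tweaked: {throughput_mib_s(after):.2f} MiB/s")   # 6.09
print(f"gain: {(after - before) / before:.1%}")          # 14.7%
```

Either way, well under what a single gigabit link can carry, so the bottleneck is clearly not the wire in this test.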
Has anyone else tried the MaxBurstLength setting, and what's your experience with it?
Writeback caching? Off, because of the iSCSI failover replication.
Bonding? Yes, using bonding on both iSCSI channel and replication channel (and mgmt, for what it's worth). I have two built-in gig ports and a quad-port intel NIC, so three pairs of network interfaces.