As we have discussed here a few times, I have the problem that iSCSI in Block I/O mode is very slow when the target is used mostly for random access, because the device cache is smaller than the cache used when the target runs in File I/O mode.
What happens when I install an FC HBA? How much cache will be used then?
And if I want to switch to FC, is there a way to convert my Block I/O volumes so that I can use them with FC? Or do I have to make a backup and restore to a new volume? (It seems so, because I can enter a block size for FC but not for iSCSI...)
Even if the cache size is the same, what happens to the performance when I switch from iSCSI to FC? Perhaps it is faster because the 4K block size is bigger than with iSCSI? With VMware I can't use MPIO, so I only have 1 Gbit/s; with FC I can use 4 Gbit/s, which could be faster too...
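As a rough back-of-the-envelope comparison of the two link speeds, here is a minimal Python sketch; the overhead and efficiency factors are my own guesses, not measurements:

    # Rough payload-throughput ceilings for the two links; the
    # overhead factors below are assumptions, not measurements.
    links = {
        "1 Gbit/s iSCSI": (1.00e9, 0.90),  # assuming ~10% TCP/IP + iSCSI overhead
        "4 Gbit/s FC":    (4.25e9, 0.80),  # 8b/10b line coding leaves 80% for data
    }

    for name, (line_rate_bps, efficiency) in links.items():
        mb_per_s = line_rate_bps * efficiency / 8 / 1e6
        print(f"{name}: ~{mb_per_s:.0f} MB/s usable payload")

So the raw pipe alone is roughly four times wider (~112 MB/s vs. ~425 MB/s), before any caching or RAID effects come into play.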
And the last question: when using FC, will there be changes in the future that require deleting and recreating the volume to get more performance?
Of course, moving to 4 Gbit FC is the better solution if you want faster performance and transfers, since iSCSI depends on your network speed. I recommend you back up your data before moving to FC; you never know what could interrupt the conversion process.
Regarding the FC changes, wait and see what the site admin says, as I have no more clue than you.
Sure, making a backup before a conversion is better, if a conversion is possible at all. But a backup alone is much better than a backup and restore of 2 TB of data. The backup alone already takes 30 hours here to an LTO-1 drive...
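For what it's worth, that 30-hour window is about what the drive itself can manage; a quick sanity check (only the 2 TB figure is taken from above):

    # Sanity check of the quoted backup window: 2 TB in 30 hours.
    payload_bytes = 2 * 10**12          # 2 TB, from the post above
    seconds = 30 * 3600                 # 30 hours
    rate_mb_s = payload_bytes / seconds / 1e6
    print(f"Effective rate: ~{rate_mb_s:.0f} MB/s")  # ~19 MB/s
    # LTO-1 writes at roughly 15 MB/s native, so 30 hours for 2 TB is
    # plausible - and a restore would take about as long again.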
I think the higher data transfer rate will result in better performance, because the time for the data to travel from the target to the initiator is smaller.
All of this also depends on the performance of the RAID, I know...
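To put a number on that travel time, here is a small sketch using the payload rates from my earlier sketch (so these figures inherit those assumptions):

    # Wire time for one 4K block at the payload rates estimated earlier
    # (those rates were assumptions, so these numbers are too).
    BLOCK_BYTES = 4 * 1024
    for name, mb_per_s in [("1 Gbit/s iSCSI", 112), ("4 Gbit/s FC", 425)]:
        wire_time_us = BLOCK_BYTES / (mb_per_s * 1e6) * 1e6
        print(f"{name}: ~{wire_time_us:.0f} µs per 4K block")

Against a random-access disk seek of several milliseconds, saving a few tens of microseconds per block on the wire is small, which is why the RAID's own performance tends to dominate, as I said.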
Sorry to say, we don't have a way to convert volumes for interchange between FC and iSCSI. So for now, and most likely for several releases, you will still need to delete and recreate the volumes to assign them as FC or iSCSI.
But if you went out and bought 10 GbE you wouldn't have to; problem solved!
I'm currently in a similar boat... I need more than 129 MB/s on a single path to a host! Multipathing is great if you are threading the data in multiple streams, but that's not how a single host works, and that's what I need: around 300 MB/s to each host. The cost seems to be about $200 per port on the server and about $500 more per port on a switch. For what is, in theory, the next evolution in high-speed infrastructure, and something that's still going to be considered smokin' fast 5 years from now, that's not a bad price to pay. What do you think, Todd? Have you guys played at all with a pure 10 GbE SAN?
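That 129 MB/s is right at the ceiling of a single 1 GbE path, which a quick sketch shows (the 300 MB/s target is from above; the efficiency factor is a guess):

    # Single-path ceiling vs. the ~300 MB/s per-host target from the post.
    TARGET_MB_S = 300
    for name, gbits in [("1 GbE", 1), ("10 GbE", 10)]:
        ceiling = gbits * 1e9 / 8 * 0.90 / 1e6  # assuming ~10% protocol overhead
        verdict = "enough" if ceiling >= TARGET_MB_S else "not enough"
        print(f"{name}: ~{ceiling:.0f} MB/s per path -> {verdict}")

A single 10 GbE path (~1125 MB/s by this estimate) clears the 300 MB/s per-host target with plenty of headroom, no multipathing required.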
The Neterions are right in front of me (4 feet away, to be exact) - I have the cable to interconnect them; I just need the switch and the time. I might do a short 2-minute video demo of the replication function over the 10GbE Neterions in mid-January. netsyphon, I can set up 2 systems for you to check out, or I can provide a web demo - let me know.