I am using Open-E on a server that I am attempting to turn into a SAN. My VM host servers are running Windows Server 2008 R2 with Hyper-V (the full installation, not Server Core).
I have just created volume groups. I want to create disk partitions and put a virtual machine on each one, then set up iSCSI connections so that the VMs live on the Open-E storage server and my Hyper-V servers can connect to them over iSCSI.
Should I use File or Block I/O for a Hyper-V setup? And if it is File, which initialization speed should I pick: slow, medium, or fast?
What does one do if they have already set up ESX servers with File I/O before Open-E V6 was certified with ESX 4? Before that certification, File I/O was the recommended way to go.
But with File I/O, plenty of RAM, and a fast enough CPU (quad-core Xeon), shouldn't the filesystem cache improve read speed? So I don't understand how Block I/O would be faster than File I/O in a virtualized environment like ESX or XenServer. We are currently building two systems (one as failover), each with 5 x 15k SAS disks and 5 x SATA disks; both arrays will be iSCSI targets for XenServer hosts. Until now, we were convinced File I/O was the way to go. Could you explain a little more what new insights led to this change, since File I/O was recommended before?
This is because in the past, with DSS V5, we used IET as the iSCSI target solution; now SCST is the default, and a new version of SCST will ship in the next release.
Use Block I/O, as we certified with VMware ESX 4.0 using Block I/O; the warning message will be removed from the GUI in the next release. File I/O, as stated at the link, adds a filesystem layer, so it is best to use Block I/O as we certified it with VMware.
Use Block I/O for iSCSI if you want to be able to use Open-E's volume replication and automatic failover features. With Block I/O the File I/O initialisation step is not necessary, and the performance difference between the two is very small.
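For the Hyper-V side of the original question: once the Block I/O iSCSI volume is published as a target on the DSS box, the 2008 R2 host can attach it with the built-in Microsoft iSCSI initiator CLI. A rough sketch (the portal IP and the IQN are placeholders; your actual target name is shown by ListTargets):

```
:: Register the Open-E box as a target portal (default iSCSI port 3260)
iscsicli QAddTargetPortal 192.168.1.50

:: Discover the targets the DSS server is exporting
iscsicli ListTargets

:: Log in to the target (use the IQN reported by ListTargets)
iscsicli QLoginTarget iqn.2010-01.com.open-e:vm01

:: Make the session persist across reboots
iscsicli PersistentLoginTarget iqn.2010-01.com.open-e:vm01 T * * * * * * * * * * * * * * * 0
```

After the login, the LUN appears as a new disk in Disk Management; bring it online, initialize it, and format it with NTFS before placing the VM's VHD on it.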