
Thread: Open-E ISCSI with SQL Server

  1. #1

    Open-E iSCSI with SQL Server

    I am using Open-E to run a SQL Server VM on a 2008 R2 Hyper-V cluster. We are experiencing 20-second to two-minute wait times for queries to complete, 4-6 times a day. Usually only one pause a day correlates with a high-I/O task, and high-I/O tasks like backups do not cause long wait times on the SQL Server when run manually. Also, only the SQL Server slows down; the other VMs continue functioning normally.

    1 Gb iSCSI connection on a dedicated network, with 4K jumbo frames. Open-E is on an Atom 330; the Open-E logs have never shown any significant CPU load (never more than 10%). We have moved the database to a volume with write-back caching enabled, with maybe some minor improvement, but it is hard to tell.
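    Since jumbo frames only help if every hop honors them, one quick sanity check (a minimal sketch, assuming a Linux host on the storage network; the target address below is a placeholder) is to ping the target with the "don't fragment" flag at the largest payload the MTU allows:

```shell
# Largest unfragmented ICMP payload = MTU - 28 bytes
# (20-byte IP header + 8-byte ICMP header).
MTU=4096
PAYLOAD=$((MTU - 28))
echo "max ICMP payload: $PAYLOAD"

# Ping the iSCSI target with DF set; 10.0.0.10 is a placeholder for
# the Open-E target's address. If this fails, some device on the path
# has a smaller MTU and is fragmenting or dropping the jumbo frames.
# ping -M do -s "$PAYLOAD" -c 3 10.0.0.10
```

    If the unfragmented ping fails, the traffic is silently falling back to fragmentation, which can produce exactly this kind of intermittent latency.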

    Any ideas?

  2. #2


    I would like to open this topic up to others, as I have heard some pros and cons of running SQL Server as a VM on either Hyper-V or VMware. Some have told me that they would never run SQL Server as a virtual machine.

    Now, if you double the CPU and memory specs for the SQL VM, and on the DSS side add the Write Back option as David did, that should almost be enough. But David, you might also want to use MPIO (you will need two NICs) and possibly a dedicated RAID 10 for the SQL VM (SAS or SSDs). I am not sure, but increasing some of the target values as below may also help; test first, not on a production system.


    Adjust the target values as follows:
    1. From the console, press CTRL+ALT+W.
    2. Select Tuning options -> iSCSI daemon options -> Target options.
    3. Select the target in question.
    4. Change the MaxRecvDataSegmentLength and MaxXmitDataSegmentLength values to the maximal required data size (check with the initiator to match).
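    For comparison, if anyone tests from a Linux open-iscsi initiator instead of the Windows one, the matching per-connection settings live in /etc/iscsi/iscsid.conf (the values below are illustrative, not a recommendation; the negotiated size is the minimum of what each side offers, so set both ends to match):

```
node.conn[0].iscsi.MaxRecvDataSegmentLength = 262144
node.conn[0].iscsi.MaxXmitDataSegmentLength = 262144
```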

    Does anyone else have some ideas? I would like to add this to our ??

    Thanks, David, for posting this, as it has been on my mind as well, and I don't have SQL Server to test with (too many other systems in testing right now).
    All the best,

    Todd Maxwell


  3. #3


    Are you using file or block I/O volumes?
    You didn't comment on the number of disks in your Open-E box, the RAID type, the controller, etc.
    When the timeouts occur, do they correspond with any other activity in the environment (e.g. backups, traffic bursts, stored procedures running, etc.)?
    Only looking at the CPU utilization on the SAN side is not sufficient. Unfortunately, I feel that the stats on the Open-E side need to be enhanced; even just giving us the data from "iostat -xk" would be sufficient IMO for disk I/O. It gives I/O, bandwidth, queue size, and utilization, and much better info than vmstat. In short, you are going to have to monitor the consumers of the storage and see what is happening at the time this occurs. You only have a single Gb connection, and gigabit Ethernet is very easy to saturate these days, so that could also be a contention point. You could use MPIO, which was already suggested by Mr. Maxwell.
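    To make the "iostat -xk" suggestion concrete, here is a rough sketch of pulling the utilization figure out of an iostat device line (the captured line below is made up, and sdb is a placeholder for whatever disk backs the SQL LUN); sustained values near 100% point at the disks rather than the network:

```shell
# In practice you would run: iostat -xk 5
# and watch await, avgqu-sz and %util for the device backing the LUN.
# Here we parse a made-up captured device line; %util is the last field.
line="sdb 12.0 340.0 96.0 5440.0 0.0 3.2 4.1 18.7 6.4 8.0 74.3"
util=$(echo "$line" | awk '{print $NF}')
echo "device utilization: ${util}%"
```

    The same one-liner works inside a cron job or a watch loop if you want a crude utilization log while waiting for the next stall to happen.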
    Next, I have no problems running either MSSQL or Oracle as a VM. We do it all the time and get great results. The problem most folks run into with white-box SANs is unoptimized storage. When you locate a bunch of different OSes with different access patterns and block sizes on the same physical disks, you are sure to degrade performance. It's a hassle to maintain, but having tiers of storage is what the big shops do. It's moving the storage area network beyond the "network" and into the infrastructure. I call it "SIN": Storage Infrastructure Networking. It's where you create tiers of storage optimized for different scenarios and use each tier where it functions best and most affordably.
    Lastly, you should be using block I/O for database servers. It's better to let the DB server do its own caching than the Linux page cache; that's why most enterprise shops use ASM/raw disk with Oracle rather than putting a file system in between. Let the database server decide what to cache, not the file system.

  4. #4

    SSD

    I set up DSS V6 on a virtual machine on a test Hyper-V server, serving iSCSI volumes to VMs. I still get significant slowdowns during high I/O. I have tweaked the iSCSI settings as given. Microsoft says SQL Server can work with iSCSI. Is this likely just a configuration problem with DSS?
    I've dedicated an SSD to SQL on a virtual 10 Gb network, with block I/O and a pass-through disk. I just don't see how this does not perform better. I am seeing an average I/O queue length of > 300 when running sqliosim. All the problems go away if I set the SQL Server to use a dynamic VHD on a local 5900 RPM Advanced Format drive.

  5. #5


    Are you using iSCSI through the virtualization layer, or is the iSCSI LUN mounted on the hypervisor?

  6. #6


    iSCSI on the hypervisor. We've put the database on an SSD on Open-E and the performance is much improved, but still not ideal.
