
Thread: Best Practice for IOPS and iSCSI - Citrix Xenserver


  1. #1


    Also, I would have used an Areca card with a full-sized DDR2 DIMM expansion slot, which lets you add up to 4GB of (battery-backed, if you get the battery module) controller memory. This is especially important for failover situations, since I believe that writes are only acknowledged when they are written "to disk" on the destination side (i.e. you can't use system memory to cache writes). This will make the biggest difference in write performance, I think.
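The effect described above can be seen from the host side without any controller tooling. Here is a minimal sketch, assuming GNU dd and a hypothetical mount point for the array under test: a buffered write returns as soon as data lands in the page cache, while a write forced through `fdatasync` only completes when the destination side has acknowledged it. With a battery-backed write-back cache on the controller the synchronous number stays close to the buffered one; without it, you see raw disk speed.

```shell
# Hypothetical test path; point it at a filesystem on the array under test.
TESTFILE=/mnt/array/ddtest.bin

# Buffered write: completes once data reaches the Linux page cache,
# so the reported rate mostly reflects system RAM, not the array.
dd if=/dev/zero of="$TESTFILE" bs=64k count=16384

# Synchronous write: dd calls fdatasync before exiting, so the rate
# reflects what the controller actually acknowledged to stable storage.
dd if=/dev/zero of="$TESTFILE" bs=64k count=16384 conv=fdatasync

rm -f "$TESTFILE"
```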

  2. #2
    Join Date
    Aug 2008
    Posts
    236


    I'll add my two cents here.

    There is a huge temptation, when we get a lot of drives, to try to make one array out of them. More spindles = more performance, right? But in Linux this doesn't always work out to our best advantage. Please don't shoot me here, but I've never seen a Linux box hit 1GB per second to disk on the same setup as a Windows system. I'll give you an example. In one system I had an Areca SAS controller with 16 SATA drives in RAID10. On Windows I could get 1GB per second on 64K sequential writes. On Linux, the most I could get was about 400MB per second. Same hardware, etc. Seems like there is an I/O barrier in Linux. Blame it on bad drivers, blame it on the underlying code in Linux, whatever. So I don't overbuild an array. In addition, splitting the drives into multiple smaller arrays can give you more aggregate performance. Last but not least, multiple volumes means a single failed volume doesn't take out all your VMs.
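For reference, the 64K sequential-write test described above can be approximated from the Linux side with GNU dd. This is a rough sketch (the file path is hypothetical); `oflag=direct` bypasses the page cache so the reported throughput reflects the controller and array path rather than system RAM, which is the number worth comparing against the Windows result.

```shell
# Hypothetical test path on the array being benchmarked.
TESTFILE=/mnt/array/seqwrite.bin

# 64K sequential writes, 4GB total, bypassing the page cache.
# dd prints the achieved throughput when it finishes.
dd if=/dev/zero of="$TESTFILE" bs=64k count=65536 oflag=direct

rm -f "$TESTFILE"
```

Run it a few times and watch the array while it runs (e.g. with iostat) to see whether all spindles are actually being kept busy.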

    My two cents..
