
Thread: Best Practice for IOPS and iSCSI - Citrix XenServer

  1. #1

    Best Practice for IOPS and iSCSI - Citrix XenServer

    What do you all recommend for the best performance out of the following hardware:

    4U Supermicro chassis, 16 GB RAM, dual quad-core Xeon 5310s, 24x 1 TB WD RE3 SATA 7200 rpm HDDs, a 3ware 9650SE 24-port SATA RAID controller, with 2 of the HDDs set aside as global hot spares.

    They will be attached to a dozen XenServer hosts running a bunch of VMs, and I am looking for the best IOPS in general.

    At the 3ware level, should I create one big RAID 10 volume, then a bunch of LVs in Open-E, and then LUNs on top of that? Or should I create smaller 3ware RAID 10 volumes, with LVs and LUNs on top of those?

    I was also thinking of making smaller 3ware RAID 0 volumes and using Open-E to do the RAID 10 via software RAID. So many options; I am looking for best practices that will yield the best IOPS.
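
    For comparison, here is a back-of-the-envelope sketch of the candidate layouts; the ~80 random IOPS per 7200 rpm drive and the RAID write penalties are assumptions, not measurements on this hardware.

        # Rough random-IOPS comparison of the layouts discussed above.
        # Assumed (not measured): ~80 random IOPS per 7200 rpm SATA drive,
        # write penalty of 2 for RAID 10 and 6 for RAID 6, no penalty on reads.

        PER_DISK_IOPS = 80

        def array_iops(disks, write_penalty, write_fraction):
            """Approximate front-end random IOPS for a mixed read/write workload."""
            raw = disks * PER_DISK_IOPS
            return raw / ((1 - write_fraction) + write_fraction * write_penalty)

        # 22 usable disks (24 minus 2 global hot spares), 70% read / 30% write
        layouts = {
            "one 22-disk RAID 10":             array_iops(22, 2, 0.3),
            "two 10-disk RAID 10s (combined)": 2 * array_iops(10, 2, 0.3),
            "one 22-disk RAID 6":              array_iops(22, 6, 0.3),
        }

        for name, iops in layouts.items():
            print(f"{name:34s} ~{iops:5.0f} random IOPS")

    Splitting into smaller arrays lowers the combined number a bit but keeps busy LUNs from stealing spindles from each other, which seems to be the real trade-off here.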

    We will have two of these units in failover mode. Each unit also has 8 GbE NICs for iSCSI to the XenServer hardware pool, so we will most likely create multiple 2x 1 GbE bonds and spread the iSCSI traffic across them to increase performance further.
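
    And on the network side, a quick sanity check of what one 2x 1 GbE bond can carry; the ~12% allowance for TCP/IP and iSCSI overhead is an assumption.

        # Theoretical ceiling of one 2x 1 GbE bond carrying iSCSI traffic.
        # The ~12% allowance for TCP/IP and iSCSI headers is an assumption.

        GBE_BITS_PER_SEC = 1_000_000_000
        LINKS_PER_BOND = 2
        PROTOCOL_OVERHEAD = 0.12

        raw_bytes_per_sec = LINKS_PER_BOND * GBE_BITS_PER_SEC / 8   # ~250 MB/s
        usable = raw_bytes_per_sec * (1 - PROTOCOL_OVERHEAD)        # ~220 MB/s

        print(f"Per-bond iSCSI ceiling:        ~{usable / 1e6:.0f} MB/s")
        print(f"Four bonds (all 8 NICs) total: ~{4 * usable / 1e6:.0f} MB/s")

    Depending on the bonding mode, a single iSCSI session may still ride on only one link, so the aggregate tends to show up only across multiple sessions or LUNs.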

    We will also be installing a smaller-scale setup: 2U, 16 GB RAM, 8x 1 TB RE3, a 3ware 9690, dual quad-cores, and 6 GbE NICs. I'm looking for advice there as well, again in failover mode, but using VMware this time.

    Help!

    Tom

  2. #2


    Tom,
    we are looking to do something similar, with XenServer (two 32 GB hosts) and two NAS boxes. I was thinking we would follow the VMware configuration instructions to get file and block replication working in an optimal manner. I would be glad to hear what anyone has to say on this. Everyone says test, but we don't have the hardware to test on. I emailed pre-sales support about the VMware best practices guide...

  3. #3


    No feedback on the best setup for IOPS?

    Tom

  4. #4


    Well, it kind of depends on your access patterns. For instance, if the vast majority of your accesses are sequential, it makes sense to set up the drives so that each drive is exported as a separate LUN, with RAID 0 or similar for LUNs that need more storage or performance. If your access is completely random and you're looking for the best averaged performance (and you don't care too much about guaranteed performance for certain LUNs), then you should just put all the drives in one big RAID set. If you have one or two LUNs that need a guaranteed baseline performance level or are almost completely sequential, then you could put those LUNs on a RAID 1 set, with the rest in a big RAID 5, 6, or 10 set.
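
    To put that rule of thumb in concrete terms, here is a small sketch of the decision logic; the 0.8 sequential threshold is purely illustrative.

        # Illustrative encoding of the rule of thumb above: mostly-sequential or
        # guaranteed-baseline LUNs get dedicated spindles, mostly-random shared
        # workloads go onto one large array. The 0.8 threshold is an example value.

        def suggest_layout(sequential_fraction, needs_guaranteed_baseline):
            if needs_guaranteed_baseline:
                return "dedicated RAID 1 set for this LUN, rest in a big RAID 5/6/10 set"
            if sequential_fraction > 0.8:
                return "own drive (or small RAID 0) per LUN to avoid seek contention"
            return "one big shared RAID set for the best averaged performance"

        print(suggest_layout(0.9, False))  # streaming or backup-style LUN
        print(suggest_layout(0.2, False))  # typical random VM workload
        print(suggest_layout(0.2, True))   # LUN that needs a performance floor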

    How important is the data? Is a once-in-four-years loss of data (where you have to go back to tape or some other backup) acceptable? If not, you should at least do RAID 5, and probably RAID 10 or 6, especially if you have more than 4 or 5 drives. RAID 6 is more than enough to stop worrying about random drive failures (though you'll still have to worry about batches of drives failing together). Also, remember that RAID is not a backup, and volume replication, since it's essentially a network RAID 1, is not a backup either.
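
    As a rough illustration of why the extra parity disk buys so much margin, here is a crude rebuild-window calculation; the 3% annualized failure rate and 24-hour rebuild are assumptions, and correlated failures within a batch make the real risk higher.

        # Crude estimate of the chance that another drive fails while a degraded
        # array rebuilds. Assumes independent failures, a 3% annualized failure
        # rate and a 24-hour rebuild window; drives from one batch often fail
        # together, so treat this as an optimistic lower bound.

        ANNUAL_FAILURE_RATE = 0.03
        REBUILD_HOURS = 24

        def failure_during_rebuild(surviving_disks):
            p_per_disk = ANNUAL_FAILURE_RATE * REBUILD_HOURS / (365 * 24)
            # chance that at least one of the surviving disks dies in the window
            return 1 - (1 - p_per_disk) ** surviving_disks

        # 22-disk RAID 5 after one failure: any of the 21 survivors kills the array
        print(f"RAID 5, 22 disks: {failure_during_rebuild(21):.3%} per rebuild")
        # RAID 6 survives that second failure, so data loss needs a third one on
        # top of it -- a second-order, much smaller probability.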

    Some of our customers use rotating snapshots so they don't always have to go back to tape if they accidentally delete something, but a snapshot is not a real backup either. Remember, you HAVE to have a backup that is delayed in time and on completely separate hardware. RAID isn't going to save you if your datacenter is flooded (hello, Fargo!) or a recently fired employee decides to eject a bunch of hard drives from your SAN on his last day.

  5. #5


    Also, I would have used an Areca card with a full-sized DDR2 DIMM expansion slot, which lets you add up to 4GB of (battery-backed, if you get the battery module) controller memory. This is especially important for failover situations, since I believe that writes are only acknowledged when they are written "to disk" on the destination side (i.e. you can't use system memory to cache writes). This will make the biggest difference in write performance, I think.

  6. #6


    I'll add my two cents here.

    There is a huge temptation, when we get a lot of drives, to try to make one array out of them. More spindles = more performance, right? But in Linux, this doesn't always work out to our best advantage. Please don't shoot me here, but I've never seen a Linux box hit 1 GB per second to disk on the same setup as a Windows system. I'll give you an example: in one system I had an Areca SAS controller with 16 SATA drives in RAID 10. On Windows I could get 1 GB per second on 64K sequential writes. On Linux, the most I could get was about 400 MB per second on the same hardware. It seems like there is an I/O barrier in Linux. Blame it on bad drivers, blame it on the underlying code in Linux, whatever. So I don't overbuild an array. In addition, having multiple smaller arrays can give you additional aggregate performance. Last but not least, multiple volumes mean you don't have a single volume failure taking out all your VMs.
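
    If you want to sanity-check that per-volume ceiling yourself, a rough sequential-write test is easy to script; the path and file size below are placeholders, and buffered writes won't match a controller-level benchmark exactly.

        # Rough sequential-write check for a single volume. /mnt/test/bigfile is a
        # placeholder path; make the size larger than RAM so the page cache does
        # not hide the real disk speed. Buffered I/O, so use it only to compare
        # volumes against each other, not as an absolute controller benchmark.

        import os
        import time

        PATH = "/mnt/test/bigfile"      # placeholder: point at the volume under test
        TOTAL_BYTES = 32 * 1024**3      # 32 GiB, comfortably above 16 GB of RAM
        CHUNK = b"\0" * (64 * 1024)     # 64K writes, matching the test above

        start = time.time()
        with open(PATH, "wb") as f:
            written = 0
            while written < TOTAL_BYTES:
                f.write(CHUNK)
                written += len(CHUNK)
            f.flush()
            os.fsync(f.fileno())        # make sure the data actually hit the disks
        elapsed = time.time() - start

        print(f"~{TOTAL_BYTES / elapsed / 1e6:.0f} MB/s sequential write")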

    My two cents.
