
Thread: Using DSS as a Fibre SAN: performance tweaks

  1. #1

Using DSS as a Fibre SAN: performance tweaks

What performance tweaks are available for using DSS as a fibre channel target? We are building a DSS for a customer that is using it as a fibre channel target, and we want the fastest performance possible. We want to saturate their 1Gb fibre channel link at the very least, and maybe even get into the 2Gb region, where they plan to be eventually. Can the DSS really take advantage of 16GB of RAM? (I really hope so, since that's how much we're stuffing into our box.) Obviously, we have to use a 64-bit kernel for this much RAM. Is the primary kernel good, or should we use the backup one for better performance?

    BTW, they will run about 40 VMs on the box, with about 8 different VMware servers accessing the storage. How well does this work?

  2. #2


I know of some users with a similar configuration; with quad-core CPUs in the motherboard, the system set to 64-bit mode, and 16GB+ of RAM, you should be fine.

So you're on the right path with using this much RAM and the 64-bit mode as well. Concerning the tuning aspect for the FC HBA, I would leave the FC HBA defaults unless you really know what you're doing. I was reading a forum posting about VMware (can't remember where, for the life of me), but the poster stated that most of the tuning will be on the ESX server side rather than the FC HBA on the target side. I would, however, enable the Write Back function for the LUN.

Concerning the kernel version, I would not use the older "backup" kernel; stay with the newer one.
    All the best,

    Todd Maxwell


    Follow the red "E"
    Facebook | Twitter | YouTube

  3. #3


So, for a situation like I described, would it be better to have one quad-core 3 GHz CPU or two quad-core 2.33 GHz CPUs? (Both have 12MB of cache.)

    Also, does it make sense to increase the cache on the RAID controller to 4GB, instead of just 2GB?

I'm guessing it's worth it to increase the RAID card cache and have eight cores (on two buses) instead of four (on one bus), even if the four-core option has a higher clock speed (although not twice as high), since we will have lots of connections. We want to saturate a 1Gb fibre interface for sure, and hopefully a 2Gb one as well. But we don't need the throughput to saturate a 4Gb fibre interface, only the IOPS to satisfy the 40 VMs.
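To sanity-check those targets, here's a quick back-of-the-envelope calculation. The FC line rates and 8b/10b encoding are standard, but the per-VM IOPS figure is just a placeholder assumption you'd replace with real measurements:

```python
# Back-of-the-envelope sizing for the FC link and the 40-VM workload.
# Assumption: 1/2/4 Gb FC use 8b/10b encoding, so payload is 8/10 of
# the raw baud rate. The per-VM IOPS number below is a placeholder.

FC_LINE_RATES_GBAUD = {"1Gb": 1.0625, "2Gb": 2.125, "4Gb": 4.25}

def fc_payload_mb_per_s(gbaud):
    """Usable payload bandwidth after 8b/10b encoding, in MB/s."""
    return gbaud * 1e9 * 8 / 10 / 8 / 1e6  # bits -> payload bits -> MB/s

for name, rate in FC_LINE_RATES_GBAUD.items():
    print(f"{name} FC: ~{fc_payload_mb_per_s(rate):.0f} MB/s payload")

vms = 40
iops_per_vm = 50          # placeholder; measure your actual workload
io_size_kb = 8            # typical small random I/O size
total_iops = vms * iops_per_vm
print(f"{total_iops} IOPS at {io_size_kb} KB each ≈ "
      f"{total_iops * io_size_kb / 1024:.1f} MB/s")
```

If the VMs really do mostly small random I/O, the link throughput is unlikely to be the bottleneck; the spindle count and RAID cache will matter more.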

    Good to know that there's some tweaking to do on the ESX side of things. We'll try to find out what works best. Anything else that we should know, besides the VMware best practices white paper? Also, is there a second white paper (since the first white paper says it's "part one")?

  4. #4


Hi Robotbeat,

I would go with the two quad-cores and bump the cache up to 4 GB on the RAID controller.
If you're going to use VMware, then File I/O will be better.
VMware does not like to use LUN 0, so please start at LUN 1.
Please separate your virtual machine operating systems.
VMware requires unique LUN numbers within and across all iSCSI targets.
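For others reading this later, those LUN-numbering rules can be sketched as a quick sanity check. This is a hypothetical helper, not part of DSS or VMware:

```python
def check_lun_layout(luns):
    """Validate LUN numbers against the VMware guidance above:
    start at LUN 1 (skip LUN 0) and keep numbers unique across targets.
    `luns` maps target name -> list of LUN numbers."""
    problems = []
    seen = {}  # LUN number -> first target it appeared on
    for target, nums in luns.items():
        for n in nums:
            if n == 0:
                problems.append(f"{target}: LUN 0 in use (start at LUN 1)")
            if n in seen and seen[n] != target:
                problems.append(f"LUN {n} duplicated on {seen[n]} and {target}")
            seen.setdefault(n, target)
    return problems

# Example: LUN 2 is reused across two targets, which VMware dislikes.
print(check_lun_layout({"target0": [1, 2], "target1": [2, 3]}))
```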

  5. #5


There are no "file-io" and "block-io" settings when creating a fibre channel volume. There is an "initialize" option, so perhaps that's like file-io, but whatever. There's a place where you select the block size (512B, 1024B, 2048B, or 4096B) and whether you want volume replication. I'm assuming that 4096B would usually be the best setting, right? (Unless you're writing really tiny pieces of data, like with a database...)

What other options should I be aware of? (Keep in mind that this is fibre channel, not iSCSI.)

Also, is it reasonable to expect somewhat higher throughput with fibre channel (given the same line speed), since there isn't the TCP/IP overhead?
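Here's my rough reasoning on the framing overhead. The frame sizes are the standard ones for Ethernet and FC, but this ignores iSCSI PDU headers, interrupt load, and TCP behavior under loss, so treat it as an upper-bound sketch:

```python
# Rough per-frame wire efficiency: payload / (payload + framing overhead).

def efficiency(payload, overhead):
    return payload / (payload + overhead)

# Gigabit Ethernet, 1500-byte MTU: 20 B IP + 20 B TCP headers leave
# 1460 B of payload; Ethernet adds 38 B per frame
# (header + FCS + preamble + inter-frame gap).
iscsi = efficiency(1460, 40 + 38)

# FC frame: 2112 B payload with ~36 B of SOF/header/CRC/EOF framing.
fc = efficiency(2112, 36)

print(f"iSCSI on GbE: ~{iscsi:.1%} efficient, FC: ~{fc:.1%} efficient")
```

So per-frame efficiency only differs by a few percent; in practice the bigger wins for FC tend to be the lack of TCP processing on the host and lower latency, not raw framing overhead.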

    We are planning to use build 3278, unless you recommend the Atlanta version for any good reason.

  6. #6


4096B would be best for Microsoft, but use a 512B block size for VMware, as it does not support other block sizes.

Sometimes iSCSI MPIO can get close, but overall FC provides better performance.

There is someone on the forum who has done some tests:

    http://forum.open-e.com/showthread.php?t=557
    All the best,

    Todd Maxwell



  7. #7
    Join Date
    May 2008
    Location
    Hamburg, Germany
    Posts
    108


    Quote Originally Posted by Robotbeat
So, for a situation like I described, would it be better to have one quad-core 3 GHz CPU or two quad-core 2.33 GHz CPUs? (Both have 12MB of cache.)

    Also, does it make sense to increase the cache on the RAID controller to 4GB, instead of just 2GB?

I'm guessing it's worth it to increase the RAID card cache and have eight cores (on two buses) instead of four (on one bus), even if the four-core option has a higher clock speed (although not twice as high), since we will have lots of connections. We want to saturate a 1Gb fibre interface for sure, and hopefully a 2Gb one as well. But we don't need the throughput to saturate a 4Gb fibre interface, only the IOPS to satisfy the 40 VMs.
From my experience with a dual-CPU system (two Xeon quad-cores), I'd go for the faster CPU. I've never seen CPU load go above 10% so far, and faster cores help the kernel (and the FS part of it) do its work faster. From my observation, the FS runs single-threaded.

    Regards,

    Jens

  8. #8


    Quote Originally Posted by To-M
4096B would be best for Microsoft, but use a 512B block size for VMware, as it does not support other block sizes.
Just for completeness, for others browsing this thread in the future: if you are running Xen servers, go for 512B blocks; your VMs will bail out otherwise.

    Regards,
    Jens
