
Thread: XenServer on DSS or Openfiler

  1. #1

    Default XenServer on DSS or Openfiler

    We are looking to put in a new iSCSI SAN based on either DSS or Openfiler, and we want great performance with HA failover. Is anyone currently running DSS with ESX or XenServer? Any feedback? How is performance and reliability?

  2. #2

    Default

    XenServer and great performance over iSCSI are not to be mentioned in one sentence...

    XenServer doesn't support NIC bonding modes that increase throughput; in fact, bonding modes other than failover are broken and can in some cases crash the host.
    Your other option for getting more throughput would be multipathing with a multibus policy. XenServer doesn't support this either. If you do want to use multipathing (it works OK), you will have to configure your iSCSI connections manually instead of using XenCenter.
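    Roughly what the manual setup looks like from the control domain (the target IP, IQN and host UUID below are made-up placeholders, and the exact other-config keys may vary between XenServer releases):

        # discover and log in to the iSCSI target
        iscsiadm -m discovery -t sendtargets -p 10.0.0.10
        iscsiadm -m node -T iqn.2008-01:dss.target0 -p 10.0.0.10 --login

        # enable multipathing on the host before plugging the SR
        xe host-param-set uuid=<host-uuid> other-config:multipathing=true
        xe host-param-set uuid=<host-uuid> other-config:multipathhandle=dmp

        # check that all paths show up
        multipath -ll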

    If you have to choose, I would suggest DSS... Openfiler is based on rPath Linux and in my opinion it is a home storage toy... rPath is hell to maintain, and Openfiler itself isn't really enterprise-grade stuff, not in features and not in stability.

  3. #3

    Default

    HA bonding is supported in the new 4.1 release of XenServer (via the CLI).

    We use two XenServer 4.1 hosts with two DSS Lite Open-E servers as iSCSI targets. The performance is very good.
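    In case it helps anyone, this is roughly how the bond is created from the CLI (the UUIDs are placeholders, and the exact command names may differ a bit between XenServer releases):

        # find the PIFs of the two NICs you want to bond
        xe pif-list
        # create a network for the bond, then bond the two PIFs onto it
        xe network-create name-label=bond0
        xe bond-create network-uuid=<network-uuid> pif-uuids=<pif1-uuid>,<pif2-uuid>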

  4. #4

    Default

    Quote Originally Posted by Blum
    HA bonding is supported in the new 4.1 release of XenServer (via the CLI).

    We use two XenServer 4.1 hosts with two DSS Lite Open-E servers as iSCSI targets. The performance is very good.
    HA bonding is supported indeed...but only mode 1, failover...and that mode doesn't provide any extra throughput.

    I guess you don't really need a lot of IO performance for your applications, as the maximum throughput would be about 100MB/s. In terms of SAN performance that isn't really fast. We have tested iSCSI as well, using an Equalogic PS300E unit that was easily able to fill the Gbit line to the max, but when we cloned some templates while some guests were also doing IO, everything slowed down during the cloning operations.
    All of our XenServer boxes have 64GB RAM and run a lot of guests...that isn't doable over one Gbit iSCSI connection.
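    (For reference, the arithmetic behind that ~100MB/s figure: 1Gbit/s divided by 8 is 125MB/s raw, and after Ethernet/TCP/iSCSI overhead a single link usually tops out somewhere around 100-110MB/s.)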

    Open-E itself seems to perform OK...it's a bit slower than the Equalogic, but the pricing is different as well. We have also thought about 10Gbit interfaces, but at the moment 12/24-port 10Gbit switches are still very expensive. Personally I would like to give InfiniBand a try, since it is less expensive and has greater bandwidth...

    At the moment we're running a Fibre Channel solution that provides enough bandwidth to each host. However, it is less flexible than a solution like Open-E combined with fast hardware.

    The best thing to do would be to download the Lite version or the demo and give it a try for yourself...
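    If you do give it a try: once a volume is exported as an iSCSI target on the DSS box, attaching it to XenServer as a shared SR is roughly one command (the values below are made-up examples; if you leave out the SCSIid, sr-create errors out and lists the ones it found, which is a handy way to get it):

        xe sr-create host-uuid=<host-uuid> shared=true content-type=user \
          name-label="Open-E iSCSI SR" type=lvmoiscsi \
          device-config:target=10.0.0.10 \
          device-config:targetIQN=iqn.2008-01:dss.target0 \
          device-config:SCSIid=<scsi-id>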

  5. #5

    Default

    Quote Originally Posted by mxcreep
    ...Open-E itself seems to perform OK...it's a bit slower than the Equalogic, but the pricing is different as well. We have also thought about 10Gbit interfaces, but at the moment 12/24-port 10Gbit switches are still very expensive. Personally I would like to give InfiniBand a try, since it is less expensive and has greater bandwidth...
    Are you sure that InfiniBand is faster than 10Gb? My research shows that they run at exactly the same speed, at both the hardware and throughput level.

    As for pricing, you're right about that; a Fujitsu XG700 12-port 10Gb CX4 switch has a street price of $7,600 USD, while a QLogic SilverStorm 9024 24-port switch can be had for $4,000.


    Sean

  6. #6

    Default

    Quote Originally Posted by SeanLeyne
    Are you sure that InfiniBand is faster than 10Gb? My research shows that they run at exactly the same speed, at both the hardware and throughput level.

    As for pricing, you're right about that; a Fujitsu XG700 12-port 10Gb CX4 switch has a street price of $7,600 USD, while a QLogic SilverStorm 9024 24-port switch can be had for $4,000.


    Sean
    InfiniBand is also available in 20Gbit versions at this time...but you shouldn't just compare the raw networking speed, as the protocols used to transport data differ. InfiniBand can carry the SCSI command protocol directly over the fabric (SRP), while iSCSI runs on top of the TCP/IP layer. I would suggest reading up on it and comparing pricing...you will see InfiniBand is a great option compared to 10GbE.

  7. #7

    Default

    Hey Guys!!

    Yes, I know it's long overdue, but we now have DSS V6 certified with XenServer!!

    http://hcl.xensource.com/ProductDeta...ge+Software+V6

    http://www.citrix.com/ready/partners/open-e
    All the best,

    Todd Maxwell


    Follow the red "E"
    Facebook | Twitter | YouTube
