
Thread: Is this setup okay? 2x DSS plus 2x Xenserver

  1. #1
    Join Date
    Nov 2008
    Posts
    64

    Question Is this setup okay? 2x DSS plus 2x Xenserver

    Hi Open-E community!
    My first post here..

    I'd love to hear some thoughts about our planned virtualisation project:

    2 Servers for virtualisation running XenServer Enterprise (each having 1 dual NIC)
    2 Servers with Open-E DSS for shared storage in fault tolerant mode (each having 2 dual NICs)

    The main focus is High Availability..

    So afaik we should interconnect the DSS boxes with two ethernet cables.
    The DSS boxes would additionally each have one connection to the dedicated switch to which the two Xen Servers will be connected for this setup.

    The virtualisation boxes are simply connected through one ethernet cable to the main switch.

    Is this okay? Do we need more cables? More NICs?

    Would it be wise to use a second switch for fault tolerance? If yes, how many NICs would we need on the boxes? We want to have the best HA experience possible in this setup..

    Thanks a lot in advance!

  2. #2
    Join Date
    May 2008
    Location
    Hamburg, Germany
    Posts
    108

    Default

    Quote Originally Posted by Laxity
    [...] The main focus is High Availability..

    So afaik we should interconnect the DSS boxes with two ethernet cables.
    The DSS boxes would additionally each have one connection to the dedicated switch to which the two Xen Servers will be connected for this setup.

    The virtualisation boxes are simply connected through one ethernet cable to the main switch.

    Is this okay? Do we need more cables? More NICs?

    Would it be wise to use a second switch for fault tolerance? If yes, how many NICs would we need on the boxes? We want to have the best HA experience possible in this setup..

    Thanks a lot in advance!
    Hi Laxity,

    if you're trying to run a fully separate "SAN network" and are looking for full redundancy, you should IMO go for (at least) four Ethernet ports for each Xen server - two redundant ports to redundant switches for the SAN network and two redundant ports (again to redundant switches) for the client-side network... plus, optionally, a management port per server (to separate VM traffic from Xen server management traffic).

    HA means avoiding single points of failure - so if the connections from the Xen servers to the DSSes run only through the "dedicated switch", then that switch is a SPOF.
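A back-of-the-envelope calculation shows why that matters. The per-switch availability below is an illustrative assumption, not vendor data - the point is only the parallel-vs-serial arithmetic:

```python
# Why a single switch is the weak spot: two parallel switches are down
# only if BOTH fail at once. The 0.999 figure is an assumed per-switch
# availability, purely for illustration.

HOURS_PER_YEAR = 8760

single = 0.999                        # one switch in the path
redundant = 1 - (1 - single) ** 2     # two parallel switches

print(f"one switch:   {single:.6f}  (~{(1 - single) * HOURS_PER_YEAR:.1f} h downtime/yr)")
print(f"two switches: {redundant:.6f}  (~{(1 - redundant) * HOURS_PER_YEAR * 3600:.0f} s downtime/yr)")
```

Under these assumptions a second switch turns hours of expected downtime per year into seconds.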

    Do you plan to connect the "dedicated switch" to the rest of the network? Then you might be able to handle link failures through that path.

    Do you plan to run the Xen servers as a "cluster", with every VM runnable on both servers (take-over in case one of the Xen servers fails)?

    How important is *access* to the VMs? You might need to provide redundant access paths to your Xen servers (from the client's point of view).

    How do you define "available"? Is it ok to run all VMs on a single node - i.e. in case the link from the client net to one of the Xen servers fails? Does your take-over rule set permit handling such an event?

    With regards

    Jens

  3. #3
    Join Date
    Nov 2008
    Posts
    64

    Default

    Quote Originally Posted by jmo
    if you're trying to run a fully separate "SAN network" and are looking for full redundancy, you should IMO go for (at least) four Ethernet ports for each Xen server - two redundant ports to redundant switches for the SAN network and two redundant ports (again to redundant switches) for the client-side network... plus, optionally, a management port per server (to separate VM traffic from Xen server management traffic).
    Yes, I'd love to do it this way. Unfortunately this is too expensive for this project.

    Quote Originally Posted by jmo
    HA means avoiding single points of failure - so if the connections from the Xen servers to the DSSes run only through the "dedicated switch", then that switch is a SPOF.
    Unfortunately I seem to have used up the budget already, so we will have to live with the risk of this one switch failing. Fortunately the SLA we granted our customer gives us some room for this kind of thing.

    Quote Originally Posted by jmo
    Do you plan to run the Xen servers as a "cluster", with every VM runnable on both servers (take-over in case one of the Xen servers fails)?
    Yes

    Quote Originally Posted by jmo
    How important is *access* to the VMs? You might need to provide redundant access paths to your Xen servers (from the client's point of view).
    Most important of all - it's for some webapps.

    Quote Originally Posted by jmo
    How do you define "available"? Is it ok to run all VMs on a single node - i.e. in case the link from the client net to one of the Xen servers fails? Does your take-over rule set permit handling such an event?
    It kinda does, as the failed VM would be restarted on the other node with the same settings (IP address etc.).


    Would you have a look at this, please?

    [image attachment: simplified network diagram]

    All boxes are directly connected to the switch. The two DSS servers are also interconnected with 3 cables (Bonding + Heartbeat).
    Does this sound right?
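For intuition, the heartbeat half of that interconnect boils down to the passive node counting missed beats and taking over the shared resources when a threshold is exceeded. This is only a toy sketch of the principle - DSS implements this internally, and the class name and threshold here are made up for illustration:

```python
# Toy sketch of heartbeat-driven take-over (NOT how DSS implements it).
# The passive node resets its counter on every heartbeat it sees and
# takes over after MISS_LIMIT consecutive misses.

MISS_LIMIT = 3   # assumed threshold, purely illustrative

class PassiveNode:
    def __init__(self):
        self.missed = 0
        self.active = False

    def on_interval(self, heartbeat_seen: bool):
        if heartbeat_seen:
            self.missed = 0
        else:
            self.missed += 1
            if self.missed >= MISS_LIMIT and not self.active:
                self.take_over()

    def take_over(self):
        # In a real cluster this would claim the shared iSCSI IP address.
        self.active = True

node = PassiveNode()
for beat in [True, True, False, False, False]:
    node.on_interval(beat)
print("passive node active:", node.active)   # True after three missed beats
```

This also shows why the dedicated heartbeat cables matter: if heartbeats shared a congested link, missed beats could trigger a spurious take-over.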

    There will be some more cables, as each server has an IPMI module and the DSS servers have a dedicated NIC on the Areca RAID controller..

    As I see it the SPOF is the switch (as you mentioned). Besides that, does the setup seem okay to you?

    Thank you in advance!

  4. #4
    Join Date
    May 2008
    Location
    Hamburg, Germany
    Posts
    108

    Default

    Quote Originally Posted by Laxity
    Yes, I'd love to do it this way. Unfortunately this is too expensive for this project.

    Unfortunately I seem to have used up the budget already, so we will have to live with the risk of this one switch failing. Fortunately the SLA we granted our customer gives us some room for this kind of thing.

    As I see it the SPOF is the switch (as you mentioned). Besides that, does the setup seem okay to you?

    Thank you in advance!
    Laxity,

    in terms of a tight budget and taking into account the schematic nature of the diagram, the setup looks basically ok. I'm missing the second link for each Xen server (you mentioned dual-port NICs) - you could use these for both redundancy and bandwidth.

    Have you thought about using VLANs to separate the (VMs') Internet traffic from the Xen-Server-to-SAN traffic? Your DSS (and the Xen servers, too) are critical resources and not intended for open access - you could try to limit the Internet access to the actual VMs on the Xen servers (and mostly isolate those VMs from the "internal" network) to gain an inch of security.

    And yes, your switch and the Internet connection both are SPOFs. Have you calculated your expected availability yet? I assume you might even get below 95%, depending on the quality of the switch - that's AEC-1. "High" availability unfortunately almost always violates tight budgets.

    With regards,

    Jens

  5. #5
    Join Date
    Nov 2008
    Posts
    64

    Default

    Hi Jens,

    Quote Originally Posted by jmo
    Laxity,
    I'm missing the second link for each Xen server (you mentioned dual-port NICs) - you could use these for both redundancy and bandwidth.
    I will look into this. Redundancy might be difficult because it is a 2-port NIC, not two single-port NICs. But for bandwidth this might be quite interesting, thanks!

    Quote Originally Posted by jmo
    Have you thought about using VLANs to separate the (VMs') Internet traffic from the Xen-Server-to-SAN traffic? Your DSS (and the Xen servers, too) are critical resources and not intended for open access - you could try to limit the Internet access to the actual VMs on the Xen servers (and mostly isolate those VMs from the "internal" network) to gain an inch of security.
    I was planning to do this, yes. The DSS boxes should not be exposed to the "public".

    Quote Originally Posted by jmo
    And yes, your switch and the Internet connection both are SPOFs. Have you calculated your expected availability yet? I assume you might even get below 95%, depending on the quality of the switch - that's AEC-1. "High" availability unfortunately almost always violates tight budgets.
    Well, the datacenter guarantees 99.99% availability of the internet connection and a 1-hour response to any hardware failure (this includes our servers and our switch) in an SLA. So I guess even with the SPOF we will be able to do better than 95%.
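That guess can be sanity-checked with rough numbers. Only the 99.99% uplink figure comes from the SLA quoted above; the switch MTBF and the repair time below are illustrative assumptions:

```python
# Sanity check of the ">95%" guess. Only the 0.9999 uplink availability is
# from the datacenter SLA; the 50,000 h MTBF and ~5 h total repair time for
# the switch are assumptions for illustration.

def availability(mtbf_h: float, mttr_h: float) -> float:
    """Steady-state availability of one component: MTBF / (MTBF + MTTR)."""
    return mtbf_h / (mtbf_h + mttr_h)

internet = 0.9999                      # SLA-guaranteed uplink
switch = availability(50_000, 1 + 4)   # 1 h response + ~4 h to swap hardware

# Serial chain: clients reach the VMs only if uplink AND switch are both up.
chain = internet * switch

print(f"switch: {switch:.5f}")
print(f"chain:  {chain:.5f}")   # comfortably above 0.95 under these assumptions
```

So as long as the switch is reasonably reliable and the 1-hour response SLA holds, the single-switch SPOF costs far less than the gap between 95% and 99.99%.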

    Thanks a lot for your information and thoughts, it really helped!!!
