
Thread: VMware, Hyper-V cluster anyone?

  1. #1

    Default VMware, Hyper-V cluster anyone?

    Hello, my first post here!


    I need to give a customer a price on a hardware setup with two clustered hypervisors (Hyper-V or VMware; the customer has not decided yet) and a SAN. Open-E appears to be a good option because it works as a SAN/NAS, and it should work with either hypervisor, right?

    My questions:
    Is anyone running a Hyper-V cluster with Open-E, and is it working well with persistent reservations and CSV?
    Is anyone running an ESXi cluster using iSCSI and/or NFS, and is it working well?

    Any recommendations on disk controllers (SAS) that Open-E supports?

    Thank you!

  2. #2
    Join Date
    Aug 2008
    Posts
    236

    Default

    I've implemented and supported Hyper-V and Virtual Iron clusters with Open-E.
    IMO, the Areca RAID controllers are the way to go because of the out-of-band management.

  3. #3

    Default

    What is out-of-band management?

    Why would it matter if the RAID controller is fully supported by Open-E?

    I used to implement one supporting Hyper-V clusters, and the storage I used was an Adaptec 5805Z with Intel X25-M SSDs (8x 160 GB in RAID 5). Fast as a rocket.

    CSV etc. shouldn't be a concern at the SAN; it's a hypervisor-level concern. The only thing about Hyper-V is that NIC teaming is subject to individual vendors. I suggest you stick with Intel NICs.

    How come your customer hasn't decided what to use?

  4. #4

    Default

    We are running multiple four-node Hyper-V clusters (QLogic Fibre Channel) against a single DSS V6 storage box, and it's holding up very nicely. The storage box has 22x 600 GB 15K SAS drives and an Adaptec controller with an Adaptec MaxIQ 64 GB SSD cache.
    Performance and reliability are excellent.
    We moved to DSS from a dedicated EMC CX3-10 SAN. It was four times as expensive and could not keep up.
    DSS now supports multipath I/O on QLogic HBAs, and it's pretty bulletproof (so far).
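    For anyone wiring up the Windows side of this, MPIO claiming can be verified with mpclaim.exe, which ships with the Multipath I/O feature on Windows Server 2008 R2. A rough sketch (run from an elevated command prompt; the disk number is an example, so check the summary output first):

    ```shell
    rem Claim all currently attached multipath storage for MPIO
    rem (-r reboots when done; use -n instead to skip the reboot).
    mpclaim -r -i -a ""

    rem After the reboot: summary of all MPIO-claimed disks.
    mpclaim -s -d

    rem Show the individual paths and load-balance policy for disk 0.
    mpclaim -s -d 0
    ```

    If the second command lists your DSS LUNs once each (instead of one duplicate disk per path), MPIO has claimed them correctly.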

  5. #5
    Join Date
    Jul 2008
    Location
    austria, vienna
    Posts
    137

    Default

    We also use two DSS V6 up40 boxes as an iSCSI SAN for our Hyper-V R2 cluster (see my signature for specs).

    Performance and reliability are good, but we have two issues:

    1. A DSS failover breaks the Hyper-V VMs on the cluster (the cluster disks go offline/online and the VMs bluescreen).
    2. After updating to up50, CSV goes into redirected I/O because of issues with SCSI-3 persistent reservations, so we downgraded to up40.
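    For what it's worth, the cluster validation wizard's storage tests include a "Validate SCSI-3 Persistent Reservation" check, so a reservation problem like this should be reproducible from PowerShell on one of the Windows nodes. A sketch (node names are placeholders, and the storage tests briefly take the tested disks offline, so run them in a maintenance window):

    ```shell
    # PowerShell on a Windows Server 2008 R2 cluster node
    Import-Module FailoverClusters

    # List the Cluster Shared Volumes; a CSV stuck in redirected
    # access is the symptom described above.
    Get-ClusterSharedVolume

    # Run only the storage validation tests, which include the
    # SCSI-3 Persistent Reservation check.
    Test-Cluster -Node "node1", "node2" -Include "Storage"
    ```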

    We opened support tickets and are waiting for a response ...
    regards,
    Lukas

    descience.NET
    Dr. Lukas Pfeiffer
    A-1140 Wien
    Austria
    www.dotnethost.at

    DSS V6 b4550 iSCSI auto failover with a Windows 2008 R2 failover cluster (still having some issues with auto failover).

    2 DSS: 3HE Supermicro X7DBE BIOS 2.1a, Areca ARC-1261ML FW 1.48, 8x WD RE3 1TB, 1x Intel PRO1000MT Dualport, 1x Intel PRO1000PT Dualport.

    2 Windows Nodes: Intel SR1500 + Intel SR1560, Dual XEON E54xx, 32 GB RAM, 6 NICs. Windows Server 2008 R2.

  6. #6
    Join Date
    Aug 2008
    Posts
    236

    Default

    Quote Originally Posted by tingshen
    What is out-of-band management? Why would it matter if the RAID controller is fully supported by Open-E?
    OOB management simply means that the Areca controllers have a dedicated NIC, so I don't have to rely on software running on the OS that may crash or become inoperable.

  7. #7

    Default

    1. Why would it matter if the RAID controller is fully supported by Open-E?
    - I do not know; that's why I'm asking... Looking through the threads, there are some about controller problems.

    CSV etc. shouldn't be a concern at the SAN; it's a hypervisor-level concern. The only thing about Hyper-V is that NIC teaming is subject to individual vendors. I suggest you stick with Intel NICs.
    - There are a lot of SAN concerns with CSV... persistent reservations, for one. And there seems to be an issue right now with up50. That kind of stuff is what I was worried about.
    - I'm not using teaming but MPIO.

    How come your customer hasn't decided what to use?
    - Because they have heard a lot of good things about VMware. I, on the other hand, cannot really justify the price. I have been clustering Hyper-V servers (using StarWind Enterprise) for a long time now, and it works perfectly.
