
Thread: OnApp OpenE Active/Passive Failover SAN Hardware

  1. #1

    OnApp Open-E Active/Passive Failover SAN Hardware

    Hi Everyone,

    I am ordering the following two units to serve as an OnApp SAN (active/passive).

    Does anyone have experience with real-time replication from the active to the passive node? Is failover seamless to the OnApp VMs?

    Spec:

    dual Intel Xeon Gulftown E5620, 8x 2.4GHz cores, 16x HT cores, 2x 12M L3
    Supermicro X8DTI-LN4F dual socket 1366 server board, 12x DDR3 DIMM slots
    2x Supermicro SNK-P0037P passive heatsink
    Supermicro 2U SC825 Air Shroud
    12GB (3x 4GB) ECC Registered DDR3-1333, 9x open, 96GB Max
    7x Western Digital 2TB RE4 RAID Edition 7200rpm, 64M buffer
    Adaptec 5805ZQ 8-port hardware RAID-5/6/10 controller, 4GB NAND flash, MaxIQ enabled
    1x Intel X25-E 32GB SLC SSD cache for 5805ZQ
    on-board Intel 82576 4x Gb NIC ports
    on-board Matrox G200eW Video, 8M video RAM
    on-board IPMI 2.0 w/KVM-over-IP & dedicated NIC
    no optical disk, no floppy
    SuperMicro SC825TQ-R720LPB 2U Rackmount Chassis, Black
    8x Hot-swap SATA/SAS bays
    Supermicro dual 720watt 80-plus Gold-Level Redundant Power Supplies
    Supermicro 2U Rail Kit
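For the 7x 2TB array above, usable capacity depends on which RAID level you pick on the 5805ZQ. A quick sketch of the arithmetic (illustrative only, in raw marketing TB; formatted capacity will be lower):

```python
# Usable capacity for an array of identical disks at common RAID levels.
# Figures are raw marketing TB; filesystem overhead reduces them further.

def usable_tb(n_disks: int, disk_tb: float, level: str) -> float:
    if level == "raid5":        # one disk's worth of parity
        return (n_disks - 1) * disk_tb
    if level == "raid6":        # two disks' worth of parity
        return (n_disks - 2) * disk_tb
    if level == "raid10":       # mirrored pairs (wants an even disk count)
        return n_disks // 2 * disk_tb
    raise ValueError(f"unknown RAID level: {level}")

for level in ("raid5", "raid6", "raid10"):
    print(level, usable_tb(7, 2.0, level), "TB")
# raid5 -> 12.0 TB, raid6 -> 10.0 TB, raid10 -> 6.0 TB (7th disk left over)
```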

  2. #2
    Join Date
    Oct 2010
    Location
    GA
    Posts
    935


    Failover is seamless...

    The only thing I see up front is that the WD drives could be faster.
    Maybe someone else can give some input here as well.

  3. #3
    Join Date
    Aug 2008
    Posts
    236


    I think you are overbuying on the CPUs; you really only need a single socket. I'd recommend buying your storage up front. Growing your HA configuration will require downtime, so don't go into production unless you know you have the storage you need, at least until your next major maintenance window.
    I'd try to get 10Gb for the replication traffic. The speed of your replication will limit your maximum throughput; in other words, you can't expect to write at the speed of light if you can only replicate at the speed of sound. The replication is synchronous: the FS isn't notified that the write is complete until it has been done on both nodes.
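The bottleneck above can be put in rough numbers. This is a back-of-the-envelope model, not Open-E's actual behavior: with synchronous replication, effective write throughput is capped by the slower of the local array and the replication link. All figures are illustrative assumptions.

```python
# Rough model: a synchronous write is acknowledged only after it lands on
# BOTH nodes, so effective write speed is limited by the slowest leg.

def link_mb_per_s(gigabits: float, efficiency: float = 0.9) -> float:
    """Usable MB/s of a network link, assuming ~90% protocol efficiency."""
    return gigabits * 1000 / 8 * efficiency

def sync_write_throughput(array_mb_s: float, repl_link_gb: float) -> float:
    """Effective write throughput (MB/s) under synchronous replication."""
    return min(array_mb_s, link_mb_per_s(repl_link_gb))

ARRAY_MB_S = 400.0  # assumed sequential write speed of the RAID array

print(sync_write_throughput(ARRAY_MB_S, 1))   # 1GbE link caps you at ~112 MB/s
print(sync_write_throughput(ARRAY_MB_S, 10))  # 10GbE link: array is the limit
```

With a 1GbE replication link the array never gets to use its full write speed, which is the point being made above.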

  4. #4


    Thank you for the info, guys.

    Which 10G NICs are supported by Open-E?

    Based on what has been recommended, I could run a direct cable between these two SANs on a 10G port for replication purposes.

    iSCSI is done through a 1G connection to a gigabit switch.

    What do you think about the choice of RAID card? How often does it fail?

  5. #5
    Join Date
    Aug 2010
    Posts
    404


    Quote Originally Posted by kellogs
    Which 10G NICs are supported by Open-E? [...] What do you think about the choice of RAID card? How often does it fail?

    The following links show the supported 10G NICs and the supported RAID cards:
    http://www.open-e.com/service-and-su...tibility-list/

    http://www.open-e.com/service-and-su...vanced-search/

    And just as a reminder: if you use a 10Gb NIC with a 1Gb switch, it will only run at 1Gb, not 10Gb.

    As for how often the RAID card fails, that depends on so many things that no one can say for sure. Always make sure the system receives clean, stable power; that will give your hardware in general a longer life.

  6. #6
    Join Date
    Aug 2008
    Posts
    236


    Quote Originally Posted by kellogs
    What do you think about the choice of RAID card? How often does it fail?
    You are welcome...
    I personally like the Areca controllers. I think out-of-band management for RAID controllers is a must-have feature.
    I've had major problems with Adaptec Storage Manager not sending email-based alerts, because you have to keep the GUI running constantly on a node. In addition, on the Areca controllers I can schedule bi-weekly checks of my RAID5 volumes.

    I have several Areca controllers and haven't had a single failure yet.

  7. #7


    Here is the updated spec.

    The 10G port is going to be directly connected to the other SAN unit for replication purposes.

    Please comment.

    Intel Xeon Sandy Bridge 3U OnApp SAN Server
    Supermicro X9SCI-LN4 Single-Socket LGA1155 Server Board
    Intel Xeon Sandy Bridge E3-1230, 4x 3.2GHz Cores

    Supertalent 16-Gig DDR3-1333 ECC unbuffered, 4x 4G

    10x Western Digital 2TB RE4 RAID Edition 7200rpm WD2003FYYS
    Adaptec 16-port SATA-II RAID Controller + BBU Module 51645+ABM800
    Adaptec 64GB MaxCache Performance Kit

    on-board Matrox G200eW Video Adapter, 8-Meg RAM

    on-board Intel 82574 Quad Gigabit NIC Ports
    On-board IPMI 2.0 Adapter w/KVM-over-IP & 3rd NIC

    Intel 10GBE CX4 Dual-port Server Adapter, PCI-E 8-lane EXPX9502CX4
    Supermicro 3U Rackmount Chassis, 3x hot-plug Chassis Fan SC836TQ-R800B

    16x Hot-swap HDD Carriers & 1x16 SATA/SAS Backplane
    Supermicro dual 800-Watt Redundant Power Supplies

    3U Slide Rails/Brackets Mounting Kit
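One practical note on the direct 10G replication link in the spec above: the initial full sync of the volume still takes hours. A rough estimate, with illustrative assumed numbers (on a 10GbE direct link the array, not the network, is likely the bottleneck):

```python
# Rough estimate of initial full-sync time for a volume over the replication
# link. All numbers below are illustrative assumptions, not measured values.

def sync_hours(volume_tb: float, effective_mb_s: float) -> float:
    """Hours to copy volume_tb at a sustained effective_mb_s."""
    total_mb = volume_tb * 1_000_000  # marketing TB -> MB
    return total_mb / effective_mb_s / 3600

# ~16 TB usable (10x 2TB in RAID-6, marketing TB), array sustaining ~400 MB/s:
print(round(sync_hours(16, 400.0), 1))  # ~11 hours for the first full sync
```

Worth scheduling the initial sync before the nodes go into production, since ongoing synchronous replication only has to keep up with new writes.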
