Failover is seamless.
The only thing I see up front is that the WD drives could be faster.
Maybe someone else can give some input here as well.
I think you are overbuying on the CPUs; you really only need a single socket. I'd also recommend buying your storage up front. Growing your HA configuration will require downtime, so don't go into production unless you know you have the storage you need, at least until your next major maintenance window.
I'd try to get 10GbE for replication traffic. The speed of your replication link will limit your maximum write throughput. In other words, you can't expect to write at the speed of light if you can only replicate at the speed of sound. The replication is synchronous: the filesystem isn't notified that a write has completed until it's done on both nodes.
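To put rough numbers on that (just a back-of-the-envelope sketch; the array and link speeds below are assumptions, not measurements of your hardware):

# Illustrative model of synchronous replication throughput.
# All speeds here are assumed values, not benchmarks.

GBIT = 1_000_000_000 / 8  # bytes per second in one Gbit/s

def effective_write_throughput(local_bps, replication_link_bps):
    """A synchronous write isn't acknowledged until both nodes have it,
    so sustained throughput is capped by the slower of the two paths."""
    return min(local_bps, replication_link_bps)

local = 6 * GBIT       # assume the local RAID array absorbs ~6 Gbit/s
repl_1g = 1 * GBIT     # 1GbE replication link
repl_10g = 10 * GBIT   # 10GbE replication link

print(f"1GbE link:  {effective_write_throughput(local, repl_1g) / 1e6:.0f} MB/s")
print(f"10GbE link: {effective_write_throughput(local, repl_10g) / 1e6:.0f} MB/s")

With a 1GbE replication link you top out around 125 MB/s no matter how fast the array is; with 10GbE the array itself becomes the limit again.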
Thank you for the info, guys.
Which 10G NICs are supported by Open-E?
Based on what has been recommended, I could run a direct cable between these two SANs on a 10G port for replication purposes.
iSCSI is done through a 1G connection to a gigabit switch.
What do you guys think about the choice of RAID card? How often do they fail?
Please see the following links, which list the supported 10G NICs and RAID cards:
http://www.open-e.com/service-and-su...tibility-list/
http://www.open-e.com/service-and-su...vanced-search/
And just as a reminder: if you use a 10Gb NIC with a 1Gb switch, it will run at 1Gb, not 10Gb.
As for how often the RAID card fails, that depends on a lot of things, and no one can say for certain. Always check that the system is getting clean, stable power; that will generally give your hardware a longer life.
You are welcome.
I personally like the Areca controllers. I think out-of-band (OOB) management is a must-have feature for a RAID controller.
I've had major problems with Adaptec Storage Manager not sending email alerts, because you have to have the GUI running constantly on a node. In addition, on the Areca controllers I can schedule bi-weekly checks of my RAID5 volumes.
I have several Areca controllers and haven't had a single failure yet.
Here is the updated spec.
The 10G port is going to be directly connected to the other SAN unit for replication purposes.
Please comment.
Intel Xeon Sandy Bridge 3U OnApp SAN Server
Supermicro Single-Socket LGA1155 Server Board, Intel C204 X9SCI-LN4
Intel Xeon Sandy Bridge E3-1230, 4x 3.2GHz Cores E3-1230
Super Talent 16GB DDR3-1333 ECC Unbuffered, 4x 4GB
10x Western Digital 2TB RE4 RAID Edition 7200rpm WD2003FYYS
Adaptec 16-port SATA-II RAID Controller + BBU Module 51645+ABM800
Adaptec 64GB MaxCache Performance Kit
on-board Matrox G200eW Video Adapter, 8MB RAM
on-board Intel 82574 Quad Gigabit NIC Ports
On-board IPMI 2.0 Adapter w/KVM-over-IP & 3rd NIC
Intel 10GBE CX4 Dual-port Server Adapter, PCI-E 8-lane EXPX9502CX4
Supermicro 3U Rackmount Chassis, 3x hot-plug Chassis Fan SC836TQ-R800B
16x Hot-swap HDD Carriers & 1x16 SATA/SAS Backplane
Supermicro dual 800-Watt Redundant Power Supplies
3U Slide Rails/Brackets Mounting Kit
Looks good. One other piece of advice: I suggest you boot something like DRBL or a live CD and run some burn-in/performance testing. Use iometer or your favorite tool and collect all your benchmarks and performance numbers. This will give you a good idea of what you can reasonably expect once you install Open-E.
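If you just want a quick sanity check before reaching for a full benchmarking tool, something like this works (a minimal Python sketch; the mount point and sizes are assumptions to adjust for your setup, and iometer or fio will give far more thorough numbers):

# Quick-and-dirty sequential write test. The path below is a hypothetical
# mount point for the test volume; sizes are assumptions to tune.
import os, time

PATH = "/mnt/array/burnin.tmp"   # hypothetical test file on the array
BLOCK = 1024 * 1024              # 1 MiB per write
TOTAL = 4 * 1024 * 1024 * 1024   # 4 GiB written in total

buf = os.urandom(BLOCK)
start = time.time()
with open(PATH, "wb") as f:
    for _ in range(TOTAL // BLOCK):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())         # make sure the data actually hit the disks
elapsed = time.time() - start
os.remove(PATH)
print(f"Sequential write: {TOTAL / elapsed / 1e6:.0f} MB/s")

The fsync matters: without it you're mostly measuring the OS page cache, not the array.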
enealDC,
Thank you for the tips.
How about a CentOS live CD, and then running iometer?
CentOS might not have the benchmarking tools I need, though.
So we're testing the I/O?