
Thread: Design issues and questions

  1. #1

    Default Design issues and questions

    Hello,

    I have to design a small SAN and I have some questions about Open-E's capabilities.

    The numbers are 10 to 20 TB of live data (the virtual machine disks) and 50 to 100 TB of backups (the backups of those virtual machines). I plan to use a pair of servers (active/passive) for the live data and the same setup for the backups.
    Is DSS V6 able to handle that much disk space?

    The Supermicro hardware I plan to use will allow me to expand the disks by adding JBOD chassis with SAS expanders.
    How will DSS V6 see these new disks?
    Will I be able to add that space to the existing volume (expand it), or will I have to create a new LUN?


    I plan to use RAID 1+0 of 7200 RPM drives (12 drives + spares + OS).
    Is it better, worse, or equal to RAID 5+0 of 15k RPM drives in terms of IOPS? In terms of seek time?

    Related to the previous points, the Supermicro cabinet I plan to use has two backplanes, each with a SAS port. So the RAID controller can see 24 disks on one SAS port (4 x 6 Gbps lanes per port) and 12 disks on the other. Each backplane can be daisy-chained with additional cabinets.
    What kind of performance can I expect from that?
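
    For reference, here is my back-of-the-envelope math for the raw SAS link bandwidth (just the lane arithmetic from the numbers above; controller, expander and disk limits will lower the real figure):

    Code:
    # Rough SAS-2 wide-port bandwidth math (link level only).
    LANES_PER_PORT = 4    # each backplane port is a 4-lane wide port
    GBPS_PER_LANE = 6.0   # SAS-2 line rate per lane, in Gbit/s
    ENCODING = 8 / 10     # 8b/10b encoding overhead on SAS-2

    raw_gbps = LANES_PER_PORT * GBPS_PER_LANE
    usable_gbs = raw_gbps * ENCODING / 8  # usable GByte/s per wide port

    print(f"raw:    {raw_gbps:.0f} Gbit/s per wide port")
    print(f"usable: ~{usable_gbs:.1f} GByte/s per wide port")
    # -> 24 Gbit/s raw, ~2.4 GByte/s usable per 4-lane port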

    I currently use a DSS V6 with BlockIO volumes. To speed up I/O a bit, I plan to use FileIO on the new setup. I have read that FileIO is based on XFS, and I have had very bad experiences with XFS under memory pressure and failures (power loss, kernel panics, etc.) leading to disk corruption.
    Is FileIO safe for production data? Does it perform better than BlockIO?
    Since FileIO will use memory for caching, is there a method to estimate how much memory will be needed? (A rule of thumb, maybe?)


    I plan to use 2-port and 4-port Intel Gbps network interface cards, models i350t2 and i350t4. I can't find them on the HCL.
    Are those models supported by DSS V6?

    We currently use Areca controllers (12xx and 1680), but we have faced several issues that caused panics, such as the controller no longer seeing the disks, or refusing to see a newly connected disk.
    Is there a "best" RAID controller manufacturer?

    I hope I'm asking those questions in the right place.

    Thanks for your help.

  2. #2

    Default

    Is DSS V6 able to handle that much disk space?

    Yes, we can handle that amount, but note that we only have a backup feature for NAS volumes, not for iSCSI volumes.


    How will DSS V6 see these new disks?
    Will I be able to add that space to the existing volume (expand it), or will I have to create a new LUN?

    DSS V6 will see the newly added capacity, and it can either be added to the existing volume or used to create a new one.

    I plan to use RAID 1+0 of 7200 RPM drives (12 drives + spares + OS).
    Is it better, worse, or equal to RAID 5+0 of 15k RPM drives in terms of IOPS? In terms of seek time?

    Hard to tell; every setup is different, and with over 750 supported products it is difficult to predict exact numbers. The links below might be helpful, and others have posted their speeds here on the forum.
    http://blog.open-e.com/what-you-can-expect-from-ssd-2/
    http://blog.open-e.com/random-vs-sequential-explained/
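
    As a very rough starting point, you can model it yourself; the per-drive figures below are textbook assumptions, not measurements of any specific hardware:

    Code:
    # Back-of-the-envelope random IOPS estimate (assumed textbook
    # per-drive figures; real results depend heavily on the workload).
    IOPS_7200 = 80   # typical random IOPS for a 7200 RPM drive
    IOPS_15K = 180   # typical random IOPS for a 15k RPM drive

    def array_iops(per_drive, drives, write_penalty, read_fraction):
        """Host-visible random IOPS given the RAID write penalty."""
        raw = per_drive * drives
        writes = 1 - read_fraction
        # Reads cost 1 back-end IO each; writes cost `write_penalty`.
        return raw / (read_fraction + writes * write_penalty)

    # 12x 7200 RPM in RAID 1+0 (write penalty 2) vs
    # 12x 15k RPM in RAID 5+0 (write penalty 4), 70/30 read/write mix.
    print("RAID 10, 12x 7200 RPM:", round(array_iops(IOPS_7200, 12, 2, 0.7)))
    print("RAID 50, 12x 15k RPM :", round(array_iops(IOPS_15K, 12, 4, 0.7)))

    By that crude model the 15k RAID 5+0 set comes out ahead on random IOPS despite the higher write penalty, and 15k drives generally seek faster as well.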


    Is FileIO safe for production data? Does it perform better than BlockIO?

    I prefer Block I/O; again, performance can cover a wide range. Here is more info on File I/O:
    http://kb.open-e.com/File-IO-Or-Block-IO_342.html
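
    On the memory question: I'm not aware of an official sizing formula. A crude model (my assumption, not an Open-E recommendation) is that whatever RAM is left over after the system's own needs becomes Linux page cache for File I/O, so you can estimate the read hit ratio for a given RAM budget:

    Code:
    # Crude File I/O cache sizing model (an assumption, not an official
    # Open-E formula): spare RAM becomes page cache for File I/O volumes.
    ram_gb = 48          # candidate RAM size (hypothetical)
    system_gb = 4        # RAM reserved for the OS itself (assumed)
    hot_set_gb = 200     # assumed "hot" slice of the live data

    cache_gb = ram_gb - system_gb
    hit_ratio = min(1.0, cache_gb / hot_set_gb)
    print(f"~{cache_gb} GB page cache -> ~{hit_ratio:.0%} read hit ratio "
          f"on a {hot_set_gb} GB hot set")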

    I plan to use 2-port and 4-port Intel Gbps network interface cards, models i350t2 and i350t4. I can't find them on the HCL.
    Are those models supported by DSS V6?

    These should work with the latest DSS V6 build on our site, which you can download and test.

    Is there a "best" RAID controller manufacturer?
    If we had the logs we would be able to help. The Areca 1880 and the LSI SAS controllers, such as the 9285 or 9260, are all equally good.
    All the best,

    Todd Maxwell


    Follow the red "E"
    Facebook | Twitter | YouTube

  3. #3

    Default

    Hello,

    Regarding the backups, I'm sorry, I meant something else.
    I will use third-party software (Veeam Backup & Replication), and what I call the backup SAN will only host iSCSI targets/LUNs (plus replication).
    The backup part will be done by the third-party software alone: it pulls the virtual machine disks directly from the live-data SAN and stores them "locally" (on an iSCSI LUN formatted with NTFS).
    I'm sorry for the misunderstanding.

    Thanks for your links. The SSD endurance is quite scary :-/

    Regarding Block I/O, I guess the RAM size is not very important since there is no cache. Am I right?
    It would be great if we could have a read cache and write-through.

    I don't think our Areca problems are related to the OS; they happen with different models, different RAID configurations, and different host OSes.

    Thanks for your quick answers

  4. #4

    Default

    Quote Originally Posted by openweb
    Hello,

    Regarding Block I/O, I guess the RAM size is not very important since there is no cache. Am I right?
    It would be great if we could have a read cache and write-through.
    You are correct about the RAM size, but Block I/O will use the device's cache. Also, with iSCSI LUNs you can enable the Write Back feature, and combined with the RAID controller's Write Cache feature it should deliver good I/O.
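
    To see why the write-back path helps, here is a toy latency model (illustrative numbers only, not measurements of any particular controller):

    Code:
    # Toy model of average write latency with a write-back cache
    # (illustrative numbers, not measurements).
    T_CACHE_MS = 0.05   # ack from controller cache (assumed)
    T_DISK_MS = 8.0     # commit to a 7200 RPM spindle (assumed)

    def avg_write_latency(absorbed):
        """Average latency when `absorbed` fraction of writes hit cache."""
        return absorbed * T_CACHE_MS + (1 - absorbed) * T_DISK_MS

    for h in (0.0, 0.5, 0.9, 0.99):
        print(f"cache absorbs {h:.0%} of writes -> {avg_write_latency(h):.2f} ms")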
    All the best,

    Todd Maxwell


    Follow the red "E"
    Facebook | Twitter | YouTube

  5. #5

    Default

    Ok. Thanks again

  6. #6

    Default

    Quote Originally Posted by To-M
    You are correct about the RAM size, but Block I/O will use the device's cache...
    Todd,

    I have always wondered:

    "Why RAM isn't/can't be used as a read cache for Block IO requests by Open-E?"

    There are a number of other SAN solutions which do it.

    It would significantly improve SAN performance, at a very low cost to users (RAM is cheap!).

  7. #7

    Default

    Hey SeanLeyne - I'm not sure whether the other solutions use SCST for their iSCSI target; with SCST, a read cache in RAM isn't available for Block I/O. Only File I/O takes advantage of caching by using the extra memory on the motherboard, while Block I/O just uses whatever cache you have on your disk controller, and none on your motherboard.
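
    You can see the page-cache effect that File I/O relies on with a quick test like this (a generic Linux demonstration, nothing DSS-specific; drop caches as root with "echo 3 > /proc/sys/vm/drop_caches" first to make the initial read truly cold):

    Code:
    import os
    import time

    # Generic page-cache demonstration: the second read of the same file
    # is served from RAM, which is the benefit File I/O volumes get.
    PATH = "cache_demo.bin"    # hypothetical scratch file
    SIZE = 256 * 1024 * 1024   # 256 MiB

    with open(PATH, "wb") as f:
        f.write(os.urandom(SIZE))

    def timed_read(label):
        start = time.time()
        with open(PATH, "rb") as f:
            while f.read(4 * 1024 * 1024):
                pass
        print(f"{label}: {time.time() - start:.2f} s")

    timed_read("first read")                # cold only if caches were dropped
    timed_read("second read (page cache)")  # typically far faster
    os.remove(PATH)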
    All the best,

    Todd Maxwell


    Follow the red "E"
    Facebook | Twitter | YouTube
