
Thread: some general questions..

  1. #1


    Hi everybody!

    Right now I am a little bit confused about the performance problems with block I/O.
    We're planning to implement DSS as an iSCSI solution for our ESX servers, including replication to a secondary iSCSI DSS.
    We are running 13 ESX servers hosting approx. 100 VMs (incl. 40 SAP R/3 systems), so we really need performance :-)

    For our primary DSS storage (64-bit mode) we want to use the following (the secondary DSS will be the same server with less RAM and CPU):
    2x Intel quad-core 2.3 GHz
    32 GB RAM
    Areca 1280 with 2 GB cache
    24x 1 TB Seagate SATA (22 in RAID 6, 2 hot spares)
    QLogic QLA4052C or QLE4062C
    Intel quad-port Gigabit Ethernet adapter

    So my questions are:
    - Regarding performance: is it better to use 8 cores at 2.3 GHz (2x quad-core) or 4 cores at 3 GHz (2x dual-core)?
    - Is DSS able to use the 32 GB of RAM for caching in block I/O mode?
    - If not, can file I/O use that amount of RAM for cache, and is file I/O suitable for large ESX files (>100 GB), especially in the context of replication?

    Thanks in advance for any information..

    Kind regards,
    Lutz

  2. #2


    Dear Lutz;

    8 CPUs would be overkill here and may cause some issues. On the other hand, 4 dual-core CPUs would be more realistic and should give high performance with the specs you have. DSS can use the 32 GB for file I/O, and the 64-bit mode for block I/O. For ESX, we tested it with 50 GB and had no problem; most likely it will work just as well with 100 GB.

    Please keep in mind that in version 1.30 (or later) of the new iSCSI target, default volume creation is done in block I/O, in contrast to the older versions, where it was file I/O. Block I/O mode is about 30% faster than file I/O.
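
    If you want to sanity-check that difference yourself, a minimal Python sketch run from a Linux test machine logged into both targets could time a large sequential write against each one; the mount points below are placeholders for wherever you format and mount the two LUNs:

        import os
        import time

        def write_test(path, total_mb=1024, chunk_mb=4):
            """Write total_mb of data to path, fsync, and return MB/s."""
            chunk = os.urandom(chunk_mb * 1024 * 1024)
            start = time.time()
            with open(path, "wb") as f:
                for _ in range(total_mb // chunk_mb):
                    f.write(chunk)
                f.flush()
                os.fsync(f.fileno())   # make sure the data really reached the target
            elapsed = time.time() - start
            os.remove(path)
            return total_mb / elapsed

        if __name__ == "__main__":
            # placeholder mount points for a block-I/O LUN and a file-I/O LUN
            for path in ("/mnt/blockio_lun/testfile", "/mnt/fileio_lun/testfile"):
                print(f"{path}: {write_test(path):.1f} MB/s")

    Use a test size well above the RAM and controller cache sizes involved if you want to measure the RAID itself rather than the caches.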

    Best Regards;
    Shon

  3. #3
    Join Date: Jun 2007 | Posts: 84


    In my experience it behaves like this:

    In BlockIO, the storage performance at the iSCSI initiator is exactly the performance of the RAID. Only a few MB are used for cache.

    In FileIO, the storage performance at the iSCSI initiator is much faster than the performance of the RAID. All free memory is used for cache, so read and write access to the RAID is optimized.

    In my case I had 6x 750 GB SATA II in RAID 5, and the performance in BlockIO was not usable when accessing the RAID with more than two virtual machines. With FileIO the slow SATA RAID is much faster.

    There will be changes in the future so that more cache is used in BlockIO, but at this time FileIO is the only solution when using VMware ESX.

    BlockIO might be faster than FileIO, but only when accessing the RAID with one initiator and one task.
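
    To see this caching effect in isolation, you can read the same large file twice on any Linux box with enough free RAM: the first pass is limited by the disks, the second is mostly served from the page cache. A rough Python sketch (the path is just a placeholder):

        import time

        def timed_read(path, chunk=4 * 1024 * 1024):
            """Read the whole file sequentially and return throughput in MB/s."""
            total = 0
            start = time.time()
            with open(path, "rb") as f:
                while True:
                    data = f.read(chunk)
                    if not data:
                        break
                    total += len(data)
            return total / (1024 * 1024) / (time.time() - start)

        if __name__ == "__main__":
            # placeholder path; use a file a few GB in size, smaller than free RAM
            # (for a clean "cold" number, drop the page cache first as root:
            #  echo 3 > /proc/sys/vm/drop_caches)
            path = "/mnt/test/bigfile"
            print(f"cold read: {timed_read(path):.1f} MB/s")   # limited by the RAID
            print(f"warm read: {timed_read(path):.1f} MB/s")   # mostly from RAM cache

    With FileIO the DSS target does essentially the same thing with its own free memory, which is why the initiator can see more than the raw RAID speed.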

    Best regards
    Stefan

  4. #4


    A 22-disk RAID 6 with 1 TB drives? Whoa.

    Have you thought about rebuild times?

    You probably want to divide this up into at least 2-3 arrays rather than one giant RAID 6.

  5. #5


    Hi everybody..

    @Raudi: my question was aimed at exactly your issue :-) I think we will test some scenarios with both block and file I/O.

    @netsyphon: yes, whoa :-) We decided to use as many spindles as we can due to high I/O requirements. We discussed rebuild times and decided that a rebuild of up to 30 h is OK. Right now we have a 12-disk 5 TB RAID 6 on a slower controller, and it takes up to 10 h for a complete rebuild under normal workload.
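
    For what it's worth, a back-of-the-envelope rebuild estimate is just the capacity of the replacement drive divided by the sustained rebuild rate; the rates in this little Python sketch are assumptions, not Areca numbers:

        # rough rebuild-time estimate for one 1 TB replacement drive:
        # time ~= drive capacity / sustained rebuild rate (rates are assumptions)
        drive_mb = 1_000_000                     # 1 TB in decimal MB
        for rate_mb_s in (10, 30, 60):           # heavy load / moderate / idle array
            hours = drive_mb / rate_mb_s / 3600
            print(f"{rate_mb_s} MB/s -> ~{hours:.0f} h")

    At around 10 MB/s (a rebuild running under heavy host I/O) the estimate already lands near the 30 h we budgeted.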

    If this big array turns out to be a problem, we can run all our VMs over an alternative iSCSI path to our secondary DSS, which has replicated volumes from our primary DSS.

    If anything else goes wrong, we have an additional backup server with snapshots of all VMs, so data loss shouldn't be a problem.

    Our VMs are mostly development systems, so downtime is not really critical for us.

    @Shon: we will do some tests with 2x quad-core vs. 2x dual-core Xeons. I think it's interesting to see which kind of processor handles more I/O in multithreaded applications like ESX farming.

    Thanks for now.

    Regards, Lutz

  6. #6
    Join Date: Jun 2007 | Posts: 84


    I think the bottleneck will be the hard drives, not the CPU...

    A few days ago I did a migration from 6x 750 GB to 8x 750 GB, and it took nearly a week to rebuild. Perhaps a rebuild after a drive failure is faster than a migration... Up to now I haven't had a drive failure.

    Best regards
    Stefan

  7. #7


    Hi Raudi,

    What kind of controller do you use?
    We had similar problems with 3ware products (e.g. the 9550): in high-I/O situations the read/write performance dropped to a constant 8 MB/s for minutes, then rose again for a while, then dropped again...

    So we switched from 3ware with RAID 5 to Areca with RAID 6, and we noticed a massive performance gain compared to 3ware, even with RAID 6!

    regards,

    lutz

  8. #8


    I found others having performance issues with the 3ware 9550 as well. I'm not sure if this will help in your case, but others claimed it did when they updated to the latest version.

    http://www.3ware.com/KB/article.aspx?id=14956
    All the best,

    Todd Maxwell



  9. #9


    Hi Todd,

    Thanks for that interesting info. A little bit too late for us, though; we switched to Areca in mid-2006 :-)

    Regards,
    Lutz

  10. #10


    Hi warmduscher,

    Please update us when you run the tests with 2x quad-core vs. 2x dual-core Xeons. Let us know the results.
