
Thread: DSS FC is very fast.

  1. #11


    These are the results with an iSCSI volume (1Gb Ethernet with no jumbo frames):


    The random write IOPS never got higher than 117. This is without jumbo frames. My on-board Ethernet card doesn't support jumbo frames, but I have a card that does, so I'll use that next. Even so, this is ridiculously low performance. There's probably something I did wrong. I'll try to find it. 117 IOPS sounds like it isn't using any of the DSS's system RAM for caching and is just going straight to the disks.
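A quick back-of-the-envelope check of what 117 random-write IOPS implies in bandwidth terms (my own arithmetic, not output from the benchmark):

```python
# What throughput does 117 random-write IOPS at 4K imply?
block_size = 4 * 1024                  # 4 KiB per I/O
iops = 117
throughput_mb_s = iops * block_size / 1e6
print(f"{throughput_mb_s:.2f} MB/s")   # prints "0.48 MB/s"
```

Half a megabyte per second is right in seek-limited territory for a single 7200 rpm spindle, which supports the no-caching theory.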

  2. #12


    Also, BTW, that's with using a Block I/O iSCSI volume. I'll try it next with a File I/O volume.

  3. #13


    THIS IS OUTSTANDING WORK!!! Just to let you know, the engineers are all looking into your work.

    They wanted to know if you can test with the Atlanta version.

    I think I will propose to post your results on our website and see if we can create a section for best performance tests with our customers and partners using FC, iSCSI and NAS.

    Thanks Mass Storage - you guys ROCK!
    All the best,

    Todd Maxwell


    Follow the red "E"
    Facebook | Twitter | YouTube

  4. #14


    I finished a huge battery of tests using different test file sizes for comparison. For very small test file sizes (10MB and 40MB) with 4K random writes, Fibre Channel outperforms iSCSI by up to 80 times!
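That gap at small file sizes is consistent with the whole test file fitting in the target's cache. A minimal sketch of the cutoff, where the 2 GiB cache size and the file sizes tested are illustrative assumptions, not measurements:

```python
# Illustrative only: a test file smaller than the target's cache is served
# from RAM; a larger one spills to disk. The cache size is an assumption.
cache_bytes = 2 * 1024**3                  # assume 2 GiB of cache
for test_file_mb in (10, 40, 4096):
    fits = test_file_mb * 1024**2 <= cache_bytes
    print(test_file_mb, "MB ->", "cached" if fits else "spills to disk")
```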

    The left side is Fibre Channel, the right is iSCSI.

    I have more results coming tomorrow.

  5. #15


    Try setting the iSCSI daemon settings for the target to the values below. If you get better performance, we'll see if we can replace this map with a current one.

    Please go to the console and enter Ctrl + Alt + W, then select Tuning options, then iSCSI
    daemon options, then Target options, then select the target of choice.

    MaxRecvDataSegmentLength=262144
    MaxBurstLength=16776192
    MaxXmitDataSegmentLength=262144
    MaxOutstandingR2T=8
    InitialR2T=No
    ImmediateData=Yes
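For reference, these are standard RFC 3720 negotiation keys, and DSS's target appears to be based on the open-source iSCSI Enterprise Target. On a plain Linux IET box the same settings would live in `ietd.conf` - a hypothetical sketch (the target name is invented), not the DSS console syntax:

```
# /etc/ietd.conf -- hypothetical example target; keys per RFC 3720
Target iqn.2009-01.com.example:storage.test
    MaxRecvDataSegmentLength 262144
    MaxXmitDataSegmentLength 262144
    MaxBurstLength           16776192
    MaxOutstandingR2T        8
    InitialR2T               No
    ImmediateData            Yes
```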

    Many are reading this thread so we should keep it shorter, but really good job on the comparison!!
    All the best,

    Todd Maxwell



  6. #16


    Robotbeat!!

    Check this posting on speeds reported with the Open-E DSS benchmark, with pics!
    It would be good to check with him. I will let him know of your results as well.

    Server
    DELL 860 with 6/i
    RAID0 WD GP 750G X2
    Open-E DSS Version: 5.0up60.7101.3511 32bit
    FC HBA:Qlogic QLA2344 2Gbps PCI-X in target mode


    http://forum.open-e.com/showthread.php?t=1319
    All the best,

    Todd Maxwell



  7. #17


    Quote Originally Posted by Robotbeat
    I finished a huge battery of tests using different test file sizes for comparison. For very small test file sizes (10MB and 40MB) with 4K random writes, Fibre Channel outperforms iSCSI by up to 80 times!
    Hi Robotbeat,

    There must be something wrong with your setup. Look at my results:



    Our DSS is a dual Xeon 3.2 GHz, 8 GB RAM, ARC-1280 with 2 GB cache, using block I/O and 1Gb iSCSI connections.
    We run 4 x 1TB SATA drives in a RAID 10 configuration, in total 5 volumes with 10 TB.
    This is no test setup, so we have some load (21 ESX servers, 136 VMs, 44 SAP test servers) on this DSS - the values may not be exactly reproducible.

    As you can see, our old and busy setup easily beats your FC DSS at larger file sizes.
    It also seems more consistent, compared to your very high values at small file sizes and low values at large file sizes.

    We actually have no issues with bad performance using iSCSI and block I/O. It's not blazingly fast, but it is very acceptable under high load.

    Best Regards,
    Lutz

  8. #18


    Thanks, lufu. It's great to get a comparison point!

    Yeah, there probably is something wrong with my iSCSI setup.

    Part of the problem is that I am using a software RAID 5 using only 3 drives (2+parity).
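That 3-drive software RAID 5 alone could explain the number: each small random write costs four disk I/Os (read old data, read old parity, write both back). A rough estimate, assuming ~150 random IOPS per 7200 rpm SATA spindle (my assumption, not a measured figure):

```python
# RAID 5 small-write penalty: 4 disk I/Os per logical random write.
spindle_iops = 150      # assumed per-drive random IOPS (7200 rpm SATA)
drives = 3
effective_iops = drives * spindle_iops / 4
print(effective_iops)   # prints 112.5 -- close to the ~117 IOPS observed
```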

    Not using jumbo frames, either.

    Both sides are also using Pentium 4 chips with NetBurst, and only one thread of sqlio (since the initiator side has only one CPU core). It shouldn't be THAT slow, though. I tried tuning the iSCSI target daemon settings, but I haven't gotten much better results.

    Part of my whole point was to see how well FC works when just working from cache, since you could load up a 1U server with 100GB of cache instead of investing in lots of SAS drives.

    BTW, I really like the Areca controllers. That's what we've used in our setup.

    Here are the 4K random write IO results from a single drive hooked up to a 3ware controller with 512MB of raid cache and 2GB of system cache exported via 4Gb FC:


    It makes sense that the smaller test files perform so well and the larger ones are so much slower, since we're limited to a single SATA drive on the backend, so we'd be lucky to get more than 60MB/s sustained.
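To put a number on that ceiling (simple arithmetic on the 60 MB/s figure above, not a new measurement): even if every 4K write were laid down perfectly sequentially, one drive caps out around fifteen thousand writes per second, and truly random access is orders of magnitude worse.

```python
# Upper bound: 4K writes/s if a single 60 MB/s SATA drive streamed sequentially.
sustained_bytes_s = 60 * 1e6
block = 4096
print(int(sustained_bytes_s / block), "sequential 4K writes/s max")
```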
