
Thread: DSS FC is very fast.

  1. #1

I just ran sqlio against an iSCSI volume (over a single 1Gb Ethernet connection, jumbo frames not enabled). It was pretty slow for random read I/O from cache. In fact, it was only about 470 IOPS when I ran it this way (with a 200MB test file):

    sqlio -kR -s60 -frandom -o128 -b4 -LS -Fparam.txt

The param.txt file contains only this: "H:\testfile.dat 1 0x0 200". That's the location of the test file (here the H: volume, which happens to be the iSCSI one), the number of threads to run (one), the bitmask that sets processor affinity (not applicable, since there's only one core), and the size of the test file in megabytes. Both the FC volume and the iSCSI volume were formatted NTFS with a 4096-byte block size. The initiator is the MS software iSCSI initiator 2.08, and the iSCSI target is block I/O mounted with write-back cache enabled.
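For the FC run, the only thing that changes is the path in param.txt; assuming the FC volume were mounted as F: (the drive letter is my placeholder, not from the test), the file would read:

    F:\testfile.dat 1 0x0 200

and the sqlio command line stays exactly the same.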

Whereas when I use an FC volume over a 1Gb FC interface, I get about 14,000 random read IOPS.

I've done this a few times now, and it seems quite obvious that FC is far superior for random read I/O from cache. Granted, I'm only able to test one thread at a time right now, since my initiator side is only single-core. Oh well.

I tried running the same thing on the FC volume right after this test (but first running a sequential sqlio pass to put the benchmark file in the DSS system cache, as in the iSCSI volume test). I get over 14,000 random read IOPS!

BTW, the easiest way to make sure the whole benchmark test file is in the cache when using sqlio is to run sqlio in sequential mode beforehand once or twice, long enough for it to read the entire test file. If this were not a benchmark but a production environment with a cache large enough for, say, an entire MySQL database to fit in, you could just run "dd if=/dev/sda1 of=/dev/null bs=4k" to pull everything into cache before you start using it. This could be done on either the DSS side or the client side. The DSS side would be a little faster, but it doesn't matter much, since you'd only rarely have to do it.
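For the warm-up itself, a pass along these lines does the job (the 120s duration and 64KB sequential blocks are my picks, not from the runs above; anything that reads the whole test file through once is enough):

    sqlio -kR -s120 -fsequential -o8 -b64 -LS -Fparam.txt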


PS: I'll try to see what difference jumbo frames make, if I can enable them.

  2. #2

    Here are some of my results with writing to a fibre channel volume (1Gb link):


I get over 10,000 random write IOPS. Not bad. But as the size of the test file goes up, the random write IOPS drop dramatically, presumably because once the test file outgrows the DSS's cache, the writes become seek-bound on the disks.

  3. #3

These are the results with an iSCSI volume (1Gb Ethernet, no jumbo frames):


The random write IOPS never got higher than 117. This is without jumbo frames; my on-board Ethernet card doesn't support them, but I have a card that does, so I'll use that next. Even so, this is ridiculously low performance. There's probably something I did wrong, and I'll try to find it. 117 IOPS sounds like the target isn't using any of the DSS's system RAM for caching and is going straight to the disks.
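As a rough sanity check (my arithmetic, not a measurement): 117 IOPS x 4KB is only about 470KB/s, and 100-150 random IOPS is roughly what a single 7200rpm spindle can sustain, so the numbers really do look like every write is hitting the disks with no write-back caching in between.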

  4. #4

Also, BTW, that's with a Block I/O iSCSI volume. I'll try it next with a File I/O volume.

  5. #5

THIS IS OUTSTANDING WORK!!! Just to let you know, the engineers are all looking into your work.

    They wanted to know if you can test with the Atlanta version.

I think I will propose posting your results on our website, and see if we can create a section for best performance tests from our customers and partners using FC, iSCSI and NAS.

Thanks Mass Storage - you guys ROCK!
    All the best,

    Todd Maxwell



  6. #6

I finished a huge battery of tests using different test file sizes for comparison. For very small test file sizes (10MB and 40MB) with 4k random writes, fibre channel outperforms iSCSI by up to 80 times! (A sketch of how such a sweep could be scripted is at the end of this post.)

The left side is fibre channel, the right is iSCSI.


    I have more results coming tomorrow.
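In case anyone wants to reproduce the sweep, here's a minimal batch-file sketch of how it could be scripted (the size list and the write-test flags are my assumptions, not the exact runs above):

    for %%S in (10 40 200 1000 4000) do (
        rem rewrite param.txt with the new test file size in MB
        echo H:\testfile.dat 1 0x0 %%S> param.txt
        rem 4k random writes, 60 seconds, 128 outstanding I/Os
        sqlio -kW -s60 -frandom -o128 -b4 -LS -Fparam.txt
    )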

  7. #7

    Default

Try setting the iSCSI daemon settings for the target to the values below. If you get better performance, see if you can replace this chart with a current one.

Please go to the console and press Ctrl + Alt + W, then select Tuning options, then iSCSI daemon options, then Target options, then select the target of choice.

MaxRecvDataSegmentLength=262144
MaxBurstLength=16776192
MaxXmitDataSegmentLength=262144
MaxOutstandingR2T=8
    InitialR2T=No
    ImmediateData=Yes
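Roughly what these buy you (general iSCSI behavior per RFC 3720, not a DSS-specific claim): the 262144-byte segment lengths allow 256KB per data PDU instead of the 8KB default, MaxBurstLength permits bursts of about 16MB, and InitialR2T=No plus ImmediateData=Yes let the initiator send write data immediately instead of waiting for an R2T round trip, which is exactly where small random writes lose time.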

Many people are reading this thread, so we should keep it short, but really good job on the comparison!!
    All the best,

    Todd Maxwell



  8. #8

    Quote Originally Posted by Robotbeat
I finished a huge battery of tests using different test file sizes for comparison. For very small test file sizes (10MB and 40MB) with 4k random writes, fibre channel outperforms iSCSI by up to 80 times!
    Hi Robotbeat,

There must be something wrong with your setup. Look at my results:



Our DSS is a dual Xeon 3.2 GHz with 8 GB RAM and an ARC-1280 with 2GB cache, using block I/O and 1Gb iSCSI connections.
We run 4 x 1TB SATA drives in a RAID 10 configuration, in total 5 volumes with 10 TB.
This is no test setup, so we have some load (21 ESX servers, 136 VMs, 44 SAP test servers) on this DSS; the values may not be exactly reproducible.

As you can see, our old and busy setup easily beats your FC DSS at the larger file sizes.
It also seems more consistent than yours, which shows very high values at small file sizes and low values at large file sizes.

We actually have no issues with bad performance using iSCSI and block I/O. It's not fast as hell, but it's very acceptable under high load.

    Best Regards,
    Lutz

  9. #9

    Robotbeat!!

Check this post on speeds reported with an Open-E DSS benchmark (with pics)!
Would be good to check with him. I will let him know of your results as well.

Server:
DELL 860 with 6/i
RAID 0: WD GP 750GB x2
Open-E DSS Version: 5.0up60.7101.3511 32bit
FC HBA: Qlogic QLA2344 2Gbps PCI-X in target mode


    http://forum.open-e.com/showthread.php?t=1319
    All the best,

    Todd Maxwell


