
Thread: disk read/write speeds

  1. #1

    disk read/write speeds

    Just wondering what speeds others get from their DSSv6 iSCSI systems?

    This is running on a Debian Linux system, with its storage on the iSCSI target:

    Write speed:
    ########
    srs:/home/icepick# time dd if=/dev/zero of=./testfile bs=65536 count=65536
    65536+0 records in
    65536+0 records out
    4294967296 bytes (4.3 GB) copied, 86.8138 s, 49.5 MB/s

    real 2m21.071s
    user 0m0.030s
    sys 0m8.060s


    Read speed:
    ########
    srs:/home/icepick# time dd if=./testfile of=/dev/null bs=65536 count=65536
    65536+0 records in
    65536+0 records out
    4294967296 bytes (4.3 GB) copied, 46.5702 s, 92.2 MB/s

    real 0m46.575s
    user 0m0.020s
    sys 0m0.830s
    srs:/home/icepick#

    I'm happy with the read speed, but not the write speed.
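    Note that plain dd like this writes into the Linux page cache, so the reported rate can include data not yet committed to the array. A sketch of two GNU dd variants that account for that (same testfile, added flags):

    ```shell
    # Include the final flush in the timing, so the figure reflects data
    # actually committed to the iSCSI target rather than the page cache:
    dd if=/dev/zero of=./testfile bs=65536 count=65536 conv=fdatasync

    # Or bypass the page cache entirely with O_DIRECT:
    dd if=/dev/zero of=./testfile bs=65536 count=65536 oflag=direct
    ```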

  2. #2

    additional

    Also interested in small-block performance, so here is the same test with a 1-byte block size:


    #### WRITE ####
    srs:/home/icepick# time dd if=/dev/zero of=./testfile bs=1 count=655360
    655360+0 records in
    655360+0 records out
    655360 bytes (655 kB) copied, 2.19343 s, 299 kB/s

    real 0m2.196s
    user 0m0.230s
    sys 0m1.960s

    #### READ ####
    srs:/home/icepick# time dd if=./testfile of=/dev/null bs=1 count=655360
    655360+0 records in
    655360+0 records out
    655360 bytes (655 kB) copied, 1.48815 s, 440 kB/s

    real 0m1.490s
    user 0m0.220s
    sys 0m1.260s
    srs:/home/icepick#
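    Worth noting: with bs=1, dd issues one read() and one write() syscall per byte, so this run mostly measures syscall overhead rather than the storage path. A quick sketch of the contrast:

    ```shell
    # Same 64 KiB of data, wildly different syscall counts:
    dd if=/dev/zero of=./testfile bs=1 count=65536      # 65536 write() calls
    dd if=/dev/zero of=./testfile bs=65536 count=1      # a single write() call
    ```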

  3. #3

    strange

    The first time I ran it, the write took 2m21s; this time it was much quicker:

    srs:/home/icepick# time dd if=/dev/zero of=./testfile bs=65536 count=65536
    65536+0 records in
    65536+0 records out
    4294967296 bytes (4.3 GB) copied, 35.6096 s, 121 MB/s

    real 0m35.614s
    user 0m0.030s
    sys 0m7.850s
    srs:/home/icepick# time dd if=./testfile of=/dev/null bs=65536 count=65536
    65536+0 records in
    65536+0 records out
    4294967296 bytes (4.3 GB) copied, 42.8676 s, 100 MB/s

    real 0m42.872s
    user 0m0.010s
    sys 0m0.690s
    srs:/home/icepick#
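    The speed-up is most likely the Linux page cache: the 4 GB testfile fits comfortably in RAM, so the rerun barely touches the array. A sketch of a cold-cache retest (dropping caches needs root):

    ```shell
    sync                                 # flush dirty pages to the target first
    echo 3 > /proc/sys/vm/drop_caches    # drop page cache, dentries and inodes
    time dd if=./testfile of=/dev/null bs=65536 count=65536
    ```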

  4. #4


    It greatly depends on your system back end (i.e. software or hardware raid, what kind of raid set, how many hard-drives, etc). Also, there's a difference between "File I/O" and "Block I/O," since they are cached differently. dd is good for real quick tests, but you need something a little different to get a full picture (like I/Os per second).
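    One tool on the Debian side that reports I/Os per second is fio; a sketch of a job file for a random-read IOPS test (assuming fio is installed; the size, depth, and filename are just examples):

    ```
    ; random-read IOPS test -- run with: fio iops.fio
    [randread-iops]
    ioengine=libaio
    direct=1
    rw=randread
    bs=4k
    size=1g
    iodepth=32
    runtime=60
    filename=./fio-testfile
    ```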

  5. #5

    slow still

    The system has 16 GB of memory, 6 GigE links, and dual quad-core 2.4 GHz Intel processors.

    I have 4 bonded links in 802.3ad on my DSSv6 system, with RAID 6 configured on the 8 drives via the 3ware console.

    I have tested across these bonded links to my 2 XenServer hosts, and from within a VM I'm getting 200 Mbit/s write and 650 Mbit/s read (megabits, not megabytes), so I'm not even using a full 1 Gbit link. To rule out a problem with the bonds or the switch, I linked another interface on my DSSv6 directly to one of my XenServer hosts, and I get the same speeds. I've even tried MPIO across the bond and the single Ethernet link, and the speed remains the same.

    These speeds are from monitoring the Ethernet links via SNMP, from both the XenServers and the DSS, every 30 seconds.

    Surely I should expect at least 1 Gbit/s write/read as a minimum.
    Disks are datacenter drives (I think they call them ES, but I can't remember): 1 TB 7,200 rpm SATA, capable of 3 Gbit/s.
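    As a quick unit sanity check (the SNMP link counters are in bits/s, while dd reports bytes/s):

    ```shell
    # Divide bits/s by 8 to compare with dd's MB/s figures:
    echo "write: $((200 / 8)) MB/s"     # 200 Mbit/s
    echo "read:  $((650 / 8)) MB/s"     # 650 Mbit/s (integer division; ~81.25)
    echo "GigE:  $((1000 / 8)) MB/s"    # a fully saturated 1 Gbit link
    ```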

  6. #6


    I am no expert, but from what I have learned, you should really be using MPIO.

  7. #7


    With SATA disks, you are getting a good speed. Try Iometer with 1 MB blocks, 100% sequential, 100% read. If you get 140 MB/s with RAID 6, it's OK. :-)
