
Thread: The write performance using dd and iozone

  1. #1

    The write performance using dd and iozone

    I ran the "dd" command and the "iozone" tool on a CentOS VM. The write speed with "dd" is only 132 MB/s, but "iozone" gets 430 MB/s to 530 MB/s. I am wondering why there is such a big difference.

    The connection between the VM and the SAN server uses MPIO, 10 Gb per path. I have disabled volume replication.

    My Hardware Specs:

    1. RAID/SCSI controllers
    LSI Logic / Symbios Logic LSI MegaSAS 9260 (rev 05)

    2. Network controllers
    Intel Corporation 82599EB 10-Gigabit SFI/SFP+ Network Connection (rev 01)
    Intel Corporation 82599EB 10-Gigabit SFI/SFP+ Network Connection (rev 01)

    3. Hardware RAID info:
    State: Optimal
    Strip size: 64 KB
    RAID level: RAID 6
    Number of disks in one array: 8
    Size: 11438364 MB
    Default Write Policy: Write Back
    Read Policy: Readahead
    Access Policy: Read-Write
    Cache Policy: Cached
    Disk cache: Disk's Default

    4. Disk info:
    ATA WDC WD2003FYYS-01D01 WD-WMAY02821178

    ###Test result with dd###

    [root@cloud1vm1 ~]# dd if=/dev/zero of=/var/tmpMnt bs=1024 count=1200000
    1200000+0 records in
    1200000+0 records out
    1228800000 bytes (1.2 GB) copied, 9.32559 seconds, 132 MB/s
    ###Test result with iozone###

    [root@cloud1vm1 current]# ./iozone -a -i 0 -i 1 -s 1024000
    Iozone: Performance Test of File I/O
    Version $Revision: 3.394 $
    Compiled for 32 bit mode.
    Build: linux


    Run began: Mon Jan 9 16:25:10 2012

    Auto Mode
    File size set to 1024000 KB
    Command line used: ./iozone -a -i 0 -i 1 -s 1024000
    Output is in Kbytes/sec
    Time Resolution = 0.000001 seconds.
    Processor cache size set to 1024 Kbytes.
    Processor cache line size set to 32 bytes.
    File stride size set to 17 * record size.
             KB  reclen    write  rewrite     read   reread
        1024000       4   430789   715592   992844   997117
        1024000       8   498271   790904  1116519  1133799
        1024000      16   512361   846420  1144076  1120124
        1024000      32   515320   845006  1206667  1232230
        1024000      64   514785   887902  1169362  1234239
        1024000     128   535819   858169  1164648  1148076
        1024000     256   508758   818966  1217331  1202289
        1024000     512   516002   907634  1229456  1231678
        1024000    1024   525833   883454  1146670  1149030
        1024000    2048   526764   871387  1141743  1210035
        1024000    4096   527375   891378  1155492  1120416
        1024000    8192   517536   834772  1125750  1124055
        1024000   16384   495983   866431  1113089  1099655

    iozone test complete.
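
    One difference I can see between the two runs is the write size: my dd test uses bs=1024 (1 KB writes), while the better iozone numbers come from much larger record sizes. Would a dd re-test with a bigger block size, and direct I/O to take the Linux page cache out of the picture, be a fairer comparison? Something like this (I have not run it yet, the numbers are just an example):

    # possible re-test: roughly 1.2 GB again, but in 1 MB blocks, bypassing the page cache
    dd if=/dev/zero of=/var/tmpMnt bs=1M count=1200 oflag=direct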

  2. #2
    Join Date: Oct 2007
    Location: Toronto, Canada
    Posts: 108


    I wonder whether "dd" does a random I/O test while "iozone" does a sequential I/O test.

    There is a significant difference in the performance of the two methodologies.

  3. #3


    Quote Originally Posted by SeanLeyne
    I wonder whether "dd" does a random I/O test while "iozone" does a sequential I/O test.

    There is a significant difference in the performance of the two methodologies.
    I searched on Google, and people say "dd only does sequential I/O". The iozone results are sequential reads and writes too.

    Also, the disk performance test tool in the DSS V6 console uses the "dd" command too. I cannot run that test because my volume group has already been created.

  4. #4
    Join Date: Aug 2008
    Posts: 236


    I've always stressed on this forum that performance testing should not be an afterthought. Meaning, you should test the bare-metal hardware using a live CD or a light installation of Linux.
    I generally use DRBL to boot an Ubuntu image I've loaded with all the drivers, management tools (e.g. LSI/Areca/Adaptec) and storage testing tools (Iometer/dynamo, xdd).
    Knowing your raw LUN performance is a critical baseline metric.
    That being said, Iozone is significantly different from dd. For one, Iozone can utilize multiple threads and reach a higher queue depth.
    If your SAN is used for virtualization, your main concern should be how well it performs random I/O. Throughput tests are great for bragging rights, but they don't typify what you'd see in real-life applications. Most OS access patterns are mostly random, with varying block sizes and roughly 60-80% reads. Fire up a Windows VM, load IOMeter, and use the file server or web server access pattern. This will give you an apples-to-apples comparison. Sites like tomshardware and anandtech frequently test new HDDs and RAID controllers, so you can see how you stack up.
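
    If you want a quick sketch using a tool already in this thread, iozone's throughput mode can approximate a multi-threaded random workload. Something along these lines (the thread count, record size, and per-thread file size are only placeholders; tune them to your RAM and controller cache):

    # rough sketch: 4 threads, 4 KB records, O_DIRECT, sequential write (-i 0) then random read/write (-i 2)
    ./iozone -I -t 4 -s 256m -r 4k -i 0 -i 2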
