
Thread: iSCSI performance vs Samba

  1. #1

    Default iSCSI performance vs Samba

    Hello,
    I have DSS V6 Lite installed on an HP box (Smart Array 641, 5 SCSI HDDs in RAID 5, dual Xeon 3GHz, 3GB RAM).
    I have configured a volume group.
    Inside the volume group I have an iSCSI volume in file I/O mode (160GB, initialized) and a NAS volume (100GB).
    This box is connected to my storage LAN using the integrated Broadcom NIC.
    The iSCSI volume is used by my VMware cluster (2 hosts) and is VMFS formatted.
    The NAS volume is shared on my network.
    If I copy data from a Windows box to the NAS volume, I get about 10MB/sec.
    If I copy data from the ESX hosts to the iSCSI volume (if I migrate a virtual machine, for example), I get 3MB/sec.
    Reading from the iSCSI volume gives the same performance as reading from the SAN volume.
    I have tried many combinations of iSCSI target parameters.
    I have another Linux box running SCST in fileio mode with the same target parameters as Open-E, and it outperforms Open-E when writing data.
    Any idea?
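
    For reference, a quick way to sanity-check raw write throughput is a plain dd test; the paths below are hypothetical examples:
    *-----------------------------------------------------------------------------*
    # from a Linux client, write 1GB to the mounted Samba share,
    # flushing at the end so the number reflects the actual transfer
    dd if=/dev/zero of=/mnt/nas_share/testfile bs=1M count=1024 conv=fdatasync
    # from the ESX service console, write 1GB to the VMFS datastore
    dd if=/dev/zero of=/vmfs/volumes/datastore1/testfile bs=1M count=1024
    *-----------------------------------------------------------------------------*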

  2. #2

    Default

    Have you tried with a single host connected to the iSCSI volume?

    By design, iSCSI is block-based and meant for a single host.

  3. #3

    Default

    Make sure that the cache settings on the RAID controller are set to Write Back, not Write Through. Check the test.log, look for the ifconfig -a output, and see whether the NICs are running at 1000. Also check sda or sdb to see what the hdparm speeds are.
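
    If the box has HP's hpacucli utility available (an assumption; the cache mode can also be checked from the Smart Array BIOS), something like this would show it:
    *-----------------------------------------------------------------------------*
    hpacucli ctrl all show config detail | grep -i cache
    *-----------------------------------------------------------------------------*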
    All the best,

    Todd Maxwell



  4. #4

    Default

    Nice try...
    The problem is with iSCSI only; with Samba, performance is OK.
    In any case here are the values you asked for:
    *-----------------------------------------------------------------------------*
    hdparm -t /dev/cciss/c0d1
    *-----------------------------------------------------------------------------*


    /dev/cciss/c0d1:
    Timing buffered disk reads: 148 MB in 3.01 seconds = 49.14 MB/sec

    ethtool eth0
    *-----------------------------------------------------------------------------*

    Settings for eth0:
    Supported ports: [ TP ]
    Supported link modes: 10baseT/Half 10baseT/Full
    100baseT/Half 100baseT/Full
    1000baseT/Half 1000baseT/Full
    Supports auto-negotiation: Yes
    Advertised link modes: 10baseT/Half 10baseT/Full
    100baseT/Half 100baseT/Full
    1000baseT/Half 1000baseT/Full
    Advertised auto-negotiation: Yes
    Speed: 1000Mb/s
    Duplex: Full
    Port: Twisted Pair
    PHYAD: 1
    Transceiver: internal
    Auto-negotiation: on
    Supports Wake-on: g
    Wake-on: d
    Current message level: 0x000000ff (255)
    Link detected: yes

  5. #5

    Default

    Just from my DSS Lite test with my 3Ware controller, I get higher throughput when enabling the cache.

    hdparm -t /dev/sda
    *-----------------------------------------------------------------------------*
    /dev/sda:
    Timing buffered disk reads: 354 MB in 3.00 seconds = 117.99 MB/sec

    Also, are these ES (server-grade) drives? Try connecting directly to the server. You might want to force the speed to 1000 instead of auto-negotiation in the modify driver feature in the Console Tools.
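
    On a plain Linux shell, the equivalent of forcing the speed would be something like this (interface name is an example):
    *-----------------------------------------------------------------------------*
    ethtool -s eth0 speed 1000 duplex full autoneg off
    *-----------------------------------------------------------------------------*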
    All the best,

    Todd Maxwell



  6. #6

    Default

    Hello,
    The machine is quite old, and 50MB/sec is fine for my purposes.
    The problem is that I get that kind of performance when writing via Samba,
    but when using iSCSI I get only 3MB/sec!
    The problem is not in the network or the disk array (Samba is performing fast enough); I think the problem is with some parameter in the SCST configuration.

  7. #7

    Default

    Did you enable WB (write-back) for the LUN? What are the target settings for the Max burst length and Xmit...?
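
    For reference, the negotiation keys in question are standard iSCSI parameters (RFC 3720); the values below are common tuned examples, not a recommendation:
    *-----------------------------------------------------------------------------*
    MaxBurstLength=262144
    FirstBurstLength=65536
    MaxRecvDataSegmentLength=65536
    ImmediateData=Yes
    InitialR2T=No
    *-----------------------------------------------------------------------------*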

    Test with the DSS V6 Trial version - latest build is 3535.
    All the best,

    Todd Maxwell



  8. #8

    Default

    WB enabled.
    I have tried different settings:
    the defaults;
    the ones I use on another Linux box with SCST, which shows far better write performance;
    the ones proposed for vSphere in another post.
    Write performance is always 3MB/sec.

  9. #9

    Default

    For the other Linux box with SCST in fileio: what are its read performance numbers, what hardware is it, was this tested on the same system, and can we get the hdparm output from that box's logs?
    Is the Linux box in 64-bit mode? What are the specs?

    I believe this might be isolated, as I don't have older systems to test with. Can you test with the DSS V6 Trial version?

    Also look at the dmesg logs to see if we can spot anything that could possibly be going on.
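
    For example, from a shell:
    *-----------------------------------------------------------------------------*
    dmesg | grep -i -E 'iscsi|scst|error'
    *-----------------------------------------------------------------------------*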
    All the best,

    Todd Maxwell



  10. #10

    Default

    The other Linux box is my production SAN.
    It is newer (E200i controller, 15k SAS disks, 4GB RAM; hdparm shows about 130MB/sec), and I cannot use it for testing.
    It is fileio on an entire partition of a logical RAID5 disk.
    I am testing the DSS V6 trial (latest build) on an older box to see if its performance is good enough to use in production.
    Since write performance with iSCSI is poor, I tested with Samba to ensure that my older hardware is OK.
