
Thread: Tuning read performance?

  1. #1

    Default Tuning read performance?

    Hello

I'm getting really good write performance to DSS (I thought I would post here since I would get more replies).

However, read performance is lower. It's not much lower; it averages around 80% of the write speed.

    I'm testing 4 clients connected to the DSS server using a gigabit switch.

    The RAID hardware in the server is capable of performing reads faster than writes.

Does anyone have any tips on read performance?

    Thanks
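A first step that often helps here is separating disk-path problems from network problems by benchmarking throughput from a client. A minimal sketch using dd (the path and sizes below are placeholders; on a real client you would point dd at a file on the iSCSI-backed filesystem instead of /tmp):

```shell
# Write a 64 MB test file and force it to stable storage; dd prints the
# achieved MB/sec on completion. /tmp/iscsi_test.bin is a placeholder --
# use a path on the iSCSI LUN to measure the actual storage path.
dd if=/dev/zero of=/tmp/iscsi_test.bin bs=1M count=64 conv=fdatasync

# Read it back to get a read figure.
dd if=/tmp/iscsi_test.bin of=/dev/null bs=1M

rm /tmp/iscsi_test.bin
```

Note that reading a file back immediately may be served from the client page cache; dropping caches first (as root: `echo 3 > /proc/sys/vm/drop_caches`) gives a figure that actually crosses the wire.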

  2. #2

    Default

    For your information, this is the output from iscsiadm -m session -r 1:

    node.name = iqn.2008-11:dss.target2
    node.tpgt = 1
    node.startup = manual
    iface.hwaddress = default
    iface.iscsi_ifacename = default
    iface.net_ifacename = default
    iface.transport_name = tcp
    iface.initiatorname = <empty>
    node.discovery_address = 100.100.100.10
    node.discovery_port = 3260
    node.discovery_type = send_targets
    node.session.initial_cmdsn = 0
    node.session.initial_login_retry_max = 4
    node.session.cmds_max = 128
    node.session.queue_depth = 32
    node.session.auth.authmethod = None
    node.session.auth.username = <empty>
    node.session.auth.password = <empty>
    node.session.auth.username_in = <empty>
    node.session.auth.password_in = <empty>
    node.session.timeo.replacement_timeout = 120
    node.session.err_timeo.abort_timeout = 15
    node.session.err_timeo.lu_reset_timeout = 20
    node.session.err_timeo.host_reset_timeout = 60
    node.session.iscsi.FastAbort = Yes
    node.session.iscsi.InitialR2T = No
    node.session.iscsi.ImmediateData = Yes
    node.session.iscsi.FirstBurstLength = 262144
    node.session.iscsi.MaxBurstLength = 16776192
    node.session.iscsi.DefaultTime2Retain = 0
    node.session.iscsi.DefaultTime2Wait = 2
    node.session.iscsi.MaxConnections = 1
    node.session.iscsi.MaxOutstandingR2T = 1
    node.session.iscsi.ERL = 0
    node.conn[0].address = 100.100.100.10
    node.conn[0].port = 3260
    node.conn[0].startup = manual
    node.conn[0].tcp.window_size = 524288
    node.conn[0].tcp.type_of_service = 0
    node.conn[0].timeo.logout_timeout = 15
    node.conn[0].timeo.login_timeout = 15
    node.conn[0].timeo.auth_timeout = 45
    node.conn[0].timeo.noop_out_interval = 5
    node.conn[0].timeo.noop_out_timeout = 5
    node.conn[0].iscsi.MaxRecvDataSegmentLength = 131072
    node.conn[0].iscsi.HeaderDigest = None
    node.conn[0].iscsi.DataDigest = None
    node.conn[0].iscsi.IFMarker = No
    node.conn[0].iscsi.OFMarker = No
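For reference, several of the values in this dump can be changed per node record with iscsiadm and take effect at the next login. A sketch of the kind of adjustments sometimes tried for read throughput (target name and portal are taken from the dump above; the new values are examples, not guaranteed improvements):

```shell
# Deepen the per-LUN queue and the command window.
iscsiadm -m node -T iqn.2008-11:dss.target2 -p 100.100.100.10:3260 \
  -o update -n node.session.queue_depth -v 64
iscsiadm -m node -T iqn.2008-11:dss.target2 -p 100.100.100.10:3260 \
  -o update -n node.session.cmds_max -v 256

# Allow larger inbound data PDUs; this is the read-side limit, since read
# data arrives in Data-In PDUs capped by MaxRecvDataSegmentLength.
iscsiadm -m node -T iqn.2008-11:dss.target2 -p 100.100.100.10:3260 \
  -o update -n node.conn[0].iscsi.MaxRecvDataSegmentLength -v 262144

# Log out and back in so the new values are negotiated with the target.
iscsiadm -m node -T iqn.2008-11:dss.target2 -p 100.100.100.10:3260 -u
iscsiadm -m node -T iqn.2008-11:dss.target2 -p 100.100.100.10:3260 -l
```

The target also has a say in negotiation, so the effective values after login (check with `iscsiadm -m session -P 2`) may be lower than what was requested.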

  3. #3
Join Date: Nov 2008
Location: Hamburg, Germany
Posts: 102

    Default

    Hi,

this is exactly why I am considering choosing Open-E over OpenFiler. I had the same issue with OF, though my reads were significantly slower than my writes.

My RAID boxes certainly exceed Gbit performance in both reads and writes; both are in the 170 MB/sec range. It may be that on iSCSI systems reads are always slower than writes and I just can't see it because of the bandwidth cap at the switch. I have noticed this with every distro I tried, and I think it is caused by ietd itself.

What is the hardware of your server? When I started out with iSCSI I first took a very aged Dell PE 1650, and that clearly had its limits. Then I switched to a slightly newer PE 1750 with two 2.4 GHz Xeon CPUs; write performance immediately went to full Gbit speed, but reads were still at 60 MB/sec on OpenFiler. Then I installed Open-E and all of a sudden read and write performance were equal.

    Cheers,
    budy
    There's no OS like OS X!

  4. #4

    Default

    Hello

    My hardware:

    Areca ARC-1261ML / 16 channel SATA
    16 x 1TB Western Digital green power
    Quad port Intel PCI Express server adapter
    Running bonding, setup as round-robin (I'm having too many problems with LACP)

    Open-E DSS
    Total bandwidth writing:
    1 client: 114MB/sec
    2 clients: 230MB/sec
    3 clients: 339MB/sec
    4 clients: 401MB/sec

    Total bandwidth read:
    1 client: 103MB/sec
    2 clients: 215MB/sec
    3 clients: 284MB/sec
    4 clients: 352MB/sec

    My client PCs are running Intel server cards as well. Each client is running openSUSE 11.0.

I've got a feeling it's the switch. Performance is better than I expected, a lot better, but my setup shows what is possible in pure Ethernet terms.

I have tried a different switch and it gave me better read speeds, but it introduced a load of problems when using 4 clients at the same time.

    I will continue to tune the setup.
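For anyone wanting to reproduce the setup described above, a rough sketch of round-robin bonding on a quad-port NIC (interface names eth0-eth3 and the address are placeholders; a distro-specific config file is the usual way to make this persistent):

```shell
# Load the bonding driver in round-robin (balance-rr) mode with link
# monitoring every 100 ms; this creates bond0.
modprobe bonding mode=balance-rr miimon=100

# Enslave the four ports of the quad-port adapter to bond0.
ifenslave bond0 eth0 eth1 eth2 eth3

# Assign the storage-network address (placeholder) and bring the bond up.
ip addr add 100.100.100.10/24 dev bond0
ip link set bond0 up
```

One trade-off worth noting: balance-rr can give a single stream more than one link's bandwidth but may reorder TCP segments, while 802.3ad (LACP) avoids reordering but caps any single iSCSI session at one link's speed, which may relate to the LACP problems mentioned above.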

  5. #5

    Default

Oh yeah, I forgot to mention: like you, I also noticed a large difference in iSCSI performance when using OpenFiler, despite using the same hardware, and despite both being based on kernel 2.6.25.

    Total write bandwidth for 4 clients: 350MB/sec
    Total read bandwidth for 4 clients: 112MB/sec

    That is using the same round-robin bonding as I do in Open-E.

    No idea why. I like Open-E too much anyway now!

  6. #6
Join Date: Nov 2008
Location: Hamburg, Germany
Posts: 102

    Default

Yeah, I like it too. I am only waiting for the CD install version to be released - none of my servers is able to boot from a USB stick.

    Cheers,
    budy
    There's no OS like OS X!

  7. #7

    Default

    Hello Budy

    I wasn't aware of this. Are they planning to release a CD version that is installable?

I would prefer to install this to a SATA DOM instead of a USB DOM.

  8. #8
Join Date: Nov 2008
Location: Hamburg, Germany
Posts: 102

    Default

    Hi,

    yes, at least this is what I have been told.

    Cheers,
    budy
    There's no OS like OS X!
