
Thread: Some questions concerning virtual servers & iSCSI

  1. #1

    Some questions concerning virtual servers & iSCSI

    Hello

    For the next few weeks I'm testing an Open-E server to check whether it does what I think it should do. The first impressions are really great. The purpose of the machine will be to store the filesystems of "a lot" of virtual servers (Xen-based, 50-150 in total). Each virtual server will probably get a 10-25 GB iSCSI target to use. The server I use has the following specs:
    • 2 x quad-core Xeon 5335 CPUs
    • 16 GB RAM
    • Areca 1231M RAID controller
    • 12 x 500 GB disks, 11 in RAID 6 + 1 hot spare (4.0 TB usable)
    • 2 x Intel NICs in an 802.3ad bond


    I use Open-E Version: 1.30.DB00000000.2813.
    I can get 240 MB/s+ of traffic in and out of the machine with some basic dd testing, so speed-wise I think it's fast enough.
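
    (For reference, a basic sequential dd throughput test of the kind mentioned above looks roughly like this; the mount point, file name and sizes here are only illustrative, not the exact commands used:)

        # sequential write of ~8 GB to a filesystem mounted from the iSCSI target
        dd if=/dev/zero of=/mnt/iscsi-test/testfile bs=1M count=8192
        # drop the client's page cache so the read below actually hits the SAN
        echo 3 > /proc/sys/vm/drop_caches
        # sequential read of the same file back
        dd if=/mnt/iscsi-test/testfile of=/dev/null bs=1M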

    The questions I have:
    1. Does anyone have experience with having that many Xen servers on one SAN server?
    2. Is it useful to have that much memory? I noticed almost nothing is used when using only iSCSI.
    3. Is there a guide somewhere on tuning Open-E/iscsitarget?

  2. #2


    Sorry to say we don't have any Xen servers to test with. By the way, you need to update to DSS 2819, just released. In the Console tools go to CTRL-ALT-W > Tuning Options > iSCSI daemon to tweak the parameters - information on each is below.


    a) MaxRecvDataSegmentLength - Sets the maximum data segment length that can be received. This value should be set to multiples of PAGE_SIZE. Currently the maximum supported value is 64 * PAGE_SIZE, e.g. 262144 if PAGE_SIZE is 4kB.
    Configuring too large values may lead to problems allocating sufficient memory, which in turn may lead to SCSI commands timing out at the initiator host. The default value is 8192.

    b) MaxBurstLength - Sets the maximum amount of either unsolicited or solicited data the initiator may send in a single burst. Any amount of data exceeding this value must be explicitly solicited by the target. This value should be set to multiples of PAGE_SIZE. Configuring too large values may lead to problems allocating sufficient memory, which in turn may lead to SCSI commands timing out at the initiator host. The default value is 262144.

    c) MaxXmitDataSegmentLength - Sets the maximum data segment length that can be sent. The value actually used is the minimum of MaxXmitDataSegmentLength and the MaxRecvDataSegmentLength announced by the initiator. It should be set to multiples of PAGE_SIZE. Currently the maximum supported value is 64 * PAGE_SIZE, e.g. 262144 if PAGE_SIZE is 4kB. Configuring too large values may lead to problems allocating sufficient memory, which in turn may lead to SCSI commands timing out at the initiator host. The default value is 8192.

    d) DataDigest <CRC32C|None> - If set to "CRC32C" and the initiator is configured accordingly, the integrity of an iSCSI PDU's data segment will be protected by a CRC32C checksum. The default is "None". Note that data digests are not supported during discovery sessions.

    e) MaxOutstandingR2T <value> - Controls the maximum number of data transfers the target may request at once, each of up to MaxBurstLength bytes. The default is 1.

    f) InitialR2T <Yes|No> - If set to "Yes" (default), the initiator has to wait for the target to solicit SCSI data before sending it. Setting it to "No" allows the initiator to send a burst of FirstBurstLength bytes unsolicited right after and/or (depending on the setting of ImmediateData) together with the command. Thus setting it to "No" may improve performance.

    g) ImmediateData <Yes|No> - This allows the initiator to append unsolicited data to a command. To achieve better performance, this should be set to "Yes".
    The default is "No".

    h) DataPDUInOrder <Yes|No> - Tells the initiator whether data PDUs have to be sent in order. The default is "Yes", which is also recommended.

    i) DataSequenceInOrder <Yes|No> - Tells the initiator whether data sequences have to be sent in order. The default is "Yes", which is also recommended.

    j) HeaderDigest <CRC32C|None> - If set to "CRC32C" and the initiator is configured accordingly, the integrity of an iSCSI PDU's header segments will be protected by a CRC32C checksum. The default is "None".
    Note that header digests are not supported during discovery sessions.

    k) Wthreads - The iSCSI target employs several threads to perform the actual block I/O to the device. Depending on your hardware and your (expected) workload, the number of these threads may be carefully adjusted. The default value of 8 should be sufficient for most purposes.
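
    (For readers running a plain iSCSI Enterprise Target rather than the DSS console, the same knobs are normally set per target in /etc/ietd.conf; the sketch below uses a made-up target name, LUN path and illustrative values, it is not Open-E's actual configuration file:)

        Target iqn.2008-04.com.example:san.xen-vm01
            # backing store for this target
            Lun 0 Path=/dev/vg0/xen-vm01,Type=fileio
            # larger segment/burst sizes than the 8192-byte defaults
            MaxRecvDataSegmentLength 65536
            MaxXmitDataSegmentLength 65536
            MaxBurstLength 262144
            # let the initiator send unsolicited data
            InitialR2T No
            ImmediateData Yes
            # number of I/O worker threads for this target
            Wthreads 8
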
    All the best,

    Todd Maxwell


    Follow the red "E"
    Facebook | Twitter | YouTube

  3. #3


    Thanks for the detailed info and the answers

    I just updated to 2813 - what is in this update? You tell me I need to update; is there a problem with 2813? If so, how could I have known that if I hadn't posted a question to this forum? Is there an announcement mailing list with all new versions/updates and need-to-know information about Open-E and its products?

    Do you know which options are important to set when I'm going to connect a lot of hosts and initiators? I have plenty of memory and CPU cycles.

    And you forgot my second question: Is it useful to have that much memory?

    I noticed almost nothing is used when using only iSCSI. This is my SNMP dump of the memory usage:

    UCD-SNMP-MIB::memTotalReal.0 = INTEGER: 16621052
    UCD-SNMP-MIB::memAvailReal.0 = INTEGER: 15493392
    Only about 1 GB of memory is in use (16621052 kB - 15493392 kB ≈ 1.1 GB)...
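
    (For reference, those two values can be read with the net-snmp command-line tools; the hostname and community string here are only placeholders:)

        snmpget -v 2c -c public san01 UCD-SNMP-MIB::memTotalReal.0 UCD-SNMP-MIB::memAvailReal.0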

  4. #4


    In the next several months we will be setting up a mailing list for updates; we just need more time to put this in place. In 2819 we modified the block I/O.

    As stated before, we do not have Xen to test with this number of servers connecting, and if you start using snapshots and replication with many hosts accessing targets you will start to see the memory usage. So we won't be able to provide you anything definitive on this; it is very hard to say without testing.

    You will want to test settings on both the iSCSI initiators and our iSCSI daemon (they must match) for MaxRecvDataSegmentLength & MaxXmitDataSegmentLength to start with; see the initiator-side sketch below.
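
    (Assuming the Xen hosts use open-iscsi as the initiator, the matching knobs usually live in /etc/iscsi/iscsid.conf; the values below are only an illustration and have to mirror whatever is configured on the target:)

        # must not exceed what the target will accept
        node.conn[0].iscsi.MaxRecvDataSegmentLength = 65536
        node.session.iscsi.MaxBurstLength = 262144
        node.session.iscsi.FirstBurstLength = 262144
        # pair these with InitialR2T=No / ImmediateData=Yes on the target
        node.session.iscsi.InitialR2T = No
        node.session.iscsi.ImmediateData = Yes
        # digests off unless on-the-wire integrity checking is required
        node.conn[0].iscsi.HeaderDigest = None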

    I would be concerned with I/O on the NICs (802.3ad bond or balance-rr) and with the RAID settings (set for performance).
    All the best,

    Todd Maxwell


    Follow the red "E"
    Facebook | Twitter | YouTube

  5. #5


    Is there a way to set those iSCSI target tuning values for all targets at once, or something as a default? Setting one value takes a while, around a minute, so setting 2 values on 40 targets will take me 2 x 40 = 80 minutes. And I think I'm going to tune a lot, so I hope you see how that can be a problem...

    What do you mean by your concern? That I should use bonding, or shouldn't use bonding?

  6. #6


    I have found http://www.cs.unh.edu/~rdr/pdcn2005.ps to be very useful in understanding iSCSI.

    I've got a 3ware 9650SE 15-disk RAID 6 with 4 GB of RAM. 16 GB is a bit much for a storage device IMO, but like Todd said, if you are doing fancy stuff then maybe you will need it. IOmeter gives me about 112 MB/s both read and write with a single Intel PRO/1000 MT on both ends.

    Also, I wonder if you are getting that kind of performance because you have so much memory, a.k.a. the cache effect. You should try writing files larger than your memory, or just take some RAM out for testing, so you can see what your storage system can do without the aid of the system's RAM; see the sketch below.
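
    (A rough sketch of this kind of test, with made-up paths and sizes: either write noticeably more data than the SAN box has RAM, or bypass the local page cache with direct I/O. Note that oflag=direct only skips the cache on the client; writing more than the target's RAM is what defeats caching on the SAN side:)

        # write ~32 GB, twice the 16 GB in the target, so the result is not mostly cache
        dd if=/dev/zero of=/mnt/iscsi-test/bigfile bs=1M count=32768
        # or skip the client-side page cache entirely (GNU dd)
        dd if=/dev/zero of=/mnt/iscsi-test/bigfile bs=1M count=4096 oflag=direct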

  7. #7


    50-150 is a big variance to plan for, and if you were to dedicate 20 MB/s (about what an IDE disk delivers) to each virtual machine, you would only fit about 6 per gigabit interface (a GbE link tops out around 125 MB/s, and 125 / 20 ≈ 6).

    You may want to look at Chelsio 10 GbE.

    Also, the per-target iSCSI options are brand new. I was never planning on having a lot of targets, but I can feel your pain at having to enable ImmediateData on every target!

  8. #8


    Thanks for that document. I'm going to read it.

    In my IOmeter testing I've seen a big difference depending on the sort of traffic you send to the SAN. If you send large sequential blocks the speed is high - that's what I did for the figures in my first post (no memory usage change on Open-E). When sending a lot of small random blocks the performance drops to less than 10 MB/s.

    And that worries me.
    I already have 30 Xen servers running on 7 host systems with local disks. I've measured the I/O traffic of those servers and found out that for 95% of the day they are doing virtually no reads, and continuous writes of 10 kB/s. (I thought it would be the other way around, but it is consistent on all Xen domUs.)

    When I scale those figures to 100 Xen domUs, I don't know what to expect when I transfer those Xen servers to the SAN...

    Does anyone know how to measure the sort of disk I/O Linux does? For example: 10 x 1024-byte random reads, 28 x 1 MB sequential writes, 15 x 64 kB random writes, etc. (Two tools that can help are sketched below.)
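
    (A sketch with an example device name: iostat from the sysstat package reports per-device request rates and average request sizes, and blktrace/blkparse records every individual block-layer request so the size and randomness mix can be analysed:)

        # extended per-device stats every 5 seconds: r/s, w/s and avgrq-sz (in 512-byte sectors)
        iostat -x sda 5
        # trace all block requests on sda for 60 seconds, then summarise them
        blktrace -d /dev/sda -w 60 -o xen-trace
        blkparse -i xen-trace | less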

  9. #9


    Sorry about the time it takes to enter the changes for the iSCSI daemon settings, but in the past we had many requests to have this configurable per target. Maybe in a future release we can add an "apply to all" function.
    All the best,

    Todd Maxwell


    Follow the red "E"
    Facebook | Twitter | YouTube

  10. #10


    Can you test with DSS build 2819? Then run your test again and send the logs to support with the same subject as before.
    All the best,

    Todd Maxwell


    Follow the red "E"
    Facebook | Twitter | YouTube
