
Thread: Some questions concerning virtualservers & iscsi

  1. #1


    Thanks for the detailed info and the answers.

    I just updated to 2813; what is in the new update? You tell me I need to update, but is there a problem with 2813? If so, how would I have known that without posting a question to this forum? Is there an announcement mailing list for new versions, updates, and need-to-know information about Open-E and its products?

    Do you know which options are important to set when I am going to connect a lot of hosts and initiators? I have plenty of memory and CPU cycles.

    And you forgot my second question: is it useful to have that much memory?

    I noticed almost nothing is used when running only iSCSI. This is my SNMP dump of the memory usage:

    UCD-SNMP-MIB::memTotalReal.0 = INTEGER: 16621052
    UCD-SNMP-MIB::memAvailReal.0 = INTEGER: 15493392
    Only about 1 GB of memory is in use...
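The figure above can be checked directly from the two counters (UCD-SNMP values are in kB). A minimal sketch, using the numbers from the SNMP dump:

```python
# Derive used memory from the UCD-SNMP-MIB counters quoted above.
# Values are reported in kB.
mem_total_kb = 16621052   # UCD-SNMP-MIB::memTotalReal.0
mem_avail_kb = 15493392   # UCD-SNMP-MIB::memAvailReal.0

used_kb = mem_total_kb - mem_avail_kb
used_gb = used_kb / (1024 ** 2)
print(f"used: {used_kb} kB (~{used_gb:.2f} GB)")
```

Note that memAvailReal counts only free pages, so part of the roughly 1 GB "used" here may actually be buffers and page cache (the memBuffer/memCached objects in the same MIB).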

  2. #2


    In the next several months we will be setting up a mailing list for updates; we just need more time to put this in place. In build 2819 we modified the block I/O.

    As stated before, we do not have a Xen setup to test with this number of servers connecting, and once you start using snapshots and replication with many initiators accessing the targets you will start to see the memory usage. So we won't be able to provide you anything concrete on this; it is very hard to say without testing.

    You will want to test settings on both the iSCSI initiators and our iSCSI daemon (the values must match), starting with MaxRecvDataSegmentLength and MaxXmitDataSegmentLength.
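If your initiators run Linux open-iscsi, these parameters normally live in iscsid.conf (or per node record). An illustrative fragment; 262144 is just an example value and has to match what the target daemon is set to:

```ini
# /etc/iscsi/iscsid.conf (open-iscsi initiator) -- illustrative values;
# they must match the corresponding settings on the iSCSI target daemon.
node.conn[0].iscsi.MaxRecvDataSegmentLength = 262144
node.conn[0].iscsi.MaxXmitDataSegmentLength = 262144
```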

    I would also pay attention to I/O on the NICs (bonding with 802.3ad or balance-rr) and to the RAID settings (set for performance).
    All the best,

    Todd Maxwell


    Follow the red "E"
    Facebook | Twitter | YouTube

  3. #3


    Is there a way to set those iSCSI target tuning values for all targets at once, or to define them as a default? Setting one value takes a while, around a minute, so setting 2 values on 40 targets will take me 80 minutes. And I think I am going to tune a lot, so I hope you see how that can be a problem...

    What do you mean by your concern: that I should use bonding, or that I shouldn't?
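While the target-side GUI has to be driven per target, the initiator side can at least be batched. A sketch assuming Linux open-iscsi initiators; the target IQNs below are made up:

```python
# Sketch: batch-apply per-connection iSCSI parameters on a Linux
# open-iscsi initiator with iscsiadm.  Target IQNs are hypothetical.
import subprocess

def build_update_cmds(targets, params):
    """Build one `iscsiadm -o update` command per (target, parameter)."""
    cmds = []
    for target in targets:
        for name, value in params.items():
            cmds.append([
                "iscsiadm", "-m", "node", "-T", target,
                "-o", "update", "-n", name, "-v", str(value),
            ])
    return cmds

targets = ["iqn.2008-01.com.example:vm01", "iqn.2008-01.com.example:vm02"]
params = {
    "node.conn[0].iscsi.MaxRecvDataSegmentLength": 262144,
    "node.conn[0].iscsi.MaxXmitDataSegmentLength": 262144,
}

for cmd in build_update_cmds(targets, params):
    print(" ".join(cmd))
    # subprocess.run(cmd, check=True)  # uncomment on a real initiator
```

This only changes the initiator side; the matching values on the DSS box still have to be entered per target in the GUI.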

  4. #4


    I have found http://www.cs.unh.edu/~rdr/pdcn2005.ps very useful in understanding the IETF iSCSI protocol.

    I've got a 3ware 9650SE 15-disk RAID 6 with 4 GB of RAM. 16 GB is a bit much for a storage device in my opinion, but like Todd said, if you are doing fancy stuff then maybe you will need it. IOmeter gives me about 112 MB/s both read and write with a single Intel PRO/1000 MT on both ends.

    Also, I wonder if you are getting that kind of performance because you have so much memory, i.e. the cache effect. You should try writing files larger than your memory, or just pull some RAM out for testing, so you can see what your storage system can do without the aid of the system's RAM.

  5. #5


    50-150 is a big variance to plan for, and if you were to dedicate 20 MB/s (about what an IDE disk delivers) to each virtual machine, you would only fit about 6 per interface.
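The arithmetic behind that "about 6" is worth making explicit; a quick check, assuming a raw 1 Gbit/s link:

```python
# Back-of-the-envelope check: VMs per GbE link at IDE-class bandwidth.
GBE_MB_S = 125        # 1 Gbit/s ~= 125 MB/s raw; ~110-115 MB/s in practice
PER_VM_MB_S = 20      # dedicated bandwidth per virtual machine

vms_per_link = GBE_MB_S // PER_VM_MB_S
print(vms_per_link)   # -> 6
```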

    You may want to look at Chelsio 10 GbE.

    Also, the per-target iSCSI options are brand new. I never planned on having a lot of targets, but I can feel your pain at having to enable ImmediateData on every target!

  6. #6


    Thanks for that document. I'm going to read it.

    In my IOmeter testing I've seen a big difference depending on the sort of traffic you send to the SAN. If you send large sequential blocks the speed is high; that's what I did for the figures in my first post (no memory-usage change on the Open-E box). When sending a lot of small random blocks, the performance drops to less than 10 MB/s.

    And that worries me.
    I already have 30 Xen servers running on 7 host systems with local disks. I've measured the I/O traffic of those servers and found that for 95% of the day they are doing virtually no reads, and continuous writes of about 10 KB/s. (I thought it would be the other way around, but it is consistent across all Xen domUs.)

    When I scale those figures to 100 Xen domUs, I don't know what to expect when I transfer those Xen servers to the SAN...

    Does anyone know how to measure the sort of disk I/O Linux does? For example: 10 1024-byte random reads, 28 1 MB sequential writes, 15 64 KB random writes, etc.
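blktrace/blkparse can capture individual requests; as a cruder first pass, two /proc/diskstats snapshots give average request sizes and the read/write mix. A sketch with hypothetical counters (reads completed, sectors read, writes completed, sectors written, already extracted from the diskstats line for one disk):

```python
# Sketch: average request size and read/write mix from two snapshots of
# per-disk counters: (reads, sectors_read, writes, sectors_written).
# For a true size histogram ("10 x 1 kB random reads") use blktrace.

def io_mix(before, after, sector_bytes=512):
    reads  = after[0] - before[0]
    rsect  = after[1] - before[1]
    writes = after[2] - before[2]
    wsect  = after[3] - before[3]
    return {
        "avg_read_kb":  rsect * sector_bytes / 1024 / max(reads, 1),
        "avg_write_kb": wsect * sector_bytes / 1024 / max(writes, 1),
        "write_fraction": writes / max(reads + writes, 1),
    }

# Hypothetical snapshots taken a few seconds apart.
before = (1000, 8000, 5000, 200000)
after  = (1010, 8080, 5200, 208000)
print(io_mix(before, after))
```

With these made-up numbers the workload comes out at roughly 95% writes, which matches the pattern described above.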

  7. #7


    Sorry for the time it takes to enter the changes to the iSCSI daemon settings, but in the past we had many requests to configure this per target. Maybe in a future release we can add an "apply to all" function.
    All the best,

    Todd Maxwell



  8. #8


    Can you test with DSS build 2819? Then run your test again and send the logs to support with the same subject line as before.
    All the best,

    Todd Maxwell


