
Thread: Neat comparison of the different SCSI target implementations

  1. #1

    Neat comparison of the different SCSI target implementations

    I found a neat and concise comparison of the various SCSI target implementations (covering iSCSI, Fibre Channel, SRP over InfiniBand, and iSER). They all support at least iSCSI, and some support other protocols as well.

    http://scst.sourceforge.net/comparison.html
    Granted, it's hosted on the SCST page, so it may be a little biased in that regard, but it seems pretty fair overall.

    From the source code CD I got with our DSS DOMs, it seems that Open-E mainly uses IET (and perhaps linux-iscsi, which is now LIO) for iSCSI targets and SCST for Fibre Channel targets. Some of the performance increase seen with Open-E's Fibre Channel targets versus its iSCSI targets could be because they use different target software (SCST vs. IET).

    Also, it seems that only LIO has an actual patch for persistent reservations (i.e. Windows 2008 clustering support). Is this what Open-E is planning to use to fill that feature gap, or is Open-E contributing its own code to these projects? I guess you don't really have to tell everyone the details of your plans in this area, but it would be nice to know.

    Here are some interesting numbers comparing the different target implementations (SCST, IET, and STGT, with fileio vs. blockio, etc.):
    http://lists.wpkg.org/pipermail/stgt...ch/002856.html

    This guy says he can get 1290 MB/s (over 1 gigabyte/sec) with DDR InfiniBand using SCST with SRP (from cache, or at least with tmpfs). Since Open-E already uses SCST for FC, maybe InfiniBand support isn't far behind?
    http://lists.wpkg.org/pipermail/stgt...il/002865.html

    (BTW, ennealDC, this is an example of someone getting over 400 MB/s performance on a Linux system.)

  2. #2


    Thanks for sharing the information.

  3. #3

    InfiniBand status

    Hi!

    Is there any update on InfiniBand support in DSS V6? I mean something faster than IPoIB. Is it really as slow as 100 MB/s even on a 40Gb link? I plan to build storage for a ~100-node cluster based on InfiniBand, and I'm looking for something faster than 1GbE.

  4. #4


    We are looking into this, most likely in the later part of Q3. You should be getting better speeds than what you are reporting; can you send the logs to our support team so we can take a look at them?
    All the best,

    Todd Maxwell


    Follow the red "E"
    Facebook | Twitter | YouTube

  5. #5


    Quote Originally Posted by aflinta
    Hi!

    Is there any update on InfiniBand support in DSS V6? I mean something faster than IPoIB. Is it really as slow as 100 MB/s even on a 40Gb link? I plan to build storage for a ~100-node cluster based on InfiniBand, and I'm looking for something faster than 1GbE.
    Using Open-E, I have been getting up to 975 MB/sec over 10Gb CX4 with jumbo frames, and 110 MB/sec over 1Gb.

  6. #6


    If Open-E ever supports IPoIB Connected Mode (IPoIB-CM), then IPoIB should be pretty fast. Right now, I think they only support Datagram Mode (DM). The difference is that the maximum MTU for CM is about 64 kB versus only about 2 kB for DM, so Connected Mode has much less per-packet overhead.
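
    For anyone curious what this looks like outside the appliance: on a stock Linux host, the upstream kernel IPoIB driver exposes the mode through sysfs. Here's a minimal sketch in Python, assuming an interface named ib0 (an assumed name, nothing Open-E specific) and root privileges:

    Code:
        # Sketch: inspect and switch IPoIB mode via the standard kernel
        # sysfs knobs. "ib0" is an assumed interface name.
        from pathlib import Path

        IFACE = "ib0"
        mode = Path(f"/sys/class/net/{IFACE}/mode")
        mtu = Path(f"/sys/class/net/{IFACE}/mtu")

        print("mode:", mode.read_text().strip())  # "datagram" or "connected"
        print("mtu :", mtu.read_text().strip())

        # Needs root: switch to connected mode, then raise the MTU to the
        # 65520-byte IPoIB-CM maximum.
        mode.write_text("connected\n")
        mtu.write_text("65520\n")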

    We had a customer asking today whether we supported 40Gbps InfiniBand. We do (though we haven't tested it), but since we don't have IPoIB-CM, it is still rather slow (10GbE is faster than 40Gb IPoIB-DM).

    It could also be that Open-E does support IPoIB-CM, but I haven't seen any evidence yet.

  7. #7


    Quote Originally Posted by Robotbeat
    If Open-E ever supports IPoIB Connected Mode (IPoIB-CM), then IPoIB should be pretty fast. Right now, I think they only support Datagram Mode (DM). The difference is that the maximum MTU for CM is about 64 kB versus only about 2 kB for DM, so Connected Mode has much less per-packet overhead.

    We had a customer asking today whether we supported 40Gbps InfiniBand. We do (though we haven't tested it), but since we don't have IPoIB-CM, it is still rather slow (10GbE is faster than 40Gb IPoIB-DM).

    It could also be that Open-E does support IPoIB-CM, but I haven't seen any evidence yet.
    Does that mean that, at the moment, if you want more than 100 MB/s using Open-E, InfiniBand is not really an option? I have not bought my hardware yet, so I have options, but InfiniBand is significantly cheaper than 10Gb Ethernet.

  8. #8


    mscooper and Robotbeat and everyone else!

    Finally, we have InfiniBand support for CM (Connected Mode).

    We would like to inform you about a new "small update" available for all DSS V6 installations. Please send in a support ticket asking for this small update (DSS V6 only!).

    It adds a new "Connected Mode" option for IPoIB; until now we supported only Datagram mode.

    After applying the small update, you can change the desired mode under Hardware Console Tools (ALT+CTRL+W) -> InfiniBand Tuning.

    To get the best performance from IPoIB, please switch to "Connected Mode" and change Jumbo Frames to 65520.

    Jumbo Frames can be changed under Hardware Console Tools (ALT+CTRL+W) -> Tuning Options -> Jumbo Frames.
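
    Once the update is applied, one rough way to sanity-check the change from a Linux initiator is to time streamed reads from the exported volume. A sketch in Python; /dev/sdX is a placeholder for whatever block device the DSS LUN appears as, and since this goes through the page cache, treat the number as a ballpark only:

    Code:
        import time

        DEV = "/dev/sdX"     # placeholder: the LUN as seen by the initiator
        BLOCK = 1 << 20      # 1 MiB per read
        TOTAL = 1 << 30      # stop after 1 GiB

        start = time.monotonic()
        done = 0
        with open(DEV, "rb", buffering=0) as f:
            while done < TOTAL:
                chunk = f.read(BLOCK)
                if not chunk:
                    break
                done += len(chunk)
        secs = time.monotonic() - start
        print(f"read {done / 1e6:.0f} MB in {secs:.1f} s "
              f"-> {done / secs / 1e6:.0f} MB/s")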


    Datagram vs Connected modes

    In datagram mode, the IB UD (Unreliable Datagram) transport is used, so the interface MTU is equal to the IB L2 MTU minus the IPoIB encapsulation header (4 bytes). For example, in a typical IB fabric with a 2K MTU, the IPoIB MTU will be 2048 - 4 = 2044 bytes.

    In connected mode, the IB RC (Reliable Connected) transport is used. Connected mode takes advantage of the connected nature of the IB transport and allows an MTU up to the maximal IP packet size of 64K, which reduces the number of IP packets needed to handle large UDP datagrams, TCP segments, etc., and increases performance for large messages.

    In connected mode, the interface's UD QP is still used for multicast and for communication with peers that don't support connected mode. In this case, RX emulation of ICMP PMTU packets is used to make the networking stack use the smaller UD MTU for those neighbors.
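
    To put numbers on "reduces the number of IP packets": a back-of-the-envelope sketch using the MTUs quoted above (2044 bytes for datagram mode on a 2K fabric, 65520 bytes for connected mode):

    Code:
        import math

        IB_L2_MTU = 2048
        IPOIB_HDR = 4                     # IPoIB encapsulation header
        DM_MTU = IB_L2_MTU - IPOIB_HDR    # 2044 bytes
        CM_MTU = 65520

        payload = 1 << 20                 # a 1 MiB transfer
        print("datagram :", math.ceil(payload / DM_MTU), "packets")   # 514
        print("connected:", math.ceil(payload / CM_MTU), "packets")   # 17
        # Connected mode needs roughly 30x fewer packets per MiB.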
    All the best,

    Todd Maxwell


    Follow the red "E"
    Facebook | Twitter | YouTube

  9. #9

    IPoIB Connected Mode

    IPoIB-CM!! That's fantastic news. I've just submitted a ticket and cannot wait to test it out. My Windows servers thank you.

    Any chance of SRP support in the future (to speed up the VMware side of things)?

    Thanks again.

  10. #10


    Just hit you with the update - check your email. If we see a lot of good outcomes from this, we will look into SRP support and start on it. Give us some time and let's see how the results are; so far we have had good news on it.
    All the best,

    Todd Maxwell


    Follow the red "E"
    Facebook | Twitter | YouTube
