
Thread: DSS support for Adaptec MaxIQ™ SSD Cache solution

  1. #11
    Join Date
    Nov 2009
    Location
    1792 Waldeck Street Fort Worth, TX 76112
    Posts
    1

    Default

Bookmarked; I'll be back later. I need more info. :-)

  2. #12

    Default

    Quote Originally Posted by tingshen
    Robotbeat, it won't allow you to use your own SSD.

    The X25-E they use runs customized firmware. Unless you can get hold of that firmware and flash it to a compatible drive (let's say an X25-M?), you won't be able to leverage the new cache family.

    Given the SSDs I'm holding, I can try this trick for you, but it may well void the warranty or even violate the EULA.

    I got a call a few weeks back from Adaptec about trying this new toy out. It is quite an independent cache: plug it in and it's auto-recognized as a cache; unplug it and there's no more cache, that's all. No impact on the underlying data of the original disk array. My question is, if I am already running quite a number of X25-M SSDs rocketing along at up to 2GB/s read and 1GB/s write, what is the impact of a single-drive cache with 250MB/s read and 170MB/s write on my existing setup? Perhaps that depends on the difference between a 512MB DDR2 cache and an additional 30GB secondary cache... sounds like what NetApp is pushing their customers to "upgrade" to, lol!
    Yeah ting, what you said is absolutely true. I tried several times to use our own SSDs, but got nowhere. I finally learned about this through a friend.

  3. #13

    Default

    Quote Originally Posted by Robotbeat
    Okay, I have a MaxIQ kit (with an Adaptec card) in for testing with the Open-E. It's really fast. 17,000 random read IOPS (after the cache is warmed up) over just a one gigabit ISCSI connection (with just a simple volume on a single SATA hard drive, besides the cache SSD) with a 15 GB test file.
    Yeah, it's working fine. Thanks for your suggestion.
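    [Editor's note: Robotbeat's 17,000-IOPS figure can be sanity-checked with a quick random-read micro-benchmark. The sketch below is illustrative only: it uses POSIX `os.pread` against a local scratch file rather than an iSCSI target, so a warm OS page cache will inflate the number, much as a warmed-up MaxIQ cache does.]

    ```python
    import os, random, tempfile, time

    def measure_random_read_iops(path, block_size=4096, reads=2000):
        """Issue random single-block reads and report reads per second."""
        size = os.path.getsize(path)
        blocks = size // block_size
        fd = os.open(path, os.O_RDONLY)
        try:
            start = time.perf_counter()
            for _ in range(reads):
                offset = random.randrange(blocks) * block_size
                os.pread(fd, block_size, offset)  # positioned read, no seek
            elapsed = time.perf_counter() - start
        finally:
            os.close(fd)
        return reads / elapsed

    # Create a small scratch file and measure it (8 MiB keeps the demo quick;
    # a real test would use a file far larger than RAM, like Robotbeat's 15 GB).
    with tempfile.NamedTemporaryFile(delete=False) as f:
        f.write(os.urandom(8 * 1024 * 1024))
        path = f.name
    print(f"{measure_random_read_iops(path):,.0f} random 4 KiB reads/s")
    os.remove(path)
    ```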

  4. #14

    Default

    Hi mates, thank you very much for all your assistance. Your suggestions helped me a lot.

  5. #15

    Default

    Hey guys, I just remembered I had some posts here, lol.

    Anyway, we went ahead, got the kit, and used it on a 5805Z. Unfortunately, under Hyper-V R2 the cache seems a little redundant... and frankly speaking, it doesn't provide any performance benefit for write transactions.

    I think it will be damn good for web servers and other heavy-read workloads, though.

  6. #16

    Default SSD Cache vs. RAM cache

    For the price of a MaxIQ setup (including the SLC SSD), I don't see the benefit of a 32 GB SSD cache over 32 GB of RAM in the DSS machine.

    To me, it seems like the only benefit of an SSD cache is that it persists across reboots. That's nice for a workstation, but it seems useless for a server.

    RAM cache is certainly higher performance than SSD.
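    [Editor's note: the RAM-vs-SSD trade-off above boils down to average access time, which a one-line model captures. The latencies below are rough illustrative assumptions (RAM ~0.1 µs, SLC SSD ~100 µs, 7200 rpm disk ~8000 µs per random read), not measurements of any particular hardware.]

    ```python
    def avg_access_time(hit_ratio, t_cache_us, t_disk_us):
        """Average latency (µs) for a single cache tier in front of disk."""
        return hit_ratio * t_cache_us + (1 - hit_ratio) * t_disk_us

    # Assumed, illustrative latencies in microseconds.
    for name, t_cache in [("RAM", 0.1), ("SSD", 100.0)]:
        t = avg_access_time(0.9, t_cache, 8000.0)
        print(f"{name} cache, 90% hit ratio: {t:.1f} us average")
    ```

    At a 90% hit ratio the disk misses dominate either way (~800 µs vs ~890 µs here), which is why the cache's *size* (and hence hit ratio) can matter more than whether the hits land in RAM or SSD.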

  7. #17
    Join Date
    Oct 2007
    Location
    Toronto, Canada
    Posts
    108

    Default

    Quote Originally Posted by rcohen
    For the price of a MaxIQ setup (including the SLC SSD), I don't see the benefit of a 32 GB SSD cache over 32 GB of RAM in the DSS machine.

    To me, it seems like the only benefit of an SSD cache is that it persists across reboots. That's nice for a workstation, but it seems useless for a server.

    RAM cache is certainly higher performance than SSD.
    Remember that DSS doesn't provide any RAM cache for BLOCK IO LUNs/shares, only FILE IO, so in that case the SSD cache provided by MaxIQ would be extremely beneficial.

  8. #18

    Default

    Quote Originally Posted by SeanLeyne
    Remember that DSS doesn't provide any RAM cache for BLOCK IO LUNs/shares, only FILE IO, so in that case the SSD cache provided by MaxIQ would be extremely beneficial.
    Sure, if you disable caching on DSS, then you are totally relying on controller caching. Why would you want to do that?

    All I can imagine is the possibility that the MaxIQ cache algorithm may be better for certain data access patterns. If that is truly happening in real-world applications, it seems like a better solution would be to have more tunable settings on the DSS cache (size of MRU vs. MFU, etc.).

    If the SSD cache provided more bang for the buck than RAM, that would be different, but that doesn't appear to be the case. Apparently, MLC SSDs aren't suitable for caching, due to performance and reliability issues with cache write patterns.

  9. #19
    Join Date
    Oct 2007
    Location
    Toronto, Canada
    Posts
    108

    Default

    Quote Originally Posted by rcohen
    Sure, if you disable caching on DSS, then you are totally relying on controller caching. Why would you want to do that?
    DSS does not do any caching of BLOCK IO LUNs/shares, period. There is nothing to disable.

  10. #20
    Join Date
    Feb 2009
    Posts
    142

    Default

    Only File I/O takes advantage of caching in the extra memory on your motherboard; Block I/O just uses whatever cache you have on your disk controller, but none on your motherboard. However, you can't do auto-failover if you're doing File I/O.
