
Thread: 10GB Performance Disaster

  1. #1

    Default 10GB Performance Disaster

    OPEN-E:

    Intel SR2550, Xeon 5320, 6GB RAM, LSI RAID controller with 512MB BBWC, 8x 72GB 10k SAS drives in RAID 5, Intel 10GbE PCIe Ethernet


    SWITCH:

    D-Link DXS-3220

    VMWARE:

    Intel ST1550, ESX 4.0, dual Xeon 5320, 24GB RAM, NetXen 10GbE Ethernet


    I'm only getting 15 MB/sec from a VM across to the DSS.

    Anyone have a suggestion on where to start? I submitted a ticket a few days ago, and haven't heard anything.
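
    For scale, 10GbE is 10 Gbit/s, which is about 1,250 MB/s raw, so 15 MB/sec is barely over 1% of the link. A quick back-of-the-envelope check (plain Python, nothing setup-specific):

        # How far 15 MB/s is from 10GbE line rate.
        line_rate_MBps = 10e9 / 8 / 1e6   # 10 Gbit/s = 1250 MB/s raw
        observed_MBps  = 15.0             # figure reported above
        print("%.1f%% of the link" % (100 * observed_MBps / line_rate_MBps))  # ~1.2%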

  2. #2
    Join Date
    Feb 2009
    Posts
    142

    Default

    I would probably stick a single-port gigabit card in the server and the DSS and test with a point-to-point cable (no switch). This way you can split the problem in half by eliminating the 10G cards and the switch. If your performance increases, you can assume it's not the DSS box itself, and it's down to the 10G drivers or the switch.
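
    A quick way to run that point-to-point test without hunting for extra tools is a raw TCP push, which takes the disks out of the picture entirely. A minimal sketch, assuming Python is installed on both boxes (the port number is an arbitrary placeholder):

        # Minimal point-to-point TCP throughput test (no disks involved).
        # Run with no arguments on the receiver, then with the
        # receiver's IP as the only argument on the sender.
        import socket, sys, time

        PORT  = 5201          # arbitrary unused port (placeholder)
        CHUNK = 1 << 20       # 1 MiB per send/recv
        TOTAL = 1 << 30       # move 1 GiB in total

        def server():
            srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            srv.bind(("", PORT))
            srv.listen(1)
            conn, addr = srv.accept()
            got, start = 0, time.time()
            while got < TOTAL:
                data = conn.recv(CHUNK)
                if not data:
                    break
                got += len(data)
            print("%.1f MB/s from %s" % (got / (time.time() - start) / 1e6, addr[0]))

        def client(host):
            buf = b"\0" * CHUNK
            conn = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            conn.connect((host, PORT))
            sent, start = 0, time.time()
            while sent < TOTAL:
                conn.sendall(buf)
                sent += CHUNK
            conn.close()
            print("%.1f MB/s to %s" % (sent / (time.time() - start) / 1e6, host))

        if __name__ == "__main__":
            server() if len(sys.argv) == 1 else client(sys.argv[1])

    Near wire speed here means the network path is fine and the problem is in the storage/iSCSI stack; 15 MB/s here means NICs, drivers, or cabling.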

  3. #3
    Join Date
    Apr 2009
    Posts
    62

    Default

    You say 15 MB/sec. How are you testing this? What tool and what method are you using? Was there any load during your tests? Some more details will let some of us better advise you on what to look for.

    I have a very similar setup in terms of base technology, and I have been through the highs and lows of 10GbE technology with VMware and DSS.

  4. #4

    Default

    Quote Originally Posted by 1parkplace
    You say 15 MB/sec. How are you testing this? What tool and what method are you using? Was there any load during your tests? Some more details will let some of us better advise you on what to look for.

    I have a very similar setup in terms of base technology, and I have been through the highs and lows of 10GbE technology with VMware and DSS.
    Testing it with CrystalDiskMark, and right now just trying a sequential read. This has been a VERY long journey: it started with OF, where I never got 10GbE running properly, and now I've moved to a supported platform with even worse performance.

    I've now eliminated the switch, and I'm running direct card to card, with the same performance.

    I've switched out the NetXen on the ESX side: I'm now running a Chelsio card. Same exact problem.

    No load whatsoever. I only have one Server 2003 VM with one vmdk. This is just to get a baseline.
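
    For a second opinion on the sequential number outside CrystalDiskMark, a dumb read loop is enough. A minimal sketch, assuming Python in the guest; the path is a placeholder for a multi-GB file on the volume under test, and repeat runs will be inflated by the OS cache:

        # Rough sequential-read throughput check, independent of CrystalDiskMark.
        # PATH is a placeholder: point it at a multi-GB file on the volume under test.
        import time

        PATH  = "/path/to/large_test_file"   # placeholder
        CHUNK = 1 << 20                      # 1 MiB reads

        total, start = 0, time.time()
        f = open(PATH, "rb")
        while True:
            data = f.read(CHUNK)
            if not data:
                break
            total += len(data)
        f.close()
        print("%.1f MB/s over %d MiB" % (total / (time.time() - start) / 1e6, total >> 20))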

  5. #5

    Default

    hi,

    have you already applied these settings?

    http://forum.open-e.com/faq.php?faq=...please_give_me

    did you activate Direct Cache Access in the BIOS on your mainboards for DSS and VMware?
    (on some mainboards it is listed as "Crystal Beach" for setting up Direct Cache Access)

    greetings
    rogerk
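
    One thing worth ruling out alongside those settings (the truncated FAQ link above may well cover it) is jumbo frames being enabled on the NICs but not passing end to end. A hedged check from a Linux host, with a placeholder address standing in for the DSS data interface; 8972 bytes is a 9000-byte MTU minus 28 bytes of IP and ICMP headers:

        # End-to-end jumbo-frame check using Linux ping with don't-fragment set.
        import subprocess

        TARGET = "10.0.0.2"   # placeholder: the DSS data interface
        rc = subprocess.call(["ping", "-M", "do", "-c", "3", "-s", "8972", TARGET])
        print("jumbo frames pass" if rc == 0 else "blocked or MTU < 9000 somewhere")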

  6. #6

    Default

    Quote Originally Posted by rogerk
    hi,

    have you already applied these settings?

    http://forum.open-e.com/faq.php?faq=...please_give_me

    did you activate Direct Cache Access in the BIOS on your mainboards for DSS and VMware?
    (on some mainboards it is listed as "Crystal Beach" for setting up Direct Cache Access)

    greetings
    rogerk
    Yes, I did.
