
Thread: 10GB Performance Disaster

  1. #1


    OPEN-E:

    Intel SR2550, Xeon 5320, 6GB RAM, LSI RAID controller with 512MB cache w/BBWC, 8x 72GB 10k SAS drives in RAID 5, Intel 10GbE PCIe Ethernet


    SWITCH:

    D-Link DXS-3220

    VMWARE:

    Intel ST1550, ESX 4.0, dual Xeon 5320, 24GB RAM, NetXen 10GbE Ethernet


    I'm only getting 15 MB/s from a VM across to the DSS.

    Anyone have a suggestion on where to start? I submitted a ticket a few days ago, and haven't heard anything.

  2. #2

    I would probably stick a single-port gigabit card in both the server and the DSS, connect them with a point-to-point cable (no switch), and test that. This way you split the problem in half by eliminating the 10G cards and the switch. If your performance increases, you can assume it's not the DSS box itself and it comes down to the 10G drivers and the switch.
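    To separate raw network throughput from storage throughput, a tool like iperf can also help (a rough sketch; the IP is a placeholder, and since the DSS appliance won't run arbitrary tools, you'd run this between the VM and an ordinary machine on the same link):

    iperf -s (on the receiving machine)
    iperf -c 192.168.0.220 -t 10 -P 4 (on the sending machine: 10-second TCP test, 4 parallel streams)

    If this shows near wire speed, the bottleneck is in the storage path rather than the NICs, drivers, or cabling.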

  3. #3

    You say 15 MB/s. How are you testing this? What tool and what method are you using? Any load during your tests? Some more details will let some of us better advise you on what to look for.

    I have a very similar setup in terms of base technology, and I have been through the highs and lows of 10GbE technology with VMware and DSS.

  4. #4


    Quote Originally Posted by 1parkplace
    You say 15 MB/s. How are you testing this? What tool and what method are you using? Any load during your tests? Some more details will let some of us better advise you on what to look for.

    I have a very similar setup in terms of base technology, and I have been through the highs and lows of 10GbE technology with VMware and DSS.
    Testing with Crystal Disk Mark, and right now just trying a sequential read. This has been a VERY long journey: it started with OF and not getting 10GbE to run properly, and has now moved to a supported platform with even worse performance.

    I've now eliminated the switch and am running direct, card to card, with the same performance.

    I've swapped out the NetXen on the ESX side; I'm now running a Chelsio card. Exact same problem.

    No load whatsoever. I only have one Server 2003 VM with one vmdk. This is just to get a baseline.

  5. #5


    Hi,

    Have you already applied these settings?

    http://forum.open-e.com/faq.php?faq=...please_give_me

    Did you activate Direct Cache Access in the BIOS on the mainboards of the DSS and VMware machines? (On some mainboards Direct Cache Access is set up under "Crystal Beach".)

    Greetings,
    rogerk

  6. #6


    Quote Originally Posted by rogerk
    Hi,

    Have you already applied these settings?

    http://forum.open-e.com/faq.php?faq=...please_give_me

    Did you activate Direct Cache Access in the BIOS on the mainboards of the DSS and VMware machines? (On some mainboards Direct Cache Access is set up under "Crystal Beach".)

    Greetings,
    rogerk
    Yes, I did.

  7. #7


    Can you access the DSS from a Windows 2003 or 2008 installation instead of VMware?

    We use Citrix XenServer and DSS6 with 10GbE from Intel with good performance.


    Have you tested with sqlio and these parameters?
    sqlio.exe -kW -s20 -frandom -o64 -b4 -BH -LS -Fparam.txt
    Size of the test file: 10MB.
    You should get roughly 50k IO/s and 200MB/s.
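    For reference, the -Fparam.txt switch points sqlio at a parameter file. A minimal one matching the 10MB test file above might look like this (the path and thread count are placeholder assumptions; the format is file path, number of threads, CPU affinity mask, file size in MB):

    c:\testfile.dat 8 0x0 10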

    If not, I think there is a hardware error or some misconfiguration on the VMware side.

    Greetings,
    rogerk

  8. #8

    If you read the release notes, Chelsio and NetXen are iffy choices for 10GbE NICs. I'm using Intel-based 2-port 10-gig cards and they work great. I had major problems with the NetXen cards I was using before these.

    Are you using Block I/O or File I/O for your Volumes? Either way, check the following and retest:

    From the Console: Ctrl + Alt + W -> Enter Admin Password -> Tuning Options -> iSCSI Daemon Options -> Target Options -> Pick your target

    MaxRecvDataSegmentLength: 8192
    MaxBurstLength: 262144
    MaxXmitDataSegmentLength: 8192
    FirstBurstLength: 65536
    DataDigest: None
    MaxOutstandingR2T: 1
    InitialR2T: Yes
    ImmediateData: No
    HeaderDigest: None
    Wthreads: 8

    Have you tried enabling jumbo frames on the switch / DSS / ESX? I found this article relating to it: http://www.interworks.com/blogs/kcul...s-vmware-esx-4

    I am not using jumbo frames myself, however you can try it...
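    If you do try it on the ESX 4 side, the service console commands look roughly like this (a sketch only; vSwitch1, the "iSCSI" port group name, and the IP/netmask are placeholder assumptions, and the physical switch and DSS must be set to a matching MTU as well):

    esxcfg-vswitch -m 9000 vSwitch1
    esxcfg-vmknic -a -i 10.0.0.10 -n 255.255.255.0 -m 9000 "iSCSI"

    The first command raises the vSwitch MTU to 9000; the second adds a VMkernel NIC on the "iSCSI" port group with a 9000-byte MTU. An existing vmknic on that port group has to be removed first (esxcfg-vmknic -d "iSCSI"), since the MTU can't be changed in place.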

  9. #9


    Quote Originally Posted by 1parkplace
    If you read the release notes, Chelsio and NetXen are iffy choices for 10GbE NICs. I'm using Intel-based 2-port 10-gig cards and they work great. I had major problems with the NetXen cards I was using before these.

    -----------------------
    Intel on the DSS side, Chelsio on the VM side. (They were NetXen)
    ----------------------------

    Are you using Block I/O or File I/O for your Volumes? Either way, check the following and retest:
    ------------------------------
    File I/O
    -------------------------------

    From the Console: Ctrl + Alt + W -> Enter Admin Password -> Tuning Options -> iSCSI Daemon Options -> Target Options -> Pick your target

    MaxRecvDataSegmentLength: 8192
    MaxBurstLength: 262144
    MaxXmitDataSegmentLength: 8192
    FirstBurstLength: 65536
    DataDigest: None
    MaxOutstandingR2T: 1
    InitialR2T: Yes
    ImmediateData: No
    HeaderDigest: None
    Wthreads: 8

    Have you tried enabling jumbo frames on the switch / DSS / ESX? I found this article relating to it: http://www.interworks.com/blogs/kcul...s-vmware-esx-4

    I am not using jumbo frames myself, however you can try it...
    No jumbo frames... I should still be able to push 300 MB/s without them.

  10. #10


    My DSS box will be here in a few days (finally!). I am going the 10Gb route myself on my DSS box, and I found this thread quite interesting. What I don't understand is why you would NOT enable jumbo frames. The investment in 10GbE gear over 1GbE is still significant, so why would you not want to eke out every ounce of performance 10GbE has to offer, especially when all it takes is a few keystrokes and mouse clicks?
