
Thread: Why is my iSCSI speed limited?

  1. #1

    Why is my iSCSI speed limited?

    Hi! I'm using DSS V6 Lite (6.0up50.8401.4786 64-bit) with VMware ESXi 4.0 (build 171294).

    I have followed this guide (documents.open-e.com/Open-_DSS_V6_MPIO_with_ESX4i_4.1.pdf) step by step, but I can't get good iSCSI speed.
    I have also watched this video and followed it step by step: http://www.vimeo.com/moogaloop.swf?clip_id=18113062

    I have tried jumbo frames at both 7000 and 9000.
    I have used a dedicated switch with jumbo frames enabled.
    Now I'm using crossover patch cables, but I'm still getting the same performance.
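    Before tuning iSCSI parameters any further, it may be worth confirming that jumbo frames actually pass end to end, since a single 1500-MTU hop can silently fragment or drop large frames. A sketch of the checks from the ESXi console (the 192.168.2.220 portal address is a placeholder assumption; substitute your own DSS iSCSI portal IPs):

```shell
# Don't-fragment ping sized for a 9000 MTU:
# 9000 bytes - 20 (IP header) - 8 (ICMP header) = 8972 payload.
# The target IP below is a placeholder; use your DSS portal addresses.
vmkping -d -s 8972 192.168.2.220

# Double-check that vSwitches and vmkernel ports both report MTU 9000:
esxcfg-vswitch -l
esxcfg-vmknic -l
```

    If the large ping fails while a plain vmkping to the same address succeeds, something in the path is not honoring the 9000 MTU.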

    I have also configured the following (based on this post: http://forum.open-e.com/showthread.p...ght=slow+iscsi).
    On DSS V6 Lite, here are the settings I made:
    MaxRecvDataSegmentLength=65536
    MaxBurstLength=1047552 (the recommended 16776192 dropped my connection)
    MaxXmitDataSegmentLength=65536
    FirstBurstLength=523776
    MaxOutstandingR2T=8
    InitialR2T=No
    ImmediateData=Yes

    On ESXi 4:
    discovery.sendtargets.iscsi.MaxRecvDataSegmentLength = 65536
    node.session.iscsi.FirstBurstLength = 523776
    node.session.iscsi.MaxBurstLength = 1047552
    node.conn[0].iscsi.MaxRecvDataSegmentLength = 65536
    node.session.iscsi.InitialR2T = No
    node.session.iscsi.ImmediateData = Yes
    node.conn[0].tcp.window_size = 65536

    But I cannot improve the speed.
    I don't know what I'm doing wrong.
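    For what it's worth, the values above are at least internally consistent. A throwaway Python sketch (values copied from the settings listed in this post; the constraints come from RFC 3720) shows the basic sanity conditions they must satisfy:

```python
# iSCSI parameter sanity check -- values copied from the post above.
params = {
    "MaxRecvDataSegmentLength": 65536,
    "MaxXmitDataSegmentLength": 65536,
    "MaxBurstLength": 1047552,
    "FirstBurstLength": 523776,
    "MaxOutstandingR2T": 8,
}

# RFC 3720: FirstBurstLength must not exceed MaxBurstLength.
assert params["FirstBurstLength"] <= params["MaxBurstLength"]

# Both burst lengths are 512-byte aligned (2046 and 1023 sectors).
for key in ("MaxBurstLength", "FirstBurstLength"):
    assert params[key] % 512 == 0, key

print("parameter sanity check passed")
```

    So the negotiated sizes themselves are legal; they just don't explain the low throughput.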

    The DSS V6 Lite box has 2 dedicated Intel PRO/1000 GT (PCI) NICs.
    Hardware RAID: 3ware 9650SE in a RAID 10 configuration.
    The ESXi host has the same setup: 2 dedicated Intel PRO/1000 GT (PCI) NICs.

    IOmeter shows a maximum of 25 MB/s.
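    As a point of reference for that figure (taking the "25 Mb" as 25 MB/s, which is an assumption), here is some back-of-envelope line-rate math, not a measurement:

```python
# Rough gigabit line-rate math (an illustration, not a measurement).
link_mbit = 1000
raw_mb_s = link_mbit / 8        # 125.0 MB/s raw payload per gigabit link
usable_mb_s = raw_mb_s * 0.9    # crude ~10% allowance for Ethernet/IP/TCP/iSCSI overhead
mpio_paths = 2                  # two dedicated NICs per side, as in this setup
expected_mb_s = usable_mb_s * mpio_paths

observed_mb_s = 25              # the IOmeter figure from this post
print(f"expected ~{expected_mb_s:.0f} MB/s, observed {observed_mb_s} MB/s")
```

    The observed figure is roughly a tenth of what two gigabit MPIO paths should deliver, so the limit is not the wire speed itself.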

    Here are some pics:






    Sorry for my English, thank you!!!

  2. #2


    Two more pics:




    Thanks!

  3. #3
    Join Date
    Sep 2010
    Location
    Albury, New South Wales, Australia, Earth
    Posts
    40


    From the console, what's your output for the following?

    esxcfg-vswitch -l
    esxcfg-vmknic -l
    esxcfg-nics -l
    esxcli swiscsi nic list -d vmhba32

    Cheers
    Adam

  4. #4


    Thanks for your help, AdStar.

    Here are the commands:

    ~ # esxcfg-vswitch -l
    Switch Name Num Ports Used Ports Configured Ports MTU Uplinks
    vSwitch0 64 10 64 1500 vmnic1,vmnic0

    PortGroup Name VLAN ID Used Ports Uplinks
    VM Network 0 6 vmnic0,vmnic1
    Management Network 0 1 vmnic0,vmnic1

    Switch Name Num Ports Used Ports Configured Ports MTU Uplinks
    vSwitch1 64 3 64 9000 vmnic2

    PortGroup Name VLAN ID Used Ports Uplinks
    MPIO-1 0 1 vmnic2

    Switch Name Num Ports Used Ports Configured Ports MTU Uplinks
    vSwitch2 64 3 64 9000 vmnic3

    PortGroup Name VLAN ID Used Ports Uplinks
    MPIO-2 0 1 vmnic3

    ~ # esxcfg-vmknic -l
    Interface Port Group/DVPort IP Family IP Address Netmask Broadcast MAC Address MTU TSO MSS Enabled Type
    vmk0 Management Network IPv4 192.168.0.60 255.255.255.0 192.168.0.255 00:25:90:15:d3:6e 1500 65535 true STATIC
    vmk1 MPIO-1 IPv4 192.168.2.200 255.255.255.0 192.168.2.255 00:50:56:72:b6:bd 9000 65535 true STATIC
    vmk2 MPIO-2 IPv4 192.168.3.200 255.255.255.0 192.168.3.255 00:50:56:7d:2a:83 9000 65535 true STATIC

    ~ # esxcfg-nics -l
    Name PCI Driver Link Speed Duplex MAC Address MTU Description
    vmnic0 0d:00.00 e1000e Up 1000Mbps Full 00:25:90:15:d3:6e 1500 Intel Corporation 82573E Gigabit Ethernet Controller
    vmnic1 0f:00.00 e1000e Up 1000Mbps Full 00:25:90:15:d3:6f 1500 Intel Corporation 82573L Gigabit Ethernet Controller
    vmnic2 11:02.00 e1000 Up 1000Mbps Full 00:0e:0c:d8:22:e6 9000 Intel Corporation PRO/1000 GT Desktop Adapter
    vmnic3 11:03.00 e1000 Up 1000Mbps Full 00:1b:21:86:fd:fc 9000 Intel Corporation PRO/1000 GT Desktop Adapter

    If I run esxcli swiscsi nic list -d vmhba32, I get this:

    ~ # esxcli swiscsi nic list -d vmhba32
    Could not find valid OIDs for adapter vmhba32
    Errors:
    List nic failed in IMA.

    But if I run esxcli swiscsi nic list -d vmhba34, I get this:

    ~ # esxcli swiscsi nic list -d vmhba34
    vmk1
    pNic name: vmnic2
    ipv4 address: 192.168.2.200
    ipv4 net mask: 255.255.255.0
    ipv6 addresses:
    mac address: 00:0e:0c:d8:22:e6
    mtu: 9000
    toe: false
    tso: true
    tcp checksum: false
    vlan: true
    link connected: true
    ethernet speed: 1000
    packets received: 70
    packets sent: 11
    NIC driver: e1000
    driver version: 8.0.3.1-NAPI
    firmware version: N/A

    vmk2
    pNic name: vmnic3
    ipv4 address: 192.168.3.200
    ipv4 net mask: 255.255.255.0
    ipv6 addresses:
    mac address: 00:1b:21:86:fd:fc
    mtu: 9000
    toe: false
    tso: true
    tcp checksum: false
    vlan: true
    link connected: true
    ethernet speed: 1000
    packets received: 70
    packets sent: 11
    NIC driver: e1000
    driver version: 8.0.3.1-NAPI
    firmware version: N/A

    Thanks!!!!

  5. #5


    An update:
    I have set up RAID 0 for a speed test, but I still get the same performance. I do not know where the bottleneck in my setup is.

    thanks

  6. #6


    Another update:

    The previous tests were made with the target configured in File I/O mode with write-back (WB) enabled.
    I also ran tests with WB disabled (WB was disabled on the controller as well), but the performance was far too slow.

    Recently I reconfigured the DSS V6 Lite with Block I/O and WB enabled. Now I get better performance, but it is still slow.

    ESXi says the NICs are transmitting about 30,000 Kbps each.
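    One caveat on units: vSphere network charts report KBps (kilobytes per second), so whether that 30,000 figure is kilobytes or kilobits changes the picture a lot. A quick conversion sketch (SI units assumed):

```python
# Convert the vSphere chart figure under both unit interpretations.
per_nic = 30_000

# If the chart unit is KBps (kilobytes/s), as vSphere charts use:
mb_per_s_bytes = per_nic / 1000      # 30.0 MB/s per NIC

# If it were kilobits/s instead:
mb_per_s_bits = per_nic / 8 / 1000   # 3.75 MB/s per NIC

print(mb_per_s_bytes, mb_per_s_bits)
```

    Two NICs at ~30 MB/s each would already be a respectable fraction of gigabit speed, while 3.75 MB/s each would confirm a severe bottleneck, so it is worth pinning down which unit the chart is showing.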

    What other tests can I do to figure out what the problem is?

    Thanks!

  7. #7
    Join Date
    Oct 2010
    Location
    GA
    Posts
    935


    Try upgrading to the latest V6 build (b5087) and switch to SCST 1.0.

  8. #8


    Gr-R:


    I'm already using DSS V6 6.0up55.8101.5087 64-bit and SCST 1.0.

    The strange thing is that I can move a 41 GB file from the ESXi host to the DSS V6 in less than 10 minutes, but inside a VM, IOmeter shows 25 MB/s maximum.
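    That large-file copy actually implies a decent sequential rate. A quick back-of-envelope calculation (assuming "41 Gb" means 41 gigabytes and taking the copy time as the full 10 minutes):

```python
# Implied throughput of the 41 GB file copy (SI units assumed).
size_mb = 41 * 1000          # 41 GB expressed in MB
seconds = 10 * 60            # "less than 10 minutes", taken as 10 minutes
copy_mb_s = size_mb / seconds
print(f"sequential copy is at least ~{copy_mb_s:.0f} MB/s")
```

    So large sequential transfers run at a healthy fraction of single-link gigabit speed, while IOmeter inside the VM sees only ~25 MB/s. That gap points at the I/O pattern (block size, outstanding I/Os) or the in-VM storage path rather than the network itself.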

    I don't know if the processor (Athlon II X2 245, 2.9 GHz) is my bottleneck:



    Thanks!!!

  9. #9
    Join Date
    Sep 2010
    Location
    Albury, New South Wales, Australia, Earth
    Posts
    40


    Hmm, you're using desktop adapters for your MPIO; I would carefully check the specs on those adapters. I'm not saying there is an issue here, just that I wouldn't use desktop cards for anything intensive like iSCSI MPIO. The rest of your ESXi settings look right.

    What is your switch equipment? Have you checked packet throughput on your switches?

    Cheers
    Ad

  10. #10


    AdStar:

    Now I am using a crossover cable between the Open-E box and the ESXi host.
    But in case you want to know, this is my switch:
    http://www.intellinet-network.com/en...b-smart-switch

    I will try to upgrade the processor and see if the performance improves.

    Thanks!
