
Thread: Urgent Problem i/o ;(

  1. #1
    Join Date
    Feb 2009
    Posts
    142

    Default

    What model disk controller? Do you have Jumbo Frames enabled? Did you tweak the iSCSI target settings as described in various threads in this forum? I am getting between 52 MB/s and 89 MB/s, depending on traffic, using your command line in a CentOS 5 VM with an LSI 9260 disk controller, RAID 10 SATA drives, Jumbo Frames, tweaked target settings, and MPIO (not bonded). I have about 32 VMs active under XenServer 6.0.

  2. #2

    Default

    Thanks for the answers. I contacted support and made the following changes:

    maxRecvDataSegmentLen=262144
    MaxBurstLength=16776192
    Maxxmitdatasegment=262144
    FirstBurstLength=65536
    DataDigest=None
    maxoutstandingr2t=8
    InitialR2T=No
    ImmediateData=Yes
    headerDigest=None
    Wthreads=8

    I also enabled jumbo frames (MTU 9000). My disk is a SEAGATE ST31000424SS 00069WK36GJR.
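For reference, the size values in the settings above decode to round power-of-two figures; a quick sketch of the arithmetic (nothing here is DSS-specific, just unit conversion):

```shell
# Sanity-check the negotiated iSCSI sizes from the settings above.
echo "maxRecvDataSegmentLen: $((262144 / 1024)) KiB"   # 256 KiB
echo "FirstBurstLength:      $((65536 / 1024)) KiB"    # 64 KiB
echo "MaxBurstLength:        $((16776192 / 1024)) KiB" # 16 MiB minus 1 KiB
```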

  3. #3

    Default

    Do you have any additional XenServer configuration?
    Which version do you have, paid or free?

  4. #4
    Join Date
    Aug 2010
    Posts
    404

    Default

    What build of XenServer are you running? Is it the latest? Your Xen might have an issue with your NIC drivers; did you check for that?

  5. #5
    Join Date
    Feb 2009
    Posts
    142

    Default

    Quote Originally Posted by javiercampos View Post
    Do you have any additional XenServer configuration?
    Which version do you have, paid or free?
    Paid or free is the same as far as base functionality of XenServer goes. I prefer MPIO to bonding, because with MPIO you can get close to 2 Gb throughput, whereas with bonding you are still only getting 1 Gb max throughput to any one server.
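    A rough sketch of why MPIO can approach 2 Gb while a bond tops out at 1 Gb to a single server (a bonded connection hashes any one flow onto a single link):

```shell
# Two independent iSCSI paths aggregate bandwidth; a bond's single flow does not.
PATHS=2
PER_PATH_MBPS=1000
echo "MPIO aggregate:    $((PATHS * PER_PATH_MBPS)) Mb/s"
echo "Bond, single flow: $PER_PATH_MBPS Mb/s"
```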

  6. #6

    Default

    Are you using any special switch?

  7. #7
    Join Date
    Feb 2009
    Posts
    142

    Default

    We are using a Dell 6248 with gigabit ports. If you are using Jumbo Frames, you need to make sure your switch supports them and that your packets are not getting fragmented. Use a test like this from your XenServers to your DSS server's iSCSI IPs: ping -M do -s 8972 -c 10 10.10.10.1 (where 10.10.10.1 is your DSS IP address). If the packets are fragmenting, ping will tell you; if they are OK, you will get 10 normal-looking ping responses.

    Your XenServer NICs have to have MTU=9000 set, and the same on the DSS side. On the Dell 6248 we have to set each port on which we want Jumbo Frames to an MTU of 9016, which kicks it into Jumbo Frame mode. Check the docs for your switch to see if there is anything you have to do to enable Jumbo Frames.
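    The 8972-byte payload in the ping test above is just the MTU minus header overhead; a quick sketch of where it comes from (the 10.10.10.1 address is the example DSS IP from the post above):

```shell
# Derive the ping payload for a don't-fragment jumbo-frame test:
# MTU 9000 minus 20 bytes IPv4 header minus 8 bytes ICMP header = 8972.
MTU=9000
PAYLOAD=$((MTU - 28))
echo "ping -M do -s $PAYLOAD -c 10 10.10.10.1"
```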

  8. #8

    Default

    Hello,
    I connected the server directly to the storage server with a crossover cable, and it gives the same result.

  9. #9
    Join Date
    Feb 2009
    Posts
    142

    Default

    Is this a production server, or are you in a testing phase? If testing, you might try RAID 10 instead of RAID 5. What disk controller did you say you were using, and how much cache does it have? I am assuming you are using Block I/O.

    Running out of ideas as to what your issue might be :-)

  10. #10

    Default 100mb

    Quote Originally Posted by javiercampos View Post
    Hello,
    I connected the server directly to the storage server with a crossover cable, and it gives the same result.
    My guess is that you have a 100 Mb crossover cable and/or the NIC in your host is only negotiating at 100 Mb. Make sure you have a gigabit switch and not just a Fast Ethernet switch. If you do not have a gigabit crossover cable, check this link on how to make one: http://logout.sh/computers/net/gigabit/. The 9.1 MB/s rate you mentioned is the full throughput of 100 Mb Ethernet; expect 80-90 MB/s for gigabit.
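    The 9.1 MB/s figure lines up with 100 Mb line rate; a quick sketch of the arithmetic (raw line rate, before protocol overhead):

```shell
# 8 bits per byte: 100 Mb/s is 12.5 MB/s raw; 1000 Mb/s is 125 MB/s raw.
for MBPS in 100 1000; do
  echo "$MBPS Mb/s = $((MBPS / 8)) MB/s raw"
done
```

    With IP/TCP/iSCSI overhead, real throughput lands a little below these raw figures, which matches the ~9 MB/s and 80-90 MB/s numbers above.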
