Thread: Urgent Problem i/o ;(

  1. #1
    Join Date
    Oct 2010
    Location
    GA
    Posts
    935

    Default

    Is this an iSCSI target or a NAS volume?
    Have you tried it without the bond?

  2. #2

    Default

    iSCSI target.
    I also turned off the bond.

  3. #3
    Join Date
    Oct 2010
    Location
    GA
    Posts
    935

    Default

    Please open a support case so we can see your system logs.

  4. #4
    Join Date
    Feb 2009
    Posts
    142

    Default

    What model is your disk controller? Do you have jumbo frames enabled? Did you tweak the iSCSI target settings as described in various threads on this forum? Using your command line in a CentOS 5 VM, I get between 52 MB/s and 89 MB/s depending on traffic, with an LSI 9260 disk controller, RAID 10 SATA drives, jumbo frames, tweaked target settings, and MPIO (not bonded). I have about 32 VMs active under XenServer 6.0.

  5. #5

    Default

    Thanks for the answers. I contacted support and made the following changes:

    MaxRecvDataSegmentLength=262144
    MaxBurstLength=16776192
    MaxXmitDataSegmentLength=262144
    FirstBurstLength=65536
    DataDigest=None
    MaxOutstandingR2T=8
    InitialR2T=No
    ImmediateData=Yes
    HeaderDigest=None
    Wthreads=8

    I also enabled jumbo frames (MTU 9000). My disk is a SEAGATE ST31000424SS 00069WK36GJR.
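    A quick way to confirm that MTU 9000 actually works end to end is a non-fragmenting ping. This is a generic sketch, not from this thread: the target IP is a placeholder, and the 8972-byte payload is simply MTU 9000 minus the 20-byte IPv4 and 8-byte ICMP headers.

```shell
# Jumbo-frame sanity check (placeholder target IP: 192.168.1.10).
MTU=9000
PAYLOAD=$((MTU - 20 - 8))   # subtract 20-byte IPv4 header + 8-byte ICMP header
echo "non-fragmenting ping payload for MTU ${MTU}: ${PAYLOAD} bytes"
# -M do forbids fragmentation, so the ping fails if any hop has a smaller MTU:
# ping -c 3 -M do -s "$PAYLOAD" 192.168.1.10
```

    If the ping fails with "message too long", some hop (NIC, switch port, or target) is still at MTU 1500.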

  6. #6

    Default

    Do you have any additional configuration in XenServer?
    Which version do you have: paid or free?

  7. #7
    Join Date
    Aug 2010
    Posts
    404

    Default

    What build of XenServer are you running? Is it the latest? Your Xen might have an issue with your NIC drivers; have you checked for that?

  8. #8
    Join Date
    Feb 2009
    Posts
    142

    Default

    Quote Originally Posted by javiercampos View Post
    Do you have any additional configuration in XenServer?
    Which version do you have: paid or free?
    Paid or free is the same as far as base functionality of XenServer goes. I prefer MPIO to bonding because you can get close to 2 Gbit throughput, whereas with bonding you still only get 1 Gbit max throughput to any one server.
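    The MPIO-versus-bonding point can be sketched for a Linux initiator using open-iscsi and dm-multipath. The portal addresses and the 1 Gbit link speed below are assumptions for illustration, not details from this thread:

```shell
# Hypothetical two-portal MPIO login (open-iscsi); portal IPs are placeholders.
#   iscsiadm -m discovery -t sendtargets -p 192.168.10.1
#   iscsiadm -m discovery -t sendtargets -p 192.168.20.1
#   iscsiadm -m node --login    # one session per portal
#   multipath -ll               # dm-multipath should show both paths

# Why MPIO can beat a bond for a single initiator (1 Gbit NICs assumed):
LINK_MBIT=1000
BOND_CEILING=$LINK_MBIT              # one TCP session hashes onto one slave NIC
MPIO_CEILING=$((2 * LINK_MBIT))      # round-robin across two independent sessions
echo "bond ceiling to one server: ${BOND_CEILING} Mbit/s"
echo "MPIO ceiling to one server: ${MPIO_CEILING} Mbit/s"
```

    A bond helps aggregate traffic from many clients, but any single initiator-to-target flow stays on one slave link; MPIO opens independent iSCSI sessions over separate NICs, so one host can use both.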

  9. #9

    Default

    Are you using any special switch?
