
Thread: ISCSI Slow Performance with 2003 SBS

  1. #1
    Join Date
    Oct 2008
    Posts
    69

    Default ISCSI Slow Performance with 2003 SBS

    Hi,

    I have two servers connected through a switch to the Open-E for the iSCSI LAN, both at gigabit.

    One is a Windows 2008 server; I get good performance when copying files onto the iSCSI volume (80-90 MB/s write).

    The second is a Windows 2003 SBS, and performance is really bad.

    The iSCSI target has the same options on Open-E (C+A+W, iSCSI daemon options at the maximum possible values); jumbo frames are off.

    When copying data to my Open-E over SMB, using the iSCSI LAN from the Windows 2003 machine, performance is really good, so it looks like only iSCSI from Windows 2003 is slow.

    Any ideas?

    Thanks

    NSC

  2. #2

    Default

    Hi NSC,

    Check the firmware on the NIC in the 2003 box and make sure it is up to date.

  3. #3
    Join Date
    Oct 2008
    Posts
    69

    Default

    Hi,

    I did a full update of the server (IBM xSeries 346) and rebooted, but no change...

    Really strange.

  4. #4
    Join Date
    Aug 2008
    Posts
    236

    Default

    You may need to tune your TCP parameters. Windows 2000/2003 uses the Nagle algorithm by default; disable it. See the MS iSCSI User Guide under:

    "Addressing Slow Performance with iSCSI Clusters"

  5. #5
    Join Date
    Oct 2008
    Posts
    69

    Default

    OK, thanks, I did that and it's better now.

    Now I have another question: it looks like bonding is useless. I tried RR, XOR, and TLB, and I get better performance with Active-Backup (best at 90 MB/s).

    Any info about that? It looks like with 2 servers I have to break my bonding and use a dedicated NIC for each server, right?

  6. #6
    Join Date
    Nov 2008
    Location
    Hamburg, Germany
    Posts
    102

    Default

    That also depends on the switches and how they cope with bonding. There are several posts about this "issue". Bonding was originally designed for fault-tolerance, not speed.
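
    A worked example of why a single session doesn't get faster, based on my reading of the Linux bonding docs (general bonding behaviour, nothing DSS-specific): with the layer2 transmit hash used by balance-xor, the outgoing slave is picked from the two MAC addresses alone, so a fixed server-to-DSS pair always lands on the same gigabit link. The MAC addresses below are made up.

        # Rough sketch of the bonding "layer2" xmit hash: XOR of the last MAC
        # address bytes, modulo the number of slaves. One MAC pair -> always the
        # same slave, so one iSCSI session is capped at a single NIC's throughput.
        def layer2_slave(src_mac: str, dst_mac: str, slave_count: int) -> int:
            src_last = int(src_mac.split(":")[-1], 16)
            dst_last = int(dst_mac.split(":")[-1], 16)
            return (src_last ^ dst_last) % slave_count

        # Made-up MACs for a server and the DSS on a 2-slave bond:
        print(layer2_slave("00:1a:64:aa:bb:01", "00:15:17:cc:dd:02", 2))  # -> 1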

    I couldn't get more performance out of a bond either, so I indeed decided to wire the servers through dedicated links to my DSS, or in my case: run multiple links in different networks from my server to my DSS.

    I am seriously starting to think about getting a pile of 10 GbE gear.

    Cheers,
    budy
    There's no OS like OS X!
