
Thread: ISCSI Slow Performance with 2003 SBS

  1. #1
    Join Date
    Oct 2008
    Posts
    69

    Default ISCSI Slow Performance with 2003 SBS

    Hi,

    I have two servers connected through a switch to the Open-E box on a dedicated iSCSI LAN, all at Gigabit.

    One is a Windows Server 2008 machine, and I get good performance when copying files to its iSCSI volume (80-90 MB/s write).

    The second is a Windows 2003 SBS machine, and its performance is really bad.

    Both iSCSI targets have the same options on the Open-E side (C+A+W, iSCSI daemon options at the maximum values possible), and jumbo frames are off.

    When I copy data to the Open-E box over SMB on the same iSCSI LAN from the Windows 2003 server, performance is really good, so it looks like only the iSCSI traffic from Windows 2003 is slow.

    Any ideas?

    Thanks

    NSC

  2. #2

    Default

    Hi NSC,

    Check the firmware on the NIC in the 2003 box and make sure it is up to date.

  3. #3
    Join Date
    Oct 2008
    Posts
    69

    Default

    Hi,

    I did a full update of the server (IBM xSeries 346) and rebooted, but there's no change...

    Really strange.

  4. #4
    Join Date
    Aug 2008
    Posts
    236

    Default

    You may need to tune your TCP parameters. Windows 2000/2003 uses the Nagle algorithm by default; disable it. See the MS iSCSI User Guide under:

    "Addressing Slow Performance with iSCSI Clusters"

  5. #5
    Join Date
    Oct 2008
    Posts
    69

    Default

    OK, thanks, I did that and it's better now.

    Now I have another question: it looks like bonding is useless. I tried RR, XOR and TLB, and I get better performance with Active-Backup (best at 90 MB/s).

    Any info about that? It looks like with 2 servers I have to break my bond and use a dedicated NIC for each server, right?

  6. #6
    Join Date
    Nov 2008
    Location
    Hamburg, Germany
    Posts
    102

    Default

    That also depends on the switches and how they cope with bonding. There are several posts about this "issue". Bonding was originally designed for fault tolerance, not speed.

    I couldn't get more performance out of a bond either, so I decided to wire the servers through dedicated links to my DSS, or in my case: run multiple links in different networks from my server to my DSS.

    I am seriously starting to think about getting a pile of 10 GbE gear.

    Cheers,
    budy
    There's no OS like OS X!

  7. #7
    Join Date
    Oct 2008
    Posts
    69

    Default

    I understand the fault-tolerance side of bonding, but what if I need more bandwidth for iSCSI?

    My Open-E box has 6 Gigabit NICs (2x Intel dual-port and 2 on the motherboard).

    For the best performance, which should I use:

    a switch with 802.3ad, or one dedicated NIC per iSCSI volume?

  8. #8
    Join Date
    Nov 2008
    Location
    Hamburg, Germany
    Posts
    102

    Default

    That's exactly the same setup as mine. I am running one of my DSS boxes on a Dell PE1750 with 2x onboard GbE and a PCI-X quad-port GbE.

    I do have an 802.3ad bond, and I have just configured a further NIC for connecting the 3rd iSCSI volume to one of my Xserves, which also has 4 GbE ports.
    Up to now I have been connecting my 2 iSCSI volumes through the bond and never got more than 130 MB/s; now that the 3rd iSCSI volume is connected through the additional link, we'll see how it works out tomorrow, when the next full backup is due.

    Cheers,
    budy
    There's no OS like OS X!

  9. #9
    Join Date
    Aug 2008
    Posts
    236

    Default

    Bonding only gives more potential bandwidth if you have many hosts and the result of the XOR hash works in your favor; any single host-to-host flow always lands on one slave link (see the sketch below). You are going to have to resort to multiple dedicated links or higher-bandwidth links...
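
    To make that concrete, here is a rough sketch (not the driver source) of the layer-2 transmit hash that the Linux bonding docs describe for balance-xor and for 802.3ad with the default xmit_hash_policy: source MAC XOR destination MAC, modulo the number of slaves. The MAC addresses below are made up for illustration.

        # Simplified layer-2 hash used by Linux bonding (balance-xor / 802.3ad,
        # xmit_hash_policy=layer2): XOR of the MACs' last octets, modulo slaves.
        def layer2_hash(src_mac: str, dst_mac: str, num_slaves: int) -> int:
            src_last = int(src_mac.split(":")[-1], 16)
            dst_last = int(dst_mac.split(":")[-1], 16)
            return (src_last ^ dst_last) % num_slaves

        # Made-up MACs: one initiator (the server) and one target (the DSS).
        server = "00:1a:64:aa:bb:01"
        dss = "00:15:17:cc:dd:10"

        # Every frame of this pair picks the same slave out of a 2-link bond,
        # so that one flow never gets more than a single link's bandwidth.
        print(layer2_hash(server, dss, 2))

        # Only with several initiators whose MACs hash differently does the
        # traffic spread across both slave links.
        for last in ("02", "03", "04", "05"):
            print(layer2_hash("00:1a:64:aa:bb:" + last, dss, 2))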

  10. #10
    Join Date
    Oct 2008
    Posts
    69

    Default

    So without a switch that supports 802.3ad, the best bonding mode is XOR?

    Thanks
