
Thread: Insufficient speed with iSCSI DSS 6

  1. #1

    Default Insufficient speed with iSCSI DSS 6

    I have an iSCSI test setup with the following hardware:

    open-e 6.0up14.8101.4221 64bit :
    2x Opteron 2378 (2.4 GHz)
    8 GiB Ram
    Areca 1680 Controller
    8x 1TB HDD Raid 5
    2x Intel dualport Gigabit NIC

    ESX 4:
    4x Opteron 2378 (2.4 GHz)
    32 GiB Ram
    2x Intel quadport Gigabit NIC

    Open-E provides one iSCSI target for ESX. Unfortunately, I am having severe performance problems: testing under a virtualized Windows XP with IOmeter, I would expect around 100 MiB/s read or write throughput, but whatever IOmeter settings I try, I can't get substantially more than 50 MiB/s.

    Can anybody provide data from their own tests, or any pointers on how to optimize the speed?

  2. #2

    Default

    Try using jumbo frames and separate VLANs or switches for the iSCSI traffic. Are you using MPIO, trunks, or just a single connection?
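
    If you go the jumbo frame route, it is also worth confirming that 9000-byte frames really pass end-to-end (NICs, vSwitch and physical switch ports). A minimal sketch, assuming a Linux host on the iSCSI network with the iputils ping; the target address is just a placeholder:

        import subprocess

        TARGET = "192.168.10.10"   # placeholder: IP of the DSS iSCSI interface
        MTU = 9000
        PAYLOAD = MTU - 20 - 8     # subtract IPv4 (20) and ICMP (8) headers

        # -M do forbids fragmentation, so the ping only succeeds if every hop
        # on the path genuinely accepts 9000-byte frames.
        result = subprocess.run(
            ["ping", "-c", "3", "-M", "do", "-s", str(PAYLOAD), TARGET],
            capture_output=True, text=True,
        )
        print(result.stdout)
        print("jumbo frames OK" if result.returncode == 0
              else "path does not pass 9000-byte frames")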

  3. #3

    Default

    I have already set up jumbo frames on ESX and Open-E, which led to a speed increase of about 5 MiB/s.

    At the moment I am using a single connection. I plan to use MPIO in the future, but I would first like to have everything running at a reasonable speed over a single Gigabit Ethernet connection.

    As the theoretical maximum for GbE is 125 MiB/s, I think I should at least achieve ~100 MiB/s, especially since my test setup uses a crossover cable (see the rough calculation at the end of this post).

    Are there any common problems or pitfalls I should know about?

    I am also unsure about the testing with IOmeter. Sometimes the transfer rate starts at about 12 MiB/s and, within a minute, climbs to approximately 50 MiB/s. Is this behaviour expected, or perhaps a sign of problems?
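
    For reference, this is the back-of-the-envelope arithmetic behind my ~100 MiB/s expectation; the header sizes are approximations, not measured values:

        # Rough upper bound for iSCSI over a single GbE link (approximate header sizes).
        line_rate_bytes = 1_000_000_000 / 8          # 125,000,000 bytes/s on the wire
        mtu = 9000                                   # jumbo frames
        tcp_ip_iscsi = 20 + 20 + 48                  # IPv4 + TCP + iSCSI basic header segment
        ethernet_overhead = 14 + 4 + 8 + 12          # header + FCS + preamble + inter-frame gap
        payload = mtu - tcp_ip_iscsi                 # user data per full-sized frame
        efficiency = payload / (mtu + ethernet_overhead)
        print(f"protocol efficiency: {efficiency:.1%}")
        print(f"usable throughput:  ~{line_rate_bytes * efficiency / 2**20:.0f} MiB/s")

    That works out to a ceiling of roughly 115-118 MiB/s, which is why ~100 MiB/s still looks like a fair target to me.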

  4. #4

    Default

    Sorry for the bump, but I have grown somewhat desperate.

    In the meantime I have tried to optimize the configuration, e.g. according to this thread. I got minor speed improvements (about 8 MiB/s), but the overall result is still unsatisfactory.

    Could somebody please tell me whether my expectations for Open-E are unrealistic or far-fetched? Shouldn't Open-E be able to deliver at least ~100 MiB/s out of the box, without further tweaking? I think my hardware (listed above) is quite powerful and shouldn't be an issue.

    Could some people therefore post their out-of-the-box performance figures?
    Many thanks in advance.

  5. #5

    Default

    Have you performed any kind of baseline testing? It seems to me that you are getting ahead of yourself: you are going straight to testing the disk performance of VMs on an iSCSI device before establishing how well the disks perform on the Open-E host itself, or how well a VM performs using local storage.

    What is the performance of a single disk in your array? What is the performance of all your disks when combined into a volume?
    What is your baseline I/O performance on your ESX host? How do your VMs perform using local storage?

    When you are performing this kind of integration, time and care must be taken to test each component individually before bringing everything together as the final solution.

    It's a lot of work and effort, and it's not for the faint of heart. This forum gets a lot of performance-related questions, but performance is always relative: you can't expect to get something out of iSCSI that you can't get out of the disks natively.

    That said, your question about delivering performance out of the box is an interesting one. I'd say yes: you should be able to get that kind of performance out of the box when all the components are working well together. So I'd try to unbundle things and see what the individual performance of each component is.
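
    As a very rough illustration of what I mean by a baseline, something like the sketch below (plain sequential write and read against a local file; the path and sizes are placeholders) gives you a first number for the raw volume before iSCSI is even involved. A dedicated tool such as IOmeter or dd is of course more thorough:

        import os, time

        PATH = "/mnt/test/baseline.bin"   # placeholder: a file on the volume under test
        SIZE = 2 * 1024**3                # 2 GiB, large enough to get past controller caches
        BLOCK = 1024 * 1024               # 1 MiB sequential blocks

        buf = os.urandom(BLOCK)
        start = time.time()
        with open(PATH, "wb") as f:
            for _ in range(SIZE // BLOCK):
                f.write(buf)
            f.flush()
            os.fsync(f.fileno())          # make sure the data really reached the disks
        print(f"write: {SIZE / (time.time() - start) / 2**20:.0f} MiB/s")

        start = time.time()
        with open(PATH, "rb") as f:       # note: this read can be served from the page
            while f.read(BLOCK):          # cache and look faster than the disks really are
                pass
        print(f"read:  {SIZE / (time.time() - start) / 2**20:.0f} MiB/s")

    Run the same kind of test on the storage box and inside a VM on local storage, and you immediately see which layer is losing the throughput.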

  6. #6

    Default

    After many hours of evaluation (ruling out everything else), I changed the virtualized testing OS from Windows XP to Windows Server 2003.

    With Server 2003, IOmeter measured ~90 MiB/s for a single GbE link and ~195 MiB/s with MPIO, which is not great, but more than sufficient.

    However, I have no clue why XP differs from Server 2003 where iSCSI is concerned.

  7. #7

    Default

    Another thing to make sure of is that your RAID controller and drives are compatible. If the drives are not compatible with the RAID controller, you can run into issues: a large number of I/O operations can end up waiting on the driver and on responses from the disks. Turning off features like caching and tagged command queuing can drop performance significantly, and the RAID card driver may do exactly that if it sees requests failing and piling up.
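
    If you can get at the Linux layer underneath DSS (treat this as a sketch rather than a supported procedure, and note the device name is a placeholder), a few block-layer attributes give a quick hint as to whether queuing or the scheduler is getting in the way:

        from pathlib import Path

        DEV = "sda"  # placeholder: the block device the Areca/3ware volume appears as

        # Block-layer settings that commonly affect sequential iSCSI throughput.
        attrs = [
            f"/sys/block/{DEV}/device/queue_depth",    # tagged command queuing depth
            f"/sys/block/{DEV}/queue/scheduler",       # active I/O scheduler
            f"/sys/block/{DEV}/queue/nr_requests",     # size of the request queue
            f"/sys/block/{DEV}/queue/max_sectors_kb",  # largest single I/O the kernel issues
        ]
        for attr in attrs:
            p = Path(attr)
            print(attr, "->", p.read_text().strip() if p.exists() else "n/a")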

  8. #8

    Default Slow disk read/writes

    So I have a similar issue.

    I have 4 bonded links in 802.3ad on my DSS V6 system, with RAID 6 configured on the 8 drives via the 3ware console.

    I have tested across these bonded links to my 2 XenServer hosts, and from within a VM I'm getting 200 Mbit/s write and 650 Mbit/s read, so not even a full 1 Gbit link is being used. To make sure there is no problem with the bonds or the switch, I have another interface on the DSS V6 linked directly to one of the XenServer hosts, and I get the same speeds. I've even tried MPIO across the bond and the single Ethernet link, and the speed remains the same.

    Surely I should expect at least 1 Gbit/s write and read as a minimum.
    The disks are datacenter models (I think they call them ES, but I can't remember): 1 TB 7200 rpm SATA drives rated at 3 Gbps.
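
    To put those numbers alongside the MiB/s figures used earlier in the thread, the conversion is just arithmetic:

        # Convert the Mbit/s figures above into MiB/s for comparison.
        for label, mbit in [("write", 200), ("read", 650), ("single GbE line rate", 1000)]:
            mib_per_s = mbit * 1_000_000 / 8 / 2**20
            print(f"{label:>21}: {mbit:>4} Mbit/s ~= {mib_per_s:5.1f} MiB/s")

    So the writes are sitting at roughly 24 MiB/s against a line rate of about 119 MiB/s per link.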
