Open-E provides one iSCSI target for ESX. Unfortunately I am having severe performance problems. Testing under a virtualized Windows XP with IOmeter, I would expect around 100 MiB/s read or write throughput, but whatever IOmeter settings I try, I can't get substantially more than 50 MiB/s.
Can anybody provide me with data from their own tests, or any pointers on how to optimize the speed?
I have already set up jumbo frames on ESX and Open-E, which led to a speed increase of about 5 MiB/s.
At the moment I am using a single connection. I plan to use MPIO in the future, but I would first like to get everything running at a reasonable speed over a single Gigabit Ethernet connection.
As the theoretical maximum for GbE is 125 MiB/s, I think I should achieve at least ~100 MiB/s, especially in my test setup using a crossover cable.
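To rule out the raw network path (and to see whether the jumbo-frame change actually helps at the TCP level), my plan is to first run a plain throughput test between the two boxes with no iSCSI involved. This is only a minimal Python sketch of that idea; the port, chunk size, and total volume are arbitrary placeholders, not anything from my actual setup:

    # Minimal raw TCP throughput check (no iSCSI involved), to see what the
    # GbE link itself can sustain. Run "recv" mode on one box first, then
    # point the sender at that box's IP from the other side.
    import socket, sys, time

    PORT = 5001           # arbitrary test port
    CHUNK = 1 << 20       # 1 MiB per send/recv call
    TOTAL = 2 << 30       # send 2 GiB in total

    def receiver():
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        received, start = 0, time.time()
        while True:
            data = conn.recv(CHUNK)
            if not data:
                break
            received += len(data)
        secs = time.time() - start
        print("received %.0f MiB at %.1f MiB/s" % (received / 2**20, received / 2**20 / secs))

    def sender(host):
        conn = socket.create_connection((host, PORT))
        payload = b"\0" * CHUNK
        sent, start = 0, time.time()
        while sent < TOTAL:
            conn.sendall(payload)
            sent += CHUNK
        secs = time.time() - start
        conn.close()
        print("sent %.0f MiB at %.1f MiB/s" % (sent / 2**20, sent / 2**20 / secs))

    if __name__ == "__main__":
        receiver() if sys.argv[1] == "recv" else sender(sys.argv[1])

My understanding is that a healthy GbE link should show something on the order of 110 MiB/s here; if even this raw test tops out near 50 MiB/s, the problem would sit below iSCSI (NIC, driver, or switch/cable) rather than in Open-E itself.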
Are there any common problems/pitfalls I should know about?
I am also unsure about the testing with IOmeter. Sometimes the transfer rate starts at about 12 MiB/s and, within a minute, climbs to approximately 50 MiB/s. Is this behaviour expected, or perhaps a sign of problems?
Sorry for the bump, but I have grown somewhat desperate.
In the meantime i have tried to optimize the configuration, e.g. according to this thread. I got minor speed improvements (about 8 MiB/s), but the overall result is still unsatisfactory.
Could somebody please tell me whether my expectations for Open-E are unrealistic or far-fetched? Shouldn't Open-E be able to deliver at least ~100 MiB/s out of the box, without further tweaking? I think my hardware (mentioned above) is quite powerful and shouldn't be an issue.
Could some people therefore post their out-of-the-box performance figures?
Many thanks in advance.
Have you performed any kind of baseline testing? It seems to me that you are getting ahead of yourself: you are going straight to testing the disk performance of VMs on an iSCSI device before establishing how well the disks perform on the Open-E host itself, or how well a VM performs using local storage.
What is the performance of a single disk in your array? What is the performance of all your disks when combined in a volume?
What is your baseline I/O performance on your ESX host? How do your VMs perform using local storage?
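For the single-disk and volume numbers, even something as crude as timing large sequential reads gives a usable baseline. Here is a rough Python sketch of that idea, assuming you can get a shell on the storage box or boot a live Linux CD on it; the device path is just a placeholder, and it only measures sequential reads:

    # Rough sequential-read baseline for a disk or RAID volume (no iSCSI, no VM).
    # Point it at the raw device (needs root) or at a file larger than the
    # machine's RAM, otherwise the page cache inflates the result.
    import sys, time

    CHUNK = 1 << 20          # read 1 MiB at a time
    LIMIT = 4 << 30          # stop after 4 GiB

    def sequential_read(path):
        total, start = 0, time.time()
        with open(path, "rb") as f:
            while total < LIMIT:
                data = f.read(CHUNK)
                if not data:
                    break
                total += len(data)
        secs = time.time() - start
        print("%.0f MiB read at %.1f MiB/s" % (total / 2**20, total / 2**20 / secs))

    if __name__ == "__main__":
        sequential_read(sys.argv[1])     # e.g. a single disk, then the RAID volume

Run it against a single disk first, then against the combined volume. If the volume itself can't sustain well above 100 MiB/s sequentially, no amount of iSCSI tuning on top of it will get you there.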
When you are performing this kind of integration, time and care must be taken to test each component individually before bringing everything together as the final solution.
It's a lot of work and effort, and it's not for the faint of heart. This forum gets a lot of performance-related questions, but performance is always relative: you can't expect to get something out of iSCSI that you can't get out of the disks natively.
That said, your question about delivering performance out of the box is an interesting one. I'd say yes: you should be able to get that kind of performance out of the box when all the components are working well together. So I'd try to unbundle things and see what the individual performance of each component is.