
Thread: Testing DSS Version 6.0up30.8101.4362 64bit iSCSI Performance

  1. #1

    Testing DSS Version 6.0up30.8101.4362 64bit iSCSI Performance

    Hi,

    I've read a lot of posts about testing the performance of DSS and iSCSI using IOMeter, and would be grateful if somebody could share the IOMeter settings used when carrying out such tests, e.g. Access Specifications, 4KB, 32KB, 50% Read and so on (a rough sketch of typical parameter combinations is included below for reference).

    I believe my DSS SANs are underperforming and I would like to determine whether this is actually the case.

    Many thanks,

    Nik
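
    For reference, IOMeter access specifications come down to a block size, a read/write mix, and a random/sequential mix. A rough sketch of the kind of combinations commonly run against iSCSI targets, with example values only (not the official .icf settings linked later in this thread):

    ```python
    # Illustrative IOMeter-style access specifications (example values only,
    # not the official .icf profiles referenced later in this thread).
    access_specs = [
        # name,                       block KB, % read, % random
        ("Max throughput, seq read",        32,    100,        0),
        ("Max throughput, 50% read",        32,     50,        0),
        ("Small-block random read",          4,    100,      100),
        ("Mixed 'real life' load",           8,     65,       60),
    ]

    for name, block_kb, read_pct, random_pct in access_specs:
        print("%-26s %2d KB, %3d%% read, %3d%% random"
              % (name, block_kb, read_pct, random_pct))
    ```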

  2. #2
    Join Date: Nov 2009 | Posts: 53

    hey oxleyn,

    Try this link; some unofficial performance tests were run there with different systems and IOmeter, including the settings.
    The performance tests aren't specific to the DSS software, but a few posts further down the first page you can download the .icf (IOMeter) file.
    Load it with IOmeter and all the settings are done! You just have to pick the specific access specification and run the test. Afterwards you can compare your results with others.

    Hope that helps.

    LINK:
    http://communities.vmware.com/thread...0&tstart=0

  3. #3

    Thanks a lot for the URL, r3vo.

    I am getting around 95MB/s using the 100% read test, which seems low to me considering I have 4 NICs in a bond. I'm running the test from within a VM, and the ESX host also has 4 NICs, configured with 4 separate VMkernels and a round robin storage access policy. I have also made the tweaks to both the DSS iSCSI target and the ESX host. (A back-of-the-envelope check on that number follows at the end of this post.)

    How does this performance sound to you guys?

    Thanks again,

    Nik
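
    A quick sanity check on that figure: a single gigabit link tops out at 125 MB/s raw, and roughly 110-115 MB/s once Ethernet/TCP/iSCSI overhead is taken off (the ~10% overhead used below is an assumption and varies with MTU and workload), so ~95 MB/s looks like one link's worth of traffic rather than four:

    ```python
    # Back-of-the-envelope ceiling for iSCSI over a single 1 GbE link
    # (the ~10% protocol overhead is an assumption, not a measured value).
    raw_mb_s = 1000 / 8.0          # 1 Gbit/s = 125 MB/s raw
    usable_mb_s = raw_mb_s * 0.9   # ~112 MB/s after framing/TCP/iSCSI overhead

    print("raw: %.0f MB/s, usable estimate: %.0f MB/s" % (raw_mb_s, usable_mb_s))
    # A measured 95 MB/s sits near a single link's ceiling, which matches the
    # one-session-per-NIC behaviour of bonding discussed in the replies below.
    ```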

  4. #4

    An update: I have deleted the bond of 4 NICs on my DSS and now have just 1 NIC dedicated to iSCSI traffic (plus 1 for management). These NICs are on completely separate switches and subnets too.

    Now when I run the 100% read test I'm getting more like 108MB/s throughput!

    Thanks,

    Nik

  5. #5

    I think it's better to use MPIO for iSCSI than bonding; it's supported nicely by the DSS and you don't need any special settings for NICs and switches.

    Also, don't forget that with bonding a TCP/IP session is limited to 1 NIC. Your test probably only used one session, so you could have 25 NICs in a bond and it would still only use one.
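
    To illustrate the single-session point: an 802.3ad-style bond picks the outgoing NIC by hashing fields of the flow (MAC or IP addresses, depending on the transmit hash policy), so one iSCSI session always lands on the same link. A minimal sketch of the idea, assuming a simple XOR-of-IPs policy (real bonding drivers implement their own variants):

    ```python
    # Minimal sketch of a layer-3 transmit-hash policy for a bonded interface
    # (assumed XOR-of-IPs hash; real 802.3ad drivers use their own policies).
    def pick_nic(src_ip, dst_ip, nic_count=4):
        def ip_to_int(ip):
            a, b, c, d = (int(octet) for octet in ip.split("."))
            return (a << 24) | (b << 16) | (c << 8) | d
        return (ip_to_int(src_ip) ^ ip_to_int(dst_ip)) % nic_count

    # A single initiator/target address pair always hashes to the same NIC,
    # no matter how many links are in the bond.
    print(pick_nic("10.0.0.10", "10.0.0.20"))                # same index every call
    print(pick_nic("10.0.0.10", "10.0.0.20", nic_count=25))  # still just one link used
    ```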

  6. #6

    Thanks for the reply, gjtje.

    I have configured MPIO on the ESX side but how do I go about getting the best use out of the multiple NICs I have in my DSS boxes?

  7. #7

    You'd need at least 2 NICs, each in a different subnet, on both the DSS and the client for the iSCSI bit.

    Don't combine "regular" network traffic with iSCSI on the same interface; if your box has 4 NICs, that would be 2 for iSCSI and 2 for the LAN. Of course, separate VLANs or switches for iSCSI would be better, but that's not cheap.
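
    As a concrete example of that layout (all addresses below are made up for illustration): each iSCSI path gets its own subnet, shared only by a DSS NIC and the matching ESX VMkernel port, while management/LAN traffic stays on a separate network.

    ```python
    # Example two-path iSCSI addressing plan (all addresses are illustrative).
    iscsi_paths = {
        "path1": {"dss_nic": "192.168.10.10/24", "esx_vmkernel": "192.168.10.20/24"},
        "path2": {"dss_nic": "192.168.20.10/24", "esx_vmkernel": "192.168.20.20/24"},
    }
    lan = {"dss_mgmt": "10.0.0.10/24", "esx_mgmt": "10.0.0.20/24"}

    for name, path in sorted(iscsi_paths.items()):
        print("%s: DSS %s <-> ESX %s" % (name, path["dss_nic"], path["esx_vmkernel"]))
    ```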

  8. #8

    Hi,

    OK, I'll give you a bit more information on my set-up and hopefully it will enlighten you all!

    I have 3 x ESX hosts, each with 6 NICs. 2 NICs dedicated to LAN traffic and 4 for iSCSI. My DSS boxes have the same configuration although only 1 NIC is currently connected to the LAN for management purposes.

    iSCSI traffic is kept completely separate from the LAN. At present the DSS boxes, well, one of them at least during testing (!), is connected to the iSCSI switch via 4 NICs in an 802.3ad bond, and 4 ports on the switch are aggregated together for this purpose.

    On the ESX hosts I have 4 VMkernels, each with a dedicated NIC, physically connected to the iSCSI switch and using MPIO with a Round Robin policy and IOPS=1.

    I do have another iSCSI switch but have not introduced this yet and I am happy to split the VMkernel traffic across the two (i.e. 2 NICs per iSCSI switch) if it's going to boost performance.

    I am currently getting around 200MB/s throughput using a 100% read test in IOmeter.

    I guess I'm wondering whether I can ever expect to better this, or whether this is as good as it's going to get performance-wise (a rough ceiling estimate follows at the end of this post).

    Thanks a lot for all your help again,

    Nik
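
    For a rough upper bound on that setup (using the same assumed ~10% per-link overhead as the earlier calculation): four gigabit paths driven evenly by MPIO top out somewhere around 450 MB/s on the wire, so 200 MB/s leaves network headroom and points at the 802.3ad bond on the DSS side and/or the disk back end as the limiting factor.

    ```python
    # Rough wire-level ceiling for 4 x 1 GbE iSCSI paths (the ~10% overhead
    # figure is an assumption; the DSS-side bond and the disks set the real limit).
    paths = 4
    per_link_mb_s = 1000 / 8.0 * 0.9      # ~112 MB/s usable per link (assumed)
    print("approx. wire ceiling: %.0f MB/s" % (paths * per_link_mb_s))
    ```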

  9. #9
    Join Date: Aug 2008 | Posts: 236

    I always encourage simple, stupid testing before trying to benchmark throughput through VMs.

    For example, take a good platform that you are comfortable with, put it on your storage network, and connect it to your targets using the same topology you'd expect your Hypervisor to run on. Then test. I use a series of sequential max-throughput tests, max-I/O tests, and then more specific test suites made to simulate database servers, web servers, or whatever.

    This establishes your baseline. You cannot reasonably expect to get more performance out of your VMs than you can out of your Hypervisor.
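
    A minimal sequential-read baseline in that spirit, run on a physical test box directly against the iSCSI-attached device, could look like the sketch below (the device path is an example; plain reads go through the OS cache, so read well past the box's RAM size for a stricter number):

    ```python
    # Minimal sequential-read throughput check against an iSCSI-attached disk.
    # The device path is an example; run read-only against a disposable target.
    import time

    DEVICE = "/dev/sdb"           # example: the iSCSI-attached block device
    BLOCK = 1024 * 1024           # 1 MiB per read
    TOTAL = 4 * 1024 ** 3         # read 4 GiB in total

    read_bytes = 0
    start = time.time()
    with open(DEVICE, "rb", buffering=0) as dev:
        while read_bytes < TOTAL:
            chunk = dev.read(BLOCK)
            if not chunk:
                break
            read_bytes += len(chunk)

    elapsed = time.time() - start
    print("%.1f MB/s sequential read" % (read_bytes / elapsed / 1e6))
    ```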
