
Thread: vSphere 4 - Support on the Way?

  1. #1

    Todd -
    What I will try is to switch over to IETd tonight (actually in the next couple hours). I will force a failover to SAN2, enable IETd on SAN1, failback and then enable IETd on SAN2.

    Hopefully this should be a seamless transition. Once moved over, do you want me to let it run? Or try those settings you suggested yesterday on the target side?

    As far as breaking the bond goes, that will be the tricky part, and I would like to save that for last.

    Also, I have been doing some basic disk testing with CrystalDiskMark (Windows) and hdparm (Linux), and I have noticed that the volume group on the external PERC controller (RAID 6, 8x450GB 15k SAS) gets me an average of ~110MB/s sequential read and ~65MB/s sequential write in Windows, and a buffered disk read of ~180MB/s in Red Hat Enterprise Linux 5 x64 (hdparm -tT /dev/sda).
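
    For reference, the Linux read number comes from repeated hdparm runs like the ones below; the dd line is just a rough write-side equivalent I could add (a sketch only - /dev/sda is the device on my box and /mnt/testvol is a placeholder scratch mount):

        # Buffered vs. cached sequential read, run a few times and averaged
        for i in 1 2 3; do hdparm -tT /dev/sda; done

        # Rough sequential write check; oflag=direct bypasses the page cache
        dd if=/dev/zero of=/mnt/testvol/ddtest bs=1M count=4096 oflag=direct
        rm -f /mnt/testvol/ddtest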

    Now, the internal PERC controller (RAID 10, 6x450GB 15k SAS) gets me a sluggish sequential read of ~50MB/s and a sequential write of ~30MB/s, dropping as low as 5MB/s. If I do a Storage vMotion to the external PERC, it gets a little better at ~95MB/s sequential read and ~33MB/s sequential write. I'm confused... all the controllers have the same setup except the PERC 5/e in SAN2 (refer to my diagram for reference).

    Any thoughts? I thought I had some consistent results, but a couple of the servers that were dropping are on the external array, and it doesn't matter where I move them to. If I've confused you, let me know and I can try to explain it better.

    Thanks again for all your help. I will update the IETd switchover later tonight...

    Jason

  2. #2

    Sorry for not getting back to you fast enough - very busy times.

    Just let it run after the switchover - let the bond wait so we can troubleshoot one thing at a time.

    Concerning the performance, I was ready to blame the disk drives, but the end of your message squelched that idea - unless firmware and/or 32-bit driver mode on the system made the difference, though I don't think that is the case. We would have to see the logs to find anything that points us in the right direction. I would expect the RAID 10 to do very well.
    All the best,

    Todd Maxwell


    Follow the red "E"
    Facebook | Twitter | YouTube

  3. #3

    Todd -
    Yes, that's what I thought. I have updated the firmware on both systems; all RAID controllers, system firmware, and BIOS are up to date. The only difference between the controllers is that the PERC 6/e (RAID 6 array) has 512MB of cache and all the others have 256MB.

    Now, I failed over to SAN2, changed the iSCSI target from SCST to IETd on SAN1, and failed back. That all went OK... well... almost.

    All the ESX servers lost their connections to the datastores on the iSCSI volumes. After doing a rescan, the disks show back up but do not have a datastore assigned to them. I can go to Add Datastore and see the LUNs, but that would require them to be reformatted.
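
    I'm guessing (haven't confirmed) that IETd presents different SCSI identifiers than SCST did, so ESX treats the existing VMFS volumes as snapshot LUNs and won't auto-mount them. If that's the case, something like this from the ESX 4 service console should list and remount them without a reformat (just a sketch - I haven't tried it yet):

        # List VMFS volumes that ESX has flagged as snapshots/replicas
        esxcfg-volume -l

        # Persistently mount one by its VMFS UUID, keeping the existing signature
        esxcfg-volume -M <vmfs-uuid>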

    I changed back to SCST to get everything back up. If you see this in time, let me know if you want me to give anything else a shot. I can tolerate temporary interruptions for the next few hours. I can be reached by phone if necessary (PM me for my info) and can give you access to shadow me, etc.

    Let me know.

    Thanks,
    Jason

  4. #4

    I sent you a message - give me your number so I can call you.
    All the best,

    Todd Maxwell


    Follow the red "E"
    Facebook | Twitter | YouTube

  5. #5

    Done.

    Thanks!

  6. #6

    (Accidentally posted this in the wrong thread...)


    Todd -
    Thanks again for taking the time last night to help me out. I have done what you suggested: killed my bond, changed the IP of the iSCSI interface to a different subnet, and then set up the virtual IP again.

    Now, what I noticed was that the speed tests were a little more consistent with each other after about 20 minutes (there was a lot of traffic initially because the VMs were down for about an hour). Of course, if I ran more than one test at a time, the single gigabit iSCSI interface on the SAN couldn't keep up. I was seeing an average of ~105MB/s sequential read and ~55MB/s sequential write on both the RAID 6 external LUN and the RAID 10 internal LUN.

    I then created a balance-rr bond with all four of the Intel gigabit iSCSI interfaces. The tests were a little better than with 802.3ad, and not as "bursty." Once again, the speeds vary from LUN to LUN and depending on when they are run. I have seen the low 90s on reads and even the 70s on writes. I think we're getting closer. I will let it run more during the day before sending some logs. I have uploaded the logs from both servers from when they were running on a single, unbonded gigabit interface.
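
    For reference, on a plain Linux box the two bonding modes I've been comparing would look roughly like this (Open-E DSS sets all of this through its GUI, so this is only to illustrate the difference; the eth names are placeholders):

        # /etc/modprobe.conf (RHEL-style): balance-rr stripes packets across every slave
        alias bond0 bonding
        options bond0 mode=balance-rr miimon=100

        # 802.3ad would instead negotiate LACP with the switch and hash each flow onto one link:
        # options bond0 mode=802.3ad miimon=100 lacp_rate=fast

        # Enslave the four iSCSI NICs
        ifenslave bond0 eth2 eth3 eth4 eth5

    The practical difference is that balance-rr can spread a single iSCSI session across all the links, while 802.3ad keeps any one flow on a single link, which would line up with what I'm seeing.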

    I'll keep you posted and possibly get some performance numbers once I can find some consistency.

    http://upload.vmind.com/web-pub/cera...6-18_00-43.zip

    Thanks again!

    Jason

  7. #7

    Jason - thanks for the update; it seems we are getting closer. The bonds will start kicking in when there are many systems or requests coming in - then they will do better. You can also try the ALB bond; I hear that is doing well. Looks like we will work on the 802.3ad now.
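
    (For reference, ALB corresponds to mode=balance-alb in standard Linux bonding terms; unlike 802.3ad it needs no LACP configuration on the switch side - roughly:

        # Standard-Linux equivalent of an ALB bond
        options bond0 mode=balance-alb miimon=100

    so it is easy to test without touching the switch.)
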
    All the best,

    Todd Maxwell


    Follow the red "E"
    Facebook | Twitter | YouTube
