
Thread: Experience with DSS6 and vSphere 4

  1. #1

    Experience with DSS6 and vSphere 4

    Hello,

    I would like to ask for some feedback.

    What is your experience with DSS6 and vSphere 4 with Autofailover? Can I use it in production, or is someone already running it in production?

    Please send feedback.

    Thanks

    fraext

  2. #2

    I've used both DSS v5 and v6 (now in SCST mode) with Autofailover and vSphere 4.0. I'm using Jumbo frames and File I/O mode in DSS. There are some tweaks posted in this thread that fellow forum members have recommended; your mileage may vary.

    My recommendation is to pay close attention to your physical I/O characteristics. One thing that surprised me was that write speeds are dramatically lower with autofailover than without: with synchronous replication your write cache should be OFF, and that limits write I/O to the speed at which the disks themselves can handle it.
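
    To illustrate why the writes become disk-bound, here is a rough sketch in plain Python (an illustration only, not Open-E's actual code; the timing constants and function names are made up): with write-through and synchronous replication, the initiator's write is only acknowledged after both the primary and the secondary have the data on stable storage.

    Code:
        import time

        DISK_FLUSH_SECONDS = 0.008        # assumed flush time per write on spinning disks
        REPLICATION_RTT_SECONDS = 0.0005  # assumed round trip on the replication link

        def flush_to_disk(node, block):
            # Write-through: data must reach stable storage before we return.
            time.sleep(DISK_FLUSH_SECONDS)

        def replicate_to_secondary(block):
            # Synchronous replication: ship the block and wait for the peer to flush it.
            time.sleep(REPLICATION_RTT_SECONDS)
            flush_to_disk("secondary", block)

        def handle_initiator_write(block):
            # The VM's write is not acknowledged until BOTH nodes have it on disk,
            # so per-write latency is roughly local flush + RTT + remote flush.
            start = time.time()
            flush_to_disk("primary", block)
            replicate_to_secondary(block)
            return time.time() - start

        latency = handle_initiator_write(b"x" * 4096)
        print("acknowledged after ~%.1f ms" % (latency * 1000))

    Extra network bandwidth doesn't change that picture much; the disks' flush time dominates, which is why the write cache setting matters so much here.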

    Also, make sure you have a maintenance window during which you can shut down ALL of your VMs if you ever need to do software updates on the DSS or add volumes. You can't do that without stopping the autofailover service, which stops iSCSI, which means your VMs no longer see their storage.

    Finally, remember that DSS is just software. It's well-designed software, but you have to have reliable hardware to go along with it if you're going to rely on it for production use. There's a reason why EMC and HP and NetApp cost a lot - it's the software and the hardware. You can get into a SAN for a lot less with Open-E, but if you put low-end hardware in your SAN, you'll get an unstable system.

    Just my few thoughts...

  3. #3

    I'm looking at putting something similar in place. With iSCSI HA failover - I take it from what you're saying that when the primary server writes something to disk, it doesn't acknowledge the write until the secondary does too? Does the secondary use any kind of caching on its writes so that it at least has some sort of buffer before slowing down writes on the primary? Just something I'm trying to get a grasp on. What type of connection do you have between the primary and secondary servers? 1GbE, 10GbE?

    Thanks!

    - D2G

  4. #4

    Quote Originally Posted by d2globalinc
    I'm looking at putting something similar in place. With iSCSI HA failover - I take it from what you're saying that when the primary server writes something to disk, it doesn't acknowledge the write until the secondary does too? Does the secondary use any kind of caching on its writes so that it at least has some sort of buffer before slowing down writes on the primary? Just something I'm trying to get a grasp on. What type of connection do you have between the primary and secondary servers? 1GbE, 10GbE?

    Thanks!

    - D2G
    Right, that's my understanding. I haven't done performance testing with write-back on vs. write-back off, because I've only used it in an HA failover environment. The recommendation is to have write-through on (i.e., write-back off) so that in the event of a hardware failure on the primary OR the secondary, the data will be consistent. I suppose you could set the secondary to use write-back for its LUNs, but then you'd have the case where you fail over from your primary to your secondary, and now your writes don't have the 100% integrity guaranteed by write-through. That may be an edge case.
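
    To make the integrity trade-off concrete, here is a small illustrative sketch (hypothetical Python, not how DSS implements its cache, and assuming the cache is plain volatile RAM with no battery backup): with write-back the array acknowledges as soon as the data is in cache, so a crash before the cache is destaged loses those acknowledged writes; with write-through nothing is acknowledged until it is on disk.

    Code:
        class CacheSim:
            # Toy model of an array's write cache; names are made up for illustration.
            def __init__(self, write_back):
                self.write_back = write_back
                self.dirty = []    # acknowledged, but only in volatile RAM
                self.on_disk = []  # data that would survive a power loss

            def write(self, block):
                if self.write_back:
                    self.dirty.append(block)    # acknowledge now, flush "later"
                else:
                    self.on_disk.append(block)  # write-through: flush before acknowledging
                return "ACK"

            def power_failure(self):
                self.dirty.clear()              # volatile cache contents are gone
                return self.on_disk             # only flushed data remains

        wb, wt = CacheSim(write_back=True), CacheSim(write_back=False)
        for blk in ("a", "b", "c"):
            wb.write(blk)
            wt.write(blk)
        print(wb.power_failure())  # [] -> acknowledged writes were lost
        print(wt.power_failure())  # ['a', 'b', 'c'] -> everything acknowledged survived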

    I have two servers (Silicon Mechanics boxes) configured with six network interfaces: two bonded for iSCSI, two bonded for replication, and two bonded for management, all 1 Gb Ethernet. In my case, I haven't felt like the replication was holding back write performance. Writes per second are still going to be limited by how many disks you have and what their performance capabilities are. My workload is VMware virtual machines, which skews more toward random I/O than sequential, so having the write cache enabled or disabled probably doesn't make much difference performance-wise. If I were using DSS for a heavily database-oriented sequential write workload, I would probably notice it more.

    In my environment it bothers me more that I have a box of disks sitting idle instead of contributing their spindles to improve read performance. I'm considering breaking the replication/HA pair apart, using each DSS individually to split the workload between them, and then looking at some software for the data replication. In other words, put 20 VM guests on DSS-A and 20 on DSS-B, and cross-replicate them so that if either DSS goes down, the replicas are present on the other box. That would also give me two independent DSS storage groups, so I could use Storage vMotion to migrate all of my VMs to one DSS, update the software on the other one, move them the other direction, update, and so on. I'm still leery of doing a software update on an HA failover pair while the VMs are powered up and in production.

    Hope that helps,
    James

  5. #5

    Quote Originally Posted by jisaac
    In other words, put 20 VM guests on DSS-A and 20 on DSS-B, and cross-replicate them so that if either DSS goes down, the replicas are present on the other box.
    I was thinking of doing this same thing. I also don't like having an identical box sitting idle just for failover when it could do more good pushing VMs as well.

    I've got Open-E DSS v6 working with iSCSI now and have set up MPIO on a test host running ESXi 4. The problem I'm having is that read performance seems to be off compared to write performance. Writes seem to use all three NICs I set up for MPIO, but reads look like they're only using the bandwidth of about one. I'm not sure what the problem is, but I'm going to post a new thread to see if anyone has a solution.

    Thanks for your input!

    - Shane
