
Thread: Upgrading from 4622 to 5087.

  1. #1
    Join Date
    Sep 2010
    Location
    Albury, New South Wales, Australia, Earth
    Posts
    40

    Upgrading from 4622 to 5087.

    Hi all,

    I'm currently running
    6.0up45.8101.4622 64bit
    and wish to upgrade to
    6.0up55_b5087
    2 x SANs serving VMware ESXi 4.1 hosts.

    Do I have to incrementally upgrade to this build (i.e. upgrade to 4786, then to 5087), or can I just do the 5087 upgrade?

    I'm in an iSCSI failover setup, so I assume I can do the secondary node first, (if all goes OK) manually fail over to it, and then do the primary?

    Any other advice from the powers that be?

    Cheers
    Adam

  2. #2
    Join Date
    Oct 2010
    Location
    GA
    Posts
    935


    ftp://software:enterforupdate@ftp.op...b5087.oe_i.iso

    ftp://software:enterforupdate@ftp.op...eadme_5087.txt
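
    A minimal Python sketch for pulling the update image down and sanity-checking the transfer size; the host and file paths below are placeholders (the full URLs are truncated above), and the credentials are the ones embedded in those links:

    Code:
    from ftplib import FTP
    import os

    # Placeholders -- take the real host and file paths from the links above.
    HOST = "ftp.example.com"
    USER, PASSWD = "software", "enterforupdate"
    REMOTE_ISO = "/updates/dss_b5087.oe_i.iso"   # placeholder path/name
    LOCAL_ISO = "dss_b5087.oe_i.iso"

    ftp = FTP(HOST)
    ftp.login(USER, PASSWD)
    ftp.voidcmd("TYPE I")                        # binary mode so SIZE is reliable
    remote_size = ftp.size(REMOTE_ISO)
    with open(LOCAL_ISO, "wb") as f:
        ftp.retrbinary("RETR " + REMOTE_ISO, f.write)
    ftp.quit()

    # Basic completeness check (some servers do not answer SIZE).
    if remote_size is not None:
        assert os.path.getsize(LOCAL_ISO) == remote_size, "download looks incomplete"
    print(f"downloaded {LOCAL_ISO}: {os.path.getsize(LOCAL_ISO)} bytes")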

    In case of an upgrade from a version <= 4622, please use the following instructions:
    - Update the Secondary system first using the software update functionality and reboot (a quick portal check like the sketch after these steps can confirm each node is reachable again before you continue).
    * In case of using VMware ESX(i) or MS Hyper-V as the initiator system, you need to change the Device Identification (VPD) compatibility to SCST 1.0 on the Secondary node. This is located in the Console tools (CTRL+ALT+W -> Tuning options -> SCST subsystem options -> Device Identification (VPD) compatibility -> SCST VPD).
    - Then, once the Secondary is running, click on the Start button in the Failover Manager.
    - Now update the Primary system using the software update functionality and reboot.
    * In case of using VMware ESX(i) or MS Hyper-V as the initiator system, change the Device Identification (VPD) compatibility to SCST 1.0 on the Primary node. This is located in the Console tools (CTRL+ALT+W -> Tuning options -> SCST subsystem options -> Device Identification (VPD) compatibility -> SCST VPD).
    - Once the Primary is running, go to the Secondary and click on the Sync volumes button in the Failover Manager.
    - Then click on the Failback button on the Secondary system.
    - The Primary system will now go back to active mode and be ready for another failover.
    NOTE:
    - In case you are experiencing a problem with the availability of iSCSI or FC volumes after the upgrade from a version <= 4622, please change the Device Identification (VPD) compatibility to SCST 1.0. This is located in the Console tools (CTRL+ALT+W -> Tuning options -> SCST subsystem options -> Device Identification (VPD) compatibility -> SCST VPD).
    - In order to run the system in rescue mode, add the proper parameter to the kernel command line. This can be done while the system is booting by following the steps below:
    1. Reboot the system.
    2. While the system is booting, press the "Tab" key on the "Select version" screen.
    3. Type "rescue_mode" and press the "Enter" key (to run the system without splash),
    or
    4. Type "rescue_mode=no_mount_lv" and press the "Enter" key (to run the system without mounting logical volumes).
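
    A minimal Python sketch of the portal check mentioned in the first step above. The node addresses are hypothetical placeholders; it only confirms that each node answers on the standard iSCSI port (TCP 3260), nothing more.

    Code:
    import socket

    # Hypothetical addresses -- replace with your own nodes' iSCSI portal IPs.
    NODES = {"primary": "192.168.0.220", "secondary": "192.168.0.221"}
    ISCSI_PORT = 3260

    def portal_up(host, port=ISCSI_PORT, timeout=3):
        """Return True if a TCP connection to the iSCSI portal succeeds."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    for name, ip in NODES.items():
        state = "reachable" if portal_up(ip) else "NOT reachable"
        print(f"{name} ({ip}): iSCSI portal {state}")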

  3. #3
    Join Date
    Oct 2010
    Location
    GA
    Posts
    935


    http://www.open-e.com/library/webcasts-and-videos/

    2010.10 How to update DSS V6 with Auto Failover - EN

  4. #4
    Join Date
    Sep 2010
    Location
    Albury, New South Wales, Australia, Earth
    Posts
    40


    Excellent; I was just reading the readme when I posted (should have RTFM first).

    Thanks for the response; I'm just waiting on my manual backup to complete and then I will upgrade.

    Cheers
    Adam

  5. #5
    Join Date
    Sep 2010
    Location
    Albury, New South Wales, Australia, Earth
    Posts
    40


    One more question:

    Is it OK to run mixed versions in failover for a while (a couple of days)?
    Upgrade the secondary, manually fail over to it (syncing the disks straight away so they stay in sync), let the system run for a couple of days, and if anything goes wrong/south, fail back to the primary (which we know is a good working system).

    Is this OK to do?

    Cheers
    Ad

  6. #6
    Join Date
    Oct 2010
    Location
    GA
    Posts
    935


    Not recommended.
    When something goes wrong, it's usually not as easy as you might think...

  7. #7
    Join Date
    Sep 2010
    Location
    Albury, New South Wales, Australia, Earth
    Posts
    40


    My concern here is that I'm going from SCST 1.0 to 2.0, right?

    Upgrading the secondary, rebooting it, and then starting failover services will put it back as a secondary node (and the disks should be in sync), which should all go OK. I then fail over (hold my breath and hope VMware plays nice with the new build). If all is OK, can't I run in this setup (hit the Sync volumes button, so DRBD keeps the primary node's data in sync) for a day or so, in case something shows up?
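
    For what it's worth, DRBD's own view of the mirror can be read from /proc/drbd on a host where you have shell access; a rough Python sketch of that check (written against the DRBD 8.x output format, and purely illustrative, since the DSS appliance console may not let you run scripts directly):

    Code:
    import re

    def drbd_states(path="/proc/drbd"):
        """Print connection state, roles and disk states for each DRBD resource (DRBD 8.x format)."""
        with open(path) as f:
            text = f.read()
        for m in re.finditer(r"^\s*(\d+): cs:(\S+) ro:(\S+) ds:(\S+)", text, re.M):
            minor, cs, ro, ds = m.groups()
            print(f"resource {minor}: connection={cs} roles={ro} disks={ds}")
        for m in re.finditer(r"sync'ed:\s*([\d.]+)%", text):
            print(f"resync in progress: {m.group(1)}% done")

    if __name__ == "__main__":
        drbd_states()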

    If I go ahead and upgrade the primary, I have no fallback if something does go wrong after a "little" time. I'm then stuck with a dead system, hosting down, and having to pay $300 a pop per machine to get support on correcting it. Whereas if I leave the primary on a known working version (SCST 1.0) and just fail back, at least I don't have a downed environment and I can then work the issues through with you guys... calmly.

    Yes, I'm working on a worst-case scenario, but isn't that what we plan for...

    Cheers
    Ad

  8. #8
    Join Date
    Oct 2010
    Location
    GA
    Posts
    935


    Using the instructions above, you will have minimal downtime and minimal data to sync. Your way, you may need to sync the entire VG.
    SCST 2 has been available and in use since build 4786, including with ESX 4.x, so it plays just fine.

  9. #9
    Join Date
    Sep 2010
    Location
    Albury, New South Wales, Australia, Earth
    Posts
    40


    Just a quick update: the upgrade worked flawlessly, and both the primary and secondary nodes have been upgraded.

    Everything seems to be nice and stable; I will run some IOmeter checks during the next maintenance window.
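
    Nothing close to IOmeter, but as a quick sanity check between maintenance windows, a rough Python sketch that times a sequential write and read against a datastore-backed path (the path and sizes are placeholders):

    Code:
    import os, time

    def quick_io_check(path="/vmfs/volumes/datastore1/io_check.bin", size_mb=256, block_kb=1024):
        """Rough sequential write/read timing; placeholder path, not a substitute for IOmeter."""
        block = os.urandom(block_kb * 1024)
        blocks = size_mb * 1024 // block_kb

        start = time.time()
        with open(path, "wb") as f:
            for _ in range(blocks):
                f.write(block)
            f.flush()
            os.fsync(f.fileno())
        write_s = time.time() - start

        start = time.time()
        with open(path, "rb") as f:
            while f.read(block_kb * 1024):
                pass
        read_s = time.time() - start  # likely served from cache, so treat it as optimistic

        os.remove(path)
        print(f"write: {size_mb / write_s:.1f} MB/s, read: {size_mb / read_s:.1f} MB/s")

    if __name__ == "__main__":
        quick_io_check()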

    Thanks for your help, Gr-R.

    Cheers
    Ad

  10. #10
    Join Date
    Oct 2010
    Location
    GA
    Posts
    935


    Woot!

    Let us know how your tests turn out.
