Friday, February 18, 2011

Patching Oracle 11.2.0.1 to 11.2.0.2 when the Clusterware and RDBMS owners are different.

Today we will patch our Grid Infrastructure (GI) and database (RDBMS) homes from 11.2.0.1 to 11.2.0.2. Be ready: this is a relatively long process.

Oracle Grid Infrastructure and Database Patch from 11.2.0.1 to 11.2.0.2

Important changes to Oracle patches.

Required Pre-Patch Steps to Upgrade Grid Infrastructure and ASM

Quick links to update information.
Unset ORA_CRS_HOME if it is set. You should never use ORA_CRS_HOME as an environment variable except under explicit direction from Oracle Support.

sudo su - grid; env | grep -i ORA_CRS_HOME; exit 
sudo su - oracle; env | grep -i ORA_CRS_HOME 
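If the variable does show up, clear it from the current session and remove it from wherever it is set. A minimal sketch, assuming it is exported from the user's ~/.bash_profile:

grep -n ORA_CRS_HOME ~/.bash_profile    # find where it is set (assumes ~/.bash_profile)
unset ORA_CRS_HOME                      # clear it in the current session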

To upgrade existing 11.2.0.1 Oracle Grid Infrastructure installations to Oracle Grid Infrastructure 11.2.0.2, you must first do at least one of the following:

a) Patch the Oracle Grid Infrastructure 11.2.0.1 home with the 9413827 and 9706490 patches.
b) Install Oracle Grid Infrastructure Patch Set 1 (GI PSU1) or Oracle Grid Infrastructure Patch Set 2 (GI PSU2).

We will install the latest OPatch, the GI PSU2 and the Database (RDBMS) PSU.
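If you are unsure whether option (a) is already satisfied, a quick and admittedly rough check is to look for those two patch numbers in the GI home's inventory as the grid user. Installing GI PSU2 as described below covers option (b) in any case.

sudo su - grid
opatch lsinventory | grep -E '9413827|9706490'
exit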

Install Latest OPatch

You must use the OPatch utility version 11.2.0.1.3 or later to apply all the other patches in this document.

So download the latest OPatch (patch 6880880) from My Oracle Support (MOS).

Check the SHA1 digest of patch 6880880. Note that I place all patches into a central NFS repository that all Oracle machines can access. This repository, /nfs/install/oracle, is the staging area; all Oracle machines have Autofs configured with this mount point.

openssl dgst -sha1 ~/Downloads/p6880880_112000_Linux-x86-64.zip
mkdir -p /nfs/install/oracle/opatch
echo "81B82424E2BC4C3B13436FC66509A71BACDB5AB8" > /nfs/install/oracle/opatch/p6880880_112000_Linux-x86-64.zip.sha1

If the SHA1 digest of the downloaded file matches the one published on MOS, place the patch in the staging area.

mv ~/Downloads/p6880880_112000_Linux-x86-64.zip /nfs/install/oracle/opatch
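To compare the computed digest against the value just stored, a one-off check along the lines of the loop used later for the patch set works fine:

cd /nfs/install/oracle/opatch
cat p6880880_112000_Linux-x86-64.zip.sha1
sha1sum p6880880_112000_Linux-x86-64.zip | cut -d' ' -f1 | tr '[:lower:]' '[:upper:]'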

Node 1 OPatch Installation

Connect to the first node and update OPatch in both the GI and RDBMS homes.

Node 1 GI OPatch

ssh node1.domain.com
sudo su - grid
opatch version
unzip /nfs/install/oracle/opatch/p6880880_112000_Linux-x86-64.zip -d $CRS_HOME
opatch version
exit

Node 1 RDBMS OPatch

sudo su - oracle
opatch version
unzip /nfs/install/oracle/opatch/p6880880_112000_Linux-x86-64.zip -d $ORACLE_HOME
opatch version
exit

Node N OPatch Installation

Do the same on all the other nodes in your cluster.

Node N GI OPatch

ssh nodeN.domain.com
sudo su - grid
opatch version
unzip /nfs/install/oracle/opatch/p6880880_112000_Linux-x86-64.zip -d $CRS_HOME
opatch version
exit

Node N RDBMS OPatch

sudo su - oracle
opatch version
unzip /nfs/install/oracle/opatch/p6880880_112000_Linux-x86-64.zip -d $ORACLE_HOME
opatch version
exit

Grid Infrastructure PSU Patch 9655006

MOS Doc ID for PSU Patch 9655006.

    Download patch 9655006 from the Patches & Updates section of MOS. Move the patch to the staging area and check its SHA1 digest.

    mkdir -p /nfs/install/oracle/patches/p9655006
    openssl dgst -sha1 ~/Downloads/p9655006_112010_Linux-x86-64.zip
    mv ~/Downloads/p9655006_112010_Linux-x86-64.zip /nfs/install/oracle/patches/p9655006
    echo "168929504917698896B0E58018A72606384FD51F" > /nfs/install/oracle/patches/p9655006/p9655006_112010_Linux-x86-64.zip.sha1

    Extract the patch.
     
    cd /nfs/install/oracle/patches/p9655006
    unzip p9655006_112010_Linux-x86-64.zip

    Connect to the first node.
     
    ssh node1.domain.com

    Switch to the GI owner grid and validate the inventory.
     
    sudo su - grid
    opatch lsinventory

    Determine whether any currently installed one-off patches conflict with the PSU patch.
    cd /nfs/install/oracle/patches/p9655006
    opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir ./9655006
    opatch prereq CheckConflictAgainstOHWithDetail -oh $CRS_HOME -phBaseDir ./9655006

    Make sure the GI stack is running on all nodes in the cluster.
    crsctl status resource -t
    exit
    
    
    If all looks good, then patch the GI home as root.
    sudo su - root
    export ORACLE_HOME="/u01/app/11.2.0/grid" && echo $ORACLE_HOME
    export PATH="$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:/usr/kerberos/sbin:/usr/kerberos/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin"
    echo $PATH
    opatch auto /nfs/install/oracle/patches/p9655006 -oh $ORACLE_HOME
    exit

    Once this is done, connect to all the other nodes and patch their GI homes.
    ssh nodeN.domain.com

    Switch to the root user and patch the GI home.
    sudo su - root
    export ORACLE_HOME="/u01/app/11.2.0/grid" && echo $ORACLE_HOME
    export PATH="$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:/usr/kerberos/sbin:/usr/kerberos/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin"
    echo $PATH
    opatch auto /nfs/install/oracle/patches/p9655006 -oh $ORACLE_HOME
    exit

    Once the GI home is patched on all nodes, clean up the patch directory.

    rm -rf /nfs/install/oracle/patches/p9655006/{9654983,9655006,bundle.xml,README.*}

    Required Tasks After Oracle Grid Infrastructure Upgrades

    Database PSU Patch 9654983

    MOS Doc ID on Patch 9654983
    Download patch 9654983 from MOS. Check the patch's SHA1 digest then move the patch to the staging area.

    openssl dgst -sha1 ~/Downloads/p9654983_112010_Linux-x86-64.zip
    mkdir -p /nfs/install/oracle/patches/p9654983
    echo "727812046AF4FC749B8A0088DA4BE37D137D47C5" > /nfs/install/oracle/patches/p9654983/p9654983_112010_Linux-x86-64.zip.sha1
    mv ~/Downloads/p9654983_112010_Linux-x86-64.zip /nfs/install/oracle/patches/p9654983

    Extract the patch.

    cd /nfs/install/oracle/patches/p9654983
    unzip p9654983_112010_Linux-x86-64.zip

    Connect to the first node.
    ssh node1.domain.com

    Determine whether any currently installed one-off patches conflict with the PSU patch.
    sudo su - oracle
    cd /nfs/install/oracle/patches/p9654983
    opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir ./9654983

    Shut down all instances running on this node.
    srvctl stop home -o $ORACLE_HOME -s ~/p9654983.status -n `hostname`
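    The -s flag writes a state file recording what was stopped; it is worth a quick look so you know what to bring back up later:
    cat ~/p9654983.status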

    Now patch the RDBMS home.

    WARNING: While running the opatch apply command, OPatch will try to update the other nodes as well. Make sure you restart the instances on the first node and stop them on the other nodes before you let OPatch continue to those nodes!

    cd /nfs/install/oracle/patches/p9654983/9654983
    opatch apply

    From another window, that looks like this:

    srvctl start instance -d racdev -i racdev1
    srvctl start instance -d ractest -i ractest1
    ssh node2.domain.com
    srvctl stop instance -d racdev -i racdev2
    srvctl stop instance -d ractest -i ractest2
    ssh nodeN.domain.com
    srvctl stop instance -d racdev -i racdevN
    srvctl stop instance -d ractest -i ractestN
    

    Once ALL nodes are updated, load the modified SQL files into ALL the databases running on the cluster. Perform these steps on only one node.

    sudo su - oracle
    cd $ORACLE_HOME/rdbms/admin
    export ORACLE_SID=racdev1
    rlwrap sqlplus '/ as sysdba'
    SQL> @catbundle.sql psu apply
    SQL> exit
    export ORACLE_SID=ractest1
    rlwrap sqlplus '/ as sysdba'
    SQL> @catbundle.sql psu apply
    SQL> exit
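    While connected to each database, a standard way (not specific to this post) to confirm the bundle was registered is to query the registry history:
    SQL> SELECT action_time, action, version, comments FROM dba_registry_history ORDER BY action_time;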

    GI and RDBMS 11.2.0.2 Patch 10098816

    First, download the needed files from Patchset 10098816 - 11.2.0.2.0 PATCH SET FOR ORACLE DATABASE SERVER. There are quite a few patch files. Here's a list of what they provide:

    List of files from Patchset 10098816
    1. Oracle Database file 1 of 2 (includes Oracle Database and Oracle RAC) - p10098816_112020_platform_1of7.zip
    2. Oracle Database file 2 of 2 (includes Oracle Database and Oracle RAC) - p10098816_112020_platform_2of7.zip
    3. Oracle Grid Infrastructure (includes Oracle ASM, Oracle Clusterware, and Oracle Restart) - p10098816_112020_platform_3of7.zip
    4. Oracle Database Client - p10098816_112020_platform_4of7.zip
    5. Oracle Gateways - p10098816_112020_platform_5of7.zip
    6. Oracle Examples - p10098816_112020_platform_6of7.zip
    7. Deinstall - p10098816_112020_platform_7of7.zip
    So we need the first three files and the last one (in case we have to remove the patch). Place the new files in the patch repository.
    mkdir /nfs/install/oracle/patches/p10098816
    mv ~/Downloads/p10098816_* /nfs/install/oracle/patches/p10098816

    Get the SHA1 digests from the web page and add them to the patch repository.

    cd /nfs/install/oracle/patches/p10098816
    echo "378EEA2C7B51CB90D810C19DDEF7B45571F94EEF" > p10098816_112020_Linux-x86-64_1of7.zip.sha1
    echo "A03BD03445D73663E6BF86E16D8CA55ACE7D3236" > p10098816_112020_Linux-x86-64_2of7.zip.sha1
    echo "960B0C1EA19CECAC14753BC367F8D4C65B3A5482" > p10098816_112020_Linux-x86-64_3of7.zip.sha1
    echo "5A076D0541EF2B814126EB340F85B0680365CE8F" > p10098816_112020_Linux-x86-64_7of7.zip.sha1
    

    Check the SHA1 results and compare them with the ones found on the website.

    for i in `ls -1 p10098816_11202* | grep -v sha1`; do echo $i; cat $i.sha1; sha1sum $i | cut -d' ' -f1 | tr '[:lower:]' '[:upper:]'; echo; done

    Grid Infrastructure Patch 10098816 Installation

    Extract the new files. The GI installer is stored in file 3 of 7.
    cd /nfs/install/oracle/patches/p10098816
    unzip p10098816_112020_Linux-x86-64_3of7.zip

    Connect to all nodes and give write permission on /u01/app to the oinstall group.
    ssh node1.domain.com
    sudo chmod g+w /u01/app
    ssh nodeN
    sudo chmod g+w /u01/app
    exit
    exit
    

    Connect to a node as the grid owner with X11 forwarding enabled then unset some environment variables.
    ssh -X grid@nodeN.domain.com
    unset ORA_CRS_HOME; 
    unset ORACLE_BASE; 
    unset ORACLE_HOME; 
    unset ORACLE_SID

    Run the installer while all other Oracle RAC instances are running.
    cd /nfs/install/oracle/patches/p10098816/grid
    ./runInstaller

    Once the upgrade is over, check the CRS daemons.
    crsctl check cluster -all
    crsctl query crs softwareversion node1
    crsctl query crs softwareversion nodeN
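    Once the upgrade has completed on every node, the cluster-wide active version should also report 11.2.0.2:
    crsctl query crs activeversion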
    

    Database Patch 10098816 Installation

    Database Upgrade Guide
    Extract the new files. The RDBMS installer is stored in files 1 and 2 of 7.
    cd /nfs/install/oracle/patches/p10098816
    unzip p10098816_112020_Linux-x86-64_1of7.zip
    unzip p10098816_112020_Linux-x86-64_2of7.zip
    

    Connect to a node as the oracle owner with X11 forwarding enabled, then unset some environment variables.
    ssh -X oracle@nodeN.domain.com
    unset ORA_CRS_HOME; 
    unset ORACLE_BASE; 
    unset ORACLE_HOME; 
    unset ORACLE_SID
    

    Run the installer while all other Oracle RAC instances are running.


    WARNING: The RDBMS OUI 11.2.0.2 installer will run DBUA, the Database Upgrade Assistant, which will shut down the selected database. Be aware of this.


    cd /nfs/install/oracle/patches/p10098816/database
    ./runInstaller
    

    DBUA Logs and Results

    DBUA creates its logs in the /u01/app/oracle/cfgtoollogs/dbua directory; you can watch its progress from these files. You can also follow the upgrade in the instance's alert log:
    tail -F /u01/app/oracle/diag/rdbms/$RDBMS_NAME/$ORACLE_SID/trace/alert_$ORACLE_SID.log

    To see the output from DBUA, check the /u01/app/oracle/cfgtoollogs/dbua/$RDBMS_NAME/upgrade1/UpgradeResults.html file.

    Post Database Patch 10098816 Procedure

    Post Database Patch Procedures

    Update GI User Environment

    Once DBUA has finished, you must update the grid user's environment to point to the new grid home created during the installation.

    WARNING: Symbolic links DO NOT work. I tried pointing the grid user's environment on all nodes at a symbolic link to the new home; the idea was to simply repoint that link when we install 11.2.0.3. Alas, that doesn't work: when you try to connect to the ASM instance via SQL*Plus, you get the "Connected to an idle instance" error...
     
    sudo su - grid
    rcsdiff ~/.bash_profile
    co -l ~/.bash_profile
    vi ~/.bash_profile
    

    Just change the old GRID_HOME from the old location /u01/app/11.2.0/grid to the new location /u01/app/11.2.0.2/grid.
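    If you prefer a non-interactive edit over vi, something along these lines should work (a sketch, assuming the old path appears literally in the profile):
    sed -i 's|/u01/app/11.2.0/grid|/u01/app/11.2.0.2/grid|g' ~/.bash_profile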
    ci -u ~/.bash_profile

    Test the new file.
    source ~/.bash_profile
    crsctl check cluster -all
    

    If all is good, send this new file over to the other nodes, then log out of the grid environment.
    scp ~/.bash_profile nodeN:~/
    exit
    

    Update RDBMS User Environment

    Now that our grid user's environment is updated, we must do the same with the oracle user.

    WARNING: Symbolic links DO NOT work here either; just as with the grid user, a symbolic link doesn't work.


    sudo su - oracle

    Then we change the environment variables.
    rcsdiff ~/.bash_profile
    co -l ~/.bash_profile
    vi ~/.bash_profile
    

    Again, the point is simply to change all paths from the old location (i.e. /u01/app/oracle/product/11.2.0) to the new one (i.e. /u01/app/oracle/product/11.2.0.2).
    ci -u ~/.bash_profile

    Test the new environment.
    source ~/.bash_profile
    srvctl status server -n node1,nodeN -a
    

    If it looks good, then ship the file to all the other nodes.
    scp ~/.bash_profile nodeN:~/
    exit
    

    Update /etc/oratab and Scripts

    Since the ORACLE_HOME path has changed, we need to double check the /etc/oratab file on all the nodes and all scripts.

    WARNING: Don't copy the same /etc/oratab to all cluster nodes! All nodes DO NOT HAVE THE SAME /etc/oratab file, because the ASM instance name is different on each node, so you can't just scp(1) one oratab to every node. DBUA normally updates /etc/oratab on all nodes itself and creates a backup file; mine was located in $ORACLE_HOME/srvm/admin/oratab.bak.`hostname`.

    Check the first node's /etc/oratab.
    ssh node1.domain.com
    cat /etc/oratab
    

    Do it again on the other nodes.
    ssh nodeN.domain.com
    cat /etc/oratab
    

    WARNING: Modify all your scripts! Make sure all your local scripts and database creation scripts now all use the NEW ORACLE_HOME. Otherwise you're in for a whole new set of errors...
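    A rough way to find stragglers is to grep for the old home. The script locations here are just examples; adjust them to wherever your scripts actually live:
    grep -rl '/u01/app/oracle/product/11.2.0/' ~/scripts ~/bin 2>/dev/null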

    Run DBUA Again

    If you have more than one database on this RAC system, you must upgrade the others via DBUA as well. So run it again.

    xhost +node1.domain.com
    ssh -X oracle@node1.domain.com
    dbua &
    

    As before, DBUA creates logs in the /u01/app/oracle/cfgtoollogs/dbua directory, and you can watch its progress from these files or with tail -F /u01/app/oracle/diag/rdbms/$RDBMS_NAME/$ORACLE_SID/trace/alert_$ORACLE_SID.log.

    To see the output from DBUA, check the /u01/app/oracle/cfgtoollogs/dbua/$RDBMS_NAME/upgrade1/UpgradeResults.html file.

    Once DBUA is finished, you will find the following message at your prompt where you started dbua: Database upgrade has been completed successfully, and the database is ready to use. 
    Check the DBUA PreUpgradeResults.html and UpgradeResults.html files for all of your databases running on this RAC setup. In this blog example, we have two databases: racdev and ractest. Be sure to follow any warnings or recommendations in those files. Notably, UpgradeResults.html will tell you which log files to check and where they are stored.
    sudo su - oracle
    lynx $ORACLE_BASE/cfgtoollogs/dbua/racdev/upgrade1/PreUpgradeResults.html
    lynx $ORACLE_BASE/cfgtoollogs/dbua/racdev/upgrade1/UpgradeResults.html
    lynx $ORACLE_BASE/cfgtoollogs/dbua/ractest/upgrade1/PreUpgradeResults.html
    lynx $ORACLE_BASE/cfgtoollogs/dbua/ractest/upgrade1/UpgradeResults.html
    

    Upgrade the RMAN Recovery Catalog

    How to Upgrade the Recovery Catalog

    For complete information about upgrading the recovery catalog and the UPGRADE CATALOG command, see the Oracle Database Backup and Recovery User's Guide 11g Release 2 (11.2), Chapter 13 - Managing a Recovery Catalog.
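    The catalog upgrade itself is a short RMAN session; a minimal sketch, where the catalog owner rco and the connect string catdb are hypothetical placeholders:

    rman catalog rco@catdb
    RMAN> UPGRADE CATALOG;
    RMAN> UPGRADE CATALOG;
    RMAN> EXIT;

    RMAN asks you to enter UPGRADE CATALOG a second time to confirm the operation.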

    Check Recommended Tasks After Database Upgrades

    Oracle Database Upgrade Guide 11g Release 2 (11.2) Chapter 4 - Recommended Tasks After Database Upgrades

    Reinstall the Latest OPatch

    When we installed the 11.2.0.2 GI and RDBMS software, OPatch was reinstalled too, but at a version older than the one currently available on MOS. So reinstall the latest OPatch in both the GI and RDBMS homes on all nodes; see the OPatch instructions above on how to do so.

    Remove Old Version

    Once you're satisfied with the new version of the GI and RDBMS software, you can remove the old version to save some disk space. Just issue a rm -rf command on the old home. For the GI home, be sure to run the rm command as root.
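    For reference, with the homes used in this post that would look something like the following; double-check the paths before running anything this destructive:

    sudo rm -rf /u01/app/11.2.0/grid
    sudo su - oracle
    rm -rf /u01/app/oracle/product/11.2.0
    exit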

    That's it. I hope everything went well for you.

    David
