A lot of people use the graphical installer to upgrade Oracle software from one version to another. While there is nothing wrong with that, the same can be done without any graphical tools. This post outlines the steps to do so.

Currently my cluster is running Grid Infrastructure 12.1.0.1 without any PSU applied. The node names are racp1vm1 and racp1vm2:

[grid@racp1vm2 ~]$ /u01/app/12.1.0/grid_1_0/bin/crsctl query crs softwareversion
Oracle Clusterware version on node [racp1vm2] is [12.1.0.1.0]
[grid@racp1vm2 ~]$ /u01/app/12.1.0/grid_1_0/bin/crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [12.1.0.1.0]
[grid@racp1vm2 ~]$ /u01/app/12.1.0/grid_1_0/bin/crsctl query crs softwareversion racp1vm2
Oracle Clusterware version on node [racp1vm2] is [12.1.0.1.0]

Obviously, the first step is to copy the source files to all the nodes and to extract them:

[grid@racp1vm1 ~]$ cd /u01/app/oracle/software/
[grid@racp1vm1 software]$ ls -la
total 2337928
drwxr-xr-x. 2 grid oinstall       4096 Nov  5 15:14 .
drwxr-xr-x. 3 grid oinstall       4096 Nov  5 14:24 ..
-rw-r--r--. 1 grid oinstall 1747043545 Nov  5 15:14 linuxamd64_12102_grid_1of2.zip
-rw-r--r--. 1 grid oinstall  646972897 Nov  5 15:14 linuxamd64_12102_grid_2of2.zip
[grid@racp1vm1 software]$ unzip linuxamd64_12102_grid_1of2.zip
[grid@racp1vm1 software]$ unzip linuxamd64_12102_grid_2of2.zip
[grid@racp1vm1 software]$ rm linuxamd64_12102_grid_1of2.zip linuxamd64_12102_grid_2of2.zip
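
To stage the media on the second node as well, scp over the existing user equivalence does the job. A quick sketch, assuming the same directory layout on racp1vm2 and that this happens before the zip files are removed as above:

[grid@racp1vm1 software]$ scp linuxamd64_12102_grid_*of2.zip racp1vm2:/u01/app/oracle/software/
[grid@racp1vm1 software]$ ssh racp1vm2 "cd /u01/app/oracle/software && \
    unzip -q linuxamd64_12102_grid_1of2.zip && unzip -q linuxamd64_12102_grid_2of2.zip"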

It is necessary to create the path for the new ORACLE_HOME before the installation because /u01/app/12.1.0 is locked (that is, owned by root and not writable by oinstall, which is the default Grid Infrastructure behavior):

[root@racp1vm1 app]# mkdir /u01/app/12.1.0/grid_2_0
[root@racp1vm1 app]# chown grid:oinstall /u01/app/12.1.0/grid_2_0
[root@racp1vm1 app]# ssh racp1vm2
root@racp1vm2's password: 
Last login: Thu Nov  5 14:52:57 2015 from racp1vm1
[root@racp1vm2 ~]# mkdir /u01/app/12.1.0/grid_2_0
[root@racp1vm2 ~]# chown grid:oinstall /u01/app/12.1.0/grid_2_0
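
Optionally, the cluster verification utility that ships with the new release can check the planned upgrade path before the installer is fired off. A sketch with the flags as documented for 12c (verify against runcluvfy.sh -help of your exact version):

[grid@racp1vm1 software]$ ./grid/runcluvfy.sh stage -pre crsinst -upgrade -rolling \
      -src_crshome /u01/app/12.1.0/grid_1_0 \
      -dest_crshome /u01/app/12.1.0/grid_2_0 \
      -dest_version 12.1.0.2.0 -verbose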

I’ll use my favorite method: installing the binaries only, without doing any configuration:

[grid@racp1vm1 software]$ cd grid
[grid@racp1vm1 grid]$ ./runInstaller \
      ORACLE_HOSTNAME=racp1vm1.dbi.lab \
      INVENTORY_LOCATION=/u01/app/oraInventory \
      SELECTED_LANGUAGES=en \
      oracle.install.option=UPGRADE \
      ORACLE_BASE=/u01/app/grid \
      ORACLE_HOME=/u01/app/12.1.0/grid_2_0 \
      oracle.install.asm.OSDBA=asmdba \
      oracle.install.asm.OSOPER=asmoper \
      oracle.install.asm.OSASM=asmadmin \
      oracle.install.crs.config.clusterName=racp1vm-cluster \
      oracle.install.crs.config.gpnp.configureGNS=false \
      oracle.install.crs.config.autoConfigureClusterNodeVIP=true \
      oracle.install.crs.config.gpnp.gnsOption=CREATE_NEW_GNS \
      oracle.install.crs.config.clusterNodes=racp1vm1:,racp1vm2: \
      oracle.install.crs.config.storageOption=LOCAL_ASM_STORAGE \
      oracle.install.crs.config.sharedFileSystemStorage.votingDiskRedundancy=NORMAL \
      oracle.install.crs.config.sharedFileSystemStorage.ocrRedundancy=NORMAL \
      oracle.install.asm.diskGroup.name=CRS \
      oracle.install.asm.diskGroup.AUSize=1 \
      oracle.install.crs.config.ignoreDownNodes=false \
      oracle.install.config.managementOption=NONE \
      -ignoreSysPrereqs \
      -ignorePrereq \
      -waitforcompletion \
      -silent
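
As an alternative to passing everything on the command line, the same name=value pairs can live in a response file handed to the installer with -responseFile. A minimal sketch (the file name and location are arbitrary):

[grid@racp1vm1 grid]$ cat > /u01/app/oracle/software/gi_upgrade.rsp <<'EOF'
# same parameters as on the command line above, one per line
oracle.install.option=UPGRADE
ORACLE_BASE=/u01/app/grid
ORACLE_HOME=/u01/app/12.1.0/grid_2_0
# ... remaining oracle.install.* and SELECTED_LANGUAGES entries ...
EOF
[grid@racp1vm1 grid]$ ./runInstaller -silent -waitforcompletion -ignorePrereq \
      -responseFile /u01/app/oracle/software/gi_upgrade.rsp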

If the installer runs fine, the output should look similar to this:

As a root user, execute the following script(s):
	1. /u01/app/12.1.0/grid_2_0/rootupgrade.sh

Execute /u01/app/12.1.0/grid_2_0/rootupgrade.sh on the following nodes: 
[racp1vm1, racp1vm2]

Run the script on the local node first. After successful completion, you can start the script in parallel on all other nodes, except a node you designate as the last node. When all the nodes except the last node are done successfully, run the script on the last node.

Successfully Setup Software.
As install user, execute the following script to complete the configuration.
	1. /u01/app/12.1.0/grid_2_0/cfgtoollogs/configToolAllCommands RESPONSE_FILE=

 	Note:
	1. This script must be run on the same host from where installer was run. 
	2. This script needs a small password properties file for configuration assistants that require passwords (refer to install guide documentation).
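
The password properties file mentioned in the note is a small plain-text file holding the passwords the configuration assistants need; for Grid Infrastructure these are essentially the ASM passwords. A sketch with the key names from Oracle's response file documentation (the passwords are placeholders, and this step only runs once the root scripts below have completed on all nodes):

[grid@racp1vm1 ~]$ cat > /tmp/cfgrsp.properties <<'EOF'
oracle.assistants.asm|S_ASMPASSWORD=MySecretPwd1
oracle.assistants.asm|S_ASMMONITORPASSWORD=MySecretPwd1
EOF
[grid@racp1vm1 ~]$ chmod 600 /tmp/cfgrsp.properties
[grid@racp1vm1 ~]$ /u01/app/12.1.0/grid_2_0/cfgtoollogs/configToolAllCommands RESPONSE_FILE=/tmp/cfgrsp.properties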

Time for the upgrade on the first node:

[root@racp1vm1 ~]# /u01/app/12.1.0/grid_2_0/rootupgrade.sh
Check /u01/app/12.1.0/grid_2_0/install/root_racp1vm1.dbi.lab_2015-11-23_12-32-03.log for the output of root script

The contents of the logfile should look similar to this:

[root@racp1vm1 ~]# tail -f /u01/app/12.1.0/grid_2_0/install/root_racp1vm1.dbi.lab_2015-11-23_12-32-03.log
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/12.1.0/grid_2_0/crs/install/crsconfig_params
2015/11/23 12:32:03 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.

2015/11/23 12:32:25 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.

2015/11/23 12:32:28 CLSRSC-464: Starting retrieval of the cluster configuration data

2015/11/23 12:32:32 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.

2015/11/23 12:32:33 CLSRSC-363: User ignored prerequisites during installation

2015/11/23 12:32:41 CLSRSC-515: Starting OCR manual backup.

2015/11/23 12:32:42 CLSRSC-516: OCR manual backup successful.

2015/11/23 12:32:45 CLSRSC-468: Setting Oracle Clusterware and ASM to rolling migration mode

2015/11/23 12:32:45 CLSRSC-482: Running command: '/u01/app/12.1.0/grid_1_0/bin/crsctl start rollingupgrade 12.1.0.2.0'

CRS-1131: The cluster was successfully set to rolling upgrade mode.
2015/11/23 12:32:50 CLSRSC-482: Running command: '/u01/app/12.1.0/grid_2_0/bin/asmca -silent -upgradeNodeASM -nonRolling false -oldCRSHome /u01/app/12.1.0/grid_1_0 -oldCRSVersion 12.1.0.1.0 -nodeNumber 1 -firstNode true -startRolling false'

ASM configuration upgraded in local node successfully.

2015/11/23 12:32:53 CLSRSC-469: Successfully set Oracle Clusterware and ASM to rolling migration mode

2015/11/23 12:32:53 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack

2015/11/23 12:34:14 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.

OLR initialization - successful
2015/11/23 12:36:53 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.conf'

CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2015/11/23 12:41:00 CLSRSC-472: Attempting to export the OCR

2015/11/23 12:41:00 CLSRSC-482: Running command: 'ocrconfig -upgrade grid oinstall'

2015/11/23 12:41:03 CLSRSC-473: Successfully exported the OCR

2015/11/23 12:41:08 CLSRSC-486: 
 At this stage of upgrade, the OCR has changed.
 Any attempt to downgrade the cluster after this point will require a complete cluster outage to restore the OCR.

2015/11/23 12:41:08 CLSRSC-541: 
 To downgrade the cluster: 
 1. All nodes that have been upgraded must be downgraded.

2015/11/23 12:41:08 CLSRSC-542: 
 2. Before downgrading the last node, the Grid Infrastructure stack on all other cluster nodes must be down.

2015/11/23 12:41:08 CLSRSC-543: 
 3. The downgrade command must be run on the node racp1vm2 with the '-lastnode' option to restore global configuration data.

2015/11/23 12:41:39 CLSRSC-343: Successfully started Oracle Clusterware stack

clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 12c Release 1.
Successfully taken the backup of node specific configuration in OCR. 
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2015/11/23 12:41:44 CLSRSC-474: Initiating upgrade of resource types

2015/11/23 12:41:50 CLSRSC-482: Running command: 'upgrade model  -s 12.1.0.1.0 -d 12.1.0.2.0 -p first'

2015/11/23 12:41:50 CLSRSC-475: Upgrade of resource types successfully initiated.

2015/11/23 12:41:52 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
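
Before touching the second node, a quick sanity check does not hurt: the stack on the first node should be back up out of the new home, and the per-node software version should already report 12.1.0.2.0, while the active version of the cluster stays at 12.1.0.1.0 until the last node is done:

[root@racp1vm1 ~]# /u01/app/12.1.0/grid_2_0/bin/crsctl check crs
[root@racp1vm1 ~]# /u01/app/12.1.0/grid_2_0/bin/crsctl query crs softwareversion racp1vm1
[root@racp1vm1 ~]# /u01/app/12.1.0/grid_2_0/bin/crsctl query crs activeversion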

Now we can do the same thing on the second node:

[root@racp1vm2 ~]# /u01/app/12.1.0/grid_2_0/rootupgrade.sh
Check /u01/app/12.1.0/grid_2_0/install/root_racp1vm2.dbi.lab_2015-11-23_13-43-30.log for the output of root script

Again, have a look at the logfile to confirm that everything went fine:

Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/12.1.0/grid_2_0/crs/install/crsconfig_params
2015/11/23 13:43:30 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.

2015/11/23 13:43:30 CLSRSC-4012: Shutting down Oracle Trace File Analyzer (TFA) Collector.

2015/11/23 13:43:39 CLSRSC-4013: Successfully shut down Oracle Trace File Analyzer (TFA) Collector.

2015/11/23 13:43:50 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.

2015/11/23 13:43:51 CLSRSC-464: Starting retrieval of the cluster configuration data

2015/11/23 13:43:55 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.

2015/11/23 13:43:55 CLSRSC-363: User ignored prerequisites during installation

ASM configuration upgraded in local node successfully.

2015/11/23 13:44:03 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack

2015/11/23 13:45:26 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.

OLR initialization - successful
2015/11/23 13:46:36 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.conf'

CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2015/11/23 13:50:05 CLSRSC-343: Successfully started Oracle Clusterware stack

clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 12c Release 1.
Successfully taken the backup of node specific configuration in OCR. 
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2015/11/23 13:50:10 CLSRSC-478: Setting Oracle Clusterware active version on the last node to be upgraded

2015/11/23 13:50:10 CLSRSC-482: Running command: '/u01/app/12.1.0/grid_2_0/bin/crsctl set crs activeversion'

Started to upgrade the Oracle Clusterware. This operation may take a few minutes.
Started to upgrade the CSS.
The CSS was successfully upgraded.
Started to upgrade Oracle ASM.
Started to upgrade the CRS.
The CRS was successfully upgraded.
Successfully upgraded the Oracle Clusterware.
Oracle Clusterware operating version was successfully set to 12.1.0.2.0
2015/11/23 13:51:19 CLSRSC-479: Successfully set Oracle Clusterware active version

2015/11/23 13:51:25 CLSRSC-476: Finishing upgrade of resource types

2015/11/23 13:51:31 CLSRSC-482: Running command: 'upgrade model  -s 12.1.0.1.0 -d 12.1.0.2.0 -p last'

2015/11/23 13:51:32 CLSRSC-477: Successfully completed upgrade of resource types

2015/11/23 13:51:55 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

That’s it. We can confirm the version by issuing:

[root@racp1vm2 ~]# /u01/app/12.1.0/grid_2_0/bin/crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [12.1.0.2.0]
[root@racp1vm2 ~]# /u01/app/12.1.0/grid_2_0/bin/crsctl query crs softwareversion
Oracle Clusterware version on node [racp1vm2] is [12.1.0.2.0]
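
One small follow-up: the old home stays on disk until it is deinstalled, so any profiles or scripts of the grid user that still point to /u01/app/12.1.0/grid_1_0 should be switched over. A sketch, assuming ORACLE_HOME is set in ~/.bash_profile:

[grid@racp1vm1 ~]$ sed -i 's|/u01/app/12.1.0/grid_1_0|/u01/app/12.1.0/grid_2_0|g' ~/.bash_profile
[grid@racp1vm1 ~]$ grep 12.1.0 ~/.bash_profile

Once you are confident the upgrade sticks, the old home can be removed with the deinstall tool it ships in its own deinstall directory.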

Hope this helps.