Friday, February 27, 2015

Recovering SPFILE
when the ASM instance is down
or
the ASM disks are not visible




How is it possible to start the ASM instance if the SPFILE itself is stored in an ASM disk group?

Starting with version 11g Release 2, the ASM SPFILE is stored by default in the first disk group created during the Grid Infrastructure installation.


Ø  Since the Voting and OCR files reside in ASM disk groups, but according to the startup sequence OCSSD starts before ASM, how is this possible?

OCSSD reads the voting disk locations from the GPnP profile and accesses the voting files directly on disk, so the ASM instance does not need to be up.

Ø  What is gpnp profile?  What does gpnp profile contain?

The GPnP profile is a small XML file located at GRID_HOME/gpnp/<hostname>/profiles/peer under the name profile.xml. It is used to establish the correct global personality of a node. Each node maintains a local copy of the GPnP profile, which is kept up to date by the GPnP daemon (GPnPD).

The GPnP profile stores the information required for the startup of Oracle Clusterware, such as the SPFILE location and the ASM disk string. It contains various attributes defining the node personality:

                - Cluster name
                - Network classifications (Public/Private)
                - Storage to be used for CSS
                - Storage to be used for ASM: SPFILE location, ASM DiskString, etc.
                - Digital signature information

                The search order for the ASM SPFILE is:
                - GPnP profile
                - ORACLE_HOME/dbs/spfile<ASM SID>.ora
                - ORACLE_HOME/dbs/init<ASM SID>.ora
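For illustration, the SPFILE location and ASM discovery string can be pulled out of the profile with plain text tools. The XML fragment below is a trimmed, hypothetical stand-in for a real profile.xml (the attribute names follow the GPnP schema; the values match the environment shown later in this post):

```shell
# Hypothetical, trimmed stand-in for GRID_HOME/gpnp/<hostname>/profiles/peer/profile.xml
cat > /tmp/profile_sample.xml <<'EOF'
<orcl:ASM-Profile id="asm" DiscoveryString="/dev/oracleasm/disks/*"
  SPFile="+DG_VOTE1/rac-cluster/asmparameterfile/registry.253.871462355"/>
EOF

# Extract the two ASM attributes without any Oracle tooling:
grep -o 'SPFile="[^"]*"' /tmp/profile_sample.xml
grep -o 'DiscoveryString="[^"]*"' /tmp/profile_sample.xml
```

On a live cluster the supported way to read these values is gpnptool getpval, as the `-asm_dis` example later in this post shows.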

[oracle@rac128 peer]$ . oraenv
ORACLE_SID = [+ASM1] ?

[oracle@rac128 grid]$ cd $GRID_HOME/gpnp/rac128/profiles/peer/
[oracle@rac128 peer]$ pwd
/grid/app/11.2.0/grid/gpnp/rac128/profiles/peer

[oracle@rac128 peer]$ ls

profile.old  profile_orig.xml  profile.xml


 [oracle@rac128 peer]$ gpnptool getpval -asm_dis

Warning: some command line parameters were defaulted. Resulting command line:
         /grid/app/11.2.0/grid/bin/gpnptool.bin getpval -asm_dis -p=profile.xml -o-

/dev/oracleasm/disks/*


[oracle@rac128 dbs]$ asmcmd -p

ASMCMD [+] > spget
+DG_VOTE1/rac-cluster/asmparameterfile/registry.253.871462355

ASMCMD [+] > lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512   4096  1048576      8150     5982                0            5982              0             N  DG_CCB/
MOUNTED  EXTERN  N         512   4096  1048576      7844     7749                0            7749              0             N  DG_FLASH/
MOUNTED  EXTERN  N         512   4096  4194304       980      548                0             548              0             Y  DG_VOTE1/

ASMCMD [+] > lsdsk -G DG_VOTE1
Path
/dev/oracleasm/disks/ASMDISK01



[oracle@rac129 ~]$ kfed read /dev/oracleasm/disks/ASMDISK01 | grep -E  'spf|ausize'

kfdhdb.ausize:                  4194304 ; 0x0bc: 0x00400000
kfdhdb.spfile:                       16 ; 0x0f4: 0x00000010
kfdhdb.spfflg:                        1 ; 0x0f8: 0x00000001
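These two kfed fields are all that is needed to locate the spfile on the raw device: kfdhdb.spfile is the allocation unit (AU) number where the spfile image starts, and kfdhdb.ausize is the AU size in bytes. A quick shell sanity check of the offset arithmetic:

```shell
# kfdhdb.spfile = 16 (starting AU), kfdhdb.ausize = 4194304 (4 MiB AU)
ausize=4194304
spf_au=16
echo $(( spf_au * ausize ))   # byte offset of the spfile on the disk: 67108864
```

This is why the dd command below uses bs=4M with skip=16.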



[oracle@rac129 ~]$ dd if=/dev/oracleasm/disks/ASMDISK01 of=spfileASM_DEMO.ora skip=16 bs=4M

229+1 records in
229+1 records out
961018880 bytes (961 MB) copied, 39.3905 seconds, 24.4 MB/s
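Note that without a count argument, dd copies everything from AU 16 to the end of the disk (961 MB here), not just the spfile. Adding count=1 restricts the copy to the single AU holding the spfile image. Below is a sketch of the same skip/count mechanics against a scratch file, so it can be run without an ASM disk; the real-disk equivalent is shown in the comment:

```shell
# On the real disk the equivalent would be:
#   dd if=/dev/oracleasm/disks/ASMDISK01 of=spfileASM_DEMO.ora bs=4M skip=16 count=1
bs=$((4 * 1024 * 1024))                                         # 4 MiB, matching kfdhdb.ausize
dd if=/dev/zero of=/tmp/fakedisk bs=$bs count=20 2>/dev/null    # 20-AU scratch "disk"
printf 'instance_type=asm\n' |
  dd of=/tmp/fakedisk bs=$bs seek=16 conv=notrunc 2>/dev/null   # plant a parameter at AU 16
dd if=/tmp/fakedisk of=/tmp/spfile_au bs=$bs skip=16 count=1 2>/dev/null
tr -d '\0' < /tmp/spfile_au                                     # recovers: instance_type=asm
```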


[oracle@rac129 ~]$ strings spfileASM_DEMO.ora

+ASM1.asm_diskgroups='DG_CCB','DG_FLASH'#Manual Mount
+ASM2.asm_diskgroups='DG_CCB','DG_FLASH'#Manual Mount
*.asm_diskstring='/dev/oracleasm/disks/*'
*.asm_power_limit=1
*.diagnostic_dest='/grid/app/oracle'
*.instance_type='asm'
*.large_pool_size=12M
*.remote_login_passwordfile='EXCLUSIVE'
... [remaining strings output trimmed: because dd was run without count=1, it copied everything from AU 16 to the end of the disk, so strings also dumps unrelated CRS resource data stored beyond the spfile]
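With the parameters recovered by strings, the usual way forward is to build a plain-text pfile, start ASM with it, and then recreate the spfile inside a disk group. A sketch using the values from this demo environment (the sqlplus steps are shown as comments because they need a live Grid stack):

```shell
# Write the recovered parameters to a conventional pfile for the local ASM instance.
cat > /tmp/init+ASM1.ora <<'EOF'
*.asm_diskstring='/dev/oracleasm/disks/*'
*.asm_power_limit=1
*.instance_type='asm'
*.large_pool_size=12M
*.remote_login_passwordfile='EXCLUSIVE'
EOF

# Then, as the grid owner (not executable here):
#   sqlplus / as sysasm
#   SQL> STARTUP PFILE='/tmp/init+ASM1.ora';
#   SQL> CREATE SPFILE='+DG_VOTE1' FROM PFILE='/tmp/init+ASM1.ora';
grep -c '^\*\.' /tmp/init+ASM1.ora    # 5 parameters written
```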

Sunday, February 15, 2015

Applying PSU 11.2.0.3.5 to Grid Infrastructure and DB Home



Patch 14727347: GRID INFRASTRUCTURE PATCH SET UPDATE 11.2.0.3.5 (INCLUDES DB PSU 11.2.0.3.5)

Note: this PSU has since been superseded by replacement patches, GRID INFRASTRUCTURE PATCH SET UPDATE 11.2.0.3.6 (INCLUDES DB PSU 11.2.0.3.6) and GRID INFRASTRUCTURE PATCH SET UPDATE 11.2.0.3.8 (INCLUDES DB PSU 11.2.0.3.8).


Applying PSU 11.2.0.3.5 is simple and straightforward for both RAC and non-RAC environments. Below is a step-by-step record for your reference.
Details:
·         Operating System          :   Oracle Enterprise Linux (OEL 5.10, x86-64)
                                        (Linux rac132.ora.com 2.6.39-400.209.1.el5uek #1 SMP Tue Sep 10 20:33:17 PDT 2013 x86_64 x86_64 x86_64 GNU/Linux)
·         Database                  :   11.2.0.3 (Grid Infrastructure and Oracle RDBMS)
·         Patch Set Update (PSU)    :   11.2.0.3.5 (p14727347_112030_Linux-x86-64-Grid-DB-PSU.zip)
·         OPatch                    :   patch 6880880, version 11.2.0.3.3 (p6880880_112000_Linux-x86-64.zip)

1.        OPatch version 11.2.0.3.3 or later is required to apply this patch. Update OPatch in the Grid Infrastructure home as well as the RDBMS home on all RAC nodes. In my case the cluster has two nodes (rac131, rac132).

[oracle@rac131]$  cd /SOFTWARE
[oracle@rac131]$  unzip p6880880_112000_Linux-x86-64.zip -d $ORACLE_HOME

[oracle@rac131]$  $ORACLE_HOME/OPatch/opatch version
Invoking OPatch 11.2.0.3.3
OPatch Version: 11.2.0.3.3
OPatch succeeded.
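A quick way to confirm the installed OPatch meets the 11.2.0.3.3 minimum is to compare versions with sort -V. The versions are hard-coded here for illustration; on a real host you would capture current with something like current="$($ORACLE_HOME/OPatch/opatch version | awk '/Version/{print $3}')".

```shell
# Version-sort the required minimum against the installed version; if the
# minimum sorts first (or they are equal), the installed OPatch is new enough.
required="11.2.0.3.3"
current="11.2.0.3.3"
lowest=$(printf '%s\n%s\n' "$required" "$current" | sort -V | head -1)
if [ "$lowest" = "$required" ]; then
  echo "OPatch $current meets the minimum ($required)"
else
  echo "OPatch $current is too old; apply patch 6880880 first"
fi
```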

The OPatch update itself can be downloaded directly from My Oracle Support with the wget utility (as recommended by Oracle). I used the following command:

wget --http-user="amitksri14@gmail.com" --http-password="xxxxxxx" --output-document="p6880880_112000_Linux-x86-64.zip"


2.       OPatch requires an OCM response file (ocm.rsp).

Note 966023.1 describes how to create an OCM response file for OPatch silent installation. Let's create one for our setup in the current directory. I didn't specify my MOS credentials, as this is for demo purposes only.

[oracle@rac131]$  $ORACLE_HOME/OPatch/ocm/bin/emocmrsp   -no_banner
Provide your email address to be informed of security issues, install and
initiate Oracle Configuration Manager. Easier for you if you use your My
Oracle Support Email address/User Name.
Visit http://www.oracle.com/support/policies.html for details.
Email address/User Name:

You have not provided an email address for notification of security issues.
Do you wish to remain uninformed of security issues ([Y]es, [N]o) [N]:  Y
The OCM configuration response file (ocm.rsp) was successfully created.

3.       Oracle recommends running the following command to verify the consistency of all homes being patched.

$ORACLE_HOME/OPatch/opatch lsinventory -detail -oh $ORACLE_HOME

If there is an issue with the inventory, the command will report an error; fix it before proceeding with patching.
This also lists the currently applied patches.
4.        If DB Console is configured for the RAC database, it should be stopped before patching.

As the Oracle RAC database home owner execute:
$ $ORACLE_HOME/bin/emctl stop dbconsole

5.       Next, download patch 14727347 - GRID INFRASTRUCTURE PATCH SET UPDATE 11.2.0.3.5 (INCLUDES DB PSU 11.2.0.3.5). Reading the MOS credentials interactively keeps the password out of the shell history:

read username
read -s h_passwd
wget --http-user="$username" --http-password="$h_passwd" --output-document="p14727347_112030_Linux-x86-64.zip" "https://updates.oracle.com/Orion/Services/download/p14727347_112030_Linux-x86-64.zip?aru=15680307&patch_file=p14727347_112030_Linux-x86-64.zip"

6.        Finally we come to the patching step. If you are not using shared homes or ACFS, patching is straightforward: use opatch auto. If you are using ACFS or a shared home, refer to the patch readme.

Run this command as root on one node at a time: finish the first node completely, then move on to the next.

opatch auto shuts down the database instance, then the Clusterware stack, patches both the Grid home and the database home, and restarts everything. Below is the execution output from the test environment.
[root@rac131 SOFTWARE]# opatch auto /SOFTWARE/ -ocmrf /SOFTWARE/ocm.rsp

Executing /grid/app/11.2.0/grid/perl/bin/perl /grid/app/11.2.0/grid/OPatch/crs/patch11203.pl -patchdir / -patchn SOFTWARE -ocmrf /SOFTWARE/ocm.rsp -paramfile /grid/app/11.2.0/grid/crs/install/crsconfig_params

This is the main log file: /grid/app/11.2.0/grid/cfgtoollogs/opatchauto2015-02-15_16-01-45.log

This file will show your detected configuration and all the steps that opatchauto attempted to do on your system:
/grid/app/11.2.0/grid/cfgtoollogs/opatchauto2015-02-15_16-01-45.report.log

2015-02-15 16:01:45: Starting Clusterware Patch Setup
Using configuration parameter file: /grid/app/11.2.0/grid/crs/install/crsconfig_params

Stopping RAC /u01/app/oracle/product/11.2.0/dbhome_1 ...
Stopped RAC /u01/app/oracle/product/11.2.0/dbhome_1 successfully

patch //SOFTWARE/15876003/custom/server/15876003  apply successful for home  /u01/app/oracle/product/11.2.0/dbhome_1
patch //SOFTWARE/14727310  apply successful for home  /u01/app/oracle/product/11.2.0/dbhome_1

Stopping CRS...
Stopped CRS successfully

patch //SOFTWARE/15876003  apply successful for home  /grid/app/11.2.0/grid
patch //SOFTWARE/14727310  apply successful for home  /grid/app/11.2.0/grid

Starting CRS...
CRS-4123: Oracle High Availability Services has been started.

Starting RAC /u01/app/oracle/product/11.2.0/dbhome_1 ...
Started RAC /u01/app/oracle/product/11.2.0/dbhome_1 successfully

opatch auto succeeded.


[root@rac132 PSU]# opatch auto /SOFTWARE/PSU/  -ocmrf /SOFTWARE/ocm.rsp
Executing /grid/app/11.2.0/grid/perl/bin/perl /grid/app/11.2.0/grid/OPatch/crs/patch11203.pl -patchdir /SOFTWARE -patchn PSU -ocmrf /SOFTWARE/ocm.rsp -paramfile /grid/app/11.2.0/grid/crs/install/crsconfig_params

This is the main log file: /grid/app/11.2.0/grid/cfgtoollogs/opatchauto2015-02-15_16-45-23.log

This file will show your detected configuration and all the steps that opatchauto attempted to do on your system:
/grid/app/11.2.0/grid/cfgtoollogs/opatchauto2015-02-15_16-45-23.report.log

2015-02-15 16:45:23: Starting Clusterware Patch Setup
Using configuration parameter file: /grid/app/11.2.0/grid/crs/install/crsconfig_params

Stopping RAC /u01/app/oracle/product/11.2.0/dbhome_1 ...
Stopped RAC /u01/app/oracle/product/11.2.0/dbhome_1 successfully

patch /SOFTWARE/PSU/15876003/custom/server/15876003  apply successful for home  /u01/app/oracle/product/11.2.0/dbhome_1
patch /SOFTWARE/PSU/14727310  apply failed  for home  /u01/app/oracle/product/11.2.0/dbhome_1

Stopping CRS...
Stopped CRS successfully

patch /SOFTWARE/PSU/15876003  apply successful for home  /grid/app/11.2.0/grid
patch /SOFTWARE/PSU/14727310  apply successful for home  /grid/app/11.2.0/grid

Starting CRS...
CRS-4123: Oracle High Availability Services has been started.


Starting RAC /u01/app/oracle/product/11.2.0/dbhome_1 ...
Started RAC /u01/app/oracle/product/11.2.0/dbhome_1 successfully

opatch auto succeeded.
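After opatch auto completes on every node, it is worth confirming that both patches are actually recorded in each home's inventory. On a real host that would be `$ORACLE_HOME/OPatch/opatch lsinventory | grep -E '14727310|15876003'`; the check below runs the same grep against a made-up snippet of lsinventory output so the pattern can be tested anywhere:

```shell
# Hypothetical lsinventory excerpt listing the two patches in this PSU.
cat > /tmp/lsinv.txt <<'EOF'
Patch  14727310     : applied on Sun Feb 15 16:20:11 IST 2015
Patch  15876003     : applied on Sun Feb 15 16:18:02 IST 2015
EOF
grep -c 'applied' /tmp/lsinv.txt   # expect 2, one per patch
```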


7.  Run catbundle.sql for each database running from the patched DB home. In RAC this needs to be done on one node only:
@catbundle.sql psu apply
[oracle@rac131 dbhome_1]$ . oraenv
ORACLE_SID = [oracle] ? yashi1
[oracle@rac131 admin]$ cd $ORACLE_HOME/rdbms/admin
[oracle@rac131 admin]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Sun Feb 15 17:17:38 2015

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> @catbundle.sql psu apply

PL/SQL procedure successfully completed.


PL/SQL procedure successfully completed.


PL/SQL procedure successfully completed.

Generating apply and rollback scripts...
Check the following file for errors:
/u01/app/oracle/cfgtoollogs/catbundle/catbundle_PSU_YASHI_GENERATE_2015Feb15_17_19_00.log
Apply script: /u01/app/oracle/product/11.2.0/dbhome_1/rdbms/admin/catbundle_PSU_YASHI_APPLY.sql
Rollback script: /u01/app/oracle/product/11.2.0/dbhome_1/rdbms/admin/catbundle_PSU_YASHI_ROLLBACK.sql

PL/SQL procedure successfully completed.

Executing script file...
...
...
...
...

SQL>
SQL> ALTER SESSION SET current_schema = SYS;

Session altered.

SQL> PROMPT Updating registry...
Updating registry...
SQL> INSERT INTO registry$history
  2    (action_time, action,
  3     namespace, version, id,
  4     bundle_series, comments)
  5  VALUES
  6    (SYSTIMESTAMP, 'APPLY',
  7     SYS_CONTEXT('REGISTRY$CTX','NAMESPACE'),
  8     '11.2.0.3',
  9     5,
 10     'PSU',
 11     'PSU 11.2.0.3.5');

1 row created.

SQL> COMMIT;

Commit complete.

SQL> SPOOL off
SQL> SET echo off
Check the following log file for errors:
/u01/app/oracle/cfgtoollogs/catbundle/catbundle_PSU_YASHI_APPLY_2015Feb15_17_19_07.log


Check the following log files in $ORACLE_BASE/cfgtoollogs/catbundle for any errors:
catbundle_PSU_<database SID>_APPLY_<timestamp>.log
catbundle_PSU_<database SID>_GENERATE_<timestamp>.log
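The error check itself can be scripted with a simple grep for ORA- and SP2- messages. The log content below is made up purely for illustration; on a real system you would point LOGDIR at $ORACLE_BASE/cfgtoollogs/catbundle.

```shell
# Scan catbundle logs for ORA-/SP2- errors (sample log content is hypothetical).
LOGDIR=/tmp/catbundle
mkdir -p "$LOGDIR"
cat > "$LOGDIR/catbundle_PSU_YASHI_APPLY_sample.log" <<'EOF'
SQL> ALTER SESSION SET current_schema = SYS;
Session altered.
EOF
if grep -E 'ORA-|SP2-' "$LOGDIR"/*.log; then
  echo "errors found - review the logs"
else
  echo "no errors found"
fi
```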

As you can see, the patching process is much simpler than with the 10.2 patch bundles, where you had to run several scripts as root and then as oracle.