Install DR

Source: primary database

Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production

ORACLE_SID=SULPRD1
ORACLE_BASE=/u01/app/oracle
ORACLE_HOSTNAME=sgtdracdb11prd.spms.local
ORACLE_TERM=xterm
ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/db_1

OPatch]$ opatch lspatches


18150578;
24917954;OJVM PATCH SET UPDATE 11.2.0.4.170117
23054319;OCW Patch Set Update : 11.2.0.4.160719 (23054319)
24006111;Database Patch Set Update : 11.2.0.4.161018 (24006111)
19692824;
OPatch]$ opatch lsinventory|grep "Patch description"
Patch description: "OJVM PATCH SET UPDATE 11.2.0.4.170117"
Patch description: "OCW Patch Set Update : 11.2.0.4.160719 (23054319)"
Patch description: "Database Patch Set Update : 11.2.0.4.161018 (24006111)"

--select patch_id, version, status, action, action_time from dba_registry_sqlpatch order by action_time;
SELECT TO_CHAR(action_time, 'DD-MON-YYYY HH24:MI:SS') AS action_time, action, version, id,
comments, bundle_series FROM sys.registry$history ORDER by action_time;
ACTION_TIME           ACTION  VERSION                 ID        COMMENTS                BUNDLE_SERIES
--------------------  ------  ----------------------  --------  ----------------------  -------------
23-FEB-2017 17:07:53  APPLY   11.2.0.4                161018    PSU 11.2.0.4.161018     PSU
23-FEB-2017 17:58:24  APPLY                           24917954  Patch 24917954 applied
23-FEB-2017 17:58:24  APPLY   11.2.0.4.170117OJVMPSU  0         OJVM PSU post-install
select
xmltransform(dbms_qopatch.is_patch_installed('29494060'),dbms_qopatch.get_opatch_xslt)
"Patch installed?" from dual;
To install:
Primary Note for Database Proactive Patch Program (Doc ID 888.1):
Oracle database 11.2.0.4 + Patch 31720776: COMBO OF OJVM COMPONENT 11.2.0.4.201020
DBPSU + DBPSU 11.2.0.4.201020
-rwxr-x---. 1 oracle oinstall 1395582860 Apr 6 2022 p13390677_112040_Linux-x86-64_1of7.zip
-rwxr-x---. 1 oracle oinstall 1151304589 Apr 6 2022 p13390677_112040_Linux-x86-64_2of7.zip
-rwxr-x---. 1 oracle oinstall 440662115 Apr 6 2022 p31720776_112040_Linux-x86-64.zip

Oracle Database 19c Proactive Patch Information (Doc ID 2521164.1)


-rwxr-x---. 1 grid oinstall 2889184573 Apr 6 2022 LINUX.X64_193000_grid_home.zip

--17-Jan-2023 GI Release Update 19.18.0 <Patch 34762026>


--Patch 34762026 - GI Release Update 19.18.0.0.230117
--https://updates.oracle.com/Orion/Services/download?type=readme&aru=25078638
--p34762026_190000_Linux-x86-64.zip

--17-Jan-2023 OJVM Release Update 19.18.0 Patch 34786990


--https://updates.oracle.com/Orion/Services/download?type=readme&aru=25032666

17-Jan-2023 Combo OJVM RU 19.18.0.0.230117 and GI RU 19.18.0.230117 Patch 34773504


https://updates.oracle.com/Orion/Services/download?type=readme&aru=25080291
p34773504_190000_Linux-x86-64.zip

/etc/hosts

10.105.8.25 sgtd01-prd-bd01.spms.min-saude.pt sgtd01-prd-bd01


10.105.8.26 sgtd01-prd-bd02.spms.min-saude.pt sgtd01-prd-bd02

10.105.8.27 sgtd01-prd-bd01-vip.spms.min-saude.pt sgtd01-prd-bd01-vip


10.105.8.28 sgtd01-prd-bd02-vip.spms.min-saude.pt sgtd01-prd-bd02-vip

172.30.0.28 sgtd01-prd-bd01-priv.spms.min-saude.pt sgtd01-prd-bd01-priv


172.30.0.29 sgtd01-prd-bd02-priv.spms.min-saude.pt sgtd01-prd-bd02-priv
----

root@sgtd01-prd-bd02:~# nslookup sgtd01-prd-scan


Server: 127.0.0.1
Address: 127.0.0.1#53

Name: sgtd01-prd-scan.spms.min-saude.pt
Address: 10.105.8.29
Name: sgtd01-prd-scan.spms.min-saude.pt
Address: 10.105.8.30
Name: sgtd01-prd-scan.spms.min-saude.pt
Address: 10.105.8.31
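Before running the installer it is worth confirming that every cluster name resolves; a small loop (a sketch, host list taken from the /etc/hosts and SCAN entries above), run on both nodes:

# sketch: check forward resolution of public, VIP and SCAN names
for h in sgtd01-prd-bd01 sgtd01-prd-bd02 \
         sgtd01-prd-bd01-vip sgtd01-prd-bd02-vip \
         sgtd01-prd-scan; do
    echo "== $h"
    nslookup "$h.spms.min-saude.pt" || echo "FAILED: $h"
done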

-----------------------------------------------------------------------------------------------------------------------------------
Disks
root@sgtd01-prd-bd01:~# oracleasm listdisks
DATA01
DATA02
FLASH01
FLASH02
MGMT01
MGMT02
OCR01
OCR02
OCR03
root@sgtd01-prd-bd01:~# rpm -qa | grep cvuqdisk
cvuqdisk-1.0.10-1.x86_64
root@sgtd01-prd-bd01:~# oracleasm configure
ORACLEASM_ENABLED=true
ORACLEASM_UID=grid
ORACLEASM_GID=asmadmin
ORACLEASM_SCANBOOT=true
ORACLEASM_SCANORDER=""
ORACLEASM_SCANEXCLUDE=""
ORACLEASM_SCAN_DIRECTORIES=""
ORACLEASM_USE_LOGICAL_BLOCK_SIZE="false"
root@sgtd01-prd-bd01:~# lsblk -f
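A quick sanity check (a sketch, run as root on both nodes) that every ASMLib label is visible and backed by a block device before the installation:

# sketch: list each ASMLib label and the device behind it
for d in $(oracleasm listdisks); do
    oracleasm querydisk -p "$d"
done
ls -lh /dev/oracleasm/disks/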

bash_profile

vi .bash_profile

# User specific environment and startup programs

ORACLE_BASE=/app/grid
export ORACLE_BASE
ORACLE_HOME=/app/19.3.0/grid
export ORACLE_HOME
ORACLE_SID=+ASM1
export ORACLE_SID
CRS_HOME=/app/19.3.0/grid
export CRS_HOME
GRID_HOME=/app/19.3.0/grid
export GRID_HOME

PATH=$PATH:$HOME/.local/bin:$HOME/bin:$ORACLE_HOME/bin
export PATH

umask 022
CRS 19c software installation (19.3.0)
LINUX.X64_193000_grid_home.zip

1. Extract on the first node only


grid@sgtd01-prd-bd01:stage$ mkdir -p /app/19.3.0/grid
grid@sgtd01-prd-bd01:stage$ unzip -K LINUX.X64_193000_grid_home.zip -d /app/19.3.0/grid/
cd /app/19.3.0/grid

--Pre-check for CRS installation using Cluvfy, on 2 nodes


cd $GRID_HOME
/app/19.3.0/grid
./runcluvfy.sh stage -pre crsinst -n sgtd01-prd-bd01,sgtd01-prd-bd02
Failures were encountered during execution of CVU verification request "stage -pre crsinst".

Verifying Swap Size ...FAILED


sgtd01-prd-bd02: PRVF-7573 : Sufficient swap size is not available on node
"sgtd01-prd-bd02" [Required = 15.6235GB (1.6382396E7KB) ;
Found = 4GB (4194300.0KB)]

sgtd01-prd-bd01: PRVF-7573 : Sufficient swap size is not available on node


"sgtd01-prd-bd01" [Required = 15.6235GB (1.6382404E7KB) ;
Found = 4GB (4194300.0KB)]

Verifying RPM Package Manager database ...INFORMATION


PRVG-11250 : The check "RPM Package Manager database" was not performed because
it needs 'root' user privileges.
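The swap failure was ignored in this installation. If you prefer to satisfy the check instead, a sketch for adding swap (the 12 GB size is an assumption to reach the ~15.6 GB requirement on top of the existing 4 GB; run as root on both nodes):

# sketch: create and activate an extra 12 GB swap file, persistent across reboots
dd if=/dev/zero of=/swapfile01 bs=1M count=12288
chmod 600 /swapfile01
mkswap /swapfile01
swapon /swapfile01
echo '/swapfile01 swap swap defaults 0 0' >> /etc/fstab
free -g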

----------------------------------------
Software Only
-- install using moba ,
ssh grid@sgtd01-prd-bd01
W+|QUQk+@;
cd /app/19.3.0/grid
grid@ccm03-prd-bd01:grid$ ./gridSetup.sh
Launching Oracle Grid Infrastructure Setup Wizard...
grid@sgtd01-prd-bd02:grid$ pwd
/app/19.3.0/grid
It must be empty
execute as root
/app/oraInventory/orainstRoot.sh
/app/19.3.0/grid/root.sh
root@sgtd01-prd-bd02:~# /app/oraInventory/orainstRoot.sh
Changing permissions of /app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /app/oraInventory to oinstall.


The execution of the script is complete.

root@sgtd01-prd-bd02:~# /app/19.3.0/grid/root.sh
Performing root user operation.

The following environment variables are set as:


ORACLE_OWNER= grid
ORACLE_HOME= /app/19.3.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:


Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...


Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.

To configure Grid Infrastructure for a Cluster execute the following command as grid user:
/app/19.3.0/grid/gridSetup.sh
This command launches the Grid Infrastructure Setup Wizard. The wizard also supports silent
operation, and the parameters can be passed through the response file that is available in the
installation media.

OK
Close

The response file for this session can be found at:


/app/19.3.0/grid/install/response/grid_2023-01-23_02-32-17PM.rsp

You can find the log of this install session at:


/tmp/GridSetupActions2023-01-23_02-32-17PM/gridSetupActions2023-01-23_02-32-17PM.log
Moved the install session logs to:
/app/oraInventory/logs/GridSetupActions2023-01-23_02-32-17PM
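As the root.sh output above notes, the wizard also supports silent operation. A sketch of the equivalent silent invocation, assuming the saved response file reflects the options chosen interactively:

# sketch: repeat the software-only setup without the GUI, reusing the recorded response file
cd /app/19.3.0/grid
./gridSetup.sh -silent -responseFile /app/19.3.0/grid/install/response/grid_2023-01-23_02-32-17PM.rsp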
----------------------------------------------------------------------------------------------------------
Grid Cluster config
-- install using moba ,
ssh grid@sgtd01-prd-bd01
W+|QUQk+@;
cd /app/19.3.0/grid
grid@ccm03-prd-bd01:grid$ ./gridSetup.sh
Launching Oracle Grid Infrastructure Setup Wizard...

1. Configure Oracle Grid Infrastructure for a new cluster

2. Configure an Oracle Standalone Cluster

3. Create local SCAN
name: sgtd01-prd
scan name: sgtd01-prd-scan.spms.min-saude.pt
port: 1521

4. cluster node information


sgtd01-prd-bd01.spms.min-saude.pt sgtd01-prd-bd01-vip.spms.min-saude.pt
sgtd01-prd-bd02.spms.min-saude.pt sgtd01-prd-bd02-vip.spms.min-saude.pt
ssh connectivity:
grid / W+|QUQk+@;

5. network interface usage


ens256 172.30.0.0 ASM & private
ens224 10.105.0.0 public
6. Use Oracle Flex ASM for storage

7. Create Grid Infrastructure Management Repository: YES

8. YES, use a separate disk group for the GIMR data

9. Create ASM Disk Group


root@sgtd01-prd-bd02:~# oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes
ls -lh /dev/oracleasm/disks/*

Disk group name: ASM


Normal redundancy, allocation unit size 4M
Change discovery path /dev/oracleasm/disks/*
Select 3 disks: OCR01, OCR02, OCR03

10.
Disk group name: MGMT
Normal redundancy, allocation unit size 4M
Change discovery path /dev/oracleasm/disks/*
Select 2 disks: MGMT01, MGMT02
11. Password
sys ldtf45#45dsPa
asmsnmp ldtf45#45dsPaasm
12. Do not use IPMI

13. Do not register with Enterprise Manager Cloud Control

14. Do not automatically run the configuration scripts

15. Prerequisites check

Ignore the swap size failure

Ignore the RPM Package Manager database verification (requires root)

16. Summary
Run as root on both nodes:
/app/19.3.0/grid/root.sh
root@sgtd01-prd-bd01:~# /app/19.3.0/grid/root.sh
Performing root user operation.

The following environment variables are set as:


ORACLE_OWNER= grid
ORACLE_HOME= /app/19.3.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:


The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by


Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /app/19.3.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
/app/grid/crsdata/sgtd01-prd-bd01/crsconfig/rootcrs_sgtd01-prd-bd01_2023-01-23_05-09-25PM.log
2023/01/23 17:09:33 CLSRSC-594: Executing installation step 1 of 19: 'SetupTFA'.
2023/01/23 17:09:33 CLSRSC-594: Executing installation step 2 of 19: 'ValidateEnv'.
2023/01/23 17:09:33 CLSRSC-363: User ignored prerequisites during installation
2023/01/23 17:09:33 CLSRSC-594: Executing installation step 3 of 19: 'CheckFirstNode'.
2023/01/23 17:09:34 CLSRSC-594: Executing installation step 4 of 19: 'GenSiteGUIDs'.
2023/01/23 17:09:35 CLSRSC-594: Executing installation step 5 of 19: 'SetupOSD'.
2023/01/23 17:09:35 CLSRSC-594: Executing installation step 6 of 19: 'CheckCRSConfig'.
2023/01/23 17:09:35 CLSRSC-594: Executing installation step 7 of 19: 'SetupLocalGPNP'.
2023/01/23 17:09:45 CLSRSC-594: Executing installation step 8 of 19: 'CreateRootCert'.
2023/01/23 17:09:48 CLSRSC-594: Executing installation step 9 of 19: 'ConfigOLR'.
2023/01/23 17:09:54 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
2023/01/23 17:10:01 CLSRSC-594: Executing installation step 10 of 19: 'ConfigCHMOS'.
2023/01/23 17:10:01 CLSRSC-594: Executing installation step 11 of 19: 'CreateOHASD'.
2023/01/23 17:10:05 CLSRSC-594: Executing installation step 12 of 19: 'ConfigOHASD'.
2023/01/23 17:10:05 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
2023/01/23 17:10:25 CLSRSC-594: Executing installation step 13 of 19: 'InstallAFD'.
2023/01/23 17:10:30 CLSRSC-594: Executing installation step 14 of 19: 'InstallACFS'.
2023/01/23 17:10:34 CLSRSC-594: Executing installation step 15 of 19: 'InstallKA'.
2023/01/23 17:10:38 CLSRSC-594: Executing installation step 16 of 19: 'InitConfig'.

ASM has been created and started successfully.

[DBT-30001] Disk groups created successfully. Check /app/grid/cfgtoollogs/asmca/asmca-230123PM051107.log for


details.

2023/01/23 17:11:56 CLSRSC-482: Running command: '/app/19.3.0/grid/bin/ocrconfig -upgrade grid oinstall'


CRS-4256: Updating the profile
Successful addition of voting disk 4fdc261101b74f1bbfee0398f814df91.
Successful addition of voting disk 76555f6f650b4f85bfbcc63d3c5fd96f.
Successful addition of voting disk 167efa804a1e4fd9bf8e558c82b0f5e3.
Successfully replaced voting disk group with +ASM.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 4fdc261101b74f1bbfee0398f814df91 (/dev/oracleasm/disks/OCR01) [ASM]
2. ONLINE 76555f6f650b4f85bfbcc63d3c5fd96f (/dev/oracleasm/disks/OCR02) [ASM]
3. ONLINE 167efa804a1e4fd9bf8e558c82b0f5e3 (/dev/oracleasm/disks/OCR03) [ASM]
Located 3 voting disk(s).
2023/01/23 17:13:19 CLSRSC-594: Executing installation step 17 of 19: 'StartCluster'.
2023/01/23 17:14:25 CLSRSC-343: Successfully started Oracle Clusterware stack
2023/01/23 17:14:25 CLSRSC-594: Executing installation step 18 of 19: 'ConfigNode'.
2023/01/23 17:15:34 CLSRSC-594: Executing installation step 19 of 19: 'PostConfig'.

[INFO] [DBT-30001] Disk groups created successfully. Check /app/grid/cfgtoollogs/asmca/asmca-230123PM051538.log


for details.

2023/01/23 17:16:12 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

root@sgtd01-prd-bd02:~# /app/19.3.0/grid/root.sh
Performing root user operation.

The following environment variables are set as:


ORACLE_OWNER= grid
ORACLE_HOME= /app/19.3.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:


The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /app/19.3.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
/app/grid/crsdata/sgtd01-prd-bd02/crsconfig/rootcrs_sgtd01-prd-bd02_2023-01-23_05-29-
09PM.log
2023/01/23 17:29:13 CLSRSC-594: Executing installation step 1 of 19: 'SetupTFA'.
2023/01/23 17:29:13 CLSRSC-594: Executing installation step 2 of 19: 'ValidateEnv'.
2023/01/23 17:29:13 CLSRSC-363: User ignored prerequisites during installation
2023/01/23 17:29:13 CLSRSC-594: Executing installation step 3 of 19: 'CheckFirstNode'.
2023/01/23 17:29:14 CLSRSC-594: Executing installation step 4 of 19: 'GenSiteGUIDs'.
2023/01/23 17:29:14 CLSRSC-594: Executing installation step 5 of 19: 'SetupOSD'.
2023/01/23 17:29:14 CLSRSC-594: Executing installation step 6 of 19: 'CheckCRSConfig'.
2023/01/23 17:29:15 CLSRSC-594: Executing installation step 7 of 19: 'SetupLocalGPNP'.
2023/01/23 17:29:15 CLSRSC-594: Executing installation step 8 of 19: 'CreateRootCert'.
2023/01/23 17:29:15 CLSRSC-594: Executing installation step 9 of 19: 'ConfigOLR'.
2023/01/23 17:29:23 CLSRSC-594: Executing installation step 10 of 19: 'ConfigCHMOS'.
2023/01/23 17:29:23 CLSRSC-594: Executing installation step 11 of 19: 'CreateOHASD'.
2023/01/23 17:29:24 CLSRSC-594: Executing installation step 12 of 19: 'ConfigOHASD'.
2023/01/23 17:29:24 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
2023/01/23 17:29:34 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA)
Collector.
2023/01/23 17:29:41 CLSRSC-594: Executing installation step 13 of 19: 'InstallAFD'.
2023/01/23 17:29:42 CLSRSC-594: Executing installation step 14 of 19: 'InstallACFS'.
2023/01/23 17:29:43 CLSRSC-594: Executing installation step 15 of 19: 'InstallKA'.
2023/01/23 17:29:43 CLSRSC-594: Executing installation step 16 of 19: 'InitConfig'.
2023/01/23 17:29:50 CLSRSC-594: Executing installation step 17 of 19: 'StartCluster'.
2023/01/23 17:30:36 CLSRSC-343: Successfully started Oracle Clusterware stack
2023/01/23 17:30:36 CLSRSC-594: Executing installation step 18 of 19: 'ConfigNode'.
2023/01/23 17:30:45 CLSRSC-594: Executing installation step 19 of 19: 'PostConfig'.
2023/01/23 17:30:49 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

18. Close

The response file for this session can be found at:


/app/19.3.0/grid/install/response/grid_2023-01-23_04-19-25PM.rsp

You can find the log of this install session at:


/app/oraInventory/logs/UpdateNodeList2023-01-23_04-19-25PM.log
Update Grid OPatch

How To Download And Install The Latest OPatch(6880880) Version (Doc ID 274526.1)

--p6880880_190000_Linux-x86-64.zip

grid@sgtd01-prd-bd01:$ /app/19.3.0/grid/OPatch/opatch version


OPatch Version: 12.2.0.1.17
OPatch succeeded.

unzip -K p6880880_190000_Linux-x86-64.zip -d p6880880_19

(1) Please take a backup of ORACLE_HOME/OPatch into a dedicated backup location.


cp -pr /app/19.3.0/grid/OPatch /app/stage/p6880880_19/opatch_20230123
(2) Please make sure no directory ORACLE_HOME/OPatch exist.
yes
(3) Please unzip the OPatch downloaded zip into ORACLE_HOME directory.
unzip -K p6880880_190000_Linux-x86-64.zip -d /app/19.3.0/grid/
[A] All

grid@sgtd01-prd-bd01:$ /app/19.3.0/grid/OPatch/opatch version


OPatch Version: 12.2.0.1.36
OPatch succeeded.
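The same three OPatch-refresh steps, scripted for convenience (a sketch; run per node as the grid user, the zip location under /app/stage is an assumption):

# sketch: refresh OPatch in the Grid home - backup, remove, unzip, verify
GRID_HOME=/app/19.3.0/grid
STAGE=/app/stage
cp -pr $GRID_HOME/OPatch $STAGE/p6880880_19/opatch_$(date +%Y%m%d)
rm -rf $GRID_HOME/OPatch
unzip -K $STAGE/p6880880_190000_Linux-x86-64.zip -d $GRID_HOME/
$GRID_HOME/OPatch/opatch version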

On 2 nodes
PSU CRS GRID 19.0.0
Validate:
crsctl status res -t
srvctl config nodeapps
oifcfg getif

17-Jan-2023 Combo OJVM RU 19.18.0.0.230117 and GI RU 19.18.0.230117 Patch 34773504


https://updates.oracle.com/Orion/Services/download?type=readme&aru=25080291

Patch 34762026 - Database Grid Infrastructure Jan 2023 Release


Update 19.18.0.0.230117 --> RAC-Rolling Installable

Patch 34786990 - Oracle JavaVM Component Release Update 19.18.0.0.230117 -


-> Patch has some additional requirements if it is to be installed in a "Conditional
Rolling Install" fashion, as detailed in MOS NOTE 2217053.1.

p34773504_190000_Linux-x86-64.zip
------------ Patch 34762026

-- Applying the CRS PSU


unzip -K p34773504_190000_Linux-x86-64.zip -d p34773504_190000
drwxr-x---. 8 grid oinstall 4096 Jan 16 10:38 34762026
drwxr-xr-x. 4 grid oinstall 67 Dec 6 21:24 34786990
-- 2 patches

------------------- patch 34762026


cd /app/stage/p34773504_190000/34773504/34762026

----- /app/stage/p34773504_190000/34773504/34762026_README.html
--2.1.1.1 OPatch Utility Information
/app/19.3.0/grid/OPatch/opatch version
OPatch Version: 12.2.0.1.36
--2.1.1.2 Validation of Oracle Inventory
/app/19.3.0/grid/OPatch/opatch lsinventory -detail -oh /app/19.3.0/grid/ > 2.1.1.2_Validation.txt
--2.1.1.4 Run OPatch Conflict Check
As the Grid home user:

$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir


/app/stage/p34773504_190000/34773504/34762026/33575402
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir
/app/stage/p34773504_190000/34773504/34762026/34765931
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir
/app/stage/p34773504_190000/34773504/34762026/34768559
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir
/app/stage/p34773504_190000/34773504/34762026/34768569
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir
/app/stage/p34773504_190000/34773504/34762026/34863894

...
Invoking prereq "checkconflictagainstohwithdetail"
Prereq "checkConflictAgainstOHWithDetail" passed.
OPatch succeeded.

-- 2.1.1.5 Run OPatch System Space Check


vi /tmp/patch_list_gihome.txt
/app/stage/p34773504_190000/34773504/34762026/33575402
/app/stage/p34773504_190000/34773504/34762026/34765931
/app/stage/p34773504_190000/34773504/34762026/34768559
/app/stage/p34773504_190000/34773504/34762026/34768569
/app/stage/p34773504_190000/34773504/34762026/34863894

$ORACLE_HOME/OPatch/opatch prereq CheckSystemSpace -phBaseFile /tmp/patch_list_gihome.txt

Prereq "checkSystemSpace" passed.

-- 2.1.3 Patch Installation Checks


grid@sgtd01-prd-bd01:34762026$ cluvfy stage -pre patch
Verifying cluster upgrade state ...PASSED
Verifying Software home: /app/19.3.0/grid ...PASSED

Pre-check for Patch Application was successful.

CVU operation performed: stage -pre patch


Date: Jan 24, 2023 12:25:41 PM
CVU home: /app/19.3.0/grid/
User: grid

--2.1.5 OPatchAuto
-------- on node 1
The utility must be executed by an operating system (OS) user with root privileges, and it must be
executed on each node in the cluster if the Grid home is not shared between the nodes.
As root:

export PATH=$PATH:/app/19.3.0/grid/OPatch:/app/19.3.0/grid/bin
opatchauto apply /app/stage/p34773504_190000/34773504/34762026 -analyze
==Following patches were SUCCESSFULLY analyzed to be applied:

Patch: /app/stage/p34773504_190000/34773504/34762026/34768559
Log: /app/19.3.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2023-01-24_14-44-28PM_1.log

Patch: /app/stage/p34773504_190000/34773504/34762026/34768569
Log: /app/19.3.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2023-01-24_14-44-28PM_1.log

Patch: /app/stage/p34773504_190000/34773504/34762026/33575402
Log: /app/19.3.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2023-01-24_14-44-28PM_1.log

Patch: /app/stage/p34773504_190000/34773504/34762026/34863894
Log: /app/19.3.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2023-01-24_14-44-28PM_1.log

Patch: /app/stage/p34773504_190000/34773504/34762026/34765931
Log: /app/19.3.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2023-01-24_14-44-28PM_1.log

------------------------------
opatchauto apply /app/stage/p34773504_190000/34773504/34762026 -oh /app/19.3.0/grid
root@sgtd01-prd-bd01:34762026# opatchauto apply /app/stage/p34773504_190000/34773504/34762026 -oh /app/19.3.0/grid

OPatchauto session is initiated at Tue Jan 24 14:46:53 2023

System initialization log file is /app/19.3.0/grid/cfgtoollogs/opatchautodb/systemconfig2023-01-24_02-46-58PM.log.

Session log file is /app/19.3.0/grid/cfgtoollogs/opatchauto/opatchauto2023-01-24_02-47-06PM.log


The id for this session is I145

Executing OPatch prereq operations to verify patch applicability on home /app/19.3.0/grid


Patch applicability verified successfully on home /app/19.3.0/grid

Executing patch validation checks on home /app/19.3.0/grid


Patch validation checks successfully completed on home /app/19.3.0/grid

Performing prepatch operations on CRS - bringing down CRS service on home /app/19.3.0/grid
Prepatch operation log file location: /app/grid/crsdata/sgtd01-prd-bd01/crsconfig/crs_prepatch_apply_inplace_sgtd01-prd-
bd01_2023-01-24_02-47-49PM.log
CRS service brought down successfully on home /app/19.3.0/grid

Start applying binary patch on home /app/19.3.0/grid


Binary patch applied successfully on home /app/19.3.0/grid

Performing postpatch operations on CRS - starting CRS service on home /app/19.3.0/grid


Postpatch operation log file location: /app/grid/crsdata/sgtd01-prd-bd01/crsconfig/crs_postpatch_apply_inplace_sgtd01-prd-
bd01_2023-01-24_02-55-27PM.log
CRS service started successfully on home /app/19.3.0/grid

OPatchAuto successful.

--------------------------------Summary--------------------------------
Patching is completed successfully. Please find the summary as follows:

Host:sgtd01-prd-bd01
CRS Home:/app/19.3.0/grid
Version:19.0.0.0.0
Summary:

==Following patches were SUCCESSFULLY applied:

Patch: /app/stage/p34773504_190000/34773504/34762026/33575402
Log: /app/19.3.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2023-01-24_14-51-15PM_1.log

Patch: /app/stage/p34773504_190000/34773504/34762026/34765931
Log: /app/19.3.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2023-01-24_14-51-15PM_1.log

Patch: /app/stage/p34773504_190000/34773504/34762026/34768559
Log: /app/19.3.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2023-01-24_14-51-15PM_1.log

Patch: /app/stage/p34773504_190000/34773504/34762026/34768569
Log: /app/19.3.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2023-01-24_14-51-15PM_1.log

Patch: /app/stage/p34773504_190000/34773504/34762026/34863894
Log: /app/19.3.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2023-01-24_14-51-15PM_1.log

OPatchauto session completed at Tue Jan 24 15:01:22 2023


Time taken to complete the session 14 minutes, 24 seconds

----Opatchauto Hangs at Starting CRS service on home /u01/app/19.0.0/grid (Doc ID 2721146.1)


----https://eclipsys.ca/how-to-continue-a-grid-infrastructure-patch-in-exacc-when-it-has-failed-in-node-1/
crsctl query crs activeversion
crsctl query crs activeversion -f

grid@sgtd01-prd-bd01:stage$ $ORACLE_HOME/OPatch/opatch lspatches
34863894;TOMCAT RELEASE UPDATE 19.0.0.0.0 (34863894)
34768569;ACFS RELEASE UPDATE 19.18.0.0.0 (34768569)
34768559;OCW RELEASE UPDATE 19.18.0.0.0 (34768559)
34765931;Database Release Update : 19.18.0.0.230117 (34765931)
33575402;DBWLM RELEASE UPDATE 19.0.0.0.0 (33575402)

-- for comparison, lspatches output from a previous installation (19.13 RU, another cluster):


grid@ccm03-prd-bd02:33248471$ $ORACLE_HOME/OPatch/opatch lspatches
33239955;TOMCAT RELEASE UPDATE 19.0.0.0.0 (33239955)
33208123;OCW RELEASE UPDATE 19.13.0.0.0 (33208123)
33208107;ACFS RELEASE UPDATE 19.13.0.0.0 (33208107)
33192793;Database Release Update : 19.13.0.0.211019 (33192793)
32585572;DBWLM RELEASE UPDATE 19.0.0.0.0 (32585572)
------------ on node 2
grid@sgtd01-prd-bd01:stage$ scp -p p34773504_190000_Linux-x86-64.zip grid@sgtd01-prd-bd02:/app/stage/
grid@sgtd01-prd-bd02:stage$ unzip -K p34773504_190000_Linux-x86-64.zip -d p34773504_190000
As root
root@sgtd01-prd-bd02:~# cd /app/stage/p34773504_190000/34773504/34762026

export PATH=$PATH:/app/19.3.0/grid/OPatch:/app/19.3.0/grid/bin
opatchauto apply /app/stage/p34773504_190000/34773504/34762026 -analyze
opatchauto apply /app/stage/p34773504_190000/34773504/34762026 -oh /app/19.3.0/grid

--------------------------------Summary--------------------------------

Patching is completed successfully. Please find the summary as follows:

Host:sgtd01-prd-bd02
CRS Home:/app/19.3.0/grid
Version:19.0.0.0.0
Summary:

==Following patches were SUCCESSFULLY applied:

Patch: /app/stage/p34773504_190000/34773504/34762026/33575402
Log: /app/19.3.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2023-01-24_15-35-50PM_1.log

Patch: /app/stage/p34773504_190000/34773504/34762026/34765931
Log: /app/19.3.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2023-01-24_15-35-50PM_1.log

Patch: /app/stage/p34773504_190000/34773504/34762026/34768559
Log: /app/19.3.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2023-01-24_15-35-50PM_1.log

Patch: /app/stage/p34773504_190000/34773504/34762026/34768569
Log: /app/19.3.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2023-01-24_15-35-50PM_1.log

Patch: /app/stage/p34773504_190000/34773504/34762026/34863894
Log: /app/19.3.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2023-01-24_15-35-50PM_1.log

OPatchauto session completed at Tue Jan 24 15:54:16 2023


Time taken to complete the session 22 minutes, 28 seconds

crsctl query crs activeversion
crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [19.0.0.0.0]. The cluster upgrade state is
[NORMAL]. The cluster active patch level is [3161362881].

grid@sgtd01-prd-bd02:34762026$ $ORACLE_HOME/OPatch/opatch lspatches


34863894;TOMCAT RELEASE UPDATE 19.0.0.0.0 (34863894)
34768569;ACFS RELEASE UPDATE 19.18.0.0.0 (34768569)
34768559;OCW RELEASE UPDATE 19.18.0.0.0 (34768559)
34765931;Database Release Update : 19.18.0.0.230117 (34765931)
33575402;DBWLM RELEASE UPDATE 19.0.0.0.0 (33575402)

-----------------------Patch 34786990
https://updates.oracle.com/Orion/Services/download?type=readme&aru=25032666
Patch 34786990 - Oracle JavaVM Component Release Update 19.18.0.0.230117
-------------- do not install on the Grid home; Database home only

https://mikedietrichde.com/2020/01/24/do-you-need-to-apply-ojvm-patches-to-grid-infrastructure/

grid@sgtd01-prd-bd02:34786990$ pwd
/app/stage/p34773504_190000/34773504/34786990
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -ph ./
... Prereq "checkConflictAgainstOHWithDetail" passed.

$ORACLE_HOME/OPatch/opatch apply
OPatch failed with error code 73
-------------- do not install on the Grid home; Database home only
RDBMS 11.2.0.4 software installation

oracle@sgtd01-prd-bd01:stage$ mkdir -p /app/oracle/product/11.2.0.4/dbhome_1/


oracle@sgtd01-prd-bd01:stage$ unzip -K p13390677_112040_Linux-x86-64_1of7.zip
oracle@sgtd01-prd-bd01:stage$ unzip -K p13390677_112040_Linux-x86-64_2of7.zip
files in /app/stage/database

moba
ssh oracle@sgtd01-prd-bd01
oracle / q:VEG+ltap
cd /app/stage/database
oracle@sgtd01-prd-bd01:database$ ./runInstaller
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 120 MB. Actual 1940 MB Passed
Checking swap space: must be greater than 150 MB. Actual 4095 MB Passed
Checking monitor: must be configured to display at least 256 colors. Actual 16777216 Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2023-01-24_04-42-28PM.
Please wait ...

1. I do NOT wish to receive security updates via My Oracle Support

2. Skip software updates

3. Install database software only

4. Oracle Real Application Clusters database installation

select the two servers
ssh connectivity
reuse private and public keys
5. English

6. Enterprise Edition
7. Specify installation location
base: /app/oracle
software: /app/oracle/product/11.2.0.4/dbhome_1/

8. Operating system groups

dba, oper
oracle@sgtd01-prd-bd01:database$ id
uid=54321(oracle) gid=54321(oinstall)
groups=54321(oinstall),54322(dba),54330(racdba),54332(asmdba),54333(asmoper),54334(asmad
min) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
9. Validations:

---PRVF-7532 : Package "elfutils-libelf-devel" is missing on node


Installing 11.2.0.3 Or 11.2.0.4 (32-bit (x86) or 64-bit (x86-64)) On RHEL6 Reports That Packages "elfutils-libelf-devel-0.97" And "pdksh-5.2.14" Are Missing (PRVF-7532) (Doc ID 1454982.1)

---PRVF-7532 : Package "pdksh" is missing on node


Installing 11.2.0.3 Or 11.2.0.4 (32-bit (x86) or 64-bit (x86-64)) On RHEL6 Reports That Packages "elfutils-libelf-devel-0.97" And "pdksh-5.2.14" Are Missing (PRVF-7532) (Doc ID 1454982.1)
cd /app/stage/database/stage/cvu/cv/admin
cp cvu_config backup_cvu_config
vi cvu_config
change CV_ASSUME_DISTID=OEL4 to CV_ASSUME_DISTID=OEL6
Restart installer
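The same cvu_config change can be made non-interactively (a sketch; the backup was already taken above):

# sketch: switch CV_ASSUME_DISTID from OEL4 to OEL6 without opening vi
cd /app/stage/database/stage/cvu/cv/admin
sed -i 's/^CV_ASSUME_DISTID=.*/CV_ASSUME_DISTID=OEL6/' cvu_config
grep CV_ASSUME_DISTID cvu_config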

--------------------------------------------------
CRS Integrity - This test checks the integrity of Oracle Clusterware stack across the cluster
nodes. Error:
------- PRVF-4037 : CRS is not installed on any of the nodes
------- PRVF-7593 : CRS is not found to be installed on node
RAC RDBMS Installation fails with Error: "PRVF-4037 : CRS is not installed on any of the nodes" (Doc ID 2315020.1)

less /app/oraInventory/ContentsXML/inventory.xml
<HOME_LIST>
<HOME NAME="OraGI19Home1" LOC="/app/19.3.0/grid" TYPE="O" IDX="1" CRS="true"/>
</HOME_LIST>

With the grid user (on both nodes)


/app/19.3.0/grid/oui/bin/runInstaller -updateNodeList ORACLE_HOME="/app/19.3.0/grid"
"CLUSTER_NODES={sgtd01-prd-bd01.spms.min-saude.pt,sgtd01-prd-bd02.spms.min-saude.pt}"
CRS=true

<HOME NAME="OraGI19Home1" LOC="/app/19.3.0/grid" TYPE="O" IDX="1" CRS="true">


<NODE_LIST>
<NODE NAME="sgtd01-prd-bd01.spms.min-saude.pt"/>
<NODE NAME="sgtd01-prd-bd02.spms.min-saude.pt"/>
</NODE_LIST>
</HOME>
Restart installer
-------------------------------------------------------
-- PRVF-5414 : Check of NTP Config file failed on all nodes. Cannot proceed further for the NTP
tests
-- PRVF-5415 : Check to see if NTP daemon or service is running failed
CLUVFY REPORTING PRVF-5414 and PRVF-5415 IN CRS ALERT LOG WHEN USING "CHRONY" FOR
TIME SYNC (Doc ID 2647841.1)

root@sgtd01-prd-bd01:admin# systemctl status -l chronyd


● chronyd.service - NTP client/server
Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled; vendor preset: enabled)
Active: active (running) since Mon 2023-01-16 12:20:15 WET; 1 weeks 1 days ago
Docs: man:chronyd(8)
man:chrony.conf(5)
Ignored
------------------------------------------------------------------

--PRVF-7611 : Proper user file creation mask (umask) for user "oracle" is not found on node
"ccm03-prd-bd02" [Expected = "0022" ; Found = "0027"] -
Cause: The user's OS file creation mask (umask) was not the required setting. -
Action: Set appropriate user file creation mask. Modify the users .profile or .cshrc
or .bashrc to include the required umask
in .bash_profile add umask 0022

ignore
---------------------------------------------------------------------------------
Swap Size - This is a prerequisite condition to test whether sufficient total swap space is available on
the system.
Check Failed on Nodes: [sgtd01-prd-bd02, sgtd01-prd-bd01]
Verification result of failed node: sgtd01-prd-bd02
Expected Value
: 15.6235GB (1.6382396E7KB)
Actual Value
: 4GB (4194300.0KB)

Ignore
---------------------------------------------------------------------------------
Single Client Access Name (SCAN) - This test verifies the Single Client Access Name
configuration. Error:
-
PRVG-1101 : SCAN name "null" failed to resolve - Cause: An attempt to resolve specified SCAN
name to a list of IP addresses failed because SCAN could not be resolved in DNS or GNS using
'nslookup'. - Action: Check whether the specified SCAN name is correct. If SCAN name should be
resolved in DNS, check the configuration of SCAN name in DNS. If it should be resolved in GNS
make sure that GNS resource is online.
-
PRVF-4657 : Name resolution setup check for "null" (IP address: 127.0.0.1) failed -
Cause: Inconsistent IP address definitions found for the SCAN name identified using DNS and
configured name resolution mechanism(s). - Action: Look up the SCAN name with nslookup, and
make sure the returned IP addresses are consistent with those defined in NIS and /etc/hosts as
configured in /etc/nsswitch.conf by reconfiguring the latter. Check the Name Service Cache
Daemon (/usr/sbin/nscd) by clearing its cache and restarting it.

PRVG-1101 : SCAN name "null" failed to resolve


PRVF-4657 : Name resolution setup check for "null" (IP address: 127.0.0.1) failed
grid@sgtd01-prd-bd01:34786990$ cluvfy comp scan
Single Client Access Name (SCAN) ...PASSED

Bug 25409838 - DBCA fails with error PRVG-1101 if SCAN VIP uses IPv6 addresses
(Doc ID 25409838.8)
grid@sgtd01-prd-bd01:~$ srvctl config scan
SCAN name: sgtd01-prd-scan.spms.min-saude.pt, Network: 1
Subnet IPv4: 10.105.8.0/255.255.255.0/ens224, static
Subnet IPv6:
SCAN 1 IPv4 VIP: 10.105.8.29
SCAN VIP is enabled.
SCAN 2 IPv4 VIP: 10.105.8.30
SCAN VIP is enabled.
SCAN 3 IPv4 VIP: 10.105.8.31
SCAN VIP is enabled.
grid@sgtd01-prd-bd01:~$ nslookup sgtd01-prd-scan.spms.min-saude.pt
Server: 127.0.0.1
Address: 127.0.0.1#53

Non-authoritative answer:
Name: sgtd01-prd-scan.spms.min-saude.pt
Address: 10.105.8.29
Name: sgtd01-prd-scan.spms.min-saude.pt
Address: 10.105.8.31
Name: sgtd01-prd-scan.spms.min-saude.pt
Address: 10.105.8.30

grid@sgtd01-prd-bd01:~$ cluvfy comp scan

Performing following verification checks ...

Single Client Access Name (SCAN) ...


DNS/NIS name service 'sgtd01-prd-scan.spms.min-saude.pt' ...
Name Service Switch Configuration File Integrity ...PASSED
DNS/NIS name service 'sgtd01-prd-scan.spms.min-saude.pt' ...PASSED
Single Client Access Name (SCAN) ...PASSED

Verification of SCAN was successful.

CVU operation performed: SCAN


Date: Jan 24, 2023 8:00:56 PM
CVU version: 19.18.0.0.0 (011323x8664)
Clusterware version: 19.0.0.0.0
CVU home: /app/19.3.0/grid
Grid home: /app/19.3.0/grid
User: grid
Operating system: Linux5.4.17-2136.315.5.el7uek.x86_64
10. Install...

--Installing 11G : "File Not Found" Errors Running RunInstaller or Setup.exe (WFMLRSVCApp.ear,
WFMGRApp.ear, WFALSNRSVCApp.ear) (Doc ID 468771.1)
unzip -K p13390677_112040_Linux-x86-64_2of7.zip
--------------------------------------------------------------------
Exception String: Error in invoking target 'agent nmhs' of makefile
'/app/oracle/product/11.2.0.4/dbhome_1/sysman/lib/ins_emagent.mk'. See
'/app/oraInventory/logs/installActions2023-01-24_11-39-17PM.log' for details.

--Error in invoking target 'agent nmhs' of make file ins_emagent.mk while installing Oracle 11.2.0.4
on Linux (Doc ID 2299494.1)

vi /app/oracle/product/11.2.0.4/dbhome_1/sysman/lib/ins_emagent.mk
find $(MK_EMAGENT_NMECTL)
Then replace the line with
$(MK_EMAGENT_NMECTL) -lnnz11
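A scripted version of the same ins_emagent.mk fix (a sketch; keeps a .bak copy, run on both nodes, then retry the failed step in the installer):

# sketch: append -lnnz11 to the $(MK_EMAGENT_NMECTL) line, per Doc ID 2299494.1
MKFILE=/app/oracle/product/11.2.0.4/dbhome_1/sysman/lib/ins_emagent.mk
sed -i.bak 's/\$(MK_EMAGENT_NMECTL)[[:space:]]*$/$(MK_EMAGENT_NMECTL) -lnnz11/' $MKFILE
grep -n 'MK_EMAGENT_NMECTL' $MKFILE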
----------------------------------------------------------

run as root, on two servers


/app/oracle/product/11.2.0.4/dbhome_1/root.sh
root@sgtd01-prd-bd01:~# /app/oracle/product/11.2.0.4/dbhome_1/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:


ORACLE_OWNER= oracle
ORACLE_HOME= /app/oracle/product/11.2.0.4/dbhome_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:


The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]:
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]:
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]:

Entries will be added to the /etc/oratab file as needed by


Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Finished product-specific root actions.

-----------------------------------------------------------------------------------------------------------------------------------
-----------------------------------------------------------------------------------------------------------------------------------
----------------
OPatch oracle home 11.2.0.4
unzip -K p6880880_112000_Linux-x86-64.zip -d p6880880_112000

oracle@sgtd01-prd-bd01:stage$ /app/oracle/product/11.2.0.4/dbhome_1/OPatch/opatch version


OPatch Version: 11.2.0.3.4

(1) Please take a backup of ORACLE_HOME/OPatch into a dedicated backup location.


cp -pr /app/oracle/product/11.2.0.4/dbhome_1/OPatch /app/stage/
p6880880_112000/OPatch_20230125
(2) Please make sure no directory ORACLE_HOME/OPatch exist.
yes
(3) Please unzip the OPatch downloaded zip into ORACLE_HOME directory.
unzip -K p6880880_112000_Linux-x86-64.zip -d /app/oracle/product/11.2.0.4/dbhome_1/
[A]
oracle@sgtd01-prd-bd01:stage$ /app/oracle/product/11.2.0.4/dbhome_1/OPatch/opatch version
OPatch Version: 11.2.0.3.29

Repeat on server 2
-----------------------------------------------------------------------------------------------------------------------------------
----------------
Applying PSU + OJVM to RDBMS 11.2.0.4
unzip -K p31720776_112040_Linux-x86-64.zip

cd 31720776
drwxr-xr-x. 30 oracle oinstall 4096 Sep 24 2020 31537677
drwxr-xr-x. 4 oracle oinstall 67 Sep 8 2020 31668908

patch 31537677
https://updates.oracle.com/Orion/Services/download?type=readme&aru=23856146
cd /app/stage/31720776/31537677

export ORACLE_BASE=/app/oracle
export ORACLE_HOME=/app/oracle/product/11.2.0.4/dbhome_1
export PATH=$PATH:/app/oracle/product/11.2.0.4/dbhome_1/bin:/app/oracle/product/11.2.0.4/dbhome_1/OPatch
opatch version
OPatch Version: 11.2.0.3.29

------ opatch prereq CheckConflictAgainstOHWithDetail -ph ./


oracle@sgtd01-prd-bd02:31537677$ opatch prereq CheckConflictAgainstOHWithDetail -ph ./
Oracle Interim Patch Installer version 11.2.0.3.29
Copyright (c) 2023, Oracle Corporation. All rights reserved.

PREREQ session

Oracle Home : /app/oracle/product/11.2.0.4/dbhome_1


Central Inventory : /app/oraInventory
from : /app/oracle/product/11.2.0.4/dbhome_1/oraInst.loc
OPatch version : 11.2.0.3.29
OUI version : 11.2.0.4.0
Log file location : /app/oracle/product/11.2.0.4/dbhome_1/cfgtoollogs/opatch/opatch2023-01-
25_01-14-47AM_1.log

Invoking prereq "checkconflictagainstohwithdetail"

Prereq "checkConflictAgainstOHWithDetail" passed.

OPatch succeeded.

-------------------------------------------------

cd /app/stage/31720776/31537677
opatch apply
oracle@ccm03-prd-bd01:31537677$ opatch apply
Oracle Interim Patch Installer version 11.2.0.3.29
Copyright (c) 2022, Oracle Corporation. All rights reserved.

Oracle Home : /app/oracle/product/11.2.0.4/dbhome_1


Central Inventory : /app/oraInventory
from : /app/oracle/product/11.2.0.4/dbhome_1/oraInst.loc
OPatch version : 11.2.0.3.29
OUI version : 11.2.0.4.0
Log file location : /app/oracle/product/11.2.0.4/dbhome_1/cfgtoollogs/opatch/opatch2022-06-
09_18-37-15PM_1.log

Verifying environment and performing prerequisite checks...


OPatch continues with these patches: 17478514 18031668 18522509 19121551 19769489
20299013 20760982 21352635 21948347 22502456 23054359 24006111 24732075 25869727
26609445 26392168 26925576 27338049 27734982 28204707 28729262 29141056 29497421
29913194 30298532 30670774 31103343 31537677

Do you want to proceed? [y|n]


....
--------------------------------
OPatch found the word "warning" in the stderr of the make command.
Please look at this stderr. You can re-run this make command.
Stderr output:
/usr/bin/ld: warning: -z lazyload ignored.
/usr/bin/ld: warning: -z nolazyload ignored.

- "warning: -z lazyload ignored" and "warning: -z nolazyload ignored" During Install or


Patching in 11.2.0.4 Database in OEL7/RHEL7 (Doc ID 2071922.1)

--------------------------------
OPatch found the word "error" in the stderr of the make command.
Please look at this stderr. You can re-run this make command.
Stderr output:
chmod: changing permissions of ‘/app/oracle/product/11.2.0.4/dbhome_1/bin/extjobO’:
Operation not permitted
make: [iextjob] Error 1 (ignored)
--Oracle Database 12.2.0.1 Release Update & Release Update Revision October 2019 Known
Issues (Doc ID 2568307.1)
Applying Proactive Bundle / PSU Patch fails with Error: "chmod: changing permissions of
`$ORACLE_HOME/bin/extjobO': Operation not permitted" (Doc ID 2265726.1)

oracle@ccm03-prd-bd01:31537677$ ll /app/oracle/product/11.2.0.4/dbhome_1/bin/extjobO
-rwsr-x---. 1 root oinstall 1248008 Jun 3 13:30
/app/oracle/product/11.2.0.4/dbhome_1/bin/extjobO

--ignore all
Composite patch 31537677 successfully applied.
OPatch Session completed with warnings.
Log file location: /app/oracle/product/11.2.0.4/dbhome_1/cfgtoollogs/opatch/opatch2023-01-
25_10-00-12AM_1.log

OPatch completed with warnings.


--------
oracle@sgtd01-prd-bd02:31537677$ opatch lspatches
31537677;Database Patch Set Update : 11.2.0.4.201020 (31537677)
oracle@sgtd01-prd-bd01:admin$ opatch lspatches
31537677;Database Patch Set Update : 11.2.0.4.201020 (31537677)

Patch 31668908
Patch 31668908 - Oracle JavaVM Component 11.2.0.4.201020 Database PSU
https://updates.oracle.com/Orion/Services/download?type=readme&aru=23800881

export ORACLE_HOME=/app/oracle/product/11.2.0.4/dbhome_1
export PATH=$PATH:/app/oracle/product/11.2.0.4/dbhome_1/bin:/app/oracle/product/11.2.0.4/dbhome_1/OPatch

cd /app/stage/31720776/31668908
opatch prereq CheckConflictAgainstOHWithDetail -ph ./
opatch apply

oracle@sgtd01-prd-bd01:31668908$ opatch apply


Oracle Interim Patch Installer version 11.2.0.3.29
Copyright (c) 2023, Oracle Corporation. All rights reserved.

Oracle Home : /app/oracle/product/11.2.0.4/dbhome_1


Central Inventory : /app/oraInventory
from : /app/oracle/product/11.2.0.4/dbhome_1/oraInst.loc
OPatch version : 11.2.0.3.29
OUI version : 11.2.0.4.0
Log file location : /app/oracle/product/11.2.0.4/dbhome_1/cfgtoollogs/opatch/opatch2023-01-
25_12-00-14PM_1.log
Verifying environment and performing prerequisite checks...
OPatch continues with these patches: 31668908
Do you want to proceed? [y|n] y
User Responded with: Y
All checks passed.
Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
(Oracle Home = '/app/oracle/product/11.2.0.4/dbhome_1')
Is the local system ready for patching? [y|n] y
User Responded with: Y
Backing up files...
Applying interim patch '31668908' to OH '/app/oracle/product/11.2.0.4/dbhome_1'
ApplySession: Optional component(s) [ oracle.sqlj, 11.2.0.4.0 ] , [ oracle.sqlj.companion, 11.2.0.4.0
] not present in the Oracle Home or a higher version is found.
Patching component oracle.javavm.server, 11.2.0.4.0...
Patching component oracle.precomp.common, 11.2.0.4.0...
Patching component oracle.rdbms, 11.2.0.4.0...
Patching component oracle.rdbms.dbscripts, 11.2.0.4.0...
Patching component oracle.javavm.client, 11.2.0.4.0...
Patching component oracle.dbjava.jdbc, 11.2.0.4.0...
Patching component oracle.dbjava.ic, 11.2.0.4.0...
Patch 31668908 successfully applied.
Log file location: /app/oracle/product/11.2.0.4/dbhome_1/cfgtoollogs/opatch/opatch2023-01-
25_12-00-14PM_1.log
OPatch succeeded.
--------------------
oracle@sgtd01-prd-bd01:31668908$ opatch lspatches
31668908;OJVM PATCH SET UPDATE 11.2.0.4.201020
31537677;Database Patch Set Update : 11.2.0.4.201020 (31537677)
---------------------------------
--opatch lsinventory
oracle@sgtd01-prd-bd01:31668908$ opatch lsinventory | grep 31668908
Patch 31668908 : applied on Wed Jan 25 12:00:40 WET 2023
oracle@sgtd01-prd-bd01:31668908$ opatch lsinventory | grep 31537677
Patch 31537677 : applied on Wed Jan 25 10:19:21 WET 2023
Patch description: "Database Patch Set Update : 11.2.0.4.201020 (31537677)"
select * from registry$history ORDER BY 1 desc;
SELECT TO_CHAR(action_time, 'DD-MON-YYYY HH24:MI:SS') AS action_time, action, version, id,
comments, bundle_series FROM sys.registry$history ORDER by action_time;
Create diskgroups..
ssh grid@sgtd01-prd-bd01
oracle / q:VEG+ltap
grid / W+|QUQk+@;
root / QbTkmta$GGM

oracle@sgtd01-prd-bd01:31668908$ ll -ltrh /dev/oracleasm/disks/


total 0
brw-rw----. 1 grid asmadmin 8, 161 Jan 24 19:19 FLASH02
brw-rw----. 1 grid asmadmin 8, 145 Jan 24 19:19 FLASH01
brw-rw----. 1 grid asmadmin 8, 129 Jan 24 19:19 DATA02
brw-rw----. 1 grid asmadmin 8, 113 Jan 24 19:19 DATA01
brw-rw----. 1 grid asmadmin 8, 97 Jan 24 19:20 MGMT02
brw-rw----. 1 grid asmadmin 8, 81 Jan 24 19:20 MGMT01
brw-rw----. 1 grid asmadmin 8, 49 Jan 25 12:25 OCR02
brw-rw----. 1 grid asmadmin 8, 33 Jan 25 12:25 OCR01
brw-rw----. 1 grid asmadmin 8, 65 Jan 25 12:25 OCR03

grid@sgtd01-prd-bd01:~$ env | grep ORA


ORACLE_SID=+ASM1
ORACLE_BASE=/app/grid
ORACLE_HOME=/app/19.3.0/grid
grid@sgtd01-prd-bd01:~$ cd $ORACLE_HOME/bin
grid@sgtd01-prd-bd01:bin$ ./asmca
Disk Groups --> right click --> Create

Source (primary):
[oracle@sgtdracdb11prd ~]$ asmcmd
ASMCMD> lsdg
State    Type    Rebal  Sector  Block  AU       Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N      512     4096   1048576  102399    101269   0                101269          0              N             DG_ARCH/
MOUNTED  EXTERN  N      512     4096   1048576  409598    157083   0                157083          0              N             DG_DATA/

DG_DATA --> External redundancy, disks: DATA01 + DATA02, allocation unit size 4 MB
DG_ARCH --> External redundancy, disks: FLASH01 + FLASH02, allocation unit size 4 MB

grid@sgtd01-prd-bd01:~$ asmcmd
ASMCMD> lsdg
State    Type    Rebal  Sector  Logical_Sector  Block  AU       Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  NORMAL  N      512     512             4096   4194304  15348     14432    5116             4658            0              Y             ASM/
MOUNTED  EXTERN  N      512     512             4096   4194304  204792    204640   0                204640          0              N             DG_ARCH/
MOUNTED  EXTERN  N      512     512             4096   4194304  614392    614240   0                614240          0              N             DG_DATA/
MOUNTED  NORMAL  N      512     512             4096   4194304  102392    53936    0                26968           0              N             MGMT/
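A command-line alternative to the asmca step above (a sketch, run as grid against +ASM1; disk paths and the 4 MB AU are as above, the compatible.* attribute values are an assumption suited to 11.2 databases):

# sketch: create the two external-redundancy disk groups from SQL*Plus instead of the GUI
sqlplus -s "/ as sysasm" <<'SQL'
CREATE DISKGROUP DG_DATA EXTERNAL REDUNDANCY
  DISK '/dev/oracleasm/disks/DATA01', '/dev/oracleasm/disks/DATA02'
  ATTRIBUTE 'au_size'='4M', 'compatible.asm'='11.2.0.4', 'compatible.rdbms'='11.2.0.4';
CREATE DISKGROUP DG_ARCH EXTERNAL REDUNDANCY
  DISK '/dev/oracleasm/disks/FLASH01', '/dev/oracleasm/disks/FLASH02'
  ATTRIBUTE 'au_size'='4M', 'compatible.asm'='11.2.0.4', 'compatible.rdbms'='11.2.0.4';
SQL
# the groups are mounted only on the creating node; mount them on the second node too
srvctl start diskgroup -diskgroup DG_DATA -node sgtd01-prd-bd02
srvctl start diskgroup -diskgroup DG_ARCH -node sgtd01-prd-bd02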
using dbca, create database TEST
ssh oracle@sgtd01-prd-bd01
oracle / q:VEG+ltap
grid / W+|QUQk+@;
root / QbTkmta$GGM

ORACLE_BASE=/app/oracle
export ORACLE_BASE
ORACLE_HOME=/app/oracle/product/11.2.0.4/dbhome_1
export ORACLE_HOME
ORACLE_SID=TEST
export ORACLE_SID
PATH=$ORACLE_HOME/bin:$ORACLE_HOME/lib:$PATH
export PATH

cd $ORACLE_HOME/bin
dbca

11. Password
Sys/system Omp118ilvvtp1P
dbsnmp Omp1$8ianMP
Creating the Data Guard standby for database NORPRD

NORPRD - suggested db_unique_name: NORPROD

--------------------------------------------
-- duplicate from active database....
http://www.lamimdba.com.br/2014/12/duplicate-partir-de-um-active-dataguard.html
Step by Step Guide on Creating Physical Standby Using RMAN DUPLICATE...FROM ACTIVE
DATABASE (Doc ID 1075908.1)
https://dbaclass.com/article/rman-active-cloning-from-rac-to-rac/

-- diferentes db_unique_name...
db_name string NORPRD
db_unique_name string NORPRD
standby db_unique_name : NORPROD

add to /etc/hosts:
10.105.8.110 sgtdracdb11prd.spms.local sgtdracdb11prd
10.105.8.120 sgtdracdb12prd.spms.local sgtdracdb12prd

10.105.8.111 sgtdracdb11prd-vip.spms.local sgtdracdb11prd-vip


10.105.8.121 sgtdracdb12prd-vip.spms.local sgtdracdb12prd-vip

1. Prepare the production database to be the primary database


[oracle@sgtdracdb11prd ~]$ . .profileNORPRD1
a. Ensure that the database is in archivelog mode .
SQL> select log_mode from v$database;
LOG_MODE
------------
ARCHIVELOG

b. Enable force logging


select force_logging from v$database;
SQL> ALTER DATABASE FORCE LOGGING;
ORA-12920: database is already in FORCE LOGGING mode

c. Create standby redologs

SELECT * FROM v$log;


1,2,3,4
alter database drop logfile group 1;
--alter database drop logfile group 3;
alter database add logfile THREAD 1 group 5 ('+DG_DATA','+DG_ARCH') SIZE 52428800;
alter database add logfile THREAD 2 group 6 ('+DG_DATA','+DG_ARCH') SIZE 52428800;

SELECT * FROM V$STANDBY_LOG;


alter database add standby logfile THREAD 1 group 101 ('+DG_DATA','+DG_ARCH') SIZE 52428800;
alter database add standby logfile THREAD 1 group 102 ('+DG_DATA','+DG_ARCH') SIZE 52428800;
alter database add standby logfile THREAD 1 group 103 ('+DG_DATA','+DG_ARCH') SIZE 52428800;
alter database add standby logfile THREAD 1 group 104 ('+DG_DATA','+DG_ARCH') SIZE 52428800;

alter database add standby logfile THREAD 2 group 201 ('+DG_DATA','+DG_ARCH') SIZE 52428800;
alter database add standby logfile THREAD 2 group 202 ('+DG_DATA','+DG_ARCH') SIZE 52428800;
alter database add standby logfile THREAD 2 group 203 ('+DG_DATA','+DG_ARCH') SIZE 52428800;
alter database add standby logfile THREAD 2 group 204 ('+DG_DATA','+DG_ARCH') SIZE 52428800;
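A quick verification sketch (the usual rule of thumb is one more standby group per thread than online groups, all the same size):

# sketch: compare online and standby redo log counts/sizes per thread
sqlplus -s "/ as sysdba" <<'SQL'
set linesize 200 pagesize 100
SELECT thread#, COUNT(*) AS log_groups, MAX(bytes)/1024/1024 AS size_mb
  FROM v$log GROUP BY thread# ORDER BY thread#;
SELECT thread#, COUNT(*) AS log_groups, MAX(bytes)/1024/1024 AS size_mb
  FROM v$standby_log GROUP BY thread# ORDER BY thread#;
SQL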

d. Modify the primary initialization parameter for dataguard on primary,

SELECT * FROM gv$parameter WHERE Upper(name) LIKE Upper('LOG_ARCHIVE_CONFIG');


--alter system set LOG_ARCHIVE_CONFIG='DG_CONFIG=(NORPRD,NORPROD)';
SELECT * FROM gv$parameter WHERE Upper(name) LIKE Upper('LOG_ARCHIVE_DEST_1');
--alter system set LOG_ARCHIVE_DEST_1='location=USE_DB_RECOVERY_FILE_DEST
valid_for=(ALL_LOGFILES, ALL_ROLES) DB_UNIQUE_NAME=NORPRD';

SELECT * FROM gv$parameter WHERE Upper(name) LIKE Upper('%DB_RECO%');


SELECT * FROM v$archived_log ORDER BY 1 desc;
ALTER SYSTEM SWITCH LOGFILE;
-- rman: copy archivelog '/u01/app/oracle/product/11.2.0.4/db_1/dbs/USE_DB_RECOVERY_FILE_DEST,1_12411_1026757115.dbf' to '+DG_ARCH';
-- rman: delete archivelog '/u01/app/oracle/product/11.2.0.4/db_1/dbs/USE_DB_RECOVERY_FILE_DEST,1_12411_1026757115.dbf';

SELECT * FROM gv$parameter WHERE Upper(name) LIKE Upper('LOG_ARCHIVE_DEST_2');


-- service="NORPROD", LGWR ASYNC NOAFFIRM delay=0 optional compression=disable
max_failure=0 max_connections=1 reopen=300 db_unique_name="NORPROD" net_timeout=30,
valid_for=(all_logfiles,primary_role)
alter system set LOG_ARCHIVE_DEST_2='SERVICE="NORPROD", LGWR ASYNC NOAFFIRM delay=0 optional
compression=disable max_failure=0 max_connections=1 reopen=300 db_unique_name="NORPROD",
valid_for=(all_logfiles,primary_role)';

--alter system set LOG_ARCHIVE_DEST_STATE_2=ENABLE;


--ALTER SYSTEM SET log_archive_dest_state_2=DEFER;
alter system set FAL_CLIENT=NORPRD;
alter system set FAL_SERVER=NORPROD;
alter system set DB_FILE_NAME_CONVERT='+DG_DATA','+DG_DATA' scope=spfile;
alter system set LOG_FILE_NAME_CONVERT='+DG_ARCH, +DG_ARCH, +DG_DATA, +DG_DATA' scope=spfile;

---------------------------
2. Ensure that the sql*net connectivity is working fine.
Insert a static entry for NORPROD in the listener.ora file of the standby system.
/app/19.3.0/grid/network/admin/listener.ora

SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(GLOBAL_DBNAME = NORPROD_DGMGRL)
(ORACLE_HOME = /app/oracle/product/11.2.0.4/dbhome_1)
(SID_NAME = NORPROD1)
)
(SID_DESC =
(GLOBAL_DBNAME = CENPROD_DGMGRL)
(ORACLE_HOME = /app/oracle/product/11.2.0.4/dbhome_1)
(SID_NAME = CENPROD1)
)
(SID_DESC =
(GLOBAL_DBNAME = LVTPROD_DGMGRL)
(ORACLE_HOME = /app/oracle/product/11.2.0.4/dbhome_1)
(SID_NAME = LVTPROD1)
)
(SID_DESC =
(GLOBAL_DBNAME = SULPROD_DGMGRL)
(ORACLE_HOME = /app/oracle/product/11.2.0.4/dbhome_1)
(SID_NAME = SULPROD1)
)
)

2.1 TNSNAMES.ORA for the Primary and Standby should have BOTH entries
-- FOR THE STANDBYS

NORPROD =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = sgtd01-prd-bd01-vip.spms.min-saude.pt)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SID = NORPROD1)
)
)

CENPROD =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = sgtd01-prd-bd01-vip.spms.min-saude.pt)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SID = CENPROD1)
)
)

LVTPROD =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = sgtd01-prd-bd01-vip.spms.min-saude.pt)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SID = LVTPROD1)
)
)

SULPROD =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = sgtd01-prd-bd01-vip.spms.min-saude.pt)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SID = SULPROD1)
)
)

## primary
NORPRD =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = sgtdracdb11prd-vip.spms.local)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SID = NORPRD1)
)
)

CENPRD =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = sgtdracdb11prd-vip.spms.local)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SID = CENPRD1)
)
)

LVTPRD =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = sgtdracdb11prd-vip.spms.local)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SID = LVTPRD1)
)
)

SULPRD =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = sgtdracdb11prd-vip.spms.local)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SID = SULPRD1)
)
)

------------------------------- FOR THE PRIMARY

NORPROD =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = sgtd01-prd-scan.spms.min-saude.pt)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SID = NORPROD1)
)
)

CENPROD =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = sgtd01-prd-scan.spms.min-saude.pt)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SID = CENPROD1)
)
)

LVTPROD =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = sgtd01-prd-scan.spms.min-saude.pt)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SID = LVTPROD1)
)
)

SULPROD =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = sgtd01-prd-scan.spms.min-saude.pt)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SID = SULPROD1)
)
)
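With both sides of tnsnames.ora in place, a quick connectivity sketch (run on the primary and on the standby hosts; tnsping only, since the standby instances do not exist yet at this point):

# sketch: confirm every alias resolves and the listener answers
for tns in NORPRD CENPRD LVTPRD SULPRD NORPROD CENPROD LVTPROD SULPROD; do
    echo "== $tns"
    tnsping "$tns" | tail -2
done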

-----------
3. Create the standby database
a. Copy the password file from the primary $ORACLE_HOME/dbs and rename it to the standby
database name.
[oracle@sgtdracdb11prd dbs]$ scp -p orapw* oracle@sgtd01-prd-bd01.spms.min-saude.pt:/app/oracle/product/11.2.0.4/dbhome_1/dbs/
gSypdd9QKjnTYbnRZczN

orapwCENPRD1 --> rename to orapwCENPROD1
orapwLVTPRD1 --> rename to orapwLVTPROD1
orapwNORPRD1 --> rename to orapwNORPROD1
orapwSULPRD1 --> rename to orapwSULPROD1
b. Create an initialization parameter file with only the essential parameters (DB_NAME, DB_UNIQUE_NAME, DB_BLOCK_SIZE, SGA_TARGET).
oracle@sgtd01-prd-bd01:dbs$ cd /app/oracle/product/11.2.0.4/dbhome_1/dbs
vi initNORPROD1.ora

DB_NAME=NORPRD
DB_UNIQUE_NAME= NORPROD
DB_BLOCK_SIZE=8192
sga_target=4G

c. Create the necessary directories in the standby location to place database files and trace files
($ADR_HOME).

mkdir /app/oracle/diag/rdbms/norprod
mkdir /app/oracle/admin/NORPROD
mkdir -p /app/oracle/admin/NORPROD/adump/

d. Set the environment variable ORACLE_SID to the standby SID and start the standby instance.
vi .profileNORPROD
ORACLE_BASE=/app/oracle
export ORACLE_BASE
ORACLE_HOME=/app/oracle/product/11.2.0.4/dbhome_1
export ORACLE_HOME
ORACLE_SID=NORPROD1
export ORACLE_SID
PATH=$ORACLE_HOME/bin:$ORACLE_HOME/lib:$PATH
export PATH

sqlplus "/ as sysdba"


startup nomount pfile=/app/oracle/product/11.2.0.4/dbhome_1/dbs/initNORPROD1.ora
ORACLE instance started.

Total System Global Area 4275781632 bytes


Fixed Size 2260088 bytes
Variable Size 872416136 bytes
Database Buffers 3388997632 bytes
Redo Buffers 12107776 bytes

e. Verify if the connection 'AS SYSDBA' is working


--NORPRD - SYS sxoso17
sqlplus /nolog
connect sys/sxoso17@NORPRD AS SYSDBA
connect sys/sxoso17@NORPROD AS SYSDBA

Database migration
sqlplus "/ as sysdba"
startup nomount pfile=/app/oracle/product/11.2.0.4/dbhome_1/dbs/initNORPROD1.ora
sqlplus sys/sxoso17@NORPRD as sysdba
sqlplus sys/sxoso17@NORPROD as sysdba
rman target sys/sxoso17@NORPRD auxiliary sys/sxoso17@NORPROD
spool log to duplicate_NORPROD1.log

run {
allocate channel prmy1 type disk;
allocate channel prmy2 type disk;
allocate channel prmy3 type disk;
allocate channel prmy4 type disk;
allocate auxiliary channel stby type disk;

duplicate target database for standby from active database nofilenamecheck


spfile
parameter_value_convert 'NORPRD','NORPROD'
set db_unique_name='NORPROD'
set db_file_name_convert='+DG_DATA','+DG_DATA'
set log_file_name_convert='+DG_ARCH', '+DG_ARCH', '+DG_DATA', '+DG_DATA'
set LOG_ARCHIVE_DEST_2=''
set control_files='+DG_DATA'
set log_archive_max_processes='5'
set fal_client='NORPROD'
set fal_server='NORPRD'
set standby_file_management='AUTO'
set log_archive_config='dg_config=(NORPRD,NORPROD)'
set instance_number='1'
set cluster_database='FALSE'
set diagnostic_dest='/app/oracle'
set REMOTE_LISTENER=''
set audit_file_dest='/app/oracle/admin/NORPROD/adump'
;
}
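One way (a sketch) to run the same duplicate non-interactively with a persistent log: save the run block above to a command file (the file name below is hypothetical) and pass it to RMAN:

# sketch: execute the duplicate from a command file and keep a log of the whole session
rman target sys/sxoso17@NORPRD auxiliary sys/sxoso17@NORPROD \
     cmdfile=/home/oracle/dup_norprod.rcv log=/home/oracle/duplicate_NORPROD1.log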

RMAN-04014: startup failed: ORA-48108: invalid value given for the diagnostic_dest init.ora
parameter
ORA-48140: the specified ADR Base directory does not exist [/u01/app/oracle]
ORA-48187: specified directory does not exist
Linux-x86_64 Error: 2: No such file or directory
set diagnostic_dest='/app/oracle'
RMAN-03015: error occurred in stored script Memory Script
RMAN-04014: startup failed: ORA-00119: invalid specification for system parameter
REMOTE_LISTENER
ORA-00132: syntax error or unresolved network name 'sgtddb1xracprd-scan:1521'
-- configure the primary SCAN name (sgtddb1xracprd-scan) in /etc/hosts on the new hosts:
10.105.8.130 sgtddb1xracprd-scan.spms.local sgtddb1xracprd-scan

RMAN-04014: startup failed: ORA-09925: Unable to create audit trail file


Linux-x86_64 Error: 2: No such file or directory
mkdir -p /app/oracle/admin/NORPROD/adump/

RMAN-04014: startup failed: ORA-09925: Unable to create audit trail file


Linux-x86_64 Error: 2: No such file or directory
Additional information: 9925
audit_file_dest = "/u01/app/oracle/admin/NORPROD/adump"
set audit_file_dest='/app/oracle/admin/NORPROD/adump'
--------------------------------------------------------------------------------------------------------------------------

--------------------------------------------------------------------------------------------------------------------------
set pagesize 900
set linesize 900
STARTUP NOMOUNT;
ALTER DATABASE MOUNT STANDBY DATABASE;
--ALTER DATABASE OPEN READ ONLY;
--ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
-- on primary- ALTER SYSTEM SET log_archive_dest_state_2=ENABLE;

set pagesize 900


set linesize 900
SELECT a.thread# ,a.sequence#, TO_CHAR(a.next_time,'DD-MON-YY HH24:MI:SS')
"Last_Sync_date", TO_CHAR(sysdate,'DD-MON-YY HH24:MI:SS') "sysdate" FROM v$archived_log a
, (SELECT thread# ,MAX(sequence#) FROM v$archived_log WHERE applied='YES' group by
thread#) c WHERE a.thread#=c.thread# AND a.sequence#=c."MAX(SEQUENCE#)";
select * from v$flash_recovery_area_usage;
select DB_UNIQUE_NAME, OPEN_MODE, name,log_mode, FLASHBACK_ON, DATABASE_ROLE
from v$database;
SELECT PROCESS, STATUS, THREAD#, SEQUENCE#, BLOCK#, BLOCKS FROM
gV$MANAGED_STANDBY where PROCESS='MRP0';
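Two more monitoring queries that can help while the standby catches up (a sketch; run on the standby):

# sketch: look for archive log gaps and compare received vs applied sequences per thread
sqlplus -s "/ as sysdba" <<'SQL'
set linesize 200
SELECT * FROM v$archive_gap;
SELECT thread#,
       MAX(CASE WHEN applied = 'YES' THEN sequence# END) AS applied_seq,
       MAX(sequence#) AS received_seq
  FROM v$archived_log
 GROUP BY thread#
 ORDER BY thread#;
SQL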
-----------------------------------------------------------------------------------------------------------------------------------
-
--- SPfile and add to cluster

show parameter spfile


create pfile='/home/oracle/pfileNORPROD.ora' from spfile;
- validate /home/oracle/pfileNORPROD.ora, delete *.instance_number=1,
*.cluster_database=FALSE, change NORPRD to NORPROD
create spfile='+DG_DATA/NORPROD/spfileNORPROD.ora' from
pfile='/home/oracle/pfileNORPROD.ora';
vi /app/oracle/product/11.2.0.4/dbhome_1/dbs/initNORPROD1.ora
spfile='+DG_DATA/NORPROD/spfileNORPROD.ora'
remove /app/oracle/product/11.2.0.4/dbhome_1/dbs/spfileNORPROD1.ora

SQL> STARTUP NOMOUNT;


ORA-29760: instance_number parameter not specified

-- add to cluster
srvctl add database -d NORPROD -o /app/oracle/product/11.2.0.4/dbhome_1 -s MOUNT
srvctl add instance -d NORPROD -i NORPROD1 -n sgtd01-prd-bd01
srvctl add instance -d NORPROD -i NORPROD2 -n sgtd01-prd-bd02
srvctl modify database -d NORPROD -r physical_standby -p
'+DG_DATA/NORPROD/spfileNORPROD.ora'
srvctl status database -d NORPROD
srvctl start database -d NORPROD
--srvctl modify database -d NORPROD -s "MOUNT" -r PHYSICAL_STANDBY
srvctl config database -d NORPROD

-----------------------------------------------------------------------------------------------------------------------------------
-
---------- setup dg broker
https://www.oracle.com/br/technical-resources/articles/database-performance/dataguard-setup-
broker.html
SQL> show parameter dg_
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cell_offloadgroup_name string
dg_broker_config_file1 string +DG_DATA/NORPROD/dr1.dat
dg_broker_config_file2 string +DG_DATA/NORPROD/dr2.dat
dg_broker_start boolean FALSE
--- primary
ALTER SYSTEM SET DG_BROKER_CONFIG_FILE1='+DG_DATA/NORPRD/dr1.dat' SCOPE=BOTH;
ALTER SYSTEM SET DG_BROKER_CONFIG_FILE2='+DG_DATA/NORPRD/dr2.dat' SCOPE=BOTH;
-- standby
ALTER SYSTEM SET DG_BROKER_CONFIG_FILE1='+DG_DATA/NORPROD/dr1.dat' SCOPE=BOTH;
ALTER SYSTEM SET DG_BROKER_CONFIG_FILE2='+DG_DATA/NORPROD/dr2.dat' SCOPE=BOTH;
-- primary e standby

ALTER SYSTEM SET DG_BROKER_START=TRUE;

dgmgrl /
DGMGRL>
CREATE CONFIGURATION dg_config AS PRIMARY DATABASE IS NORPRD CONNECT IDENTIFIER IS
NORPRD;
ADD DATABASE NORPROD AS CONNECT IDENTIFIER IS NORPROD MAINTAINED AS PHYSICAL;
ENABLE CONFIGURATION;
SHOW CONFIGURATION;
SHOW DATABASE NORPRD;
SHOW DATABASE NORPROD;

Instance(s):
NORPRD1
NORPRD2
Error: ORA-16737: the redo transport service for standby database "norprod" has an error
md5sum orapwNORPRD2
3d9d4fa1519bb4b1a3be044b7294ef30 orapwNORPRD2
5d46f98131375f2e8424bf9cf68c0b46 orapwNORPRD1
5d46f98131375f2e8424bf9cf68c0b46 orapwNORPROD1
5d46f98131375f2e8424bf9cf68c0b46 orapwNORPROD2
5d46f98131375f2e8424bf9cf68c0b46 orapwNORPRD2

edit database NORPROD set state=apply-off;


show database NORPROD StandbyFileManagement
EDIT DATABASE NORPROD SET PROPERTY 'StandbyFileManagement' = 'AUTO';
edit database NORPROD set state=apply-on;
SHOW DATABASE NORPROD;
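A non-interactive re-check of broker health once apply is back on (a sketch; in 11.2 DGMGRL the SHOW commands below are available, VALIDATE DATABASE only exists from 12c):

# sketch: broker status from the command line, using OS authentication
dgmgrl -silent / "show configuration verbose"
dgmgrl -silent / "show database verbose 'NORPROD'"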
-----------------------------------------------------------------------------------------------------------------------------------
-
rman
CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '+DG_DATA/NORPROD/snapcf_NORPROD.f';
-----------------------------------------------------------------------------------------------------------------------------------
-
