EM 13c2 LifeCycle Management LCM
November 2016
Oracle Enterprise Manager Lifecycle Management Administrator's Guide, 13c Release 2
E74271-04
Copyright © 2015, 2016, Oracle and/or its affiliates. All rights reserved.
Contributing Authors: Pushpa Raghavachar, Deepak Gujrathi, Jacqueline Gosselin, Jim Garrison, Leo Cloutier, Pavithra Mendon
Contributors: Enterprise Manager Cloud Control Lifecycle Management Development Teams, Quality Assurance Teams, Customer Support Teams, and Product Management Teams.
This software and related documentation are provided under a license agreement containing restrictions on
use and disclosure and are protected by intellectual property laws. Except as expressly permitted in your
license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license,
transmit, distribute, exhibit, perform, publish, or display any part, in any form, or by any means. Reverse
engineering, disassembly, or decompilation of this software, unless required by law for interoperability, is
prohibited.
The information contained herein is subject to change without notice and is not warranted to be error-free. If
you find any errors, please report them to us in writing.
If this is software or related documentation that is delivered to the U.S. Government or anyone licensing it on
behalf of the U.S. Government, then the following notice is applicable:
U.S. GOVERNMENT END USERS: Oracle programs, including any operating system, integrated software,
any programs installed on the hardware, and/or documentation, delivered to U.S. Government end users are
"commercial computer software" pursuant to the applicable Federal Acquisition Regulation and agency-
specific supplemental regulations. As such, use, duplication, disclosure, modification, and adaptation of the
programs, including any operating system, integrated software, any programs installed on the hardware,
and/or documentation, shall be subject to license terms and license restrictions applicable to the programs.
No other rights are granted to the U.S. Government.
This software or hardware is developed for general use in a variety of information management applications.
It is not developed or intended for use in any inherently dangerous applications, including applications that
may create a risk of personal injury. If you use this software or hardware in dangerous applications, then you
shall be responsible to take all appropriate fail-safe, backup, redundancy, and other measures to ensure its
safe use. Oracle Corporation and its affiliates disclaim any liability for any damages caused by use of this
software or hardware in dangerous applications.
Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of
their respective owners.
Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are
used under license and are trademarks or registered trademarks of SPARC International, Inc. AMD, Opteron,
the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro
Devices. UNIX is a registered trademark of The Open Group.
This software or hardware and documentation may provide access to or information about content, products,
and services from third parties. Oracle Corporation and its affiliates are not responsible for and expressly
disclaim all warranties of any kind with respect to third-party content, products, and services unless
otherwise set forth in an applicable agreement between you and Oracle. Oracle Corporation and its affiliates
will not be responsible for any loss, costs, or damages incurred due to your access to or use of third-party
content, products, or services, except as set forth in an applicable agreement between you and Oracle.
Contents
Part II Discovery
6.2 Provisioning Oracle Grid Infrastructure and Oracle Databases with Oracle Automatic Storage Management ............................................. 6-2
6.2.1 Prerequisites for Provisioning Oracle Grid Infrastructure and Oracle Databases with Oracle ASM ............................................. 6-2
6.2.2 Procedure for Provisioning Oracle Grid Infrastructure and Oracle Databases with Oracle ASM ............................................. 6-2
6.3 Provisioning Oracle Grid Infrastructure and Oracle Database Software Only ............................................. 6-8
6.3.1 Prerequisites for Provisioning Oracle Grid Infrastructure and Oracle Database Software Only ............................................. 6-8
6.3.2 Procedure for Provisioning Oracle Grid Infrastructure and Oracle Database Software Only ............................................. 6-9
8 Provisioning Oracle Real Application Clusters One (Oracle RAC One) Node Databases
8.1 Getting Started with Provisioning Oracle RAC One Node Databases ..................................... 8-1
8.2 Deployment Procedures for Provisioning Oracle RAC One Node Databases ........................ 8-2
8.3 Provisioning Oracle RAC One Node Databases .......................................................................... 8-2
8.3.1 Prerequisites for Provisioning Oracle RAC One Node Databases................................. 8-2
8.3.2 Procedure for Provisioning Oracle RAC One Node Databases...................................... 8-4
9 Provisioning Oracle Real Application Clusters for 10g and 11g
9.1 Getting Started with Provisioning Oracle Real Application Clusters for 10g and 11g .......... 9-1
9.2 Core Components Deployed When Provisioning Oracle RAC ................................................. 9-2
9.3 Cloning a Running Oracle Real Application Clusters ................................................................ 9-3
9.3.1 Prerequisites for Cloning a Running Oracle Real Application Clusters ....................... 9-3
9.3.2 Procedure for Cloning a Running Oracle Real Application Clusters ............................ 9-4
9.4 Provisioning Oracle Real Application Clusters Using Gold Image ....................................... 9-10
9.4.1 Prerequisites for Provisioning Oracle Real Application Clusters Using Gold Image 9-10
9.4.2 Procedure for Provisioning Oracle Real Application Clusters Using Gold Image.... 9-11
9.5 Provisioning Oracle Real Application Clusters Using Archived Software Binaries............. 9-16
9.5.1 Prerequisites for Provisioning Oracle Real Application Clusters Using Archived Software Binaries ............................................. 9-17
9.5.2 Procedure for Provisioning Oracle Real Application Clusters Using Archived Software Binaries ............................................. 9-18
9.6 Provisioning Oracle Real Application Clusters (Oracle RAC) Databases Using No Root Credentials ............................................. 9-25
12.4 Provisioning an Oracle Database Replay Client Using Installation Binaries....................... 12-9
12.4.1 Prerequisites for Provisioning an Oracle Database Replay Client Using Installation Binaries ............................................. 12-9
12.4.2 Procedure for Provisioning an Oracle Database Replay Client Using Installation Binaries ............................................. 12-10
15 Cloning Solutions in Hybrid Cloud
15.1 Hybrid Cloud Cloning Overview............................................................................................... 15-1
15.2 Cloning in Hybrid Cloud Use Cases.......................................................................................... 15-1
15.3 Prerequisites for Cloning in Oracle Cloud ............................................................................... 15-2
15.4 Cloning to Oracle Cloud .............................................................................................................. 15-2
15.4.1 Cloning a DB to Oracle Cloud (Oracle Compute Service) .......................................... 15-3
15.4.2 Cloning a PDB to Oracle Cloud....................................................................................... 15-7
15.4.3 Cloning Schema(s) to a DB or PDB on Oracle Cloud................................................. 15-11
15.4.4 Cloning a DB to a PDB on Oracle Cloud...................................................................... 15-13
15.5 Cloning from Oracle Cloud....................................................................................................... 15-16
15.5.1 Cloning a PDB from Oracle Cloud ............................................................................... 15-16
15.5.2 Cloning Schema(s) from Oracle Cloud to a DB or PDB............................................. 15-20
15.5.3 Cloning a DB from Oracle Cloud to an On-Premise PDB ......................................... 15-22
15.5.4 Cloning a DB from Oracle Cloud (Oracle Compute Service) ................................... 15-24
15.6 Cloning Within Oracle Cloud ................................................................................................... 15-28
15.6.1 Cloning a PDB Within Oracle Cloud............................................................................ 15-29
15.6.2 Cloning a DB Within Oracle Cloud (Oracle Compute Service)................................ 15-31
16 Creating Databases
16.1 Getting Started with Creating Databases .................................................................................. 16-1
16.2 Creating an Oracle Database....................................................................................................... 16-2
16.2.1 Prerequisites for Creating an Oracle Database ............................................................. 16-2
16.2.2 Procedure for Creating an Oracle Database .................................................................. 16-3
16.3 Creating Oracle Real Application Clusters Database.............................................................. 16-7
16.3.1 Prerequisites for Creating an Oracle Real Application Clusters Database ............... 16-7
16.3.2 Procedure for Creating an Oracle Real Application Clusters Database.................... 16-8
16.4 Creating Oracle Real Application Clusters One Node Database......................................... 16-10
16.4.1 Prerequisites for Creating an Oracle RAC One Node Database .............................. 16-11
16.4.2 Procedure for Creating an Oracle Real Application Clusters One Node Database 16-12
17.5.1 Unplugging and Dropping a Pluggable Database Using Enterprise Manager...... 17-30
17.5.2 Deleting Pluggable Databases Using Enterprise Manager ....................................... 17-35
17.6 Viewing Pluggable Database Job Details Using Enterprise Manager................................. 17-38
17.6.1 Viewing Create Pluggable Database Job Details ........................................................ 17-39
17.6.2 Viewing Unplug Pluggable Database Job Details ...................................................... 17-40
17.6.3 Viewing Delete Pluggable Database Job Details......................................................... 17-41
17.7 Administering Pluggable Databases Using Enterprise Manager ........................................ 17-42
17.7.1 Switching Between Pluggable Databases Using Enterprise Manager..................... 17-43
17.7.2 Altering Pluggable Database State Using Enterprise Manager................................ 17-43
18 Upgrading Databases
18.1 Getting Started .............................................................................................................................. 18-1
18.2 Supported Releases....................................................................................................................... 18-2
18.3 Upgrading Databases Using Deployment Procedure ............................................................. 18-3
18.3.1 About Deployment Procedures....................................................................................... 18-3
18.3.2 Meeting the Prerequisites................................................................................................. 18-4
18.3.3 Upgrading Oracle Cluster Database Using Deployment Procedure......................... 18-5
18.3.4 Upgrading Oracle Clusterware Using Deployment Procedure ............................... 18-11
18.3.5 Upgrading Oracle Database Instance Using Deployment Procedure ..................... 18-15
18.4 Upgrading an Oracle Database or Oracle RAC Database Instance Using the Database Upgrade Wizard ............................................. 18-19
18.4.1 Meeting the Prerequisites............................................................................................... 18-20
18.4.2 Performing the Upgrade Procedure ............................................................................. 18-20
18.5 Data Guard Rolling Upgrade.................................................................................................... 18-24
18.5.1 About Rolling Upgrades ................................................................................................ 18-25
18.5.2 Prerequisites to Rolling Upgrades ................................................................................ 18-25
18.5.3 Submitting a Rolling Upgrade Procedure for a Primary Database With One Physical Standby Database ............................................. 18-26
18.5.4 Viewing a Running or Completed Rolling Upgrade Procedure in Enterprise Manager ............................................. 18-29
Part VI Middleware Provisioning
24.2 Source Environment and Destination Environment after SOA Provisioning ..................... 24-2
24.2.1 Source and Destination Environments for a Fresh SOA Provisioning Use Case..... 24-3
24.2.2 Source and Destination Environments for SOA Cloning Use Case........................... 24-3
24.3 Supported Versions of SOA for Provisioning .......................................................................... 24-4
24.4 Before you Begin Provisioning SOA Domain and Oracle Home .......................................... 24-4
24.4.1 Create Middleware Roles and Assign Privileges to them ........................................... 24-5
24.4.2 Setting Named Credentials and Privileged Credentials for the Middleware Targets ............................................. 24-5
24.4.3 (Applicable only for a Cloning WebLogic Domain Use Case) Cloning a Database 24-5
24.5 Use Case 1: First Time Provisioning of a SOA Domain .......................................................... 24-5
24.6 Use Case 2: Provisioning from a SOA Oracle Home Based Provisioning Profile ............... 24-6
24.7 Use Case 3: Cloning from a Provisioning Profile based on an Existing SOA Domain....... 24-7
24.8 Use Case 4: Provisioning from an Existing SOA Home.......................................................... 24-7
24.9 Use Case 5: Scaling Up an Existing SOA Domain ................................................................... 24-7
24.10 Use Case 6: Cloning a Database in a WebLogic Domain ........................................................ 24-7
26.5.3 (Applicable only for a Cloning WebLogic Domain Use Case) Cloning a Database 26-7
26.6 Use Case 1: First Time Provisioning of a WebCenter Portal with Lock-downs .................. 26-7
26.7 Use Case 2: Provisioning a WebCenter Home ......................................................................... 26-8
26.8 Use Case 3: Cloning an Existing WebCenter Portal Environment ........................................ 26-9
26.9 Use Case 4: Provisioning from an Existing WebCenter Home .............................................. 26-9
26.10 Use Case 5: Scaling Up an Existing WebCenter Domain...................................................... 26-9
29 Scaling Up / Scaling Out Fusion Middleware Domains
29.1 Getting Started .............................................................................................................................. 29-1
29.2 Prerequisites .................................................................................................................................. 29-2
29.3 Running the Scale Up / Scale Out Middleware Deployment Procedure............................. 29-3
29.3.1 WebLogic Domain Scaling Up: Select Source Page...................................................... 29-4
29.3.2 WebLogic Domain Scaling Up: Managed Servers Page ............................................... 29-5
29.3.3 WebLogic Domain Scaling Up / Scaling Out: Web Tier ............................................. 29-6
29.3.4 WebLogic Domain Scaling Up / Scaling Out: Credentials Page .............................. 29-6
29.3.5 WebLogic Domain Scaling Up / Scaling Out: Schedule Page .................................... 29-7
29.3.6 WebLogic Domain Scaling Up / Scaling Out: Review Page ...................................... 29-7
29.4 Middleware Provisioning and Scale Up / Scale Out Best Practices ..................................... 29-7
33 Provisioning SOA Artifacts and Composites
33.1 Getting Started with SOA Artifacts Provisioning .................................................................... 33-1
33.2 Understanding SOA Artifacts Provisioning ............................................................................. 33-2
33.3 Deployment Procedures, Supported Releases, and Core Components Deployed ............. 33-4
33.4 Provisioning SOA Artifacts ......................................................................................................... 33-4
33.4.1 Provisioning SOA Artifacts from a Reference Installation.......................................... 33-4
33.4.2 Provisioning SOA Artifacts from Gold Image .............................................................. 33-7
33.5 Deploying SOA Composites ....................................................................................................... 33-9
Part VIII Host Management
39 Monitoring Hosts
39.1 Overall Monitoring....................................................................................................................... 39-1
39.1.1 CPU Details ........................................................................................................................ 39-1
39.1.2 Memory Details ................................................................................................................. 39-2
39.1.3 Disk Details ........................................................................................................................ 39-2
39.1.4 Program Resource Utilization ......................................................................................... 39-2
39.1.5 Log File Alerts.................................................................................................................... 39-2
39.1.6 Metric Collection Errors ................................................................................................... 39-2
39.2 Storage Details............................................................................................................................... 39-2
39.2.1 Storage Utilization............................................................................................................. 39-3
39.2.2 Overall Utilization............................................................................................................. 39-3
39.2.3 Provisioning Summary..................................................................................................... 39-3
39.2.4 Consumption Summary ................................................................................................... 39-4
39.2.5 ASM ..................................................................................................................................... 39-4
39.2.6 Databases ............................................................................................................................ 39-4
39.2.7 Disks .................................................................................................................................... 39-5
39.2.8 File Systems ........................................................................................................................ 39-5
39.2.9 Volumes .............................................................................................................................. 39-6
39.2.10 Vendor Distribution ........................................................................................................ 39-7
39.2.11 Storage History ................................................................................................................ 39-7
39.2.12 Storage Layers.................................................................................................................. 39-8
39.2.13 Storage Refresh ................................................................................................................ 39-8
40 Administering Hosts
40.1 Configuration Operations on Hosts........................................................................................... 40-1
40.1.1 Configuring File and Directory Monitoring Criteria ................................................... 40-1
40.1.2 Configuring Generic Log File Monitor Criteria ............................................................ 40-2
40.1.3 Configuring Program Resource Utilization Monitoring Criteria............................... 40-4
40.2 Administration Tasks ................................................................................................................... 40-5
40.2.1 Services ............................................................................................................................... 40-5
40.2.2 Default System Run Level................................................................................................ 40-6
40.2.3 Network Card .................................................................................................................... 40-7
40.2.4 Host Lookup Table............................................................................................................ 40-8
40.2.5 NFS Client........................................................................................................................... 40-8
40.2.6 User and Group Administration (Users) ....................................................................... 40-9
40.2.7 User and Group Administration (Groups).................................................................. 40-11
40.3 Using Tools and Commands..................................................................................................... 40-11
40.3.1 Enabling Sudo and Power Broker................................................................................. 40-12
40.3.2 Executing the Host Command Using Sudo or PowerBroker.................................... 40-12
40.3.3 Using Remote File Editor ............................................................................................... 40-13
40.4 Adding Host Targets.................................................................................................................. 40-14
40.5 Running Host Command .......................................................................................................... 40-14
40.5.1 Accessing Host Command............................................................................................. 40-14
40.5.2 Executing Host Command Using Sudo or Power Broker ......................................... 40-14
40.5.3 Execute Host Command - Multiple Hosts................................................................... 40-15
40.5.4 Execute Host Command - Group.................................................................................. 40-17
40.5.5 Execute Host Command - Single Host ......................................................................... 40-18
40.5.6 Load OS Script ................................................................................................................. 40-18
40.5.7 Load From Job Library ................................................................................................... 40-18
40.5.8 Execution History............................................................................................................ 40-19
40.5.9 Execution Results ............................................................................................................ 40-19
40.6 Miscellaneous Tasks ................................................................................................................... 40-19
40.6.1 Enabling Collection of WBEM Fetchlet Based Metrics .............................................. 40-19
40.6.2 Enabling Hardware Monitoring for Dell PowerEdge Linux Hosts ......................... 40-20
40.6.3 Adding and Editing Host Configuration..................................................................... 40-21
41.1.5 Supported Targets, Releases, and Deployment Procedures for Patching................. 41-8
41.1.6 Overview of Supported Patching Modes .................................................................... 41-13
41.1.7 Understanding the Patching Workflow ....................................................................... 41-17
41.2 Setting Up the Infrastructure for Patching.............................................................................. 41-18
41.2.1 Meeting Basic Infrastructure Requirements for Patching ......................................... 41-19
41.2.2 Creating Administrators with the Required Roles for Patching .............................. 41-19
41.2.3 Setting Up the Infrastructure for Patching in Online Mode (Connected to MOS) 41-20
41.2.4 Setting Up the Infrastructure for Patching in Offline Mode (Not Connected to MOS) ............................................. 41-22
41.2.5 Analyzing the Environment and Identifying Whether Your Targets Can Be Patched ............................................. 41-27
41.3 Identifying the Patches to Be Applied ..................................................................................... 41-28
41.3.1 About Patch Recommendations.................................................................................... 41-29
41.3.2 About Knowledge Articles for Patching ...................................................................... 41-32
41.3.3 About Service Requests for Patching ........................................................................... 41-32
41.3.4 Searching for Patches on My Oracle Support ............................................................. 41-32
41.3.5 Searching for Patches in Oracle Software Library ...................................................... 41-33
41.4 Applying Patches........................................................................................................................ 41-34
41.4.1 Creating a Patch Plan...................................................................................................... 41-35
41.4.2 Accessing the Patch Plan ................................................................................................ 41-37
41.4.3 Analyzing, Preparing, and Deploying Patch Plans.................................................... 41-38
41.4.4 Switching Back to the Original Oracle Home After Deploying a Patch Plan......... 41-48
41.4.5 Saving Successfully Analyzed or Deployed Patch Plan As a Patch Template....... 41-49
41.4.6 Creating a Patch Plan from a Patch Template and Applying Patches .................... 41-49
41.4.7 Patching Oracle Grid Infrastructure Targets............................................................... 41-51
41.4.8 Patching Oracle Exadata................................................................................................. 41-51
41.4.9 Patching Oracle Data Guard Targets............................................................................ 41-54
41.4.10 Patching Oracle Identity Management Targets ........................................................ 41-60
41.4.11 Patching Oracle Siebel Targets .................................................................................... 41-60
41.4.12 Patching Oracle Service Bus......................................................................................... 41-61
41.4.13 Rollback of Oracle Service Bus Patch ......................................................................... 41-64
41.4.14 Deploying WebLogic Patches Along with SOA or Oracle Service Bus Patches in a Single Patch Plan ............................................. 41-65
41.5 Diagnosing and Resolving Patching Issues ............................................................................ 41-67
41.5.1 Workaround for Errors................................................................................................... 41-67
41.5.2 Common Patching Issues ............................................................................................... 41-69
41.5.3 Resolving Patching Issues .............................................................................................. 41-71
41.5.4 Rolling Back Patches ...................................................................................................... 41-72
41.6 Additional Patching Tasks You Can Perform ........................................................................ 41-72
41.6.1 Viewing or Modifying a Patch Template..................................................................... 41-73
41.6.2 Saving a Deployed Patch Plan as a Patch Template................................................... 41-73
41.6.3 Downloading Patches from a Patch Template............................................................ 41-74
41.6.4 Deleting a Patch Plan ...................................................................................................... 41-74
41.6.5 Deleting a Patch Template ............................................................................................. 41-75
41.6.6 Converting a Nondeployable Patch Plan to a Deployable Patch Plan .................... 41-75
41.6.7 Associating Additional Targets to a Patch in a Patch Plan ....................................... 41-75
41.6.8 Manually Staging the Patching Root Component ...................................................... 41-77
41.6.9 Restricting Root User Access for Patching................................................................... 41-77
41.6.10 Resolving Patch Conflicts............................................................................................. 41-77
41.6.11 Analyzing the Results of Patching Operations ......................................................... 41-77
41.6.12 Customizing Patching Deployment Procedures....................................................... 41-78
41.6.13 Pausing the Patching Process While Patching Targets in Rolling Mode .............. 41-79
41.6.14 Rolling Back Patches ..................................................................................................... 41-79
41.7 End-to-End Use Case: Patching Your Data Center................................................................ 41-80
41.8 Patching Database as a Service Pools ...................................................................................... 41-80
43 Database Fleet Maintenance
43.1 About Database Fleet Maintenance ........................................................................................... 43-1
43.2 Getting Started with Fleet Maintenance.................................................................................... 43-2
43.2.1 Discovering Configuration Pollution ............................................................................. 43-2
43.2.2 Creating a Gold Image...................................................................................................... 43-3
43.2.3 Retrieving a List of Available Gold Images................................................................... 43-5
43.2.4 Changing Version Status to Current .............................................................................. 43-6
43.2.5 Subscribing the Targets to the Selected Image.............................................................. 43-7
43.2.6 Staging and Deploying the Software.............................................................................. 43-8
43.2.7 Migrating the Listeners................................................................................................... 43-10
43.2.8 Updating the Database / Cluster.................................................................................. 43-11
43.3 Update Verb Quick Reference .................................................................................................. 43-16
45.5.1 About Comparison Templates ...................................................................................... 45-17
45.5.2 Working with Comparison Templates......................................................................... 45-18
45.5.3 Specifying Rules .............................................................................................................. 45-21
45.5.4 About Rules Expression and Syntax............................................................................. 45-23
45.5.5 Understanding Rules by Example ................................................................................ 45-25
45.5.6 About Comparisons ........................................................................................................ 45-28
45.5.7 Working with Comparison Results .............................................................................. 45-37
45.5.8 Comparison and Drift Management BI Publisher Reports ....................................... 45-40
45.6 Overview of Configuration Extensions and Collections....................................................... 45-41
45.6.1 Working with Configuration Extensions..................................................................... 45-42
45.6.2 About Configuration Extensions and Deployment.................................................... 45-49
45.6.3 Extending Configuration Data Collections.................................................................. 45-51
45.6.4 Using Configuration Extensions as Blueprints ........................................................... 45-54
45.7 Overview of Parsers ................................................................................................................... 45-54
45.7.1 Managing Parsers ............................................................................................................ 45-55
45.7.2 About XML Parsers......................................................................................................... 45-56
45.7.3 About Format-Specific Parsers ...................................................................................... 45-59
45.7.4 About Columnar Parsers................................................................................................ 45-64
45.7.5 About Properties Parsers................................................................................................ 45-67
45.7.6 Using Parsed Files and Rules......................................................................................... 45-74
45.8 Overview of Relationships ........................................................................................................ 45-80
45.9 Overview of Configuration Topology Viewer ....................................................................... 45-81
45.9.1 About Configuration Topology Viewer....................................................................... 45-82
45.9.2 Examples of Using Topology......................................................................................... 45-82
45.9.3 Viewing a Configuration Topology.............................................................................. 45-82
45.9.4 Determining System Component Structure ................................................................ 45-83
45.9.5 Determining General Status of Target's Configuration Health ................................ 45-84
45.9.6 Getting Configuration Health/Compliance Score of a Target ................................. 45-84
45.9.7 Analyzing a Problem and Viewing a Specific Issue in Detail................................... 45-84
45.9.8 About Dependency Analysis ......................................................................................... 45-85
45.9.9 About Impact Analysis ................................................................................................... 45-85
45.9.10 Creating a Custom Topology View ............................................................................ 45-86
45.9.11 Deleting a Custom Topology View............................................................................. 45-86
45.9.12 Excluding Relationships from a Custom Topology View ....................................... 45-87
45.9.13 Including Relationships to a Target in a Custom Topology View ......................... 45-87
45.9.14 Creating a Relationship to a Target ............................................................................ 45-87
45.9.15 Deleting a Relationship from a Target ....................................................................... 45-88
45.9.16 Controlling the Appearance of Information on a Configuration Topology Graph 45-88
46 Managing Compliance
46.1 Overview of Compliance ............................................................................................................. 46-1
46.1.1 Terminology Used in Compliance .................................................................................. 46-2
46.1.2 Accessing the Compliance Features ............................................................................... 46-4
46.1.3 Roles and Privileges Needed to Use the Compliance Features .................................. 46-4
46.2 Evaluating Compliance................................................................................................................ 46-7
46.2.1 Accessing Compliance Statistics ..................................................................................... 46-8
46.2.2 Viewing Compliance Summary Information .............................................................. 46-10
46.2.3 Viewing Target Compliance Evaluation Results ........................................................ 46-10
46.2.4 Viewing Compliance Framework Evaluation Results............................................... 46-11
46.2.5 Managing Violations....................................................................................................... 46-11
46.2.6 Investigating Compliance Violations and Evaluation Results ................................. 46-13
46.2.7 Investigating Evaluation Errors .................................................................................... 46-18
46.2.8 Analyzing Compliance Reports .................................................................................... 46-18
46.2.9 Overview of Compliance Score and Importance........................................................ 46-19
46.3 Investigating Real-time Observations...................................................................................... 46-22
46.3.1 Viewing Observations .................................................................................................... 46-23
46.3.2 Operations on Observations During Compliance Evaluation.................................. 46-25
46.4 Configuring Compliance Management................................................................................... 46-27
46.4.1 About Compliance Frameworks ................................................................................... 46-28
46.4.2 Operations on Compliance Frameworks ..................................................................... 46-29
46.4.3 About Compliance Standards ....................................................................................... 46-36
46.4.4 Operations on Compliance Standards ......................................................................... 46-39
46.4.5 About Compliance Standard Rule Folders.................................................................. 46-49
46.4.6 About Compliance Standard Rules .............................................................................. 46-50
46.4.7 Operations on Compliance Standards Rules............................................................... 46-52
46.5 Real-time Monitoring Facets ..................................................................................................... 46-79
46.5.1 About Real-time Monitoring Facets.............................................................................. 46-79
46.5.2 Operations on Facets....................................................................................................... 46-81
46.6 Examples ...................................................................................................................................... 46-87
46.6.1 Creating Repository Rule Based on Custom Configuration Collections ................ 46-88
46.6.2 Creating Compliance Standard Agent-side and Manual Rules ............................... 46-91
46.6.3 Suppressing Violations................................................................................................... 46-98
46.6.4 Clearing Violations.......................................................................................................... 46-99
48 Managing Database Schema Changes
48.1 Overview of Change Management for Databases ................................................................... 48-1
48.2 Using Schema Baselines............................................................................................................... 48-2
48.2.1 Overview of Scope Specification..................................................................................... 48-3
48.2.2 About Capturing a Schema Baseline Version ............................................................... 48-4
48.2.3 About Working With A Schema Baseline Version ....................................................... 48-4
48.2.4 About Working With Multiple Schema Baseline Versions ......................................... 48-4
48.2.5 Exporting and Importing Schema Baselines ................................................................. 48-5
48.3 Using Schema Comparisons........................................................................................................ 48-6
48.3.1 Defining Schema Comparisons ....................................................................................... 48-6
48.3.2 About Working with Schema Comparison Versions................................................... 48-8
48.4 Using Schema Synchronizations ................................................................................................ 48-9
48.4.1 About Defining Schema Synchronizations.................................................................... 48-9
48.4.2 Creating a Synchronization Definition from a Comparison ..................................... 48-11
48.4.3 Working with Schema Synchronization Versions ...................................................... 48-12
48.4.4 Creating Additional Synchronization Versions ......................................... 48-15
48.5 Using Change Plans.................................................................................................................... 48-15
48.5.1 About Working with Change Plans.............................................................................. 48-16
48.5.2 Creating a Change Plan .................................................................................................. 48-16
48.5.3 Submitting Schema Change Plans From SQL Developer Interface ......................... 48-20
48.6 Using Database Data Comparison ........................................................................................... 48-21
48.6.1 Requirements for Database Data Comparisons.......................................................... 48-21
48.6.2 Comparing Database Data and Viewing Results ....................................................... 48-23
49.7.1 Installation Prerequisite for AIX 5.3 ............................................................................. 49-11
49.7.2 Administering AIX Auditing......................................................................................... 49-11
49.7.3 Verifying AIX System Log Files for the OS User Monitoring Module.................... 49-13
49.8 Preparing To Monitor the Oracle Database ............................................................................ 49-13
49.8.1 Setting Auditing User Privileges .................................................................................. 49-13
49.8.2 Specifying Audit Options............................................................................................... 49-13
49.9 Setting Up Change Request Management Integration.......................................................... 49-15
49.9.1 BMC Remedy Action Request System 7.1 Integration............................................... 49-15
49.10 Overview of the Repository Views Related to Real-time Monitoring Features .............. 49-22
49.11 Modifying Data Retention Periods......................................................................................... 49-27
49.12 Real-time Monitoring Supported Platforms ......................................................................... 49-28
49.12.1 OS User Monitoring ...................................................................................................... 49-28
49.12.2 OS Process Monitoring ................................................................................................. 49-31
49.12.3 OS File Monitoring ........................................................................................................ 49-32
49.12.4 OS Windows Registry Monitoring ............................................................................. 49-35
49.12.5 OS Windows Active Directory User Monitoring...................................................... 49-36
49.12.6 OS Windows Active Directory Computer Monitoring............................................ 49-36
49.12.7 OS Windows Active Directory Group Monitoring .................................................. 49-37
49.12.8 Oracle Database Table Monitoring ............................................................................. 49-37
49.12.9 Oracle Database View Monitoring.............................................................................. 49-38
49.12.10 Oracle Database Materialized View Monitoring .................................................... 49-39
49.12.11 Oracle Database Index Monitoring........................................................................... 49-40
49.12.12 Oracle Database Sequence Monitoring .................................................................... 49-40
49.12.13 Oracle Database Procedure Monitoring................................................................... 49-40
49.12.14 Oracle Database Function Monitoring ..................................................................... 49-41
49.12.15 Oracle Database Package Monitoring ...................................................................... 49-41
49.12.16 Oracle Database Library Monitoring........................................................................ 49-42
49.12.17 Oracle Database Trigger Monitoring........................................................................ 49-42
49.12.18 Oracle Database Tablespace Monitoring ................................................................. 49-43
49.12.19 Oracle Database Cluster Monitoring ........................................................................ 49-43
49.12.20 Oracle Database Link Monitoring............................................................................. 49-43
49.12.21 Oracle Database Dimension Monitoring ................................................................. 49-44
49.12.22 Oracle Database Profile Monitoring ......................................................................... 49-44
49.12.23 Oracle Database Public Link Monitoring................................................................. 49-44
49.12.24 Oracle Database Public Synonym Monitoring........................................................ 49-44
49.12.25 Oracle Database Synonym Monitoring .................................................................... 49-45
49.12.26 Oracle Database Type Monitoring............................................................................ 49-45
49.12.27 Oracle Database Role Monitoring ............................................................................. 49-46
49.12.28 Oracle Database User Monitoring............................................................................. 49-46
49.12.29 Oracle Database SQL Query Statement Monitoring .............................................. 49-46
50.1.1 Change Activity Planner Roles and Privileges ............................................................. 50-1
50.1.2 Change Activity Planner Terminology .......................................................................... 50-2
50.2 Creating a Change Activity Plan ................................................................................................ 50-6
50.2.1 Creating a Task Definition ............................................................................................... 50-8
50.2.2 Creating a Task Group.................................................................................................... 50-11
50.3 Operations on Change Activity Plans...................................................................................... 50-12
50.3.1 Creating a Plan Like Another Plan ............................................................................... 50-12
50.3.2 Editing a Plan ................................................................................................................... 50-13
50.3.3 Deleting a Plan................................................................................................................. 50-13
50.3.4 Deactivating a Plan ......................................................................................................... 50-13
50.3.5 Exporting Plans................................................................................................................ 50-14
50.3.6 Printing Plans................................................................................................................... 50-14
50.3.7 Changing the Owner of a Plan ...................................................................................... 50-14
50.4 Managing a Change Activity Plan ........................................................................................... 50-15
50.4.1 Summary Tab................................................................................................................... 50-16
50.4.2 Tasks Tab .......................................................................................................................... 50-17
50.4.3 Comments and Audit Trail Tab .................................................................................... 50-18
50.5 Viewing My Tasks ...................................................................................................................... 50-18
50.6 Example of Using Change Activity Planner ........................................................................... 50-20
50.6.1 Automating Activity Planning ...................................................................................... 50-21
50.6.2 Additional Steps in Automating Activity Planning................................................... 50-21
50.6.3 Using Change Activity Planner for Patching .............................................................. 50-22
51.5.6 Setting Step Level Grace Period .................................................................................... 51-20
51.6 Creating, Saving, and Launching User Defined Deployment Procedure (UDDP) ........... 51-21
51.6.1 Step 1: Creating User Defined Deployment Procedure ............................................. 51-21
51.6.2 Step 2: Saving and Launching User Defined Deployment Procedure with Default Inputs ............................................. 51-22
51.6.3 Step 3: Launching and Running the Saved User Defined Deployment Procedure 51-30
51.6.4 Step 4: Tracking the Submitted User Defined Deployment Procedure................... 51-31
51.7 Procedure Instance Execution Page ........................................................................................ 51-31
51.7.1 Comparison Between the Existing Design and the New Design for Procedure Instance Execution Page ............................................. 51-32
51.7.2 Overview of the Procedure Instance Execution Page ................................................ 51-33
51.7.3 Investigating a Failed Step for a Single or a Set of Targets ....................................... 51-35
51.7.4 Retrying a Failed Step..................................................................................................... 51-35
51.7.5 Creating an Incident........................................................................................................ 51-36
51.7.6 Viewing the Execution Time of a Deployment Procedure........................................ 51-36
51.7.7 Searching for a Step......................................................................................................... 51-36
51.7.8 Downloading a Step Output.......................................................................................... 51-36
51.7.9 Accessing the Job Summary Page ................................................................................. 51-37
52.6 Copying Customized Provisioning Entities from One Enterprise Manager Site to Another ............................................. 52-17
52.6.1 Prerequisites for Copying Customized Provisioning Entities from One Enterprise Manager Site to Another ............................................. 52-18
52.6.2 Copying Customized Provisioning Entities from One Enterprise Manager Site to Another ............................................. 52-18
52.7 A Workflow Example for Customizing a Directive............................................................... 52-19
52.7.1 Creating and Uploading a Copy of a Default Directive ............................................ 52-19
52.7.2 Customizing a Deployment Procedure to Use the New Directive .......................... 52-20
52.7.3 Running the Customized Deployment Procedure ..................................................... 52-20
B.3 Root Setup (Privilege Delegation) .......................................................................................................... B-2
B.4 Environment Settings............................................................................................................................... B-2
B.4.1 Kernel Requirements .................................................................................................................... B-3
B.4.2 Node Time Requirements ............................................................................................................ B-4
B.4.3 Package Requirements.................................................................................................................. B-4
B.4.4 Memory and Disk Space Requirements..................................................................................... B-4
B.4.5 Network & IP Address Requirements........................................................................................ B-5
B.5 Storage Requirements .............................................................................................................................. B-6
B.6 Installation Directories and Oracle Inventory ...................................................................................... B-7
F Troubleshooting Issues
F.1 Troubleshooting Database Provisioning Issues ................................................................................... F-1
F.1.1 Grid Infrastructure Root Script Failure ...................................................................................... F-1
F.1.2 SUDO Error During Deployment Procedure Execution.......................................................... F-2
F.1.3 Prerequisites Checks Failure........................................................................................................ F-2
F.1.4 Oracle Automatic Storage Management (Oracle ASM) Disk Creation Failure.................... F-2
F.1.5 Oracle ASM Disk Permissions Error........................................................................................... F-3
F.1.6 Specifying a Custom Temporary Directory for Database Provisioning................................ F-3
F.1.7 Incident Creation When Deployment Procedure Fails ............................................................ F-3
F.1.8 Reading Remote Log Files............................................................................................................ F-4
F.1.9 Retrying Failed Jobs ..................................................................................................................... F-4
F.2 Troubleshooting Patching Issues ........................................................................................................... F-4
F.2.1 Oracle Software Library Configuration Issues.......................................................................... F-5
F.2.2 My Oracle Support Connectivity Issues..................................................................................... F-6
F.2.3 Host and Oracle Home Credential Issues.................................................................................. F-8
F.2.4 Collection Issues ............................................................................................................................ F-9
F.2.5 Patch Recommendation Issues .................................................................................................. F-12
F.2.6 Patch Plan Issues.......................................................................................................................... F-12
F.2.7 Patch Plan Analysis Issues ......................................................................................................... F-17
F.2.8 User Account and Role Issues ................................................................................................... F-20
F.3 Troubleshooting Linux Patching Issues .............................................................................................. F-21
F.4 Troubleshooting Linux Provisioning Issues ....................................................................................... F-22
F.5 Frequently Asked Questions on Linux Provisioning ........................................................................ F-24
F.6 Refreshing Configurations..................................................................................................................... F-26
F.6.1 Refreshing Host Configuration ................................................................................................. F-26
F.6.2 Refreshing Oracle Home Configuration .................................................................................. F-27
F.7 Reviewing Log Files ............................................................................................................................... F-28
F.7.1 OMS-Related Log Files ............................................................................................................... F-28
F.7.2 Management Agent-Related Log Files ..................................................................................... F-28
F.7.3 Advanced Options....................................................................................................................... F-29
Index
List of Figures
17-44 Deleting Pluggable Databases: Procedure Activity Page.................................................. 17-38
20-1 Data Redaction Page................................................................................................................. 20-1
31-1 Deploy / Undeploy Java EE Applications: Select Target.................................................... 31-5
31-2 Deploy / Undeploy Java EE Applications: Add Applications........................................... 31-6
32-1 Create Generic Component: Describe Page........................................................................... 32-3
32-2 Create Generic Component: Select Files Page....................................................................... 32-4
32-3 Source Selection Page................................................................................................................ 32-5
32-4 Add Coherence Node Page...................................................................................................... 32-8
35-1 Bare Metal Provisioning Home Page...................................................................................... 35-3
41-1 Accessing the Patches & Updates Screen............................................................................... 41-3
41-2 Create Plan Wizard................................................................................................................... 41-6
41-3 Edit Template Wizard............................................................................................................... 41-8
41-4 In-Place Mode of Patching..................................................................................................... 41-14
41-5 Out-of-Place Mode of Patching............................................................................................. 41-16
41-6 Rolling Mode of Patching...................................................................................................... 41-17
41-7 Parallel Mode of Patching...................................................................................................... 41-17
41-8 Searching for Patches.............................................................................................................. 41-24
41-9 Downloading Patches from My Oracle Support................................................................ 41-25
41-10 Out Of Place Patching Of Clusters....................................................................................... 41-54
41-11 Patch Plan Errors Displayed on the Validation Page......................................................... 41-70
41-12 Patch Plan Errors Displayed in the Issues to Resolve Section.......................................... 41-70
41-13 Patch Plan Errors Displayed in the Plans Region............................................................... 41-71
43-1 Database Fleet Maintenance Steps.......................................................................................... 43-2
43-2 Software Standardization Advisor......................................................................................... 43-3
43-3 Creating a Gold Image.............................................................................................................. 43-4
43-4 Subscribing to an Image........................................................................................................... 43-7
43-5 Verifying the Subscription....................................................................................................... 43-8
44-1 Select QFSDP.............................................................................................................................. 44-6
46-1 Violations for a Compliance Standard................................................................................. 46-15
46-2 Violations Using the Target Compliance Tab..................................................................... 46-16
46-3 Show Details Page................................................................................................................... 46-16
46-4 Event Details and Guided Resolution.................................................................................. 46-17
46-5 Compliance Summary Region on Enterprise Summary Page.......................................... 46-17
46-6 How Compliance Score of a Compliance Standard-Target Is Calculated...................... 46-21
46-7 How Compliance Score of a Compliance Framework Is Calculated............................... 46-22
46-8 Compliance Score of Parent Node........................................................................................ 46-22
46-9 Compliance Standard Definition.......................................................................................... 46-37
46-10 Completed Create Configuration Extension Page............................................................. 46-92
46-11 Completed Compliance Standard Rule Details Page........................................................ 46-93
46-12 Completed Compliance Standard Rule Check Definition Page....................................... 46-94
46-13 Completed Compliance Standard Rule Test Page............................................................. 46-94
46-14 Completed Compliance Standard Rule Review Page....................................................... 46-95
46-15 Completed Manual Rule Page............................................................................................... 46-96
46-16 Completed Create Compliance Standard Pop-Up............................................................. 46-97
46-17 Compliance Standard Rules.................................................................................................. 46-97
46-18 Compliance Standards Library Page.................................................................................... 46-97
46-19 Completed Target Association Page.................................................................................... 46-98
46-20 CS1 - DB Check Compliance Standard in Evaluation Results Tab.................................. 46-98
46-21 Manage Violations Page - Unsuppressed Violations......................................................... 46-98
46-22 Evaluation Results Page After Violation Is Suppressed.................................................... 46-99
46-23 Manage Violations Page Showing the Suppressed Violations Tab................................. 46-99
46-24 Clearing Manual Rule Violations....................................................................................... 46-100
48-1 Steps in a Change Plan .......................................................................................................... 48-16
51-1 Accessing the Provisioning Page............................................................................................ 51-2
51-2 Procedure Execution Page..................................................................................................... 51-33
F-1 Proxy Settings Error.................................................................................................................... F-7
F-2 Inadvertently Selecting Privileged Credentials as Normal Credentials............................. F-9
F-3 Instances Not to Be Migrated Are Shown as Impacted Targets......................................... F-14
F-4 Analysis Fails Stating Node Name Property Is Missing for a Target................................ F-18
F-5 Link to Show Detailed Progress Appears Broken................................................................ F-19
List of Tables
28-7 GET Request Configuration for Listing All the Profiles...................................................... 28-7
28-8 GET Request Configuration for Listing the WebLogic Domain Profiles.......................... 28-9
28-9 GET Request Configuration for Listing the Oracle Home Profiles.................................... 28-9
28-10 GET Request Configuration for Listing the Installation Media Profiles......................... 28-10
28-11 GET Request Configuration for Describing the WebLogic Domain Profile................... 28-11
28-12 GET Request Configuration for Describing the Oracle Home Profile............................ 28-14
28-13 GET Request Configuration for Describing the Installation Media Profile.................... 28-15
28-14 DELETE Request Configuration for Deleting the WebLogic Domain Profile............... 28-17
28-15 DELETE Request Configuration for Deleting the Oracle Home Profile......................... 28-17
28-16 DELETE Request Configuration for Deleting the Installation Media Profile................ 28-18
31-1 Getting Started with Deploying, Undeploying, or Redeploying a Java EE Application 31-2
32-1 Getting Started with Deploying a Coherence Node............................................................ 32-2
32-2 Add Coherence Node Page...................................................................................................... 32-8
32-3 Environment Variables........................................................................................................... 32-10
33-1 Getting Started with Provisioning SOA Artifacts and Composites................................... 33-1
34-1 Getting Started with Provisioning Service Bus Resources.................................................. 34-1
34-2 Understanding Export Modes................................................................................................. 34-5
35-1 Getting Started with Provisioning Linux Operating System.............................................. 35-2
35-2 Checklist for Boot Server, Stage Server, RPM Repository, and Reference Host............ 35-15
35-3 Agent Settings.......................................................................................................................... 35-17
35-4 Additional OS Configuration................................................................................................ 35-17
35-5 Boot Configuration and Configuration Scripts................................................................... 35-18
35-6 Additional OS Details............................................................................................................. 35-20
35-7 Boot Configuration and Configuration Scripts................................................................... 35-21
35-8 Agent Settings.......................................................................................................................... 35-24
35-9 Additional OS Configuration................................................................................................ 35-24
35-10 Boot Configuration and Configuration Scripts................................................................... 35-25
35-11 Additional OS Configuration................................................................................................ 35-28
35-12 Boot Configuration and Configuration Scripts................................................................... 35-28
41-1 Current Patch Management Tools and Challenges.............................................................. 41-2
41-2 Supported Targets and Releases for Patching Oracle Database......................................... 41-9
41-3 Supported Targets and Releases for Patching other target types.................................... 41-11
41-4 Roles and Privileges for Using Patch Plans and Patch Templates................................... 41-20
41-5 Oracle Data Guard Patching (RAC - RAC)......................................................................... 41-56
41-6 Oracle Data Guard Patching (RAC - SIDB)......................................................................... 41-57
41-7 Oracle Data Guard Patching (SIDB - SIDB)......................................................................... 41-58
41-8 Missing Properties Error and Workaround........................................................................ 41-67
41-9 Workaround for Unsupported Configuration Errors........................................................ 41-69
41-10 Diagnosing Patching Issues................................................................................................... 41-71
42-1 Jobs Submitted for Setting Up Linux Patching Group......................................................... 42-9
42-2 Oracle Grid Infrastructure and Oracle RAC Configuration Support.............................. 42-18
43-1 Steps for Patching Databases with Dataguard Configuration......................................... 43-16
43-2 Steps for Databases with Non Dataguard Configuration................................................. 43-17
43-3 Update Activities..................................................................................................................... 43-18
44-1 Exadata Update Information................................................................................................... 44-2
44-2 Update Status............................................................................................................................. 44-3
45-1 Collected Configurations for Various Targets...................................................................... 45-1
46-1 Importance and Severity Ranges.......................................................................................... 46-20
49-1 Audit Options Table .............................................................................................................. 49-14
49-2 OS User Monitoring................................................................................................................ 49-29
49-3 OS User Monitoring................................................................................................................ 49-30
49-4 OS Process Monitoring........................................................................................................... 49-31
49-5 OS Process Monitoring (continued)..................................................................................... 49-32
49-6 OS File Monitoring.................................................................................................................. 49-33
49-7 OS File Monitoring (continued)............................................................................................ 49-35
49-8 OS Windows Registry Monitoring....................................................................................... 49-35
49-9 OS Windows Active Directory User Monitoring............................................................... 49-36
49-10 OS Windows Active Directory Computer Monitoring...................................................... 49-37
49-11 OS Windows Active Directory Group Monitoring............................................................ 49-37
49-12 Oracle Database Table Monitoring....................................................................................... 49-37
49-13 Oracle Database View Monitoring....................................................................................... 49-38
49-14 Oracle Database Materialized View Monitoring................................................................ 49-39
49-15 Oracle Database Index Monitoring....................................................................................... 49-40
49-16 Oracle Database Sequence Monitoring................................................................................ 49-40
49-17 Oracle Database Procedure Monitoring.............................................................................. 49-41
49-18 Oracle Database Function Monitoring................................................................................. 49-41
49-19 Oracle Database Package Monitoring.................................................................................. 49-41
49-20 Oracle Database Library Monitoring................................................................................... 49-42
49-21 Oracle Database Trigger Monitoring................................................................................... 49-42
49-22 Oracle Database Tablespace Monitoring............................................................................. 49-43
49-23 Oracle Database Cluster Monitoring.................................................................................... 49-43
49-24 Oracle Database Link Monitoring........................................................................................ 49-43
49-25 Oracle Database Dimension Monitoring............................................................................. 49-44
49-26 Oracle Database Profile Monitoring..................................................................................... 49-44
49-27 Oracle Database Public Link Monitoring............................................................................ 49-44
49-28 Oracle Database Public Synonym Monitoring.................................................................... 49-45
49-29 Oracle Database Synonym Monitoring................................................................................ 49-45
49-30 Oracle Database Type Monitoring........................................................................................ 49-45
49-31 Oracle Database Role Monitoring......................................................................................... 49-46
49-32 Oracle Database User Monitoring........................................................................................ 49-46
49-33 Oracle Database SQL Query Statement Monitoring.......................................................... 49-47
51-1 Predefined Roles for Designers............................................................................................... 51-3
51-2 Predefined Roles for Operators............................................................................................... 51-4
51-3 Field Description - Adding Rolling Phase........................................................................... 51-11
51-4 Field Description - Adding Steps.......................................................................................... 51-16
51-5 Deployment Procedure Status............................................................................................... 51-18
51-6 Comparison Between the Existing Procedure Activity Page and the New Procedure Activity Page...................................................................................................... 51-32
52-1 Field Description - Customizing Steps................................................................................... 52-7
52-2 Assigning Variable Values at Runtime.................................................................................. 52-9
A-1 EM CLI Provisioning Verbs and their Usage.......................................................................... A-2
A-2 EM CLI Patching Verbs and their Usage................................................................................. A-6
A-3 Software Library EM CLI Verbs and Their Usage............................................................... A-10
A-4 EM CLI Patching Scenarios..................................................................................................... A-26
A-5 Description of the Parameters Used in a Properties File That Is Used for Provisioning Oracle WebLogic Server with a Provisioning Profile.................................................... A-35
A-6 Description of the Parameters Used in a Properties File for Scaling Up or Scaling Out a WebLogic Server.................................................................................................................. A-39
C-1 emctl partool Options................................................................................................................. C-2
E-1 Creating Administrators with the Required Roles................................................................. E-3
F-1 Status Values.............................................................................................................................. F-25
F-2 Maturity Levels.......................................................................................................................... F-25
Preface
Audience
This guide is primarily meant for administrators who want to use the discovery,
provisioning, patching, and configuration and compliance management features
offered by Cloud Control to meet their lifecycle management challenges. As an
administrator, you can be either a Designer, who performs the role of a system
administrator and does critical data center operations, or an Operator, who runs the
default as well as custom deployment procedures, patch plans, and patch templates to
manage the enterprise configuration.
Documentation Accessibility
For information about Oracle's commitment to accessibility, visit the Oracle Accessibility Program website at http://www.oracle.com/pls/topic/lookup?ctx=acc&id=docacc.
Related Documents
For more information, see the following books in the Cloud Control documentation
library:
• Oracle Enterprise Manager Cloud Control Advanced Installation and Configuration Guide
Conventions
The following conventions are used in this document:
Convention Meaning
boldface: Indicates graphical user interface elements associated with an action, or terms defined in text or the glossary.
italic: Indicates book titles, emphasis, or placeholder variables for which you supply particular values.
Other Graphics: Graphics have been used extensively in addition to the textual descriptions to ensure that certain concepts and processes are illustrated better.
Part, Chapter, or Section: Export and Import of Domain Partitions; About Exporting and Importing WebLogic Domain Partitions; Exporting WebLogic Domain Partition; Importing WebLogic Domain Partition
Change Description: Oracle WebLogic Server supports WebLogic domain partition functionality. Introducing export and import functionality between WebLogic Server domains.
Part, Chapter, or Section: Patching Oracle Service Bus; Rollback of Oracle Service Bus Patch; Deploying WebLogic Patches Along with SOA or Oracle Service Bus Patches In A Single Patch Plan; About Patch Recommendations; Supported Targets, Releases, and Deployment Procedures for Patching
Change Description: Fusion Middleware patching for Oracle Service Bus domains using Enterprise Manager Cloud Control.
Part, Chapter, or Section: Working with Inventory and Usage Details; Provisioning Middleware Domains and Oracle Homes
Change Description: Included important information about the inventory location in a shared Oracle Home deployment.
Part I
Overview and Setup Details
Compliance Management
• Evaluates the compliance of targets and systems as they relate to your business best practices for configuration, security, and storage.
• Advises how to change configuration to bring your targets and systems into compliance.
• Helps you define, customize, and manage compliance frameworks, compliance standards, and compliance standard rules.
• Helps you test your environment against the criteria defined for your company or regulatory bodies using these self-defined entities.
Enterprise Data Governance
• Provides the means to identify databases within the enterprise that potentially contain sensitive data, and then to evaluate the data within these candidates to determine if sensitive data exists.
• Uses metadata discovery to identify databases containing objects that are protected by security features known as Protection Policies.
• Discovers sensitive database candidates by identifying application signatures, a set of database objects such as schemas, tables, and views that are unique to a specific application.
• Performs metadata discovery automatically whenever a database target is discovered. This feature can be disabled if you want more control over when and how the metadata discovery job runs.
• Enables you to associate a sensitive database candidate with a new or existing Application Data Model (ADM) and set sensitive columns for the ADM.
Change Activity Planner
• Enables you to plan, manage, and monitor operations within your data center. These operations involve dependencies and coordination across teams and business owners, as well as multiple processes.
• Provides the ability to create plans comprising one or more tasks. Tasks can be associated with operations such as a patch template, a compliance standard, or a manual job.
• Enables you to monitor all managed plans. This helps you to identify any issues that may delay the activity plan completion deadline.
• Prints plans that can be used for reporting purposes. Information includes an overall summary across all plans, a plan summary within a given plan, overall tasks across all plans, and a task summary across tasks within a given plan.
Note:
This chapter describes the infrastructure requirements you must meet before you start
using the lifecycle management features. This chapter is essentially for administrators
or designers who create the infrastructure. The setup described in this chapter has to be
performed just once.
This chapter covers the following:
• Setting Up Credentials
Click the reference links provided against the steps in Table 2-1 for more
information on each of the sections.
Note:
Ensure that the OMS is patched appropriately to the required level. For
information about the patches that need to be applied on the Enterprise
Manager Cloud Control Management Server (OMS) for using the Provisioning
and Patching features, see My Oracle Support note 427577.1.
To start using the Software Library to create and manage entities, the Software Library
Storage Locations must be configured. System Administrators are responsible for
configuring the Software Library storage locations, following which the Software
Library becomes usable.
Cloud Control offers the following types of storage locations:
• Upload File Locations: These locations are configured for storing files uploaded by Software Library as part of creating or updating an entity. The Upload File Locations support two storage options: OMS Shared File System locations and OMS Agent File System locations.
• Referenced File Locations: These are locations that allow you to leverage the organization's existing IT infrastructure (such as file servers, web servers, or storage systems) for sourcing software binaries and scripts. Such locations allow entities to refer to files without having to upload them explicitly to Software Library storage. Referenced File Locations support three storage options:
1. HTTP Locations
2. NFS Locations
3. Agent Locations
See Also:
Note:
You can even register the root account details as Named Credentials for the
privileged users. Once they are registered as Named Credentials, you can save them as
Preferred Credentials if you want.
The advantages of saving the credentials are:
• You do not have to expose the credential details to all the users.
• It saves time and effort, as you do not have to specify the user name and password
every time for each Oracle home or host machine; you can instead select a named
profile that uses the saved credentials.
For more information on Named Credentials, see Oracle Enterprise Manager Cloud
Control Security Guide.
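If you prefer the command line, a host named credential can also be created with EM CLI. The following is only a minimal sketch, not taken from this guide: the credential name, user name, and password are placeholders, and you should confirm the verb options against the EM CLI reference for your release.
# Hypothetical example: register a host named credential for the oracle user.
# The credential name and attribute values are placeholders.
emcli create_named_credential -cred_name=ORACLE_HOST_CREDS -auth_target_type=host -cred_type=HostCreds -attributes="HostUserName:oracle;HostPassword:<password>"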
While most steps within a Deployment Procedure can be run as a normal user, there
are some steps that require special permissions and privileges, and the Oracle account
credentials or the root account credentials may not be sufficient. Under such
circumstances, use authentication utilities to run some steps within the Deployment
Procedure with the privileges of another user. The authentication utilities supported
by Enterprise Manager Cloud Control are SUDO and PowerBroker. This support is
offered using the Privilege Delegation mechanism available in Enterprise Manager
Cloud Control.
For a conceptual overview of Privilege Delegation and the authentication tools
supported by it, see Oracle Enterprise Manager Cloud Control Security Guide.
Table 2-2 lists the use cases pertaining to credentials, and describes the steps to be
performed for setting up credentials for provisioning. Select the use case that best
matches with the situation you are in, and follow the suggested instructions.
• Super Administrator
• Designers (EM_ALL_DESIGNER)
• Operators (EM_ALL_OPERATOR)
Super Administrators
Super Administrators are powerful Cloud Control administrators with full access
privileges on all targets. They are responsible for creating and administering accounts
within the Cloud Control environment. For example, Super Administrators create the
Designer and Operator roles, and grant these roles to different users and groups
within their enterprise.
Designers
Designers are lead administrators with increased privileges on Deployment Procedures
and Software Library. Starting with Cloud Control, designers can create deployment
procedure templates using the Lock Down feature, and save these templates to enforce
standardization and consistency. Operator privileges are granted on these templates
so that administrators who log in as Operators can launch them and run the
Deployment Procedure successfully. Doing this ensures that the procedures are less
error prone and more consistent.
For more information about saving deployment procedures using lock downs, see
Saving and Launching the Deployment Procedure with Lock Down.
Designers are responsible for performing all the design-time activities like:
• Creating components, directives, and images, and storing them in Oracle Software
Library.
Note:
Designers can choose to perform both design-time and run-time activities, but
operators can perform only run-time activities.
1. In Cloud Control, from the Setup menu, select Security, then select
Administrators.
a. On the Properties page, specify the name Designer and provide a password.
Leave the other fields blank, and click Next.
Note:
c. On the Target Privileges page, select the target privileges that must be
granted to a Designer user account. For information about the target
privileges available to an Administrator with Designer role, see Granting
Roles and Privileges to Administrators on the Deployment Procedure
e. On the Review page, review the information you have provided for this user
account, and click Finish.
1. In Cloud Control, from the Setup menu, select Security, then select
Administrators.
a. On the Properties page, specify the name Operator and provide a password.
Leave the other fields blank and click Next.
Note:
You can alternatively restrict the Operator access to either the Provisioning or
Patching domain. For granting operator privileges explicitly for Provisioning, select
the EM_PROVISION_OPERATOR role. Similarly, for granting operator
privileges explicitly for Patching, select the EM_PATCH_OPERATOR role.
c. On the Target Privileges page, select the target privileges that must be
granted to an Operator user account. For information about the target
e. On the Review page, review the information you have provided for this user
account, and click Finish.
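The same administrator accounts can also be created from the command line with EM CLI. This is only an illustrative sketch: the user names and passwords are placeholders, and the roles shown are the EM_ALL_DESIGNER and EM_ALL_OPERATOR roles mentioned earlier in this chapter.
# Hypothetical examples: create a Designer and an Operator account and grant
# the corresponding out-of-the-box roles. Names and passwords are placeholders.
emcli create_user -name="DESIGNER1" -password="<password>" -roles="EM_ALL_DESIGNER"
emcli create_user -name="OPERATOR1" -password="<password>" -roles="EM_ALL_OPERATOR"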
1. In Cloud Control, from the Setup menu, select Extensibility, then select Self
Update.
3. From the Actions menu, select Subscribe to ensure that you receive notification
whenever a provisioning bundle is available for download.
4. In the Updates Home page, select update of Type Provisioning Bundle and from
the Actions menu, select Open.
1. From the Enterprise menu, select Provisioning and Patching, then select
Software Library.
4. Click Stage.
5. Specify the host (where you want to stage the root dispatcher), and the dispatcher
location, that is, the location on the specified host where you want to stage the
root dispatcher. Click Submit to manually stage the root dispatcher.
8. Click Stage.
9. Specify the host (where you want to stage the patching root component), and the
dispatcher location, that is, the location on the specified host where you want to
stage the patching root component. Click Submit to manually stage the patching
root component.
Note:
Before you manually stage the root components at a custom location, ensure
that the user has the read and execute permissions on the dispatcher location
and on root_dispatcher.sh.
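For example, on a Linux host the read and execute permissions could be granted as shown below; the staging path is a placeholder and should match the dispatcher location you chose.
# Illustrative only: grant read and execute permissions on a hypothetical
# dispatcher location and on the staged root_dispatcher.sh script.
chmod 755 /u01/app/stage
chmod 755 /u01/app/stage/root_dispatcher.sh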
Note:
After staging the patching root component manually, you must specify this in
the patch plan that you create for applying the required Oracle patches.
Access the Deployment Options page of the patch plan that you created. In the
Where to Stage section, select No (already staged) for Stage Root Component,
and specify the dispatcher location where you have staged the patching root
component manually. If the dispatcher location is a shared location, select
Dispatcher Location Shared.
Note:
For information on commands to be run as root for patching RAC and Grid
Infrastructure databases, see Oracle Grid Infrastructure and Oracle RAC
Configuration Support
1. From the Enterprise menu, select Provisioning and Patching, then select
Software Library.
4. Click Stage.
5. Specify the host (where you want to stage the root dispatcher), and the dispatcher
location, that is, the location on the specified host where you want to stage the
root dispatcher. Click Submit to manually stage the root dispatcher.
8. Click Stage.
9. Specify the host (where you want to stage the patching root component), and the
dispatcher location, that is, the location on the specified host where you want to
stage the patching root component. Click Submit to manually stage the patching
root component.
Note:
Before you manually stage the root components at a custom location, ensure
that the user has the read and execute permissions on the dispatcher location
and on root_dispatcher.sh.
Note:
After staging the root components manually, you must access the Select
Software Locations page from the Provision Oracle Database wizard. In the
Root Dispatcher Location, specify the same dispatcher location where you
have staged the root components manually, and select Select this option if all
the root scripts are staged to ROOT_DISPATCH_LOC already.
If you have not manually staged the root scripts component, then you can use
the database provisioning workflow to specify the details. In the Select
Software Locations page of the Provision Oracle Database wizard, enter a
location where you want to stage the root scripts. If you do not provide the
location details, then a standard enterprise manager stage location will be
used to stage the root components.
<command_alias> represents the alias that describes the entities that can be run by
aime as root.
<agent_home> represents the Management Agent home.
<dispatcher_loc> represents the root dispatcher location where the root
component is staged. By default, the root component is automatically staged at
%emd_emstagedir%. If you chose a custom location for the automatic staging, or
staged the root component manually at a custom location, ensure that you specify this
location for <dispatcher_loc>. Otherwise, specify the default location, that is,
%emd_emstagedir%.
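To illustrate how these placeholders fit together, the following is a hypothetical sudoers-style privilege delegation fragment only; it is not the exact entry required by Enterprise Manager, so consult the privilege delegation documentation for the precise syntax for your environment.
# Hypothetical fragment -- the alias name and paths are illustrative placeholders.
Cmnd_Alias <command_alias> = <dispatcher_loc>/root_dispatcher.sh
aime ALL=(root) <command_alias>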
Discovery is the first step toward monitoring and managing the health of your
software deployments. Discovery refers to the process of identifying unmanaged hosts
and their software deployments, and adding them as manageable targets in Oracle
Enterprise Manager Cloud Control (Cloud Control).
This chapter describes how you can discover the hosts and their software
deployments, and add them to Cloud Control. In particular, this chapter describes the
following:
• Provisioning Oracle Real Application Clusters One (Oracle RAC One) Node
Databases
• Creating Databases
Provision Oracle Real Application Clusters:
• Oracle Real Application Clusters (Oracle RAC) 12c Release 1
• Oracle RAC One Node 12c Release 1
• Oracle Grid Infrastructure 12c Release 1
• Oracle Automatic Storage Management (Oracle ASM) 12c Release 1
Provision Oracle Clusterware / Oracle RAC for UNIX and RDBMS versions 10g/11g/12c (applicable for UNIX platform):
• Oracle Real Application Clusters (Oracle RAC) 10g Release 1 to 12c Release 1
• Oracle Clusterware 10g Release 1 to 12c Release 1
• Oracle Clusterware Automatic Storage Management (Oracle ASM) 10g Release 1 to 12c Release 1
Provision Oracle Clusterware / Oracle RAC for Windows and RDBMS versions 10g/11g/12c (applicable for Windows platform):
• Oracle Real Application Clusters (Oracle RAC) 10g Release 1 to 12c Release 1
• Oracle Clusterware 10g Release 1 to 12c Release 1
• Oracle Clusterware Automatic Storage Management (Oracle ASM) 10g Release 1 to 12c Release 1
Extend/Scale Up Oracle Real Application Clusters:
• Oracle Real Application Clusters (Oracle RAC) 10g Release 1 to 12c Release 1
Delete/Scale Down Oracle Real Application Clusters:
• Oracle Real Application Clusters (Oracle RAC) 10g Release 1 to 12c Release 1
Provision Oracle Database Client:
• Oracle Database Client 10g Release 2 to 12c Release 1
Table 4-2 lists various use cases for database provisioning deployment procedures.
Provision Oracle Real Application Clusters:
• Provisioning Grid Infrastructure with Oracle Real Application Clusters Database and Configuring Database with Oracle Automatic Storage Management
• Provisioning Oracle Real Application Clusters Database with File System on an Existing Cluster
• Provisioning Oracle Real Application Clusters Database with File System on a New Cluster
Extend/Scale Up Oracle Real Application Clusters:
• Extending Oracle Real Application Clusters
Delete/Scale Down Oracle Real Application Clusters:
• Deleting Oracle Real Application Clusters (see Deleting the Entire Oracle RAC and Scaling Down Oracle RAC by Deleting Some of Its Nodes)
Provision Oracle Database Client:
• Cloning a Running Oracle Database Replay Client
• Provisioning an Oracle Database Replay Client Using Gold Image
• Provisioning an Oracle Database Replay Client Using Installation Binaries
Note:
If you have upgraded from an older version of Cloud Control to version 12c,
ensure that the CSH shell is present at /bin/csh before you run the database
provisioning deployment procedures.
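A quick way to verify this on the destination host is shown below. The symbolic link remediation assumes that tcsh provides csh on your platform and is only an example; check your operating system documentation before changing shells.
# Verify that csh is available at /bin/csh.
ls -l /bin/csh
# Example remediation on hosts where only tcsh is installed (assumption; run as
# root and adjust to your platform's packaging).
ln -s /bin/tcsh /bin/csh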
• Ensure that the host is set up for database provisioning entities. For more
information about host readiness, see Checking Host Readiness Before
Provisioning or Patching.
• If you plan to provision database software on a Microsoft Windows host, you must
ensure that Cygwin is installed on the host, before provisioning the database
software.
For information on how to install Cygwin on a host, see Enterprise Manager Cloud
Control Basic Installation Guide.
• Discover and monitor the destination hosts in Cloud Control. For this purpose, you
need the latest version of Oracle Management Agent (Management Agent) on the
destination hosts. For more information refer to the Oracle Cloud Control
Installation and Basic Configuration Guide. Ensure that the agents are installed in
the same location on all hosts.
• Set up the Oracle Software Library (Software Library). Ensure that the installation
media, database templates, or provisioning entities are available in the Software
Library. For information about creating them, see Setting Up Database
Provisioning. Alternatively, use a provisioning profile to store the database
template. For information about creating a database provisioning profile, see
Creating Database Provisioning Profiles.
• Ensure that the operating system groups corresponding to the following roles
already exist on the hosts you select for provisioning. If these groups do not exist,
then the Deployment Procedure automatically creates them. However, if these have
to be created on NIS, then you must create them manually before running the
Deployment Procedure. For information about creating these operating system
groups, refer to the Oracle Grid Infrastructure Installation Guide 12c Release 1
(11.2).
The Oracle Database user (typically oracle) must be a member of the following
groups:
• Ensure that you use an operating system user that has write permission on the
following locations:
– Oracle base directory for Grid Infrastructure where diagnostic data files related
to Grid Infrastructure can be stored.
– Oracle base directory for database where diagnostic data files related to
database can be stored.
• For provisioning Oracle Real Application Clusters Databases (Oracle RAC), the
following are additional prerequisites:
– Meet the hardware, software, and network requirements for Oracle Grid
Infrastructure and Oracle RAC installation on the target hosts.
– The Oracle RAC Database user must be a member of the group ASM Database
Administrator (ASMDBA).
• Ensure that as an operator, you have permissions to view credentials (set and
locked by the designer), view targets, submit jobs, and launch deployment
procedures.
• Ensure that the operating system groups corresponding to the following roles
already exist on the hosts you select for provisioning. The operating system users
of these groups automatically get the respective privileges.
Note:
You no longer require out-of-the-box profiles. Provisioning profiles for an
11.2.0.4 Gold Image can be created using the gold image flow. You can also
create installation media-based profiles for any version of Grid Infrastructure
and database.
If a database is used as the reference for a Gold Image, the new profile will
contain database data. If the reference database is not in ARCHIVELOG
mode, then it will be shut down and restarted during the
process.
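Before creating a profile that includes data content, you may want to check the log mode of the reference database so that a restart does not come as a surprise. A minimal check from the database host, assuming SQL*Plus and operating system authentication are available:
# Check whether the reference database is running in ARCHIVELOG mode.
sqlplus -s / as sysdba <<'EOF'
SELECT log_mode FROM v$database;
EOF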
1. From the Enterprise menu, select Provisioning and Patching, then select Database
Provisioning.
2. In the Database Procedures page, in the Profiles section, click Create. The Create
Database Provisioning Profile wizard is launched.
4. On the Search and Select: Targets window, select the reference target from which
you want to create the provisioning profile, and then click Select.
5. In the Reference Target page, the Include operation allows you to select the
components you want to include in the provisioning profile. Depending on the
reference host configuration, you can choose to include the Database Oracle Home, the
Grid Infrastructure Oracle Home, and their related configuration as part of the
provisioning profile, as follows:
• Database Oracle Home: includes the Oracle Database gold image in the profile.
• Data Content: includes the Oracle Database template (or data) in the profile.
Select Structure and Data to include the physical and structural files from the
database, or Structure Only to include only the structural files in the template.
In the Add Credentials window, specify the User Name and Password. Select Set
as Preferred Credentials if you want to set these as the Preferred Credentials. Click
Add.
Click Next.
• In the Profile Information section, enter a unique profile name of your choice.
For example:
Cluster Profile [time when created]
Retain or edit the default details such as Profile Location where you want to
store the provisioning profile in the Software Library, Name, Description,
Version, Vendor, Notes, and the Name of Components included in the profile.
• In the Schedule section, you can choose to start the profile creation immediately,
or you can schedule it for a later time.
• In the Software Library Storage section, select the Software Library Location
Type and Software Library Location Name.
• Click Next.
8. In the Review page, ensure that the selections you have made in the previous pages
are displayed correctly, and click Submit. Otherwise, click Back repeatedly until you
reach the page where you want to make changes. Click Cancel to abort the
provisioning profile creation. The Deployment Instance Name is generated from
the profile name and user name.
9. Once you have submitted the provisioning profile creation job, ensure that the
provisioning profile appears in the Database Provisioning page.
4.3.6 Describing, Creating, and Deleting Database Provisioning Profiles Using EMCLI
This method enables administrators or provisioning operators to create or delete
database provisioning profiles using EMCLI verbs.
This section explains the following:
• Use the following EMCLI verb to describe the database provisioning profile:
emcli describe_dbprofile_input
1. Use the following EMCLI verb to get the running provisioning profile instance:
emcli get_instances
2. Use the GUID obtained in the previous step to get the response file. For example:
emcli get_instance_data -instance=<GUID> >/tmp/profile.txt
For example:
emcli create_dbprofile -input_file=data:"/tmp/profile.txt"
This command takes in a property file that completely describes the type of profile
that will be created and the options used.
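Putting these verbs together, a minimal end-to-end sketch for creating a profile from a captured definition might look like the following; the GUID and file path are placeholders, and the response file would normally be edited before it is submitted.
# 1. Identify the profile creation instance and note its GUID.
emcli get_instances
# 2. Export the instance data to a response file (GUID and path are placeholders).
emcli get_instance_data -instance=<GUID> >/tmp/profile.txt
# 3. Edit /tmp/profile.txt as needed, then create the new profile from it.
emcli create_dbprofile -input_file=data:"/tmp/profile.txt"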
1. Use the following EMCLI verb to list the database profiles created:
emcli list_dbprofiles
For example:
emcli delete_dbprofile -comp_loc="Grid Infrastructure Home Provisioning Profiles/
11.2.0.2.0/linux_x64/Cluster clustername Profile 02-04-2014 08:03 PM"
comp_loc is the combination of the database profile name and the location of the
profile.
3. To check the status of the profile deletion, run the following EMCLI command:
3. Click the See All link for the operating system on which you want to provision
the database.
5. Download zip files 1 and 2 for Database and Grid Infrastructure software to the
temporary directory created earlier.
6. Navigate to the temporary directory and extract the contents of the zip files.
For example, to extract the contents of the database software zip files, run these
commands:
unzip linux_11gR2_database_1of2.zip
unzip linux_11gR2_database_2of2.zip
8. In Cloud Control, from the Enterprise menu, select Provisioning and Patching
and then select Software Library.
9. In Software Library, select the directory where you want to create the installation
media component for the database.
10. From the Actions menu, select Create Entity, then select Component.
11. In the Create Entity: Component dialog, select Subtype as Installation Media and
click Continue.
12. In the Create Installation Media: Describe page, enter the Name and Description
for the component, and click Next.
13. In the Create Installation Media: Configure page, select Product Version,
Platform, and Product from the list.
For Product, select Oracle Database for Oracle Database, Oracle Client for Oracle
Database Replay Client, and Oracle Grid Infrastructure for Grid Infrastructure
software.
Click Next.
14. In the Create Installation Media: Select Files page, select Upload Files.
b. In the Specify Source section, select File Source as Agent Machine and select
the host from which you want to upload the files.
c. Click Add.
f. Navigate to the temporary directory and select the zipped database files that
you created.
15. In the Create Installation Media: Review page, review the details you have
provided and click Save and Upload to save and upload the installation media
files to Software Library.
2. In the Databases page, click on the database from which you want to create a
template.
3. In the Database home page, from the Oracle Database menu, select Provisioning,
then select Create Database Template.
5. In the Template Options page, specify the Template Name and Description.
Specify the template location:
• Select Store Template in Software Library to specify the Storage Type and
Location on the OMS Agent File System or Shared File System.
• Use Oracle Flexible Architecture to convert the location of files in the template
to OFA.
• Maintain File Location if you want the location of the files in the template to be
identical to the source database.
Click Next.
6. In the Schedule page, specify the job name and schedule. If you want to run the job
immediately, then retain the default selection, that is, One Time (Immediately). If
you want to run the job later, then select One Time (Later) and provide time zone,
start date, and start time details. You can also select to blackout the database
during the template creation process. Click Next.
7. In the Review page, review the details you have provided for the job and if you are
satisfied with the details, then click Submit Job to run the job according to the
schedule set. If you want to modify the details, then click Back repeatedly to reach
the page where you want to make the changes.
8. In the Jobs page, verify that the job has successfully completed and the template
has been created as specified.
Note:
You can also use Database Configuration Assistant (DBCA) for creating
database templates.
You can edit and customize the database template you create and then upload
the customized template to the Software Library. For information about
uploading database templates to Software Library manually, see Uploading
Database Templates to Software Library.
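If you prefer the DBCA route mentioned in the note above, a silent-mode invocation can create a template directly from an existing database. The following is only a sketch: the SID, credentials, and template name are placeholders, and the exact options should be verified against the DBCA documentation for your database release.
# Hypothetical example: create a template from an existing database.
# SID, user, password, and template name are placeholders.
dbca -silent -createTemplateFromDB -sourceDB ORCL -templateName my_template.dbt -sysDBAUserName sys -sysDBAPassword <password>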
1. From the Enterprise menu, select Provisioning and Patching, then select
Software Library.
2. On the Software Library Home page, select the folder where you want to upload
the database template.
3. From the Actions menu, select Create Entity, then select Component. Alternately,
right click the custom folder, and from the menu, select Create Entity, then select
Component.
4. From the Create Entity: Component dialog box, select Database Template and
click Continue.
Cloud Control displays the Create Database Template page.
5. On the Describe page, enter the Name, Description, and Other Attributes that
describe the entity.
Note: The component name must be unique within the parent folder it resides in.
Sometimes, even when you enter a unique name, a conflict may be reported; this is
because there could be an entity with the same name in the folder that is not
visible to you, as you do not have view privileges on it.
Click +Add to attach the database template. Select the template as the Source file
in the format templatename.dbt or templatename.dbc. Retain the File Name as
displayed. Ensure that the file size is less than 2 MB.
In the Notes field, include information related to the entity like changes being
made to the entity or modification history that you want to track.
6. On the Select Files page, add all the database template related files.
Select Upload Files to upload all the database template files as follows:
a. In the Specify Destination section, choose the Software Library location where
you want to upload the files.
b. In the Specify Source section, select the location where you have stored the
template files. The location can be your local machine or the agent machine.
7. On the Review page, review the details and then click Save and Upload to create
the component and upload the binary to Software Library.
1. From the Enterprise menu, select Provisioning and Patching, then select
Software Library.
2. On the Software Library Home page, select any custom folder and create the
database clone component.
3. From the Actions menu, select Create Entity, then select Component. Alternately,
right click the custom folder, and from the menu, select Create Entity, then select
Component.
4. From the Create Entity: Component dialog box, select Oracle Database Software
Clone and click Continue.
Cloud Control displays the Create Oracle Database Software Clone page.
5. On the Describe page, enter the Name, Description, and Other Attributes that
describe the entity.
Note: The component name must be unique within the parent folder it resides in.
Sometimes, even when you enter a unique name, a conflict may be reported; this is
because there could be an entity with the same name in the folder that is not
visible to you, as you do not have view privileges on it.
Click +Add to attach files that describe the entity better, such as readme,
collateral, or licensing files. Ensure that the file size is less than 2 MB.
In the Notes field, include information related to the entity like changes being
made to the entity or modification history that you want to track.
6. On the Configure page, from the Create Component from menu, select Reference
Oracle Home and do the following:
a. In the Reference Oracle Home section, click the magnifier icon to select the
desired database Oracle home from the list of databases running on the host
machine.
The Oracle Home Location and Host Name fields are populated with the
selected values.
b. In the Oracle Home Credentials section, select the credential type you want to
use for accessing the targets you manage. For information about setting
credentials, see Setting Up Credentials
7. On the Review page, review the details and then click Save and Upload to create
the component and upload the binary to Software Library.
1. From the Enterprise menu, select Provisioning and Patching, then select
Software Library.
2. On the Software Library Home page, select any custom folder and create the
database clone component.
3. From the Actions menu, select Create Entity, then select Component. Alternatively, right-click the custom folder, and from the menu, select Create Entity, then select Component.
4. From the Create Entity: Component dialog box, select Oracle Database Software
Clone and click Continue.
Cloud Control displays the Create Oracle Database Software Clone page.
5. On the Describe page, enter the Name, Description, and Other Attributes that
describe the entity.
Note: The component name must be unique within the parent folder in which it resides. Sometimes, even when you enter a unique name, a conflict may be reported. This can happen when the folder contains an entity with the same name that is not visible to you because you do not have the view privilege on it.
Click +Add to attach files that describe the entity better, such as a readme, collateral, or licensing information. Ensure that the file size is less than 2 MB.
In the Notes field, include information related to the entity, such as changes being made to the entity or a modification history that you want to track.
6. On the Configure page, from the Create Component from menu, select Existing
Oracle Home Archive and do the following:
a. In the Oracle Home Archive section, select an external storage location from which the database clone software can be referenced. From the External Storage Location Name menu, select the location name.
For more information on configuring external storage locations, see Setting
Up Oracle Software Library.
In Oracle Home Archive Location, enter the path of the archive file, relative to the configured location, on the external storage location. Ensure that the archive file is a valid ZIP file.
Note:
To create the zip file of an Oracle Home, use the following syntax:
<ZIP PATH>/zip -r -S -9 -1 <archiveName.zip> <directory or list of files
to be archived> -x <patterns to exclude files>
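For example, assuming an Oracle home at /u01/app/oracle/product/12.1.0/dbhome_1 (a placeholder path), the archive might be created as follows; adjust the paths and exclude pattern to your environment:
cd /u01/app/oracle/product/12.1.0
zip -r -S -9 dbhome_1.zip dbhome_1 -x "*.log"
The -x "*.log" pattern shown here excludes log files and is only an illustration of the exclude syntax described above.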
b. In the Oracle Home Properties section, select the Product, Version, Platform,
and RAC Home values, as these configuration properties are particularly
useful to search or track an entity.
7. On the Review page, review the details and then click Save and Upload to create
the component and upload the binary to Software Library.
1. From the Enterprise menu, select Provisioning and Patching, then select
Software Library.
2. On the Software Library Home page, select any custom folder and create the
database clone component.
3. From the Actions menu, select Create Entity, then select Component. Alternatively, right-click the custom folder, and from the menu, select Create Entity, then select Component.
4. From the Create Entity: Component dialog box, select Oracle Clusterware Clone
and click Continue.
Cloud Control displays the Create Oracle Clusterware Clone: Describe page.
5. On the Describe page, enter the Name, Description, and Other Attributes that
describe the entity.
Note: The component name must be unique within the parent folder in which it resides. Sometimes, even when you enter a unique name, a conflict may be reported. This can happen when the folder contains an entity with the same name that is not visible to you because you do not have the view privilege on it.
Click +Add to attach files that describe the entity better, such as a readme, collateral, or licensing information. Ensure that the file size is less than 2 MB.
In the Notes field, include information related to the entity, such as changes being made to the entity or a modification history that you want to track.
6. On the Configure page, from the Create Component from menu, select Reference
Home and do the following:
a. In the Reference Oracle Home section, click the magnifier icon to select the
desired Oracle Clusterware Oracle home from the list of Clusterware homes
running on the host machine.
The Oracle Home Location and Host fields are populated with the selected
values.
b. In the Oracle Home Credentials section, select the credential type you want to
use for accessing the targets you manage. For information about setting
credentials, see Setting Up Credentials.
7. On the Review page, review the details and then click Save and Upload to create
the component and upload the binary to the Software Library.
1. From the Enterprise menu, select Provisioning and Patching, then select
Software Library.
2. On the Software Library Home page, select any custom folder and create the
database clone component.
3. From the Actions menu, select Create Entity, then select Component. Alternatively, right-click the custom folder, and from the menu, select Create Entity, then select Component.
4. From the Create Entity: Component dialog box, select Oracle Clusterware Clone
and click Continue.
Cloud Control displays the Create Oracle Clusterware Clone: Describe page.
5. On the Describe page, enter the Name, Description, and Other Attributes that
describe the entity.
Note: The component name must be unique within the parent folder in which it resides. Sometimes, even when you enter a unique name, a conflict may be reported. This can happen when the folder contains an entity with the same name that is not visible to you because you do not have the view privilege on it.
Click +Add to attach files that describe the entity better, such as a readme, collateral, or licensing information. Ensure that the file size is less than 2 MB.
In the Notes field, include information related to the entity, such as changes being made to the entity or a modification history that you want to track.
6. On the Configure page, from the Create Component from menu, select Existing
Oracle Home Archive and do the following:
a. In the Oracle Home Archive section, select an external storage location from which the Oracle Clusterware clone software can be referenced. From the External Storage Location Name menu, select the location name.
For more information on configuring external storage locations, see Setting
Up Oracle Software Library.
In Oracle Home Archive Location, enter the path of the archive file, relative to the configured location, on the external storage location. Ensure that the archive file is a valid ZIP file.
Note:
To create the zip file of an Oracle Home, use the following syntax:
<ZIP PATH>/zip -r -S -9 -1 <archiveName.zip> <directory or list of files
to be archived> -x <patterns to exclude files>
b. In the Oracle Home Properties section, select the Product, Version, and
Platform values, as these configuration properties are particularly useful to
search or track an entity.
7. On the Review page, review the details and then click Save and Upload to create
the component and upload the binary to Software Library.
1. From the Enterprise menu, select Provisioning and Patching, then select
Database Provisioning.
3. In the Download Cluster Verification Utility page, select one of the following:
a. Local Machine to select the CVU binaries from your local computer.
b. Agent Machine to select the CVU binaries from the agent machine.
4. Click OK. This will update the Software Library with the latest cluster verification
utility binaries.
This chapter explains how you can mass-deploy Oracle Databases (also called single-instance databases) in an unattended, repeatable, and reliable manner, using
Oracle Enterprise Manager Cloud Control (Cloud Control). In particular, this chapter
covers the following:
1. Log in as a designer, and from the Enterprise menu, select Provisioning and
Patching, then select Database Provisioning.
3. In the Select Hosts page, if you want to use a provisioning profile for the
deployment, choose Select a Provisioning Profile, and then select the profile with
previously saved configuration parameters.
In the Select destination hosts section, click Add to select the destination host
where you want to deploy and configure the software.
In the Select Tasks to Perform section, select the platform, the version for the
process, and the components you want to provision:
• To deploy the database software, select either Deploy software only or Deploy and create a new database, which creates and configures a new database after installing the standalone Oracle Database.
Click on the Lock icon against the fields that you do not want to be edited in the
operator role. For more information about the lock down feature in deployment
procedures, see Introduction to Database Provisioning.
Click Next.
4. In the Configure page, the various configuration options are displayed. Provide
values for the Setup Hosts, Deploy Software, Configure Grid Infrastructure, and
Create Database tasks.
6. In the Specify OS Users page, specify the operating system user for the Oracle
Home for the database.
Note:
For Oracle Home User for the database, select the Normal User and Privileged User
to be added to the OS group.
Click on the Lock icon against the fields that you do not want to be edited in the
operator role.
Click Next.
7. In the Specify OS Groups page, specify the OS Groups to use for operating system
authentication. Ensure that the groups corresponding to the following roles already
exist on the hosts you select for provisioning.
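The exact role-to-group mapping depends on your configuration; the following sketch merely assumes the typical default group names (oinstall for the Oracle Inventory group, dba for OSDBA, and oper for OSOPER) and shows how they could be created on a Linux host if they are missing. Verify the names against your own standards before running these commands.
# Assumed default group names; illustration only
/usr/sbin/groupadd oinstall
/usr/sbin/groupadd dba
/usr/sbin/groupadd oper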
9. In the Select Software Locations page, specify the source and destination locations
for the software binaries of Oracle Database.
In the Source section, select the Software Library location for Oracle Database
binaries.
In the Destination location, specify the following:
• Oracle Base for Database, a location on the destination host where the
diagnostic and administrative logs, and other logs associated with the database
can be stored. This location is used for storing only the dump files and is
different from the Oracle home directory where the database software will be
installed.
• Database Oracle Home, a location on the destination host where the database
software can be provisioned. This is the Oracle home directory for the database.
In the Additional Parameters section, specify the Working Directory on the
destination host where the files related to cloning can be staged temporarily.
Ensure that you have approximately 7 GB of space for this directory. For Installer
Parameters, specify any additional Oracle Universal Installer (OUI) parameters you
want to run while provisioning Oracle database. For example, -force (to override
any warnings), -debug (to view more debug information), and -invPtrLoc
<Location> (for UNIX only). Ensure that the parameters are separated by white
space.
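As an illustration only, a typical Installer Parameters value might look like the following single line; the inventory pointer path is an assumed placeholder and must point to a valid oraInst.loc file in your environment:
-force -debug -invPtrLoc /u01/app/oraInventory/oraInst.loc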
Click on the Lock icon against the fields that you do not want to be edited in the
operator role.
Click Next. You will come back to the Configure page. If you have configured the
source and destination location for the software, the Configure Software task will
have a completed status.
11. In the Database Template page, choose the database template location. The location
can be Software Library or Oracle Home. The template selected must be compatible
with the selected Oracle Home version.
If you choose Select Template from Software Library, click on the search icon and
select the template from the Software Library. Specify Temporary Storage Location
on Managed Host(s). This location must exist on all hosts where you want to create
the database.
Click Show Template Details to view details of the selected template. You can
view initialization parameters, tablespaces, data files, redo log groups, common
options, and other details of the template.
If you choose Select Template from Oracle Home, select the template from the
Oracle home. The default location is ORACLE_HOME/assistants/dbca/
templates.
Click on the Lock icon to lock the fields you have configured. These fields will not
be available for editing in the operator role.
Click Next.
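If you are unsure which templates are available in the selected Oracle home, you can list the default template directory on the host; the file names shown below are the templates typically shipped with the database software and may differ in your installation:
ls $ORACLE_HOME/assistants/dbca/templates
# Typically includes, for example: General_Purpose.dbc, Data_Warehouse.dbc, New_Database.dbt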
12. In the Identification and Placement page, specify database configuration details.
Specify Global Database Name and SID prefix. Specify the Database Credentials
for SYS, SYSTEM, and DBSNMP database accounts. You can choose to use the
same or different administrative passwords for these accounts.
Note:
• SID must be unique for a database on a host. This means, the SID assigned
to one database on a host cannot be reused on another database on the
same host, but can be reused on another database on a different host. For
example, if you have two databases (db1 and db2) on a host (host1), then
their SIDs need to be unique. However, if you install the third database on
another host (host2), then its SID can be db1 or db2.
• Global database name must be unique for a database on a host and also
unique for databases across different hosts. This means, the global database
name assigned to one database on a host can neither be reused on another
database on the same host nor on another database on a different host. For
example, if you have two databases (db1 and db2) on a host (host1), then
their global database names need to be unique. And if you install the third
database on another host (host2), the global database name of even this
database must be unique and different from all other names registered
with Cloud Control.
• The database credentials you specify here will be used on all the
destination hosts. However, after provisioning, if you want to change the
password for any database, then you must change it manually.
Click on the Lock icon to lock the fields you have configured. These fields will not
be available for editing in the operator role.
Click Next.
13. In the Storage Locations page, select the storage type, whether File System or
Automatic Storage Management (ASM).
If you want to use a file system, then select File System and specify the full path to
the location where the data file is present. For example, %ORACLE_BASE%/
oradata or /u01/product/db/oradata.
If you want to use ASM, then select Automatic Storage Management (ASM), and
click the torch icon to select the disk group name and specify ASMSNMP
password. The Disk Group Name List window appears and displays the disk
groups that are common on all the destination hosts.
In the Database Files Location section, specify the location where data files,
temporary files, redo logs, and control files will be stored.
• Select Use Database File Locations from Template to select defaults from the
template used.
• Select Use Common Location for All Database Files to specify a different
location.
If you select Use Oracle Managed Files (OMF), in the Multiplex Redo Logs and
Control Files section, you can specify locations to store duplicate copies of redo
logs and control files. Multiplexing provides greater fault-tolerance. You can
specify up to five locations.
In the Recovery Files Location section, select Use same storage type as database
files location to use the same storage type for recovery files as database files. Select
Use Flash Recovery Area and specify the location for recovery-related files and
Fast Recovery Area Size.
Select Enable Archiving to enable archive logging. Click Specify Archive Log
Locations and specify up to nine archive log locations. If the log location is not
specified, the logs will be saved in the default location.
Click on the Lock icon to lock the fields you have configured. These fields will not
be available for editing in the operator role.
Click Next.
14. In the Initialization Parameters page, select the memory management type as
Automatic Memory Management or Automatic Shared Memory Management.
Select Specify Memory Settings as Percentage of Available Memory to specify
memory settings as percentage of available physical memory. For Automatic
Shared Memory management, specify Total SGA and Total PGA. For Automatic
Memory Management, specify Total Memory for Oracle.
In the Database sizing section, specify the Block Size and number of Processes. If
you have selected a database template with datafiles in the Database Template
page, you cannot edit the Block Size.
Specify the Host CPU Count. The maximum CPU count that can be specified is
equal to the number of CPUs present on the host.
In the Character Sets section, select the default character set. The default character
set is based on the locale and operating system.
Select a national character set. The default is AL16UTF16.
In the Database Connection Mode section, select the dedicated server mode. For
shared server mode, specify the number of shared servers.
Click on the Lock icon to lock the fields you have configured. These fields will not
be available for editing in the operator role.
Click Next.
15. In the Additional Configuration Options page, all the available listeners running from
the Oracle Home are listed. You can either select a listener or create a new one. You
can select multiple listeners to register with the database. To create a new listener,
specify the Listener Name and Port. Select database schemas and specify custom
scripts, if any. Select custom scripts from the host where you are creating the
database or from Software Library. If you have selected multiple hosts, you can
specify scripts only from Software Library.
If you have selected a Structure Only database template in the Database Template
page, you can also view and edit database options.
Click on the Lock icon to lock the fields you have configured. These fields will not
be available for editing in the operator role.
Click Next.
16. Review the details you have provided for creating the database and click Next. You
will come back to the Configure page. If you have configured the database, the
Create Databases task will have a completed status.
19. In the Schedule page, specify a Deployment Instance name. If you want to run the
procedure immediately, then retain the default selection, that is, One Time
(Immediately). If you want to run the procedure later, then select One Time (Later)
and provide time zone, start date, and start time details. You can set the
notification preferences according to deployment procedure status. If you want to
run only prerequisites, you can select Pause the procedure to allow me to analyze
results after performing prerequisite checks to pause the procedure execution
after all prerequisite checks are performed.
Click Next.
20. In the Review page, review the details you have provided for the deployment
procedure and if you are satisfied with the details, then click Finish to run the
deployment procedure according to the schedule set. If you want to modify the
details, then click Back repeatedly to reach the page where you want to make the
changes. Click Save to save the deployment procedure for future deployment.
21. In the Operator role, launch the saved deployment procedure. Add targets for
provisioning and provide values for configurable fields in the deployment
procedure.
22. In the Procedure Activity page, view the status of the execution of the job and steps
in the deployment procedure. Click the Status link for each step to view the details
of the execution of each step. You can click Debug to set the logging level to Debug
and click Stop to stop the procedure execution.
23. After the procedure execution is completed, click on the Targets menu and select
All Targets to navigate to the All Targets page and verify that the newly created
databases appear as Cloud Control targets.
5.4.1 Prerequisites for Provisioning Oracle Databases with Oracle Automatic Storage
Management
Before running the Deployment Procedure, meet the prerequisites listed in Setting Up
Database Provisioning.
1. Log in as a designer, and from the Enterprise menu, select Provisioning and
Patching, then select Database Provisioning.
3. In the Select Hosts page, if you want to use a provisioning profile for the
deployment, choose Select a Provisioning Profile, and then select the profile with
previously saved configuration parameters.
In the Select destination hosts section, click Add to select the destination host
where you want to deploy and configure the software.
In the Select Tasks to Perform section, select the platform, the version for the
process, and the components you want to provision:
• To deploy the database software, select either Deploy software only or Deploy and create a new database, which creates and configures a new database after installing the standalone Oracle Database.
Click on the Lock icon against the fields that you do not want to be edited in the
operator role. For more information about the lock down feature in deployment
procedures, see Introduction to Database Provisioning.
Click Next.
5. In the Specify OS Users page, specify the operating system user for the Oracle
Home for the database.
Note:
For Oracle Home User for the database, select the Normal User and Privileged User
to be added to the OS group.
Click on the Lock icon against the fields that you do not want to be edited in the
operator role.
Click Next.
6. In the Specify OS Groups page, specify the OS Groups to use for operating system
authentication. Ensure that the groups corresponding to the following roles already
exist on the hosts you select for provisioning.
8. In the Select Software Locations page, specify the source and destination locations
for the software binaries of Oracle Database.
In the Source section, select the Software Library location for Oracle Database
binaries.
In the Destination location, specify the following:
• Oracle Base for Database, a location on the destination host where the
diagnostic and administrative logs, and other logs associated with the database
can be stored. This location is used for storing only the dump files and is
different from the Oracle home directory where the database software will be
installed.
• Database Oracle Home, a location on the destination host where the database
software can be provisioned. This is the Oracle home directory for the database.
In the Additional Parameters section, specify the Working Directory on the
destination host where the files related to cloning can be staged temporarily.
Ensure that you have approximately 7 GB of space for this directory. For Installer
Parameters, specify any additional Oracle Universal Installer (OUI) parameters you
want to run while provisioning Oracle Grid Infrastructure. For example, -force (to
override any warnings), -debug (to view more debug information), and -invPtrLoc
<Location> (for UNIX only). Ensure that the parameters are separated by white
space.
Click on the Lock icon against the fields that you do not want to be edited in the
operator role.
Click Next. You will come back to the Configure page. If you have configured the
source and destination location for the software, the Configure Software task will
have a completed status.
10. In the Database Template page, choose the database template location. The location
can be Software Library or Oracle home. The template selected must be compatible
with the selected Oracle home version.
If you choose Select Template from Software Library, click on the search icon and
select the template from the Software Library. Specify Temporary Storage Location
on Managed Host(s). This location must exist on all hosts where you want to create
the database.
Click Show Template Details to view details of the selected template. You can
view initialization parameters, tablespaces, data files, redo log groups, common
options, and other details of the template.
If you choose Select Template from Oracle Home, select the template from the
Oracle home. The default location is ORACLE_HOME/assistants/dbca/
templates.
Click on the Lock icon to lock the fields you have configured. These fields will not
be available for editing in the operator role.
Click Next.
11. In the Identification and Placement page, specify database configuration details.
Specify Global Database Name and SID prefix. Specify the Database Credentials
for SYS, SYSTEM, and DBSNMP database accounts. You can choose to use the
same or different administrative passwords for these accounts.
Note:
• SID must be unique for a database on a host. This means, the SID assigned
to one database on a host cannot be reused on another database on the
same host, but can be reused on another database on a different host. For
example, if you have two databases (db1 and db2) on a host (host1), then
their SIDs need to be unique. However, if you install the third database on
another host (host2), then its SID can be db1 or db2.
• Global database name must be unique for a database on a host and also
unique for databases across different hosts. This means, the global database
name assigned to one database on a host can neither be reused on another
database on the same host nor on another database on a different host. For
example, if you have two databases (db1 and db2) on a host (host1), then
their global database names need to be unique. And if you install the third
database on another host (host2), the global database name of even this
database must be unique and different from all other names registered
with Cloud Control.
• The database credentials you specify here will be used on all the
destination hosts. However, after provisioning, if you want to change the
password for any database, then you must change it manually.
Click on the Lock icon to lock the fields you have configured. These fields will not
be available for editing in the operator role.
Click Next.
12. In the Storage Locations page, select the storage type as Automatic Storage
Management (ASM) and click the torch icon to select the disk group name and
specify ASMSNMP password. The Disk Group Name List window appears and
displays the disk groups that are common on all the destination hosts.
In the Database Files Location section, specify the location where data files,
temporary files, redo logs, and control files will be stored.
• Select Use Database File Locations from Template to select defaults from the
template used.
• Select Use Common Location for All Database Files to specify a different
location.
If you select Use Oracle Managed Files (OMF), in the Multiplex Redo Logs and
Control Files section, you can specify locations to store duplicate copies of redo
logs and control files. Multiplexing provides greater fault-tolerance. You can
specify up to five locations.
In the Recovery Files Location section, select Use same storage type as database
files location to use the same storage type for recovery files as database files. Select
Use Flash Recovery Area and specify the location for recovery-related files and
Fast Recovery Area Size.
Select Enable Archiving to enable archive logging. Click Specify Archive Log
Locations and specify up to nine archive log locations. If the log location is not
specified, the logs will be saved in the default location.
Click on the Lock icon to lock the fields you have configured. These fields will not
be available for editing in the operator role.
Click Next.
13. In the Initialization Parameters page, select the memory management type as
Automatic Memory Management or Automatic Shared Memory Management.
Select Specify Memory Settings as Percentage of Available Memory to specify
memory settings as percentage of available physical memory. For Automatic
Shared Memory management, specify Total SGA and Total PGA. For Automatic
Memory Management, specify Total Memory for Oracle.
In the Database sizing section, specify the Block Size and number of Processes. If
you have selected a database template with datafiles in the Database Template
page, you cannot edit the Block Size.
Specify the Host CPU Count. The maximum CPU count that can be specified is
equal to the number of CPUs present on the host.
In the Character Sets section, select the default character set. The default character
set is based on the locale and operating system.
Select a national character set. The default is AL16UTF16.
In the Database Connection Mode section, select the dedicated server mode. For
shared server mode, specify the number of shared servers.
Click on the Lock icon to lock the fields you have configured. These fields will not
be available for editing in the operator role.
Click Next.
14. In the Additional Configuration Options page, all the available listeners running from
the Oracle Home are listed. You can either select a listener or create a new one. You
can select multiple listeners to register with the database. To create a new listener,
specify the Listener Name and Port. Select database schemas and specify custom
scripts, if any. Select custom scripts from the host where you are creating the
database or from Software Library. If you have selected multiple hosts, you can
specify scripts only from Software Library.
If you have selected a Structure Only database template in the Database Template
page, you can also view and edit database options.
Click on the Lock icon to lock the fields you have configured. These fields will not
be available for editing in the operator role.
Click Next.
15. Review the details you have provided for creating the database and click Next. You
will come back to the Configure page. If you have configured the database, the
Create Databases task will have a completed status.
19. The Custom Properties page will be displayed only for user customized
deployment procedures that require custom parameters. Specify custom properties
for the deployment, if any. Click Next.
20. In the Schedule page, specify a Deployment Instance name. If you want to run the
procedure immediately, then retain the default selection, that is, One Time
(Immediately). If you want to run the procedure later, then select One Time (Later)
and provide time zone, start date, and start time details. You can set the
notification preferences according to deployment procedure status. If you want to
run only prerequisites, you can select Pause the procedure to allow me to analyze
results after performing prerequisite checks to pause the procedure execution
after all prerequisite checks are performed.
Click Next.
21. In the Review page, review the details you have provided for the deployment
procedure and if you are satisfied with the details, then click Finish to run the
deployment procedure according to the schedule set. If you want to modify the
details, then click Back repeatedly to reach the page where you want to make the
changes. Click Save to save the deployment procedure for future deployment.
22. In the Operator role, launch the saved deployment procedure. Add targets for
provisioning and provide values for configurable fields in the deployment
procedure.
23. In the Procedure Activity page, view the status of the execution of the job and steps
in the deployment procedure. Click the Status link for each step to view the details
of the execution of each step. You can click Debug to set the logging level to Debug
and click Stop to stop the procedure execution.
24. After the procedure execution is completed, click on the Targets menu and select
All Targets to navigate to the All Targets page and verify that the newly created
databases appear as Cloud Control targets.
1. Log in as a designer, and from the Enterprise menu, select Provisioning and
Patching, then select Database Provisioning.
2. In the Database Procedures page, select the Provision Oracle Database Deployment
Procedure and click Launch. The Oracle Database provisioning wizard is launched.
3. In the Select Hosts page, if you want to use a provisioning profile for the
deployment, choose Select a Provisioning Profile, and then select the profile with
previously saved configuration parameters.
In the Select destination hosts section, click Add to select the destination host
where you want to deploy and configure the software.
In the Select Tasks to Perform section, select Deploy Database software to
provision single-instance databases.
Click on the Lock icon against the fields that you do not want to be edited in the
operator role. For more information about the lock down feature in deployment
procedures, see Introduction to Database Provisioning.
Click Next.
5. In the Specify OS Users page, specify the operating system user for the Oracle
Home for the database.
Note:
To use no root credentials, refer to Using No Root Credentials for Provisioning
Oracle Databases.
For Oracle Home User for the database, select the Normal User and Privileged User
to be added to the OS group.
Click on the Lock icon against the fields that you do not want to be edited in the
operator role.
Click Next.
6. In the Specify OS Groups page, specify the OS Groups to use for operating system
authentication. Ensure that the groups corresponding to the following roles already
exist on the hosts you select for provisioning.
8. In the Select Software Locations page, specify the source and destination locations
for the software binaries of Oracle Database.
In the Source section, select the Software Library location for Oracle Database
binaries.
In the Destination location, specify the following:
• Oracle Base for Database, a location on the destination host where the
diagnostic and administrative logs, and other logs associated with the database
can be stored. This location is used for storing only the dump files and is
different from the Oracle home directory where the database software will be
installed.
• Database Oracle Home, a location on the destination host where the database
software can be provisioned. This is the Oracle home directory for the database.
9. In the Schedule page, specify a Deployment Instance name. If you want to run the
procedure immediately, then retain the default selection, that is, One Time
(Immediately). If you want to run the procedure later, then select One Time (Later)
and provide time zone, start date, and start time details. You can set the
notification preferences according to deployment procedure status. If you want to
run only prerequisites, you can select Pause the procedure to allow me to analyze
results after performing prerequisite checks to pause the procedure execution
after all prerequisite checks are performed.
Click Next.
10. In the Review page, review the details you have provided for the deployment
procedure and if you are satisfied with the details, then click Finish to run the
deployment procedure according to the schedule set. If you want to modify the
details, then click Back repeatedly to reach the page where you want to make the
changes. Click Save to save the deployment procedure for future deployment.
11. In the Operator role, launch the saved deployment procedure. Add targets for
provisioning and provide values for configurable fields in the deployment
procedure.
12. In the Procedure Activity page, view the status of the execution of the job and steps
in the deployment procedure. Click the Status link for each step to view the details
of the execution of each step. You can click Debug to set the logging level to Debug
and click Stop to stop the procedure execution.
13. After the procedure execution is completed, click on the Targets menu and select
All Targets to navigate to the All Targets page and verify that the newly created
databases appear as Cloud Control targets.
2. Select the new named credential for both the Normal user and the Privileged user.
3. Click Submit.
When the database provisioning process reaches the step that requires root credentials, the process stops, and you need to run the command line manually. To do this, set the environment to $AGENT_HOME, and then run the command line copied from the Instructions field for each of the following two steps:
4. Once the command line has been run manually as the root user for both steps, click Confirm. The database provisioning process then continues until it completes.
This chapter explains how you can mass-deploy Oracle Grid Infrastructure for Oracle databases (also called single-instance databases) in an unattended, repeatable, and
reliable manner, using Oracle Enterprise Manager Cloud Control (Cloud Control). In
particular, this chapter covers the following:
• Getting Started with Provisioning Oracle Grid Infrastructure for Oracle Databases
6.1 Getting Started with Provisioning Oracle Grid Infrastructure for Oracle
Databases
This section helps you get started with this chapter by providing an overview of the
steps involved in provisioning Oracle Grid Infrastructure for single-instance
databases. Consider this section to be a documentation map to understand the
sequence of actions you must perform to successfully provision Oracle Grid
Infrastructure with single-instance databases. Click the reference links provided
against the steps to reach the relevant sections that provide more information.
Table 6-1 Getting Started with Provisioning Oracle Grid Infrastructure
• Procedure for Provisioning Oracle Grid Infrastructure and Oracle Databases with
Oracle ASM
6.2.1 Prerequisites for Provisioning Oracle Grid Infrastructure and Oracle Databases
with Oracle ASM
Before running the Deployment Procedure, meet the prerequisites listed in Setting Up
Database Provisioning.
6.2.2 Procedure for Provisioning Oracle Grid Infrastructure and Oracle Databases with
Oracle ASM
To provision Oracle Grid Infrastructure and Oracle databases with Oracle ASM, follow these steps:
1. Log in as a designer, and from the Enterprise menu, select Provisioning and
Patching, then select Database Provisioning.
2. In the Database Procedures page, select the Provision Oracle Database Deployment
Procedure and click Launch. The Oracle Database provisioning wizard is launched.
3. In the Select Hosts page, if you want to use a provisioning profile for the
deployment, choose Select a Provisioning Profile, and then select the profile with
previously saved configuration parameters.
In the Select destination hosts section, click Add to select the destination host
where you want to deploy and configure the software.
In the Select Tasks to Perform section, select the platform, the version for the
process, and the components you want to provision:
• To deploy the database software, select Deploy and create a new database, which creates and configures a new database after installing the standalone Oracle Database.
Click on the Lock icon against the fields that you do not want to be edited in the
operator role. For more information about the lock down feature in deployment
procedures, see Introduction to Database Provisioning.
Click Next.
5. In the Specify OS Users page, specify the operating system user for the Oracle
Home for the database and Grid Infrastructure. Specify the Normal User and
Privileged User to be added to the OS groups.
Click Next.
6. In the Specify OS Groups page, specify the OS Groups to use for operating system
authentication. Ensure that the groups corresponding to the following roles already
exist on the hosts you select for provisioning.
For more information, see Oracle Database 2 Day DBA Guide available at:
http://www.oracle.com/pls/db112/homepage
Click Next. You will come back to the Configure page. If you have configured the
destination hosts, the Setup Hosts task will have a completed status.
8. In the Select Software Locations page, specify the source and destination locations
for the software binaries of Oracle Database.
In the Source section, select the Software Library location for the Grid
Infrastructure and Oracle Database binaries.
In the Destination location, specify the following:
• Oracle Base for Grid Infrastructure, a location on the destination host where
the diagnostic and administrative logs, and other logs associated with the Grid
Infrastructure can be stored.
• Grid Infrastructure Home, a location on the destination host where the Grid
Infrastructure software can be provisioned. This is the Oracle home directory
for Grid Infrastructure. Do not select a location that is a subdirectory of the
Oracle Base for Grid Infrastructure or database.
• Oracle Base for Database, a location on the destination host where the
diagnostic and administrative logs, and other logs associated with the database
can be stored. This location is used for storing only the dump files and is
different from the Oracle home directory where the database software will be
installed.
• Database Oracle Home, a location on the destination host where the database
software can be provisioned. This is the Oracle home directory for the database.
For Grid Infrastructure, Oracle Base is /u01/app/user and Oracle Home is
%ORACLE_BASE%/sihahome. You can use %ORACLE_BASE% and
%GI_ORACLE_BASE% to specify the relative paths which will be interpolated to
their respective values.
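For example, if Oracle Base is /u01/app/user, an Oracle Home entered as %ORACLE_BASE%/sihahome resolves to /u01/app/user/sihahome; %GI_ORACLE_BASE% is substituted in the same way with the value of Oracle Base for Grid Infrastructure. These values are illustrative only.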
In the Additional Parameters section, specify the Working Directory on the
destination host where the files related to cloning can be staged temporarily.
Ensure that you have approximately 7 GB of space for this directory. For Installer
Parameters, specify any additional Oracle Universal Installer (OUI) parameters you
want to run while provisioning Oracle Grid Infrastructure. For example, -force (to
override any warnings), -debug (to view more debug information), and -invPtrLoc
<Location> (for UNIX only). Ensure that the parameters are separated by white
space.
Click on the Lock icon against the fields that you do not want to be edited in the
operator role.
Click Next. You will come back to the Configure page. If you have configured the
source and destination location for the software, the Configure Software task will
have a completed status.
10. In the Configure GI page, in the ASM Storage section, click Add to add an ASM
Disk Group. In the Add/Edit Disk Group dialog box, specify the Disk Group Name and Disk List, and set the redundancy to Normal, High, or External. Click OK.
For ASM 11.2 and higher, specify the Disk Group Name for storing the parameter
file. Specify the ASM Password for ASMSNMP and SYS users. Specify the Listener
Port for registering the ASM instances.
As a designer, you can click on the Lock icon to lock these fields. These fields will
then not be available for editing in the operator role.
Click Next. You will come back to the Configure page. If you have configured the
storage options for the Grid Infrastructure and database, the Configure Grid
Infrastructure task will have a completed status.
12. In the Database Template page, choose the database template location. The location
can be Software Library or Oracle home. The template selected must be compatible
with the selected Oracle home version.
If you choose Select Template from Software Library, click on the search icon and
select the template from the Software Library. Specify Temporary Storage Location
on Managed Host(s). This location must exist on all hosts where you want to create
the database.
Click Show Template Details to view details of the selected template. You can
view initialization parameters, tablespaces, data files, redo log groups, common
options, and other details of the template.
If you choose Select Template from Oracle Home, select the template from the
Oracle home. The default location is ORACLE_HOME/assistants/dbca/
templates.
Click on the Lock icon to lock the fields you have configured. These fields will not
be available for editing in the operator role.
Click Next.
13. In the Identification and Placement page, specify database configuration details.
Specify Global Database Name and SID prefix. Specify the Database Credentials
for SYS, SYSTEM, and DBSNMP database accounts. You can choose to use the
same or different administrative passwords for these accounts.
Note:
• SID must be unique for a database on a host. This means, the SID assigned
to one database on a host cannot be reused on another database on the
same host, but can be reused on another database on a different host. For
example, if you have two databases (db1 and db2) on a host (host1), then
their SIDs need to be unique. However, if you install the third database on
another host (host2), then its SID can be db1 or db2.
• Global database name must be unique for a database on a host and also
unique for databases across different hosts. This means, the global database
name assigned to one database on a host can neither be reused on another
database on the same host nor on another database on a different host. For
example, if you have two databases (db1 and db2) on a host (host1), then
their global database names need to be unique. And if you install the third
database on another host (host2), the global database name of even this
database must be unique and different from all other names registered
with Cloud Control.
• The database credentials you specify here will be used on all the
destination hosts. However, after provisioning, if you want to change the
password for any database, then you must change it manually.
Click on the Lock icon to lock the fields you have configured. These fields will not
be available for editing in the operator role.
Click Next.
14. In the Storage Locations page, select the storage type as Automatic Storage
Management (ASM), and click the torch icon to select the disk group name and
specify ASMSNMP password. The Disk Group Name List window appears and
displays the disk groups that are common on all the destination hosts.
In the Database Files Location section, specify the location where data files,
temporary files, redo logs, and control files will be stored.
• Select Use Database File Locations from Template to select defaults from the
template used.
• Select Use Common Location for All Database Files to specify a different
location.
If you select Use Oracle Managed Files (OMF), in the Multiplex Redo Logs and
Control Files section, you can specify locations to store duplicate copies of redo
logs and control files. Multiplexing provides greater fault-tolerance. You can
specify up to five locations.
In the Recovery Files Location section, select Use same storage type as database
files location to use the same storage type for recovery files as database files. Select
Use Flash Recovery Area and specify the location for recovery-related files and
Fast Recovery Area Size.
Select Enable Archiving to enable archive logging. Click Specify Archive Log
Locations and specify up to nine archive log locations. If the log location is not
specified, the logs will be saved in the default location.
Click on the Lock icon to lock the fields you have configured. These fields will not
be available for editing in the operator role.
Click Next.
15. In the Initialization Parameters page, select the memory management type as
Automatic Memory Management or Automatic Shared Memory Management.
Select Specify Memory Settings as Percentage of Available Memory to specify
memory settings as percentage of available physical memory. For Automatic
Shared Memory management, specify Total SGA and Total PGA. For Automatic
Memory Management, specify Total Memory for Oracle.
In the Database sizing section, specify the Block Size and number of Processes. If
you have selected a database template with datafiles in the Database Template
page, you cannot edit the Block Size.
Specify the Host CPU Count. The maximum CPU count that can be specified is
equal to the number of CPUs present on the host.
In the Character Sets section, select the default character set. The default character
set is based on the locale and operating system.
Select a national character set. The default is AL16UTF16.
In the Database Connection Mode section, select the dedicated server mode. For
shared server mode, specify the number of shared servers.
Click on the Lock icon to lock the fields you have configured. These fields will not
be available for editing in the operator role.
Click Next.
16. In the Additional Configuration Options page, all the available listeners running from
the Oracle Home are listed. You can either select a listener or create a new one. You
can select multiple listeners to register with the database. To create a new listener,
specify the Listener Name and Port. Select database schemas and specify custom
scripts, if any. Select custom scripts from the host where you are creating the
database or from Software Library. If you have selected multiple hosts, you can
specify scripts only from Software Library.
If you have selected a Structure Only database template in the Database Template
page, you can also view and edit database options.
Click on the Lock icon to lock the fields you have configured. These fields will not
be available for editing in the operator role.
Click Next.
17. Review the details you have provided for creating the database and click Next. You
will come back to the Configure page. If you have configured the database, the
Create Databases task will have a completed status.
20. The Custom Properties page will be displayed only for user customized
deployment procedures that require custom parameters. Specify custom properties
for the deployment, if any. Click Next.
21. In the Schedule page, specify a Deployment Instance name. If you want to run the
procedure immediately, then retain the default selection, that is, One Time
(Immediately). If you want to run the procedure later, then select One Time (Later)
and provide time zone, start date, and start time details. You can set the
notification preferences according to deployment procedure status. If you want to
run only prerequisites, you can select Pause the procedure to allow me to analyze
results after performing prerequisite checks to pause the procedure execution
after all prerequisite checks are performed.
Click Next.
22. In the Review page, review the details you have provided for the deployment
procedure and if you are satisfied with the details, then click Finish to run the
deployment procedure according to the schedule set. If you want to modify the
details, then click Back repeatedly to reach the page where you want to make the
changes. Click Save to save the deployment procedure for future deployment.
23. In the Operator role, launch the saved deployment procedure. Add targets for
provisioning and provide values for configurable fields in the deployment
procedure.
24. In the Procedure Activity page, view the status of the execution of the job and steps
in the deployment procedure. Click the Status link for each step to view the details
of the execution of each step. You can click Debug to set the logging level to Debug
and click Stop to stop the procedure execution.
25. After the procedure execution is completed, click on the Targets menu and select
All Targets to navigate to the All Targets page and verify that the newly
provisioned databases appear as Cloud Control targets.
6.3.1 Prerequisites for Provisioning Oracle Grid Infrastructure and Oracle Database
Software Only
Before running the Deployment Procedure, meet the prerequisites listed in Setting Up
Database Provisioning.
6.3.2 Procedure for Provisioning Oracle Grid Infrastructure and Oracle Database
Software Only
To provision Oracle Grid Infrastructure and Oracle Database software, follow these
steps:
1. Log in as a designer, and from the Enterprise menu, select Provisioning and
Patching, then select Database Provisioning.
3. In the Select Hosts page, if you want to use a provisioning profile for the
deployment, choose Select a Provisioning Profile, and then select the profile with
previously saved configuration parameters.
In the Select destination hosts section, click Add to select the destination host
where you want to deploy and configure the software.
In the Select Tasks to Perform section, select the platform, the version for the
process, and the components you want to provision:
• To deploy the database software, select Deploy and create a new database, which creates and configures a new database after installing the standalone Oracle Database.
Click on the Lock icon against the fields that you do not want to be edited in the
operator role. For more information about the lock down feature in deployment
procedures, see Introduction to Database Provisioning.
Click Next.
5. In the Specify OS Users page, specify the operating system user for the Oracle
Home for the database and Grid Infrastructure. Specify the Normal User and
Privileged User to be added to the OS groups.
Click Next.
6. In the Specify OS Groups page, specify the OS Groups to use for operating system
authentication. Ensure that the groups corresponding to the following roles already
exist on the hosts you select for provisioning.
If they do not exist, then either specify alternative groups that exist on the host or
create new groups as described in Oracle Database Quick Installation Guide
available at
http://www.oracle.com/pls/db112/homepage
The new groups you create or the alternative groups you specify automatically get
SYSDBA and SYSOPER privileges after the database is configured.
For more information, see Oracle Database 2 Day DBA Guide available at:
http://www.oracle.com/pls/db112/homepage
Click Next. You will come back to the Configure page. If you have configured the
destination hosts, the Setup Hosts task will have a completed status.
8. In the Select Software Locations page, specify the source and destination locations
for the software binaries of Oracle Database.
In the Source section, select the Software Library location for the Grid
Infrastructure and Oracle Database binaries.
In the Destination location, specify the following:
• Oracle Base for Grid Infrastructure, a location on the destination host where
the diagnostic and administrative logs, and other logs associated with the Grid
Infrastructure can be stored.
• Grid Infrastructure Home, a location on the destination host where the Grid
Infrastructure software can be provisioned. This is the Oracle home directory
for Grid Infrastructure. Do not select a location that is a subdirectory of the
Oracle Base for Grid Infrastructure or database.
• Oracle Base for Database, a location on the destination host where the
diagnostic and administrative logs, and other logs associated with the database
can be stored. This location is used for storing only the dump files and is
different from the Oracle home directory where the database software will be
installed.
• Database Oracle Home, a location on the destination host where the database
software can be provisioned. This is the Oracle home directory for the database.
For Grid Infrastructure, Oracle Base is /u01/app/user and Oracle Home is
%ORACLE_BASE%/sihahome. You can use %ORACLE_BASE% and
%GI_ORACLE_BASE% to specify the relative paths which will be interpolated to
their respective values.
In the Additional Parameters section, specify the Working Directory on the
destination host where the files related to cloning can be staged temporarily.
Ensure that you have approximately 7 GB of space for this directory. For Installer
Parameters, specify any additional Oracle Universal Installer (OUI) parameters you
want to run while provisioning Oracle Grid Infrastructure. For example, -force (to
override any warnings), -debug (to view more debug information), and -invPtrLoc
<Location> (for UNIX only). Ensure that the parameters are separated by white
space.
Click on the Lock icon against the fields that you do not want to be edited in the
operator role.
Click Next. You will come back to the Configure page. If you have configured the
source and destination location for the software, the Configure Software task will
have a completed status.
9. In the Schedule page, specify a Deployment Instance name. If you want to run the
procedure immediately, then retain the default selection, that is, One Time
(Immediately). If you want to run the procedure later, then select One Time (Later)
and provide time zone, start date, and start time details. You can set the
notification preferences according to deployment procedure status. If you want to
run only prerequisites, you can select Pause the procedure to allow me to analyze
results after performing prerequisite checks to pause the procedure execution
after all prerequisite checks are performed.
Click Next.
10. In the Review page, review the details you have provided for the deployment
procedure and if you are satisfied with the details, then click Finish to run the
deployment procedure according to the schedule set. If you want to modify the
details, then click Back repeatedly to reach the page where you want to make the
changes. Click Save to save the deployment procedure for future deployment.
11. In the Operator role, launch the saved deployment procedure. Add targets for
provisioning and provide values for configurable fields in the deployment
procedure.
12. In the Procedure Activity page, view the status of the execution of the job and steps
in the deployment procedure. Click the Status link for each step to view the details
of the execution of each step. You can click Debug to set the logging level to Debug
and click Stop to stop the procedure execution.
This chapter explains how you can mass-deploy Oracle Grid Infrastructure and Oracle
Real Application Clusters (Oracle RAC) for clustered environments in an unattended,
repeatable, and reliable manner. In particular, this chapter covers the following:
• Getting Started with Provisioning Grid Infrastructure for Oracle RAC Databases
• Provisioning Oracle Real Application Clusters Database with File System on a New
Cluster
7.1 Getting Started with Provisioning Grid Infrastructure for Oracle RAC
Databases
This section helps you get started with this chapter by providing an overview of the
steps involved in provisioning Oracle Grid Infrastructure and Oracle RAC. Consider
this section to be a documentation map to understand the sequence of actions you
must perform to successfully provision Oracle Grid Infrastructure and Oracle RAC.
Click the reference links provided against the steps to reach the relevant sections that
provide more information.
Table 7-1 Getting Started with Provisioning Oracle Grid Infrastructure and Oracle
RAC Databases
Oracle Real Application Clusters Database Topology
The topology shows an N-node setup using Grid Infrastructure, clustered ASM, and a policy-managed Oracle RAC database. An ASM disk array is shared throughout the cluster. The Grid Infrastructure uses an ASM disk group named QUORUM for the Oracle Cluster Registry (OCR) and the voting disk (heartbeat). The Oracle RAC database uses another disk group named DATA, which stores the database datafiles. The nodes are multihomed, so that a high-speed internal network between the nodes facilitates cluster operation while a public network provides external connectivity. The networks are therefore the public network, the private interconnect between the nodes, and the storage network between the nodes and the ASM disk array.
7.3.1 Prerequisites for Provisioning Grid Infrastructure with Oracle RAC Database
Before running the Deployment Procedure, meet the prerequisites listed in Setting Up
Database Provisioning.
7.3.2 Procedure for Provisioning Grid Infrastructure with Oracle RAC Database
To provision the grid infrastructure with Oracle RAC database, and to configure the
database with Oracle ASM, follow these steps:
1. Log in as a designer, and from the Enterprise menu, select Provisioning and
Patching, then select Database Provisioning.
2. In the Database Procedures page, select the Provision Oracle RAC Database
Deployment Procedure and click Launch. The Oracle RAC Database provisioning
wizard is launched.
3. In the Select Hosts page, if you want to use a provisioning profile for the
deployment, choose Select a Provisioning Profile and then, select the profile with
previously saved configuration parameters.
In the Select destination hosts section, click Add to select the destination host
where you want to deploy and configure the software.
In the Select Tasks to Perform section, select the platform, the version for the
process, and the components you want to provision:
• To deploy database software select Deploy and create a RAC One Node
database which creates a new database and configures it after installing the
Oracle RAC database.
Click on the Lock icon against the fields that you do not want to be edited in the
operator role. For more information about the lock down feature in deployment
procedures, see Introduction to Database Provisioning.
Click Next.
5. In the Specify OS Users page, specify the operating system users and groups
required to provision the database.
For Database User and ASM User, select the Normal User and Privileged User to
be added to the OS group.
Click Next.
6. In the Specify OS Groups page, specify the OS Groups to use for operating system
authentication. Ensure that the groups corresponding to the following roles already
exist on the hosts you select for provisioning.
8. In the Select Software Locations page, specify the locations where the software
binaries of Oracle Grid Infrastructure and Oracle RAC can be placed, that is, the
$ORACLE_HOME location. As a designer, you can click on the Lock icon to lock
these fields. These fields will then not be available for editing in the operator role.
In the Source section, select the Software Library location for the Grid
Infrastructure and Oracle Database binaries.
Note:
For Windows operating systems, if the Oracle Grid Infrastructure or Oracle
Database component selected is of version 12.1 or higher, you can install all
services as a named Oracle service user with limited privileges. This will
enhance security for database services.
In the Windows Security option section, you can configure the option for an
existing user and specify the User Name and Password. Select Decline
Security option if you want all the services to be installed and configured as
an administrative user.
• Oracle Base for Grid Infrastructure, a location on the destination host where
the diagnostic and administrative logs, and other logs associated with the Grid
Infrastructure can be stored.
• Grid Infrastructure Home, a location on the destination host where the Grid
Infrastructure software can be provisioned. This is the Oracle home directory
for Grid Infrastructure. Do not select a location that is a subdirectory of the
Oracle Base for Grid Infrastructure or database. Select Shared Grid
Infrastructure home to enable Grid Infrastructure Oracle Home on shared
locations. Ensure that the directory path you provide meets the requirements
described in Requirements for Grid Infrastructure Software Location Path.
• Oracle Base for Database, a location on the destination host where the
diagnostic and administrative logs, and other logs associated with the database
can be stored. This location is used for storing only the dump files and is
different from the Oracle home directory where the database software will be
installed.
• Database Oracle Home, a location on the destination host where the database
software can be provisioned. This is the Oracle home directory for the database.
Select Shared Database Oracle home to enable Database Oracle Home on
shared locations.
For example, if Oracle Base for Grid Infrastructure is /u01/app/user, Oracle Home can be %ORACLE_BASE%/../../grid. You can use the %ORACLE_BASE% and %GI_ORACLE_BASE% variables to specify relative paths, which are interpolated to their respective values.
In the Additional Parameters section, specify the Working Directory on the
destination host where the files related to cloning can be staged temporarily.
Ensure that you have approximately 7 GB of space for this directory. For Installer
Parameters, specify any additional Oracle Universal Installer (OUI) parameters you
want to run while provisioning Oracle Grid Infrastructure. For example, -force (to
override any warnings), -debug (to view more debug information), and -invPtrLoc
<Location> (for UNIX only). Ensure that the parameters are separated by white
space.
You can also specify OCFS devices in the Installer Parameters field in the following format, separating devices with commas:
Device Number:Partition Number:Drive Letter:[DATA | SOFTWARE]
For example, 1:2:E:DATA specifies partition 2 of device 1, mounted as drive E and used for data files (the values shown are illustrative).
Click Next. You will come back to the Configure page. If you have configured the
source and destination location for the software, the Configure Software task will
have a Configured status.
10. In the Select Storage page, select the storage type for Grid Infrastructure and
database as Automatic Storage Management or File System to indicate the storage
type for storing voting disk and Oracle Cluster Registry (OCR). Voting disk and
OCR are used by Oracle Clusterware to manage its resources. You can choose from
the following options:
• Automatic Storage Management for both Grid Infrastructure and Oracle RAC
Database
• Automatic Storage Management for Grid Infrastructure and File System for
Oracle RAC Database
• File System for both Grid Infrastructure and Oracle RAC Database
• File System for Grid Infrastructure and Automatic Storage Management for
Oracle RAC Database
As a designer, you can click on the Lock icon to lock these fields. These fields will
then not be available for editing in the operator role.
Click Next.
11. In the Configure GI page, in the Basic Settings section, specify the Cluster Name,
SCAN Name, and SCAN Port. The default SCAN port is port 1521, but you can
specify another port of your choice. The deployment procedure verifies that the
SCAN port provided is a valid port number, and is not used for any other purpose.
After installation, a TNS listener listens to this port to respond to client connections
to the SCAN name.
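Because client connections are made through the SCAN name, it is worth confirming that the name resolves before you run the procedure. The following is a minimal sketch; mycluster-scan.example.com is a hypothetical SCAN name:
    # Verify that the SCAN name resolves in DNS (hypothetical name)
    nslookup mycluster-scan.example.com
The SCAN name is normally set up in DNS to resolve to multiple IP addresses; if the lookup fails, correct the name resolution before provisioning.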
In the GNS Settings section, select Configure GNS and auto-assign with DHCP
and specify the GNS Sub System and GNS VIP Address if you want virtual host
names outside the cluster to have dynamically assigned names.
In the GI Network section, by default, the network interfaces that have the same
name and subnet for the selected destination hosts are automatically detected and
displayed. Validate these network interface configuration details. From the Usage
column, select Public to configure the interface as public interface, or Private to
configure the interface as private interface.
Click Add to add an interface and specify the Interface Name and Interface
Subnet and click OK. Select the Usage as Public, Private, or Do Not Use if you do
not want to use the interface.
If you have chosen storage type as Automatic Storage Management for either or
both Grid Infrastructure and Oracle RAC Database, in the ASM Storage section,
select from the ASM Disk Groups that have been discovered by Cloud Control and
are displayed in the table. Click Add to add an ASM Disk Group. In the Add/Edit
Disk Group dialog box, specify the Disk Group Name, Disk List, and specify the
redundancy as Normal, High, or External. Click OK. Select the OCR/Voting Disk
to store cluster registry and voting disk files, and specify the ASM credentials for
ASMSNMP and SYS users.
If you have chosen storage type as File System for Grid Infrastructure or Oracle
RAC database, in the File System Storage section, specify the storage location for
Oracle Cluster Registry (OCR) and voting disks. Select Normal or External to
indicate the redundancy level, and specify their locations.
As a designer, you can click on the Lock icon to lock these fields. These fields will
then not be available for editing in the operator role.
Click Next. You will come back to the Configure page. If you have configured the
storage options for the Grid Infrastructure and database, the Configure Grid
Infrastructure task will have a completed status.
13. In the Database Template page, choose the database template location. The location
can be Software Library or Oracle home. The template selected must be compatible
with the selected Oracle home version.
If you have selected Software Library, click on the search icon and select the
template from the Software Library. Specify Temporary Storage Location on
Managed Host(s). This location must be present on the reference node that you
selected earlier.
Click Show Template Details to view details of the selected template. You can
view initialization parameters, tablespaces, data files, redo log groups, common
options, and other details of the template.
If you have selected Oracle Home, select the template from the Oracle home. The
default location is ORACLE_HOME/assistants/dbca/templates.
Click on the Lock icon to lock the fields you have configured. These fields will not
be available for editing in the operator role.
Click Next.
14. In the Identification and Placement page, select the Oracle RAC database
configuration type, whether Policy Managed or Admin Managed.
For admin-managed database, select nodes on which you want to create the cluster
database. You must specify the node selected as the reference node in the Database
Version and Type page.
For policy-managed database, select the server pools to be used for creating the
database, from the list of existing server pools, or choose to create a new server
pool. Policy-managed databases can be created for database versions 11.2 and
higher. For database versions lower than 11.2, you will need to select nodes to
create the Oracle RAC database.
Specify Global Database Name and SID prefix. Specify the Database Credentials
for SYS, SYSTEM, and DBSNMP. You can choose to specify the same or different
passwords for each of these user accounts.
Click on the Lock icon to lock the fields you have configured. These fields will not
be available for editing in the operator role.
Click Next.
15. In the Storage Locations page, select the same storage type you specified for Oracle
Database in the Select Storage page.
In the Database Files Location section, specify the location where data files,
temporary files, redo logs, and control files will be stored. These locations must be
on shared storage such as cluster file system location or ASM diskgroups.
• Select Use Database File Locations from Template to select defaults from the
template used.
• Select Use Common Location for All Database Files to specify a different
location.
If you select Use Oracle Managed Files (OMF), in the Multiplex Redo Logs and
Control Files section, you can specify locations to store duplicate copies of redo
logs and control files. Multiplexing provides greater fault-tolerance. You can
specify up to five locations.
In the Recovery Files Location section, select Use same storage type as database
files location to use the same storage type for recovery files as database files. Select
Use Flash Recovery Area and specify the location for recovery-related files and
Fast Recovery Area Size.
Select Enable Archiving to enable archive logging. Click Specify Archive Log
Locations and specify up to nine archive log locations. If the log location is not
specified, the logs will be saved in the default location.
Click on the Lock icon to lock the fields you have configured. These fields will not
be available for editing in the operator role.
Click Next.
16. In the Initialization Parameters page, select the memory management type as
Automatic Memory Management or Automatic Shared Memory Management.
Select Specify Memory Settings as Percentage of Available Memory to specify memory settings as a percentage of the available physical memory. For Automatic Shared Memory Management, specify Total SGA and Total PGA. For Automatic Memory Management, specify Total Memory for Oracle.
In the Database sizing section, specify the Block Size and number of Processes. If
you have selected a database template with datafiles in the Database Template
page, you cannot edit the Block Size.
Specify the Host CPU Count. The maximum CPU count that can be specified is
equal to the number of CPUs present on the host.
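To see how many CPUs the destination host actually reports, and therefore the largest Host CPU Count you can enter, you can run a quick check on the host. This is a minimal sketch for Linux hosts:
    # Report the number of processing units available on this host
    nproc
    # Equivalent check that counts processor entries reported by the kernel
    grep -c ^processor /proc/cpuinfo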
In the Character Sets section, select the default character set. The default character
set is based on the locale and operating system.
Select a national character set. The default is AL16UTF16.
In the Database Connection Mode section, select the dedicated server mode. For
shared server mode, specify the number of shared servers.
Click on the Lock icon to lock the fields you have configured. These fields will not
be available for editing in the operator role.
Click Next.
17. In the Additional Configuration Options page, select custom scripts from the
Software Library or your local disk. If you have selected a Structure Only database
template in the Database Template page, you can also view and edit database
options.
18. Review the information you have provided and click Next. You will come back to
the Configure page. If you have configured the database, the Create Databases task
will have a Configured status. Click Next.
22. The Custom Properties page will be displayed only for user customized
deployment procedures that require custom parameters. Specify custom properties
for the deployment, if any. Click Next.
23. In the Schedule page, specify a Deployment Instance name. If you want to run the
procedure immediately, then retain the default selection, that is, One Time
(Immediately). If you want to run the procedure later, then select One Time (Later)
and provide time zone, start date, and start time details. You can set the
notification preferences according to deployment procedure status. If you want to
run only prerequisites, you can select Pause the procedure to allow me to analyze results after performing prerequisite checks to pause the procedure execution after all prerequisite checks are performed.
Click Next.
24. In the Review page, review the details you have provided for the deployment
procedure and if you are satisfied with the details, then click Finish to run the
deployment procedure according to the schedule set. If you want to modify the
details, then click Back repeatedly to reach the page where you want to make the
changes. Click Save to save the deployment procedure for future deployment.
25. In the Operator role, launch the saved deployment procedure. Add targets for
provisioning and provide values for configurable fields in the deployment
procedure.
26. In the Procedure Activity page, view the status of the execution of the job and steps
in the deployment procedure. Click the Status link for each step to view the details
of the execution of each step. You can click Debug to set the logging level to Debug
and click Stop to stop the procedure execution.
27. After the procedure execution is completed, click on the Targets menu and select
All Targets to navigate to the All Targets page and verify that the newly
provisioned databases appear as Cloud Control targets.
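If you prefer the command line to the All Targets page, an EM CLI query along the following lines can list the database targets known to Cloud Control. This is a sketch only; it assumes EM CLI is installed and an emcli login session is already established:
    # List single-instance and cluster database targets (assumes an existing EM CLI session)
    emcli get_targets -targets="%:oracle_database"
    emcli get_targets -targets="%:rac_database"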
Note:
• It should be created either as a subdirectory in a path where all files can be owned
by root, or in a unique path
• Procedure for Provisioning Oracle RAC with File System on an Existing Cluster
7.4.1 Prerequisites for Provisioning Oracle RAC Database with File System on an
Existing Cluster
Before running the Deployment Procedure, meet the prerequisites listed in Setting Up
Database Provisioning.
7.4.2 Procedure for Provisioning Oracle RAC with File System on an Existing Cluster
To provision Oracle RAC databases with file system on an existing cluster, follow
these steps:
1. Log in as a designer, and from the Enterprise menu, select Provisioning and
Patching, then select Database Provisioning.
2. In the Database Procedures page, select the Provision Oracle RAC Database
Deployment Procedure and click Launch. The Oracle RAC Database provisioning
wizard is launched.
3. In the Select Hosts page, if you want to use a provisioning profile for the
deployment, choose Select a Provisioning Profile and then, select the profile with
previously saved configuration parameters.
In the Select destination hosts section, click Add to select the destination host
where you want to deploy and configure the software.
In the Select Tasks to Perform section, select the platform, the version for the
process, and the components you want to provision:
• To deploy database software select Deploy and create a RAC One Node
database which creates a new database and configures it after installing the
Oracle RAC database.
Click on the Lock icon against the fields that you do not want to be edited in the
operator role. For more information about the lock down feature in deployment
procedures, see Introduction to Database Provisioning.
Click Next.
5. In the Specify OS Users page, specify the operating system users and groups
required to provision the database.
Note:
To use no root credentials, refer to Using No Root Credentials for Provisioning
Oracle Real Application Clusters (Oracle RAC) Databases.
For Database User, select the Normal User and Privileged User to be added to the
OS group.
Click Next.
6. In the Specify OS Groups page, specify the OS Groups to use for operating system
authentication. Ensure that the groups corresponding to the following roles already
exist on the hosts you select for provisioning.
8. In the Select Software Locations page, specify the locations where the software
binaries of Oracle RAC database can be placed, that is, the $ORACLE_HOME
location. As a designer, you can click on the Lock icon to lock these fields. These
fields will then not be available for editing in the operator role.
In the Source section, select the Software Library location for the Grid
Infrastructure and Oracle Database binaries.
• Oracle Base for Database, a location on the destination host where the
diagnostic and administrative logs, and other logs associated with the database
can be stored. This location is used for storing only the dump files and is
different from the Oracle home directory where the database software will be
installed.
• Database Oracle Home, a location on the destination host where the database
software can be provisioned. This is the Oracle home directory for the database.
Select Shared Database Oracle home to enable Database Oracle Home on
shared locations.
In the Additional Parameters section, specify the Working Directory on the
destination host where the files related to cloning can be staged temporarily.
Ensure that you have approximately 7 GB of space for this directory. For Installer
Parameters, specify any additional Oracle Universal Installer (OUI) parameters you
want to run while provisioning Oracle RAC database. For example, -force (to
override any warnings), -debug (to view more debug information), and -invPtrLoc
<Location> (for UNIX only). Ensure that the parameters are separated by white
space.
You can also specify OCFS devices in the Installer Parameters field in the following format, separating devices with commas:
Device Number:Partition Number:Drive Letter:[DATA | SOFTWARE]
For example, 1:2:E:DATA specifies partition 2 of device 1, mounted as drive E and used for data files (the values shown are illustrative).
Click Next. You will come back to the Configure page. If you have configured the
source and destination location for the software, the Configure Software task will
have a Configured status.
10. In the Database Template page, choose the database template location. The location
can be Software Library or Oracle home. The template selected must be compatible
with the selected Oracle home version.
If you have selected Software Library, click on the search icon and select the
template from the Software Library. Specify Temporary Storage Location on
Managed Host(s). This location must be present on the reference node that you
selected earlier.
Click Show Template Details to view details of the selected template. You can
view initialization parameters, tablespaces, data files, redo log groups, common
options, and other details of the template.
If you have selected Oracle Home, select the template from the Oracle home. The
default location is ORACLE_HOME/assistants/dbca/templates.
Click on the Lock icon to lock the fields you have configured. These fields will not
be available for editing in the operator role.
Click Next.
11. In the Identification and Placement page, select the Oracle RAC database
configuration type, whether Policy Managed or Admin Managed.
For Admin-managed database, select nodes on which you want to create the cluster
database.
For policy-managed database, select the server pools to be used for creating the
database, from the list of existing server pools, or choose to create a new server
pool. Policy-managed databases can be created for database versions 11.2 and
higher. For database versions lower than 11.2, you will need to select nodes to
create the Oracle RAC database.
Specify Global Database Name and SID prefix. Specify the Database Credentials
for SYS, SYSTEM, and DBSNMP. You can choose to specify the same or different
passwords for each of these user accounts.
Click on the Lock icon to lock the fields you have configured. These fields will not
be available for editing in the operator role.
Click Next.
12. In the Storage Locations page, select the storage type for Oracle RAC Database as
File System.
In the Database Files Location section, specify the location where data files,
temporary files, redo logs, and control files will be stored. These locations must be
on shared storage such as cluster file system location or ASM diskgroups.
• Select Use Database File Locations from Template to select defaults from the
template used.
• Select Use Common Location for All Database Files to specify a different
location.
If you select Use Oracle Managed Files (OMF), in the Multiplex Redo Logs and
Control Files section, you can specify locations to store duplicate copies of redo
logs and control files. Multiplexing provides greater fault-tolerance. You can
specify up to five locations.
In the Recovery Files Location section, select Use same storage type as database
files location to use the same storage type for recovery files as database files. Select
Use Flash Recovery Area and specify the location for recovery-related files and
Fast Recovery Area Size.
Select Enable Archiving to enable archive logging. Click Specify Archive Log
Locations and specify up to nine archive log locations. If the log location is not
specified, the logs will be saved in the default location.
Click on the Lock icon to lock the fields you have configured. These fields will not
be available for editing in the operator role.
Click Next.
13. In the Initialization Parameters page, select the memory management type as
Automatic Memory Management or Automatic Shared Memory Management.
Select Specify Memory Settings as Percentage of Available Memory to specify memory settings as a percentage of the available physical memory. For Automatic Shared Memory Management, specify Total SGA and Total PGA. For Automatic Memory Management, specify Total Memory for Oracle.
In the Database sizing section, specify the Block Size and number of Processes. If
you have selected a database template with datafiles in the Database Template
page, you cannot edit the Block Size.
Specify the Host CPU Count. The maximum CPU count that can be specified is
equal to the number of CPUs present on the host.
In the Character Sets section, select the default character set. The default character
set is based on the locale and operating system.
Select a national character set. The default is AL16UTF16.
In the Database Connection Mode section, select the dedicated server mode. For
shared server mode, specify the number of shared servers.
Click on the Lock icon to lock the fields you have configured. These fields will not
be available for editing in the operator role.
Click Next.
14. In the Additional Configuration Options page, select custom scripts from the
Software Library or your local disk. If you have selected a Structure Only database
template in the Database Template page, you can also view and edit database
options.
15. Review the information you have provided and click Next. You will come back to
the Configure page. If you have configured the database, the Create Databases task
will have a Configured status. Click Next.
19. The Custom Properties page will be displayed only for user customized
deployment procedures that require custom parameters. Specify custom properties
for the deployment, if any. Click Next.
20. In the Schedule page, specify a Deployment Instance name. If you want to run the
procedure immediately, then retain the default selection, that is, One Time
(Immediately). If you want to run the procedure later, then select One Time (Later)
and provide time zone, start date, and start time details. You can set the
notification preferences according to deployment procedure status. If you want to
run only prerequisites, you can select Pause the procedure to allow me to analyze
results after performing prerequisite checks to pause the procedure execution
after all prerequisite checks are performed.
Click Next.
21. In the Review page, review the details you have provided for the deployment
procedure and if you are satisfied with the details, then click Finish to run the
deployment procedure according to the schedule set. If you want to modify the
details, then click Back repeatedly to reach the page where you want to make the
changes. Click Save to save the deployment procedure for future deployment.
22. In the Operator role, launch the saved deployment procedure. Add targets for
provisioning and provide values for configurable fields in the deployment
procedure.
23. In the Procedure Activity page, view the status of the execution of the job and steps
in the deployment procedure. Click the Status link for each step to view the details
of the execution of each step. You can click Debug to set the logging level to Debug
and click Stop to stop the procedure execution.
24. After the procedure execution is completed, click on the Targets menu and select
All Targets to navigate to the All Targets page and verify that the newly
provisioned databases appear as Cloud Control targets.
Note:
If the Deployment Procedure fails, then review log files described in
Reviewing Log Files.
• Procedure for Provisioning Oracle RAC Database with File System on a New Cluster
7.5.1 Prerequisites for Provisioning Oracle RAC Database with File System on a New
Cluster
Before running the Deployment Procedure, meet the prerequisites listed in Setting Up
Database Provisioning.
7.5.2 Procedure for Provisioning Oracle RAC Database with File System on a New
Cluster
To provision Oracle RAC databases on a new cluster, follow these steps:
1. Log in as a designer, and from the Enterprise menu, select Provisioning and
Patching, then select Database Provisioning.
2. In the Database Procedures page, select the Provision Oracle RAC Database
Deployment Procedure and click Launch. The Oracle RAC Database provisioning
wizard is launched.
3. In the Select Hosts page, if you want to use a provisioning profile for the
deployment, choose Select a Provisioning Profile and then, select the profile with
previously saved configuration parameters.
In the Select destination hosts section, click Add to select the destination host
where you want to deploy and configure the software.
In the Select Tasks to Perform section, select the platform, the version for the
process, and the components you want to provision:
• To deploy database software select Deploy and create a RAC One Node
database which creates a new database and configures it after installing the
Oracle RAC database.
Click on the Lock icon against the fields that you do not want to be edited in the
operator role. For more information about the lock down feature in deployment
procedures, see Introduction to Database Provisioning.
Click Next.
5. In the Specify OS Users page, specify the operating system users and groups
required to provision the database.
For Database User and ASM User, select the Normal User and Privileged User to
be added to the OS group.
Click Next.
6. In the Specify OS Groups page, specify the OS Groups to use for operating system
authentication. Ensure that the groups corresponding to the following roles already
exist on the hosts you select for provisioning.
8. In the Select Software Locations page, specify the locations where the software
binaries of Oracle Grid Infrastructure and Oracle RAC can be placed, that is, the
$ORACLE_HOME location. As a designer, you can click on the Lock icon to lock
these fields. These fields will then not be available for editing in the operator role.
In the Source section, select the Software Library location for the Grid
Infrastructure and Oracle Database binaries.
• Oracle Base for Grid Infrastructure, a location on the destination host where
the diagnostic and administrative logs, and other logs associated with the Grid
Infrastructure can be stored.
• Grid Infrastructure Home, a location on the destination host where the Grid
Infrastructure software can be provisioned. This is the Oracle home directory
for Grid Infrastructure. Do not select a location that is a subdirectory of the
Oracle Base for Grid Infrastructure or database. Select Shared Grid
Infrastructure home to enable Grid Infrastructure Oracle Home on shared
locations. Ensure that the directory path you provide meets the requirements
described in Requirements for Grid Infrastructure Software Location Path.
• Oracle Base for Database, a location on the destination host where the
diagnostic and administrative logs, and other logs associated with the database
can be stored. This location is used for storing only the dump files and is
different from the Oracle home directory where the database software will be
installed.
• Database Oracle Home, a location on the destination host where the database
software can be provisioned. This is the Oracle home directory for the database.
Select Shared Database Oracle home to enable Database Oracle Home on
shared locations.
For example, if Oracle Base for Grid Infrastructure is /u01/app/user, Oracle Home can be %ORACLE_BASE%/../../grid. You can use the %ORACLE_BASE% and %GI_ORACLE_BASE% variables to specify relative paths, which are interpolated to their respective values.
In the Additional Parameters section, specify the Working Directory on the
destination host where the files related to cloning can be staged temporarily.
Ensure that you have approximately 7 GB of space for this directory. For Installer
Parameters, specify any additional Oracle Universal Installer (OUI) parameters you
want to run while provisioning Oracle Grid Infrastructure. For example, -force (to
override any warnings), -debug (to view more debug information), and -invPtrLoc
<Location> (for UNIX only). Ensure that the parameters are separated by white
space.
You can also specify OCFS devices in the Installer Parameters field in the following format, separating devices with commas:
Device Number:Partition Number:Drive Letter:[DATA | SOFTWARE]
For example, 1:2:E:DATA specifies partition 2 of device 1, mounted as drive E and used for data files (the values shown are illustrative).
Click Next. You will come back to the Configure page. If you have configured the
source and destination location for the software, the Configure Software task will
have a Configured status.
10. In the Select Storage page, select the storage type for Grid Infrastructure as
Automatic Storage Management and database as File System to indicate the
storage type for storing voting disk and Oracle Cluster Registry (OCR). Voting disk
and OCR are used by Oracle Clusterware to manage its resources.
As a designer, you can click on the Lock icon to lock these fields. These fields will
then not be available for editing in the operator role.
Click Next.
11. In the Configure GI page, in the Basic Settings section, specify the Cluster Name,
SCAN Name, and SCAN Port. The default SCAN port is port 1521, but you can
specify another port of your choice. The deployment procedure verifies that the
SCAN port provided is a valid port number, and is not used for any other purpose.
After installation, a TNS listener listens to this port to respond to client connections
to the SCAN name.
In the GNS Settings section, select Configure GNS and specify the GNS Sub
System and GNS VIP Address if you want virtual host names outside the cluster
to have dynamically assigned names.
In the GI Network section, by default, the network interfaces that have the same
name and subnet for the selected destination hosts are automatically detected and
displayed. Validate these network interface configuration details. From the Usage
column, select Public to configure the interface as public interface, or Private to
configure the interface as private interface.
Click Add to add an interface and specify the Interface Name and Interface
Subnet and click OK. Select the Usage as Public, Private, or Do Not Use if you do
not want to use the interface.
If you have chosen storage type as Automatic Storage Management for Grid
Infrastructure, in the ASM Storage section, select from the ASM Disk Groups that
have been discovered by Cloud Control and are displayed in the table. Click Add
to add an ASM Disk Group. In the Add/Edit Disk Group dialog box, specify the
Disk Group Name, Disk List, and specify the redundancy as Normal, High, or
External. Click OK. Select the OCR/Voting Disk to store cluster registry and voting
disk files, and specify the ASM credentials for ASMSNMP and SYS users.
If you have chosen storage type as File System for Oracle RAC database, in the File
System Storage section, specify the storage location for Oracle Cluster Registry
(OCR) and voting disks. Select Normal or External to indicate the redundancy
level, and specify their locations.
As a designer, you can click on the Lock icon to lock these fields. These fields will
then not be available for editing in the operator role.
Click Next. You will come back to the Configure page. If you have configured the
storage options for the Grid Infrastructure and database, the Configure Grid
Infrastructure task will have a completed status.
13. In the Database Template page, choose the database template location. The location
can be Software Library or Oracle home. The template selected must be compatible
with the selected Oracle home version.
If you have selected Software Library, click on the search icon and select the
template from the Software Library. Specify Temporary Storage Location on
Managed Host(s). This location must be present on the reference node that you
selected earlier.
Click Show Template Details to view details of the selected template. You can
view initialization parameters, tablespaces, data files, redo log groups, common
options, and other details of the template.
If you have selected Oracle Home, select the template from the Oracle home. The
default location is ORACLE_HOME/assistants/dbca/templates.
Click on the Lock icon to lock the fields you have configured. These fields will not
be available for editing in the operator role.
Click Next.
14. In the Identification and Placement page, select the Oracle RAC database
configuration type, whether Policy Managed or Admin Managed.
For admin-managed database, select nodes on which you want to create the cluster
database.
For policy-managed database, select the server pools to be used for creating the
database, from the list of existing server pools, or choose to create a new server
pool. Policy-managed databases can be created for database versions 11.2 and
higher. For database versions lower than 11.2, you will need to select nodes to
create the Oracle RAC database.
Specify Global Database Name and SID prefix. Specify the Database Credentials
for SYS, SYSTEM, and DBSNMP. You can choose to specify the same or different
passwords for each of these user accounts.
Click on the Lock icon to lock the fields you have configured. These fields will not
be available for editing in the operator role.
Click Next.
15. In the Storage Locations page, select the storage type for Oracle RAC Database as
File System.
In the Database Files Location section, specify the location where data files,
temporary files, redo logs, and control files will be stored. These locations must be
on shared storage such as cluster file system location or ASM diskgroups.
• Select Use Database File Locations from Template to select defaults from the
template used.
• Select Use Common Location for All Database Files to specify a different
location.
If you select Use Oracle Managed Files (OMF), in the Multiplex Redo Logs and
Control Files section, you can specify locations to store duplicate copies of redo
logs and control files. Multiplexing provides greater fault-tolerance. You can
specify up to five locations.
In the Recovery Files Location section, select Use same storage type as database
files location to use the same storage type for recovery files as database files. Select
Use Flash Recovery Area and specify the location for recovery-related files and
Fast Recovery Area Size.
Select Enable Archiving to enable archive logging. Click Specify Archive Log
Locations and specify up to nine archive log locations. If the log location is not
specified, the logs will be saved in the default location.
Click on the Lock icon to lock the fields you have configured. These fields will not
be available for editing in the operator role.
Click Next.
16. In the Initialization Parameters page, select the memory management type as
Automatic Memory Management or Automatic Shared Memory Management.
Select Specify Memory Settings as Percentage of Available Memory to specify memory settings as a percentage of the available physical memory. For Automatic Shared Memory Management, specify Total SGA and Total PGA. For Automatic Memory Management, specify Total Memory for Oracle.
In the Database sizing section, specify the Block Size and number of Processes. If
you have selected a database template with datafiles in the Database Template
page, you cannot edit the Block Size.
Specify the Host CPU Count. The maximum CPU count that can be specified is
equal to the number of CPUs present on the host.
In the Character Sets section, select the default character set. The default character
set is based on the locale and operating system.
Select a national character set. The default is AL16UTF16.
In the Database Connection Mode section, select the dedicated server mode. For
shared server mode, specify the number of shared servers.
Click on the Lock icon to lock the fields you have configured. These fields will not
be available for editing in the operator role.
Click Next.
17. In the Additional Configuration Options page, select custom scripts from the
Software Library or your local disk. If you have selected a Structure Only database
template in the Database Template page, you can also view and edit database
options.
18. Review the information you have provided and click Next. You will come back to
the Configure page. If you have configured the database, the Create Databases task
will have a Configured status. Click Next.
22. The Custom Properties page will be displayed only for user customized
deployment procedures that require custom parameters. Specify custom properties
for the deployment, if any. Click Next.
23. In the Schedule page, specify a Deployment Instance name. If you want to run the
procedure immediately, then retain the default selection, that is, One Time
(Immediately). If you want to run the procedure later, then select One Time (Later)
and provide time zone, start date, and start time details. You can set the
notification preferences according to deployment procedure status. If you want to
run only prerequisites, you can select Pause the procedure to allow me to analyze
results after performing prerequisite checks to pause the procedure execution
after all prerequisite checks are performed.
Click Next.
24. In the Review page, review the details you have provided for the deployment
procedure and if you are satisfied with the details, then click Finish to run the
deployment procedure according to the schedule set. If you want to modify the
details, then click Back repeatedly to reach the page where you want to make the
changes. Click Save to save the deployment procedure for future deployment.
25. In the Operator role, launch the saved deployment procedure. Add targets for
provisioning and provide values for configurable fields in the deployment
procedure.
26. In the Procedure Activity page, view the status of the execution of the job and steps
in the deployment procedure. Click the Status link for each step to view the details
of the execution of each step. You can click Debug to set the logging level to Debug
and click Stop to stop the procedure execution.
27. After the procedure execution is completed, click on the Targets menu and select
All Targets to navigate to the All Targets page and verify that the newly
provisioned databases appear as Cloud Control targets.
2. Select the new named credential for both Normal user and Privileged user.
3. Click Submit.
When the database provisioning process reaches the step that requires root credentials, the process stops. You must then run the command manually. To do this, set the environment to $AGENT_HOME, and then copy and run the command from the Instructions field for each of the following three steps:
4. Once the command has been run manually as the root user for each step, click Confirm. The database provisioning process then continues until it completes.
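The exact command text must always be taken from the Instructions field of the paused step; the outline below is only a sketch of the manual run, and the agent installation path shown is hypothetical:
    # On the destination host, as the root user (hypothetical agent path)
    export AGENT_HOME=/u01/app/oracle/agent/agent_inst
    cd $AGENT_HOME
    # Paste and run the exact command shown in the Instructions field of the paused step,
    # then return to Cloud Control and click Confirm.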
This chapter explains how you can provision Oracle Real Application Clusters One
(Oracle RAC One) node databases using Oracle Enterprise Manager Cloud Control
(Cloud Control). In particular, this chapter covers the following:
8.1 Getting Started with Provisioning Oracle RAC One Node Databases
This section helps you get started with this chapter by providing an overview of the
steps involved in provisioning Oracle RAC One node databases. Consider this section
to be a documentation map to understand the sequence of actions you must perform
to successfully provision Oracle RAC One node. Click the reference links provided
against the steps to reach the relevant sections that provide more information.
Table 8-1 Getting Started with Provisioning Oracle RAC One Node Databases
• Ensure that you meet the infrastructure requirements described in Setting Up Your
Infrastructure.
• Meet the hardware, software, and network requirements for Oracle Grid
Infrastructure and Oracle RAC installation on the target hosts. For information
about the hardware, software, and network requirements for Oracle Grid
Infrastructure and Oracle RAC installation, refer to the Oracle Grid Infrastructure
Installation Guide 11g Release 2 (11.2).
• Discover and monitor the destination hosts in Cloud Control. For this purpose, you
need the latest version of Oracle Management Agent (Management Agent) on the
destination hosts. For more information refer to the Oracle Enterprise Manager Cloud
Control Basic Installation Guide. Ensure that the agents are installed in the same
location on all hosts.
• Set up the Oracle Software Library (Software Library). Ensure that the installation
media, database templates, or provisioning entities are available in the Software
Library. For information about creating them, see Setting Up Database
Provisioning. Alternatively, use a provisioning profile to store the database
template. For information about creating a database provisioning profile, see
Creating Database Provisioning Profiles.
• The user configuring the deployment procedure must be a member of the groups specified below. If these groups do not exist, then the Deployment Procedure automatically creates them. However, if the groups have to be created on NIS, then you must create them manually before running the Deployment Procedure (see the sketch after this list). For information about creating these operating system groups, refer to the Oracle Grid Infrastructure Installation Guide 11g Release 2 (11.2).
The Oracle Database user (typically oracle) must be a member of the following groups:
• Ensure that you use an operating system user that has write permission on the
following locations:
– Oracle base directory for Grid Infrastructure where diagnostic data files related
to Grid Infrastructure can be stored.
– Oracle base directory for database where diagnostic data files related to
database can be stored.
• Ensure that as an operator, you have permissions to view credentials (set and
locked by the designer), view targets, submit jobs, and launch deployment
procedures.
• Ensure that the operating system groups you specify for the following groups
already exist on the hosts you select for provisioning. The operating system users
of these groups automatically get the respective privileges.
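If any of the operating system groups mentioned in this list must be created manually, for example on hosts managed through NIS, the commands typically look like the following sketch. The group names oinstall and dba and the user name oracle are shown only as commonly used values, not as the definitive list for your environment:
    # Run as root on each destination host (illustrative group and user names)
    /usr/sbin/groupadd oinstall
    /usr/sbin/groupadd dba
    # Add the Oracle software owner to the groups
    /usr/sbin/usermod -aG oinstall,dba oracle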
1. Log in as a designer, and from the Enterprise menu, select Provisioning and
Patching, then select Database Provisioning.
2. In the Database Procedures page, select the Provision Oracle RAC Database
Deployment Procedure and click Launch. The Oracle RAC Database provisioning
wizard is launched.
3. In the Select Hosts page, if you want to use a provisioning profile for the
deployment, choose Select a Provisioning Profile and then, select the profile with
previously saved configuration parameters.
In the Select destination hosts section, click Add to select the destination host
where you want to deploy and configure the software.
In the Select Tasks to Perform section, select the platform, the version for the
process, and the components you want to provision:
• To deploy database software select Deploy and create a RAC One Node
database which creates a new database and configures it after installing the
Oracle RAC database.
Click on the Lock icon against the fields that you do not want to be edited in the
operator role. For more information about the lock down feature in deployment
procedures, see Introduction to Database Provisioning.
Click Next.
5. In the Specify OS Users page, specify the operating system users and groups
required to provision the database.
For Database User and ASM User, select the Normal User and Privileged User to
be added to the OS group.
Click Next.
6. In the Specify OS Groups page, specify the OS Groups to use for operating system
authentication. Ensure that these groups already exist on the hosts you select for
provisioning.
8. In the Select Software Locations page, specify the locations where the software
binaries of Oracle Grid Infrastructure and Oracle RAC can be placed, that is, the
$ORACLE_HOME location. As a designer, you can click on the Lock icon to lock
these fields. These fields will then not be available for editing in the operator role.
In the Source section, select the Software Library location for the Grid
Infrastructure and Oracle Database binaries.
In the Destination location, specify the following:
• Oracle Base for Grid Infrastructure, a location on the destination host where
the diagnostic and administrative logs, and other logs associated with the Grid
Infrastructure can be stored.
• Grid Infrastructure Home, a location on the destination host where the Grid
Infrastructure software can be provisioned. This is the Oracle home directory
for Grid Infrastructure. Do not select a location that is a subdirectory of the
Oracle Base for Grid Infrastructure or database. Select Shared Grid
Infrastructure home to enable Grid Infrastructure Oracle Home on shared
locations. Ensure that the directory path you provide meets the requirements
described in Requirements for Grid Infrastructure Software Location Path.
• Oracle Base for Database, a location on the destination host where the
diagnostic and administrative logs, and other logs associated with the database
can be stored. This location is used for storing only the dump files and is
different from the Oracle home directory where the database software will be
installed.
• Database Oracle Home, a location on the destination host where the database
software can be provisioned. This is the Oracle home directory for the database.
Select Shared Database Oracle home to enable Database Oracle Home on
shared locations.
In the Additional Parameters section, specify the Working Directory on the
destination host where the files related to cloning can be staged temporarily.
Ensure that you have approximately 7 GB of space for this directory. For Installer
Parameters, specify any additional Oracle Universal Installer (OUI) parameters you
want to run while provisioning Oracle Grid Infrastructure. For example, -force (to
override any warnings), -debug (to view more debug information), and -invPtrLoc
<Location> (for UNIX only). Ensure that the parameters are separated by white
space.
Click Next. You will come back to the Configure page. If you have configured the
source and destination location for the software, the Configure Software task will
have a Configured status.
10. In the Select Storage page, select the storage type for Grid Infrastructure and
database as Automatic Storage Management or File System to indicate the storage
type for storing voting disk and Oracle Cluster Registry (OCR). Voting disk and
OCR are used by Oracle Clusterware to manage its resources. You can choose from
the following options:
• Automatic Storage Management for both Grid Infrastructure and Oracle RAC
Database
• Automatic Storage Management for Grid Infrastructure and File System for
Oracle RAC Database
• File System for both Grid Infrastructure and Oracle RAC Database
• File System for Grid Infrastructure and Automatic Storage Management for
Oracle RAC Database
As a designer, you can click on the Lock icon to lock these fields. These fields will
then not be available for editing in the operator role.
Click Next.
11. In the Configure GI page, in the Basic Settings section, specify the Cluster Name,
SCAN Name, and SCAN Port. The default SCAN port is port 1521, but you can
specify another port of your choice. The deployment procedure verifies that the
SCAN port provided is a valid port number, and is not used for any other purpose.
After installation, a TNS listener listens to this port to respond to client connections
to the SCAN name.
In the GNS Settings section, select Configure GNS and auto-assign with DHCP
and specify the GNS Sub System and GNS VIP Address if you want virtual host
names outside the cluster to have dynamically assigned names.
In the GI Network section, by default, the network interfaces that have the same
name and subnet for the selected destination hosts are automatically detected and
displayed. Validate these network interface configuration details. From the Usage
column, select Public to configure the interface as public interface, or Private to
configure the interface as private interface.
Click Add to add an interface and specify the Interface Name and Interface
Subnet and click OK. Select the Usage as Public, Private, or Do Not Use if you do
not want to use the interface.
If you have chosen storage type as Automatic Storage Management for either or
both Grid Infrastructure and Oracle RAC Database, in the ASM Storage section,
select from the ASM Disk Groups that have been discovered by Cloud Control and
are displayed in the table. Click Add to add an ASM Disk Group. In the Add/Edit
Disk Group dialog box, specify the Disk Group Name, Disk List, and specify the
redundancy as Normal, High, or External. Click OK. Select the OCR/Voting Disk
to store cluster registry and voting disk files, and specify the ASM credentials for
ASMSNMP and SYS users.
If you have chosen storage type as File System for Grid Infrastructure or Oracle
RAC database, in the File System Storage section, specify the storage location for
Oracle Cluster Registry (OCR) and voting disks. Select Normal or External to
indicate the redundancy level, and specify their locations.
As a designer, you can click on the Lock icon to lock these fields. These fields will
then not be available for editing in the operator role.
Click Next. You will come back to the Configure page. If you have configured the
storage options for the Grid Infrastructure and database, the Configure Grid
Infrastructure task will have a completed status. Click Next.
12. The Custom Properties page will be displayed only for user customized
deployment procedures that require custom parameters. Specify custom properties
for the deployment, if any. Click Next.
13. In the Schedule page, if you want to run the procedure immediately, then retain the
default selection, that is, One Time (Immediately). If you want to run the procedure
later, then select One Time (Later) and provide time zone, start date, and start time
details. Click Next.
14. In the Review page, review the details you have provided for the deployment
procedure and if you are satisfied with the details, then click Finish to run the
deployment procedure according to the schedule set. If you want to modify the
details, then click Back repeatedly to reach the page where you want to make the
changes. Click Save to save the deployment procedure for future deployment.
15. In the Database Procedures page, select the Create Oracle Database Deployment
Procedure and click Launch. The Create Oracle Database provisioning wizard is
launched.
16. In the Database Version and Type page, select the database Version and select
Oracle RAC One Node Database.
In the Cluster section, select the cluster and the Oracle home provisioned earlier.
Select a reference host that is used to perform validations and as a reference for
creating the database on the cluster.
Select Cluster Credentials or add new. Click the plus icon to add new credentials
and specify User Name, Password, and Run Privileges and save the credentials.
Click Next.
17. In the Database Template page, choose the database template location. The location
can be Software Library or Oracle home. The template selected must be compatible
with the selected Oracle home version.
If you have selected Software Library, click on the search icon and select the
template from the Software Library. Specify Temporary Storage Location on
Managed Host(s). This location must be present on the reference node that you
selected earlier.
Click Show Template Details to view details of the selected template. You can
view initialization parameters, table spaces, data files, redo log groups, common
options, and other details of the template.
If you have selected Oracle Home, select the template from the Oracle home. The
default location is ORACLE_HOME/assistants/dbca/templates.
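For example, on the reference host you can list the templates available in the Oracle home as follows (assuming the ORACLE_HOME environment variable is set):

$ ls $ORACLE_HOME/assistants/dbca/templates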
Click on the Lock icon to lock the fields you have configured. These fields will not
be available for editing in the operator role.
Click Next.
18. In the Identification and Placement page, select nodes on which you want to create
the cluster database. Specify Global Database Name and SID prefix. Specify the
Database Credentials for SYS, SYSTEM, and DBSNMP. Select the type of Oracle
RAC database, whether Policy Managed or Admin Managed. Specify the Service
Name.
Note:
Click on the Lock icon to lock the fields you have configured. These fields will not
be available for editing in the operator role.
Click Next.
19. In the Storage Locations page, select the storage type, whether File System or
Automatic Storage Management (ASM).
In the Database Files Location section, specify the location where data files,
temporary files, redo logs, and control files will be stored.
• Select Use Database File Locations from Template to select defaults from the
template used.
• Select Use Common Location for All Database Files to specify a different
location.
If you select Use Oracle Managed Files (OMF), in the Multiplex Redo Logs and
Control Files section, you can specify locations to store duplicate copies of redo
logs and control files. Multiplexing provides greater fault-tolerance. You can
specify up to five locations.
In the Recovery Files Location section, select Use Flash Recovery Area and specify
the location for recovery-related files and Fast Recovery Area Size.
In the Archive Log Settings section, select Enable Archiving to enable archive
logging. In the Specify Archive Log Locations section, you can specify up to nine
archive log locations. If the log location is not specified, the logs will be saved in the
default location.
Click on the Lock icon to lock the fields you have configured. These fields will not
be available for editing in the operator role.
Click Next.
20. In the Initialization Parameters page, select the memory management type as
Automatic Memory Management or Automatic Shared Memory Management.
Select Specify Memory Settings as Percentage of Available Memory to specify
memory settings as percentage of available physical memory. For Automatic
Shared Memory management, specify Total SGA and Total PGA. For Automatic
Memory Management, specify Total Memory for Oracle.
In the Database sizing section, specify the Block Size and number of Processes. If
you have selected a database template with data files in the Database Template
page, you cannot edit the Block Size.
Specify the Host CPU Count. The maximum CPU count that can be specified is
equal to the number of CPUs present on the host.
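For example, on a Linux destination host you can check the number of CPUs before filling in this field:

$ grep -c ^processor /proc/cpuinfo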
In the Character Sets section, select the default character set. The default character
set is based on the locale and operating system.
Select a national character set. The default is AL16UTF16.
In the Database Connection Mode section, select the dedicated server mode or the
shared server mode. If you select the shared server mode, specify the number of
shared servers.
Click on the Lock icon to lock the fields you have configured. These fields will not
be available for editing in the operator role.
Click Next.
21. In the Additional Configuration Options page, select custom scripts from the
Software Library. If you have selected a Structure Only database template in the
Database Template page, you can also view and edit database options. Click on the
Lock icon to lock the field. Click Next.
22. In the Schedule page, specify a Deployment Procedure Instance Name and a
schedule for the deployment. If you want to run the procedure immediately, then
retain the default selection, that is Immediately. If you want to run the procedure
later, then select Later and provide time zone, start date, and start time details.
Click Next.
23. In the Review page, review the details you have provided for the deployment
procedure and if you are satisfied with the details, then click Finish to run the
deployment procedure according to the schedule set. If you want to modify the
details, then click Back repeatedly to reach the page where you want to make the
changes. Click Save to save the deployment procedure for future deployment.
24. In the Procedure Activity page, view the status of the execution of the job and steps
in the deployment procedure. Click the Status link for each step to view the details
of the execution of each step. You can click Debug to set the logging level to Debug
and click Stop to stop the procedure execution.
9 Provisioning Oracle Real Application Clusters for 10g and 11g
This chapter explains how you can provision Oracle Real Application Clusters (Oracle
RAC) for 10g and 11g Release 1 using Oracle Enterprise Manager Cloud Control
(Cloud Control). In particular, this chapter covers the following:
• Getting Started with Provisioning Oracle Real Application Clusters for 10g and 11g
9.1 Getting Started with Provisioning Oracle Real Application Clusters for
10g and 11g
This section helps you get started with this chapter by providing an overview of the
steps involved in provisioning Oracle RAC for 10g and 11g Release 1. Consider this
section to be a documentation map to understand the sequence of actions you must
perform to successfully provision Oracle RAC for 10g and 11g Release 1. Click the
reference links provided against the steps to reach the relevant sections that provide
more information.
Step 2 Selecting the Use Case
This chapter covers a few use cases for provisioning Oracle RAC. Select the use case
that best matches your requirements.
• To learn about cloning an existing Oracle RAC, see Cloning a Running Oracle Real
Application Clusters.
• To learn about provisioning Oracle RAC using a gold image, see Provisioning
Oracle Real Application Clusters Using Gold Image.
• To learn about provisioning Oracle RAC using the software binaries from an
installation medium, see Provisioning Oracle Real Application Clusters Using
Archived Software Binaries.
Step 4 Running the Deployment Procedure
Run the Deployment Procedure to successfully provision Oracle RAC.
• To clone an existing Oracle RAC, follow the steps explained in Procedure for
Cloning a Running Oracle Real Application Clusters.
• To provision Oracle RAC using a gold image, follow the steps explained in
Procedure for Provisioning Oracle Real Application Clusters Using Gold Image.
• To provision Oracle RAC using the software binaries from an installation medium,
follow the steps explained in Procedure for Provisioning Oracle Real Application
Clusters Using Archived Software Binaries.
Note:
When you run the Deployment Procedures to provision Oracle RAC on a
shared file system, the software binaries are installed in the shared location,
but the configuration happens on all nodes. To configure new nodes, run the
One Click Extend Cluster Database procedure to extend the Oracle RAC stack to
other nodes.
• Ensure that you meet the prerequisites described in Setting Up Your Infrastructure.
• If you want to clone Oracle RAC 11g Release 1 (11.1.0.6) on Solaris platforms, then
apply patch# 6486988 on the Oracle home that needs to be cloned.
• Ensure that the target hosts have the necessary hardware and software required for
Oracle RAC. The hardware requirements include setting up of the following:
– Private Network: The network interface cards must be installed on each node
and connected to each other.
– Shared Storage Between Nodes: The shared storage is required for OCR, Voting
disks and the data files.
• Ensure that the Virtual IPs are set up in the DNS. If you choose to set up the Virtual
IPs locally, then the IP addresses can be specified using the Deployment Procedure,
and the procedure will set them up for you.
• If you want to use a custom template to create a structure for the database, then
create a template (a .dbt file), and store it in a location accessible from the target
hosts. The file may be on the target host or on a shared location. For information
about creating templates, see Creating Database Templates.
• Ensure that operating system users such as oracle and crsuser are available on all
nodes of the cluster. These users must be a part of the relevant operating system
groups such as dba and oinstall.
For more information, see Oracle Clusterware Installation Guide available at:
http://www.oracle.com/pls/db111/homepage
• Ensure that the User IDs for operating system users and the Group IDs for
operating system groups are identical on all nodes of the cluster, as shown in the
verification example after this list.
• Ensure that you use an operating system user that has the privileges to run the
Deployment Procedure and its commands on the target hosts. If you do not have
the privileges to do so, that is, if you are using a locked account, then request your
administrator (a designer) to either customize the Deployment Procedure to run it
as another user or ignore the steps that require special privileges. For information
about customization, see Customizing Deployment Procedures.
• Compare the configuration of the source and target hosts and ensure that they have
the same configuration. If the configurations are different, then contact your system
administrator and fix the inconsistencies before running the Deployment
Procedure.
To compare the configuration of the hosts, in Cloud Control, click Targets and then
Hosts. On the Hosts page, click the name of the source host to access its Home
page, and then from the Host menu, click Configuration and then click Compare.
• While selecting the source, remember to remove sqlnet.ora from the list of files
mentioned in Files to Exclude.
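As a quick check of the user and group ID prerequisite listed above, you can run the id command for the installation owner on each node and compare the output. The user and group names, and the numeric IDs, shown here are only illustrative:

$ id oracle
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba)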
1. From the Enterprise menu, select Provisioning and Patching, then select
Database Provisioning.
2. In the Database Provisioning page, select one of the following, and click Launch.
a. In the Select Source section, select Select from Existing Installations. Then
click the torch icon for Reference Host and select the host on which the
existing Oracle RAC installation is running. Once you select the reference
host, the application automatically displays the working directory and the
details of the selected Oracle Clusterware and Oracle Database.
If you want to save the selected Oracle Clusterware and Oracle Database as
gold images in the Software Library, then click Save to Software Library.
Oracle Clusterware is saved as a Clusterware Clone component type and
Oracle Database is stored as a Database Clone component type, respectively.
b. Click Next.
a. In the Hosts to Include in Cluster section, click Add and select the target hosts
that should form the cluster. To see more details about the selected hosts,
click Show Options.
Note:
When you click Add, the Select Target pop-up window appears. On this page,
by default, the Show Suitable Hosts option is selected and the table lists only
those hosts that are best suited for provisioning. If you do not find the host
you want to add, then select Show All Hosts to view a complete list of hosts.
By default, Private Host Name and Virtual Host Name are automatically
prefilled with values. Edit them and specify values that match with your
environment. Optionally, you can also specify their IP addresses.
Note:
If the prefilled, default values of Private Host Name and Virtual Host Name
are incorrect, then see the workaround described in Troubleshooting Issues.
If you already have these details stored in the cluster configuration file, then
click Import From File to select that cluster configuration file. This file
typically contains information about the new hosts to be added. To
understand how a cluster configuration file looks, see the sample file shown
in Sample Cluster Configuration File.
To configure the private and public network interfaces, click Select
Interfaces. By default, the interfaces that have the same name and subnet for
the selected target hosts are displayed. However, you can also choose to view
all the interfaces for the selected target hosts. You can either select one of the
existing interfaces or specify a completely new one if the interface you want
to use does not exist.
c. Click Next.
a. In the Reference Host Credentials section, retain the default selection, that is,
Use Preferred Credentials.
Note:
You can optionally override these preferred credentials. The credentials you
specify here are used by the Deployment Procedure to run the provisioning
operation. If this environment is secure and has locked accounts, then make
sure that:
• The credentials you specify here have the necessary privileges to switch to
the locked account for performing the provisioning operation.
From the Host Credentials list, select Different for each Oracle Home if you
want to use different operating system credentials for each Oracle home, or
Same for all Oracle Homes if you want to use the same set of credentials for
all Oracle homes. Depending on the selection you make, specify the
credentials. Ensure that the users belong to the same group (dba/oinstall).
Note:
If you are using vendor clusterware, then ensure that root and the operating
system users, such as oracle and crsuser, owning the clusterware and
various Oracle homes are a part of the operating system groups required by
the vendor clusterware.
For example, if your system uses High Availability Cluster Multiprocessing
(HACMP) clusterware, then create or check for the existence of the group
hagsuser. Ensure that the relevant operating system users and root user are
members of this group.
For more information, refer to the Oracle Clusterware and Oracle Real Application
Clusters Installation and Configuration Guide.
d. Click Next.
a. In the Cluster Name and Location section, review the default name and
location details provided for Oracle Clusterware and Oracle RAC Database.
While Oracle recommends you to retain the default values, you can always
edit them to provide custom values.
For security purposes, the clusterware configuration sets the ownership of
Oracle Clusterware home and all its parent directories to be owned by root.
Note:
• If you do not see a default cluster name in the Cluster Name field, then
you might have selected nodes that are not master nodes of the cluster. In
this case, manually specify a cluster name, but ensure that the name you
specify is the same host cluster name you provided in the Agent Deploy
application in Cloud Control, while deploying Management Agents on that
cluster.
b. In the Database Details section, retain the default selection for creating a
starter database.
Note:
If the database creation steps are disabled in the Deployment Procedure, then
you will not see this section.
If you want to create a general-purpose database, then leave all the fields in
this section blank. Otherwise, provide the required details as described in this
step.
If you have a custom response file that already has the options enabled, then
select Use response file to create database, and specify the full path to a
location where the file is available. The file may be available on the target
host, in a shared location accessible from the target host, in the Software
Library, or in a location where an existing database is running.
Note:
From the Software Library or from the location where an existing database is
running, only a .dbt template file can be used. However, from the target host
or a shared location, any template file can be used.
If you do not have a custom response file, then select Do not use response
file, and provide the global database name, the credentials, and the additional
parameters you want to run while creating the starter database.
Note:
Ensure that the database name you specify is in the format
database_name.database_domain. It must have 1 to 8 alphanumeric
characters. For example, orcl.mydomain.com. Also note that the credentials
you provide are used for SYS, SYSTEM, SYSMAN, and DBSNMP accounts.
If you want to use the structure of an existing database and have a custom
template to structure the new database, then in Template File for Database,
specify the full path to a location where the template file is available. The file
may be available on the target host or on a shared location accessible from the
target host.
Note:
If you do not store the response files and templates in a central location, you
can always customize the Deployment Procedure to add another step that
copies the response file or template to the target host before invoking the
configuration tools to create the database.
c. In the Backup and Recovery Details section, retain the default selection, that
is, Do not Enable Automated Backups if you do not want to have backups
taken.
Alternatively, if you want to enable automated backups, select Enable
Automated Backups, specify the full path to a directory location from where
the backed-up files can be recovered, and provide the operating system
credentials for running the backup job. Note that recovery location is the
same location as the backup location because this is where the files are backed
up and also recovered from.
d. In the ASM Instance Details section (appears only if you had selected to
deploy ASM), retain the default selection, that is, Create ASM Instance, and
specify the credentials, additional ASM parameters to be used, and the ASM
disk string to be used.
Note:
If you are provisioning Oracle Database 10g and Oracle ASM 10g, then ensure
that you specify the same password for database as well as ASM.
If you have a custom response file that already has the options enabled, then
select Use response file to create ASM database, and specify the full path to
a location where the file is available. The file may be available on the target
host or on a shared location accessible from the target hosts.
If you do not want to use a response file, then select Do not use response file.
e. Click Next.
a. In the Shared Storage Configuration section, provide details about the storage
devices and click Next. Specify the partition name and the mount location,
and select the mount format and a storage device for storing data. While
partition name is the path to the location where the device is installed, mount
location is the mount point that represents the partition location.
While configuring the storage device, at a minimum, you must have a
partition for at least OCR, Voting Disk, and data files. You cannot designate
the same storage device to multiple partitions.
Oracle recommends designating the OCR and the OCR Mirror devices to
different partitions. Similarly, Oracle recommends designating the Voting
Disk, Voting Disk1, and Voting Disk2 to different partitions.
Before clicking Next, do the following:
- If you want to clear the data on selected raw devices before creating and
configuring the cluster, then select Clear raw devices.
- If you have configured only for a few storage devices, then select Do not
provision storage for others that you do not want to provision.
- Specify the ASM disk string to be used.
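The format of the ASM disk string depends on the platform and on how the disks were prepared; the values below are only an illustration, so use the discovery string that matches the devices prepared in your environment. For example, on Linux:

/dev/sd[b-e]1     (raw block device partitions)
ORCL:*            (disks stamped with ASMLib)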
b. In the Options section, select the ASM redundancy mode. The default is
None, which requires 7 GB of space. While Normal requires 16 GB of space,
High requires 32 GB.
Note:
If the configuration steps are disabled in the Deployment Procedure, then you
will not see this page.
b. In the Sysctl File Configuration section, select Configure Sysctl file if you
want to configure the sysctl.conf file. Specify the mode of editing the system
configuration file and the location of the reference system configuration file
used for modifying the kernel parameters.
The default mode is append. You can, however, select edit to modify it, or replace to
replace the current sysctl.conf file.
Ensure that the reference file you specify is available in a shared location
accessible by the Oracle Management Service.
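For example, a reference file used in append mode might contain a fragment such as the following. The parameter values are illustrative only; use the values that have been validated for your platform and database release:

fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 4294967295
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500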
9. On the Review page, review the details you have provided for provisioning
Oracle RAC, and click Submit. If the details you provided seem to be missing on
this page, then see the workaround described in Troubleshooting Issues.
10. After the Deployment Procedure ends successfully, instrument the database to
collect configuration information.
9.4 Provisioning Oracle Real Application Clusters Using Gold Image
Note:
Ensure that you use a gold image that was created using the Oracle home
directory of a RAC database. You cannot use a gold image that was created
using the Oracle home directory of a standalone database.
• Prerequisites for Provisioning Oracle Real Application Clusters Using Gold Image
• Procedure for Provisioning Oracle Real Application Clusters Using Gold Image
9.4.1 Prerequisites for Provisioning Oracle Real Application Clusters Using Gold Image
Before running the Deployment Procedure, meet the following prerequisites.
• Ensure that you meet the prerequisites described in Setting Up Your Infrastructure.
• Ensure that you create gold images of existing Oracle RAC Database and Oracle
Grid Infrastructure.
To understand how you can create a gold image, see Setting Up Database
Provisioning.
• Ensure that the target hosts have the necessary hardware and software required for
Oracle RAC. The hardware requirements include setting up of the following:
– Private Network: The network interface cards must be installed on each node
and connected to each other.
– Shared Storage Between Nodes: The shared storage is required for OCR, Voting
disks and the data files.
• Ensure that the Virtual IPs are set up in the DNS. If you choose to set up the Virtual
IPs locally, then the IP addresses can be specified using the Deployment Procedure,
and the procedure will set them up for you.
• If you want to use a custom template to create a structure for the database, then
create a template (a .dbt file), and store it in a location accessible from the target
hosts. The file may be on the target host or on a shared location.
To understand how a template can be created and used for creating databases, see
Creating Database Templates.
• Ensure that operating system users such as oracle and crsuser are available on all
nodes of the cluster. These users must be a part of the relevant operating system
groups such as dba and oinstall.
For more information, see Oracle Clusterware Installation Guide available at:
http://www.oracle.com/pls/db111/homepage
• Ensure that the User IDs for operating system users and the Group IDs for
operating system groups are identical on all nodes of the cluster.
• Ensure that you use an operating system user that has the privileges to run the
Deployment Procedure and its commands on the target hosts. If you do not have
the privileges to do so, that is, if you are using a locked account, then request your
administrator (a designer) to either customize the Deployment Procedure to run it
as another user or ignore the steps that require special privileges. For information
about customization, see Customizing Deployment Procedures .
• While selecting the source, remember to remove sqlnet.ora from the list of files
mentioned in Files to Exclude.
9.4.2 Procedure for Provisioning Oracle Real Application Clusters Using Gold Image
To provision a gold image of an Oracle RAC installation, follow these steps:
1. From the Enterprise menu, select Provisioning and Patching, then select
Database Provisioning.
2. In the Database Provisioning page, select one of the following, and click Launch.
b. In the Source for Clusterware section, click the torch icon and select the
generic component that has the gold image of Oracle Clusterware. Ensure
that you select only components that are in "Ready" status. Once you select
the component name, the application automatically displays the component
location.
Note:
If you do not see the required component in the Software Library, then follow
the workaround described in Troubleshooting Issues.
c. In the Source for RAC section, click the torch icon and select the generic
component that has the gold image of Oracle Database. Ensure that you select
only components that are in "Ready" status. Once you select the component
name, the application automatically displays the component location.
Note:
If you do not see the required component in the Software Library, then follow
the workaround described in Troubleshooting Issues.
e. Click Next.
a. In the Hosts to Include in Cluster section, click Add and select the target hosts
that should form the cluster. To see more details about the selected hosts,
click Show Options.
Note:
When you click Add, the Select Target pop-up window appears. On this page,
by default, the Show Suitable Hosts option is selected and the table lists only
those hosts that are best suited for provisioning. If you do not find the host
you want to add, then select Show All Hosts to view a complete list of hosts.
By default, Private Host Name and Virtual Host Name are automatically
prefilled with values. Edit them and specify values that match with your
environment. Optionally, you can also specify their IP addresses.
Note:
If the prefilled, default values of Private Host Name and Virtual Host Name
are incorrect, then see the workaround described in Troubleshooting Issues.
If you already have these details stored in the cluster configuration file, then
click Import From File to select that cluster configuration file. This file
typically contains information about the new hosts to be added. To
understand how a cluster configuration file looks, see the sample file shown
in Sample Cluster Configuration File.
To configure the private and public network interfaces, click Select
Interfaces. By default, the interfaces that have the same name and subnet for
the selected target hosts are displayed. However, you can also choose to view
all the interfaces for the selected target hosts. You can either select one of the
existing interfaces or specify a completely new one if the interface you want
to use does not exist.
c. Click Next.
a. In the Target Host(s) Credentials section, retain the default selection, that is,
Use Preferred Credentials.
Note:
You can optionally override these preferred credentials. The credentials you
specify here are used by the Deployment Procedure to run the provisioning
operation. If this environment is secure and has locked accounts, then make
sure that:
• The credentials you specify here have the necessary privileges to switch to
the locked account for performing the provisioning operation.
From the Host Credentials list, select Different for each Oracle Home if you
want to use different operating system credentials for each Oracle home, or
Same for all Oracle Homes if you want to use the same set of credentials for
all Oracle homes. Depending on the selection you make, specify the
credentials. Ensure that the users belong to the same group (dba/oinstall).
Note:
If you are using vendor clusterware, then ensure that root and the operating
system users, such as oracle and crsuser, owning the clusterware and
various Oracle homes are a part of the operating system groups required by
the vendor clusterware.
For example, if your system uses High Availability Cluster Multiprocessing
(HACMP) clusterware, then create or check for the existence of the group
hagsuser. Ensure that the relevant operating system users and root user are
members of this group.
For more information, refer to the Oracle Clusterware and Oracle Real Application
Clusters Installation and Configuration Guide.
c. Click Next.
a. In the Cluster Name and Location section, review the default name and
location details provided for Oracle Clusterware and Oracle RAC Database.
While Oracle recommends you to retain the default values, you can always
edit them to provide custom values.
For security purposes, the clusterware configuration sets the ownership of
Oracle Clusterware home and all its parent directories to be owned by root.
b. In the Database Details section, retain the default selection for creating a
starter database.
Note:
If the database creation steps are disabled in the Deployment Procedure, then
you will not see this section.
If you want to create a general-purpose database, then leave all the fields in
this section blank. Otherwise, provide the required details as described in this
step.
If you have a custom response file that already has the options enabled, then
select Use response file to create database, and specify the full path to a
location where the file is available. The file may be available on the target
host, in a shared location accessible from the target host, in the Software
Library, or in a location where an existing database is running.
Note:
From the Software Library or from the location where an existing database is
running, only a .dbt template file can be used. However, from the target host
or a shared location, any template file can be used.
If you do not have a custom response file, then select Do not use response
file, and provide the global database name, the credentials, and the additional
parameters you want to run while creating the starter database.
Note:
Ensure that the database name you specify is in the format
database_name.database_domain. It must have 1 to 8 alphanumeric
characters. For example, orcl.mydomain.com. Also note that the credentials
you provide are used for SYS, SYSTEM, SYSMAN, and DBSNMP accounts.
If you want to use the structure of an existing database and have a custom
template to structure the new database, then in Template File for Database,
specify the full path to a location where the template file is available. The file
may be available on the target host or on a shared location accessible from the
target host.
Note:
If you do not store the response files and templates in a central location, you
can always customize the Deployment Procedure to add another step that
copies the response file or template to the target host before invoking the
configuration tools to create the database.
c. In the Backup and Recovery Details section, retain the default selection, that
is, Do not Enable Automated Backups if you do not want to have backups
taken.
Alternatively, if you want to enable automated backups, select Enable
Automated Backups, specify the full path to a directory location from where
the backed-up files can be recovered, and provide the operating system
credentials for running the backup job. Note that recovery location is the
same location as the backup location because this is where the files are backed
up and also recovered from.
d. In the ASM Instance Details section (appears only if you had selected to
deploy ASM), retain the default selection, that is, Create ASM Instance, and
specify the credentials, additional ASM parameters to be used, and the ASM
disk string to be used.
Note:
If you are provisioning Oracle Database 10g and Oracle ASM 10g, then ensure
that you specify the same password for database as well as ASM.
If you have a custom response file that already has the options enabled, then
select Use response file to create ASM database, and specify the full path to
a location where the file is available. The file may be available on the target
host or on a shared location accessible from the target hosts.
If you do not want to use a response file, then select Do not use response file.
e. Click Next.
a. In the Shared Storage Configuration section, provide details about the storage
devices and click Next. Specify the partition name and the mount location,
and select the mount format and a storage device for storing data. While
partition name is the path to the location where the device is installed, mount
location is the mount point that represents the partition location.
While configuring the storage device, at a minimum, you must have a
partition for at least OCR, Voting Disk, and data files. You cannot designate
the same storage device to multiple partitions.
Oracle recommends designating the OCR and the OCR Mirror devices to
different partitions. Similarly, Oracle recommends designating the Voting
Disk, Voting Disk1, and Voting Disk2 to different partitions.
Before clicking Next, do the following:
- If you want to clear the data on selected raw devices before creating and
configuring the cluster, then select Clear raw devices.
- If you have configured only for a few storage devices, then select Do not
provision storage for others that you do not want to provision.
- Specify the ASM disk string to be used.
b. In the Options section, select the ASM redundancy mode. The default is
None, which requires 7 GB of space. While Normal requires 16 GB of space,
High requires 32 GB.
Note:
If the configuration steps are disabled in the Deployment Procedure, then you
will not see this page.
b. In the Sysctl File Configuration section, select Configure Sysctl file if you
want to configure the sysctl.conf file. Specify the mode of editing the system
configuration file and the location of the reference system configuration file
used for modifying the kernel parameters.
The default mode is append. You can, however, select edit to modify it, or replace to
replace the current sysctl.conf file.
Ensure that the reference file you specify is available in a shared location
accessible by the Oracle Management Service.
9. On the Review page, review the details you have provided for provisioning
Oracle RAC, and click Submit. If the details you provided seem to be missing on
this page, then see the workaround described in Troubleshooting Issues.
10. After the Deployment Procedure ends successfully, instrument the database to
collect configuration information.
9.5.1 Prerequisites for Provisioning Oracle Real Application Clusters Using Archived
Software Binaries
Before running the Deployment Procedure, meet the following prerequisites.
• Ensure that you meet the prerequisites described in Provisioning Oracle Real
Application Clusters for 10g and 11g .
• Ensure that you upload the software binaries of Oracle RAC Database and Oracle
Grid Infrastructure to the Software Library.
• Ensure that the target hosts have the necessary hardware and software required for
Oracle RAC. The hardware requirements include setting up of the following:
– Private Network: The network interface cards must be installed on each node
and connected to each other.
– Shared Storage Between Nodes: The shared storage is required for OCR, Voting
disks and the data files.
• Ensure that the Virtual IPs are set up in the DNS. If you choose to set up the Virtual
IPs locally, then the IP addresses can be specified using the Deployment Procedure,
and the procedure will set them up for you.
• If you want to use a custom template to create a structure for the database, then
create a template (a .dbt file), and store it in a location accessible from the target
hosts. The file may be on the target host or on a shared location.
To understand how a template can be created and used for creating databases, see
Creating Database Templates.
• Ensure that operating system users such as oracle and crsuser are available on all
nodes of the cluster. These users must be a part of the relevant operating system
groups such as dba and oinstall.
For more information, see Oracle Clusterware Installation Guide available at:
http://www.oracle.com/pls/db111/homepage
• Ensure that the User IDs for operating system users and the Group IDs for
operating system groups are identical on all nodes of the cluster.
• Ensure that you use an operating system user that has the privileges to run the
Deployment Procedure and its commands on the target hosts. If you do not have
the privileges to do so, that is, if you are using a locked account, then request your
administrator (a designer) to either customize the Deployment Procedure to run it
as another user or ignore the steps that require special privileges. For information
about customization, see Customizing Deployment Procedures .
9.5.2 Procedure for Provisioning Oracle Real Application Clusters Using Archived
Software Binaries
To provision a fresh Oracle RAC installation, follow these steps:
1. From the Enterprise menu, select Provisioning and Patching and then select
Database Provisioning.
2. In the Database Provisioning page, select one of the following, and click Launch.
b. In the Source for Clusterware section, click the torch icon and select the
generic component that has the software binaries of Oracle Clusterware.
Ensure that you select only components that are in "Ready" status. Once you
select the component name, the application automatically displays the
component location.
Note:
If you do not see the required component in the Software Library, then follow
the workaround described in Troubleshooting Issues.
c. In the Source for RAC section, click the torch icon and select the generic
component that has the software binaries of Oracle Database. Ensure that you
select only components that are in "Ready" status. Once you select the
component name, the application automatically displays the component
location.
Note:
If you do not see the required component in the Software Library, then follow
the workaround described in Troubleshooting Issues.
e. Click Next.
a. In the Hosts to Include in Cluster section, click Add and select the target hosts
that should form the cluster. To see more details about the selected hosts,
click Show Options.
Note:
When you click Add, the Select Target pop-up window appears. On this page,
by default, the Show Suitable Hosts option is selected and the table lists only
those hosts that are best suited for provisioning. If you do not find the host
you want to add, then select Show All Hosts to view a complete list of hosts.
By default, Private Host Name and Virtual Host Name are automatically
prefilled with values. Edit them and specify values that match with your
environment. Optionally, you can also specify their IP addresses.
Note:
If the prefilled, default values of Private Host Name and Virtual Host Name
are incorrect, then see the workaround described in Troubleshooting Issues.
If you already have these details stored in a cluster configuration file, then
click Import From File to select that cluster configuration file. This file
typically contains information about the new hosts to be added. To
understand how a cluster configuration file looks, see the sample file shown
in Sample Cluster Configuration File.
To configure the private and public network interfaces, click Select
Interfaces. By default, the interfaces that have the same name and subnet for
the selected target hosts are displayed. However, you can also choose to view
all the interfaces for the selected target hosts. You can either select one of the
existing interfaces or specify a completely new one if the interface you want
to use does not exist.
c. Click Next.
a. In the Target Host(s) Credentials section, retain the default selection, that is,
Use Preferred Credentials.
Note:
You can optionally override these preferred credentials. The credentials you
specify here are used by the Deployment Procedure to run the provisioning
operation. If this environment is secure and has locked accounts, then make
sure that:
• The credentials you specify here have the necessary privileges to switch to
the locked account for performing the provisioning operation.
From the Host Credentials list, select Different for each Oracle Home if you
want to use different operating system credentials for each Oracle home, or
Same for all Oracle Homes if you want to use the same set of credentials for
all Oracle homes. Depending on the selection you make, specify the
credentials. Ensure that the users belong to the same group (dba/oinstall).
Note:
If you are using vendor clusterware, then ensure that root and the operating
system users, such as oracle and crsuser, owning the clusterware and
various Oracle homes are a part of the operating system groups required by
the vendor clusterware.
For example, if your system uses High Availability Cluster Multiprocessing
(HACMP) clusterware, then create or check for the existence of the group
hagsuser. Ensure that the relevant operating system users and root user are
members of this group.
For more information, refer to the Oracle Clusterware and Oracle Real Application
Clusters Installation and Configuration Guide.
c. Click Next.
a. In the Cluster Name and Location section, review the default name and
location details provided for Oracle Clusterware and Oracle RAC Database.
While Oracle recommends you to retain the default values, you can always
edit them to provide custom values.
For security purposes, the clusterware configuration sets the ownership of
Oracle Clusterware home and all its parent directories to be owned by root.
Hence, Oracle recommends you to install Oracle Clusterware outside the
Oracle base of the Oracle RAC home.
The default cluster name you see here is based on the host cluster name you
provided in the Agent Deploy application in Cloud Control, while deploying
Management Agents on a cluster. The scratch location you see here is a
temporary location on the target host where temporary files are placed before
provisioning and configuring Oracle RAC.
For Additional Parameters, specify any additional parameters you want to
run while installing Oracle Clusterware. For example, -debug.
You can specify any Oracle Universal Installer (OUI) parameter that can be
used in this provisioning operation. Using these parameters, you can even
change the installation type of the database. For example,
INSTALL_TYPE=SE. Ensure that the parameters are separated by white
space.
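For example, the Additional Parameters field might contain the following space-separated values (shown only as an illustration):

-debug INSTALL_TYPE=SE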
b. In the Database Details section, retain the default selection for creating a
starter database.
Note:
If the database creation steps are disabled in the Deployment Procedure, then
you will not see this section.
If you want to create a general-purpose database, then leave all the fields in
this section blank. Otherwise, provide the required details as described in this
step.
If you have a custom response file that already has the options enabled, then
select Use response file to create database, and specify the full path to a
location where the file is available. The file may be available on the target
host, in a shared location accessible from the target host, in the Software
Library, or in a location where an existing database is running.
Note:
From the Software Library or from the location where an existing database is
running, only a .dbt template file can be used. However, from the target host
or a shared location, any template file can be used.
If you do not have a custom response file, then select Do not use response
file, and provide the global database name, the credentials, and the additional
parameters you want to run while creating the starter database.
Note:
Ensure that the database name you specify is in the format
database_name.database_domain. It must have 1 to 8 alphanumeric
characters. For example, orcl.mydomain.com. Also note that the credentials
you provide are used for SYS, SYSTEM, SYSMAN, and DBSNMP accounts.
If you want to use the structure of an existing database and have a custom
template to structure the new database, then in Template File for Database,
specify the full path to a location where the template file is available. The file
may be available on the target host or on a shared location accessible from the
target host.
Note:
If you do not store the response files and templates in a central location, you
can always customize the Deployment Procedure to add another step that
copies the response file or template to the target host before invoking the
configuration tools to create the database.
c. In the Backup and Recovery Details section, retain the default selection, that
is, Do not Enable Automated Backups if you do not want to have backups
taken.
Alternatively, if you want to enable automated backups, select Enable
Automated Backups, specify the full path to a directory location from where
the backed-up files can be recovered, and provide the operating system
credentials for running the backup job. Note that recovery location is the
same location as the backup location because this is where the files are backed
up and also recovered from.
d. In the ASM Instance Details section (appears only if you had selected to
deploy ASM), retain the default selection, that is, Create ASM Instance, and
specify the credentials, additional ASM parameters to be used, and the ASM
disk string to be used.
Note:
If you are provisioning Oracle Database 10g and Oracle ASM 10g, then ensure
that you specify the same password for database as well as ASM.
If you have a custom response file that already has the options enabled, then
select Use response file to create ASM database, and specify the full path to
a location where the file is available. The file may be available on the target
host or on a shared location accessible from the target hosts.
If you do not want to use a response file, then select Do not use response file.
e. Click Next.
a. In the Shared Storage Configuration section, provide details about the storage
devices and click Next. Specify the partition name and the mount location,
and select the mount format and a storage device for storing data. While
partition name is the path to the location where the device is installed, mount
location is the mount point that represents the partition location.
While configuring the storage device, at a minimum, you must have a
partition for at least OCR, Voting Disk, and data files. You cannot designate
the same storage device to multiple partitions.
Oracle recommends designating the OCR and the OCR Mirror devices to
different partitions. Similarly, Oracle recommends designating the Voting
Disk, Voting Disk1, and Voting Disk2 to different partitions.
Before clicking Next, do the following:
- If you want to clear the data on selected raw devices before creating and
configuring the cluster, then select Clear raw devices.
- If you have configured only for a few storage devices, then select Do not
provision storage for others that you do not want to provision.
- Specify the ASM disk string to be used.
b. In the Options section, select the ASM redundancy mode. The default is
None, which requires 7 GB of space. While Normal requires 16 GB of space,
High requires 32 GB.
Note:
If the configuration steps are disabled in the Deployment Procedure, then you
will not see this page.
b. In the Sysctl File Configuration section, select Configure Sysctl file if you
want to configure the sysctl.conf file. Specify the mode of editing the system
configuration file and the location of the reference system configuration file
used for modifying the kernel parameters.
The default mode is append. You can, however, select edit to modify it, or replace to
replace the current sysctl.conf file.
Ensure that the reference file you specify is available in a shared location
accessible by the Oracle Management Service.
9. On the Review page, review the details you have provided for provisioning
Oracle RAC, and click Submit. If the details you provided seem to be missing on
this page, then see the workaround described in Troubleshooting Issues.
10. After the Deployment Procedure ends successfully, instrument the database to
collect configuration information.
Bonding Device Name: Specify the name of the bond to be created. For example, bond0.

Subnet Mask: Specify the subnet mask for the IP address. For example, 255.255.255.0.

Default Gateway: Specify the default gateway for the bonding device. For example, 10.1.2.3.

DNS Servers: Specify the Domain Name Server (DNS) list for the bonding device. For multiple DNS servers, the values should be comma-separated. Default values are picked up from the /etc/resolv.conf file. Entries provided here will be appended.

Slave Devices List: Specify the list of slave devices for the bonding device. For multiple slave devices, the values should be comma-separated. For example, eth1,eth2,eth3.
Bonding Mode: Specifies one of the four policies allowed for the bonding module. Acceptable values for this parameter are:
• 0 (Balance-rr)— Sets a round-robin policy for fault tolerance and load balancing. Transmissions are received and sent out sequentially on each bonded slave interface, beginning with the first one available.
• 1 (Active-backup)— Sets an active-backup policy for fault tolerance. Transmissions are received and sent out through the first available bonded slave interface. Another bonded slave interface is only used if the active bonded slave interface fails.
• 2 (Balance-xor)— Sets an XOR (exclusive-or) policy for fault tolerance and load balancing. Using this method, the interface matches up the incoming request's MAC address with the MAC address for one of the slave NICs. Once this link is established, transmissions are sent out sequentially, beginning with the first available interface.
• 3 (Broadcast)— Sets a broadcast policy for fault tolerance. All transmissions are sent on all bonded slave interfaces.

Domain Name: Specify the domain name for the assigned host name. For example, foo.com.

Primary Slave Device: Specify the interface name, such as eth0, of the primary device. The primary device is the first of the bonding interfaces to be used and is not abandoned unless it fails. This setting is particularly useful when one NIC in the bonding interface is faster and, therefore, able to handle a bigger load. This setting is only valid when the bonding interface is in active-backup mode.

ARP Interval: Specify (in milliseconds) how often ARP monitoring occurs. If you use this setting while in mode 0 or 2 (the two load-balancing modes), the network switch must be configured to distribute packets evenly across the NICs. The value is set to 0 by default, which disables it.

MII Interval: Specify (in milliseconds) how often MII link monitoring occurs. This is useful if high availability is required because MII is used to verify that the NIC is active; you can use the MII tool to verify that the driver for a particular NIC supports MII. If you use a bonded interface for high availability, the module for each NIC must support MII. Setting the value to 0 (the default) turns this feature off. When configuring this setting, a good starting point for this parameter is 100.

MII Interval Down Delay: Specify (in milliseconds) how long to wait after link failure before disabling the link. The value must be a multiple of the value specified in the miimon parameter. The value is set to 0 by default, which disables it.

MII Interval Up Delay: Specify (in milliseconds) how long to wait before enabling a link. The value must be a multiple of the value specified in the miimon parameter. The value is set to 0 by default, which disables it.

NTP Server: Specify the NTP server for the assigned host name. For example, 1.2.3.4.
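To illustrate how these elements map to the operating system, a bond created with mode 1 (Active-backup), an MII interval of 100, a primary slave of eth1, and eth1,eth2 as the slave devices would typically result in network script entries similar to the following on Oracle Linux or Red Hat Enterprise Linux. The file names, addresses, and option values shown here are illustrative only:

/etc/sysconfig/network-scripts/ifcfg-bond0:
  DEVICE=bond0
  IPADDR=10.1.2.10
  NETMASK=255.255.255.0
  GATEWAY=10.1.2.3
  BONDING_OPTS="mode=1 miimon=100 primary=eth1"
  ONBOOT=yes

/etc/sysconfig/network-scripts/ifcfg-eth1:
  DEVICE=eth1
  MASTER=bond0
  SLAVE=yes
  ONBOOT=yes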
# Node information
# Public Node Name   Private Node Name   Private IP (Optional)   Virtual Host Name   Virtual IP (Optional)
2. Select the new normal named credential for both Normal user and Privileged user.
3. Click Submit.
When the database provisioning process reaches a step that requires root
credentials, the process stops, and you must run the command line manually. To do
this, set the environment to $AGENT_HOME, and then manually run the command
line copied from the Instructions field for each of the three steps that require root
credentials.
4. Once the command line has been run manually as the root user for each step, click
Confirm. The database provisioning process then continues until it completes.
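For example, when the procedure pauses at a step that requires root, the flow on the target host might look like the following. The agent home path shown is only an illustration, and the actual command to run is always the one displayed in the Instructions field for that step:

$ export AGENT_HOME=/u01/app/agent/agent_13.2.0.0.0
$ su -
(as root, run the command line copied from the Instructions field of the paused step, then return to Cloud Control and click Confirm)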
This chapter explains how you can extend and scale up an existing Oracle RAC stack
(Oracle Clusterware, Oracle ASM, Oracle RAC database), in a single click using Oracle
Enterprise Manager Cloud Control (Cloud Control). In particular, this chapter covers
the following:
Step 2: Running the Deployment Procedure. Run the Deployment Procedure to successfully extend an existing Oracle RAC. To extend Oracle RAC, follow the steps explained in Procedure for Extending Oracle Real Application Clusters.
• Ensure that you meet the prerequisites described in Setting Up Your Infrastructure.
• Ensure that operating system users such as oracle and crsuser are available on all
nodes of the cluster. These users must be a part of the relevant operating system
groups such as dba and oinstall.
• Ensure that the credentials being used to run this operation along with the group
ID are the same on all nodes of the selected cluster.
• If you have PAM/LDAP enabled in your environment, then ensure that the target
agents are configured with PAM/LDAP. For more information, see My Oracle
Support note 422073.1.
• Ensure that you use an operating system user that has the privileges to run the
Deployment Procedure, and that can switch to root user and run all commands on
the target hosts. For example, commands such as mkdir, ls, and so on.
If you do not have the privileges to do so, that is, if you are using a locked account,
then request your administrator (a designer) to either customize the Deployment
Procedure to run it as another user or ignore the steps that require special
privileges.
For example, user account A might have the root privileges, but you might use user
account B to run the Deployment Procedure. In this case, you can switch from user
account B to A by customizing the Deployment Procedure.
For information about customization, see Customizing Deployment Procedures.
• Ensure that the shared storage used for the existing cluster nodes is accessible to the nodes you want to add.
• Ensure that the umask value on the target host is 022. To verify this, run the
following command:
$ umask
Depending on the shell you are using, you can also verify this value in /etc/profile, /etc/bashrc, or /etc/csh.cshrc.
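As a quick pre-check on each target host, you can confirm the umask value and, if sudo is the mechanism used to switch to root in your environment, confirm the privileges of the submitting user. Both commands are standard and shown only as an illustration:
$ umask            # expected output: 022 (or 0022)
$ sudo -l          # lists the commands this user may run as root, if sudo is used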
1. From the Enterprise menu, select Provisioning and Patching, then select
Database Provisioning.
a. In the Select Real Application Clusters (RAC) section, select the Oracle RAC
you want to extend. The associated clusterware and Automatic Storage
Management (ASM) also get extended if they do not already exist.
You can use the Search section to search for a particular Oracle RAC. From
the Search list, select the target type based on which you want to search, and
click Go. You can use wildcards such as % and *.
Note:
If the cluster database you want to extend does not appear on this page, then:
• Specify the Clusterware home location for the cluster target in Cloud
Control. In Cloud Control, click the Targets menu, and then click All
Targets. On the All Targets page, from the Search list, select Cluster and
click Go. From the results table, select the cluster for which you want to
specify the clusterware home location. On the Cluster Home page, click
Monitoring Configuration. On the Configure Target page, specify the
clusterware home location and click Update.
b. In the Reference Host Options section, from the Reference Host list, select a
host that you want to use as the primary host for performing this operation.
The reference host is the host on which the clone archives are created; the archives are then transferred to the new target nodes being added.
For Working directory, specify the full path to an existing directory on the
selected host that can be used for staging files for cloning. If the directories
you specify do not exist on the target hosts, then they will be created by the
Deployment Procedure. Ensure that the working directory is NOT shared on
the nodes.
For Files To Exclude, for each Oracle home, specify the files you want to
exclude while performing this operation. Note that any file or folder
corresponding to the regular expressions provided here will be excluded.
c. In the Oracle Home Shared Storage Options section, select the Oracle home
locations that are on shared storage.
d. In the Select New Nodes section, click Add to add new nodes that you want
to include to the selected Oracle RAC. After adding the node, specify the
virtual node name for each new node and verify the values displayed by
default.
Note:
Ensure that you select nodes that are monitored by Oracle Management
Agents 12c Release 1 (12.1.0.1) or higher.
Optionally, you can click Show Options to specify Private Node Name,
Private IP, Virtual IP, and Working Directory. Private Node Name and
Private IP are required only if you want to set up a private network as part of
the procedure. Virtual Node Name and Virtual IP are required only if they
are fixed and not DHCP-based. If the node is already part of the Oracle RAC
system, it will be ignored. If the node is part of the Oracle Clusterware, the
private network and virtual host information will be ignored. For Working
Directory, ensure that the location you specify is NOT shared on the nodes.
If you already have these details stored in a cluster configuration file, then click
Import From File to select that cluster configuration file. This file typically
contains information about the new nodes to be added. It may also include
information about the private node name, private IP address, virtual host
name, and virtual IP address to which the Oracle RAC should be extended.
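For illustration, a cluster configuration file of the kind referenced above might contain entries such as the following; the host names and addresses shown are hypothetical:
# Node information
# Public Node Name    Private Node Name (Optional)  Private IP (Optional)  Virtual Host Name      Virtual IP
node3.example.com     node3-priv                    192.168.10.3           node3-vip.example.com  10.0.0.13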
Note:
You can optionally override these preferred credentials. For example, if you
have added two destination hosts where the users are A and B, then you can
choose to override the preferred credentials with different credentials for each
of the hosts. Similarly, if the destination hosts have the same credentials,
which may be different from the preferred credentials, then you can override
the preferred credentials with the same credentials for all hosts.
The credentials you specify here are used by the Deployment Procedure to run
the provisioning operation. If this environment is secure and has locked
accounts, then make sure that:
• The credentials you specify here have the necessary privileges to switch to
the locked account for performing the provisioning operation.
From the Host Credentials list, select Different for each Oracle Home if you
want to use different operating system credentials for each Oracle home, or
Same for all Oracle Homes if you want to use the same set of credentials for
all Oracle homes. Depending on the selection you make, specify the
credentials. Ensure that the users belong to the same group (dba/oinstall).
Note:
If you are using vendor clusterware, then ensure that root and the operating
system users, such as oracle and crsuser, owning the clusterware and
various Oracle homes are a part of the operating system groups required by
the vendor clusterware.
For example, if your system uses High Availability Cluster Multiprocessing
(HACMP) clusterware, then create or check for the existence of the group
hagsuser. Ensure that the relevant operating system users and root user are
members of this group.
For more information, refer to the Oracle Clusterware and Oracle Real Application
Clusters Installation and Configuration Guide.
h. Click Review.
4. On the Review page, review the details you have provided for extending Oracle
RAC, and click Submit.
Note:
If the Deployment Procedure fails, then review log files described in
Reviewing Log Files.
Note:
When you run the Deployment Procedure on Linux Itanium x64, if the CVU
Run to verify shared locations step fails, then manually fix it before proceeding
to the next step. No automated fix-ups are available for this platform.
This chapter describes how you can delete or scale down Oracle Real Application
Clusters (Oracle RAC). In particular, this chapter covers the following:
• Getting Started with Deleting or Scaling Down Oracle Real Application Clusters
Table 11-1 Getting Started with Deleting or Scaling Down an Existing Oracle RAC
• Oracle CRS, Oracle ASM, and Oracle Database homes owned by the same or
different users.
• Separate Oracle CRS, Oracle ASM, and Oracle Database homes present on a shared
storage, which is shared by all member nodes.
• Nodes that were reimaged or shut down, where the existing configuration has to be cleaned up to remove all references to the node in Cloud Control.
• Ensure that you meet the prerequisites described in Setting Up Your Infrastructure.
• Ensure that operating system users such as oracle and crsuser are available on all
nodes of the cluster. These users must be a part of the relevant operating system
groups such as dba and oinstall.
• Ensure that the credentials being used to run this operation along with the group
ID are the same on all nodes of the selected cluster.
• If you have PAM/LDAP enabled in your environment, then ensure that the target
agents are configured with PAM/LDAP. For more information, see My Oracle
Support note 422073.1.
• Ensure that you use an operating system user that has the privileges to run the
Deployment Procedure, and that can switch to root user and run all commands on
the target hosts. For example, commands such as mkdir, ls, and so on.
If you do not have the privileges to do so, that is, if you are using a locked account,
then request your administrator (a designer) to either customize the Deployment
Procedure to run it as another user or ignore the steps that require special
privileges.
For example, user account A might have the root privileges, but you might use user
account B to run the Deployment Procedure. In this case, you can switch from user
account B to A by customizing the Deployment Procedure.
For information about customization, see Customizing Deployment Procedures.
1. From the Enterprise menu, select Provisioning and Patching, then select
Database Provisioning.
a. In the Select Cluster section, click the torch icon for Select Cluster and select
an Oracle Clusterware instance that you want to delete. Along with the
selected Oracle Clusterware, the associated Oracle RAC and ASM instances
will also be deleted. The table displays details about the member nodes that
are part of the selected Oracle Clusterware.
Note:
When you use the torch icon to search for Oracle Clusterware, if you do not
find the Oracle Clusterware that you are looking for, then from the tip
mentioned below the table, click here to manually provide details about that
clusterware and search for it.
This is particularly useful when you want to delete partially-provisioned or
configured Oracle Clusterware instances because, by default, when you click
the torch icon, only the fully-provisioned clusterware instances appear for
selection. In this case, to search, select, and delete the partially-provisioned
instances, click here, and in the Enter Cluster Details window, manually
provide details about the cluster node that contains the partially-provisioned
instance and click OK. You can then select the host that appears in the Select Nodes to Delete section and mark it for deletion.
b. In the Reference Host Options section, from the Cluster Node list, select a
node that you want to use as the primary node for all cleanup operations.
For Working directory, specify the full path to an existing directory on the
selected node that can be used for staging files temporarily.
c. In the Select Nodes to Delete section, click Mark all to select all the nodes for
deletion. On clicking Mark all, you should see a cross icon against all the
nodes in the Deletion column. These cross icons indicate that the nodes have
been selected for deletion.
Note:
You can optionally override these preferred credentials. For example, if you
have added two destination hosts where the users are A and B, then you can
choose to override the preferred credentials with different credentials for each
of the hosts. Similarly, if the destination hosts have the same credentials,
which may be different from the preferred credentials, then you can override
the preferred credentials with the same credentials for all hosts.
The credentials you specify here are used by the Deployment Procedure to run
the provisioning operation. If this environment is secure and has locked
accounts, then make sure that:
• The credentials you specify here have the necessary privileges to switch to
the locked account for performing the provisioning operation.
From the Host Credentials list, select Different for each Oracle Home if you
want to use different operating system credentials for each Oracle home, or
Same for all Oracle Homes if you want to use the same set of credentials for
all Oracle homes. Depending on the selection you make, specify the
credentials. Ensure that the users belong to the same group (dba/oinstall).
f. Click Review.
4. On the Review page, review the details you have provided for deleting Oracle
RAC, and click Submit.
Note:
If the Deployment Procedure fails, then review log files described in
Reviewing Log Files.
• Prerequisites for Scaling Down Oracle RAC by Deleting Some of Its Nodes
• Procedure for Scaling Down Oracle RAC by Deleting Some of Its Nodes
11.4.1 Prerequisites for Scaling Down Oracle RAC by Deleting Some of Its Nodes
• Ensure that you meet the prerequisites described in Setting Up Your Infrastructure.
• Ensure that operating system users such as oracle and crsuser are available on all
nodes of the cluster. These users must be a part of the relevant operating system
groups such as dba and oinstall.
• Ensure that the credentials being used to run this operation along with the group
ID are the same on all nodes of the selected cluster.
• If you have PAM/LDAP enabled in your environment, then ensure that the target
agents are configured with PAM/LDAP. For more information, see My Oracle
Support note 422073.1.
• Ensure that you use an operating system user that has the privileges to run the
Deployment Procedure, and that can switch to root user and run all commands on
the target hosts. For example, commands such as mkdir, ls, and so on.
If you do not have the privileges to do so, that is, if you are using a locked account,
then request your administrator (a designer) to either customize the Deployment
Procedure to run it as another user or ignore the steps that require special
privileges.
For example, user account A might have the root privileges, but you might use user
account B to run the Deployment Procedure. In this case, you can switch from user
account B to A by customizing the Deployment Procedure.
For information about customization, see Customizing Deployment Procedures.
11.4.2 Procedure for Scaling Down Oracle RAC by Deleting Some of Its Nodes
To scale down Oracle RAC by deleting one or more nodes that are part of it, follow
these steps:
1. From the Enterprise menu, select Provisioning and Patching, then select
Database Provisioning.
a. In the Select Cluster section, click the torch icon for Select Cluster and select
an Oracle Clusterware instance that you want to scale down. Along with the
selected Oracle Clusterware, the associated Oracle RAC and ASM instances
will also be deleted. The table displays details about the member nodes that
are part of the selected Oracle Clusterware.
Note:
When you use the torch icon to search for Oracle Clusterware, if you do not
find the Oracle Clusterware that you are looking for, then you can manually
provide details about that clusterware and search for it. To do so, from the tip
mentioned below the table, click here.
b. In the Reference Host Options section, from the Cluster Node list, select a
node that you want to use as the primary node for all cleanup operations.
For Working directory, specify the full path to an existing directory on the
selected node that can be used for staging files.
c. In the Select Nodes to Delete section, select the nodes you want to delete, and
click Mark for delete. On clicking Mark for delete, you should see a cross
icon against the selected nodes in the Deletion column. These cross icons
indicate that the nodes have been selected for deletion.
If you do not see the nodes that are part of the cluster, then click Add more nodes to add those nodes, so that nodes that do not appear as targets in Cloud Control can also be selected for deletion.
If you want to deselect a node, click Unmark. If you want to select all nodes at once, click Mark all, and if you want to deselect all nodes, click Unmark all.
Note:
You can optionally override these preferred credentials. For example, if you
have added two destination hosts where the users are A and B, then you can
choose to override the preferred credentials with different credentials for each
of the hosts. Similarly, if the destination hosts have the same credentials,
which may be different from the preferred credentials, then you can override
the preferred credentials with the same credentials for all hosts.
The credentials you specify here are used by the Deployment Procedure to run
the provisioning operation. If this environment is secure and has locked
accounts, then make sure that:
• The credentials you specify here have the necessary privileges to switch to
the locked account for performing the provisioning operation.
From the Host Credentials list, select Different for each Oracle Home if you
want to use different operating system credentials for each Oracle home, or
Same for all Oracle Homes if you want to use the same set of credentials for
all Oracle homes. Depending on the selection you make, specify the
credentials. Ensure that the users belong to the same group (dba/oinstall).
f. Click Review.
4. On the Review page, review the details you have provided for deleting or scaling
down Oracle RAC, and click Submit.
This chapter explains how you can provision Oracle Database Replay Client using
Oracle Enterprise Manager Cloud Control (Cloud Control). In particular, this chapter
covers the following:
Table 12-1 Getting Started with Provisioning Oracle Database Replay Client
Step 3: Running the Deployment Procedure. Run the Deployment Procedure to successfully provision Oracle Database Replay Client.
• To clone an existing Oracle Database Replay Client, follow the steps explained in Procedure for Cloning a Running Oracle Database Replay Client.
• To provision Oracle Database Replay Client using a gold image, follow the steps explained in Procedure for Provisioning an Oracle Database Replay Client Using Gold Image.
• To provision a standalone Oracle Database Replay Client, follow the steps explained in Procedure for Provisioning an Oracle Database Replay Client Using Installation Binaries.
• Ensure that you meet the prerequisites described in Setting Up Your Infrastructure.
• Compare the configuration of the source and target hosts and ensure that they have the same configuration. If the configurations are different, then contact your system administrator and fix the inconsistencies before running the Deployment Procedure.
• If you have PAM/LDAP enabled in your environment, then ensure that the target
agents are configured with PAM/LDAP. For more information, see My Oracle
Support note 422073.1.
• Ensure that you use an operating system user that has the privileges to run the
Deployment Procedure, and that can switch to root user and run all commands on
the target hosts. For example, commands such as mkdir, ls, and so on.
If you do not have the privileges to do so, that is, if you are using a locked account,
then request your administrator (a designer) to either customize the Deployment
Procedure to run it as another user or ignore the steps that require special
privileges.
For example, user account A might have the root privileges, but you might use user
account B to run the Deployment Procedure. In this case, you can switch from user
account B to A by customizing the Deployment Procedure.
For information about customization, see Customizing Deployment Procedures.
• Ensure that the umask value on the target host is 022. To verify this, run the
following command:
$ umask
Depending on the shell you are using, you can also verify this value in /etc/profile, /etc/bashrc, or /etc/csh.cshrc.
1. From the Enterprise menu, select Provisioning and Patching, then select
Database Provisioning.
2. In the Database Procedures page, select the Provision Oracle Database Client
Deployment Procedure and click Launch. The Oracle Database Client
provisioning wizard is launched.
For Files To Exclude, specify the files you want to exclude while performing this operation, for example, files with a particular extension such as *.trc. Note that any file or folder corresponding to the regular expressions provided here will be excluded.
In the Source Host Credentials section, select Use Preferred Credentials to
use the credentials stored in the Management Repository. Select Override
Preferred Credentials to specify other credentials.
b. In the Specify Destination Host Settings section, click Add and select the
target hosts on which you want to clone the existing instance of Oracle
Database Replay Client.
Note:
On clicking Add, a window appears with a list of suitable hosts. If you do not
see your desired host, then select Show All Hosts and click Go to view all
other hosts.
By default, Oracle Base, Oracle Home, and Working Directory are prefilled
with sample values. Edit them and specify values that match with your
environment and standards. If the directories you specify do not exist on the
target hosts, then they will be created by the Deployment Procedure.
From the Credentials list, retain the default selection, that is, Preferred, so
that the preferred credentials stored in the Management Repository can be
used. Credentials here refer to operating system credentials.
Note:
You can optionally override these preferred credentials. For example, if you
have added two destination hosts where the users are A and B, then you can
choose to override the preferred credentials with different credentials for each
of the hosts. Similarly, if the destination hosts have the same credentials,
which may be different from the preferred credentials, then you can override
the preferred credentials with the same credentials for all hosts.
The credentials you specify here are used by the Deployment Procedure to run
the provisioning operation. If this environment is secure and has locked
accounts, then make sure that:
• The credentials you specify here have the necessary privileges to switch to
the locked account for performing the provisioning operation.
If you have selected multiple hosts, then from the Path list, select Same for all
hosts if you want to use the same path across hosts, or select Different for
each host if you want to use different paths for each host.
Note:
If you select Same for all hosts, then ensure that the Oracle home and the user
are present on all the hosts.
If you want to customize the host settings, then click Customize Host
Settings. For example, you can specify the Management Agent home
credentials, a name for your installation, or an alternate host name instead of
the first host name found on the system.
e. Click Continue.
4. On the Review page, review the details you have provided for provisioning an
Oracle Database Replay Client, and click Submit.
• Prerequisites for Provisioning an Oracle Database Replay Client Using Gold Image
• Procedure for Provisioning an Oracle Database Replay Client Using Gold Image
12.3.1 Prerequisites for Provisioning an Oracle Database Replay Client Using Gold
Image
Before running the Deployment Procedure, meet the following prerequisites:
• Ensure that you meet the prerequisites described in Setting Up Your Infrastructure.
• Ensure that the gold image is available either in the Software Library or in a shared,
staging location.
• If you have PAM/LDAP enabled in your environment, then ensure that the target
agents are configured with PAM/LDAP. For more information, see My Oracle
Support note 422073.1.
• Ensure that you use an operating system user that has the privileges to run the
Deployment Procedure, and that can switch to root user and run all commands on
the target hosts. For example, commands such as mkdir, ls, and so on.
If you do not have the privileges to do so, that is, if you are using a locked account,
then request your administrator (a designer) to either customize the Deployment
Procedure to run it as another user or ignore the steps that require special
privileges.
For example, user account A might have the root privileges, but you might use user
account B to run the Deployment Procedure. In this case, you can switch from user
account B to A by customizing the Deployment Procedure.
For information about customization, see Customizing Deployment Procedures.
• Ensure that you use an operating system user that has write permission on the
staging areas used for placing software binaries of Oracle Database Replay Client.
Deployment Procedures allow you to use staging locations for quick file-transfer of
binaries and prevent high traffic over the network. While providing a staging
location, ensure that the operating system user you use has write permission on
those staging locations.
• Ensure that the umask value on the target host is 022. To verify this, run the
following command:
$ umask
Depending on the shell you are using, you can also verify this value in /etc/profile, /etc/bashrc, or /etc/csh.cshrc.
12.3.2 Procedure for Provisioning an Oracle Database Replay Client Using Gold Image
To provision a gold image of Oracle Database Replay Client from the software library,
follow these steps:
1. From the Enterprise menu, select Provisioning and Patching, then select
Database Provisioning.
2. In the Database Procedures page, select the Provision Oracle Database Client
Deployment Procedure and click Launch. The Oracle Database Client
provisioning wizard is launched.
3. On the Deployment Procedure Manager page, in the Procedures subtab, from the
table, select Oracle Database Replay Client Provisioning. Then click Schedule
Deployment.
Cloud Control displays the Select Source and Destination page of the Deployment
Procedure.
Note:
If you do not see the required component in the Software Library, then follow
the workaround described in Troubleshooting Issues.
b. In the Specify Destination Host Settings section, click Add and select the
target hosts on which you want to install the gold image of Oracle Database
Replay Client.
Note:
On clicking Add, a window appears with a list of suitable hosts. If you do not
see your desired host, then select Show All Hosts and click Go to view all
other hosts.
By default, Oracle Base, Oracle Home, and Working Directory are prefilled
with sample values. Edit them and specify values that match with your
environment and standards. If the directories you specify do not exist on the
target hosts, then they will be created by the Deployment Procedure.
From the Credentials list, retain the default selection, that is, Preferred, so
that the preferred credentials stored in the Management Repository can be
used. Credentials here refer to operating system credentials.
Note:
You can optionally override these preferred credentials. For example, if you
have added two destination hosts where the users are A and B, then you can
choose to override the preferred credentials with different credentials for each
of the hosts. Similarly, if the destination hosts have the same credentials,
which may be different from the preferred credentials, then you can override
the preferred credentials with the same credentials for all hosts.
The credentials you specify here are used by the Deployment Procedure to run
the provisioning operation. If this environment is secure and has locked
accounts, then make sure that:
• The credentials you specify here have the necessary privileges to switch to
the locked account for performing the provisioning operation.
If you have selected multiple hosts, then from the Path list, select Same for all
hosts if you want to use the same path across hosts, or select Different for
each host if you want to use different paths for each host.
Note:
If you select Same for all hosts, then ensure that the Oracle home and the user
are present on all the hosts.
If you want to customize the host settings, then click Customize Host
Settings. For example, you can specify the Management Agent home
credentials, a name for your installation, or an alternate host name instead of
the first host name found on the system.
e. Click Continue.
5. On the Review page, review the details you have provided for provisioning an
Oracle Database Replay Client, and click Submit.
Note:
The Oracle Database Replay Client version to be used for replaying workload
must be the same version as the version of the test database on which the
workload has to be replayed. Oracle Database Replay Client is supported in
Oracle Database 10g Release 2 (10.2.0.4) and higher. While you can use
archived software binaries for installing Oracle Database Client 11g Release 1
(11.1.0.6) and Oracle Database Client 11g Release 2, for test database versions
10.2.0.4, 10.2.0.5, and 11.1.0.7, you must create a gold image of the respective
versions of Oracle Database Replay Client homes and use the same.
• Ensure that you meet the prerequisites described in Setting Up Your Infrastructure.
Note:
If you want to create a component for the software binaries of Oracle Database
Replay Client, then before you access the Software Library, see My Oracle
Support note 815567.1. This note explains the different requirements for each
OS platform prior to using the media with Cloud Control Deployment
Procedure.
Note:
If you want to create a component for the software binaries of Oracle Database
Replay Client, do not save the shiphome or component to the Components
folder in Software Library. Create a new folder in Software Library and then
save the component.
• Ensure that the installation binaries are downloaded, and archived and uploaded
as a component in the Software Library.
• Compare the configuration of the source and target hosts and ensure that they have
the same configuration. If the configurations are different, then contact your system
administrator and fix the inconsistencies before running the Deployment
Procedure.
To compare the configuration of the hosts, in Cloud Control, click Targets and then
Hosts. On the Hosts page, click the name of the source host to access its Home
page, and then from the Host menu, click Configuration and then click Compare.
• If you have PAM/LDAP enabled in your environment, then ensure that the target
agents are configured with PAM/LDAP. For more information, see My Oracle
Support note 422073.1.
• Ensure that you use an operating system user that has the privileges to run the
Deployment Procedure, and that can switch to root user and run all commands on
the target hosts. For example, commands such as mkdir, ls, and so on.
If you do not have the privileges to do so, that is, if you are using a locked account,
then request your administrator (a designer) to either customize the Deployment
Procedure to run it as another user or ignore the steps that require special
privileges.
For example, user account A might have the root privileges, but you might use user
account B to run the Deployment Procedure. In this case, you can switch from user
account B to A by customizing the Deployment Procedure.
For information about customization, see Customizing Deployment Procedures.
• Ensure that the umask value on the target host is 022. To verify this, run the
following command:
$ umask
Depending on the shell you are using, you can also verify this value in /etc/profile, /etc/bashrc, or /etc/csh.cshrc.
12.4.2 Procedure for Provisioning an Oracle Database Replay Client Using Installation
Binaries
To provision a fresh Oracle Database Replay Client, follow these steps:
1. From the Enterprise menu, select Provisioning and Patching, then select
Database Provisioning.
2. In the Database Procedures page, select the Provision Oracle Database Client
Deployment Procedure and click Launch. The Oracle Database Replay Client
provisioning wizard is launched.
Cloud Control displays the Select Source and Destination page of the Deployment
Procedure.
Note:
If you do not see the required component in the Software Library, then follow
the workaround described in Troubleshooting Issues.
b. In the Specify Destination Host Settings section, click Add and select the
target hosts on which you want to install the Oracle Database Replay Client.
Note:
On clicking Add, a window appears with a list of suitable hosts. If you do not
see your desired host, then select Show All Hosts and click Go to view all
other hosts.
By default, Oracle Base, Oracle Home, and Working Directory are prefilled
with sample values. Edit them and specify values that match with your
environment and standards. If the directories you specify do not exist on the
target hosts, then they will be created by the Deployment Procedure.
From the Credentials list, retain the default selection, that is, Preferred, so
that the preferred credentials stored in the Management Repository can be
used. Credentials here refer to operating system credentials.
Note:
You can optionally override these preferred credentials. For example, if you
have added two destination hosts where the users are A and B, then you can
choose to override the preferred credentials with different credentials for each
of the hosts. Similarly, if the destination hosts have the same credentials,
which may be different from the preferred credentials, then you can override
the preferred credentials with the same credentials for all hosts.
The credentials you specify here are used by the Deployment Procedure to run
the provisioning operation. If this environment is secure and has locked
accounts, then make sure that:
• The credentials you specify here have the necessary privileges to switch to
the locked account for performing the provisioning operation.
If you have selected multiple hosts, then from the Path list, select Same for all
hosts if you want to use the same path across hosts, or select Different for
each host if you want to use different paths for each host.
Note:
If you select Same for all hosts, then ensure that the Oracle home and the user
are present on all the hosts.
If you want to customize the host settings, then click Customize Host
Settings. For example, you can specify the Management Agent home
credentials, a name for your installation, or an alternate host name instead of
the first host name found on the system.
e. Click Continue.
4. On the Review page, review the details you have provided for provisioning an
Oracle Database Replay Client, and click Submit.
• Performs an online backup (or optionally uses an existing backup) of the primary
database control file, datafiles, and archived redo log files
• Transfers the backup pieces from the primary host to the standby host
• Creates other needed files (e.g., initialization, password) on the standby host
• Restores the control file, datafiles, and archived redo log files to the specified
locations on the standby host
• Adds online redo log files and other files to the standby database as needed
2. On the Databases page, you see a list of databases. Select the primary database for
which you want to create a new physical standby database.
3. On the primary database home page, click Availability and then select Add
Standby Database.
Note:
You need to connect to the primary database using SYSDBA credentials, if you
are not yet connected.
If you log in as a user with SYSDBA privileges, you will have access to all
Data Guard functionality, including all monitoring and management features.
If you log in as a non-SYSDBA user, you will have access to monitoring
functions only; features such as standby creation, switchover, and failover will
not be available.
5. On the Add Standby Database page, select Create a new physical standby
database. Click Continue.
Note:
If you choose to create a new physical or logical standby database, Data Guard
checks the following when you click Continue:
• Server parameter file (SPFILE) -- Data Guard requires that all databases in
a configuration use a server parameter file (SPFILE). If the wizard
encounters a primary database that does not use an SPFILE, the wizard
stops and returns a message asking you to create one. You can create one
with a non-default name. Data Guard only requires that the primary
database uses an SPFILE.
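If you are unsure whether the primary database already uses an SPFILE, a quick check from the primary host is shown below. The example assumes operating system authentication as SYSDBA and is illustrative only:
$ sqlplus -s / as sysdba <<'EOF'
SHOW PARAMETER spfile
EOF
A non-empty VALUE column in the output indicates that an SPFILE is in use.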
6. The Add Standby Database wizard opens. It takes you through the following steps:
• Perform a live backup of the primary database, either by using RMAN to copy the database files or by copying the database files via staging areas.
13.2.3 Step 3: Select the Oracle home in which to create the standby database
The standby database can be created in any Oracle home that was discovered by
Oracle Enterprise Manager. Only Oracle homes on hosts that match the operating
system of the primary host are shown. You must select a discovered Oracle home and
provide a unique instance name for the standby database. Standby host credentials are
required to continue.
In the Listener Configuration section, specify the name and port of the listener that
will be used for the standby database. If a new name and port are specified that are
not in use by an existing listener, a new listener using the specified port will be
created.
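To check whether a listener with the name or port you intend to specify already exists on the standby host, you can run commands such as the following; the listener name and port number are illustrative:
$ lsnrctl status LISTENER_STBY      # hypothetical listener name
$ netstat -an | grep 1522           # hypothetical listener port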
Click Next.
You can click Cancel to terminate the current process and begin again at the
introductory page of the Add Standby Database wizard.
2. On the Databases page, you see a list of databases. Select the primary database for
which you want to create a new physical standby database.
3. On the primary database home page, click Availability and then select Add
Standby Database.
Note:
You need to connect to the primary database using SYSDBA credentials, if you
are not yet connected.
If you log in as a user with SYSDBA privileges, you will have access to all
Data Guard functionality, including all monitoring and management features.
If you log in as a non-SYSDBA user, you will have access to monitoring
functions only; features such as standby creation, switchover, and failover will
not be available.
5. On the Add Standby Database page, select Create a new physical standby
database. Click Continue.
Note:
If you choose to create a new physical or logical standby database, Data Guard
checks the following when you click Continue:
• Server parameter file (SPFILE) -- Data Guard requires that all databases in
a configuration use a server parameter file (SPFILE). If the wizard
encounters a primary database that does not use an SPFILE, the wizard
stops and returns a message asking you to create one. You can create one
with a non-default name. Data Guard only requires that the primary
database uses an SPFILE.
6. On the database page, in the Standby Databases section, click Add Standby
Database.
7. The following steps assume a broker configuration already exists with one
primary database and one physical standby database, and creates a new logical
standby database. It shows how the wizard takes you through additional steps to
select the Oracle home for the database and to copy datafiles to the standby
database.
The Add Standby Database wizard takes you through the following steps:
13.3.3 Step 3: Select the Oracle home in which to create the standby database
The standby database can be created in any Oracle home that was discovered by
Oracle Enterprise Manager. Only Oracle homes on hosts that match the operating
system of the primary host are shown. You must select a discovered Oracle home and
provide a unique instance name for the standby database. Standby host credentials are
required to continue.
Choose the method by which you want to make the primary database backup files
accessible to the standby host. The two options are:
– Transfer files from the primary host working directory to a standby host
working directory
– Directly access the primary host working directory location from the standby
host using a network path name
– Keep file names and locations the same as the primary database
2. On the Databases page, you see a list of databases. Select the primary database for which you want to manage an existing standby database.
3. On the primary database home page, click Availability and then select Add
Standby Database.
Note:
You need to connect to the primary database using SYSDBA credentials, if you
are not yet connected.
If you log in as a user with SYSDBA privileges, you will have access to all
Data Guard functionality, including all monitoring and management features.
If you log in as a non-SYSDBA user, you will have access to monitoring
functions only; features such as standby creation, switchover, and failover will
not be available.
5. In the Add Standby Database page, select Manage an Existing Standby Database
with Data Guard Broker.
6. Select an existing standby database that you want to be managed by the Data
Guard broker. The database you choose must have been created from the primary
database and must be configured to function as a standby database.
All discovered databases in your environment (both RAC and non-RAC databases)
will be shown in the list.
Click Next.
Note:
You can click Cancel at any time to terminate the current process and begin
again at the introductory page of the Add Standby Database wizard.
7. Enter the login details for the database. You can select Named or New credentials.
For new credentials, create a unique credential. You can set it to Preferred
Credential if you want to use it again.
Click Next.
8. (optional) Change the Standby Archive Location setting of the existing standby
cluster database. Click Next.
9. Review the data for the configuration and standby database. Click Finish.
2. On the Databases page, you see a list of databases. Select the primary database for which you want to manage an existing standby database.
3. On the primary database home page, click Availability and then select Add
Standby Database.
Note:
You need to connect to the primary database using SYSDBA credentials, if you
are not yet connected.
If you log in as a user with SYSDBA privileges, you will have access to all
Data Guard functionality, including all monitoring and management features.
If you log in as a non-SYSDBA user, you will have access to monitoring
functions only; features such as standby creation, switchover, and failover will
not be available.
5. In the Add Standby Database page, select Create a Primary Backup Only.
Click Continue.
6. On the Backup Options page, specify a location on the primary host where a
directory can be created to store the primary database backup files. Click Next.
7. On the Schedule page, specify a name, description, and start time for the backup
job.
You can choose to start the backup immediately or at a later time. If you want to
start at a later time, set the time and date for when the backup should start.
Click Next.
8. Review the data for the configuration and standby database. Click Finish.
Enterprise Manager Cloud Control enables you to clone databases using the Full Clone method, or by using the classic cloning wizard, which enables you to clone databases using RMAN backup, staging areas, or an existing backup.
This chapter outlines the following procedures which you can use to create a database
clone:
1. On the Databases page, you can access the Full Clone database wizard by using any one of the following methods:
• Select the database that you want to clone from the list of the databases
displayed. On the Database home page, click the Database menu, select
Cloning, and then select Create Full Clone.
• Right click on the database target name, select Database, select Cloning, and
then select Create Full Clone.
• Right click on the database target name, select Database, select Cloning, and
then select Clone Management. On the Clone Management page, in the Full
Clone Databases box, click Create.
2. On the Create Full Clone Database: Source and Destination page, do the following:
• In the Source section, launch the credentials selector by selecting the search
icons for SYSDBA Database and Database Host credentials. Click OK.
• In the Data Time Series section, select Now or Prior Point in Time.
If you selected Now, specify or search and select the SYSASM ASM Credentials.
Now refers to Live Clone.
If you selected Prior Point in Time, a carousel of RMAN Backup images appears.
Select the appropriate RMAN backup by clicking Select on the image.
You can create full clones by selecting a backup and optionally modifying the time and SCN to do a point-in-time restore. The Select Time option has its minimum limit set to the selected backup's time and its maximum limit set to the next backup's time. You can modify this value if you have to create a new clone between these two points in time. Similarly, you can do the same for the SCN by selecting the Select SCN option.
– RAC Database
In the Hosts section, specify or select the cluster target. The Oracle Home
location gets specified by default. Next, specify the Database Host
credentials, and the SYSASM ASM credentials.
In the Nodes section, select the cluster and Oracle Home to display one or
more hosts on which the administrator-managed Oracle RAC database will
be created.
Note:
Oracle supports inline patching as part of clones. When the destination home
selected has patches applied such as the latest CPU or PSU, then the cloned
database is automatically brought up with that level.
Click Next.
• In the Database Files Location, specify the location where you want the data
files, temp files, redo log files, and control files to be created. You can select File
System or Automatic Storage Management (ASM), and then specify the
common location for the database files.
The Use Oracle Optimal Flexible Architecture-compliant directory structure
(OFA) option enables you to configure different locations for:
– Data files
– Control file
– Temporary file
• In the Recovery Files location, specify the location where you want the recovery
files, such as archived redo logs, RMAN backups, and other related files to be
created. You can choose to use the fast recovery area by selecting Use Fast Recovery Area. If you do, specify the fast recovery area size. The fast recovery area size defaults to that of the source.
• In the Listener Configuration section, select the listener targets running under
the new Oracle Home, to register the clone database.
• In the Database Credentials section, specify passwords for the SYS, SYSTEM,
and DBSNMP administrative users in the clone database. You can choose to
have the same password for all the three users or a different password for each.
Click Next.
5. On the Create Full Clone Database: Initialization Parameters page, you can
configure the values of various initialization parameters that affect the operation of
the database instance. Select the parameter and click Edit to modify the value of the
parameter.
Click Next.
6. On the Create Full Clone Database: Post Processing page, specify the following:
Note:
The masking definition can be used only when you have a Subset-Masking
license pack.
• Custom Scripts: Specify the custom scripts that need to be executed before and
after the database is created.
For more information on how to store and use custom scripts in the Software
Library, refer to Using Custom Scripts Stored in the Software Library.
• Create Data Profile: This option enables you to automatically take a backup of
the new cloned instance once it is created. When the clone or the Test Master is
refreshed, this section displays the existing profiles created for the database.
You can select the profile that has to be refreshed along with the database.
• Create as Test Master: Select this option if you want to create the cloned database as a Test Master database.
Click Next.
7. On the Create Full Clone Database: Schedule page, specify a unique deployment
procedure instance name. You can choose to start the deployment procedure
immediately or at a later time.
In the Notification Details section, you can choose to set the following notifications:
• Scheduled
• Running
• Action Required
• Suspended
• Succeeded
• Problems
Click Next.
8. On the Create Full Clone Database: Review page, verify the details of the source
database, the data source of the clone, and the destination database.
Click Submit.
# File system locations for the database files
DB_STORAGE_TYPE=FS
DB_FILE_LOC=/scratch/app/oradata
# Fast recovery area settings
FRA_STORAGE_TYPE=FS
FLASH_REC_AREA=/scratch/user/app/fra
FRA_SIZE=4395
# Archive logging, destination listener, and live clone options
ARCHIVE_LOG_MODE=NO
DEST_LISTENER_SELECTION=DEST_DB_HOME
LISTENER_PORT=1526
ENABLE_LIVE_CLONE=true
# Administrative password handling and template staging location
DB_ADMIN_PASSWORD_SAME=true
DATABASE_PASSWORDS=right1
DB_TEMPLATE_STAGE=/tmp
To verify the status of the database clone creation, execute the verb emcli
get_instance_status -instance={instance GUID}.
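If you script this check, a minimal EM CLI session might look like the following. The administrator name and the instance GUID are illustrative; use emcli get_instances to find the GUID of your own procedure instance:
$ emcli login -username=sysman
$ emcli get_instances               # lists deployment procedure instances and their GUIDs
$ emcli get_instance_status -instance=16B15CB29C3F9E6CE040578C96093F61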
1. On the Databases page, you can access the Full Clone database wizard by using any one of the following methods:
• Select the database that you want to clone from the list of the databases
displayed. On the Database home page, click the Database menu, select
Cloning, and then select Create Test Master.
• Right click on the database target name, select Database, select Cloning, and
then select Create Test Master.
• Right click on the database target name, select Database, select Cloning, and
then select Clone Management. On the Clone Management page, in the Test
Master Databases box, click Create.
2. On the Create Test Master Database: Source and Destination page, do the
following:
• In the Source section, launch the credentials selector by selecting the search
icons for SYSDBA Database and Database Host credentials. Click OK.
• In the Data Time Series section, select Now or Prior Point in Time.
If you selected Now, specify or search and select the SYSASM ASM Credentials.
Now refers to Live Clone.
If you selected Prior Point in Time, a carousel of RMAN Backup images appears.
Select the appropriate RMAN backup by clicking Select on the image.
Select a specific time between the selected backup or snapshot and the next (or
latest point of source). The backups or dumps are created at specific intervals
and the test master that is based on these will reflect the production database at
specific points in time. To reflect the latest data in the production database, the
test master needs to be periodically refreshed.
– RAC Database
In the Hosts section, specify or select the cluster target. The Oracle Home
location gets specified by default. Next, specify the Database Host
credentials, and the SYSASM ASM credentials.
In the Nodes section, select the cluster and Oracle Home to display one or
more hosts on which the administrator-managed Oracle RAC database will
be created.
Note:
Oracle supports inline patching as part of clones. When the destination home
selected has patches applied such as the latest CPU or PSU, then the cloned
database is automatically brought up with that level.
Click Next.
• Database Files Location: Specify the location in which the data files, temporary
files, redo log files, and control files will be created.
– File System: The Oracle Database File System creates a standard file system
interface on top of files and directories that are stored in database tables. If
you select this option, you must specify or select the Location of the File
System. You can specify a common location for all the files or you can select
the Use Oracle Optimal Flexible Architecture-compliant directory structure
(OFA) checkbox and specify different locations for data files, redo log files,
and so on.
• Listener Configuration: Click Add to add one or more listener targets that are to
be associated with the new database.
• Database Credentials: Specify the passwords for the administrative users (SYS,
SYSTEM and DBSNMP) of the new database being cloned. You can choose to
use the same password for all the schemas or different passwords for each
schema.
• Click Next.
5. On the Create Test Master Database: Initialization Parameters page, you can
configure the values of various initialization parameters that affect the operation of
the database instance.
Select the parameter and click Edit to modify the value of the parameter. Some
values such as db_block_size cannot be modified.
Click Next.
6. On the Create Test Master Pluggable Database: Post Processing page, in the Data
Masking section, specify the data masking definition that you want to apply after
creating the test master PDB. Data masking masks sensitive data in a database.
For information on how to create a data masking definition, see Oracle Data
Masking and Subsetting Guide. Note that you can apply a data masking definition
only if you have the Subset-Masking license pack.
In the Custom Scripts section, for Pre Script and Post Script, specify the Oracle
Software Library components that contain the scripts that you want to run before,
and after creating the test master PDB respectively. Also, for SQL Script, specify
the SQL scripts that you want to run after creating the test master PDB. For Run As
User, select the user account that you want to use to run the SQL scripts.
Click Next.
7. On the Create Test Master Database: Schedule page, specify the schedule for the
creation of the test master. It can be created immediately (if a physical standby is used, it is created immediately and automatically refreshed), or it can be created at a later date and time and refreshed at specified intervals.
Click Next.
8. On the Create Test Master Database: Review page, review and verify the
information specified and click Submit to create the test master. After the Test
Master has been created, you can refresh the Test Master as required to create a
new version of the profile on which the Test Master is based.
To verify the status of the Test Master database creation, execute the EM CLI verb emcli get_instance_status -instance={instance GUID}.
14.3.1 Creating a Full Clone Pluggable Database Using the Clone Wizard
If you have the 12.1.0.8 Enterprise Manager for Oracle Database plug-in deployed in
your system, you can create a full clone of a PDB using the new Clone PDB Wizard.
2. For View, select Search List. From the View menu, select Expand All.
3. Look for the source CDB (the CDB that the source PDB is a part of) in the list, then
click the name of the PDB that you want to clone.
4. From the Oracle Database menu, select Cloning, then select Create Full Clone.
Alternatively, in Step 3, you can right click the name of the PDB that you want to
clone, select Oracle Database, select Cloning, then select Create Full Clone.
5. On the Source and Destination: Create Full Clone Pluggable Database page, do the following:
• Specify the SYSDBA credentials for the source CDB. You can choose to use the
preferred credentials, use a saved set of named credentials, or specify a new set
of credentials.
• To clone the PDB to a CDB different from the source CDB, select Clone the
Pluggable Database into a different Container Database, then specify the
destination CDB.
• In the Credentials section, specify the destination CDB host credentials. If you
chose to clone the PDB to a CDB different from the source CDB, specify the
SYSDBA credentials for the destination CDB. Also, if the destination CDB is
using Automatic Storage Management (ASM) to manage disk storage, you must
specify the ASM credentials.
6. If you do not need to specify any more details, click Clone. This submits the
deployment procedure to clone a PDB to a CDB that is deployed in a public cloud
setup.
To specify other configuration details, mask data, as well as schedule the cloning
process, click Advanced.
Follow the rest of the steps, if you have selected the Advanced option. The option
to Clone is available on each page.
7. On the Create Full Clone Pluggable Database: Source and Destination page, verify
the details specified, and then click Next.
• In the Database Files Location section, specify the storage location where the
datafiles of the PDB clone must be stored. If the destination CDB is using ASM
to manage disk storage, specify the disk group where the datafiles of the PDB
clone must be stored.
• To ensure that only the source PDB data model definition is cloned (and the
source PDB data is not cloned), select Exclude User Data.
• In the Advanced Configuration section, specify the storage limits for the
maximum size of the PDB clone, and the maximum size of a shared tablespace
within the PDB clone. By default, no limits are placed on the values for these
attributes.
• In the Miscellaneous section, select the logging option that you want to use for
the tablespaces created within the PDB clone.
Click Next.
9. On the Create Full Clone Pluggable Database: Post Processing page, do the
following:
• In the Data Masking section, specify the data masking definition that you want
to apply after cloning the PDB. Data masking masks sensitive data in a
database.
For information on how to create a data masking definition, see Oracle Data
Masking and Subsetting Guide. Note that you can apply a data masking definition
only if you have the Subset-Masking license pack.
• In the Custom Scripts section, for Pre Script and Post Script, specify the Oracle
Software Library components that contain the scripts that you want to run
before cloning, and after cloning the PDB respectively. Also, for SQL Script,
specify the SQL scripts that you want to run after cloning the PDB. For Run As
User, select the user account that you want to use to run the SQL scripts.
Click Next.
10. On the Create Full Clone Pluggable Database: Schedule page, specify an instance
name for the cloning deployment procedure. Also, specify the point in time when
you want the cloning procedure to begin.
In the Notification section, select the deployment procedure states for which you
want to receive e-mail notifications. For example, if you select Scheduled and
Succeeded for Status for Notification, you will receive e-mail notifications when
the cloning deployment procedure is scheduled, and when it succeeds.
Click Next.
11. On the Create Full Clone Pluggable Database: Review page, review all the details
you provided. If you want to edit certain details, click Back to navigate to the
required page.
Click Clone to submit the deployment procedure to create a full clone of the source
PDB.
SRC_CDB_CREDS=NC_HOST_SYC:SYCO
SRC_WORK_DIR=/tmp/source
DEST_HOST_CREDS=NC_SLCO_SSH:SYS
DEST_LOCATION=/scratch/sray/app/sray/cdb_tm/HR_TM_PDB6
DEST_CDB_TARGET=cdb_tm
DEST_CDB_TYPE=oracle_database
DEST_CDB_CREDS=NC_HOST_SYC:SYCO
DEST_PDB_NAME=HR_TM_PDB6
Note:
If the destination PDB and the source PDB are in different CDBs, and both
CDBs are on Oracle Cloud, ensure that the source PDB is in read-write
mode. This is necessary since a database link is created in the destination
CDB for cloning the PDB, and a temporary user is created in the source PDB
for using the database link. If there is an existing database link in the
destination CDB that connects to the source PDB, then use the parameter
EXISTING_DB_LINK_NAME to provide the database link name in the
properties file.
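For example, if a database link named PDB_SRC_LINK (an illustrative name) already exists in
the destination CDB and connects to the source PDB, you could add the following line to the
properties file:
EXISTING_DB_LINK_NAME=PDB_SRC_LINK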
14.4.1 Creating a Test Master Pluggable Database Using the Clone Wizard
If you have the 12.1.0.8 Enterprise Manager for Oracle Database plug-in deployed in
your system, you can create a test master PDB from a source PDB, using the new
Clone PDB Wizard.
To create a test master PDB from a source PDB, follow these steps:
2. For View, select Search List. From the View menu, select Expand All.
3. Look for the source CDB (the CDB that the source PDB is a part of) in the list, then
click the name of the PDB from which you want to create a test master PDB.
4. From the Oracle Database menu, select Cloning, then select Create Test Master.
Alternatively, in Step 3, you can right click the name of the PDB from which you
want to create a test master PDB, select Oracle Database, select Cloning, then
select Create Test Master.
5. On the Create Test Master Pluggable Database: Source and Destination page, do
the following:
• Specify the SYSDBA credentials for the source CDB. You can choose to use the
preferred credentials, use a saved set of named credentials, or specify a new set
of credentials.
• In the Container Database section, specify the destination CDB (the CDB that
the test master PDB must be a part of).
• In the Credentials section, specify the SYSDBA credentials for the destination
CDB, and the host credentials for the destination CDB. Also, if the destination
CDB is using Automatic Storage Management (ASM) to manage disk storage,
you must specify the ASM credentials.
Click Next.
6. On the Create Test Master Pluggable Database: Configuration page, do the
following:
In the Database Files Location section, specify the storage location where the
datafiles of the test master PDB must be stored. If the destination CDB is using
ASM to manage disk storage, specify the disk group where the datafiles of the test
master PDB must be stored.
To ensure that only the source PDB data model definition is cloned (and the source
PDB data is not cloned), select Exclude User Data.
In the PDB Administrator Credentials section, specify the credentials of the admin
user account that you want to use to administer the test master PDB.
In the Advanced Configuration section, specify the storage limits for the maximum
size of the test master PDB, and the maximum size of a shared tablespace within
the test master PDB. By default, no limits are placed on the values for these
attributes. In the Miscellaneous section, select the logging option that you want to
use for the tablespaces created within the test master PDB.
Note that if the destination CDB is part of an Exadata machine, the Access Controls
and Permissions section is displayed in place of the Advanced Configuration
section. In this case, you must specify the owner and the group that must be
granted read only permissions on the datafiles.
Click Next.
7. On the Create Test Master Pluggable Database: Post Processing page, in the Data
Masking section, specify the data masking definition that you want to apply after
creating the test master PDB. Data masking masks sensitive data in a database.
For information on how to create a data masking definition, see Oracle Data
Masking and Subsetting Guide. Note that you can apply a data masking definition
only if you have the Subset-Masking license pack.
In the Custom Scripts section, for Pre Script and Post Script, specify the Oracle
Software Library components that contain the scripts that you want to run before,
and after creating the test master PDB respectively. Also, for SQL Script, specify
the SQL scripts that you want to run after creating the test master PDB. For Run As
User, select the user account that you want to use to run the SQL scripts.
Click Next.
8. Specify an instance name for the deployment procedure. Also, specify the point in
time when you want the deployment procedure to begin.
In the Notification section, select the deployment procedure states for which you
want to receive e-mail notifications. For example, if you select Scheduled and
Succeeded for Status for Notification, you will receive e-mail notifications when
the deployment procedure is scheduled, and when it succeeds.
Click Next.
9. Review all the details you provided. If you want to edit certain details, click Back
to navigate to the required page.
Click Clone to submit the deployment procedure to create a test master PDB from
the source PDB.
Note:
You will need to add two more parameters (ACL_DF_OWNER=oracle and
ACL_DF_GROUP=oinstall) if you need to create the Test Master on Exadata
ASM.
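For example, a properties file used to create a Test Master on Exadata ASM might include the
following lines (the values shown are the typical defaults quoted above):
ACL_DF_OWNER=oracle
ACL_DF_GROUP=oinstall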
Cloning an Oracle Database Using Staging Areas:
• Backs up each database file and stores it in a staging area
• Transfers each backup file from source to destination
• Restores each backup file to the specified locations
• Recovers and opens the cloned database
3. On the Database target page, from the Oracle Database menu, select Provisioning,
and then click Clone and Refresh Database.
4. On the Clone and Refresh page, click the Switch to Classic Clone link.
6. On the Clone Database: Source Type page, select Online Backup and Use
Recovery Manager (RMAN) to copy database files.
Click Continue.
Note:
When you use RMAN backup to clone a database, the source database will be
duplicated directly to the specified destination Oracle Home. No staging areas
are required.
7. On the Clone Database: Source Options page, in the Degree of Parallels box, enter
the number of parallel channels used by RMAN to copy the database files. The
default number is 2.
8. In the Source Host Credentials section, enter the credentials of the user who owns
the source database Oracle server installation. You can either select Named
credential or New credential.
If you select New credential, enter the Username and Password. You can select the
Set as Preferred Credentials checkbox if you want to use this set of credentials
again. Click Test to check if your credentials are valid.
Click Next.
9. On the Clone Database: Select Destinations page, in the Destination Oracle Home
section, click the Search icon.
Note:
The Oracle Home should exist on the specified host and should match the
version of the source database.
On the Destination Oracle Home page that appears, search and select the
destination Oracle Home. Click Next.
10. In the Destination Host Credentials section, enter the credentials of the user who
owns the Oracle Home specified in the Destination Oracle Home section.
11. In the Destination Database section, specify the global database name and the
instance name, and select File System as the database storage.
Click Next.
12. On the Clone Database: Destination Options page, select Use Database Area and
Fast Recovery Area.
Click Next.
13. On the Clone Database: Database Configuration page, in the Listener Configuration
section, specify the name and port of the listener that will be used for the cloned
database. If a new name and port are specified that are not in use by an existing
listener, a new listener using the specified port will be created.
14. On the Clone Database: Schedule page, specify a name and description for the clone job.
You can choose to run the clone job immediately or you can specify a later time and
date for the job to run.
Click Next.
15. On the Clone Database: Review page, review the details and configuration of the
source database, the destination database, and the database storage. You can view
the database storage files by clicking on View Source Database Files.
3. On the Database target page, from the Oracle Database menu, select Provisioning,
and then click Clone Database.
4. On the Clone and Refresh page, click the Switch to Classic Clone link.
6. On the Clone Database: Source Type page, select Online Backup and Copy
database files via staging areas.
Click Continue.
Note:
This method requires staging areas on both the source and the destination
hosts.
7. On the Clone Database: Source Options page, in the Staging Area section, enter the
Staging Area Location.
8. Select if you want to delete or retain the staging area after the cloning operation.
By retaining the staging area after a cloning operation, you avoid doing another
backup later. However, this option requires a minimum disk space of 2230 MB.
9. In the Source Host Credentials section, enter the credentials of the user who owns
the source database Oracle server installation. You can either select Named
credential or New credential.
If you select New credential, enter the Username and Password. You can select the
Set as Preferred Credentials checkbox if you want to use this set of credentials
again. Click Test to check if your credentials are valid.
Click Next.
10. On the Clone Database: Select Destinations page, in the Destination Oracle Home
section, click the Search icon.
Note:
The Oracle Home should exist on the specified host and should match the
version of the source database.
On the Destination Oracle Home page that appears, search and select the
destination Oracle Home. Click Next.
11. In the Destination Host Credentials section, enter the credentials of the user who
owns the Oracle Home specified in the Destination Oracle Home section.
12. In the Destination Database section, specify the global database name and the
instance name, and select File System as the database storage.
Click Next.
13. On the Clone Database: Destination Options page, select Use Database Area and
Fast Recovery Area.
Click Next.
14. On the Clone Database: Database Configuration page, in the Listener Configuration
section, specify the name and port of the listener that will be used for the cloned
database. If a new name and port are specified that are not in use by an existing
listener, a new listener using the specified port will be created.
15. On the Clone Database: Schedule page, specify a name and description for the clone job.
You can choose to run the clone job immediately or you can specify a later time and
date for the job to run.
Click Next.
16. On the Clone Database: Review page, review the details and configuration of the
source database, the destination database, and the database storage. You can view
the database storage files by clicking on View Source Database Files.
3. On the Database target page, from the Oracle Database menu, select Provisioning,
and then click Clone Database.
4. On the Clone and Refresh page, click the Switch to Classic Clone link.
6. On the Clone Database: Source Type page, select Existing Backup.
Click Continue.
7. On the Clone Database: Source Host Credentials page, select the backup that you
want to use.
8. In the Source Host Credentials section, enter the credentials of the user who owns
the source database Oracle server installation. You can either select Preferred,
Named or New credential.
If you select New credential, enter the Username and Password. You can select the
Set as Preferred Credentials checkbox if you want to use this set of credentials
again. Click Test to check if your credentials are valid.
Click Next.
9. On the Clone Database: Backup Details page, in the Point In Time section, specify a
time or System Change Number (SCN). This will help identify backups necessary
to create the clone database.
Note:
If the existing backup does not have all necessary archive logs, Enterprise
Manager will transfer them from the source host to the destination host as part
of the clone operation.
10. Oracle database backups can be encrypted using a database wallet, a password,
or both. If the backups are encrypted, specify the encryption mode and password
as needed in the Encryption section. By default, the encryption mode is set to
None.
Click Next.
11. In the Destination Host Credentials section, enter the credentials of the user who
owns the Oracle Home specified in the Destination Oracle Home section.
12. In the Destination Database section, specify the global database name and the
instance name, and select File System as the database storage. Click Next.
13. In the Parallelism section, in the Degree of Parallels box, enter the number of
parallel channels used by RMAN to copy the database files. The default number is
2.
Click Next.
14. On the Clone Database: Destination Database Settings page, in the Memory
Parameters section, select Configure Memory Management and then from the
drop-down list select Automatic Shared Memory Management.
The database automatically sets the optimal distribution of memory across the
System Global Area (SGA) components. The distribution of memory will change
from time to time to accommodate changes in the workload. Also, specify the
aggregate Program Global Area (PGA) size.
15. In the Listener Configuration section, specify the name and port of the listener to be
configured for the database. If the listener specified does not exist at the destination
Oracle Home, it will be created.
Note:
If you are going to convert the cloned database to RAC at a later point, it is
recommended that you specify a storage location shared across all hosts in the
cluster.
16. In the Recovery Files section, specify the location where recovery-related files,
such as archived redo log files and RMAN backups, are to be created.
Click Next.
17. On the Clone Database: Storage Locations page, in Database Files Location section,
specify the location where datafiles, tempfiles, redo log files, and control files are to
be created.
18. On the Clone Database: Schedule page, specify a name and description for the clone job.
You can choose to run the clone job immediately or you can specify a later time and
date for the job to run.
Click Next.
19. On the Clone Database: Review page, review the details and configuration of the
source database, the destination database, and the database storage. You can view
the database storage files by clicking on View Source Database Files.
• The on-premise Cloud Control instance must be of version 12.1.0.5 or later with the
latest patches applied from MOS (Doc ID 1549855.1).
• If you are cloning a database to Oracle Cloud, you must ensure that a Management
Agent has been deployed on the destination host. Also, the destination target must
be discovered.
For information on how to deploy a Management Agent on an Oracle Cloud target,
see Oracle Enterprise Manager Cloud Control Administrator's Guide.
• It is recommended that you use a Test Master database or a Test Master pluggable
database for cloning to Oracle Cloud.
To create a Test Master database, see Creating a Test Master Database.
To create a Test Master pluggable database, see Creating a Test Master Pluggable
Database.
• The on-premise database and the database on Oracle Cloud should not be
encrypted, should possess the same character set, and should have the same patch
set level.
• When cloning from on-premise to Oracle Cloud, the SELinux security policy must
be set to permissive. If the security policy is enforced (set to enforcing), the
cloning procedure may fail in the SecureCopyFiles step in the non-advanced
wizard mode. You will need to configure SELinux to allow rsync from the Agent
(script), as shown in the example below.
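For example, on a typical Linux host you could switch SELinux to permissive mode for the
current session and persist the setting across reboots as shown below; the commands are an
illustration only, so review them with your security administrator before applying them:
setenforce 0
# In /etc/selinux/config, set:
SELINUX=permissive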
2. For View, select Search List. From the View menu, select Expand All.
3. Select the database to be cloned and right click the mouse button.
4. From the Oracle Database menu, select Cloning, then select Clone to Oracle
Cloud.
5. On the Source and Destination: Clone to Oracle Cloud page, do the following:
• In the Source Credentials section, specify or search and select the database and
Oracle Home credentials. If the source database is encrypted, specify the Wallet
Password. Click OK.
Note: If the source database is present on a hybrid gateway, the backups and
Oracle Home binaries can be transferred directly to the Oracle Public Cloud
machine. If the source database is present on any other host machine, the
Hybrid Gateway details must be specified on this page.
• In the Data Time Series section, the Prior Point in Time option is selected by
default and a timeline of RMAN Backup images appears. Select the appropriate
RMAN backup by clicking Select on the image.
• In the Source Backup Location field, the location of the source backup you have
selected appears.
• In the Destination section, in the Clone to field, the Select Compute Cloud
Service is selected by default. This indicates that you are requesting resources
on Oracle Public Cloud to clone your on-premise database.
• In the Database Definition section, specify the Display Name for the database,
the Global Database Name, and the SID.
– Clone Oracle Home: If the Oracle Home is not pre-installed on the Oracle
Public Cloud machine on which the database is being cloned, select the
Clone Oracle Home option.
– Use an Existing Home: If the Oracle Home is already cloned, select the Use
an Existing Home option.
• If you select the Clone Oracle Home option, select the Oracle Public Cloud host
machine on which the Oracle Home is to be provisioned. The Shape (size) of the
Oracle Public Cloud machine and the Site (Oracle Cloud) is displayed. Specify
the following:
– Run root scripts that require root privileges: You can either specify the root
credentials or choose to enter them later to run the root scripts.
In the Stage Location field, the location on the Oracle Public Cloud machine
where the backups will be staged before cloning is displayed.
• If you select the Use an Existing Home option, specify the following:
– Oracle Home Location: Select the location of the Oracle Home on an Oracle
Public Cloud machine already managed in Enterprise Manager. The host
name of the machine in which the Oracle Home is present is displayed.
• Hybrid Gateway Server: Specify the host name and credentials for the Hybrid
Gateway Server (configured Enterprise Manager Management Agent) which
transfers data from Enterprise Manager to Oracle Public Cloud. Click Next. The
Configuration page appears:
• Software Location: Specify the Oracle Base and Oracle Home location where the
database is to be cloned.
• Oracle Database User Groups: Specify the name of the Oracle Database User
Group performing this cloning operation.
• Database Files Location: Specify the location where you want the data files,
temp files, redo log files, and control files to be created.
• Recovery Files Location: Specify the location where you want the recovery files,
such as archived redo logs, RMAN backups, and other related files to be
created. You can choose to use the fast recovery area. If you do, specify the fast
recovery area size. In the Database Mode section, you can enable Read Only.
• Listener Configuration: In the Hosts section on the previous page, if you have
selected:
– Clone Oracle Home: Specify the Listener Name and Port to create a new
listener with which the database will be registered.
– Use an Existing Home: Select the listener targets running under the new
Oracle Home with which to register the clone database. You can add new
listeners by clicking Add and specifying the listener name and port.
• Database Credentials: Specify passwords for the SYS, SYSTEM, and DBSNMP
administrative users in the clone database. You can choose to have the same
password for all the three users or a different password for each.
7. On the Initialization Parameters page, you can configure the values of various
initialization parameters that affect the operation of the database instance. Select
the parameter and click Edit to modify the value of the parameter. Click Next.
8. On the Post Processing page, you can select masking rules, subsetting rules, or
custom scripts such as pre script, post script, and post SQL script. Click Next.
9. On the Schedule page, specify an instance name for the cloning deployment
procedure. Also, specify the point in time when you want the cloning deployment
procedure to begin.
In the Notification section, select the deployment procedure states for which you
want to receive e-mail notifications. For example, if you select Scheduled and
Succeeded for Status for Notification, you will receive e-mail notifications when
the cloning deployment procedure is scheduled, and when it succeeds.
Click Next.
10. On the Review page, review all the details you provided. If you want to edit certain
details, click Back to navigate to the required page.
Click Submit to launch the procedure to clone the database and transfer the
selected backups to the destination host via the hybrid gateway. Once cloned,
this database can be managed as an Enterprise Manager target.
To refresh the cloned database, from the Oracle Database menu, select Cloning,
then select Clone Management. On the Clone Management Dashboard, select the
database and click Refresh. Accept the default values in the wizard and optionally
change the Initialization Parameters and Configuration details, and click Refresh to
refresh the database.
#------------------------------------------------------------#
# Hybrid Gateway Details
#------------------------------------------------------------#
FORWARDER_HOST=slc04wim.us.oracle.com
FORWARDER_HOST_CREDS=HOST:SYSMAN
FORWARDER_STAGE_LOCATION=/scratch/fwd
#------------------------------------------------------------#
# Destination Details
#------------------------------------------------------------#
TARGET_HOST_LIST=129.124.22.34
HOST_NORMAL_NAMED_CRED=OPC_CRED:SYSMAN
HOST_PRIV_NAMED_CRED=HOST_CRED_ROOT:SYSMAN
DEST_LOCATION=/u03/backup
DEST_WORK_DIR=/tmp
#------------------------------------------------------------#
# Database Definition
#------------------------------------------------------------#
COMMON_DB_SID=Dcln1
COMMON_DOMAIN_NAME=us.xyz.com
COMMON_GLOBAL_DB_NAME=Dcln1.xyz.com
DATABASE_TYPE=dbTypeSI
DB_ADMIN_PASSWORD_SAME=true
DATABASE_PASSWORDS=welcome1
#------------------------------------------------------------#
# Software Configuration
#------------------------------------------------------------#
CLONE_HOME=Y # mention it as N if only database cloning is required
# All the parameters below are required only if a new Oracle Home needs to be provisioned.
OINSTALL_GROUP=dba
OSBACKUPDBA_GROUP=dba
OSDBA_GROUP=dba
OSDGDBA_GROUP=opc
OSKMDBA_GROUP=opc
OSOPER_GROUP=opc
ORACLE_BASE_LOC=/u03/home/app/base
ORACLE_HOME_LOC=/u03/home/app/base/product/11.2.0/dbhome_1
#------------------------------------------------------------#
# Database Configuration
#------------------------------------------------------------#
ARCHIVE_LOG_MODE=YES
DB_FILE_LOC=/u03/home/oradata
DB_TARGET_DISPLAY_NAME=Dcln1
DB_TARGET_NAME=Dcln1.xyz.com
FLASH_REC_AREA=/u03/home/app/home/fast_recovery_area
FLASH_REC_AREA_SIZE=4182
LISTENER_PORT=2887
#Only needed if we are provisioning a new home
LISTENER_NAME=LIST_2887
#--------------------------------------------------------------#
# Data file transfer configuration parameters
#--------------------------------------------------------------#
# This will be moved to emctl metadata. For now it is a variable in the DP.
NUMBER_OF_THREADS=10
# This will be moved to emctl metadata. For now it is a variable in the DP.
# The chunk size in bytes. For now default is 250MB
MIN_CHUNK_SIZE=262144000
#Transfer Unit can be FILE or CHUNK
TRANSFER_UNIT=FILE
#Maximum speed in Kbps
BANDWIDTH_LIMIT=2048
#Time out in seconds for data transfer
TIME_OUT=1200
#----------------------------------------------------------------#
# Wallet Configuration
#----------------------------------------------------------------#
DEST_PATH_TO_WALLET=/u03/wallet
WALLET_PASSWORD=Welcome123
NEW_WALLET_PASSWORD=welcome1
REKEY_REQUIRED=true
2. For View, select Search List. From the View menu, select Expand All.
3. Look for the source CDB (the CDB that the source PDB is a part of) in the list, then
click the name of the PDB that you want to clone.
4. From the Oracle Database menu, select Cloning, then select Clone to Oracle
Cloud.
Alternatively, in Step 3, you can right click the name of the PDB that you want to
clone, select Oracle Database, select Cloning, then select Clone to Oracle Cloud.
5. On the Source and Destination: Clone to Oracle Cloud page, do the following:
• In the Credentials section, specify the SYSDBA credentials for the source CDB,
and the host credentials for the source CDB. You can choose to use the preferred
credentials, use a saved set of named credentials, or specify a new set of
credentials.
• In the Container Database section, specify the destination CDB that is deployed
in Oracle Cloud (the CDB that the PDB clone must be a part of).
• In the Credentials section, specify the SYSDBA credentials for the destination
CDB, and the host credentials for the destination CDB.
6. If you do not need to specify any more details, click Clone. This submits the
deployment procedure to clone a PDB to a CDB that is deployed in Oracle Cloud.
To specify other configuration details, mask data, as well as schedule the cloning
process, click Advanced.
Follow the rest of the steps, if you have selected the Advanced option.
7. On the Clone to Oracle Cloud: Source and Destination page, verify the details, and
then click Next.
8. On the Clone to Oracle Cloud: Configuration page, in the Database Files Location
section, specify the storage location where the datafiles of the PDB clone must be
stored.
In the Advanced Configuration section, specify the storage limits for the maximum
size of the PDB clone, and the maximum size of a shared table space within the
PDB clone. By default, no limits are placed on the values for these attributes.
In the Miscellaneous section, select the logging option that you want to use for the
table spaces created within the PDB clone.
Click Next.
9. On the Clone to Oracle Cloud: Post Processing page, in the Data Masking section,
specify the data masking definition that you want to apply after cloning the PDB.
Data masking masks sensitive data in a database.
For information on how to create a data masking definition, see Creating or Editing
a Data Masking Definition. Note that you can apply a data masking definition only
if you have the Subset-Masking license pack.
In the Custom Scripts section, for Pre Script and Post Script, specify the Oracle
Software Library components that contain the scripts that you want to run before
cloning, and after cloning the PDB respectively. Also, for SQL Script, specify the
SQL scripts that you want to run after cloning the PDB. For Run As User, select the
user account that you want to use to run the SQL scripts.
Click Next.
10. On the Clone to Oracle Cloud: Schedule page, specify an instance name for the
cloning deployment procedure. Also, specify the point in time when you want the
cloning deployment procedure to begin.
In the Notification section, select the deployment procedure states for which you
want to receive e-mail notifications. For example, if you select Scheduled and
Succeeded for Status for Notification, you will receive e-mail notifications when
the cloning deployment procedure is scheduled, and when it succeeds.
Click Next.
11. On the Clone to Oracle Cloud: Review page, review all the details you provided. If
you want to edit certain details, click Back to navigate to the required page.
Click Clone to submit the deployment procedure to clone a PDB to a CDB that is
deployed in Oracle Cloud.
DEST_LOCATION=/scratch/sray/app/sray/cdb_tm/HR_TM_PDB6
DEST_CDB_TARGET=cdb_tm
DEST_CDB_TYPE=oracle_database
DEST_CDB_CREDS=NC_HOST_SYC:SYCO
DEST_PDB_NAME=HR_TM_PDB6
BACKUP_TYPE=OSIMAGE
• Existing backup
Uses an existing backup of the source PDB and creates a new PDB. The
BACKUP_TYPE parameter should specify the type of backup. The allowed values
for BACKUP_TYPE are OSIMAGE, RMAN and TAR. The EXISTING_BACKUP
parameter should specify the location with the backup name and
EXISTING_BACKUP_METADATA should specify the location and the metadata
file name for the backup.
Sample properties file:
SRC_PDB_TARGET=cdb_prod_PDB
SRC_HOST_CREDS=NC_HOST_SCY:SYCO
SRC_CDB_CREDS=NC_HOST_SYC:SYCO
SRC_WORK_DIR=/tmp/source
DEST_HOST_CREDS=NC_SLCO_SSH:SYS
DEST_LOCATION=/scratch/sray/app/sray/cdb_tm/HR_TM_PDB6
DEST_CDB_TARGET=cdb_tm
DEST_CDB_TYPE=oracle_database
DEST_CDB_CREDS=NC_HOST_SYC:SYCO
DEST_PDB_NAME=HR_TM_PDB6
EXISTING_BACKUP=/user1/pdbbackup/PDB1_Backup_14297779
EXISTING_BACKUP_METADATA=/user1/pdbbackup/PDB1_Backup_14297779/PDB1.xml
BACKUP_TYPE=RMAN
• Unplug/plug
Unplugs the source PDB and creates a new PDB at the destination using the
unplugged source, and then plugs the source back. The EXISTING_BACKUP,
EXISTING_BACKUP_METADATA, and BACKUP_TYPE parameters should not be
provided.
Sample properties file:
SRC_PDB_TARGET=cdb_prod_PDB
SRC_HOST_CREDS=NC_HOST_SCY:SYCO
SRC_CDB_CREDS=NC_HOST_SYC:SYCO
SRC_WORK_DIR=/tmp/source
DEST_HOST_CREDS=NC_SLCO_SSH:SYS
DEST_LOCATION=/scratch/sray/app/sray/cdb_tm/HR_TM_PDB6
DEST_CDB_TARGET=cdb_tm
DEST_CDB_TYPE=oracle_database
DEST_CDB_CREDS=NC_HOST_SYC:SYCO
DEST_PDB_NAME=HR_TM_PDB6
Note:
For all three methods stated above, if the destination PDB data files
location is ASM, add the DEST_STAGE_DIR parameter, whose value will be
used as the destination while transferring the source PDB data files. This
parameter is optional; if it is not provided, a temporary directory is used.
On Linux systems, the temporary directory is /tmp.
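For example, the following illustrative line stages the source PDB data files in a dedicated
directory instead of the default temporary directory:
DEST_STAGE_DIR=/scratch/pdb_stage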
2. Use the input variables to create a properties file with values for all the variables.
4. Export data from the source database by creating a database profile. To do so, enter
the verb emcli create_dbprofile -input_file=data:<properties file name along with path>.
Note:
Use the properties file created in the previous step for this verb.
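For example, if the properties file created in step 2 is saved as /u01/files/dbprofile.props
(an illustrative path), the command would be:
emcli create_dbprofile -input_file=data:/u01/files/dbprofile.props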
#-----------------------------------------------#
# DATA CONTENT DETAILS #
#-----------------------------------------------#
DATA_CONTENT_MODE=EXPORT
DATA_CONTENT=METADATA_AND_DATA
#-----------------------------------------------#
# EXPORT DETAILS #
#-----------------------------------------------#
EXPORT.EXPORT_TYPE=SELECTED_SCHEMAS
EXPORT.SCHEMA_INCLUDE_LIST.0=HR
EXPORT.SCHEMA_INCLUDE_LIST.1=PM
EXPORT.SCHEMA_INCLUDE_LIST.2=OE
EXPORT.SCHEMA_INCLUDE_LIST.3=IX
EXPORT.SCHEMA_INCLUDE_LIST.4=SH
EXPORT.SCHEMA_INCLUDE_LIST.5=BI
EXPORT.DEGREE_OF_PARALLELISM=1
EXPORT.DUMP_DIRECTORY_LIST.0=directory=SCHEMAS_DUMP_DIR,file_name=samplschemas.dmp,max_size=100
EXPORT.LOG_FILE_DIRECTORY=directory=SCHEMAS_DUMP_DIR,file_name=samplschemas.log
#-----------------------------------------------#
# PROFILE DETAILS #
#-----------------------------------------------#
PROFILE_NAME=Export Dump of Sample schemas10
PROFILE_VERSION=11.2.0.4.0
PROFILE_LOCATION=Database Provisioning Profiles/12.1.0.1.0/linux_x64/
WORKING_DIRECTORY=/tmp
#-----------------------------------------#
# DESTINATION #
#-----------------------------------------#
DEST_HOST_CREDS=NC_HOST_SRAY
DEST_LOCATION=/scratch/sray/app3/sray/oradata/migda
DEST_HOST=slo.us.example.com
#-----------------------------------------#
# HYBRID GATEWAY / FORWARDER #
#-----------------------------------------#
FORWARDER_HOST=slo.us.example.com
FORWARDER_CRED=ACD_NY:SYSCO
WORKING_DIRECTORY=/tmp
Note:
Remove the Hybrid Gateway parameters if the SSH connection exists between
the source and the destination hosts.
6. Enter the verb to import data into the destination database: emcli dbimport
-input_file=data:/u01/files/dbimport.props.
#-----------------------------------------------#
# PROFILE #
#-----------------------------------------------#
PROFILE_LOCATION=Database Provisioning Profiles/12.1.0.1.0/linux_x64/Export Dump of Sample schemas10
#-----------------------------------------------#
# SCHEMA DETAILS #
#-----------------------------------------------#
REMAP_SCHEMA_LIST.0=HR:HR
REMAP_SCHEMA_LIST.1=OE:OE
REMAP_SCHEMA_LIST.2=PM:PM
REMAP_SCHEMA_LIST.3=IX:IX
REMAP_SCHEMA_LIST.4=SH:SH
REMAP_SCHEMA_LIST.5=BI:BI
REMAP_TABLESPACE_LIST.0=EXAMPLE:MYTBSP1
REMAP_TABLESPACE_LIST.1=USERS:MYTBSP1
REMAP_TABLESPACE_LIST.2=SYSTEM:MYTBSP1
DEGREE_OF_PARALLELISM=1
DUMP_FILE_LIST.0=/scratch/ae/dumpdir/samplschemas.dmp
IMPORT_LOG_FILE_DIRECTORY=DATA_PUMP_DIR
2. Use the input variables to create a properties file with values for all the variables.
4. Export data from the source database by creating a database profile. To do so, enter
the verb emcli create_dbprofile -input_file=data:<properties file name along with path>.
Note:
Use the properties file created in the previous step for this verb.
#-----------------------------------------------#
# DATA CONTENT DETAILS #
#-----------------------------------------------#
DATA_CONTENT_MODE=EXPORT
DATA_CONTENT=METADATA_AND_DATA
#-----------------------------------------------#
# EXPORT DETAILS #
#-----------------------------------------------#
EXPORT.EXPORT_TYPE=FULL_DATABASE
EXPORT.DEGREE_OF_PARALLELISM=1
EXPORT.DUMP_DIRECTORY_LIST.0=directory=SCHEMAS_DUMP_DIR,file_name=samplschemas.dmp,max_size=100
EXPORT.LOG_FILE_DIRECTORY=directory=SCHEMAS_DUMP_DIR,file_name=samplschemas.log
#-----------------------------------------------#
# PROFILE DETAILS #
#-----------------------------------------------#
PROFILE_NAME=Export Dump of Sample schemas10
PROFILE_VERSION=11.2.0.4.0
PROFILE_LOCATION=Database Provisioning Profiles/12.1.0.1.0/linux_x64/
WORKING_DIRECTORY=/tmp
#-----------------------------------------#
# DESTINATION #
#-----------------------------------------#
DEST_HOST_CREDS=NC_HOST_SRAY
DEST_LOCATION=/scratch/sray/app3/sray/oradata/migda
DEST_HOST=slo.us.example.com
#-----------------------------------------#
# HYBRID GATEWAY / FORWARDER #
#-----------------------------------------#
FORWARDER_HOST=slo.us.example.com
FORWARDER_CRED=ACD_NY:SYSCO
WORKING_DIRECTORY=/tmp
Note:
Remove the Hybrid Gateway parameters if the SSH connection exists between
the source and the destination hosts.
6. Enter the verb to import data into the destination database: emcli dbimport
-input_file=data:/u01/files/dbimport.props.
Note:
To clone the destination to a pluggable database, ensure you enter
oracle_pdb in the DESTINATION_TARGET_TYPE option in the properties
file.
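For example, to import into a PDB, the import properties file would include the following line:
DESTINATION_TARGET_TYPE=oracle_pdb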
#-----------------------------------------------#
# PROFILE #
#-----------------------------------------------#
PROFILE_LOCATION=Database Provisioning Profiles/12.1.0.1.0/linux_x64/Export Dump of Sample schemas10
#-----------------------------------------------#
# SCHEMA DETAILS #
#-----------------------------------------------#
REMAP_SCHEMA_LIST.0=HR:HR
REMAP_SCHEMA_LIST.1=OE:OE
REMAP_SCHEMA_LIST.2=PM:PM
REMAP_SCHEMA_LIST.3=IX:IX
REMAP_SCHEMA_LIST.4=SH:SH
REMAP_SCHEMA_LIST.5=BI:BI
REMAP_TABLESPACE_LIST.0=EXAMPLE:MYTBSP1
REMAP_TABLESPACE_LIST.1=USERS:MYTBSP1
REMAP_TABLESPACE_LIST.2=SYSTEM:MYTBSP1
DEGREE_OF_PARALLELISM=1
DUMP_FILE_LIST.0=/scratch/ae/dumpdir/samplschemas.dmp
IMPORT_LOG_FILE_DIRECTORY=DATA_PUMP_DIR
15.5.1.1 Cloning a PDB from Oracle Cloud Using the Clone Wizard
To clone a PDB from Oracle Cloud to an On-Premise PDB, follow these steps:
2. For View, select Search List. From the View menu, select Expand All.
3. Look for the source CDB (the CDB that the source PDB is a part of) in the list, then
click the name of the PDB that you want to clone.
4. From the Oracle Database menu, select Cloning, then select Clone from Oracle
Cloud.
Alternatively, in Step 3, you can right click the name of the PDB that you want to
clone, select Oracle Database, select Cloning, then select Clone from Oracle
Cloud.
5. On the Source and Destination: Clone from Oracle Cloud page, do the following:
• In the Credentials section, specify the SYSDBA credentials for the source CDB,
and the host credentials for the source CDB. You can choose to use the preferred
credentials, use a saved set of named credentials, or specify a new set of
credentials.
• In the Container Database section, specify the destination CDB that is deployed
in the public cloud setup (the CDB that the PDB clone must be a part of).
• In the Credentials section, specify the SYSDBA credentials for the destination
CDB, and the host credentials for the destination CDB.
6. If you do not need to specify any more details, click Clone. This submits the
deployment procedure to clone a PDB to a CDB that is deployed in a public cloud
setup.
To specify other configuration details, mask data, as well as schedule the cloning
process, click Advanced.
Follow the rest of the steps, if you have selected the Advanced option.
7. On the Clone from Oracle Cloud: Source and Destination page, verify the details,
and then click Next.
8. On the Clone from Cloud: Configuration page, in the Database Files Location
section, specify the storage location where the datafiles of the PDB clone must be
stored.
In the Advanced Configuration section, specify the storage limits for the maximum
size of the PDB clone, and the maximum size of a shared table space within the
PDB clone. By default, no limits are placed on the values for these attributes.
In the Miscellaneous section, select the logging option that you want to use for the
table spaces created within the PDB clone.
Click Next.
9. On the Clone from Cloud: Post Processing page, in the Data Masking section,
specify the data masking definition that you want to apply after cloning the PDB.
Data masking masks sensitive data in a database.
For information on how to create a data masking definition, see Creating or Editing
a Data Masking Definition. Note that you can apply a data masking definition only
if you have the Subset-Masking license pack.
In the Custom Scripts section, for Pre Script and Post Script, specify the Oracle
Software Library components that contain the scripts that you want to run before
cloning, and after cloning the PDB respectively. Also, for SQL Script, specify the
SQL scripts that you want to run after cloning the PDB. For Run As User, select the
user account that you want to use to run the SQL scripts.
Click Next.
10. On the Clone from Cloud: Schedule page, specify an instance name for the cloning
deployment procedure. Also, specify the point in time when you want the cloning
deployment procedure to begin.
In the Notification section, select the deployment procedure states for which you
want to receive e-mail notifications. For example, if you select Scheduled and
Succeeded for Status for Notification, you will receive e-mail notifications when
the cloning deployment procedure is scheduled, and when it succeeds.
Click Next.
11. On the Clone from Cloud: Review page, review all the details you provided. If you
want to edit certain details, click Back to navigate to the required page.
Click Clone to submit the deployment procedure to clone the selected PDB to a
CDB that is deployed in a public cloud setup.
• Existing backup
Uses an existing backup of the source PDB and creates a new PDB. The
BACKUP_TYPE parameter should specify the type of backup. The allowed values
for BACKUP_TYPE are OSIMAGE, RMAN and TAR. The EXISTING_BACKUP
parameter should specify the location with the backup name and
EXISTING_BACKUP_METADATA should specify the location and the metadata
file name for the backup.
Sample properties file:
SRC_PDB_TARGET=cdb_prod_PDB
SRC_HOST_CREDS=NC_HOST_SCY:SYCO
SRC_CDB_CREDS=NC_HOST_SYC:SYCO
SRC_WORK_DIR=/tmp/source
DEST_HOST_CREDS=NC_SLCO_SSH:SYS
DEST_LOCATION=/scratch/sray/app/sray/cdb_tm/HR_TM_PDB6
DEST_CDB_TARGET=cdb_tm
DEST_CDB_TYPE=oracle_database
DEST_CDB_CREDS=NC_HOST_SYC:SYCO
DEST_PDB_NAME=HR_TM_PDB6
EXISTING_BACKUP=/user1/pdbbackup/PDB1_Backup_14297779
EXISTING_BACKUP_METADATA=/user1/pdbbackup/PDB1_Backup_14297779/PDB1.xml
BACKUP_TYPE=RMAN
• Unplug/plug
Unplugs the source PDB and creates a new PDB at the destination using the
unplugged source, and then plugs the source back. The EXISTING_BACKUP and
BACKUP_TYPE parameters should not be provided.
Sample properties file:
SRC_PDB_TARGET=cdb_prod_PDB
SRC_HOST_CREDS=NC_HOST_SCY:SYCO
SRC_CDB_CREDS=NC_HOST_SYC:SYCO
SRC_WORK_DIR=/tmp/source
DEST_HOST_CREDS=NC_SLCO_SSH:SYS
DEST_LOCATION=/scratch/sray/app/sray/cdb_tm/HR_TM_PDB6
DEST_CDB_TARGET=cdb_tm
DEST_CDB_TYPE=oracle_database
DEST_CDB_CREDS=NC_HOST_SYC:SYCO
DEST_PDB_NAME=HR_TM_PDB6
Note:
For all three methods explained above, if the destination PDB data files
location is ASM, add the DEST_STAGE_DIR parameter, whose value will be
used as the destination while transferring the source PDB data files. This
parameter is optional; if it is not provided, a temporary directory is used.
On Linux systems, the temporary directory is /tmp.
2. Use the input variables to create a properties file with values for all the variables.
4. Export data from the source database by creating a database profile. To do so, enter
the verb emcli create_dbprofile -input_file=data:<properties file name along with path>.
Note:
Use the properties file created in the previous step for this verb.
#-----------------------------------------------#
# DATA CONTENT DETAILS #
#-----------------------------------------------#
DATA_CONTENT_MODE=EXPORT
DATA_CONTENT=METADATA_AND_DATA
#-----------------------------------------------#
# EXPORT DETAILS #
#-----------------------------------------------#
EXPORT.EXPORT_TYPE=SELECTED_SCHEMAS
EXPORT.SCHEMA_INCLUDE_LIST.0=HR
EXPORT.SCHEMA_INCLUDE_LIST.1=PM
EXPORT.SCHEMA_INCLUDE_LIST.2=OE
EXPORT.SCHEMA_INCLUDE_LIST.3=IX
EXPORT.SCHEMA_INCLUDE_LIST.4=SH
EXPORT.SCHEMA_INCLUDE_LIST.5=BI
EXPORT.DEGREE_OF_PARALLELISM=1
EXPORT.DUMP_DIRECTORY_LIST.0=directory=SCHEMAS_DUMP_DIR,file_name=samplschemas.dmp,max_size=100
EXPORT.LOG_FILE_DIRECTORY=directory=SCHEMAS_DUMP_DIR,file_name=samplschemas.log
#-----------------------------------------------#
# PROFILE DETAILS #
#-----------------------------------------------#
PROFILE_NAME=Export Dump of Sample schemas10
PROFILE_VERSION=11.2.0.4.0
PROFILE_LOCATION=Database Provisioning Profiles/12.1.0.1.0/linux_x64/
WORKING_DIRECTORY=/tmp
#-----------------------------------------#
# DESTINATION #
#-----------------------------------------#
DEST_HOST_CREDS=NC_HOST_SRAY
DEST_LOCATION=/scratch/sray/app3/sray/oradata/migda
DEST_HOST=slo.us.example.com
#-----------------------------------------#
# HYBRID GATEWAY / FORWARDER #
#-----------------------------------------#
FORWARDER_HOST=slo.us.example.com
FORWARDER_CRED=ACD_NY:SYSCO
WORKING_DIRECTORY=/tmp
Note:
Remove the Hybrid Gateway parameters if the SSH connection exists between
the source and the destination hosts.
6. Enter the verb to import data into the destination database: emcli dbimport
-input_file=data:/u01/files/dbimport.props.
Note:
To clone the destination to a database or a pluggable database, ensure you
provide the required value in the DESTINATION_TARGET_TYPE option in
the properties file. For a database, enter oracle_database, and for a PDB,
enter oracle_pdb.
DESTINATION_TARGET_TYPE=oracle_database
DATABASE_CREDENTIAL=CRED_DB:sysman
HOST_NAMED_CREDENTIAL=CRED_HOST:sysman
#-----------------------------------------------#
# PROFILE #
#-----------------------------------------------#
PROFILE_LOCATION=Database Provisioning Profiles/12.1.0.1.0/linux_x64/Export Dump of Sample schemas10
#-----------------------------------------------#
# SCHEMA DETAILS #
#-----------------------------------------------#
REMAP_SCHEMA_LIST.0=HR:HR
REMAP_SCHEMA_LIST.1=OE:OE
REMAP_SCHEMA_LIST.2=PM:PM
REMAP_SCHEMA_LIST.3=IX:IX
REMAP_SCHEMA_LIST.4=SH:SH
REMAP_SCHEMA_LIST.5=BI:BI
REMAP_TABLESPACE_LIST.0=EXAMPLE:MYTBSP1
REMAP_TABLESPACE_LIST.1=USERS:MYTBSP1
REMAP_TABLESPACE_LIST.2=SYSTEM:MYTBSP1
DEGREE_OF_PARALLELISM=1
DUMP_FILE_LIST.0=/scratch/ae/dumpdir/samplschemas.dmp
IMPORT_LOG_FILE_DIRECTORY=DATA_PUMP_DIR
2. Use the input variables to create a properties file with values for all the variables.
4. Export data from the source database by creating a database profile. To do so, enter
the verb emcli create_dbprofile -input_file=data:<properties file name along with path>.
Note:
Use the properties file created in the previous step for this verb.
#-----------------------------------------------#
REFERENCE_DATABASE=SS_TM_DB
REFERENCE_DATABASE_TYPE=oracle_database
REF_DB_CREDENTIALS=CRED_DB:sysman
REF_HOST_CREDENTIALS=CRED_HOST:sysman
#-----------------------------------------------#
# DATA CONTENT DETAILS #
#-----------------------------------------------#
DATA_CONTENT_MODE=EXPORT
DATA_CONTENT=METADATA_AND_DATA
#-----------------------------------------------#
# EXPORT DETAILS #
#-----------------------------------------------#
EXPORT.EXPORT_TYPE=FULL_DATABASE
EXPORT.DEGREE_OF_PARALLELISM=1
EXPORT.DUMP_DIRECTORY_LIST.0=directory=SCHEMAS_DUMP_DIR,file_name=samplschemas.dmp,max_size=100
EXPORT.LOG_FILE_DIRECTORY=directory=SCHEMAS_DUMP_DIR,file_name=samplschemas.log
#-----------------------------------------------#
# PROFILE DETAILS #
#-----------------------------------------------#
PROFILE_NAME=Export Dump of Sample schemas10
PROFILE_VERSION=11.2.0.4.0
PROFILE_LOCATION=Database Provisioning Profiles/12.1.0.1.0/linux_x64/
WORKING_DIRECTORY=/tmp
#-----------------------------------------#
# DESTINATION #
#-----------------------------------------#
DEST_HOST_CREDS=NC_HOST_SRAY
DEST_LOCATION=/scratch/sray/app3/sray/oradata/migda
DEST_HOST=slo.us.example.com
#-----------------------------------------#
# HYBRID GATEWAY / FORWARDER #
#-----------------------------------------#
FORWARDER_HOST=slo.us.example.com
FORWARDER_CRED=ACD_NY:SYSCO
WORKING_DIRECTORY=/tmp
Note:
Remove the Hybrid Gateway parameters if the SSH connection exists between
the source and the destination hosts.
6. Enter the verb to import data into the destination database: emcli dbimport
-input_file=data:/u01/files/dbimport.props.
#-----------------------------------------------#
# PROFILE #
#-----------------------------------------------#
PROFILE_LOCATION=Database Provisioning Profiles/12.1.0.1.0/linux_x64/Export Dump of Sample schemas10
#-----------------------------------------------#
# SCHEMA DETAILS #
#-----------------------------------------------#
REMAP_SCHEMA_LIST.0=HR:HR
REMAP_SCHEMA_LIST.1=OE:OE
REMAP_SCHEMA_LIST.2=PM:PM
REMAP_SCHEMA_LIST.3=IX:IX
REMAP_SCHEMA_LIST.4=SH:SH
REMAP_SCHEMA_LIST.5=BI:BI
REMAP_TABLESPACE_LIST.0=EXAMPLE:MYTBSP1
REMAP_TABLESPACE_LIST.1=USERS:MYTBSP1
REMAP_TABLESPACE_LIST.2=SYSTEM:MYTBSP1
DEGREE_OF_PARALLELISM=1
DUMP_FILE_LIST.0=/scratch/ae/dumpdir/samplschemas.dmp
IMPORT_LOG_FILE_DIRECTORY=DATA_PUMP_DIR
2. For View, select Search List. From the View menu, select Expand All.
3. Select the database to be cloned and right click the mouse button.
4. From the Oracle Database menu, select Cloning, then select Clone to Oracle
Cloud.
5. On the Source and Destination: Clone to Oracle Cloud page, do the following:
• In the Source Credentials section, specify or search and select the Oracle Cloud
database and Oracle Home credentials. Click OK.
• In the Data Time Series section, the Prior Point in Time option is selected by
default and a timeline of RMAN Backup images appears. Select the appropriate
RMAN backup by clicking Select on the image.
• In the Source Backup Location field, the location of the source backup you have
selected appears.
• In the Destination section, in the Clone to field, the Select Compute Cloud
Service is selected by default. This indicates that you are requesting resources
on Oracle Public Cloud to clone your on-premise database.
• In the Database Definition section, specify the Display Name for the database,
the Global Database Name, and the SID.
– Clone Oracle Home: If the Oracle Home is not pre-installed on the on-
premise host on which the database is being cloned, select the Clone Oracle
Home option.
– Use an Existing Home: If the Oracle Home is already cloned, select the Use
an Existing Home option.
• If you select the Clone Oracle Home option, select the destination machine on
which the Oracle Home is to be provisioned. Specify the following:
– Run root scripts that require root privileges: You can either specify the root
credentials or choose to enter them later to run the root scripts.
In the Stage Location field, the location on the destination host machine
where the backups will be staged before cloning is displayed.
• If you select the Use an Existing Home option, specify the following:
– Oracle Home Location: Select the location of the Oracle Home on an Oracle
Public Cloud machine already managed in Enterprise Manager. The host
name of the machine in which the Oracle Home is present is displayed.
• Hybrid Gateway Server: Specify the host name and credentials for the Hybrid
Gateway Server (configured Enterprise Manager Management Agent) which
transfers data from Oracle Cloud to Enterprise Manager. Click Next. The
Configuration page appears:
• Software Location: Specify the Oracle Base and Oracle Home location where
the database is to be cloned.
• Oracle Database User Groups: Specify the name of the Oracle Database User
Group performing this cloning operation.
• Database Files Location: Specify the location where you want the data files,
temp files, redo log files, and control files to be created.
• Recovery Files Location: Specify the location where you want the recovery files,
such as archived redo logs, RMAN backups, and other related files to be
created. You can choose to use the fast recovery area. If you do, specify the fast
recovery area size. In the Database Mode section, you can enable Read Only.
• Listener Configuration: In the Hosts section on the previous page, if you have
selected:
– Clone Oracle Home: Specify the Listener Name and Port to create a new
listener with which the database will be registered.
– Use an Existing Home: Select the listener targets running under the new
Oracle Home with which to register the clone database. You can add new
listeners by clicking Add and specifying the listener name and port.
• Database Credentials: Specify passwords for the SYS, SYSTEM, and DBSNMP
administrative users in the clone database. You can choose to have the same
password for all the three users or a different password for each.
7. On the Initialization Parameters page, you can configure the values of various
initialization parameters that affect the operation of the database instance. Select
the parameter and click Edit to modify the value of the parameter. Click Next.
8. On the Post Processing page, you can select masking rules, subsetting rules, or
custom scripts such as pre script, post script, and post SQL script. Click Next.
9. On the Schedule page, specify an instance name for the cloning deployment
procedure. Also, specify the point in time when you want the cloning deployment
procedure to begin.
In the Notification section, select the deployment procedure states for which you
want to receive e-mail notifications. For example, if you select Scheduled and
Succeeded for Status for Notification, you will receive e-mail notifications when
the cloning deployment procedure is scheduled, and when it succeeds.
Click Next.
10. On the Review page, review all the details you provided. If you want to edit certain
details, click Back to navigate to the required page.
Click Submit to launch the procedure to clone the database and transfer the
selected backups to the destination host via the hybrid gateway. Once cloned,
this database can be managed as an Enterprise Manager target.
To refresh the cloned database, from the Oracle Database menu, select Cloning,
then select Clone Management. On the Clone Management Dashboard, select the
database and click Refresh. Accept the default values in the wizard and optionally
change the Initialization Parameters and Configuration details, and click Refresh to
refresh the database.
#------------------------------------------------------------#
# Hybrid Gateway Details
#------------------------------------------------------------#
FORWARDER_HOST=slc04wim.us.oracle.com
FORWARDER_HOST_CREDS=HOST:SYSMAN
FORWARDER_STAGE_LOCATION=/scratch/fwd
#------------------------------------------------------------#
# Destination Details
#------------------------------------------------------------#
TARGET_HOST_LIST=slc01aia.us.oracle.com
HOST_NORMAL_NAMED_CRED=SLC01AIA_CRED:SYSMAN
HOST_PRIV_NAMED_CRED=SLC01AIA_CRED_ROOT:SYSMAN
DEST_LOCATION=/scratch/backupStage
DEST_WORK_DIR=/tmp
#------------------------------------------------------------#
# Database Definition
#------------------------------------------------------------#
COMMON_DB_SID=hrcln
COMMON_DOMAIN_NAME=us.oracle.com
COMMON_GLOBAL_DB_NAME=hrcln.us.oracle.com
DATABASE_TYPE=dbTypeSI
DB_ADMIN_PASSWORD_SAME=true
DATABASE_PASSWORDS=welcome1
#------------------------------------------------------------#
# Software Configuration
#------------------------------------------------------------#
# All these are applicable only if a new Oracle Home needs to be provisioned.
CLONE_HOME=Y
OINSTALL_GROUP=dba
OSBACKUPDBA_GROUP=dba
OSDBA_GROUP=dba
OSDGDBA_GROUP=opc
OSKMDBA_GROUP=opc
OSOPER_GROUP=opc
ORACLE_BASE_LOC=/scratch/app/dbhome11204
ORACLE_HOME_LOC=/scratch/app/dbhome11204/product/11.2.0/dbhome_1
#------------------------------------------------------------#
# Database Configuration
#------------------------------------------------------------#
ARCHIVE_LOG_MODE=YES
DB_FILE_LOC=/scratch/app/dbhome11204/oradata
DB_TARGET_DISPLAY_NAME=HRCLN
DB_TARGET_NAME=hrcln.us.oracle.com
FLASH_REC_AREA=/scratch/app/dbhome11204/fast_recovery_area
FLASH_REC_AREA_SIZE=4182
LISTENER_PORT=2887
#Only needed if we are provisioning a new home
LISTENER_NAME=LIST_2887
#--------------------------------------------------------------#
# Data file transfer configuration parameters
#--------------------------------------------------------------#
# This will be moved to emctl metadata. For now it is a variable in the DP.
NUMBER_OF_THREADS=10
# This will be moved to emctl metadata. For now it is a variable in the DP.
# The chunk size in bytes. For now default is 250MB
MIN_CHUNK_SIZE=262144000
#Transfer Unit can be FILE or CHUNK
TRANSFER_UNIT=FILE
#Maximum speed in Kbps
BANDWIDTH_LIMIT=2048
#Time out in seconds for data transfer
TIME_OUT=1200
#----------------------------------------------------------------#
# Wallet Configuration
#----------------------------------------------------------------#
DEST_PATH_TO_WALLET=/u03/wallet
WALLET_PASSWORD=Welcome123
NEW_WALLET_PASSWORD=welcome1
REKEY_REQUIRED=true
15.6.1.1 Creating a Full Clone Pluggable Database Using the Clone Wizard
If you have the 12.1.0.8 Enterprise Manager for Oracle Database plug-in deployed in
your system, you can create a full clone of a PDB using the new Clone PDB Wizard.
To create a full clone PDB, follow these steps:
2. For View, select Search List. From the View menu, select Expand All.
3. Look for the source CDB (the CDB that the source PDB is a part of) in the list, then
click the name of the PDB that you want to clone.
4. From the Oracle Database menu, select Cloning, then select Create Full Clone.
Alternatively, in Step 3, you can right click the name of the PDB that you want to
clone, select Oracle Database, select Cloning, then select Create Full Clone.
5. On the Source and Destination: Create Full Clone Pluggable Database page, do the
following:
• Specify the SYSDBA credentials for the source CDB. You can choose to use the
preferred credentials, use a saved set of named credentials, or specify a new set
of credentials.
• To clone the PDB to a CDB different from the source CDB, select Clone the
Pluggable Database into a different Container Database, then specify the
destination CDB.
• In the Credentials section, specify the destination CDB host credentials. If you
chose to clone the PDB to a CDB different from the source CDB, specify the
SYSDBA credentials for the destination CDB. Also, if the destination CDB is
using Automatic Storage Management (ASM) to manage disk storage, you must
specify the ASM credentials.
6. If you do not need to specify any more details, click Clone. This submits the
deployment procedure to clone a PDB to a CDB that is deployed in a public cloud
setup.
To specify other configuration details, mask data, as well as schedule the cloning
process, click Advanced.
Follow the rest of the steps, if you have selected the Advanced option. The option
to Clone is available on each page.
7. On the Create Full Clone Pluggable Database: Source and Destination page, verify
the details specified, and then click Next.
8. On the Create Full Clone Pluggable Database: Configuration page, do the following:
• In the Database Files Location section, specify the storage location where the
datafiles of the PDB clone must be stored. If the destination CDB is using ASM
to manage disk storage, specify the disk group where the datafiles of the PDB
clone must be stored.
• To ensure that only the source PDB data model definition is cloned (and the
source PDB data is not cloned), select Exclude User Data.
• In the Advanced Configuration section, specify the storage limits for the
maximum size of the PDB clone, and the maximum size of a shared tablespace
within the PDB clone. By default, no limits are placed on the values for these
attributes.
• In the Miscellaneous section, select the logging option that you want to use for
the tablespaces created within the PDB clone.
Click Next.
9. On the Create Full Clone Pluggable Database: Post Processing page, do the
following:
• In the Data Masking section, specify the data masking definition that you want
to apply after cloning the PDB. Data masking masks sensitive data in a
database.
For information on how to create a data masking definition, see Oracle Data
Masking and Subsetting Guide. Note that you can apply a data masking definition
only if you have the Subset-Masking license pack.
• In the Custom Scripts section, for Pre Script and Post Script, specify the Oracle
Software Library components that contain the scripts that you want to run
before cloning, and after cloning the PDB respectively. Also, for SQL Script,
specify the SQL scripts that you want to run after cloning the PDB. For Run As
User, select the user account that you want to use to run the SQL scripts.
Click Next.
10. On the Create Full Clone Pluggable Database: Schedule page, specify an instance
name for the cloning deployment procedure. Also, specify the point in time when
you want the cloning procedure to begin.
In the Notification section, select the deployment procedure states for which you
want to receive e-mail notifications. For example, if you select Scheduled and
Succeeded for Status for Notification, you will receive e-mail notifications when
the cloning deployment procedure is scheduled, and when it succeeds.
Click Next.
11. On the Create Full Clone Pluggable Database: Review page, review all the details
you provided. If you want to edit certain details, click Back to navigate to the
required page.
Click Clone to submit the deployment procedure to create a full clone of the source
PDB.
Note:
If the destination PDB and the source PDB are in different CDBs, and both
CDBs are on Oracle Cloud, then ensure that the source PDB is in read-
write mode. This is necessary since a database link is created in the destination
CDB for cloning the PDB, and a temporary user is created in the source PDB
for using the database link. If there is an existing database link in the
destination CDB that connects to the source PDB, then use the parameter
EXISTING_DB_LINK_NAME to provide the database link name in the
properties file.
Note: The Hybrid Cloud Gateway server is not required as the database is
being cloned within the Oracle Cloud.
#------------------------------------------------------------#
# Hybrid Gateway Details
#------------------------------------------------------------#
FORWARDER_HOST=slc04wim.us.oracle.com
FORWARDER_HOST_CREDS=HOST:SYSMAN
FORWARDER_STAGE_LOCATION=/scratch/fwd
#------------------------------------------------------------#
# Destination Details
#------------------------------------------------------------#
TARGET_HOST_LIST=129.124.22.34
HOST_NORMAL_NAMED_CRED=OPC_CRED:SYSMAN
HOST_PRIV_NAMED_CRED=HOST_CRED_ROOT:SYSMAN
DEST_LOCATION=/u03/backup
DEST_WORK_DIR=/tmp
#------------------------------------------------------------#
# Database Definition
#------------------------------------------------------------#
COMMON_DB_SID=Dcln1
COMMON_DOMAIN_NAME=us.oracle.com
COMMON_GLOBAL_DB_NAME=Dcln1.us.oracle.com
DATABASE_TYPE=dbTypeSI
DB_ADMIN_PASSWORD_SAME=true
DATABASE_PASSWORDS=welcome1
#------------------------------------------------------------#
# Software Configuration
#------------------------------------------------------------#
# Set CLONE_HOME=N if only database cloning is required.
CLONE_HOME=Y
# All of the parameters below are required only if a new Oracle home needs to be provisioned.
OINSTALL_GROUP=dba
OSBACKUPDBA_GROUP=dba
OSDBA_GROUP=dba
OSDGDBA_GROUP=opc
OSKMDBA_GROUP=opc
OSOPER_GROUP=opc
ORACLE_BASE_LOC=/u03/home/app/gaurav
ORACLE_HOME_LOC=/u03/home/app/gaurav/product/11.2.0/dbhome_1
#------------------------------------------------------------#
# Database Configuration
#------------------------------------------------------------#
ARCHIVE_LOG_MODE=YES
DB_FILE_LOC=/u03/home/oradata
DB_TARGET_DISPLAY_NAME=Dcln1
DB_TARGET_NAME=Dcln1.us.oracle.com
FLASH_REC_AREA=/u03/home/app/home/fast_recovery_area
FLASH_REC_AREA_SIZE=4182
LISTENER_PORT=2887
#Only needed if we are provisioning a new home
LISTENER_NAME=LIST_2887
#--------------------------------------------------------------#
# Data file transfer configuration parameters
#--------------------------------------------------------------#
# This will be moved to emctl metadata. For now it is a variable in the deployment procedure.
NUMBER_OF_THREADS=10
# This will be moved to emctl metadata. For now it is a variable in the deployment procedure.
# The chunk size in bytes. The default is 250 MB.
MIN_CHUNK_SIZE=262144000
#Transfer Unit can be FILE or CHUNK
TRANSFER_UNIT=FILE
#Maximum speed in Kbps
BANDWIDTH_LIMIT=2048
#Time out in seconds for data transfer
TIME_OUT=1200
#----------------------------------------------------------------#
# Wallet Configuration
#----------------------------------------------------------------#
DEST_PATH_TO_WALLET=/u03/wallet
WALLET_PASSWORD=Welcome123
NEW_WALLET_PASSWORD=welcome1
REKEY_REQUIRED=true
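As a rough check of the read-write requirement called out in the note earlier in this section, you can query the open mode of the source PDB before submitting the clone. This is a minimal sketch run as SYSDBA on the source CDB; the PDB name HRPDB1 and the link name SRC_PDB_LINK are placeholders.
sqlplus -s / as sysdba <<'EOF'
-- Confirm that the source PDB is open in read-write mode.
SELECT name, open_mode FROM v$pdbs WHERE name = 'HRPDB1';
-- If it is not, open it read-write:
-- ALTER PLUGGABLE DATABASE HRPDB1 CLOSE IMMEDIATE;
-- ALTER PLUGGABLE DATABASE HRPDB1 OPEN READ WRITE;
EOF
# If a database link to the source PDB already exists in the destination CDB,
# name it in the properties file instead of letting the procedure create one:
# EXISTING_DB_LINK_NAME=SRC_PDB_LINK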
This chapter explains how you can create databases using Oracle Enterprise Manager
Cloud Control (Cloud Control). In particular, this chapter covers the following:
Step 3: Running the Deployment Procedure
Run the deployment procedure to successfully create the database.
• To create a single-instance database, see Procedure for Creating an Oracle Database.
• To create an Oracle RAC database, see Procedure for Creating an Oracle Real
Application Clusters Database.
• To create an Oracle RAC One Node database, see Procedure for Creating an Oracle
Real Application Clusters One Node Database.
Note:
You can also use the information provided in this section to create a single-
instance container database.
You can create a container database on a host only if Oracle Database 12c
Release 1 (12.1), or higher, is installed on the host. For more information on
container databases, see Oracle Database Administrator's Guide.
2. Ensure that you have created and stored a database template in the Software
Library or Oracle Home. For information about creating database templates, see
Creating Database Templates.
3. Oracle Home for the database you want to create must be installed and you need
to have credentials of the owner of the Oracle Home. If the Create Database
wizard is launched from the Provision Database deployment procedure wizards,
the Oracle Home need not be installed earlier. In such cases, the validations for
Oracle Home will be skipped during the procedure interview and will be
performed during execution of the deployment procedure.
4. The database plug-in that supports the corresponding database version should be
deployed on OMS and Agent. For information about deploying plug-ins, see
Oracle Enterprise Manager Cloud Control Administrator's Guide.
5. Ensure that you have sufficient space to create the database, and that you have
write permissions to the recovery file location.
6. If you are using a template from the Software Library for database creation, you
must have Write permission to the Staging Location.
7. If you are using Automatic Storage Management (ASM) as storage, ASM instances
and diskgroups must be configured prior to creating the database (a verification
sketch follows this list).
8. The Cloud Control user creating the database template must have
CONNECT_ANY_TARGET privilege in Cloud Control.
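The following is a minimal pre-flight sketch for prerequisites 5 and 7 above, assuming a Linux host; the paths are placeholders, and the ASM query must be run against the ASM instance (with the environment pointing at the Grid Infrastructure home).
# Check free space and write permission on the intended recovery file location.
df -h /u01/app/oracle/fast_recovery_area
touch /u01/app/oracle/fast_recovery_area/.write_test && rm /u01/app/oracle/fast_recovery_area/.write_test

# If ASM is used as storage, confirm that the required diskgroups are mounted.
sqlplus -s / as sysasm <<'EOF'
SELECT name, state, total_mb, free_mb FROM v$asm_diskgroup;
EOF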
1. From the Enterprise menu, select Provisioning and Patching, then select Database
Provisioning.
2. In the Database Provisioning page, select the Create Oracle Database Deployment
Procedure and click Launch. The Create Oracle Database wizard is launched.
3. In the Database Version and Type page, select the database Version and select
Oracle Single Instance Database.
In the Hosts section, specify hosts and Oracle Home to provision the database. You
can also specify Host Credentials and Common Oracle Home across all hosts. The
Host Credentials can be Named or Preferred Credentials.
Click the plus (+) icon to add the host. Select the host and specify Oracle Home.
Select Host Credentials or add new. Click the plus icon to add new credentials and
specify User Name, Password, and Run Privileges and save the credentials.
Click Next.
4. In the Database Template page, choose the database template location. The location
can be Software Library or Oracle Home. The template selected must be compatible
with the selected Oracle Home version.
If you have selected Software Library, click on the search icon and select the
template from the Software Library. Specify Temporary Storage Location on
Managed Host(s). This location must exist on all hosts where you want to create
the database.
Click Show Template Details to view details of the selected template. You can
view initialization parameters, tablespaces, data files, redo log groups, common
options, and other details of the template.
If you have selected Oracle Home, select the template from the Oracle Home. The
default location is ORACLE_HOME/assistants/dbca/templates.
Click on the Lock icon to lock the fields you have configured. These fields will not
be available for editing in the operator role.
Click Next.
Note:
• SID must be unique for a database on a host. This means, the SID assigned
to one database on a host cannot be reused on another database on the
same host, but can be reused on another database on a different host. For
example, if you have two databases (db1 and db2) on a host (host1), then
their SIDs need to be unique. However, if you install the third database on
another host (host2), then its SID can be db1 or db2.
• Global database name must be unique for a database on a host and also
unique for databases across different hosts. This means, the global database
name assigned to one database on a host can neither be reused on another
database on the same host nor on another database on a different host. For
example, if you have two databases (db1 and db2) on a host (host1), then
their global database names need to be unique. And if you install the third
database on another host (host2), the global database name of even this
database must be unique and different from all other names registered
with Cloud Control.
• The database credentials you specify here will be used on all the
destination hosts. However, after provisioning, if you want to change the
password for any database, then you must change it manually.
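As a rough way to check the SID uniqueness requirement described in the note above, you can list the instances already running on a destination host. This is a sketch for Linux hosts; the oratab location can vary by platform.
# SIDs of instances currently running on this host (processes named ora_pmon_<SID>).
ps -ef | grep '[o]ra_pmon_'
# Databases registered on this host.
grep -v '^#' /etc/oratab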
6. In the Storage Locations page, select the storage type, whether File System or
Automatic Storage Management (ASM).
If you want to use a file system, then select File System and specify the full path to
the location where the data file is present. For example,
%ORACLE_BASE%/oradata or /u01/product/db/oradata.
If you want to use ASM, then select Automatic Storage Management (ASM), and
click the torch icon to select the disk group name and specify ASMSNMP
password. The Disk Group Name List window appears and displays the disk
groups that are common on all the destination hosts.
In the Database Files Location section, specify the location where data files,
temporary files, redo logs, and control files will be stored.
• Select Use Database File Locations from Template to select defaults from the
template used.
• Select Use Common Location for All Database Files to specify a different
location.
If you select Use Oracle Managed Files (OMF), in the Multiplex Redo Logs and
Control Files section, you can specify locations to store duplicate copies of redo
logs and control files. Multiplexing provides greater fault-tolerance. You can
specify up to five locations.
In the Recovery Files Location section, select Use same storage type as database
files location to use the same storage type for recovery files as database files. Select
Use Flash Recovery Area and specify the location for recovery-related files and
Fast Recovery Area Size.
Select Enable Archiving to enable archive logging. Click Specify Archive Log
Locations and specify up to nine archive log locations. If the log location is not
specified, the logs will be saved in the default location. (A verification sketch
follows this step.)
Click on the Lock icon to lock the fields you have configured. These fields will not
be available for editing in the operator role.
Click Next.
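After the database is created, the storage and archiving selections made in the preceding step can be verified against the new database. This is a minimal sketch run as SYSDBA on the database host.
sqlplus -s / as sysdba <<'EOF'
SELECT log_mode FROM v$database;        -- ARCHIVELOG if Enable Archiving was selected
SHOW PARAMETER db_recovery_file_dest    -- Fast Recovery Area location and size
EOF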
7. In the Initialization Parameters page, in the Database Sizing section, specify the Block Size and number of Processes. If
you have selected a database template with datafiles in the Database Template
page, you cannot edit the Block Size.
Specify the Host CPU Count. The maximum CPU count that can be specified is
equal to the number of CPUs present on the host.
In the Character Sets section, select the default character set. The default character
set is based on the locale and operating system.
Select a national character set. The default is AL16UTF16.
In the Database Connection Mode section, select the dedicated server mode. For
shared server mode, specify the number of shared servers.
Click on the Lock icon to lock the fields you have configured. These fields will not
be available for editing in the operator role.
Click Next.
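Similarly, the sizing and character set values chosen in the preceding step can be confirmed after creation with a quick SYSDBA query; this is a sketch, not part of the wizard.
sqlplus -s / as sysdba <<'EOF'
SHOW PARAMETER processes
SELECT parameter, value
  FROM nls_database_parameters
 WHERE parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');
EOF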
8. In the Additional Configuration Options page, all the available listeners running from
the Oracle Home and Grid Infrastructure listeners are listed. You can either select a
listener or create a new one. You can select multiple listeners to register with the
database. To create a new listener, specify the Listener Name and Port. Select
database schemas and specify custom scripts, if any. Select custom scripts from the
host where you are creating the database or from Software Library. If you have
selected multiple hosts, you can specify scripts only from Software Library.
If you have selected a Structure Only database template in the Database Template
page, you can also view and edit database options.
Click on the Lock icon to lock the fields you have configured. These fields will not
be available for editing in the operator role.
Click Next.
9. In the Schedule page, specify a Deployment Instance name and a schedule for the
deployment. If you want to run the procedure immediately, then retain the default
selection, that is Immediately. If you want to run the procedure later, then select
Later and provide time zone, start date, and start time details. Click Next.
10. In the Review page, review the details you have provided for the deployment
procedure and if you are satisfied with the details, then click Finish to run the
deployment procedure according to the schedule set. If you want to modify the
details, then click Back repeatedly to reach the page where you want to make the
changes. Click Save to save the deployment procedure for future deployment.
Click Analyze to check for prerequisites and to ensure that all the necessary
requirements for provisioning are met.
11. In the Procedure Activity page, view the status of the execution of the job and steps
in the deployment procedure. Click the Status link for each step to view the details
of the execution of each step. You can click Debug to set the logging level to Debug
and click Stop to stop the procedure execution.
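The same procedure run can also be tracked from EM CLI. The following sketch assumes EM CLI is installed and logged in; the instance GUID is a placeholder taken from the get_instances output.
emcli get_instances
emcli get_instance_status -instance=<instance_guid>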
Note:
You can also use the information provided in this section to create an Oracle
Real Application Clusters container database.
You can create a container database on a host only if Oracle Database 12c
Release 1 (12.1), or higher, is installed on the host. For more information on
container databases, see Oracle Database Administrator's Guide.
2. Ensure that you have created and stored the database template in the Software
Library or Oracle Home. For information about creating database templates, see
Creating Database Templates.
3. Oracle Home for the database you want to create must be installed and you need
to have credentials of the owner of the Oracle Home. If the Create Database
wizard is launched from the Provision Database deployment procedure wizards,
the Oracle Home need not be installed earlier. In such cases, the validations for
Oracle Home will be skipped during the procedure interview and will be
performed during execution of the deployment procedure.
4. The database plug-in that supports the corresponding database version should be
deployed on OMS and Agent. For information about deploying plug-ins, see
Oracle Enterprise Manager Cloud Control Administrator's Guide.
5. Ensure that you have sufficient space to create the database, and that you have
write permissions to the recovery file location.
6. If you are using a template from the Software Library for database creation, you
must have Write permission to the Staging Location.
7. If you are creating an Oracle Real Application Clusters database, you must have Grid
Infrastructure installed and configured. If the Create Database wizard is launched
from the Provision Database deployment procedure wizards, Grid Infrastructure
need not be installed and configured. In such cases, the validations for Grid
Infrastructure will be skipped during the procedure interview and will be
performed during execution of the deployment procedure.
8. If you are using Automatic Storage Management (ASM) as storage, ASM instances
and diskgroups must be configured prior to creating the database.
9. The Cloud Control user creating the database template must have
CONNECT_ANY_TARGET privilege in Cloud Control.
1. From the Enterprise menu, select Provisioning and Patching, then select Database
Provisioning.
2. In the Database Provisioning page, select the Create Oracle Database Deployment
Procedure and click Launch. The Create Oracle Database wizard is launched.
3. In the Database Version and Type page, select the database Version and select
Oracle Real Application Clusters (Oracle RAC) Database.
In the Cluster section, select the Cluster and Oracle Home. Select a reference host to
perform validations to use as reference to create database on the cluster.
Select Cluster Credentials or add new. Click the plus icon to add new credentials
and specify User Name, Password, and Run Privileges and save the credentials.
Click Next.
4. In the Database Template page, choose the database template location. The location
can be Software Library or Oracle Home. The template selected must be compatible
with the selected Oracle Home version.
If you have selected Software Library, click on the search icon and select the
template from the Software Library. Specify Temporary Storage Location on
Managed Host(s). This location must be present on the reference node that you
selected earlier.
Click Show Template Details to view details of the selected template. You can
view initialization parameters, tablespaces, data files, redo log groups, common
options, and other details of the template.
If you have selected Oracle Home, select the template from the Oracle Home. The
default location is ORACLE_HOME/assistants/dbca/templates.
Click on the Lock icon to lock the fields you have configured. These fields will not
be available for editing in the operator role.
Click Next.
5. In the Identification and Placement page, select the type of Oracle RAC database,
whether Policy Managed or Admin Managed. Also, specify Global Database
Name and SID prefix.
For admin-managed database, select nodes on which you want to create the cluster
database. You must specify the node selected as the reference node in the Database
Version and Type page.
For policy-managed database, select the server pools to be used for creating the
database, from the list of existing server pools, or choose to create a new server
pool. Policy-managed databases can be created for database versions 11.2 and
higher. For database versions lower than 11.2, you will need to select nodes to
create the Oracle RAC database.
In the Database Consolidation section, select Create As Container Database if you
want to create a container database. By default, an empty container database is
created. If you want to add one or more pluggable databases to that container
database, then select Create a Container Database with one or more PDBs, and set
the number of PDBs.
If you choose to create multiple PDBs, then the unique name you enter here is used
as a prefix for all the PDBs, and the suffix is a numeric value that indicates
the count of PDBs.
For example, if you create five PDBs with the name accountsPDB, then the PDBs
are created with the names accountsPDB1, accountsPDB2, accountsPDB3,
accountsPDB4, and accountsPDB5.
Specify the Database Credentials for SYS, SYSTEM, and DBSNMP.
For database version 12.1 or higher, for Microsoft Windows operating systems, the
database services will be configured for the Microsoft Windows user specified
during Oracle home installation. This user will own all services run by Oracle
software. In the Oracle Home Windows User Credentials section, specify the host
credentials for the Microsoft Windows user account to configure database services.
Select existing named credentials or specify new credentials. To specify new
credentials, provide the user name and password. You can also save these
credentials and set them as preferred credentials.
Click on the Lock icon to lock the fields you have configured. These fields will not
be available for editing in the operator role.
Click Next.
6. In the Storage Locations page, select the storage type, whether File System or
Automatic Storage Management (ASM).
In the Database Files Location section, specify the location where data files,
temporary files, redo logs, and control files will be stored. These locations must be
on shared storage such as cluster file system location or ASM diskgroups.
• Select Use Database File Locations from Template to select defaults from the
template used.
• Select Use Common Location for All Database Files to specify a different
location.
If you select Use Oracle Managed Files (OMF), in the Multiplex Redo Logs and
Control Files section, you can specify locations to store duplicate copies of redo
logs and control files. Multiplexing provides greater fault-tolerance. You can
specify up to five locations.
In the Recovery Files Location section, select Use Flash Recovery Area and specify
the location for recovery-related files and Fast Recovery Area Size.
In the Archive Log Settings section, select Enable Archiving to enable archive
logging. In the Specify Archive Log Locations, you can specify up to nine archive
log locations. If the log location is not specified, the logs will be saved in the default
location.
Click on the Lock icon to lock the fields you have configured. These fields will not
be available for editing in the operator role.
Click Next.
7. In the Initialization Parameters page, in the Database Sizing section, specify the Block Size and number of Processes. If
you have selected a database template with datafiles in the Database Template
page, you cannot edit the Block Size.
Specify the Host CPU Count. The maximum CPU count that can be specified is
equal to the number of CPUs present on the host.
In the Character Sets section, select the default character set. The default character
set is based on the locale and operating system.
Select a national character set. The default is AL16UTF16.
In the Database Connection Mode section, select the dedicated server mode. For
shared server mode, specify the number of shared servers.
Click on the Lock icon to lock the fields you have configured. These fields will not
be available for editing in the operator role.
Click Next.
8. In the Additional Configuration Options page, select custom scripts from the
Software Library. If you have selected a Structure Only database template in the
Database Template page, you can also view and edit database options.
9. In the Schedule page, specify a Deployment Instance name and a schedule for the
deployment. If you want to run the procedure immediately, then retain the default
selection, that is Immediately. If you want to run the procedure later, then select
Later and provide time zone, start date, and start time details. Click Next.
10. In the Review page, review the details you have provided for the deployment
procedure and if you are satisfied with the details, then click Finish to run the
deployment procedure according to the schedule set. If you want to modify the
details, then click Back repeatedly to reach the page where you want to make the
changes. Click Save to save the deployment procedure for future deployment.
Click Analyze to check for prerequisites and to ensure that all the necessary
requirements for provisioning are met.
11. In the Procedure Activity page, view the status of the execution of the job and steps
in the deployment procedure. Click the Status link for each step to view the details
of the execution of each step. You can click Debug to set the logging level to Debug
and click Stop to stop the procedure execution.
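Once the Oracle RAC database is created, its configuration and instance status can be checked with srvctl from the new Oracle home. This is a sketch only; the database unique name racdb is a placeholder.
$ORACLE_HOME/bin/srvctl config database -d racdb
$ORACLE_HOME/bin/srvctl status database -d racdb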
Note:
You can also use the information provided in this section to create an Oracle
Real Application Clusters One Node container database.
You can create a container database on a host only if Oracle Database 12c
Release 1 (12.1), or higher, is installed on the host. For more information on
container databases, see Oracle Database Administrator's Guide.
This section covers the following:
• Prerequisites for Creating an Oracle Real Application Clusters One Node Database
• Procedure for Creating an Oracle Real Application Clusters One Node Database
2. Ensure that you have created and stored the database template in the Software
Library or Oracle Home. For information about creating database templates, see
Creating Database Templates.
3. Oracle Home for the database you want to create must be installed and you need
to have credentials of the owner of the Oracle Home. If the Create Database
wizard is launched from the Provision Database deployment procedure wizards,
the Oracle Home need not be installed earlier. In such cases, the validations for
Oracle Home will be skipped during the procedure interview and will be
performed during execution of the deployment procedure.
4. The database plug-in that supports the corresponding database version should be
deployed on OMS and Agent. For information about deploying plug-ins, see
Oracle Enterprise Manager Cloud Control Administrator's Guide.
5. Ensure that you have sufficient space to create the database, and that you have
write permissions to the recovery file location.
6. If you are using a template from the Software Library for database creation, you
must have Write permission to the Staging Location.
7. If you are creating Oracle Real Application Clusters database, you must have Grid
Infrastructure installed and configured. If the Create Database wizard is launched
from the Provision Database deployment procedure wizards, Grid Infrastructure
need not be installed and configured. In such cases, the validations for Grid
Infrastructure will be skipped during the procedure interview and will be
performed during execution of the deployment procedure.
8. If you are using Automatic Storage Management (ASM) as storage, ASM instances
and diskgroups must be configured prior to creating the database.
9. The Cloud Control user creating the database template must have
CONNECT_ANY_TARGET privilege in Cloud Control.
16.4.2 Procedure for Creating an Oracle Real Application Clusters One Node Database
To create an Oracle Real Application Clusters One Node database, follow these steps:
1. From the Enterprise menu, select Provisioning and Patching, then select Database
Provisioning.
2. In the Database Provisioning page, select the Create Oracle Database Deployment
Procedure and click Launch. The Create Oracle Database wizard is launched.
3. In the Database Version and Type page, select the database Version and select
Oracle RAC One Node Database.
In the Cluster section, select the cluster and Oracle Home. Select a reference host to
perform validations to use as reference to create database on the cluster.
Select Cluster Credentials or add new. Click the plus icon to add new credentials
and specify User Name, Password, and Run Privileges and save the credentials.
Click Next.
4. In the Database Template page, choose the database template location. The location
can be Software Library or Oracle Home. The template selected must be compatible
with the selected Oracle Home version.
If you have selected Software Library, click on the search icon and select the
template from the Software Library. Specify Temporary Storage Location on
Managed Host(s). This location must be present on the reference node that you
selected earlier.
Click Show Template Details to view details of the selected template. You can
view initialization parameters, tablespaces, data files, redo log groups, common
options, and other details of the template.
If you have selected Oracle Home, select the template from the Oracle Home. The
default location is ORACLE_HOME/assistants/dbca/templates.
Click on the Lock icon to lock the fields you have configured. These fields will not
be available for editing in the operator role.
Click Next.
5. In the Identification and Placement page, select nodes on which you want to create
the cluster database. Specify Global Database Name and SID prefix. Select the
type of Oracle RAC database, whether Policy Managed or Admin Managed.
Specify the Service Name.
Specify the Database Credentials for SYS, SYSTEM, and DBSNMP database
accounts.
For database version 12.1 or higher, for Microsoft Windows operating systems, the
database services will be configured for the Microsoft Windows user specified
during Oracle home installation. This user will own all services run by Oracle
software. In the Oracle Home Windows User Credentials section, specify the host
credentials for the Microsoft Windows user account to configure database services.
Select existing named credentials or specify new credentials. To specify new
credentials, provide the user name and password. You can also save these
credentials and set them as preferred credentials.
Click on the Lock icon to lock the fields you have configured. These fields will not
be available for editing in the operator role.
Click Next.
6. In the Storage Locations page, select the storage type, whether File System or
Automatic Storage Management (ASM).
In the Database Files Location section, specify the location where data files,
temporary files, redo logs, and control files will be stored.
• Select Use Database File Locations from Template to select defaults from the
template used.
• Select Use Common Location for All Database Files to specify a different
location.
If you select Use Oracle Managed Files (OMF), in the Multiplex Redo Logs and
Control Files section, you can specify locations to store duplicate copies of redo
logs and control files. Multiplexing provides greater fault-tolerance. You can
specify up to five locations.
In the Recovery Files Location section, select Use Flash Recovery Area and specify
the location for recovery-related files and Fast Recovery Area Size.
In the Archive Log Settings section, select Enable Archiving to enable archive
logging. In the Specify Archive Log Locations, you can specify up to nine archive
log locations. If the log location is not specified, the logs will be saved in the default
location.
Click on the Lock icon to lock the fields you have configured. These fields will not
be available for editing in the operator role.
Click Next.
7. In the Initialization Parameters page, in the Database Sizing section, specify the Block Size and number of Processes. If
you have selected a database template with datafiles in the Database Template
page, you cannot edit the Block Size.
Specify the Host CPU Count. The maximum CPU count that can be specified is
equal to the number of CPUs present on the host.
In the Character Sets section, select the default character set. The default character
set is based on the locale and operating system.
Select a national character set. The default is AL16UTF16.
In the Database Connection Mode section, select the dedicated server mode. For
shared server mode, specify the number of shared servers.
Click on the Lock icon to lock the fields you have configured. These fields will not
be available for editing in the operator role.
Click Next.
8. In the Additional Configuration Options page, select custom scripts from the
Software Library. If you have selected a Structure Only database template in the
Database Template page, you can also view and edit database options. Click on the
Lock icon to lock the field. Click Next.
9. In the Schedule page, specify a Deployment Instance name and a schedule for the
deployment. If you want to run the procedure immediately, then retain the default
selection, that is Immediately. If you want to run the procedure later, then select
Later and provide time zone, start date, and start time details. Click Next.
10. In the Review page, review the details you have provided for the deployment
procedure and if you are satisfied with the details, then click Finish to run the
deployment procedure according to the schedule set. If you want to modify the
details, then click Back repeatedly to reach the page where you want to make the
changes. Click Save to save the deployment procedure for future deployment.
Click Analyze to check for prerequisites and to ensure that all the necessary
requirements for provisioning are met.
11. In the Procedure Activity page, view the status of the execution of the job and steps
in the deployment procedure. Click the Status link for each step to view the details
of the execution of each step. You can click Debug to set the logging level to Debug
and click Stop to stop the procedure execution.
This chapter explains how you can manage pluggable databases (PDBs) using Oracle
Enterprise Manager Cloud Control (Cloud Control). In particular, this chapter covers
the following:
Step 3: Meeting the Prerequisites
Meet the prerequisites for the selected use case.
• To meet the prerequisites for creating a new PDB, see Prerequisites for Creating a
New Pluggable Database.
• To meet the prerequisites for plugging in an unplugged PDB, see Prerequisites for
Plugging In an Unplugged Pluggable Database.
• To meet the prerequisites for cloning a PDB, see Prerequisites for Cloning a
Pluggable Database.
• To meet the prerequisites for migrating a non-CDB as a PDB, see Prerequisites for
Migrating a Non-CDB as a Pluggable Database.
• To meet the prerequisites for unplugging and dropping a PDB, see Prerequisites for
Unplugging and Dropping a Pluggable Database.
• To meet the prerequisites for deleting PDBs, see Prerequisites for Deleting
Pluggable Databases.
Step 4: Following the Procedure
Follow the procedure for the selected use case.
• To create a new PDB, see Creating a New Pluggable Database.
• To plug in an unplugged PDB, see Plugging In an Unplugged Pluggable Database.
• To clone a PDB, see Cloning a Pluggable Database.
• To migrate a non-CDB as a PDB, see Migrating a Non-CDB as a Pluggable Database.
• To unplug and drop a PDB, see Unplugging and Dropping a Pluggable Database.
• To delete PDBs, see Deleting Pluggable Databases.
Using Cloud Control, you can manage the entire PDB lifecycle, which includes
creating new PDBs, plugging in unplugged PDBs, cloning existing PDBs, migrating
non-CDBs as PDBs, unplugging PDBs, and deleting PDBs.
Note:
To manage the PDB lifecycle using Cloud Control, you must have the 12.1.0.3
Enterprise Manager for Oracle Database plug-in, or a later version, deployed.
To delete PDBs using Cloud Control, you must have the 12.1.0.5 Enterprise
Manager for Oracle Database plug-in deployed.
For information on how to deploy a plug-in and upgrade an existing plug-in,
see Oracle Enterprise Manager Cloud Control Administrator's Guide.
Figure 17-1 provides a graphical overview of how you can manage the PDB lifecycle in
Cloud Control.
For more information about PDBs and CDBs, see the Managing Pluggable Databases
part in Oracle Database Administrator's Guide.
Note:
You can also provision PDBs using EM CLI. For information on how to do so,
see Provisioning Pluggable Databases.
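The EM CLI route mentioned in the note above generally follows the standard deployment procedure pattern sketched below. This is illustrative only; the procedure GUID is a placeholder, and the exact verbs and options depend on the plug-in versions deployed, so refer to the EM CLI reference for your release.
emcli login -username=SYSMAN
emcli get_procedures | grep -i pluggable
emcli describe_procedure_input -procedure=<procedure_guid> > pdb_input.properties
# Edit pdb_input.properties, then submit the procedure with it as input.
emcli submit_procedure -input_file=data:pdb_input.properties -procedure=<procedure_guid>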
• The CDB within which you want to create a PDB must exist, and must be a Cloud
Control target.
Note:
For information on how to create a new CDB, see Creating Databases.
• The CDB (within which you want to create a PDB) must not be in read-only,
upgrade, or downgrade mode.
• The target host user must be the owner of the Oracle home that the CDB (within
which you want to create the PDB) belongs to.
1. From the Enterprise menu, select Provisioning and Patching, then select Database
Provisioning. In the Database Provisioning page, in the Related Links section of
the left menu pane, click Provision Pluggable Databases.
Note:
You can also access the Provision Pluggable Database Console from the home
page of the CDB. To do so, in the CDB's home page, from the Oracle Database
menu, select Provisioning, then select Provision Pluggable Database.
Note:
Skip this step if you have accessed the Provision Pluggable Database Console
from the CDB's home page.
4. Click Launch.
Note:
You will be prompted to log in to the database if you have not already logged
in to it through Enterprise Manager. Make sure you log in using sysdba user
account credentials.
5. In the Creation Options page of the Create Pluggable Database Wizard, in the
Pluggable Database Creation Options section, select Create a New PDB.
6. In the Container Database Host Credentials section, select or specify the target CDB
Oracle home owner host credentials. If you have already registered the credentials
with Enterprise Manager, you can select Preferred or Named. Otherwise, you can
select New and enter the credentials.
7. Click Next.
8. In the Identification page, enter a unique name for the PDB you are creating.
If you prefer to create more than one PDB in this procedure, then select Create
Multiple Copies, and set the number of PDBs that you want to create. Note that
you can create a maximum of 252 PDBs within a CDB.
Note:
If you choose to create multiple PDBs, then the unique name you enter here is
used as a prefix for all PDBs, and the suffix is a numeric value that indicates
the count of PDBs.
For example, if you create five PDBs with the name accountsPDB, then the
PDBs are created with the names accountsPDB1, accountsPDB2,
accountsPDB3, accountsPDB4, and accountsPDB5.
9. In the PDB Administrator section, enter the credentials of the admin user account
you need to create for administering the PDB.
Note:
If you choose to create multiple PDBs, then an admin user account is created
for each PDB that you create, with the same set of the specified credentials.
11. In the Storage page, in the PDB Datafile Locations section, select the type of
location where you want to store the datafiles.
• If the target CDB (CDB in which you are creating the PDB) is enabled with Oracle
Managed Files and if you want to use the same, then select Use Oracle
Managed Files (OMF).
• If you want to enter a custom location, then select Use Common Location for
PDB Datafiles. Select the storage type and the location where the datafiles can
be stored.
12. In the Temporary Working Directory section, enter a location where the temporary
files generated during the PDB creation process can be stored.
13. In the Post-Creation Scripts section, select a custom SQL script you want to run as
part of this procedure, once the PDB is created.
15. In the Schedule page, enter a unique deployment procedure instance name and a
schedule for the deployment. The instance name you enter here helps you identify
and track the progress of this procedure on the Procedure Activity page.
If you want to run the procedure immediately, then retain the default selection,
that is, Immediately. Otherwise, select Later and provide time zone, start date, and
start time details.
You can optionally set a grace period for this schedule. A grace period is a period
of time that defines the maximum permissible delay when attempting to run a
scheduled procedure. If the procedure does not start within the grace period you
have set, then the procedure skips running. To set a grace period, select Grace
Period, and set the permissible delay time.
Figure 17-6 displays the Schedule page.
17. In the Review page, review the details you have provided for the deployment
procedure. If you are satisfied with the details, click Submit.
If you want to modify the details, then click Back repeatedly to reach the page
where you want to make the changes.
Figure 17-7 displays the Review page.
18. In the Procedure Activity page, view the status of the procedure. From the
Procedure Actions menu, you can select Debug to set the logging level to Debug,
and select Stop to stop the procedure execution.
When you create a new PDB, the Enterprise Manager job system creates a Create
Pluggable Database job. For information about viewing the details of this job, see
Viewing Create Pluggable Database Job Details.
• The target CDB (the CDB within which you want to plug in the unplugged PDB)
must exist, and must be a Cloud Control target.
Note:
For information on how to create a new CDB, see Creating Databases .
• The XML file that describes the unplugged PDB, and the other files associated with
the unplugged PDB, such as the datafiles and the wallet file, must exist and must
be readable.
• The target host user must be the owner of the Oracle home that the CDB (within
which you want to plug in the unplugged PDB) belongs to.
• The platforms of the source CDB host (the host on which the CDB that previously
contained the unplugged PDB is installed) and the target CDB host (the host on
which the target CDB is installed) must have the same endianness, and must have
compatible database options installed.
• The source CDB (the CDB that previously contained the unplugged PDB) and the
target CDB must have compatible character sets and national character sets. Every
character in the source CDB character set must be available in the target CDB
character set, and the code point value of every character available in the source
CDB character set must be the same in the target CDB character set.
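A quick way to compare these compatibility attributes is to run the same SYSDBA query against both the source and target CDBs, as in the following sketch.
sqlplus -s / as sysdba <<'EOF'
-- Platform and endianness of this database.
SELECT d.platform_name, tp.endian_format
  FROM v$database d
  JOIN v$transportable_platform tp ON d.platform_name = tp.platform_name;
-- Character set and national character set.
SELECT parameter, value
  FROM nls_database_parameters
 WHERE parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');
EOF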
1. From the Enterprise menu, select Provisioning and Patching, then select Database
Provisioning. In the Database Provisioning page, in the Related Links section of
the left menu pane, click Provision Pluggable Databases.
Note:
You can also access the Provision Pluggable Database Console from the home
page of the CDB. To do so, in the CDB's home page, from the Oracle Database
menu, select Provisioning, then select Provision Pluggable Database.
Note:
Skip this step if you have accessed the Provision Pluggable Database Console
from the CDB's home page.
4. Click Launch.
Note:
You will be prompted to log in to the database if you have not already logged
in to it through Enterprise Manager. Make sure you log in using sysdba user
account credentials.
5. In the Creation Options page of the Create Pluggable Database Wizard, in the
Pluggable Database Creation Options section, select Plug an Unplugged PDB.
6. In the Container Database Host Credentials section, select or specify the target CDB
Oracle home owner host credentials. If you have already registered the credentials
with Enterprise Manager, you can select Preferred or Named. Otherwise, you can
select New and enter the credentials.
7. Click Next.
8. In the Identification page, enter a unique name for the PDB you are plugging in.
Select Create As Clone to ensure that Oracle Database generates a unique PDB
DBID, GUID, and other identifiers expected for the new PDB.
If you prefer to create more than one PDB in this procedure, then select Create
Multiple Copies, and set the number of PDBs that you want to create. Note that
you can create a maximum of 252 PDBs within a CDB.
Note:
If you choose to create multiple PDBs, then the unique name you enter here is
used as a prefix for all PDBs, and the suffix is a numeric value that indicates
the count of PDBs.
For example, if you create five PDBs with the name accountsPDB, then the
PDBs are created with the names accountsPDB1, accountsPDB2,
accountsPDB3, accountsPDB4, and accountsPDB5.
9. In the PDB Administrator section, do one of the following to administer the PDB:
• If you prefer to use the admin user account that was created as part of the
source PDB that you are plugging in, then deselect Create PDB Administrator.
• If you want to create a brand new admin user account for the PDB you are
plugging in, then select Create PDB Administrator, and enter the desired
credentials.
Note:
If you choose to create multiple PDBs, then an admin user account is created
for each PDB that you create, with the same set of the specified credentials.
To lock and expire all the users in the newly created PDB (except the newly
created Admin), select Lock All Existing PDB Users.
Figure 17-11 displays the Identification page.
10. In the PDB Template Location section, select the location where the source PDB's
template is available, and then select the type of PDB template.
• If the PDB template is available on your CDB host (CDB to which you are plugging
in the unplugged PDB), then select Target Host File System.
– If the PDB template is a single archive file—a TAR file with datafiles and
metadata XML file included in it, then select Create the PDB from PDB
Archive, then select the PDB template.
– If the PDB template is a PDB file set—a separate DFB file with all the
datafiles and a separate metadata XML file, then select Create the PDB using
PDB File Set, then select the DFB and XML files.
– If you want to plug in a PDB using the PDB metadata XML file and the
existing datafiles, then select Create PDB using Metadata file.
• In the previous page, if you chose to create the PDB from a pluggable database
archive (single TAR file) or using a pluggable database file set (DFB file and an
XML file), then select the type of location where you want to store the target
datafiles for the PDB you are plugging in.
– If the target CDB (CDB to which you are plugging in the unplugged PDB) is
enabled with Oracle Managed Files and if you want to use the same, then
select Use Oracle Managed Files (OMF).
– If you want to enter a common custom location, then select Use Common
Location for PDB datafiles. Select the storage type and the location where
the datafiles can be stored.
• In the previous page, if you chose to create the PDB using a pluggable database
template (XML file only), then do the following:
In the PDB Datafile Locations section, validate the locations mapped for the
datafiles. If they are incorrect, correct the paths. Alternatively, if you have a
single location where the datafiles are all available, then enter the absolute path
in the Set Common Source File Mapping Location field, and click Set.
You can choose to store the target datafiles for the PDB you are plugging in, in
the same location as the source datafiles. However, if you want the target
datafiles to be stored in a different location, then select Copy Datafiles, and
select the type of location:
– If the target CDB (CDB to which you are plugging in the unplugged PDB) is
enabled with Oracle Managed Files and if you want to use the same, then
select Use Oracle Managed Files (OMF).
– If you want to enter a common custom location, then select Use Common
Location for Pluggable Database Files. Select the storage type and the
location where the datafiles can be stored.
– If you prefer to use different custom locations for different datafiles, then
select Customized Location, and enter the custom location paths.
13. In the Temporary Working Directory section, enter a location where the temporary
files generated during the PDB creation process can be stored.
14. In the Post-Creation Scripts section, select a custom SQL script you want to run as
part of this procedure, once the PDB is plugged in.
If the script is available in the Software Library, select Select from Software
Library, then select the component that contains the custom script.
Figure 17-13 displays the Storage page.
16. In the Schedule page, enter a unique deployment procedure instance name and a
schedule for the deployment. The instance name you enter here helps you identify
and track the progress of this procedure on the Procedure Activity page.
If you want to run the procedure immediately, then retain the default selection,
that is, Immediately. Otherwise, select Later and provide time zone, start date, and
start time details.
You can optionally set a grace period for this schedule. A grace period is a period
of time that defines the maximum permissible delay when attempting to run a
scheduled procedure. If the procedure does not start within the grace period you
have set, then the procedure skips running. To set a grace period, select Grace
Period, then set the permissible delay time.
Figure 17-14 displays the Schedule page.
18. In the Review page, review the details you have provided for the deployment
procedure. If you are satisfied with the details, click Submit.
If you want to modify the details, then click Back repeatedly to reach the page
where you want to make the changes.
Figure 17-15 displays the Review page.
19. In the Procedure Activity page, view the status of the procedure. From the
Procedure Actions menu, you can select Debug to set the logging level to Debug,
and select Stop to stop the procedure execution.
When you plug in an unplugged PDB, the Enterprise Manager job system creates a
Create Pluggable Database job. For information about viewing the details of this
job, see Viewing Create Pluggable Database Job Details.
• The source PDB (the PDB that you want to clone) must exist, and must be a Cloud
Control target.
Note:
For information on how to create a new PDB, see Creating a New Pluggable
Database Using Enterprise Manager.
• The target CDB (the CDB into which you want to plug in the cloned PDB) must
exist, and must be a Cloud Control target.
Note:
For information on how to create a new CDB, see Creating Databases.
• The target host user must be the owner of the Oracle home that the target CDB
belongs to.
To clone a PDB using the Snap Clone method, you must meet the following additional
prerequisites:
• The 12.1.0.5 Enterprise Manager for Oracle Database plug-in must be downloaded
and deployed. Also, the 12.1.0.3 SMF plug-in or higher must be downloaded and
deployed.
• The PDB that you want to clone must reside on a registered storage server. This
storage server must be synchronized.
For information on how to register a storage server and synchronize storage
servers, see Oracle Enterprise Manager Cloud Administration Guide.
• All the datafiles of the PDB that you want to clone must reside on the storage
volumes of the storage server, and not on the local disk.
• Metric collections must be run on the source CDB (the CDB containing the PDB
that you want to clone), the source CDB host, and the PDB that you want to clone.
• The Snap Clone feature must be enabled for the PDB that you want to clone.
For information on how to enable the Snap Clone feature, see Oracle Enterprise
Manager Cloud Administration Guide.
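The datafile location prerequisite above can be checked with a query against the source CDB; the PDB name SALESPDB is a placeholder. All paths returned should reside on the registered storage server volumes rather than on local disk.
sqlplus -s / as sysdba <<'EOF'
SELECT f.file_name
  FROM cdb_data_files f
  JOIN v$pdbs p ON f.con_id = p.con_id
 WHERE p.name = 'SALESPDB';
EOF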
Note:
If you use the Full Clone method to clone a PDB, you can clone the PDB only
to the source CDB (the CDB containing the PDB that you are cloning).
1. From the Enterprise menu, select Provisioning and Patching, then select Database
Provisioning. In the Database Provisioning page, in the Related Links section of
the left menu pane, click Provision Pluggable Databases.
Note:
You can also access the Provision Pluggable Database Console from the home
page of the CDB. To do so, in the CDB's home page, from the Oracle Database
menu, select Provisioning, then select Provision Pluggable Database.
2. In the Provision Pluggable Database Console, in the CDB section, select the CDB to
which you want to add the cloned PDB.
Note:
Skip this step if you have accessed the Provision Pluggable Database Console
from the CDB's home page.
4. Click Launch.
Note:
You will be prompted to log in to the database if you have not already logged
in to it through Enterprise Manager. Make sure you log in using sysdba user
account credentials.
5. In the Creation Options page of the Create Pluggable Database Wizard, in the PDB
Creation Options section, select Clone an Existing PDB.
To clone a PDB using the traditional method of cloning the PDB datafiles, select
Full Clone. Use this method if you want to clone a PDB for long term usage. This
method is ideal for load testing, when you plan to make significant data updates to
the PDB clone. However, this method takes a longer period of time, and a clone
that is created using this method occupies a fairly large amount of space, as
compared to the Snap Clone method.
To clone a PDB using the Storage Management Framework (SMF) Snap Clone
feature, select Snap Clone. Use this method if you want to clone a PDB for short
term purposes. This method is ideal for functional testing, as the cloning process is
quick, and a PDB clone that is created using this method occupies very little space.
However, this method is not suitable if you plan to make significant data updates
to the PDB clone.
For Source PDB, select the PDB that you want to clone.
Figure 17-18 displays the PDB Creation Options section of the Creation Options
page.
6. In the CDB Host Credentials section, select or specify the target CDB Oracle Home
owner host credentials. If you have already registered the credentials with
Enterprise Manager, you can select Preferred or Named. Otherwise, you can select
New and enter the credentials.
Figure 17-19 displays the CDB Host Credentials section of the Creation Options
page.
7. Click Next.
8. In the Identification page, enter a unique name for the PDB you are cloning.
If you prefer to create more than one PDB in this procedure, then select Create
Multiple Copies, and set the number of PDBs you want to create. Note that you
can create a maximum of 252 PDBs.
Note:
If you choose to create multiple PDBs, then the unique name you enter here is
used as a prefix for all the cloned PDBs, and the suffix is a numeric value that
indicates the count of PDBs.
For example, if you create five PDBs with the name accountsPDB, then the
PDBs are created with the names accountsPDB1, accountsPDB2,
accountsPDB3, accountsPDB4, and accountsPDB5.
9. In the PDB Administrator section, do one of the following to administer the PDB:
• If you prefer to use the admin user account that was created as part of the
source PDB that you are cloning, then deselect Create PDB Administrator.
• If you want to create a brand new admin user account for the PDB you are
cloning, then select Create PDB Administrator, and enter the desired
credentials.
Figure 17-20 displays the Identification page.
Note:
If you choose to create multiple PDBs, then an admin user account is created
for each PDB that you create, with the same set of the specified credentials.
10. In the Source CDB Login Credentials section, select or specify the login credentials
of the source CDB. If you have already registered the credentials with Enterprise
Manager, you can select Preferred or Named. Otherwise, you can select New and
enter the credentials.
The credentials are used to bring the source PDB to read-only mode before the
cloning operation begins, and to restore it to the original state after the cloning
operation ends.
If you chose the Snap Clone method (on the Source page of the Create Pluggable
Database Wizard) to clone the PDB, specify the host credentials for the source CDB.
Note:
If you are cloning the source PDB to the source CDB itself, then the Source
CDB Login Credentials section is not displayed, that is, you do not need to
provide the source CDB login credentials or the source CDB host credentials.
If you are cloning the source PDB to a CDB different from the source CDB,
and this CDB resides on the source CDB host, then you must provide the
source CDB login credentials. You do not need to provide the source CDB host
credentials.
If you are cloning the source PDB to a CDB different from the source CDB,
and this CDB resides on a host different from the source CDB host, then you
must provide the source CDB login credentials and the source CDB host
credentials.
If you chose the Full Clone method to clone the PDB, select the type of location
where you want to store the PDB datafiles in the following manner:
• If the source CDB is enabled with Oracle Managed Files and if you want to use
the same, then select Use Oracle Managed Files (OMF).
• If you want to enter a custom location, then select Use Common Location for
PDB Datafiles. Select the storage type and the location where the datafiles can
be stored.
Figure 17-21 displays the Storage page for the Full Clone method.
If you chose the Snap Clone method to clone the PDB, do the following:
• In the PDB Datafile Locations section, specify a value for Mount Point Prefix,
that is, the mount location for the storage volumes. You can choose to specify
the same prefix for all the volumes, or a different prefix for each volume. Also,
specify a value for Writable Space, that is, the space that you want to allocate
for writing the changes made to the PDB clone. You can choose to specify the
same writable space value for all the volumes, or a different value for each
volume.
• In the Privileged Host Credentials section, select or specify the credentials of the
root user. These credentials are used for mounting the cloned volumes on the
destination host.
If you have already registered the credentials with Enterprise Manager, you can
select Preferred or Named. Otherwise, you can select New and enter the
credentials.
Figure 17-22 displays the Storage page for the Snap Clone method.
13. In the Temporary Working Directory section, enter a location where the temporary
files generated during the PDB creation process can be stored.
14. In the Post-Creation Scripts section, select a custom SQL script you want to run as
part of this procedure, once the PDB is cloned.
16. In the Schedule page, enter a unique deployment procedure instance name and a
schedule for the deployment. The instance name you enter here helps you identify
and track the progress of this procedure on the Procedure Activity page.
If you want to run the procedure immediately, then retain the default selection,
that is, Immediately. Otherwise, select Later and provide time zone, start date, and
start time details.
You can optionally set a grace period for this schedule. A grace period is a period
of time that defines the maximum permissible delay when attempting to run a
scheduled procedure. If the procedure does not start within the grace period you
have set, then the procedure skips running. To set a grace period, select Grace
Period, and set the permissible delay time.
Figure 17-23 displays the Schedule page.
18. In the Review page, review the details you have provided for the deployment
procedure. If you are satisfied with the details, click Submit.
If you want to modify the details, then click Back repeatedly to reach the page
where you want to make the changes.
Figure 17-24 displays the Review page.
19. In the Procedure Activity page, view the status of the procedure. From the
Procedure Actions menu, you can select Debug to set the logging level to Debug,
and select Stop to stop the procedure execution.
When you clone a PDB, the Enterprise Manager job system creates a Create
Pluggable Database job. For information about viewing the details of this job, see
Viewing Create Pluggable Database Job Details.
Note:
The pluggable database cloning procedure contains a prerequisite step that
checks plug compatibility. This step is disabled by default. If disabled, the step
succeeds irrespective of any violations. If enabled, the step fails if any plug
compatibility violations are found.
To enable this step, run the following command:
emctl set property -name oracle.sysman.db.pdb.prereq_enabled -sysman_pwd <sysman password> -value true
To disable this step, run the following command:
emctl set property -name oracle.sysman.db.pdb.prereq_enabled -sysman_pwd <sysman password> -value false
• The target CDB (the CDB to which you want to migrate a non-CDB as a PDB) must
exist, and must be a Cloud Control target.
Note:
• The non-CDB that you want to migrate and the target CDB must be running in
ARCHIVELOG mode.
For information on setting the archiving mode of a database, see Oracle Database
Administrator's Guide.
• The database administrators of the database you want to migrate, and the target
CDB must have SYSDBA privileges.
• The target host user must be the owner of the Oracle home that the target CDB
belongs to.
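If you need to confirm or change the archiving mode before starting the migration, the
following SQL*Plus sketch shows the standard approach (run as SYSDBA on the non-CDB
and on the target CDB; this is generic Oracle syntax, not a step the wizard performs for you):

SELECT log_mode FROM v$database;     -- expect ARCHIVELOG

-- If the database is in NOARCHIVELOG mode, enable archiving:
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;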
1. From the Enterprise menu, select Provisioning and Patching, then select Database
Provisioning. In the Database Provisioning page, in the Related Links section of
the left menu pane, click Provision Pluggable Databases.
Note:
You can also access the Provision Pluggable Database Console from the home
page of the CDB. To do so, in the CDB's home page, from the Oracle Database
menu, select Provisioning, then select Provision Pluggable Database.
2. In the Provision Pluggable Database Console, in the CDB section, select the CDB to
which you want to migrate a non-CDB as a PDB.
Note:
Skip this step if you have accessed the Provision Pluggable Database Console
from the CDB's home page.
3. In the PDB Operations section of the Provision Pluggable Database page, select the
Migrate Existing Databases option and click Launch.
4. On the Database Login page, select the Credential Name from the drop-down list.
Click Login.
5. On the Migrate Non-CDBs launch page, select a data migration method, that is,
Export/Import or Plug as a PDB. If you select Plug as a PDB, ensure that the non-
CDB that you want to migrate is open, and is in read-only mode.
Enter the appropriate credentials for the Oracle Home Credential section.
Click Next.
Figure 17-27 displays the Method page.
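If you selected Plug as a PDB, the following SQL*Plus sketch shows one way to prepare the
non-CDB and check plug compatibility manually (names and paths are assumed; run as
SYSDBA). The wizard handles these checks, so this is shown only for reference:

-- On the non-CDB to be migrated:
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE OPEN READ ONLY;
EXEC DBMS_PDB.DESCRIBE(pdb_descr_file => '/tmp/noncdb.xml');

-- On the target CDB root, check compatibility of the generated manifest:
SET SERVEROUTPUT ON
DECLARE
  compatible BOOLEAN;
BEGIN
  compatible := DBMS_PDB.CHECK_PLUG_COMPATIBILITY(pdb_descr_file => '/tmp/noncdb.xml');
  DBMS_OUTPUT.PUT_LINE(CASE WHEN compatible THEN 'Compatible'
                            ELSE 'Check PDB_PLUG_IN_VIOLATIONS' END);
END;
/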
6. On the Database page, select a Non-CDB to be migrated. You can select more than
one. Click Add. In the database pane, provide the appropriate credential,
properties, export, import, and datafile location information. Click Next.
Figure 17-28 displays how to select the non-CDB you want to migrate.
Figure 17-29 displays how to specify the database and database host credentials.
Figure 17-30 displays how to specify the stage location for migration.
Figure 17-31 displays how to specify the PDB administrator details and datafile
location.
7. On the Schedule page, enter the appropriate job and scheduling details. Click Next.
8. On the Review page, review all details entered. If there are no changes required,
click Submit.
• Both the source and destination Container Databases should be in Archive Log
mode and have Local Undo configured.
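A quick way to confirm these prerequisites on both CDBs is a check such as the following
sketch (run as SYSDBA; generic SQL, not an Enterprise Manager step):

SELECT log_mode FROM v$database;                  -- expect ARCHIVELOG
SELECT property_value
  FROM database_properties
 WHERE property_name = 'LOCAL_UNDO_ENABLED';      -- expect TRUE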
2. For View, select Search List. From the View menu, select Expand All.
3. Look for the source PDB and right-click the name of the PDB that you want to
relocate.
a. Specify the SYSDBA credentials for the source CDB. You can choose to use the
preferred credentials, use a saved set of named credentials, or specify a new set
of credentials.
b. In the Container Database Destination section, select the destination CDB and
specify a name, and a display name for the PDB.
c. In the Credentials section, specify the SYSDBA credentials of the CDB and the
CDB host credentials.
6. Click Relocate.
Note: To edit other parameters of the PDB you can click Advanced and edit
the parameters which are similar to the PDB clone wizard. For more
information, see Creating a Full Clone Pluggable Database Using the Clone
Wizard.
Note:
As an alternative to using the method described in this section, you can use
EM CLI to unplug and drop PDBs. For more information, see Unplugging and
Dropping a Pluggable Database.
• The PDB that you want to unplug and drop must have been opened at least once.
• The target host user must be the owner of the Oracle home that the CDB
(containing the PDB that you want to unplug and drop) belongs to.
1. From the Enterprise menu, select Provisioning and Patching, then select Database
Provisioning. In the Database Provisioning page, in the Related Links section of
the left menu pane, click Provision Pluggable Databases.
Note:
You can also access the Provision Pluggable Database Console from the home
page of the CDB. To do so, in the CDB's home page, from the Oracle Database
menu, select Provisioning, then select Provision Pluggable Database.
2. In the Provision Pluggable Database Console, in the CDB section, select the CDB
from which you want to unplug the PDBs.
Note:
Skip this step if you have accessed the Provision Pluggable Database Console
from the CDB's home page.
4. Click Launch.
Note:
You will be prompted to log in to the database if you have not already logged
in to it through Enterprise Manager. Make sure you log in using sysdba user
account credentials.
5. In the Select PDB page of the Unplug Pluggable Database Wizard, in the Select
Pluggable Database section, select the PDB you want to unplug. Note that once the
PDB is unplugged, it is stopped and dropped.
6. In the CDB Host Credentials section, select or specify the target CDB Oracle Home
owner host credentials. If you have already registered the credentials with
Enterprise Manager, you can select Preferred or Named. Otherwise, you can select
New and enter the credentials.
7. In the Destination page, select the type of PDB template you want to generate for
unplugging the PDB, and the location where you want to store it. The PDB
template consists of all datafiles as well as the metadata XML file.
• If you want to store the PDB template on your CDB host (CDB from where you are
unplugging the PDB), then select Target Host File System.
– If you want to generate a single archive file—a TAR file with the datafiles
and the metadata XML file included in it, then select Generate PDB Archive.
Select a location where the archive file can be created.
Note:
Oracle recommends that you select this option if the source and target CDBs use a
file system for storage. This option is not supported for PDBs that use ASM as
storage.
– If you want to generate an archive file set—a separate DFB file with all the
datafiles and a separate metadata XML file, then select Generate PDB File
Set. Select the locations where the DFB and XML files can be created.
Note:
Oracle recommends that you select this option if the source and target CDBs use
ASM for storage.
– If you want to generate only a metadata XML file, leaving the datafiles in
their current location, then select Generate PDB Metadata File. Select a
location where the metadata XML file can be created.
• If you want to store the PDB template in Oracle Software Library (Software
Library), then select Software Library.
– If you want to generate a single archive file—a TAR file with the datafiles
and the metadata XML file included in it, then select Generate PDB Archive.
If you want to generate an archive file set—a separate DFB file with all the
datafiles and a separate metadata XML file, then select Generate PDB File
Set. If you want to generate only a metadata XML file, leaving the datafiles
in their current location, then select Generate PDB Metadata File.
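For reference, unplugging a PDB ultimately comes down to SQL similar to the following
sketch (PDB name and path are assumed). The wizard additionally packages the datafiles
into the archive or file set you choose here:

ALTER PLUGGABLE DATABASE salespdb CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE salespdb UNPLUG INTO '/u01/app/oracle/pdb_templates/salespdb.xml';
DROP PLUGGABLE DATABASE salespdb KEEP DATAFILES;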
8. In the Schedule page, enter a unique deployment procedure instance name and a
schedule for the deployment. The instance name you enter here helps you identify
and track the progress of this procedure on the Procedure Activity page.
If you want to run the procedure immediately, then retain the default selection,
that is, Immediately. Otherwise, select Later and provide time zone, start date, and
start time details.
You can optionally set a grace period for this schedule. A grace period is a period
of time that defines the maximum permissible delay when attempting to run a
scheduled procedure. If the procedure does not start within the grace period you
have set, then the procedure skips running. To set a grace period, select Grace
Period, and set the permissible delay time.
Figure 17-38 displays the Schedule page.
9. Click Next.
10. In the Review page, review the details you have provided for the deployment
procedure. If you are satisfied with the details, click Submit.
If you want to modify the details, then click Back repeatedly to reach the page
where you want to make the changes.
Figure 17-39 displays the Review page.
11. In the Procedure Activity page, view the status of the procedure. From the
Procedure Actions menu, you can select Debug to set the logging level to Debug,
and select Stop to stop the procedure execution.
When you unplug and drop a PDB, the Enterprise Manager job system creates an
Unplug Pluggable Database job. For information about viewing the details of this
job, see Viewing Unplug Pluggable Database Job Details.
• The 12.1.0.5 Enterprise Manager for Oracle Database plug-in must be downloaded
and deployed.
For information on how to download and deploy a plug-in, see Oracle Enterprise
Manager Cloud Control Administrator's Guide.
• The PDBs that you want to delete must have been opened at least once.
• The target host user must be the owner of the Oracle home that the CDB
(containing the PDBs that you want to delete) belongs to.
1. From the Enterprise menu, select Provisioning and Patching, then select Database
Provisioning. In the Database Provisioning page, in the Related Links section of
the left menu pane, click Provision Pluggable Databases.
Note:
You can also access the Provision Pluggable Database Console from the home
page of the CDB. To do so, in the CDB's home page, from the Oracle Database
menu, select Provisioning, then select Provision Pluggable Database.
2. In the Provision Pluggable Database Console, in the CDB section, select the CDB
from which you want to delete the PDBs.
Note:
Skip this step if you have accessed the Provision Pluggable Database Console
from the CDB's home page.
4. Click Launch.
Note:
You will be prompted to log in to the database if you have not already logged
in to it through Enterprise Manager. Make sure you log in using sysdba user
account credentials.
5. In the Select PDBs page of the Delete Pluggable Databases Wizard, click Add.
Select the PDBs that you want to delete, then click Select.
Note:
If you choose to delete a PDB that was created using the Snap Clone method,
the PDB mount points on the CDB host are cleaned up. The corresponding
storage volumes on the storage server are also deleted. This action is
irreversible.
6. In the CDB Host Credentials section, select or specify the target CDB Oracle Home
owner host credentials. If you have already registered the credentials with
Enterprise Manager, you can select Preferred or Named. Otherwise, you can select
New and enter the credentials.
If one (or more) of the PDBs that you selected for deletion is the Snap Clone of
another PDB, you must also provide the privileged host credentials, that is, the
credentials of the root user. If you have already registered the credentials with
Enterprise Manager, you can select Preferred or Named. Otherwise, you can select
New and enter the credentials.
Figure 17-41 displays the CDB Host Credentials section of the Select PDBs page.
7. In the Schedule page, enter a unique deployment procedure instance name and a
schedule for the deployment. The instance name you enter here helps you identify
and track the progress of this procedure on the Procedure Activity page.
If you want to run the procedure immediately, then retain the default selection,
that is, Immediately. Otherwise, select Later and provide time zone, start date, and
start time details.
You can optionally set a grace period for this schedule. A grace period is a period
of time that defines the maximum permissible delay when attempting to run a
scheduled procedure. If the procedure does not start within the grace period you
have set, then the procedure skips running. To set a grace period, select Grace
Period, and set the permissible delay time.
Figure 17-42 displays the Schedule page.
8. Click Next.
9. In the Review page, review the details you have provided for the deployment
procedure. If you are satisfied with the details, click Submit.
If you want to modify the details, then click Back repeatedly to reach the page
where you want to make the changes.
Figure 17-43 displays the Review page.
10. In the Procedure Activity page, view the status of the procedure. From the
Procedure Actions menu, you can select Debug to set the logging level to Debug,
and select Stop to stop the procedure execution.
When you delete a PDB, the Enterprise Manager job system creates a Delete
Pluggable Database job. For information about viewing the details of this job, see
Viewing Delete Pluggable Database Job Details.
1. From the Enterprise menu, select Provisioning and Patching, then select
Procedure Activity.
2. Click the deployment procedure that contains the required create PDB job.
3. Expand the deployment procedure steps. Select the PDB creation job.
In the Prepare Configuration Data step, the system prepares for PDB creation.
In the Check Prerequisites step, the system checks the prerequisites for PDB
creation.
In the Verify and Prepare step, the system runs tasks prior to PDB creation.
In the Perform Configuration step, the PDB creation is performed. For details of the
performed tasks and their status, refer to the remote log files present on the host.
In the Post Configuration step, Enterprise Manager is updated with the newly
created PDB details, and the custom scripts are run.
6. To view a visual representation of the create PDB job progress, click Results.
In the Configuration Progress section, you can view the completion percentage of
the job, and a list of pending, currently running, and completed job steps. You can
also view errors, warnings, and logs. The tail of the log for the currently running
job step is displayed.
1. From the Enterprise menu, select Provisioning and Patching, then select
Procedure Activity.
2. Click the deployment procedure that contains the required unplug PDB job.
3. Expand the deployment procedure steps. Select the unplug PDB job.
In the Prepare Configuration Data step, the system prepares for unplugging a PDB.
In the Check Prerequisites step, the system checks the prerequisites for unplugging
a PDB.
In the Verify and Prepare step, the system runs tasks prior to unplugging the PDB.
In the Perform Configuration step, the PDB unplugging is performed. For details of
the performed tasks and their status, refer to the remote log files present on the
host.
In the Post Configuration step, Enterprise Manager is updated with the unplugged
PDB details.
6. To view a visual representation of the unplug PDB job progress, click Results.
In the Configuration Progress section, you can view the completion percentage of
the job, and a list of pending, currently running, and completed job steps. You can
also view errors, warnings, and logs. The tail of the log for the currently running
job step is displayed.
1. From the Enterprise menu, select Provisioning and Patching, then select
Procedure Activity.
2. Click the deployment procedure that contains the required delete PDB job.
3. Expand the deployment procedure steps. Select the delete PDB job.
In the Prepare Configuration Data step, the system prepares for deleting the PDBs.
In the Verify and Prepare step, the system runs tasks prior to deleting the PDBs.
In the Perform Configuration step, the PDB deletion is performed. For details of the
performed tasks and their status, refer to the remote log files present on the host.
In the Post Configuration step, Enterprise Manager is updated with the deleted
PDB details.
6. To view a visual representation of the delete PDB job progress, click Results.
In the Configuration Progress section, you can view the completion percentage of
the job, and a list of pending, currently running, and completed job steps. You can
also view errors, warnings, and logs. The tail of the log for the currently running
job step is displayed.
1. From the current PDB, select any PDB scope page (such as, Manage Advanced
Queues).
In the upper-left corner of the window, the name of the PDB will update to display
a context switcher as a drop-down menu.
2. Click the context switcher to display the drop-down menu. This menu shows the
PDBs most recently used.
4. Click the context switcher to display the drop-down menu. If the menu does not
show the PDBs you want, then select All Containers.
5. A Switch Container window will pop up to display all available PDBs for the
monitored target.
6. The page will update to show data for the selected PDB.
1. From the Oracle Database menu, select Control, then select Open/Close
Pluggable Database.
2. From the Open/Close PDB page, select a PDB from the list.
3. Click the Action drop-down menu and select the appropriate actions. Your
choices are Open, Open Read Only, and Close.
5. Once state change completes, the Open/Close PDB page will update to show the
new state of the PDB.
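The state changes made on this page correspond to SQL such as the following sketch, which
you could also run from the CDB root as SYSDBA (PDB name assumed):

ALTER PLUGGABLE DATABASE salespdb OPEN;
ALTER PLUGGABLE DATABASE salespdb OPEN READ ONLY FORCE;
ALTER PLUGGABLE DATABASE salespdb CLOSE IMMEDIATE;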
To change the state of a PDB in a Cluster/RAC to Open or Close, follow these steps:
1. From the Oracle Database menu, select Control, then Open/Close Pluggable
Database.
2. From the Open/Close PDB page, select a PDB from the list. The RAC instances are
shown along with the PDB's current state on those instances.
3. Once you select a PDB, a panel appears below the list to show the state of the PDBs
on the different RAC instances. The open and close options apply to the PDBs on
the RAC instance's panel. You can open or close a PDB on any number of available
RAC instances.
4. In the Confirmation dialog window, click Yes to complete the change. A Processing
dialog window appears to show you the progress of your choice.
5. Once state change completes, the Open/Close PDB page will update to show the
new state of the PDB.
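In a cluster configuration, the equivalent SQL can target specific instances, as in this sketch
(PDB and instance names assumed):

ALTER PLUGGABLE DATABASE salespdb OPEN INSTANCES = ('racdb1', 'racdb2');
ALTER PLUGGABLE DATABASE salespdb CLOSE IMMEDIATE INSTANCES = ALL;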
• Upgrading Databases
18
Upgrading Databases
This chapter explains how you can upgrade Oracle databases using Oracle Enterprise
Manager Cloud Control (Cloud Control). In particular, this chapter covers the
following:
• Getting Started
• Supported Releases
Step 3: Understanding the Deployment Procedure
Understand the Deployment Procedure you need to select, and its scope and coverage.
To learn about the Deployment Procedure offered for upgrading databases, see About
Deployment Procedures.
Step 5: Running the Deployment Procedure
Run the Deployment Procedure to successfully upgrade the Oracle database.
• To learn about the procedure for upgrading database clusters, follow the steps
explained in Upgrading Oracle Cluster Database Using Deployment Procedure.
• To learn about the procedure for upgrading database clusterware, follow the steps
explained in Upgrading Oracle Clusterware Using Deployment Procedure.
• To learn about the procedure for upgrading a database instance using a deployment
procedure, follow the steps explained in Upgrading Oracle Database Instance Using
Deployment Procedure.
• To learn about the procedure to upgrade a database instance using the database
upgrade wizard, see Performing the Upgrade Procedure.
For upgrading one database instance, the following releases are supported:
• Upgrade of Clusterware
Note:
For upgrading one Oracle database instance at a time or any Oracle RAC database
instance, you must access the Oracle Database Upgrade wizard from the Home page
of the database that you want to upgrade.
Note:
• The following table lists the minimum source version and destination version
required for each database target type:
Target Type                         Minimum Source Version                     Destination Version
Cluster Database                    10.2.0.4.0 and higher in 10gR2 series      11.2 and 12c series
                                    11.1.0.7.0 and higher in 10gR2 series
                                    11.2.0.1 and higher in 11gR2
Single Instance High Availability   11.2.0.1 and higher in 11gR2               11.2 and 12c series
Oracle RAC One Node                 10.2.0.4.0 and higher in 10gR2 series      11.2 and 12c series
                                    11.1.0.7.0 and higher in 10gR2 series
                                    11.2.0.1 and higher in 11gR2
Automatic Storage Management        Applicable only for 10.2.0.4 and higher    11.2 and 12c series
                                    11.1.0.7 and higher (stand alone type)
• The database user must have SYSDBA privileges or the OS user must be part of the
DBA group.
• The database to be upgraded, and all its node instances (in case of a cluster
database) must be up and running.
• Ensure that you have created designer and operator roles with the required
privileges. The designer must have the EM_PROVISIONING_DESIGNER role and the
operator must have the EM_PROVISIONING_OPERATOR role.
– Edit access to Software Library to manage Software Library entities such as gold
images.
– View access to Software Library to view Software Library entities such as gold
images.
1. In the designer role, from the Enterprise menu, select Provisioning and Patching,
then select Database Provisioning.
3. On the Targets page, in the Select Targets section, select Cluster Database as the
Target Type.
Note:
The Cluster Database option enables you to upgrade the Cluster Databases
and optionally the underlying Automatic Storage Management target and the
managing Clusterware as part of the same process.
4. Select the version that you want the database to be upgraded to from the To list.
6. In the Select Targets for Upgrade dialog box, click on the Cluster search icon to
search for the cluster database.
In the Search and Select: Cluster Target dialog box, select the cluster database that
you want to upgrade, and then click Select.
Note:
If the database you want to upgrade does not appear in the table, verify that
the database is available and there are no metrics collection errors for the
target.
Note:
If the cluster database that you selected is a parent cluster, then all the child
nodes such as HAS, ASM, and cluster databases are automatically selected.
Click Save.
10. This completes the designer's tasks. Now, log in with the Operator credentials.
11. From the Enterprise Menu, select Provisioning and Patching, and then Database
Provisioning.
12. On the Database Provisioning page, all operations except for Launch are disabled
for the Operator. The upgrade database deployment procedure that was saved is
selected. Click Launch.
14. On the Software Details page, in the Grid Infrastructure section, click the Search
icon to select a Grid Infrastructure software from the Software Library to create a
new Oracle Home.
Note:
From the dialog box that appears, select a Grid Infrastructure software, and then
click Select.
a. In the Oracle Database Software section, click the search icon to select the
Oracle Database Software from the Software Library for creating a new
Oracle Home.
The software may be stored as a gold image or a zipped up installation media
in the Software Library. Ensure that the zipped up Oracle Home contains all
critical patches for the new Oracle Home.
Note:
To ensure that the gold image you create includes all the recommended
patches, follow these steps:
ii. Select the following types of patches to be applied to the gold image:
• Patches on top of the release that maintain the fixes in the base release
iii. Apply the patches to an Oracle Home of the release to upgrade to using
Patch Plans or manually.
iv. Create a gold image from the Oracle Home and use it for the upgrade.
c. In the User Groups section, specify values for the following user groups, or
verify that the specified values are correct:
Note:
Click on the Lock icon to lock the fields that you do not want to be editable.
These fields will not be available for editing in the operator role.
e. Click Next.
16. In the Credentials page, specify the Operating System credentials for Grid
Infrastructure and Database, Privileged Operating System credentials (run as
root), and Database credentials. If you choose to specify Preferred Credentials,
select either Normal Host or Privileged Host credentials. For Named Credentials,
you can specify the same or different credentials for Oracle homes.
If you have not set Named Credentials, click the plus sign (+) in the Credentials
section. In the Add New Database Credentials popup, specify the User name,
Password, Role, and specify the Save Details. Select Run As and specify root.
Click OK.
Click on the Lock icon to lock the fields that you do not want to be editable. These
fields will not be available for editing in the operator role.
Click Next.
17. In the Configuration Details page, the Backup and Restore settings option in the
Restore Strategy is selected by default.
In the Restore Strategy section, you can select:
• Create RMAN backup Before Upgrade and enter the backup location.
Note:
If you perform a backup using RMAN, you can restore the entire database
irrespective of whether the datafiles are on a file system or in ASM. Backup using
RMAN is available only when upgrading to 12c.
• Use Existing RMAN Backup where the latest RMAN backup will be used.
Note:
The source database version should be 11g or higher. The target home version
should be 12c.
• Full Backup to back up the database and restore the configuration and oratab
settings if the upgrade fails. The backup location is, by default, $ORACLE_BASE/
admin/$GDB/backup where '$GDB' is the global database name.
• Ignore if you have your own backup options and do not want Cloud Control
to perform a backup of your database.
Note:
Click on the Lock icon to lock the fields that you do not want to be editable.
These fields will not be available for editing in the operator role.
• If you are upgrading to database version 11.2.0.2 or higher, you will be able to
set the time zone upgrade option. You can select Upgrade Time Zone Version
and Timestamp with Time Zone data.
• If archive logging has been turned on for the database, then you have the
option to disable Archiving and flashback logging for each database.
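If you are deciding whether to select the time zone upgrade option, you can check the
current time zone file version with a query such as the following sketch (run as SYSDBA):

SELECT version FROM v$timezone_file;
SELECT property_value
  FROM database_properties
 WHERE property_name = 'DST_PRIMARY_TT_VERSION';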
19. In the Pre and Post Upgrade Script section, specify custom scripts to run on the
database before or after upgrade. Select a component from the software library
that contains the SQL script to be executed before upgrading the database for each
of the following scripts:
20. The Custom Properties page will be displayed only for user customized
deployment procedures that require custom parameters. Specify custom
properties for the upgrade, if any. Click Next.
21. In the Schedule page, specify a Deployment Instance Name and schedule for the
upgrade job. If you want to run the procedure immediately, then retain the
default selection, that is, Immediately. If you want to run the job later, then select
Later and provide time zone, start date, and start time details. Specify a Grace
Period, a duration after the start period for Cloud Control to attempt to start the
upgrade if it cannot start at the specified time.
In the Breakpoint section, you can set the breakpoint by selecting Set Breakpoint,
and then selecting the step after which you want the breakpoint, from the Set
Breakpoint After list.
Setting these breakpoints appropriately can reduce the actual downtime of the
upgrade process.
For example, you can set the breakpoint before the Upgrade Cluster Database step,
because downtime is applicable only during the actual upgrade process and not
during the software installation.
You can also change the breakpoint and resume the execution from the Database
Upgrade Instance Tracking page.
In the Set Notification Details section, select the events for which you want to be
notified.
Click Next.
22. In the Review page, verify that the details you have selected are correctly
displayed. If you want to modify the details, then click Back repeatedly to reach
the page where you want to make the changes.
To save the deployment procedure for future use, click Save.
To submit the deployment procedure, click Submit. When the deployment
procedure is submitted for execution, the database upgrade instance tracking
page is displayed. You can also navigate to this page by clicking the procedure
instance in the Job Activity page.
23. Submit the configured Database Upgrade procedure after providing values for the
editable fields. After you have submitted the procedure, the summary for the
running procedure is displayed.
24. In the Upgrade Oracle Database procedure execution page, in the Current Run
tab, view the upgrade job steps and status.
25. If you have specified a breakpoint, the procedure execution will pause at the step
specified. Once the execution pauses, you can do either of the following using the
Run to step list.
Note:
The rollback instance can be tracked in the same page as that of the upgrade
execution run. This can be used where there is a fatal failure during the GI
rollback.
• Retry: to retry the failed step. Use this option to retry a prerequisite step that
failed in the previous run, after the fixups have been performed outside the
procedure flow.
26. If a step has status Failed, click View Log. The Job Run for the step is listed. Click
Show in the Details column to view the entire log. Fix the error and click Retry.
27. After the procedure execution is completed, click on the Database menu and
verify that the newly upgraded databases appear as Cloud Control targets.
1. In the designer role, from the Enterprise menu, select Provisioning and Patching,
then select Database Provisioning.
3. On the Targets page, in the Select Targets section, select Clusterware as the Target
Type.
Note:
The Clusterware option enables you to upgrade the Clusterware and
optionally the underlying Automatic Storage Management target as part of
the same process.
4. Select the version that you want the database to be upgraded to from the To list.
6. In the Select Targets for Upgrade dialog box, click on the Cluster search icon to
search for the cluster.
In the Search and Select: Cluster Target dialog box, select the cluster that you want
to upgrade, and then click Select.
Note:
If the database you want to upgrade does not appear in the table, verify that
the database is available and there are no metrics collection errors for the
target.
Note:
If the cluster database that you selected is a parent cluster, then all the child
nodes such as HAS, ASM, and cluster databases are automatically selected.
Click OK.
Click Save.
10. This completes the designer's tasks. Now, log in with the Operator credentials.
11. From the Enterprise Menu, select Provisioning and Patching, and then Database
Provisioning.
12. On the Database Provisioning page, all operations except for Launch are disabled
for the Operator. The upgrade database deployment procedure that was saved is
selected. Click Launch.
14. On the Software Details page, in the Grid Infrastructure section, click the Search
icon to select a Grid Infrastructure software from the Software Library to create a
new Oracle Home.
Note:
The software may be stored as a gold image or a zipped up installation media
in the Software Library.
From the dialog box that appears, select a Grid Infrastructure software, and then
click Select.
15. In the Working Directory section, specify the working directory on each target
host. A working directory on the host target is required to stage files during Grid
Infrastructure or Database installation or upgrade. Ensure that all the hosts have
read/write permission on the specified working directory.
Note:
Click on the Lock icon to lock the fields that you do not want to be editable.
These fields will not be available for editing in the operator role.
16. In the Credentials page, specify the Operating System credentials for Grid
Infrastructure and the Privileged Operating System credentials (run as root). If
you choose to specify Preferred Credentials, select either Normal Host or
Privileged Host credentials. For Named Credentials, you can specify the same or
different credentials for Oracle homes.
If you have not set Named Credentials, click the plus sign (+) in the Credentials
section. In the Add New Database Credentials popup, specify the User name,
Password, Role, and specify the Save Details. Select Run As and specify root.
Click OK.
Click on the Lock icon to lock the fields that you do not want to be editable. These
fields will not be available for editing in the operator role.
Click Next.
17. The Custom Properties page will be displayed only for user customized
deployment procedures that require custom parameters. Specify custom
properties for the upgrade, if any. Click Next.
18. In the Schedule page, specify a Deployment Instance Name and schedule for the
upgrade job. If you want to run the procedure immediately, then retain the
default selection, that is, Immediately. If you want to run the job later, then select
Later and provide time zone, start date, and start time details. Specify a Grace
Period, a duration after the start period for Cloud Control to attempt to start the
upgrade if it cannot start at the specified time.
In the Breakpoint section, you can set the breakpoint by selecting Set Breakpoint,
and then selecting the step after which you want the breakpoint, from the Set
Breakpoint After list.
Setting these breakpoints appropriately can reduce the actual downtime of the
upgrade process.
For example, you can set the breakpoint before the Upgrade Clusterware step,
because downtime is applicable only during the actual upgrade process and not
during the software installation. The Upgrade Clusterware step of the deployment
procedure is the only step that requires downtime.
You can also change the breakpoint and resume the execution from the Database
Upgrade Instance Tracking page.
In the Set Notification Details section, select the events for which you want to be
notified.
Click Next.
19. In the Review page, verify that the details you have selected are correctly
displayed. If you want to modify the details, then click Back repeatedly to reach
the page where you want to make the changes.
To save the deployment procedure for future use, click Save.
To submit the deployment procedure, click Submit. When the deployment
procedure is submitted for execution, the database upgrade instance tracking
page is displayed. You can also navigate to this page by clicking the procedure
instance in the Job Activity page.
20. Submit the configured Database Upgrade procedure after providing values for the
editable fields. After you have submitted the procedure, the summary for the
running procedure is displayed.
21. In the Upgrade Oracle Database procedure execution page, in the Current Run
tab, view the upgrade job steps and status.
22. If you have specified a breakpoint, the procedure execution will pause at the step
specified. Once the execution pauses, you can do either of the following using the
Run to step list.
Note:
The rollback instance can be tracked in the same page as that of the upgrade
execution run. This can be used where there is a fatal failure during the GI
rollback.
• Retry: to retry the failed step. Use this option to retry a prerequisite step that
failed in the previous run, after the fixups have been performed outside the
procedure flow.
23. If a step has status Failed, click View Log. The Job Run for the step is listed. Click
Show in the Details column to view the entire log. Fix the error and click Retry.
24. After the procedure execution is completed, click on the Targets menu and select
All Targets to navigate to the All Targets page and verify that the newly
upgraded databases appear as Cloud Control targets.
1. In the designer role, from the Enterprise menu, select Provisioning and Patching,
then select Database Provisioning.
3. On the Targets page, in the Select Targets section, select Single Instance Database
as the Target Type.
Note:
The Single Instance Database option enables you to upgrade the Single
Instance Database and optionally the underlying Automatic Storage
Management target and the managing High Availability Service as part of the
same process.
4. Select the version that you want the database to be upgraded to from the To list.
6. In the Select Targets for Upgrade dialog box, select the Source Version and the
Platform of the database instance that you want to upgrade. Click Search.
In the Search and Select: Cluster Target dialog box, select the database instance
target that you want to upgrade.
Note:
If the database you want to upgrade does not appear in the table, verify that
the database is available and there are no metrics collection errors for the
target.
You can choose to upgrade the listeners by selecting Upgrade Listeners. The
selected listeners in the source Oracle Home will be migrated and restarted in the
destination Oracle Home of the database.
Click OK.
Click Save.
10. This completes the designer's tasks. Now, log in with the Operator credentials.
11. From the Enterprise Menu, select Provisioning and Patching, and then Database
Provisioning.
12. On the Database Provisioning page, all operations except for Launch are disabled
for the Operator. The upgrade database deployment procedure that was saved is
selected. Click Launch.
14. In the Oracle Database section, select Upgrade Database Instance only. Specify
the Database Oracle Home location.
In the Working Directory section, specify the working directory on each target
host. A working directory on the host target is required to stage files during Grid
Infrastructure or Database installation or upgrade. Ensure that all the hosts have
read/write permission on the specified working directory.
Note:
Click on the Lock icon to lock the fields that you do not want to be editable.
These fields will not be available for editing in the operator role.
Click Next.
15. In the Credentials page, specify the Operating System credentials for Database,
Privileged Operating System credentials (run as root), and Database credentials. If
you choose to specify Preferred Credentials, select either Normal Host or
Privileged Host credentials. For Named Credentials, you can specify the same or
different credentials for Oracle homes.
If you have not set Named Credentials, click the plus sign (+) in the Credentials
section. In the Add New Database Credentials popup, specify the User name,
Password, Role, and specify the Save Details. Select Run As and specify root.
Click OK.
Click on the Lock icon to lock the fields that you do not want to be editable. These
fields will not be available for editing in the operator role.
Click Next.
16. In the Configuration Details page, the Backup and Restore settings option in the
Restore Strategy is selected by default.
In the Restore Strategy section, you can select:
• Create RMAN backup Before Upgrade and enter the backup location.
Note:
If you perform a backup using RMAN, you can restore the entire database
irrespective of whether the datafiles are on a file system or in ASM. Backup using
RMAN is available only when upgrading to 12c.
• Use Existing RMAN Backup where the latest RMAN backup will be used.
Note:
The source database version should be 11g or higher. The target home version
should be 12c.
• Full Backup to back up the database and restore the configuration and oratab
settings if the upgrade fails. The backup location is, by default, $ORACLE_BASE/
admin/$GDB/backup where '$GDB' is the global database name.
• Ignore if you have your own backup options and do not want Cloud Control
to perform a backup of your database.
Note:
Click on the Lock icon to lock the fields that you do not want to be editable.
These fields will not be available for editing in the operator role.
• If you are upgrading to database version 11.2.0.2 or higher, you will be able to
set the time zone upgrade option. You can select Upgrade Time Zone Version
and Timestamp with Time Zone data.
• If archive logging has been turned on for the database, then you have the
option to disable Archiving and flashback logging for each database.
18. In the Pre and Post Upgrade Script section, specify custom scripts to run on the
database before or after upgrade. Select a component from the software library
that contains the SQL script to be executed before upgrading the database for each
of the following scripts:
19. The Custom Properties page will be displayed only for user customized
deployment procedures that require custom parameters. Specify custom
properties for the upgrade, if any. Click Next.
20. In the Schedule page, specify a Deployment Instance Name and schedule for the
upgrade job. If you want to run the procedure immediately, then retain the
default selection, that is, Immediately. If you want to run the job later, then select
Later and provide time zone, start date, and start time details. Specify a Grace
Period, a duration after the start period for Cloud Control to attempt to start the
upgrade if it cannot start at the specified time.
In the Breakpoint section, you can set the breakpoint by selecting Set Breakpoint,
and then selecting the step after which you want the breakpoint, from the Set
Breakpoint After list.
Setting these breakpoints appropriately can reduce the actual downtime of the
upgrade process.
For example, you can set the breakpoint before the Upgrade Database Instance
step, because downtime is applicable only during the actual upgrade process and
not during the software installation.
You can also change the breakpoint and resume the execution from the Database
Upgrade Instance Tracking page.
In the Set Notification Details section, select the events for which you want to be
notified.
Click Next.
21. In the Review page, verify that the details you have selected are correctly
displayed. If you want to modify the details, then click Back repeatedly to reach
the page where you want to make the changes.
To save the deployment procedure for future use, click Save.
To submit the deployment procedure, click Submit. When the deployment
procedure is submitted for execution, the database upgrade instance tracking
page is displayed. You can also navigate to this page by clicking the procedure
instance in the Job Activity page.
22. Submit the configured Database Upgrade procedure after providing values for the
editable fields. After you have submitted the procedure, the summary for the
running procedure is displayed.
23. In the Upgrade Oracle Database procedure execution page, in the Current Run
tab, view the upgrade job steps and status.
24. If you have specified a breakpoint, the procedure execution will pause at the step
specified. Once the execution pauses, you can do either of the following using the
Run to step list.
You can also perform the following actions: Stop, Resume, Suspend, Cleanup,
Resubmit, and Skip Step. Click Resubmit to resubmit the current instance for
execution.
Note:
The rollback instance can be tracked in the same page as that of the upgrade
execution run. This can be used where there is a fatal failure during the GI
rollback.
• Retry: to retry the failed step. Use this option to retry a prerequisite step that
failed in the previous run, after the fixups have been performed outside the
procedure flow.
25. If a step has status Failed, click View Log. The Job Run for the step is listed. Click
Show in the Details column to view the entire log. Fix the error and click Retry.
26. After the procedure execution is completed, click on the Database menu and
verify that the newly upgraded databases appear as Cloud Control targets.
Note:
Because mass upgrade of Oracle RAC databases is not currently supported,
Oracle recommends that you use the wizard described in this section to upgrade
one Oracle RAC database instance at a time.
• The database version must be 10.2.0.4 or above for upgrade to 11g or 12c.
• For Oracle Real Application Clusters databases, if you select an Oracle RAC
database instance and start the database upgrade process, it will upgrade the entire
cluster database.
• If OS authentication is not turned on, SYSDBA credentials are required for the
upgrade.
1. From the Enterprise menu, select Targets, then select Database. In the Databases
page, select the source database to be upgraded.
2. In the Database Instance home page, from the Oracle Database menu, select
Provisioning, then select Upgrade Database.
Note:
For single instance databases, you will see another menu option to upgrade the
Oracle home and the instance. If you select that option, you are taken to the
wizard described in Upgrading an Oracle Database or Oracle RAC Database
Instance Using the Database Upgrade Wizard; however, only the database
instance from which you navigated to the wizard will be pre-selected for
upgrade.
3. Specify the Database user and password credentials and click Continue. The
Database Upgrade wizard is launched.
4. In the Oracle Home page, select the New Oracle Home where you want the new
Oracle Home for the upgrade to be installed, based on the version of the database
to be upgraded.
If the Oracle Home is not a discovered target in Cloud Control, either discover the
Oracle Home using the Cloud Control Discovery feature and then initiate the
upgrade process or type the path of the Oracle Home manually. For Oracle Real
Application Clusters databases, specify the Oracle RAC home.
For information about discovering targets in Cloud Control, see Discovering
Hosts and Software Deployments .
When specifying the new Oracle Home, you must have DBA permissions on both
the source and destination Oracle Homes and these Oracle Homes must reside on
the same host.
5. In the Oracle Home Credentials section, specify the host credentials. Host
credentials must have DBA privileges and can be Preferred Credentials, or
Named Credentials, or, you can select Enter Credentials and specify the user
name and password and save it. Click More Details to view details about the host
credentials you have selected. The specified Oracle Home credentials should have
privileges on both the source database Oracle Home and the new Oracle Home.
Click Test to verify that the credentials have the required privileges. If you are
using Named Credentials, ensure that they are user name and password
credentials; otherwise, they are not supported in Cloud Control.
6. Click Next. The errors and warnings from the prerequisite checks are displayed.
Fix all errors and warnings and revalidate. Click OK to proceed to next step.
7. In the Options page, the Diagnostics destination field is displayed only for
database upgrade from version 10.2.x to 11.1.0.6. The diagnostic destination is
defaulted to Oracle Base and all diagnostic and trace files are stored at this
location.
If you are upgrading from version 11.1.0.7 or higher to 11.2.x, the diagnostic
destination field does not appear.
9. If you are upgrading to database version 11.2.0.2 or higher, you will be able to set
the time zone upgrade option. You can select to Upgrade Time Zone Version and
Timestamp with Time Zone data.
You can select Gather statistics if you want to gather optimizer statistics on fixed
tables. This helps the optimizer to generate good execution plans. It is
recommended that you gather fixed object statistics before you upgrade the
database.
You can also select Make user tablespaces readonly. This makes the user
tablespaces read-only during the database upgrade; they revert to read-write once
the upgrade is done. (A sketch of the equivalent SQL follows the note below.)
Note:
The Gather statistics and Make user tablespaces readonly options are
available only when upgrading to 12c.
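When these options are selected, the upgrade performs operations comparable to the
following sketch (tablespace name assumed; run as SYSDBA), shown here only for reference:

EXEC DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;

ALTER TABLESPACE users READ ONLY;
-- ...after the upgrade completes:
ALTER TABLESPACE users READ WRITE;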
Note:
If the database has ASM configured with it, the Backup section will not be
displayed.
• Perform full backup before upgrade and restore upon failure, to restore the
configuration and oratab settings if the upgrade fails. Specify a file system
location for Backup Location. The credentials that you have specified earlier
must have read/write permissions to this location.
Note:
If you perform a backup using RMAN, you can restore the entire database
irrespective of whether the datafiles are on a file system or in ASM. Backup using
RMAN is available only when upgrading to 12c.
Note:
The source database version should be 11g or higher. The target home version
should be 12c.
Note:
Starting with Oracle Database 12c Release 1 (12.1), Oracle Database supports
the use of Oracle Home User, specified at the time of Oracle Database
installation. This Oracle Home user is the owner of Oracle services that run
from Oracle home and cannot be changed post installation. Different Oracle
homes on a system can share the same Oracle Home User or use different
Oracle Home User names.
An Oracle Home User is different from an Oracle Installation User. An Oracle
Installation User is the user who needs administrative privileges to install
Oracle products. An Oracle Home User is a low-privileged Windows User
Account specified during installation that runs most of the Windows services
required by Oracle for the Oracle home. For more information about Oracle
Home User, see Oracle Database Platform Guide.
For database version 12.1 or higher, for Microsoft Windows operating systems,
the database services will be configured for the Microsoft Windows user specified
during Oracle home installation. This user will own all services run by Oracle
software.
In the Oracle Home Windows User Credentials section, specify the host
credentials for the Microsoft Windows user account to configure database
services. Select existing named credentials or specify new credentials. To specify
new credentials, provide the user name and password. You can also save these
credentials and set them as preferred credentials.
In the Advanced section, specify the custom SQL scripts you want to run before
and after the database upgrade. Copy these scripts to the host file system and
select them. If your custom scripts are stored as a component in the Software
Library, select Select these scripts from the Software Library and then browse
the Software Library for these scripts. During execution, the main file specified in
the Software Library component will be run. So, if you want to run a set of scripts,
organize them in the main script file and specify the main script in the Software
Library component.
11. Select Recompile invalid objects at the end of upgrade to recompile database
objects that are invalid for the new database version. Setting a higher Degree of
Parallelism speeds up the recompilation. The default value is the CPU count of
the host. (A sketch of the equivalent SQL follows.)
Click Next.
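Post-upgrade recompilation is conventionally done with utlrp.sql or the UTL_RECOMP
package; the following sketch shows the manual equivalent (degree of parallelism assumed),
for comparison with what the wizard automates:

EXEC UTL_RECOMP.RECOMP_PARALLEL(4);
-- or run the standard script: @?/rdbms/admin/utlrp.sql

SELECT COUNT(*) FROM dba_objects WHERE status = 'INVALID';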
12. The Listeners page is displayed only for single instance database upgrade. In the
Listeners page, listeners that are registered with Oracle Restart and those that are
running in the new Oracle Home are displayed. You can create a new listener or
migrate your existing listener manually and then upgrade the database. If you
create a new listener, the listener will then be a Cloud Control target and will be
monitored. If you migrate your existing listener, the upgrade job will register the
database with the listener.
If you have listeners running in the source Oracle Home and need to maintain the
same listener port after upgrade, migrate your listener manually to the new
Oracle Home first.
For Oracle Real Application Clusters database, the upgraded database will be
registered with the Clusterware listener automatically and the Listeners page will
not appear.
To add a new listener, specify the Name and Port Number.
Click Next.
13. In the Schedule page, edit or retain the Job Name and Description for the
database upgrade. If you want to run the job immediately, then retain the default
selection, that is, Immediately. If you want to run the job later, then select Later
and provide time zone, start date, and start time details. Specify a Grace Period, a
duration after the start period for Cloud Control to attempt to start the upgrade
job if it cannot start at the specified time.
Select Blackout the database target in Enterprise Manager during upgrade if you
do not want the database to be monitored and alerts to be raised by Cloud Control
during the upgrade.
Click Next.
14. In the Review page, ensure that you review all warnings generated in the
Validation Summary. Click the Validation Summary icon to view validation
results and severity and action taken for any warnings. Verify that the details you
have provided for the upgrade job appear correctly and then click Submit Job to
run the job according to the schedule set. If you want to modify the details, then
click Back repeatedly to reach the page where you want to make the changes.
Click Save to save the deployment procedure for future deployment.
15. After you have submitted the job, the Database Upgrade Job page with the
summary for the running job will be displayed. In the Jobs page, view the job
summary and the list of steps and view their status.
Note:
If the database upgrade fails for any reason and you have not specified a
backup option in the Database Upgrade wizard, restore the database manually
and perform the upgrade again. If the database upgrade succeeded, but post
upgrade custom scripts did not run, then the database will not be restored
since upgrade has succeeded.topicid:db_upgrade_em_check_job_fail
16. After the upgrade job is completed successfully, click on the Targets menu and
select All Targets to navigate to the All Targets page and verify that the newly
upgraded database is displayed as a Cloud Control target with the correct
database version.
• Enterprise Manager Cloud Control supports rolling upgrade for database versions
11.2 and higher. An Active Data Guard license is required on the primary database
and the physical standby database selected to perform the rolling upgrade.
• The software for the upgrade version must already be installed and the new Oracle
Home must already exist prior to performing the rolling upgrade. The Enterprise
Manager Cloud Control rolling upgrade process does not include these steps.
• The databases that are in the Data Guard configuration must be discovered as
Enterprise Manager targets.
• Guaranteed restore points are created at appropriate points during the upgrade
process, and are dropped once the rolling upgrade process has been successfully
completed.
• Unless explicitly specified, all required role changes are automatically performed
during the rolling upgrade process to ensure that the original roles of the databases
are restored after the successful completion of the rolling upgrade. An option is
available to pause before the first switchover, and if there are no bystander standby
databases in the configuration, there is an option to stop after the first switchover if
you specify not to return the databases to their original roles.
• Changes to unsupported data type objects are temporarily suspended for the
transient logical standby database for the duration of the rolling upgrade process.
• In order to proceed with a rolling upgrade to a particular database version, all the
bystander logical standby databases in the configuration must first be upgraded to
that version.
18.5.3 Submitting a Rolling Upgrade Procedure for a Primary Database With One
Physical Standby Database
To perform a rolling upgrade of a primary database in an existing Data Guard
configuration with one physical standby database, follow these steps:
1. From the home page of the primary database, choose Provisioning from the
Oracle Database or Cluster Database menu, then select Upgrade Database.
If you are submitting a Rolling Upgrade procedure for databases with a primary
cluster database, the menu option shows Cluster Database, otherwise it appears as
Oracle Database.
If you have not yet provided the credentials, the credentials page displays where
you can enter them. Enterprise Manager then displays the first page of the
Database Rolling Upgrade wizard, the Select Standby page, which displays the list
of all the standby databases associated with the primary database that can be
selected for the rolling upgrade process. The Select Standby page also displays an
overall summary of the Data Guard configuration such as the name of the
primary database, Protection Mode, Fast-start Failover status, Data Guard Status,
and the availability status of all the physical standby databases and their
respective host names.
Note: Oracle recommends you perform a full backup of the primary database
before you advance in the wizard. An Active Data Guard license is required
on the primary database and the physical standby database selected to
perform the rolling upgrade.
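The full backup recommended in this note can be taken with whatever method you normally use; one minimal RMAN sketch (backup destination and retention settings are site-specific):
# Minimal sketch: full backup of the primary database before the rolling upgrade.
rman target / <<'EOF'
BACKUP DATABASE PLUS ARCHIVELOG;
EOF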
2. On the Select Standby page, select the physical standby database to use for the
rolling upgrade from the physical standby database table, then click Next. All
other physical standby databases will be upgraded individually after the rolling
upgrade is complete.
Enterprise Manager performs Data Guard-related pre-requisite checks on both the
primary database and the selected physical standby database. It then displays the
Primary Oracle Home page which shows the current Oracle Home location and
version, and allows you to enter the new Oracle Home location and to provide the
relevant host credentials.
3. On the Primary Oracle Home page, enter the New Oracle Home location for the
primary database. Click Next.
Enterprise Manager displays the Standby Oracle Home page of the Database
Rolling Upgrade wizard.
4. On the Standby Oracle Home page, enter the new Oracle Home location for the
selected physical standby database. Click Next.
Enterprise Manager performs the pre-requisite checks and storage checks for the
physical standby database and then opens the Options page.
If you are submitting a Rolling Upgrade Procedure for a primary database with
multiple physical standby databases, in this step Enterprise Manager performs the
pre-requisite checks and storage checks for the physical standby database and
then opens the Bystander Physical Standby Databases page. On the Bystander
Physical Standbys page, you view the list of bystander physical standby databases
in the configuration and specify the destination Oracle Homes for them.
Enterprise Manager then displays the Options page.
5. On the Options page, review or change the default options, then click Next.
The Options page displays options that allow you to pause before the first
switchover, stop after the first switchover (this option displays only when there
are no bystander standby databases), recompile invalid objects, and specify either
a common diagnostic destination location for all databases or individual locations
for each of them.
Enterprise Manager displays the Listeners page of the Database Rolling Upgrade
wizard.
6. On the Listeners page, choose the default Common Listener listed or select from a
list of common listeners on the primary and the standby database hosts. The
upgraded databases will be registered with the selected listener. You can also
specify a new listener name and port to create and register the respective
databases with the new listener.
The Listeners page lists the common listeners on the primary and the standby
database hosts (common listeners are those with the same name and port), along
with an option to create a new common listener.
Click Next. Enterprise Manager displays the Procedure Details page.
7. On the Procedure Details page, use the default procedure Name or enter the name
and Description of the procedure to be submitted.
Click Next. Enterprise Manager displays the Review page of the Database Rolling
Upgrade wizard.
8. On the Review page, review the summary of selections you have made and click
Submit.
The Review page displays a Procedure Summary and an Option Summary of the
selections you chose, and then shows a General Summary that provides
information about the listener along with a Components and Parameters section.
1. From the Enterprise menu, choose Provisioning and Patching, then select
Procedure Activity from the Activity menu.
Enterprise Manager displays the Deployment Procedure Manager page.
Oracle Audit Vault and Database Firewall (AVDF) secures databases and other critical
components of IT infrastructure. It provides a database firewall that can monitor
database activity and block SQL statements on the network based on a firewall policy.
It also collects audit data, and ensures that the data is available in audit reports.
You can manage and monitor Oracle AVDF components in Enterprise Manager Cloud
Control (Cloud Control) using the Oracle AVDF plug-in. For information on how to
install this plug-in and manage Oracle AVDF using Cloud Control, see Oracle
Enterprise Manager System Monitoring Plug-in User's Guide for Audit Vault and Database
Firewall.
Oracle Data Redaction is an Oracle Database security feature that enables you to mask
(redact) the data that is returned from queries issued by applications. Enterprise
Manager Cloud Control (Cloud Control) provides a user interface for creating and
managing Oracle Data Redaction policies and formats. You can perform these tasks
using the Data Redaction page, which is displayed in Figure 20-1.
For detailed information on using Oracle Data Redaction, see Oracle Database Advanced
Security Guide.
Oracle Database Vault provides powerful security controls to help protect database
application data from unauthorized access, and comply with privacy and regulatory
requirements. Using Oracle Database Vault, you can deploy controls to block
privileged account access to database application data, and control sensitive
operations within the database. Oracle Database Vault with Oracle Database 12c
includes a feature called privilege analysis that helps you increase the security of your
database applications and operations. Privilege analysis policies reduce the attack
surface of database applications and increase operational security by identifying used
and unused privileges.
You can manage Oracle Database Vault and privilege analysis policies using
Enterprise Manager Cloud Control. For detailed information on how to do this, see
Oracle Database Vault Administrator's Guide.
• Profiles
• Deployment Procedures
Profiles
The profiles section lists all the provisioning profiles that you have created and the
profiles on which you have been granted access. You can:
• Filter the profiles based on what you want to display in the Profiles table. To do so,
from the View menu, select Show Profiles, then click the option that you want to
display. For example, if you click All, then all the profiles are displayed.
• Delete an existing profile. To do so, select the profile name, and click Delete.
Deployment Procedures
The deployment procedures section lists all the Oracle-provided deployment
procedures, the Custom Deployment Procedures (CDP) that you have created, and the
procedures on which you (the administrator you are logged in as) have been granted
access. Select a deployment procedure from the list, and perform any of the following
actions on it:
• Node Manager: Node Manager is a Java utility that runs as a separate process from
Oracle WebLogic Server and allows you to perform common operations for a
Managed Server, regardless of its location with respect to its Administration
Server. While use of Node Manager is optional, it provides valuable benefits if your
Oracle WebLogic Server environment hosts applications with high-availability
requirements.
If you run Node Manager on a computer that hosts Managed Servers, you can start
and stop the Managed Servers remotely using the Administration Console, Fusion
Middleware Control, or the command line (see the sketch after this list). Node
Manager can also automatically restart a Managed Server after an unexpected failure.
• Oracle Home: An Oracle home contains installed files necessary to host a specific
product. For example, the SOA Oracle home contains a directory that contains
binary and library files for Oracle SOA Suite. An Oracle home resides within the
directory structure of the Middleware home.
• Cloning: The process of creating a copy of the WebLogic Domain and the Oracle
home binaries present within the domain is referred to as cloning. Typically,
cloning is performed at the WebLogic Domain-level. Fusion Middleware Domain
cloning can be performed from an existing target or using provisioning profiles.
• Gold Image: The gold image is a single image that includes the binary and library
files for an Oracle home.
For Oracle Fusion Middleware 11g, the Middleware Home was the top-level
directory that comprised multiple product-specific Oracle Homes. For example:
[user1@slc01avn middhome]$ ls
Oracle_OSB1 Oracle_SOA1 coherence_3.7 domain-registry.xml logs modules
oracle_common osb patch_ocp371 patch_wls1036 registry.dat registry.xml
utils wlserver_10.3
• Scaling Up: When a managed server is added or cloned to a host that already exists
in the domain or cluster.
• Scaling Out: When a managed server is added or cloned to a host that is not
present in the domain or cluster.
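As referenced in the Node Manager entry above, a Managed Server can be started through Node Manager from the command line with WLST. The following is a hedged sketch only; the credentials, host, port, domain name, and paths are examples, and the wlst.sh location varies by release:
# Hedged sketch: start a Managed Server through Node Manager using WLST.
$MW_HOME/oracle_common/common/bin/wlst.sh <<'EOF'
nmConnect('weblogic', 'welcome1', 'host1.example.com', 5556, 'mydomain', '/u01/domains/mydomain')
nmStart('ManagedServer_1')
nmDisconnect()
EOF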
• Provision Fusion Middleware: JRF WebLogic Domain Profile (WLS 12.2.1.x, 12.1.x,
10.3.x). See Provisioning Fusion Middleware Domain and Oracle Homes.
• Provision Fusion Middleware: SOA Installation Media based profile, SOA Gold
Image based profile, SOA Domain Profile, or Existing SOA Middleware Home
(SOA Domain 12.2.1.x, 12.1.x, 11.1.1.x). See Provisioning the SOA Domain and
Oracle Homes.
• Provision Fusion Middleware: Service Bus Installation Media based profile, Service
Bus Gold Image based profile, Service Bus Domain Profile, or Existing Service Bus
Middleware Home (Service Bus Domain 12.2.1.x, 12.1.x, 11.1.1.x). See Provisioning
the Service Bus Domain and Oracle Homes.
1. Access https://support.oracle.com
Field Details
Release 13.2.0.0.0
4. Click Search.
5. The search result displays the Enterprise Manager Base Platform - OMS 13.2.0.0.0
Certifications.
Note:
22.3.2 Scaling WebLogic Server, SOA, Service Bus, and WebCenter Domains
This table covers the use cases for scaling an existing SOA Domain, Service Bus
Domain, and WebLogic Domain:
Table 22-2 Scaling SOA, Service Bus, WLS, and WebCenter Domains
• Scaling up/Scale out Middleware: Scaling SOA Domain (SOA Domain 12c, 11g).
See Running the Scale Up / Scale Out Middleware Deployment Procedure.
• Scaling Service Bus Domain: Scaling Service Bus Domain (Service Bus Domain
11g). See Running the Scale Up / Scale Out Middleware Deployment Procedure.
1. Access https://support.oracle.com
Field Details
Release 13.2.0.0.0
4. Click Search.
5. The search result displays the Enterprise Manager Base Platform - OMS 13.2.0.0.0
Certifications.
• SOA Artifacts Provisioning: Provisioning SOA Artifacts from a Gold Image, for
Oracle SOA Suite 11gR1 (11.1.1.2.0 to 11.1.1.7.0): SOA Composites, Oracle WebLogic
Server Policies, Assertion Templates, JPS Policy and Credential Stores, Human
Workflow, and Oracle B2B. See Provisioning SOA Artifacts from Gold Image.
• Deploy SOA Composites: Provisioning SOA Composites, for Oracle SOA Suite
11gR1 (11.1.1.2.0 to 11.1.1.7.0) SOA Composites. See Deploying SOA Composites.
• Service Bus Resource Provisioning: Provision Service Bus resources from the
Software Library, for Service Bus 2.6.0 - 2.6.1, 3.0.0, 10.3.0.0 - 10.3.1.0, 11.1.1.3.0 -
11.1.1.7.0, and 12.1.3.0.0. See Provisioning Service Bus Resources from Oracle
Software Library.
Note:
The term Middleware Home is applicable only for WebLogic Server versions
10.3.x and 12.1.x. For WebLogic Server version 12.1.2.0.0 and higher,
Middleware Home is referred to as Oracle Home.
To provision a new Middleware Domain or an Oracle Home, you must have
configured JDK version 1.6 or higher. Oracle recommends using the latest
certified JDK version for the product. To check certification, log in to
support.oracle.com, then click the Certifications tab. In the Certification Search
section, enter Oracle WebLogic Server in the Product field. From the Release
menu, select a valid WebLogic Server version, from the Platform menu select a
valid operating system, and click Search.
To clone an existing domain, you must ensure that the JDK version configured
on the cloned destination host is equal to or higher than the JDK version
available on the source host. Oracle recommends using the latest certified JDK
version for the product. To check certification, log in to support.oracle.com,
then click the Certifications tab. In the Certification Search section, enter Oracle
WebLogic Server in the Product field. From the Release menu, select a valid
WebLogic Server version, from the Platform menu select a valid operating
system, and click Search.
This chapter explains how you can automate common provisioning operations for
Middleware Homes and WebLogic Domains using Oracle Enterprise Manager Cloud
Control. In particular, this chapter covers the following:
Step 2: Creating the Middleware Provisioning Profiles
This chapter covers three types of provisioning profiles. Select the profile that best
matches your requirement.
To learn about the various provisioning profiles, see Creating Middleware
Provisioning Profiles.
Step 3: Meeting Prerequisites to Provision a Middleware Profile
Before you run the Fusion Middleware Deployment Procedure, there are a few
prerequisites that you must meet.
To learn about the prerequisites for provisioning an Installation Media/Oracle Home
profile, see Prerequisites for Provisioning the Installation Media Profile or the Oracle
Home Profile.
To learn about the prerequisites for provisioning a WebLogic Domain profile, see
Prerequisites for Provisioning the WebLogic Domain Profile.
Step 4: Running the Fusion Middleware Deployment Procedure
Run this deployment procedure to successfully provision a WebLogic Domain and/or
an Oracle Home.
To learn about provisioning from an Installation Media Profile or an Oracle Home
Profile, see Provisioning of a new Fusion Middleware Domain from an Installation
Media Based-Profile or an Oracle Home Based-Profile.
To learn about provisioning from a WebLogic Domain Profile, see Provisioning a
Fusion Middleware Domain from an Existing Oracle Home.
To provision from an existing home, see Cloning from an Existing WebLogic Domain
Based-Profile.
• (Recommended Option) For all the out of the box deployment procedures, launch the
Provision Fusion Middleware procedure from the Profiles table. For this, from
Enterprise menu, select Provisioning and Patching, and then click Middleware
Provisioning. On the Middleware Provisioning page, from the Profiles table select
a profile, and click Provision.
• If you are provisioning a new domain from an existing Middleware home, or if you
are using a customized deployment procedure, then you can directly run the
Provision Fusion Middleware deployment procedure. For this, from Enterprise
menu, select Provisioning and Patching, and then click Middleware Provisioning.
On the Middleware Provisioning page, from the Deployment Procedures table,
click Provision Fusion Middleware.
• To automate the process of provisioning using the command line, submit your
procedure using the Enterprise Manager Command Line Interface (EMCLI) utility.
The EMCLI enables you to access Enterprise Manager Cloud Control functionality
from text-based consoles (shells and command windows) for a variety of operating
systems. You can call Enterprise Manager functionality using custom scripts, such
as SQL*Plus, OS shell, Perl, or Tcl, thus easily integrating Enterprise Manager
functionality with your company's business processes. (An outline follows the note
below.)
Note:
For more information about related EM CLI verbs, see Oracle Enterprise
Manager Command Line Interface.
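As an illustration only, a submission from EM CLI might look like the outline below; the procedure GUID and the properties file are placeholders, and the exact verb options for your release are documented in the EM CLI reference:
# Hedged outline: submitting a provisioning procedure with EM CLI.
emcli login -username=sysman
emcli get_procedures                              # note the GUID of the procedure
emcli describe_procedure_input -procedure=<procedure_guid> > fmw_prov.properties
# Edit fmw_prov.properties with your host, profile, and credential values, then:
emcli submit_procedure -procedure=<procedure_guid> -input_file=data:fmw_prov.properties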
Before going ahead with the profile creation, you must identify the profile that suits
your requirements:
• If you do not have an existing WebLogic Domain, or an Oracle Home, use the
Installation Media profile. This profile allows you to create a domain and a home
using the shiphome (installation media files) that you have downloaded from the
OTN. For further details, see Creating a Provisioning Profile Based on an
Installation Media.
• If you have an Oracle Home that needs to be patched to fix some bugs, use the
Oracle Home profile. This profile allows you to apply the relevant patches to the
Oracle home, and then create a profile out of the patched Oracle home which can
be your source of truth. Use the patched profile to provision all the Oracle Homes
in your data center. For further details, see Creating a Provisioning Profile Based on
an Oracle Home.
• If you have an existing Weblogic Domain that you want to clone, use the WebLogic
Domain Profile. This profile allows you to create a copy of the source domain. For
further details, see Creating a Provisioning Profile Based on a WebLogic Domain.
23.3.2 Step 2: Running Provision Fusion Middleware Procedure to Provision the Profile
A deployment procedure is a pre-defined sequence of steps that is meant to perform a
set of operations on one or more hosts monitored by Cloud Control. You can provision
a domain or an Oracle home using the profiles you have already created. This section
describes the various methods to provision a profile. In particular, it covers the
following:
Note:
Meet the Prerequisites before going ahead with the provisioning procedure.
For a detailed list of prerequisites, see Prerequisites for Provisioning from the
Middleware Provisioning Profiles.
For information about the other approaches to launch the Provision Fusion
Middleware procedure, see Different Approaches to Launch the Provision
Fusion Middleware Deployment Procedure.
Note:
• Prerequisites for Provisioning the Installation Media Profile or the Oracle Home
Profile
Note:
• Click Preferences.
• Click Save.
• Ensure that the Lock and Edit button is enabled in the Change Center.
23.4.1 Prerequisites for Provisioning the Installation Media Profile or the Oracle Home
Profile
Meet the following prerequisites:
2. Read, if you are using an existing Oracle Home. Note that this is applicable
only for an Oracle Home profile, and not for an Installation Media profile.
• If you are using a shared storage, then mount the domain home and inventory
directories on all the hosts beforehand. Typically, for a two-node setup, with two
SOA Managed Servers running on two different hosts, for example host 1 and host
2; you can choose to create the shared storage on host 1. Effectively, this means that
the Middleware Home location, the domain details, Inventory details, and all other
information are mounted on host 1 for easy access from host 2.
Note:
• All the hosts involved in the provisioning operation should be monitored as targets
in Enterprise Manager.
• If the domain uses a database or LDAP or Oracle HTTP Server, then ensure that the
respective servers are monitored as targets in Enterprise Manager.
• Before cloning an existing Fusion Middleware domain, you must have cloned the
source database, so that the data in the schema is in sync with the source database.
If you haven't already cloned your source database, you can do so using the
Cloning Database feature available in Enterprise Manager Cloud Control. For more
information about cloning your database, see Cloning Oracle Databases and
Pluggable Databases .
Note:
Before cloning the domain, run the following data scrubbing SQL scripts on
the database.
For SOA
Create a Generic Component and upload truncate_soa_oracle.sql
script to Software Library. For more information on creating generic
components, see Oracle Enterprise Manager Cloud Control Administrator's Guide.
Note that the truncate script (truncate_soa_oracle.sql) is located in the
following directory under the SOA installation:
/MW_HOME/SOA_ORACLE_HOME/rcu/integration/soainfra/sql/truncate
For Service Bus
Create a Generic Component and upload llr_table.sql script to Software
Library. To create this script, for each server present in the Service Bus
domain, you need to add the following statement in the SQL script:
TRUNCATE table WL_LLR_<SERVER_NAME>
For example, if the Service Bus domain has an administration server and two
managed servers named OSB_SERVER1 and OSB_SERVER2, then the content
of the SQL script would look like:
TRUNCATE table WL_LLR_ADMINSERVER
TRUNCATE table WL_LLR_OSB_SERVER1
TRUNCATE table WL_LLR_OSB_SERVER2
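For example, the llr_table.sql content can be generated from the list of server names in the Service Bus domain; the sketch below is one way to do it, with example server names:
# Example only: generate llr_table.sql from the server names in the domain.
for SERVER in ADMINSERVER OSB_SERVER1 OSB_SERVER2
do
  echo "TRUNCATE table WL_LLR_${SERVER}"
done > llr_table.sql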
• If the source domain was wired with LDAP, then before cloning an existing Fusion
Middleware domain, ensure that the data (users, roles and policies) has been
migrated from the source LDAP to a new LDAP and the new LDAP has been
discovered in Enterprise Manager as a target.
Note:
1. In Cloud Control, from Enterprise menu, select Provisioning and Patching, then
click Software Library.
3. On the Software Library page, from the Actions menu, select Create Entity, then
click Directives.
4. On the Describe page, provide a unique name for the parent folder. For example,
My Custom Script With Parameters. Click Next.
5. On the Configure page, in the Command Line Arguments section, click Add.
6. In the Add Command Line Argument dialog box, enter the property name
INPUT_FILE, then click OK. Verify that the Command Line contains the value:
"${INPUT_FILE}"
7. On the Select Files page, select Upload Files. In the Specify Destination section,
select an upload location. In the Specify Source section, select the scripts to be
uploaded. Ensure that the directly executable file is selected in the Main File menu.
print "**********************\n";
print "* This is a *\n";
print "* test script *\n";
print "**********************\n";
my $inputFile = $ARGV[0];
my %properties;
open (FILE, "<$inputFile") or die "can't open $inputFile for reading: $!";
print "Input properties:\n";
while (<FILE>)
{
chomp;
my ($key, $val) = split /=/;
$properties{$key} = $val;
print "\t$key=$val\n";
}
close FILE;
my $mwHome = $properties{MIDDLEWARE_HOME};
my $protocol = $properties{ADMIN_PROTOCOL};
my $host =$properties{ADMIN_SERVER_LISTEN_ADDRESS};
my $port = $properties{ADMIN_SERVER_LISTEN_PORT};
my $cmd = $mwHome."/wlserver_10.3/common/bin/wlst.sh listMyServers.py $protocol
$host $port";
print "\nExecuting:\n\t$cmd\n";
print "\nOutput is:\n\n";
system($cmd);
exit 0;
import sys  # needed to read the command-line arguments below
protocol = sys.argv[1];
host = sys.argv[2];
port = sys.argv[3];
username = 'weblogic';
password = 'welcome1';
Click Next.
8. On the Review page, review the details, and click Save and Upload.
1. In Cloud Control, from Enterprise menu, select Provisioning and Patching, then
click Software Library.
3. On the Software Library page, from the Actions menu, select Create Entity, then
click Directives.
4. On the Describe page, provide a unique name for the parent folder. For example,
My Custom Script Without Parameters. Click Next.
5. On the Configure page, in the Command Line Arguments section, click Next.
6. On the Select Files page, select Upload Files. In the Specify Destination section,
select an upload location. In the Specify Source section, select the scripts to be
uploaded. Ensure that the directly executable file is selected in the Main File menu.
print "**********************\n";
print "* This is a *\n";
print "* test script *\n";
print "**********************\n";
my $mwHome = "/scratch/bbanthia/soa/middleware";
my $protocol = "t3";
my $host = "slc01mpj.us.example.com";
my $port = "7001";
my $cmd = $mwHome."/wlserver_10.3/common/bin/wlst.sh listMyServers.py $protocol
$host $port";
print "Executing:\n\t$cmd\n";
print "\nOutput is:\n\n";
system($cmd);
exit 0;
import sys  # needed to read the command-line arguments below
protocol = sys.argv[1];
host = sys.argv[2];
port = sys.argv[3];
username = 'weblogic';
password = 'welcome1';
Click Next.
7. On the Review page, review the details, and click Save and Upload.
The Installation Media profile flow: download the installation media on a host
monitored by Enterprise Manager, run the Fusion Middleware Profile wizard, and the
resulting Installation Media profile is created and stored in the Software Library. You
can then proceed with provisioning a fresh domain or a new Middleware Home.
Before you begin creating the middleware provisioning profile, ensure that you meet
the following prerequisites:
• Create one directory for each product, such as SOA, Service Bus, WebCenter, RCU,
and WLS, and ensure that you add the necessary files under the respective
directory (see the sketch after this list).
• Ensure that you have read permissions on all the files and sub-directories inside
the domain home, applications home, and the Oracle home.
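A possible staging layout for the directory prerequisite above, assuming hypothetical directory and file names, is sketched below:
# Example layout only: one directory per product on the host monitored by
# Enterprise Manager, with the downloaded media placed inside each directory.
mkdir -p /u01/stage/installers/WLS /u01/stage/installers/SOA \
         /u01/stage/installers/OSB /u01/stage/installers/RCU
cp ~/Downloads/wls1036_generic.jar   /u01/stage/installers/WLS/
cp ~/Downloads/ofm_soa_generic*.zip  /u01/stage/installers/SOA/
cp ~/Downloads/ofm_osb_generic*.zip  /u01/stage/installers/OSB/
cp ~/Downloads/ofm_rcu_linux*.zip    /u01/stage/installers/RCU/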
To create an Installation Media profile, follow these steps:
1. In Cloud Control, from the Enterprise menu, select Provisioning and Patching,
then select Middleware Provisioning.
2. On the Middleware Provisioning home page, in the Profiles section, from the
Create menu, select From Installation Media.
3. On the Create Fusion Middleware Provisioning Profile page, enter a unique name
and description for your profile.
By default, all the profiles are centrally located in Software Library under Fusion
Middleware Provisioning/Profiles directory.
4. In the Product Details section, from the Product menu, select Oracle WebLogic
Server or Oracle SOA Suite. Depending on the option selected, the Platform and
Version menus get updated. Select a suitable platform name and version from the
list.
Note:
When you select Oracle WebLogic Server as the option in the Product field,
you can upload only the WebLogic Server installer and not the Fusion
Middleware Infrastructure installer.
a. Click the search icon to search for the host. In the Select Target dialog box,
search and select the target where the Installation Media files reside, then
click Select.
b. To access the files on a remote host, you need to provide the host credentials.
To do so, click search, and in the Select Credential dialog box, enter the
necessary credentials, and click OK. Click Test to validate these credentials
against the selected target.
c. Based on the product selected, the Files table gets updated. One of the
following options applies:
If you select Oracle SOA Suite from the Product menu, then you can upload
Oracle WebLogic Server, Oracle SOA, Service Bus, and Oracle RCU files.
Before actually uploading the files, as a prerequisite, you must do the
following:
- Download the Software binaries (Installation Media Files) from Oracle
Technology Network.
- Create one directory for each product like SOA, Service Bus, RCU and WLS
on Software Library, and ensure that you add the necessary files under the
respective directory.
To add the files, select the product name from the files table, and click Select
Folder. Navigate to the directory where the files are present, and click OK. To
remove files, select the product type, and click Remove.
If you select Oracle WebLogic Server from the product menu, then you will
only need to upload Oracle WebLogic Server files. To do so, select the Oracle
WebLogic Server from the files table, and click Select Folder. Navigate to the
directory where the files are present, and click OK. To remove this file, select
the product name, and click Remove.
Note:
There are some mandatory installation media files for each product that must
be available in their respective folders, without which the Installation Media
profile creation will fail.
In the following example, Oracle WebLogic Server is the folder name, and
wls1036_generic.jar is the installation media file. Similarly, basic
installation media files required for Oracle SOA, Service Bus, Oracle
WebCenter and Oracle RCU are listed.
6. In the Storage section, select the Software Library storage details. Ensure that you
provide a valid storage type, and upload location to upload the Installation Media
profile.
8. After the job has successfully run, a new entry is available in the Profiles table.
You can click the profile name to view the details.
The Oracle Home profile flow: select the Middleware target to which the required
patches are applied, run the Fusion Middleware Profile wizard, and a gold image of
the Middleware Home is created and stored in the Software Library. You can then
proceed with provisioning a fresh domain or a new Middleware Home.
Before you begin creating the middleware provisioning profile, ensure that you meet
the following prerequisites:
• Oracle Home should be a managed target that has been discovered in Enterprise
Manager Cloud Control.
• Ensure that you have read permissions on all the files and sub-directories inside
the domain home, applications home, and the Oracle home.
• Host credentials must be set for the source machine on which Administration
Server is running.
1. In Cloud Control, from the Enterprise menu, select Provisioning and Patching,
then select Middleware Provisioning.
2. On the Middleware Provisioning home page, in the Profiles section, from the
Create menu, select From Oracle Home.
3. On the Create Fusion Middleware Provisioning Profile page, enter a unique name
and description for your profile.
By default, all the profiles are centrally located in Software Library under Fusion
Middleware Provisioning/Profiles directory.
4. In the Reference Target section, click the search icon. In the Select Target dialog
box, select an Oracle Home, and click Select. The corresponding host details are
populated.
You can also launch the Create Middleware Home Profile from the Middleware
targets page. If you do so, the context of the target is maintained, and the
fields like Type, Name, and Host appear pre-populated.
5. Click the search icon to provide the credentials. In the Select Credentials dialog
box, provide the necessary credentials for your target that is already set, and click
OK. Click Test to validate the credentials against the selected target.
6. In the Storage section, select the Software Library storage details. Ensure that you
provide a valid storage type, and upload location details to update the Oracle
Home profile.
8. After the job has successfully run, a new entry is available in the Profiles table. You
can click the profile name to view the details.
Note:
• If the domain is cloned, the partitions may not function as expected once
provisioned.
• To clone, set up a new domain, and then import or export partitions. This
is supported from release 13.2 only.
Before you begin creating the Middleware provisioning profile, ensure that you meet
the following prerequisites:
• Ensure that you have read permissions on all the files and sub-directories inside
the domain home, applications home, and the Oracle home.
• Host credentials must be set for the source machine on which Administration
Server is running.
• The WebLogic domain for which the profile is being created must be a monitored
target in Cloud Control.
• Ensure that host name verification is disabled on all the hosts. To do so, in the SSL
settings, click Advanced, then set Host Name Verification to None.
Note:
While creating a provisioning profile for SOA or Oracle Service Bus domains,
there is an Include metadata option. When you select this option, some of the
domain metadata is saved with the profile; during provisioning, a new schema is
created and the saved metadata is inserted into it, so database cloning is not
required. This is supported from release 13.2 only.
1. In Cloud Control, from the Enterprise menu, select Provisioning and Patching,
then select Middleware Provisioning.
2. On the Middleware Provisioning home page, in the Profiles section, from the
Create menu, select From WebLogic Domain.
3. On the Create Fusion Middleware Provisioning Profile page, in the Profile Details
section, enter a unique name and description for your profile. By default, all the
profiles are centrally located in Software Library under the Fusion Middleware
Provisioning/Profiles directory.
4. In the Reference Target section, search and select the WebLogic Domain target.
Once you select the target, the Domain Home, the Host, and the Oracle home
details for the host get populated.
You must ensure that a database instance is associated with the WebLogic
Domain, except in the case of a plain WebLogic Domain where it may not be
necessary. To associate a database with the domain, create a database profile
using the Create Database Provisioning Profile wizard.
Enter the Oracle Home credentials required to log in to the host target selected.
Enter an existing named credential of the WebLogic administrator for the domain,
or create a new named credential by entering the details.
Note: You can also launch the WebLogic Domain Profile from the Middleware
target page. If this is done, then the context of the target is maintained, and
the fields like Type, Name, Host, and Oracle Home details appear pre-populated.
5. Based on the target selection, you may create one of the following profiles:
a. Plain WebLogic Domain Profile: If you do not wish to upload the Oracle
Home files, de-select the Include Oracle Home checkbox available in the
Reference Target section. If you do so, then while provisioning from this
domain profile, you will need to have a pre-deployed Oracle Home that
already exists at the destination, and the content of the Oracle Home should
match with the one expected by the domain profile.
6. In the Software Library Upload Location section, select the Software Library
storage details. Ensure that you provide a valid storage type, and upload location
to update the profile.
7. Click Create to submit the profile creation job. After the job has successfully run, a
new entry is available in the Profiles table. You can click the profile name to view
the details.
Note:
Starting with Enterprise Manager for Oracle Fusion Middleware 12.1.0.7, you
can provision SOA and Service Bus in a single operation. For this, you need to
create an Installation Media profile with SOA and Service Bus installation
files, and provision this profile. Until the previous release, you could only
provision one domain (for example, SOA), and extend that domain to include
the other product (for example, Service Bus). This release allows the flexibility
of provisioning both SOA and Service Bus at the same time.
Middleware Provisioning supports RAC only with GridLink Data Sources.
Note:
1. In Cloud Control, from Enterprise menu, select Provisioning and Patching, and
then click Middleware Provisioning.
2. On the Middleware Provisioning home page, from the Profiles table select an
Installation Media profile, and click Provision.
3. On the Provision Fusion Middleware page, in the General section, the Installation
Media profile appears pre-selected.
Note:
4. In the Hosts section, search and select the destination hosts on which the
Middleware Home and WebLogic Domain need to be cloned. Click +Add to add
the target hosts, and provide the login credentials for them. If you have selected
multiple hosts, and the login credentials for all of them are the same, then you can
select Same Credentials for all.
5. In the Middleware section, the Middleware Base and Java Home values appear
pre-populated; you can customize these values if required. Provide credentials for
the domain Administrator.
6. In the Database section, depending on the profile being provisioned the following
options are possible:
7. (optional) In the Identity and Security section, you must enter the OID target name
and the OID credentials. These are mandatory fields for creating an LDAP
Authenticator and/or for reassociating the Domain Credential Store.
Note:
If you are provisioning a WebCenter profile, this section is displayed only for
the Production Topology which is the default option.
In addition to this, you must provide the following sets of inputs in the OID
section:
– User Base DN: Specify the DN under which your Users start. For example,
cn=users,dc=us,dc=mycompany,dc=com
– Group Base DN: Specify the DN that points to your Groups node. For
example:cn=groups,dc=us,dc=mycompany,dc=com
Note:
As a prerequisite, you must have already provisioned the users and groups in
the LDAP.
• Configure Security Store Inputs: In this section, provide the JPS Root Node
information. The JPS root node is the target LDAP repository under which all
data is migrated. The format is cn=nodeName.
Note:
As a prerequisite, you must have already created the root node in the LDAP.
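To confirm these LDAP prerequisites before running the procedure, you can query the OID server directly; a hedged sketch using ldapsearch, where the host, port, bind credentials, and DNs are examples only:
# Hedged sketch: verify that the user/group base DNs and the JPS root node exist.
ldapsearch -h oid.example.com -p 3060 -D "cn=orcladmin" -w welcome1 \
  -b "cn=users,dc=us,dc=mycompany,dc=com" -s base "objectclass=*" dn
ldapsearch -h oid.example.com -p 3060 -D "cn=orcladmin" -w welcome1 \
  -b "cn=jpsroot" -s base "objectclass=*" dn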
8. In the WebTier section, click +Add to search and select an Oracle HTTP Server
(OHS) target. In the Credential field provide the credentials to access the target. If
you have more than one Oracle HTTP Server, then you need to provide an
External Load Balancer URL.
The format of the URL for an External Load Balancer is (http|https)://hostname:port
For example: http://wcp-prov.example.com:80
Note:
If you are provisioning a plain WebLogic Domain, this section will not be
displayed.
• The SES Domain must be configured with the same OID as the WebCenter
Domain.
a. Provide the credentials for Crawl admin user in OID. You must enter the
same Crawl Administrator Username and Crawl Administrator Password
that was created as a part of the prerequisite step. The Crawl Administration
users in Spaces, and in the Identity Management System, are required to
crawl certain Space objects, such as lists, pages, spaces, and people
connections profiles. For example, mycrawladmin.
c. The Search User Username and Search User Password are the credentials of
the Oracle SES federation trusted entity. These get created while installing the
SES. Each Oracle SES instance must have a trusted entity for allowing
WebCenter Portal end users to be securely propagated at search time. A
trusted entity allows the WebCenter Portal application to authenticate itself to
Oracle SES and assert its users when making queries on Oracle SES. This
trusted entity can be any user that either exists on the Identity Management
Server behind Oracle SES or is created internally in Oracle SES. For example,
wesearch.
d. In the SES Search URL field, enter the URL of Search Administration Tool.
The format of this URL should be
http://search_server_listenAddress:search_server_listenPort. For
example, http://slc01rsk.us.example.com:5720.
Note:
After configuring the WebCenter Domain with SES, if you encounter an issue
with searching the portal, you must perform the following steps manually:
From the WebCenter Portal Oracle Home, copy
webcenter_portal_ses_admin.zip to the host where SES Domain is present.
Unzip the webcenter_portal_ses_admin.zip to find the files: facet.xml and
searchAttrSortable.xml.
Run these XML files as follows, for example:
/scratch/SES/oracle/middleware/Oracle_SES1/bin/searchadmin -p welcome1 -c http://slc01rsk.us.example.com:5720/search/api/admin/AdminService createAll facetTree -i facet.xml
/scratch/SES/oracle/middleware/Oracle_SES1/bin/searchadmin -c http://slc01rsk.us.example.com:5720/search/api/admin/AdminService -p welcome1 updateAll searchAttr -a overwrite -i searchAttrSortable.xml
Take a snapshot of the WebCenter Content server manually.
10. Click Next to schedule the procedure. If you click Submit, the procedure is
submitted for execution right away. Click Save to save this as a template; this
feature is particularly useful with lockdowns. For example, you can log in with the
designer role, create a template with lockdowns, and assign the desired privileges
to other administrators or operators to run this template. Click Cancel to exit the
procedure configuration.
11. After submitting, you can track the progress of the provisioning operation from
the Procedure Activity page. For more information about this, see About
Deployment Procedures.
12. To view the newly provisioned target, from the Targets menu, select Middleware.
Note:
Oracle recommends that you leave the default option Typical selected and
provide all the details, after which you can click Advance to customize your
destination environment. This way, the default values for most of the parameters
appear pre-populated, and you need to enter only the remaining (delta) details.
Also, note that if you switch from Advance mode to Typical mode, you will lose
all the changes that you have made so far.
1. In Cloud Control, from Enterprise menu, select Provisioning and Patching, and
then click Middleware Provisioning.
2. On the Middleware Provisioning home page, from the Profiles table select an
Installation Media profile, and click Provision.
Note:
By default, the Middleware Home option is selected and cannot be deselected.
However, you can choose whether or not to create a new domain for your
Middleware Home. If you select Provision Domain, a Middleware Home is created
on the destination host, and a new domain is set up for this Middleware Home on
the destination host.
Note: If you are provisioning a SOA or a Service Bus profile, you can extend the
domain to include the other product. For example, if you are provisioning a SOA
domain, and you select the Extend an existing Domain option, then you will be
allowed to extend your SOA domain to include Service Bus. To extend a domain,
you will need to select the domain and provide the administrator credentials.
Note that the extend domain feature is different from scaling a domain, where you
add additional managed servers to an existing domain. When you extend a domain,
you create a hybrid domain that includes two products. As of now, only a SOA or
a Service Bus domain can be extended. Support for other products like WebCenter
is not available.
If you have added more than one destination host, select the Use Shared Storage
option to use a shared location for these hosts. The central inventory attached to
the shared location is used locally on all hosts sharing the Oracle Home.
To clone only the Middleware Home, deselect the Provision Domain.
4. In the Hosts section, search and select the destination hosts on which the
Middleware Home and WebLogic Domain need to be cloned. Click +Add to add
the target hosts, and provide the login credentials for them. If you have selected
multiple hosts, and the login credentials for all of them are the same, then you can
select Same Credentials for all.
5. In the Middleware section, the values for Middleware Base and Java Home fields
appear pre-populated. You can change the location details, if required.
• For Middleware Home, enter the full path to the directory in which the
Middleware Home is to be created.
• For Java Home, enter the absolute path to the JDK directory to be used on the
destination Host. Note that you must have already installed JDK at the same
path on all the hosts.
6. In the Domain section, the configuration for the source domain is displayed by
default. You can change the following attributes to customize the domain
properties:
• Domain Name: The name of the domain. The generated components for the
domain are stored under the specified Domain directory. For example, if you
enter mydomain, your domain files are stored (by default) in MW_HOME
\user_projects\domains\mydomain.
Middleware Home, but can be changed. For example: If the Middleware Home
is located at /user/mwh, the application directory is created as /user/
domains. The domain location can be anywhere on your local drive or
network. On Windows, you must include the drive letter in this path.
• Domain Mode: The domain can operate in one of the following modes:
Production: The domain is used for production. In this mode, the security
configuration is relatively stringent, requiring a username and password to
deploy applications.
Development: The domain is used for development. In this mode, the security
configuration is relatively relaxed, allowing you to auto-deploy applications.
• Server Startup Mode: You can start the servers in one of the following modes,
depending on your requirement:
Start all Servers and Node Managers: This is the default option. Typically,
you will select this option if you have no changes to make, and if the
procedure has run as expected.
Start only Administration Server: This option starts only the Administration
Server and Node Manager. Typically, you will select this option if you want to
add a custom step to invoke the WLST online script, and then start the servers.
Do not start any Server or Node Manager: This option does not start any
server or Node Manager. Typically, you will select this option if you have to
customize the domain before starting any server.
7. In the Clusters section, you can modify the name of the cluster and enter the cluster
address that identifies the Managed Servers in the cluster. You can select either
Unicast or Multicast as the messaging mode. If you select Multicast mode, enter
the address and port number that will be dedicated for multicast communications
on the cluster. Click +Add to add one or more clusters to the configuration.
• Node Manager Home: The directory in which the Node Manager is installed.
By default, the Node Manager is installed under the parent directory of the
Middleware Home directory, but this can be modified.
Note that the Node Manager home must always be installed inside the
Administration Server domain home.
• Node Manager Listen Address: Enter the listen address used by Node
Manager to listen for connection requests. By default, the IP addresses defined
for the local system and localhost are shown in the drop-down list. The default
value is the same as specified in the source domain. Note that if multiple
machines are running on the same host, the Node Manager Home location
must be different for each host.
• Node Manager Listen Port: Enter a valid value for the listen port used by
Node Manager to listen for connection requests. The valid Node Manager
listen port range is 1 to 65535. The default value is 5556. The port number must
be available on the destination machine.
9. In the Server section, enter the configuration information for the Administration
Server and one or more Managed Servers.
• Coherence Port: This field is enabled only when you select the Configure
Coherence option. You can retain the port number that is populated by default
or change it.
• Listen Port: By default, this option is selected. The values for the listen ports
are pre-populated. You may enter any value from 1 to 65535. The port number
you enter here must be available on the destination machine.
Note: If a domain was registered on the host with a port number whose status
is down, you need to select a different port or manually de-register the domain
before launching the deployment procedure.
• SSL Listen Port: If you enable SSL Listen Port, enter the port number of the
SSL Listen Port for secure requests. You must ensure that the port numbers
you specify for the Listen Port and SSL Listen Port are available. If you are
using the SSL configuration, you must ensure that the security/identity stores
are present in the file system under the same path as on the source and are
configured with certificates generated for the destination hosts.
• Host: Select the host on which the Administration Server or Managed Server is
to be installed.
• Server Start: Click to enter the Server Startup Parameters. Usually, the Node
Manager can start a server without requiring you to specify startup options;
however, since you have customized your environment, you must specify the
startup options in the Server Startup Parameter dialog box.
have been completed. Oracle WebLogic Server uses this transaction log for
recovery from system crashes or network failures. To leverage the migration
capability of the Transaction Recovery Service for the Managed Servers within
a cluster, store the transaction log in a location accessible to the Managed
Server and its backup servers.
• Domain Directory for Managed Server: The directory in which the Managed
Servers are installed. By default, they are installed under the parent directory
of the Middleware Home directory, but this can be modified.
10. In the JMS Servers section, click +Add to add new JMS persistent stores and JMS
servers. The storage type can be one of the following:
• A JMS file store is a disk-based file in which persistent messages can be saved.
You can modify the JMS file stores configured in your domain.
• A JDBC Store:
– Data Source
– Targets
11. In the Database section, depending on the profile being provisioned the following
options are possible:
– The Create Schema option allows you to create a new schema for the RCU
profile selected by default. Provide credentials for the database target and
the new schema (a command-line alternative is sketched after this list).
– Select the Same password for all option if you would like to use the same
user name and password for all the data sources on the selected database
target.
If you deselect the default schema creation option, then an existing schema
on the database target is used.
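If you prefer to create the repository schemas outside the procedure and then point the procedure at the existing schema, RCU can be run from the command line. The following is a hedged sketch only; the connect string, prefix, and component list are placeholders, and the available options should be confirmed with rcu -help for your release:
# Hedged sketch: create repository schemas with RCU outside the procedure.
# Passwords are prompted for or read from standard input in silent mode.
$ORACLE_HOME/oracle_common/bin/rcu -silent -createRepository \
  -connectString db01.example.com:1521:orcl \
  -dbUser sys -dbRole sysdba \
  -schemaPrefix DEV -component SOAINFRA -component MDS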
12. (optional) In the Identity and Security section, you must enter the OID target name
and the OID credentials. These are mandatory fields for creating an LDAP
Authenticator and/or for reassociating the Domain Credential Store.
Note:
If you are provisioning a WebCenter profile, this section is displayed only for
the Production Topology which is the default option.
In addition to this, you must provide the following sets of inputs in the OID
section:
– User Base DN: Specify the DN under which your Users start. For example,
cn=users,dc=us,dc=mycompany,dc=com
– Group Base DN: Specify the DN that points to your Groups node. For
example:cn=groups,dc=us,dc=mycompany,dc=com
Note:
As a prerequisite, you must have already provisioned the users and groups in
the LDAP.
• Configure Security Store Inputs: In this section, provide the JPS Root Node
information. The JPS root node is the target LDAP repository under which all
data is migrated. The format is cn=nodeName.
Note:
As a prerequisite, you must have already created the root node in the LDAP.
13. In the WebTier section, click +Add to search and select an Oracle HTTP Server
(OHS) target. In the Credential field provide the credentials to access the target. If
you have more than one Oracle HTTP Server, then you need to provide an
External Load Balancer URL.
The format of the URL for an External Load Balancer is (http|https)://hostname:port
For example: http://wcp-prov.example.com:80
Note:
If you are provisioning a plain WebLogic Domain, this section will not be
displayed.
14. (For a WebCenter Production Domain only) To configure an SES Domain with a
WebCenter Production Domain, follow these steps:
Prerequisites:
• The SES Domain must be configured with the same OID as the WebCenter
Domain.
a. Provide the credentials for Crawl admin user in OID. You must enter the
same Crawl Administrator Username and Crawl Administrator Password
that was created as a part of the prerequisite step. The Crawl Administration
users in Spaces, and in the Identity Management System, are required to
crawl certain Space objects, such as lists, pages, spaces, and people
connections profiles. For example, mycrawladmin.
c. The Search User Username and Search User Password are the credentials of
the Oracle SES federation trusted entity. These get created while installing the
SES. Each Oracle SES instance must have a trusted entity for allowing
WebCenter Portal end users to be securely propagated at search time. A
trusted entity allows the WebCenter Portal application to authenticate itself to
Oracle SES and assert its users when making queries on Oracle SES. This
trusted entity can be any user that either exists on the Identity Management
Server behind Oracle SES or is created internally in Oracle SES. For example,
wesearch.
d. In the SES Search URL field, enter the URL of Search Administration Tool.
The format of this URL should be
http://search_server_listenAddress:search_server_listenPort. For
example, http://slc01rsk.us.example.com:5720.
Note:
After configuring the WebCenter Domain with SES, if you encounter an issue
with searching the portal, you must perform the following steps manually:
From the WebCenter Portal Oracle Home, copy
webcenter_portal_ses_admin.zip to the host where SES Domain is present.
Unzip the webcenter_portal_ses_admin.zip to find the files: facet.xml and
searchAttrSortable.xml.
Run these XML files as follows, for example:
/scratch/SES/oracle/middleware/Oracle_SES1/bin/searchadmin -p welcome1 -c http://slc01rsk.us.example.com:5720/search/api/admin/AdminService createAll facetTree -i facet.xml
/scratch/SES/oracle/middleware/Oracle_SES1/bin/searchadmin -c http://slc01rsk.us.example.com:5720/search/api/admin/AdminService -p welcome1 updateAll searchAttr -a overwrite -i searchAttrSortable.xml
Take a snapshot of the WebCenter Content server manually.
15. In the Custom Scripts section, you can select the scripts stored as directives in
the Software Library to customize the deployment procedure. The following
options are possible; you may or may not choose to pass the scripts with parameters:
a. You can pass a script with input parameters. For more information, see Using
Custom Scripts with Input Parameters.
Note: If you pass the input parameter, ensure that you allow the default
option of Input File to be selected. For example, in the Pre Script field, you
can choose the My Custom Script With Parameters script that you created
earlier. For more information, see Storing Custom Scripts With Input
Parameters.
Following are the contents of a sample input properties
(input.properties) file:
ADMIN_SERVER_LISTEN_ADDRESS=slc01.example.com
ADMIN_SERVER_LISTEN_PORT=7001
ADMIN_PROTOCOL=t3
MIDDLEWARE_HOME=/scratch/usr1/soa/middleware
b. You can alternatively choose to pass a script without any input parameters.
For more information, see Using Custom Scripts Without Inputs Parameters.
Note: If you do not want to pass an input parameter, you should deselect the
Input File option. For example, in the Pre Script field, you can choose the My
Custom Script Without Parameters script that you created earlier. For more
information, see Storing Custom Scripts Without Input Parameters.
You can pass the following scripts to customize your procedure:
16. Click Next to schedule the procedure. If you click Submit, the procedure is
submitted for execution right away. Click Save to save this as a template; this
feature is particularly useful with lockdowns. For example, you can log in with the
designer role, create a template with lockdowns, and assign the desired privileges
to other administrators or operators to run this template. Click Cancel to exit the
procedure configuration.
17. After submitting, you can track the progress of the provisioning operation from
the Procedure Activity page. For more information about this, see About
Deployment Procedures.
18. To view the newly provisioned target, from the Targets menu, select Middleware.
Note:
To provide inputs and further customize the destination environment, click
Advance. To understand the settings and configuration parameters that can be
customized, see Customizing the Destination Environment from an Existing
Oracle Home.
1. On Cloud Control, from the Enterprise menu, select Provisioning and Patching, and
then click Middleware Provisioning.
4. In the Hosts section, click +Add to search and select the destination hosts where
the cloned Middleware Home will reside. For the cloning operation, you will need
to provide the login credentials for the destination hosts. If you have selected
multiple hosts, and the login credentials for all of them are the same, then you can
select Same Credentials for all option.
6. In the Database section, select a Database Target, choose an existing schema and
provide the schema password.
7. (optional) In the Identity and Security section, enter the OID target name
and the OID credentials. These are mandatory fields if you want to create an LDAP
Authenticator and/or reassociate the Domain Credential Store.
Note:
If you are provisioning a WebCenter profile, this section is displayed only for
the Production Topology which is the default option.
In addition to this, you must provide the following sets of inputs in the OID
section:
– User Base DN: Specify the DN under which your Users start. For example,
cn=users,dc=us,dc=mycompany,dc=com
– Group Base DN: Specify the DN that points to your Groups node. For
example: cn=groups,dc=us,dc=mycompany,dc=com
Note:
As a prerequisite, you must have already provisioned the users and groups in
the LDAP.
• Configure Security Store Inputs: In this section, provide the JPS Root Node
information. The JPS root node is the target LDAP repository under which all
data is migrated. The format is cn=nodeName.
Note:
As a prerequisite, you must have already created the root node in the LDAP.
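Because the users, groups, and JPS root node must already exist in the LDAP, you may want to verify them before submitting the procedure. A minimal check with ldapsearch (the OID host, port, and bind DN below are illustrative) could be:
ldapsearch -h oid.example.com -p 3060 -D "cn=orcladmin" -w <password> -b "cn=users,dc=us,dc=mycompany,dc=com" -s one "(objectclass=person)" cn
If the expected entries are returned, the User Base DN you entered is correct; repeat the check for the Group Base DN and the JPS root node.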
8. In the WebTier section, click +Add to search and select an Oracle HTTP Server
(OHS) target. In the Credential field provide the credentials to access the target. If
you have more than one Oracle HTTP Server, then you need to provide an
External Load Balancer URL.
The format of the URL for an External Load Balancer is: (http|https)://hostname:port
For example: http://wcp-prov.example.com:80
Note:
If you are provisioning a plain WebLogic Domain, this section will not be
displayed.
• The SES Domain must be configured with the same OID as the WebCenter
Domain.
a. Provide the credentials for the Crawl Admin user in OID. You must enter the
same Crawl Administrator Username and Crawl Administrator Password
that were created as part of the prerequisite step. The Crawl Administration
users in Spaces, and in the Identity Management System, are required to
crawl certain Space objects, such as lists, pages, spaces, and people
connections profiles. For example, mycrawladmin.
c. The Search User Username and Search User Password are the credentials of
the Oracle SES federation trusted entity. These credentials are created when
Oracle SES is installed. Each Oracle SES instance must have a trusted entity that allows
WebCenter Portal end users to be securely propagated at search time. A
trusted entity allows the WebCenter Portal application to authenticate itself to
Oracle SES and assert its users when making queries on Oracle SES. This
trusted entity can be any user that either exists on the Identity Management
Server behind Oracle SES or is created internally in Oracle SES. For example,
wesearch.
d. In the SES Search URL field, enter the URL of the Search Administration Tool.
The format of this URL should be: http://search_server_listenAddress:search_server_listenPort.
For example, http://slc01rsk.us.example.com:5720.
Note:
After configuring the WebCenter Domain with SES, if you encounter an issue
with searching the portal, you must perform the following steps manually:
From the WebCenter Portal Oracle Home, copy
webcenter_portal_ses_admin.zip to the host where the SES Domain is present.
Unzip webcenter_portal_ses_admin.zip to extract the files facet.xml and
searchAttrSortable.xml.
Run the searchadmin command with these XML files as input, for example:
/scratch/SES/oracle/middleware/Oracle_SES1/bin/searchadmin -p welcome1 -c http://slc01rsk.us.example.com:5720/search/api/admin/AdminService createAll facetTree -i facet.xml
/scratch/SES/oracle/middleware/Oracle_SES1/bin/searchadmin -c http://slc01rsk.us.example.com:5720/search/api/admin/AdminService -p welcome1 updateAll searchAttr -a overwrite -i searchAttrSortable.xml
Take a snapshot of the WebCenter Content server manually.
10. Click Next to schedule the procedure. If you click Submit, the procedure is
submitted for execution right away. Click Save to save this as a template; this
feature is particularly useful with lockdowns. For example, you can create a
template with lockdowns and allow other users with operator privileges to run the
template multiple times with minor modifications.
Click Cancel to exit the procedure configuration.
11. After submitting, you can track the progress of the provisioning operation from
the Procedure Activity page. For more information about this, see About
Deployment Procedures.
12. To view the newly provisioned target, from the Targets menu, select Middleware.
Note:
Oracle recommends that you allow the default option Typical to remain
selected and provide all the details. After that, you can click Advance
to customize your destination environment. This way, the default values for
most of the parameters appear pre-populated, and you need to enter only
the remaining (delta) details. Also, note that if you switch from Advance mode
to Typical mode, you will lose all the changes that you have made so far.
1. On Cloud Control, from the Enterprise menu, select Provisioning and Patching, and
then click Middleware Provisioning.
4. In the Hosts section, search and select the destination hosts on which the
Middleware Home and WebLogic Domain need to be cloned. Click +Add to add
the target hosts, and provide the login credentials for them. If you have selected
multiple hosts, and the login credentials for all of them are the same, then you can
select Same Credentials for all.
5. In the Domain section, the configuration for the domain is displayed by default.
You can change the following attributes to customize the domain properties:
• Domain Name: The name of the domain. The generated components for the
domain are stored under the specified Domain directory. For example, if you
enter mydomain, your domain files are stored (by default) in MW_HOME
\user_projects\domains\mydomain. Ensure that you provide a unique domain
name.
• Farm Name: The farm name is derived from the domain name. For example, if
the domain name is base_domain, then the farm name would be farm_base_domain.
• Domain Mode: The domain can operate in one of the following modes:
Production: The domain is used for production. In this mode, the security
configuration is relatively stringent, requiring a username and password to
deploy applications.
Development: The domain is used for development. In this mode, the security
configuration is relatively relaxed, allowing you to auto-deploy applications.
• Server Startup Mode: You can start the servers in one of the following modes,
depending on your requirement:
Start all Servers and Node Managers: This is the default option. Typically,
you will select this option if you have no changes to make, and if the
procedure has run as expected.
Start only Administration Server: This option starts only the Administration
Server and Node Manager. Typically, you will select this option if you want to
add a custom step to invoke a WLST online script, and then start the servers.
Do not start any Server or Node Manager: This option does not start any
server or Node Manager. Typically, you will select this option if you have to
customize the domain before starting any server.
6. In the Clusters section, you can modify the name of the cluster and enter the cluster
address that identifies the Managed Servers in the cluster. You can select either
Unicast or Multicast as the messaging mode. If you select Multicast mode, enter
the address and port number that will be dedicated for multicast communications
on the cluster. Click +Add to add one or more clusters to the configuration.
• Node Manager Home: The directory in which the Node Manager is installed.
By default, the Node Manager is installed under the parent directory of the
Middleware Home directory, but this can be modified.
Note that the Node Manager home must always be installed inside the
Administration Server domain home.
• Node Manager Listen Address: Enter the listen address used by Node
Manager to listen for connection requests. By default, the IP addresses defined
for the local system and localhost are shown in the drop-down list. The default
value is the same as specified in the source domain. Note that if multiple
machines are running on the same host, the Node Manager Home location
must be different for each machine.
• Node Manager Listen Port: Enter a valid value for the listen port used by
Node Manager to listen for connection requests. The valid Node Manager
listen port range is 1 to 65535. The default value is 5556. The port number must
be available on the destination machine.
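To confirm that the chosen Node Manager listen port is actually free on a destination host, a quick, approximate check such as the following can be run there (5556 is the default mentioned above):
netstat -an | grep -w 5556 || echo "port 5556 appears to be free"
If the grep returns a LISTEN entry for the port, pick a different value here.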
8. In the Server section, enter the configuration information for the Administration
Server and one or more Managed Servers.
• Coherence Port: This field is enabled only when you select the Configure
Coherence option. You can retain the port number that is populated by default
or change it.
• Listen Port: By default, this option is selected. The values for the listen ports
are pre-populated. You may enter any value from 1 to 65535. The port number
you enter here must be available on the destination machine.
Note: If a domain was registered on the host with a port number whose status
is down, you need to select a different port or manually de-register the domain
before launching the deployment procedure.
• SSL Listen Port: If you enable the SSL Listen Port, enter the port number of the
SSL Listen Port for secure requests. You must ensure that the port numbers
you specify for the Listen Port and SSL Listen Port are available. If you are
using the SSL configuration, you must ensure that the security/identity stores
are present in the file system under the same path as on the source, and are
configured with certificates generated for the destination hosts (a keytool sketch follows this list).
• Host: Select the host on which the Administration Server or Managed Server is
to be installed.
• Server Start: Click to enter the Server Startup Parameters. Usually, the Node
Manager can start a server without requiring you to specify startup options;
however, because you have customized your environment, you must specify the
startup options in the Server Startup Parameter dialog box.
• Domain Directory for Managed Server: The directory in which the Managed
Servers are installed. By default, they are installed under the parent directory
of the Middleware Home directory, but this can be modified.
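For the SSL case mentioned above, the identity store on each destination host must contain a certificate issued for that host. If you only need to stage a self-signed identity store for a test environment, a keytool command along these lines can be used (the alias, distinguished name, keystore path, and passwords are illustrative):
keytool -genkeypair -alias apphost3 -keyalg RSA -keysize 2048 -validity 365 -dname "CN=apphost3.example.com,OU=FMW,O=Example,C=US" -keystore /u01/keystores/identity.jks -storepass <storepass> -keypass <keypass>
For production domains, import certificates signed by your certificate authority instead of using self-signed ones.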
9. In the JMS Servers section, click +Add to add new JMS persistent stores and JMS
servers. The storage type can be one of the following:
• A JMS file store: a disk-based file in which persistent messages can be saved.
You can modify the JMS file stores configured in your domain.
• A JDBC store, for which you specify the following:
– Data Source
– Targets
10. In the Database section, choose an existing schema and provide the schema
password. Select Same password for all option if you would like to use the same
username and password for all the data sources on the selected database target.
11. (optional) In the Identity and Security section, enter the OID target name
and the OID credentials. These are mandatory fields if you want to create an LDAP
Authenticator and/or reassociate the Domain Credential Store.
Note:
If you are provisioning a WebCenter profile, this section is displayed only for
the Production Topology which is the default option.
In addition to this, you must provide the following sets of inputs in the OID
section:
– User Base DN: Specify the DN under which your Users start. For example,
cn=users,dc=us,dc=mycompany,dc=com
– Group Base DN: Specify the DN that points to your Groups node. For
example: cn=groups,dc=us,dc=mycompany,dc=com
Note:
As a prerequisite, you must have already provisioned the users and groups in
the LDAP.
• Configure Security Store Inputs: In this section, provide the JPS Root Node
information. The JPS root node is the target LDAP repository under which all
data is migrated. The format is cn=nodeName.
Note:
As a prerequisite, you must have already created the root node in the LDAP.
12. In the WebTier section, click +Add to search and select an Oracle HTTP Server
(OHS) target. In the Credential field provide the credentials to access the target. If
you have more than one Oracle HTTP Server, then you need to provide an
External Load Balancer URL.
The format of the URL for an External Load Balancer is: (http|https)://hostname:port
For example: http://wcp-prov.example.com:80
Note:
If you are provisioning a plain WebLogic Domain, this section will not be
displayed.
13. (For a WebCenter Production Domain only) To configure an SES Domain with a
WebCenter Production Domain, follow these steps:
Prerequisites:
• The SES Domain must be configured with the same OID as the WebCenter
Domain.
a. Provide the credentials for the Crawl Admin user in OID. You must enter the
same Crawl Administrator Username and Crawl Administrator Password
that were created as part of the prerequisite step. The Crawl Administration
users in Spaces, and in the Identity Management System, are required to
crawl certain Space objects, such as lists, pages, spaces, and people
connections profiles. For example, mycrawladmin.
c. The Search User Username and Search User Password are the credentials of
the Oracle SES federation trusted entity. These credentials are created when
Oracle SES is installed. Each Oracle SES instance must have a trusted entity that allows
WebCenter Portal end users to be securely propagated at search time. A
trusted entity allows the WebCenter Portal application to authenticate itself to
Oracle SES and assert its users when making queries on Oracle SES. This
trusted entity can be any user that either exists on the Identity Management
Server behind Oracle SES or is created internally in Oracle SES. For example,
wesearch.
d. In the SES Search URL field, enter the URL of the Search Administration Tool.
The format of this URL should be: http://search_server_listenAddress:search_server_listenPort.
For example, http://slc01rsk.us.example.com:5720.
Note:
After configuring the WebCenter Domain with SES, if you encounter an issue
with searching the portal, you must perform the following steps manually:
From the WebCenter Portal Oracle Home, copy
webcenter_portal_ses_admin.zip to the host where the SES Domain is present.
Unzip webcenter_portal_ses_admin.zip to extract the files facet.xml and
searchAttrSortable.xml.
Run the searchadmin command with these XML files as input, for example:
/scratch/SES/oracle/middleware/Oracle_SES1/bin/searchadmin -p welcome1 -c http://slc01rsk.us.example.com:5720/search/api/admin/AdminService createAll facetTree -i facet.xml
/scratch/SES/oracle/middleware/Oracle_SES1/bin/searchadmin -c http://slc01rsk.us.example.com:5720/search/api/admin/AdminService -p welcome1 updateAll searchAttr -a overwrite -i searchAttrSortable.xml
Take a snapshot of the WebCenter Content server manually.
14. In the Custom Scripts section, you can select scripts stored as directives in the
Software Library to customize the deployment procedure. The following options are
available; you can choose whether or not to pass parameters to the scripts:
a. You can pass a script with input parameters. For more information, see Using
Custom Scripts with Input Parameters.
Note: If you pass an input parameter, ensure that the default Input File option
remains selected. For example, in the Pre Script field, you can choose the My
Custom Script With Parameters script that you created earlier. For more
information, see Storing Custom Scripts With Input Parameters.
The following is a sample input properties (input.properties) file:
ADMIN_SERVER_LISTEN_ADDRESS=slc01.example.com
ADMIN_SERVER_LISTEN_PORT=7001
ADMIN_PROTOCOL=t3
MIDDLEWARE_HOME=/scratch/usr1/soa/middleware
b. Alternatively, you can pass a script without any input parameters. For more
information, see Using Custom Scripts Without Input Parameters.
Note: If you do not want to pass an input parameter, deselect the Input File
option. For example, in the Pre Script field, you can choose the My Custom
Script Without Parameters script that you created earlier. For more
information, see Storing Custom Scripts Without Input Parameters.
You can pass the following scripts to customize your procedure:
• Pre Script: This script runs soon after the prerequisite checks and before the
Oracle Home or WebLogic Domain is deployed.
• Post Administration Server Start Script: This script runs after the
Administration Server has been started.
• Post Script: This script runs after all the Managed Servers have started.
15. Click Next to schedule the procedure. If you click Submit, the procedure is
submitted for execution right away. Click Save to save this as a template; this
feature is particularly useful with lockdowns. For example, you can log in with
the Designer role, create a template with lockdowns, and assign the required
privileges to other administrators or operators to run the template. Click Cancel to
exit the procedure configuration.
16. After submitting, you can track the progress of the provisioning operation from
the Procedure Activity page. For more information about this, see About
Deployment Procedures.
17. To view the newly provisioned target, from the Targets menu, select Middleware.
Note:
To provide inputs and further customize the destination environment, click
Advance. To understand the settings and configuration parameters that can be
customized, see Customizing the Destination Environment from an Existing
WebLogic Domain Based-Profile.
Note:
For more information on cloning a database in WebLogic domain, see Use
Case 6 - Cloning a database in WebLogic Domain
1. In Cloud Control, from Enterprise menu, select Provisioning and Patching, and
then click Middleware Provisioning.
2. On the Middleware Provisioning home page, from the Profiles table select a
WebLogic Domain profile, then click Provision.
3. On the Provision Fusion Middleware page, in the General section, the WebLogic
Domain profile appears pre-selected.
4. In the Hosts section, search and select the destination hosts where the cloned
WebLogic Domain will reside. Click +Add to add the target hosts, and provide
the login credentials for them. If you have selected multiple hosts, and the login
credentials for all of them are the same, then you can select Same Credentials for
all.
Note: The domain configuration for the destination host and the source host must
be exactly the same for the cloning operation to be successful. For example, if the
source domain has servers on two different hosts, then you will need to select two
different destination hosts. You will be prompted to do so before you proceed.
5. In the Middleware section the value for Java Home appears pre-populated.
Provide the domain Administrator credentials.
6. In the Database section, select the cloned database target, and provide the schema
password.
Note:
This section appears only if the source database had data sources.
7. (optional) In the Identity and Security section, enter the OID target name and the
credential. As a prerequisite, you must have cloned the source OID to the
destination environment.
8. In the WebTier section, click +Add to search and select an Oracle HTTP Server
(OHS) target. In the Credential field provide the credentials to access the target. If
you have more than one Oracle HTTP Server, then you need to provide an
External Load Balancer URL.
The format of the URL for an External Load Balancer is: (http|https)://hostname:port
For example: http://wcp-prov.example.com:80
Note:
If you are provisioning a WebCenter target, you will additionally need to
provide an Internal Load Balancer URL.
Note:
If you are provisioning a plain WebLogic Domain, this section will not be
displayed.
• The SES Domain must be configured with the same OID as the WebCenter
Domain.
a. Provide the credentials for the Crawl Admin user in OID. You must enter the
same Crawl Administrator Username and Crawl Administrator Password
that were created as part of the prerequisite step. The Crawl Administration
users in Spaces, and in the Identity Management System, are required to
crawl certain Space objects, such as lists, pages, spaces, and people
connections profiles. For example, mycrawladmin.
c. The Search User Username and Search User Password are the credentials of
the Oracle SES federation trusted entity. These credentials are created when
Oracle SES is installed. Each Oracle SES instance must have a trusted entity that allows
WebCenter Portal end users to be securely propagated at search time. A
trusted entity allows the WebCenter Portal application to authenticate itself to
Oracle SES and assert its users when making queries on Oracle SES. This
trusted entity can be any user that either exists on the Identity Management
Server behind Oracle SES or is created internally in Oracle SES. For example,
wesearch.
d. In the SES Search URL field, enter the URL of the Search Administration Tool.
The format of this URL should be: http://search_server_listenAddress:search_server_listenPort.
For example, http://slc01rsk.us.example.com:5720.
Note:
After configuring the WebCenter Domain with SES, if you encounter an issue
with searching the portal, you must perform the following steps manually:
From the WebCenter Portal Oracle Home, copy
webcenter_portal_ses_admin.zip to the host where the SES Domain is present.
Unzip webcenter_portal_ses_admin.zip to extract the files facet.xml and
searchAttrSortable.xml.
Run the searchadmin command with these XML files as input, for example:
/scratch/SES/oracle/middleware/Oracle_SES1/bin/searchadmin -p welcome1 -c http://slc01rsk.us.example.com:5720/search/api/admin/AdminService createAll facetTree -i facet.xml
/scratch/SES/oracle/middleware/Oracle_SES1/bin/searchadmin -c http://slc01rsk.us.example.com:5720/search/api/admin/AdminService -p welcome1 updateAll searchAttr -a overwrite -i searchAttrSortable.xml
Take a snapshot of the WebCenter Content server manually.
10. Click Next to schedule the procedure. If you click Submit, the procedure is
submitted for execution right away. Click Save to save this as a template; this
feature is particularly useful with lockdowns. For example, you can create a
template with lockdowns and allow other users with operator privileges to run the
template multiple times with minor modifications.
Click Cancel to exit the procedure configuration.
11. After submitting, you can track the progress of the provisioning operation from
the Procedure Activity page. For more information about this, see About
Deployment Procedures.
12. To view the newly provisioned target, from the Targets menu, select Middleware.
Note:
Oracle recommends that you allow the default option Typical to remain
selected and provide all the details. After that, you can click Advance
to customize your destination environment. This way, the default values for
most of the parameters appear pre-populated, and you need to enter only
the remaining (delta) details. Also, note that if you switch from Advance mode
to Typical mode, you will lose all the changes that you have made so far.
1. In Cloud Control, from Enterprise menu, select Provisioning and Patching, and
then click Middleware Provisioning.
3. On the Middleware Provisioning page, in the General section, select the WebLogic
domain profile that contains the details of the source domain to be cloned.
By default, you will provision the WebLogic domain along with the Oracle Home. To
provision only the domain (Bitless profile), you must deselect Provision
Middleware Home and select Provision Domain. Alternatively, to provision only
the Middleware Home, retain the Provision Middleware Home option and deselect
Provision Domain.
The Use Shared Storage option is particularly useful when you have multiple
destination hosts. This option allows you to use a mounted location that is
accessible by all the hosts.
4. In the Hosts section, search and select the destination hosts where the cloned
WebLogic Domain will reside. Click +Add to add the target hosts, and provide
the login credentials for them. If you have selected multiple hosts, and the login
credentials for all of them are the same, then you can select Same Credentials for
all.
Note: The domain configuration for the destination host and the source host must
be exactly the same for the cloning operation to be successful. For example, if the
source domain has servers on two different hosts, then you will need to select two
different destination hosts. You will be prompted to do so before you proceed.
5. In the Middleware section, by default, the values for Middleware Home and Java
Home are pre-populated. You can change the location details if required.
• For Middleware Home, enter the full path to the directory in which the
Middleware Home is to be created.
• For Java Home, enter the absolute path to the JDK directory to be used on the
destination Host. You need to specify this path if a similar configuration is
detected on the source machine.
6. In the Domain section, the configuration for the source domain is displayed by
default. You can change the following attributes to customize the domain
properties:
a. Domain Name: The name of the domain. The generated components for the
domain are stored under the specified Domain directory. For example, if you
enter mydomain, your domain files are stored (by default) in MW_HOME
\user_projects\domains\mydomain.
g. Server Startup Mode: You can start the servers in one of the following modes,
depending on your requirement:
Start all Servers and Node Managers: This is the default option. Typically,
you will select this option if you have no changes to make, and if the
procedure has run as expected.
Start only Administration Server: This option starts only the Administration
Server and Node Manager. Typically, you will select this option if you want to
add a custom step to invoke a WLST online script, and then start the servers.
Do not start any Server or Node Manager: This option does not start any
server or Node Manager. Typically, you will select this option if you have to
customize the domain before starting any server.
7. In the Clusters section, all the clusters available in the source domain are
provisioned on the destination host.
8. In the Machines section, the configuration information from the source domain is
pre-populated. A Machine is a logical representation of the system that hosts one
or more WebLogic Server instances. The Administration Server and Node
Manager use the Machine definition to start remote servers. You cannot
customize any of the values here.
9. In the Servers section, all the configuration information for the Administration
Server and Managed Servers is picked up from the source domain. You can
customize the following:
a. Select the Configure Coherence check box.
b. Coherence Port: The port value is pre-populated and cannot be customized.
c. Custom Identity and Custom Trust: Use this option to specify custom
certificates when configuring a domain in SSL mode.
10. In the JMS Servers section, all JMS servers configured for the source domain are
cloned on the destination hosts. You cannot customize any values here.
11. In the Database section, you can change the schema username and password for
your data sources.
Note:
This section appears only if the source database had data sources.
12. (optional) In the Identity and Security section, enter the OID target name and the
credential. As a prerequisite, you must have cloned the source OID to the
destination environment.
13. In the WebTier section, click +Add to search and select an Oracle HTTP Server
(OHS) target. In the Credential field provide the credentials to access the target. If
you have more than one Oracle HTTP Server, then you need to provide an
External Load Balancer URL.
The format of the URL for an External Load Balancer is: (http|https)://hostname:port
For example: http://wcp-prov.example.com:80
Note:
If you are provisioning a plain WebLogic Domain, this section will not be
displayed.
14. (For a WebCenter Production Domain only) To configure an SES Domain with a
WebCenter Production Domain, follow these steps:
Prerequisites:
• The SES Domain must be configured with the same OID as the WebCenter
Domain.
a. Provide the credentials for the Crawl Admin user in OID. You must enter the
same Crawl Administrator Username and Crawl Administrator Password
that were created as part of the prerequisite step. The Crawl Administration
users in Spaces, and in the Identity Management System, are required to
crawl certain Space objects, such as lists, pages, spaces, and people
connections profiles. For example, mycrawladmin.
c. The Search User Username and Search User Password are the credentials of
the Oracle SES federation trusted entity. These credentials are created when
Oracle SES is installed. Each Oracle SES instance must have a trusted entity that allows
WebCenter Portal end users to be securely propagated at search time. A
trusted entity allows the WebCenter Portal application to authenticate itself to
Oracle SES and assert its users when making queries on Oracle SES. This
trusted entity can be any user that either exists on the Identity Management
Server behind Oracle SES or is created internally in Oracle SES. For example,
wesearch.
d. In the SES Search URL field, enter the URL of the Search Administration Tool.
The format of this URL should be: http://search_server_listenAddress:search_server_listenPort.
For example, http://slc01rsk.us.example.com:5720.
Note:
After configuring the WebCenter Domain with SES, if you encounter an issue
with searching the portal, you must perform the following steps manually:
From the WebCenter Portal Oracle Home, copy
webcenter_portal_ses_admin.zip to the host where the SES Domain is present.
Unzip webcenter_portal_ses_admin.zip to extract the files facet.xml and
searchAttrSortable.xml.
Run the searchadmin command with these XML files as input, for example:
/scratch/SES/oracle/middleware/Oracle_SES1/bin/searchadmin -p welcome1 -c http://slc01rsk.us.example.com:5720/search/api/admin/AdminService createAll facetTree -i facet.xml
/scratch/SES/oracle/middleware/Oracle_SES1/bin/searchadmin -c http://slc01rsk.us.example.com:5720/search/api/admin/AdminService -p welcome1 updateAll searchAttr -a overwrite -i searchAttrSortable.xml
Take a snapshot of the WebCenter Content server manually.
15. In the Custom Scripts section, you can select scripts stored as directives in the
Software Library to customize the deployment procedure. The following options are
available; you can choose whether or not to pass parameters to the scripts:
a. You can pass a script with input parameters. For more information, see Using
Custom Scripts with Input Parameters.
Note: If you pass an input parameter, ensure that the default Input File option
remains selected. For example, in the Pre Script field, you can choose the My
Custom Script With Parameters script that you created earlier. For more
information, see Storing Custom Scripts With Input Parameters.
The following is a sample input properties (input.properties) file:
ADMIN_SERVER_LISTEN_ADDRESS=slc01.example.com
ADMIN_SERVER_LISTEN_PORT=7001
ADMIN_PROTOCOL=t3
MIDDLEWARE_HOME=/scratch/usr1/soa/middleware
b. Alternatively, you can pass a script without any input parameters. For more
information, see Using Custom Scripts Without Input Parameters.
Note: If you do not want to pass an input parameter, deselect the Input File
option. For example, in the Pre Script field, you can choose the My Custom
Script Without Parameters script that you created earlier. For more
information, see Storing Custom Scripts Without Input Parameters.
You can pass the following scripts to customize your procedure:
• Pre Script: This script runs soon after the prerequisite checks and before the
Oracle Home or WebLogic Domain is deployed.
• Post Administration Server Start Script: This script runs after the
Administration Server has been started.
• Post Script: This script runs after all the Managed Servers have started.
16. Click Next to schedule the procedure. If you click Submit, the procedure is
submitted for execution right away. Click Save to save this as a template; this
feature is particularly useful with lockdowns. For example, you can create a
template with lockdowns and allow other users with operator privileges to run the
template multiple times with minor modifications.
Click Cancel to exit the procedure configuration.
17. After submitting, you can track the progress of the provisioning operation from
the Procedure Activity page. For more information about this, see About
Deployment Procedures.
18. To view the newly provisioned target, from the Targets menu, select Middleware.
This chapter describes how you can use the Middleware Provisioning solution offered
in Enterprise Manager Cloud Control to provision a SOA Domain and/or an Oracle
Home.
In particular, this chapter contains the following topics:
• Use Case 2: Provisioning from a SOA Oracle Home Based Provisioning Profile
• Use Case 3: Cloning from a Provisioning Profile based on an Existing SOA Domain
24.1 Getting Started with Provisioning SOA Domain and Oracle Home
This section helps you get started by providing an overview of the steps involved in
provisioning WebLogic Domain and Middleware Home using the Fusion Middleware
Deployment procedure.
Step 2: Meeting Prerequisites to Provision a Middleware Profile
Before you run the Fusion Middleware Deployment Procedure, there are a few prerequisites that you must meet. To learn about the prerequisites for provisioning a SOA domain or home, see Before you Begin Provisioning SOA Domain and Oracle Home.
Step 3: Running the Fusion Middleware Deployment Procedure
Run this deployment procedure to successfully provision a WebLogic Domain and/or an Oracle Home. To learn about provisioning from an Installation Media Profile or an Oracle Home Profile, see Provisioning of a new Fusion Middleware Domain from an Installation Media Based-Profile or an Oracle Home Based-Profile. To learn about provisioning from a WebLogic Domain Profile, see Provisioning a Fusion Middleware Domain from an Existing Oracle Home. To provision from an existing home, see Cloning from an Existing WebLogic Domain Based-Profile. To scale out from a SOA domain, see Scaling Up / Scaling Out Fusion Middleware Domains.
• Source and Destination Environments for a Fresh SOA Provisioning Use Case
24.2.1 Source and Destination Environments for a Fresh SOA Provisioning Use Case
For a fresh SOA provisioning use case, before you begin, you must ensure that you
have met the following topology requirements:
Note:
Ensure that Oracle HTTP Server, APPHOST1, APPHOST2, and the RAC
database are being monitored as managed targets in Cloud Control.
After running the Fusion Middleware Deployment Procedure, all the products that are
displayed inside the green box in the destination environment get provisioned.
24.2.2 Source and Destination Environments for SOA Cloning Use Case
For a SOA cloning use case, before you begin, ensure that you have met the following
topology requirements:
• If the source environment is configured with Oracle Internet Directory (OID), then the OID must be
cloned and discovered.
Note:
Ensure that Oracle HTTP Server, APPHOST3, APPHOST4, and the RAC
database are being monitored as managed targets in Cloud Control.
For the cloning case, WLS_SOA1 and WLS_SOA2, which are part of APPHOST1 and
APPHOST2 respectively, are cloned to APPHOST3 and APPHOST4. The RAC DB is
cloned separately.
Product: Oracle Repository Creation Utility (RCU)
Version: 11g
24.4 Before you Begin Provisioning SOA Domain and Oracle Home
Keep the following points in mind before you start creating middleware profiles
and provisioning from these profiles.
In particular, this section contains the following topics:
• Setting Named Credentials and Privileged Credentials for the Middleware Targets
• (Applicable only for a Cloning WebLogic Domain Use Case) Cloning a Database
For instructions to create administrators with these roles, see Creating Enterprise
Manager User Accounts.
24.4.2 Setting Named Credentials and Privileged Credentials for the Middleware Targets
Oracle recommends that you set the Named Credentials for normal operating system
user account (Oracle) and Named Credentials for privileged user accounts (root) to
perform any of the provisioning tasks in Enterprise Manager Cloud Control.
For instructions to set the Named Credentials, see Setting Up Credentials.
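Named credentials can also be created from EM CLI rather than the console. A minimal sketch for a normal host credential (the credential name, user name, and password are illustrative) is:
emcli create_named_credential -cred_name=NC_ORACLE_HOST -auth_target_type=host -cred_type=HostCreds -attributes="HostUserName:oracle;HostPassword:<password>"
Privileged (root) credentials are created the same way, using the credential type and attributes that match your sudo or SSH setup.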
24.4.3 (Applicable only for a Cloning WebLogic Domain Use Case) Cloning a Database
You must have cloned a database from the source domain at the same time that the
domain was being cloned. To clone a database, you must have discovered the source
database as a managed target in Enterprise Manager, following which you can create a
profile out of the source database, and then provision the profile to complete the
cloning process.
Note:
If you use a Windows machine to provision the SOA Domain, after you have
discovered the source SOA domain, you must bring the Node Manager down,
and only then proceed with the SOA Domain Profile creation.
2. (optional) You may choose to create some lock-downs and save the profile as a
template after it passes the prerequisite checks. Doing so can be useful when you
have to run the same profile multiple times for provisioning middleware products.
The added benefit of saving the profile as a template is that you can grant access
to Operators so that they can run the profiles and carry out the Middleware
Provisioning.
If you have not created a template out of the profile, you can select your profile
from the Profiles table on the Middleware Provisioning page, then click Provision.
5. For provisioning a SOA Domain and Oracle Home from an Installation Media,
follow the steps mentioned in Provisioning of a new Fusion Middleware Domain
from an Installation Media Based-Profile or an Oracle Home Based-Profile.
6. If you want to customize the settings in the destination environment, follow the
steps mentioned in Customizing the Destination Environment from an Installation
Media Based-Profile or an Oracle Home Based-Profile.
2. Before you provision from a middleware profile based on an Oracle Home, meet
the prerequisites mentioned in Prerequisites for Provisioning the Installation Media
Profile or the Oracle Home Profile.
3. Select the profile from the Profiles table on the Middleware Provisioning page, then
click Provision.
4. For creating a clone of an existing domain's Oracle Home (with binaries and
patches) but no domain configuration, follow the steps mentioned in Provisioning
of a new Fusion Middleware Domain from an Installation Media Based-Profile or
an Oracle Home Based-Profile.
5. If you want to customize the settings in the destination environment, follow the
steps mentioned in Customizing the Destination Environment from an Installation
Media Based-Profile or an Oracle Home Based-Profile.
3. Select the profile from the Profiles table on the Middleware Provisioning page, then
click Provision.
4. For provisioning a SOA Domain and Oracle Home from a profile, follow the steps
mentioned in Cloning from an Existing WebLogic Domain Based-Profile.
5. If you want to customize the settings in the destination environment, follow the
steps mentioned in Customizing the Destination Environment from an Existing
WebLogic Domain Based-Profile.
Database cloning for WebLogic Server, SOA, Oracle Service Bus, and WebCenter
domains can be accomplished in the following ways:
1. The user can manually clone the database from the source domain to the
destination domain before provisioning. The destination database inherits the
schema from the original database.
2. The user can choose to export the database metadata as part of provisioning. This
exports the product metadata necessary to clone the domain. The user can clone
the application related schema later. Exporting the metadata is supported from
SOA release 11g onwards and the schema creation is supported from SOA release
12c onwards.
Note:
While cloning the WebLogic domain, the entire database along with its schema
must be cloned to the destination domain before provisioning. Otherwise, the
cloning fails.
The following tables list the mapping of database cloning process supported with
schema for the respective domains and releases.
Table 24-3 Database cloning process with schema supported for SOA domain
Table 24-4 Database cloning process with schema supported for Service Bus
domain
Table 24-5 Database cloning process with schema supported for WebCenter
domain
Table 24-6 Database cloning process with schema supported for JRF domain
Case: User exports metadata during profile creation and provides a blank database with no schema during provisioning. The schema is created through the Provisioning user interface in release 12c onwards. (JRF: No)
Case: User exports metadata during profile creation and provides a blank database with an empty schema pre-created during provisioning. (JRF: No)
This chapter describes how you can use the Middleware Provisioning solution offered
in Enterprise Manager Cloud Control to provision a Service Bus Domain and/or an
Oracle Home.
In particular, this chapter contains the following topics:
• Getting Started with Provisioning Service Bus Domain and Oracle Home
• Before you Begin Provisioning Service Bus Domain and Oracle Home
• Use Case 2: Provisioning from a Service Bus Home Based Provisioning Profile
• Use Case 3: Cloning from a Provisioning Profile based on an Existing Service Bus
Domain
25.1 Getting Started with Provisioning Service Bus Domain and Oracle
Home
This section helps you get started by providing an overview of the steps involved in
provisioning WebLogic Domain and Middleware Home using the Fusion Middleware
Deployment procedure.
Step 2: Meeting Prerequisites to Provision a Middleware Profile
Before you run the Fusion Middleware Deployment Procedure, there are a few prerequisites that you must meet. To learn about the prerequisites for provisioning a Service Bus domain or home, see Before you Begin Provisioning Service Bus Domain and Oracle Home.
Step 3: Running the Fusion Middleware Deployment Procedure
Run this deployment procedure to successfully provision a WebLogic Domain and/or an Oracle Home. To learn about provisioning from an Installation Media Profile or an Oracle Home Profile, see Provisioning of a new Fusion Middleware Domain from an Installation Media Based-Profile or an Oracle Home Based-Profile. To learn about provisioning from a WebLogic Domain Profile, see Provisioning a Fusion Middleware Domain from an Existing Oracle Home. To provision from an existing home, see Cloning from an Existing WebLogic Domain Based-Profile. To scale out from a Service Bus domain, see Scaling Up / Scaling Out Fusion Middleware Domains.
Product: Oracle Repository Creation Utility (RCU)
Version: 11g
25.3 Before you Begin Provisioning Service Bus Domain and Oracle
Home
Keep the following points in mind before you start creating middleware profiles
and provisioning from these profiles.
In particular, this section contains the following topics:
• Setting Named Credentials and Privileged Credentials for the Middleware Targets
• (Applicable only for a Cloning WebLogic Domain Use Case) Cloning a Database
For instructions to create administrators with these roles, see Creating Enterprise
Manager User Accounts.
25.3.2 Setting Named Credentials and Privileged Credentials for the Middleware Targets
Oracle recommends that you set the Named Credentials for normal operating system
user account (Oracle) and Named Credentials for privileged user accounts (root) to
perform any of the provisioning tasks in Enterprise Manager Cloud Control.
For instructions to set the Named Credentials, see Setting Up Credentials.
25.3.3 (Applicable only for a Cloning WebLogic Domain Use Case) Cloning a Database
You must have cloned a database from the source domain at the same time that the
domain was being cloned. To clone a database, you must have discovered the source
database as a managed target in Enterprise Manager, following which you can create a
profile out of the source database, and then provision the profile to complete the
cloning process.
2. (optional) You may choose to create some lock-downs and save the profile as a
template after it passes the prerequisite checks. Doing so can be useful when you
have to run the same profile multiple times for provisioning middleware products.
The added benefit of saving the profile as a template is that you can grant accesses
to Operators so they can run the profiles and carry out the Middleware
Provisioning.
If you have not created a template out of the profile, you can select your profile
from the Profiles table on the Middleware Provisioning page, then click Provision.
5. For provisioning a Service Bus Domain and Oracle Home from an Installation
Media, follow the steps mentioned in Provisioning of a new Fusion Middleware
Domain from an Installation Media Based-Profile or an Oracle Home Based-Profile.
6. If you want to customize the settings in the destination environment, follow the
steps mentioned in Customizing the Destination Environment from an Installation
Media Based-Profile or an Oracle Home Based-Profile.
2. Before you provision a middleware profile based on an Oracle Home, meet the
prerequisites mentioned in Prerequisites for Provisioning the Installation Media
Profile or the Oracle Home Profile.
3. Select the profile from the Profiles table on the Middleware Provisioning page, then
click Provision.
4. For creating a clone of an existing domain's Oracle Home (with binaries and
patches) but no domain configuration, follow the steps mentioned in Provisioning
of a new Fusion Middleware Domain from an Installation Media Based-Profile or
an Oracle Home Based-Profile.
5. If you want to customize the settings in the destination environment, follow the
steps mentioned in Customizing the Destination Environment from an Installation
Media Based-Profile or an Oracle Home Based-Profile.
3. Select the profile from the Profiles table on the Middleware Provisioning page, then
click Provision.
4. For provisioning a Service Bus Domain and Oracle Home from a profile, follow the
steps mentioned in Cloning from an Existing WebLogic Domain Based-Profile.
5. If you want to customize the settings in the destination environment, follow the
steps mentioned in Customizing the Destination Environment from an Existing
WebLogic Domain Based-Profile.
This chapter describes how you can use the Middleware Provisioning solution offered
in Enterprise Manager Cloud Control to provision a WebCenter Domain and/or an
Oracle Home.
In particular, this chapter contains the following topics:
Step 2: Meeting Prerequisites to Provision a Middleware Profile
Before you run the Fusion Middleware Deployment Procedure, there are a few prerequisites that you must meet. To learn about the prerequisites for provisioning a WebCenter domain or home, see Before you Begin Provisioning WebCenter Domain and Oracle Home.
Step 3: Running the Fusion Middleware Deployment Procedure
Run this deployment procedure to successfully provision a WebLogic Domain and/or an Oracle Home. To learn about provisioning from an Installation Media Profile or an Oracle Home Profile, see Provisioning of a new Fusion Middleware Domain from an Installation Media Based-Profile or an Oracle Home Based-Profile. To learn about provisioning from a WebLogic Domain Profile, see Provisioning a Fusion Middleware Domain from an Existing Oracle Home. To provision from an existing home, see Cloning from an Existing WebLogic Domain Based-Profile. To scale out from a WebCenter domain, see Scaling Up / Scaling Out Fusion Middleware Domains.
Task: 1. WebCenter Provisioning - Fresh Install
Description: This use case is useful if you want to provision a fresh vanilla WebCenter (Oracle WebCenter Portal + Oracle WebCenter Content) environment.
Task: 2. WebCenter Provisioning - Cloning (like-to-like)
Description: This use case is useful if you already have an installed WebCenter environment (Oracle WebCenter Portal + Oracle WebCenter Content), and at a later point want to copy/clone from that environment.
Production Topology (HA): WCP, WCC, DB, OHS, LBR, SES, LDAP; DB, OHS, OTD, SES, LDAP w/SSO
The production topology supports a multi-node cluster environment, which means that you
can select any number of nodes per cluster. After provisioning, you can even
reconfigure the node count for the clusters.
26.3.1 Source and Destination Environments for a Fresh WebCenter Provisioning Use
Case
Before you begin, ensure that you have met the following topology requirements:
Note:
Ensure that Oracle HTTP Server, APPHOST1, APPHOST2, and the RAC
database are being monitored as managed targets in Cloud Control.
26.3.2 Source and Destination Environments for WebCenter Cloning Use Case
Before you begin, ensure that you have met the following topology requirements:
• If the source environment is configured with Oracle Internet Directory (OID), then the OID must be
cloned and discovered.
Note:
Ensure that Oracle HTTP Server, APPHOST3, APPHOST4, and the RAC
database are being monitored as managed targets in Cloud Control.
Product: Oracle Repository Creation Utility (RCU)
Version: 11g
26.5 Before you Begin Provisioning WebCenter Domain and Oracle Home
Keep the following points in mind before you start creating middleware profiles
and provisioning from these profiles.
Note:
To provision a WebCenter Domain using LDAP in the Production mode,
ensure that a Weblogic user is present with Administrator group privileges in
the LDAP.
Ensure that you have identical topology of servers on all nodes in a single
cluster. For example, in case of a two node cluster, the same set of servers
must be available on node 1 and node 2 of the cluster.
• Setting Named Credentials and Privileged Credentials for the Middleware Targets
• (Applicable only for a Cloning WebLogic Domain Use Case) Cloning a Database
For instructions to create administrators with these roles, see Creating Enterprise
Manager User Accounts.
26.5.2 Setting Named Credentials and Privileged Credentials for the Middleware Targets
Oracle recommends that you set the Named Credentials for normal operating system
user account (Oracle) and Named Credentials for privileged user accounts (root) to
perform any of the provisioning tasks in Enterprise Manager Cloud Control.
For instructions to set the Named Credentials, see Setting Up Credentials.
26.5.3 (Applicable only for a Cloning WebLogic Domain Use Case) Cloning a Database
You must have cloned a database from the source domain at the same time that the
domain was being cloned. To clone a database, you must have discovered the source
database as a managed target in Enterprise Manager, following which you can create a
profile out of the source database, and then provision the profile to complete the
cloning process.
you do not wish to clone from a provisioning profile based upon an existing domain.
To do so, follow these steps:
3. (optional) While provisioning the profile, you can create a template from the inputs
that have already been entered in this deployment procedure. To do this, use
the lockdown feature offered in Enterprise Manager, which enables you to lock
selected inputs, thereby restricting other users from changing these values in the future. For
example, you can lock the values entered in the Middleware, Database, Identity
and Security, and WebTier sections. By saving the Deployment Procedure with
these lockdowns, other administrators can leverage the pre-filled values
and submit the procedure with very few inputs required from them.
If you have not created a template out of the profile, you can select your profile
from the Profiles table on the Middleware Provisioning page, then click Provision.
6. If you want to customize the settings in the destination environment, follow the
steps mentioned in Customizing the Destination Environment from an Installation
Media Based-Profile or an Oracle Home Based-Profile.
2. Before you provision a middleware profile based on an Oracle Home, meet the
prerequisites mentioned in Prerequisites for Provisioning the Installation Media
Profile or the Oracle Home Profile.
3. Log in with Designer/Operator privileges, select your profile from the Profiles
table on the Middleware Provisioning page, then click Provision.
4. For creating a clone of an existing domain's Oracle Home (with binaries and
patches) but no domain configuration, follow the steps mentioned in Provisioning
of a new Fusion Middleware Domain from an Installation Media Based-Profile or
an Oracle Home Based-Profile.
5. If you want to customize the settings in the destination environment, follow the
steps mentioned in Customizing the Destination Environment from an Installation
Media Based-Profile or an Oracle Home Based-Profile.
3. Log in with Designer/Operator Privileges, select your profile from the Profiles
table on the Middleware Provisioning page, then click Provision.
4. For provisioning a WebCenter Domain and Oracle Home from a profile, follow the
steps mentioned in Cloning from an Existing WebLogic Domain Based-Profile.
5. If you want to customize the settings in the destination environment, follow the
steps mentioned in Customizing the Destination Environment from an Existing
WebLogic Domain Based-Profile.
This chapter describes how you can use the Enterprise Manager Command Line
Interface (EM CLI) offered by Oracle to create, describe, list, delete, and
customize Middleware Profiles.
In particular, this chapter covers the following:
-name
Name of the WebLogic Domain profile.
-ref_target
Name of the reference target used to create the WebLogic Domain profile.
-description
A short description for the WebLogic Domain profile you create.
-oh_cred
Named credential that will be used to access the reference host.
Format: CREDENTIAL_NAME:CREDENTIAL_OWNER.
All operations will be performed on the Administration Server host.
Credentials of the Oracle Home owner on the Administration Server host are
required.
If no named credential is provided, then preferred host credentials for the
Oracle Home target will be used.
-wls_cred
Named credential used to access the Administration Server. This is an
optional parameter.
To pass the credential parameter, enter a name:value pair in the following
format:
credential_name:credential_owner.
Where,
Credential_name is the name of the named credential.
Credential_owner is the credentials of the Administrator of the
WebLogic Domain.
All operations are performed in online mode (using T2P) in case of a Fusion
Middleware domain.
If no named credential is provided, the preferred administrator credentials
for the domain target will be used.
-includeOh
Whether the Oracle Home binaries have to be included in the profile or not.
-schedule
The schedule for the Deployment Procedure.
If not specified, the procedure will be executed immediately.
start_time: when the procedure should start.
tz: the timezone ID.
grace_period: grace period in minutes.
Output:
For example, you will see the following attributes when the profile has been submitted
successfully:
– instance_name: 'CreateFmwProfile-
SoaProfile_SYSMAN_07_09_2014_11_36_AM'
– instance_guid: 'FDC7FC56E2CF2972E04373B1F00A1512'
Note:
To track the status of the profile being created, use the command:
emcli get_instance_status -instance=FDC7FC56E2CF2972E04373B1F00A1512 -xml -details -showJobOutput
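Putting the parameters above together, a minimal invocation might look like the following; the profile name, reference WebLogic Domain target, and credential name are illustrative, and the command assumes the create_fmw_domain_profile verb whose parameters are described above:
emcli create_fmw_domain_profile
-name="SoaProfile"
-ref_target="/Farm01_base_domain/base_domain"
-description="WebLogic Domain profile for the SOA domain"
-oh_cred="MY_HOST_CRED:SYSMAN"
If -oh_cred is omitted, the preferred host credentials for the Oracle Home target are used, as described above.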
-ref_target
Name of the Oracle Home target used as reference to create the profile.
-description
A short description for the Oracle home profile you create.
-oh_cred
Named credentials used to access the reference host. This is an optional parameter.
To pass the credential parameter, enter a name:value pair in the following format:
credential_name:credential_owner.
Where,
Credential_name is the name of the named credential.
Credential_owner is the credentials of the Oracle home owner on the Administration
Server host.
If no named credential is provided, the preferred host credential for the Oracle
home target will be used.
-schedule
The schedule for the Deployment Procedure.
If not specified, the procedure will be executed immediately.
To specify a value, enter:
start_time: when the procedure should start.
tz: the timezone ID.
grace_period: grace period in minutes.
Output:
You will see the following attributes when the profile has been submitted
successfully:
– instance_name: 'CreateFmwProfile-
SoaProfile_SYSMAN_07_09_2014_11_36_AM'
– instance_guid: 'FDC7FC56E2CF2972E04373B1F00A1512'
Note:
To track the status of the profile being created, use the command:
emcli get_instance_status -instance=FDC7FC56E2CF2972E04373B1F00A1512 -xml -details -showJobOutput
• An Oracle home profile called OhProfile1 is created from the specified reference
target, at the specified schedule, using preferred credentials.
emcli create_fmw_home_profile
-name="OhProfile1"
-ref_target="WebLogicServer_10.3.6.0_myhost.example.com_5033"
-schedule="start_time:2016/11/10 10:00;tz:America/New_York;grace_period:60"
-host
Name of the host target where the installation media files are stored.
-version
Version of the installation media files.
-platform
Platform for which the installation media is applicable.
-description
A short description for the installation media profile.
-host_cred
Named credentials used to access the reference host. This is an optional parameter.
To pass the credential parameter, enter a name:value pair in the following format:
credential_name:credential_owner.
where credential_name is the name of the named credential and credential_owner is
the owner of the credentials of the Oracle home owner on the Administration Server
host.
If no named credential is provided, the preferred host credentials for the host
target will be used.
-files
List of files that have to be uploaded to the Software Library. These files are passed
in the format product1:file1,file2;product2:file3, where files for the same product are
separated by commas and products are separated by semicolons. The Installation
Media profile supports the following products: WebLogic, SOA, OSB, and RCU. Note
that to create any of these profiles, you must upload the WebLogic media file.
• Upload the SOA and WLS installation media files to the Software Library from the
host myhost.example.com. Additionally, you must provide the platform and
version details for the installation media file. The profile called SOA+WLSInstaller
is created using the named credentials specified.
emcli create_inst_media_profile
-name="SOA+WLSInstaller"
-host="myhost.example.com"
-description="SOA 11.1.1.7.0 and WebLogic Server 10.3.6.0 installer"
-version="11.1.1.7.0"
-platform="Generic"
-host_cred="MY_HOST_CRED:SYSMAN"
-files="WebLogic:/u01/media/weblogic/wls1036_generic.jar;SOA:/u01/media/soa/
soa1.zip,/u01/media/soa/soa2.zip"
The Enterprise Manager Command Line Interface (EM CLI) enables you to access
Enterprise Manager functionality from text-based consoles (shells and command
windows) for a variety of operating systems. You can call Enterprise Manager
functionality using custom scripts, such as SQL*Plus, OS shell, Perl, or Tcl scripts,
thus easily integrating Enterprise Manager functionality with your company's
business processes.
Note:
Use the procedure below to generate a template input file based on
configuration deployment procedure functionality:
6. Run the following command to fetch an example input file which contains
all the values:
emcli describe_procedure_input -procedure=1965463666C90D9CE053E37BF00A8C65
7. Make the required changes to the input file including the password.
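The edited input file can then be submitted back to the deployment procedure. A
hedged sketch, reusing the example procedure GUID above and assuming the edited
file was saved as /tmp/fmw_input.properties (a placeholder path); confirm the
submit_procedure options in your EM CLI release:
emcli submit_procedure
-procedure=1965463666C90D9CE053E37BF00A8C65
-input_file=data:/tmp/fmw_input.properties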
emcli list_fmw_profiles
[-source_type="Profile Source"]
• To list all the Middleware Provisioning Profiles, run the following command:
emcli list_fmw_profiles
Output:
• To list all the WebLogic Domain Provisioning Profiles, run the following
command:
emcli list_fmw_profiles
-source_type=weblogic_domain
Output:
emcli list_fmw_profiles
[-source_type="Profile Source"]
• To list all the Oracle Home Provisioning Profiles, run the following
command:
emcli list_fmw_profiles
-source_type=oracle_home
Output:
• To list all the Installation Media Provisioning Profiles, run the following
command:
emcli list_fmw_profiles
-source_type=install_media
Output:
emcli describe_fmw_profile
-location="Profile Location"
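For example, to describe the SoaBitlessProfile profile shown in the sample output
below, you might run:
emcli describe_fmw_profile
-location="Fusion Middleware Provisioning/Profiles/SoaBitlessProfile"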
Output:
Location: Fusion Middleware Provisioning/Profiles/SoaBitlessProfile
Created By: SYSMAN
Created On: Sun, 6 Jul 2014
URN: oracle:defaultService:em:provisioning:
1:cmp:COMP_Component:SUB_FMWBundle:FD8AEF47A15C0874E04373B1F00AEC31:0.1
Attachments:
Oracle Homes:
Product: Oracle SOA
Version: 11.1.1.7.0
WebLogic Domain:
Domain Name: SoaDomain
Size: 611.45 MB
Products: Oracle SOA
Data Sources:
JMS Servers:
Output:
Oracle Homes:
Product: Oracle SOA
Version: 11.1.1.7.0
Output:
Location: Fusion Middleware Provisioning/Profiles/WebCenter Install Media
Description: WC 11.1.1.8.0 and WebLogic Server 10.3.6.0 installer and RCU
Created By: SYSMAN
Created On: Sun, 6 Jul 2014
URN: oracle:defaultService:em:provisioning:
1:cmp:COMP_Component:SUB_InstallationMedia:FD8AA7EEEEA369A5E04373B1F00AE77C:0.1
Source: Installation Media
Product: Oracle WebCenter
Platform: Generic Platform
Version: 11.1.1.8.0
Files:
Examples:
To delete a profile by the name MyProfile, run the following command:
emcli delete_fmw_profile
-location="Fusion Middleware Provisioning/Profiles/MyProfile"
This chapter describes how you can use REST APIs to create, describe, list, and delete
Middleware Profiles. All the operations that were earlier possible only from the Cloud
Control console are now additionally supported through REST request/response
interactions.
Profiles are like templates that you can create and store in Software Library. Once a
profile is created, it can be launched numerous times to provision a WebLogic Domain
and/or an Oracle Home. The advantage of using a profile is that you can ensure that
future WebLogic installations follow a standard, consistent configuration.
In particular, this chapter covers the following:
1. Perform the GET operation on the URL to view the inputs that you need to
provide to create a profile.
Table 28-1 GET Request Configuration for Creating a WebLogic Domain Profile
Feature Description
URL /em/websvcs/restful/extws/cloudservices/fmw/
provisioning/profile/create/weblogic_domain
Body None
Parameters/Constraints None
Example Response:
{
name: "Name of the profile to be created."
targetName: "Name of the WebLogic Domain target that will be used as reference
to create the profile."
description: "[Optional] Description of the profile that will be created."
credential: "[Optional] Named credential that will be used to access the
reference host. Format: CREDENTIAL_NAME:CREDENTIAL_OWNER. All operations will be
performed on the Administration Server host. Credentials of the Oracle Home
owner on the Administration Server host are required. If no named credential is
provided, then preferred host credentials for the Oracle Home target will be
used."
wlsCredential: "[Optional] Named credential that will be used to access the
Administration Server. Format: CREDENTIAL_NAME:CREDENTIAL_OWNER. If no named
credential is provided, then preferred administrator credentials for the domain
target will be used."
includeOh: "[Optional] Whether the Oracle Home binaries have to be included in
the profile or not. Value: true/false"
schedule: "[Optional] The schedule for the Deployment Procedure. If not
specified, the procedure will be executed immediately. Format:
start_time:yyyy/MM/dd HH:mm; [tz:{java timezone ID}]; [grace_period:xxx];
}
Feature Description
URL /em/websvcs/restful/extws/cloudservices/fmw/
provisioning/profile/create/weblogic_domain
Request header
Body {
"name":"DomainProfileViaRestAPI",
"targetName":"/Farm01_SoaDomain/SoaDomain",
"credential":"HOSTCRED:SYSMAN",
"description":"Domain Profile",
"includeOh":"false"
}
Parameters NA
Example Response:
{
instanceName: "CreateFmwProfile-DomainProfileViaRest_SYSMAN_07_09_2014_09_47_AM"
instanceGuid: "FDBDDB0C6767690CE04373B1F00A09E8"
executionGuid: "FDBDDB0C676A690CE04373B1F00A09E8"
executionUrl: "http://slc03qtn:7802/em/faces/core-jobs-
procedureExecutionTracking?
instanceGUID=FDBDDB0C6767690CE04373B1F00A09E8&showProcActLink=yes&executionGUID=F
DBDDB0C676A690CE04373B1F00A09E8"
name: "DomainProfileViaRestAPI"
targetName: "/Farm01_SoaDomain/SoaDomain"
description: "Domain Profile"
credential: "HOSTCRED:SYSMAN"
includeOh: "false"
}
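Any HTTP client can issue this request. As a minimal sketch using curl, assuming
basic authentication as the SYSMAN user and that the request body shown above has
been saved locally as domain_profile.json (the host, port, and file name are
placeholders):
curl -X POST -u sysman \
-H "Content-Type: application/json" \
-d @domain_profile.json \
"http://emhost.example.com:7802/em/websvcs/restful/extws/cloudservices/fmw/provisioning/profile/create/weblogic_domain"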
1. Perform the GET operation on the URL to view the inputs that you need to
provide to create a profile.
Table 28-3 GET Request Configuration for Creating an Oracle Home Profile
Feature Description
URL /em/websvcs/restful/extws/cloudservices/fmw/
provisioning/profile/create/oracle_home
Body NA
Parameters/Constraints NA
Example Response:
{
name: "Name of the profile to be created."
targetName: "Name of the Oracle Home target that will be used as reference to
Table 28-4 POST Request Configuration for Creating an Oracle Home Profile
Feature Description
URL /em/websvcs/restful/extws/cloudservices/fmw/
provisioning/profile/create/oracle_home
Body {
"name":"OhProfileViaRestAPI",
"targetName":"WebLogicServer10_3_6_0_slc00tkv.example.com_631
6",
"credential":"HOSTCRED:SYSMAN",
"description":"OH Profile"
}
Parameters NA
Example Response:
{
instanceName: "CreateFmwProfile-OhProfileViaRestAPI_SYSMAN_07_09_2014_09_54_AM"
instanceGuid: "FDC6776FB9B31CCDE04373B1F00A2B1C"
executionGuid: "FDC6776FB9B61CCDE04373B1F00A2B1C"
executionUrl: "http://slc03qtn:7802/em/faces/core-jobs-
procedureExecutionTracking?
instanceGUID=FDC6776FB9B31CCDE04373B1F00A2B1C&showProcActLink=yes&executionGUID=F
DC6776FB9B61CCDE04373B1F00A2B1C"
name: "OhProfileViaRestAPI"
targetName: "WebLogicServer10_3_6_0_slc00tkv.example.com_6316"
description: "OH Profile"
credential: "HOSTCRED:SYSMAN"
}
1. Perform the GET operation on the URL to view the inputs that you need to
provide to create a profile.
Feature Description
URL /em/websvcs/restful/extws/cloudservices/fmw/
provisioning/profile/create/install_media
Body NA
Parameters/Constraints NA
Example Response:
{
name: "Name of the profile to be created."
targetName: "Name of the Host target that where all the installation files are
stored."
description: "[Optional] Description of the profile that will be created."
credential: "[Optional] Named credential that will be used to access the files.
Format: CREDENTIAL_NAME:CREDENTIAL_OWNER. If no named credential is provided,
then normal preferred credentials for the Host target will be used."
platform: "Platform for which the installation media is applicable."
version: "Version of the installation media."
-files: [2]
-0:
{
product: "Product1"
-files: [1]
0: "file1"
}
-1:
{
product: "Product2"
-files: [2]
0: "file2"
1: "file3"
}
}
Feature Description
URL /em/websvcs/restful/extws/cloudservices/fmw/
provisioning/profile/create/install_media
Body {
"name":"ImProfileViaRestAPI",
"targetName":"slc03qtn.example.com",
"platform":"2000",
"version":"11.1.1.8.0",
"credential":"HOSTCRED:SYSMAN",
"description":"IM Profile",
"files":
[
{
"product":"WebLogic",
"files":
[ "/net/adcnas438/export/emgpqa/provisioning/fmwprov/
linux64/shiphomes/upinsmed_wls/wls1036_generic.jar"
]
},
{
"product":"WCC",
"files":
[
"/net/slcnas478/export/farm_em_repos/emgpqa/provisioning/
fmwprov/linux64/shiphomes/wcc_11.1.1.8.0/ecm_main1.zip",
"/net/slcnas478/export/farm_em_repos/emgpqa/provisioning/
fmwprov/linux64/shiphomes/wcc_11.1.1.8.0/ecm_main2.zip"
]
},
{
"product":"WCP",
"files":
[
"/net/slcnas478/export/farm_em_repos/emgpqa/provisioning/
fmwprov/linux64/shiphomes/wcp_11.1.1.8.0/wc.zip"
]
}
]
}
Parameters NA
Example Response:
{
instanceName: "CreateFmwProfile-ImProfileViaRestAPI_SYSMAN_07_09_2014_10_07_AM"
instanceGuid: "FDC6BDD54DA70346E04373B1F00A2DBE"
executionGuid: "FDC6BDD54DAA0346E04373B1F00A2DBE"
executionUrl: "http://slc03qtn:7802/em/faces/core-jobs-
procedureExecutionTracking?
instanceGUID=FDC6BDD54DA70346E04373B1F00A2DBE&showProcActLink=yes&executionGUID=F
DC6BDD54DAA0346E04373B1F00A2DBE"
name: "ImProfileViaRestAPI"
targetName: "slc03qtn.example.com"
description: "IM Profile"
credential: "HOSTCRED:SYSMAN"
platform: "2000"
version: "11.1.1.8.0"
-files: [3]
-0:
{
product: "WebLogic"
-files: [1]
0:"/net/adcnas438/export/emgpqa/provisioning/fmwprov/linux64/shiphomes/
upinsmed_wls/wls1036_generic.jar"
}
-1:
{
product: "WCC"
-files: [2]
0: "/net/slcnas478/export/farm_em_repos/emgpqa/provisioning/fmwprov/linux64/
shiphomes/wcc_11.1.1.8.0/ecm_main1.zip"
1: "/net/slcnas478/export/farm_em_repos/emgpqa/provisioning/fmwprov/linux64/
shiphomes/wcc_11.1.1.8.0/ecm_main2.zip"
}
-2:
{
product: "WCP"
-files: [1]
0: "/net/slcnas478/export/farm_em_repos/emgpqa/provisioning/fmwprov/linux64/
shiphomes/wcp_11.1.1.8.0/wc.zip"
}
}
Table 28-7 GET Request Configuration for Listing All the Profiles
Feature Description
URL /em/websvcs/restful/extws/cloudservices/fmw/
provisioning/profile/list
Body NA
Parameters/Constraints NA
Example Response:
{
-profiles: [3]
-0:
{
canonicalLink: "http://slc03qtn:7802/em/websvcs/restful/extws/cloudservices/fmw/
provisioning/profile/describe/oracle:defaultService:em:provisioning:
1:cmp:COMP_Component:SUB_FMWBundle:FD8AEF47A15C0874E04373B1F00AEC31:0.1"
location: "Fusion Middleware Provisioning/Profiles/SoaBitlessProfile"
products: "Oracle SOA"
source: "WebLogic Domain"
urn: "oracle:defaultService:em:provisioning:
1:cmp:COMP_Component:SUB_FMWBundle:FD8AEF47A15C0874E04373B1F00AEC31:0.1"
}
-1:
{
canonicalLink: "http://slc03qtn:7802/em/websvcs/restful/extws/cloudservices/fmw/
provisioning/profile/describe/oracle:defaultService:em:provisioning:
1:cmp:COMP_Component:SUB_FMWBundle:FDB228F176CA1B47E04373B1F00AF708:0.1"
location: "Fusion Middleware Provisioning/Profiles/SoaGoldImage"
products: "Oracle SOA"
source: "Oracle Home"
urn: "oracle:defaultService:em:provisioning:
1:cmp:COMP_Component:SUB_FMWBundle:FDB228F176CA1B47E04373B1F00AF708:0.1"
}
-2:
{
canonicalLink: "http://slc03qtn:7802/em/websvcs/restful/extws/cloudservices/fmw/
provisioning/profile/describe/oracle:defaultService:em:provisioning:
1:cmp:COMP_Component:SUB_InstallationMedia:FD8AA7EEEEA369A5E04373B1F00AE77C:0.1"
description: "WC 11.1.1.8.0 and WLS 10.3.6.0 installer and RCU"
location: "Fusion Middleware Provisioning/Profiles/WebCenter Install Media"
-products: [2]
0: "Oracle WebCenter Content"
1: "Oracle WebCenter Portal"
source: "Installation Media"
urn: "oracle:defaultService:em:provisioning:
1:cmp:COMP_Component:SUB_InstallationMedia:FD8AA7EEEEA369A5E04373B1F00AE77C:0.1"
}
}
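As with the create requests, the listing can be retrieved with any HTTP client. A
minimal curl sketch, assuming basic authentication and a placeholder host and port:
curl -u sysman "http://emhost.example.com:7802/em/websvcs/restful/extws/cloudservices/fmw/provisioning/profile/list"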
Table 28-8 GET Request Configuration for Listing the WebLogic Domain Profiles
Feature Description
URL /em/websvcs/restful/extws/cloudservices/fmw/
provisioning/profile/list/weblogic_domain
Body NA
Parameters/Constraints NA
Example Response:
{
-profiles:
{
canonicalLink: "http://slc03qtn:7802/em/websvcs/restful/extws/cloudservices/fmw/
provisioning/profile/describe/oracle:defaultService:em:provisioning:
1:cmp:COMP_Component:SUB_FMWBundle:FD8AEF47A15C0874E04373B1F00AEC31:0.1"
location: "Fusion Middleware Provisioning/Profiles/SoaBitlessProfile"
products: "Oracle SOA"
source: "WebLogic Domain"
urn: "oracle:defaultService:em:provisioning:
1:cmp:COMP_Component:SUB_FMWBundle:FD8AEF47A15C0874E04373B1F00AEC31:0.1"
}
}
Table 28-9 GET Request Configuration for Listing the Oracle Home Profiles
Feature Description
URL /em/websvcs/restful/extws/cloudservices/fmw/
provisioning/profile/list/oracle_home
Body NA
Parameters/Constraints NA
Example Response:
{
-profiles:
{
canonicalLink: "http://slc03qtn:7802/em/websvcs/restful/extws/cloudservices/fmw/
provisioning/profile/describe/oracle:defaultService:em:provisioning:
1:cmp:COMP_Component:SUB_FMWBundle:FDB228F176CA1B47E04373B1F00AF708:0.1"
location: "Fusion Middleware Provisioning/Profiles/SoaGoldImage"
Table 28-10 GET Request Configuration for Listing the Installation Media Profiles
Feature Description
URL /em/websvcs/restful/extws/cloudservices/fmw/
provisioning/profile/list/install_media
Body NA
Parameters/Constraints NA
Example Response:
{
-profiles:
{
canonicalLink: "http://slc03qtn:7802/em/websvcs/restful/extws/cloudservices/fmw/
provisioning/profile/describe/oracle:defaultService:em:provisioning:
1:cmp:COMP_Component:SUB_InstallationMedia:FD8AA7EEEEA369A5E04373B1F00AE77C:0.1"
description: "WC 11.1.1.8.0 and WLS 10.3.6.0 installer and RCU"
location: "Fusion Middleware Provisioning/Profiles/WebCenter Install Media"
-products: [2]
0: "Oracle WebCenter Content"
1: "Oracle WebCenter Portal"
source: "Installation Media"
urn: "oracle:defaultService:em:provisioning:
1:cmp:COMP_Component:SUB_InstallationMedia:FD8AA7EEEEA369A5E04373B1F00AE77C:0.1"
}
}
Table 28-11 GET Request Configuration for Describing the WebLogic Domain
Profile
Feature Description
URL /em/websvcs/restful/extws/cloudservices/fmw/
provisioning/profile/describe/
oracle:defaultService:em:provisioning:
1:cmp:COMP_Component:SUB_FMWBundle:FD8AEF47A15C08
74E04373B1F00AEC31:0.1
Body NA
Parameters/Constraints NA
Example Response:
{
location: "Fusion Middleware Provisioning/Profiles/SoaBitlessProfile"
createdBy: "SYSMAN"
createdOn: "Sun, 6 Jul 2014"
urn: "oracle:defaultService:em:provisioning:
1:cmp:COMP_Component:SUB_FMWBundle:FD8AEF47A15C0874E04373B1F00AEC31:0.1"
-attachments: [3]
-0:
{
fileName: "nmMovePlan.xml"
size: "6.95 KB"
contentType: "text/plain"
addedBy: "SYSMAN"
addedOn: "Sun, 6 Jul 2014"
}
-1:
{
fileName: "mdm-url-resolver.xml"
size: "268 Bytes"
contentType: "text/plain"
addedBy: "SYSMAN"
addedOn: "Sun, 6 Jul 2014"
}
-2:
{
fileName: "domainMovePlan.xml"
size: "56.91 KB"
contentType: "text/plain"
addedBy: "SYSMAN"
addedOn: "Sun, 6 Jul 2014"
}
targets: [1]
0: "cluster_soa"
}
-2:
{
name: "OraSDPMDataSource"
schema: "SOA_ORASDPM"
url: "jdbc:oracle:thin:@slc00dbv.example.com:1521/dbv.example.com"
targets: [1]
0: "cluster_soa"
}
-3:
{
name: "SOADataSource"
schema: "SOA_SOAINFRA"
url: "jdbc:oracle:thin:@slc00dbv.example.com:1521/dbv.example.com"
targets: [1]
0: "cluster_soa"
}
-4:
{
name: "SOALocalTxDataSource"
schema: "SOA_SOAINFRA"
url: "jdbc:oracle:thin:@slc00dbv.example.com:1521/dbv.example.com"
targets: [1]
0: "cluster_soa"
}
-5:
{
name: "mds-owsm"
schema: "SOA_MDS"
url: "jdbc:oracle:thin:@slc00dbv.example.com:1521/dbv.example.com"
targets: [2]
0: "cluster_soa"
1: "AdminServer"
}
-6:
{
name: "mds-soa"
schema: "SOA_MDS"
url: "jdbc:oracle:thin:@slc00dbv.example.com:1521/dbv.example.com"
targets: [2]
0: "cluster_soa"
1: "AdminServer"
}
-jmsServers: [4]
-0:
{
name: "BPMJMSServer_auto_1"
persistentStore: "BPMJMSFileStore_auto_1"
storeType: "fileStore"
directory: "BPMJMSFileStore_auto_1"
target: "soa_server1"
}
-1:
{
name: "PS6SOAJMSServer_auto_1"
persistentStore: "PS6SOAJMSFileStore_auto_1"
storeType: "fileStore"
directory: "PS6SOAJMSFileStore_auto_1"
target: "soa_server1"
}
-2:
{
name: "SOAJMSServer_auto_1"
persistentStore: "SOAJMSFileStore_auto_1"
storeType: "fileStore"
directory: "SOAJMSFileStore_auto_1"
target: "soa_server1"
}
-3:
{
name: "UMSJMSServer_auto_1"
persistentStore: "UMSJMSFileStore_auto_1"
storeType: "fileStore"
directory: "UMSJMSFileStore_auto_1"
target: "soa_server1"
}
}
Table 28-12 GET Request Configuration for Describing the Oracle Home Profile
Feature Description
URL /em/websvcs/restful/extws/cloudservices/fmw/
provisioning/profile/describe/
oracle:defaultService:em:provisioning:
1:cmp:COMP_Component:SUB_FMWBundle:FDB228F176CA1B
47E04373B1F00AF708:0.1
Body NA
Parameters/Constraints NA
Example Response:
{
location: "Fusion Middleware Provisioning/Profiles/SoaGoldImage"
createdBy: "SYSMAN"
createdOn: "Tue, 8 Jul 2014"
urn: "oracle:defaultService:em:provisioning:
1:cmp:COMP_Component:SUB_FMWBundle:FDB228F176CA1B47E04373B1F00AF708:0.1"
source: "Oracle Home"
platform: "Linux x86-64"
wlsVersion: "10.3.6.0"
-oracleHome:
{
size: "3.51 GB"
-products: [1]
-0:
{
Table 28-13 GET Request Configuration for Describing the Installation Media
Profile
Feature Description
URL /em/websvcs/restful/extws/cloudservices/fmw/
provisioning/profile/describe/
oracle:defaultService:em:provisioning:
1:cmp:COMP_Component:SUB_InstallationMedia:FD8AA7
EEEEA369A5E04373B1F00AE77C:0.1
Body NA
Parameters/Constraints NA
Example Response:
{
location: "Fusion Middleware Provisioning/Profiles/WebCenter Install Media"
description: "WC 11.1.1.8.0 and WebLogic Server 10.3.6.0 installer and RCU"
createdBy: "SYSMAN"
createdOn: "Sun, 6 Jul 2014"
urn: "oracle:defaultService:em:provisioning:
1:cmp:COMP_Component:SUB_InstallationMedia:FD8AA7EEEEA369A5E04373B1F00AE77C:0.1"
source: "Installation Media"
product: "Oracle WebCenter"
platform: "Generic Platform"
version: "11.1.1.8.0"
-files: [4]
-0:
{
product: "Oracle WebCenter Content"
size: "2.34 GB"
-files: [2]
-0:
{
fileName: "ecm_main1.zip"
size: "1.91 GB"
}
-1:
{
fileName: "ecm_main2.zip"
size: "437.6 MB"
}
}
-1:
{
product: "Oracle RCU"
size: "496.16 MB"
-files: [1]
-0:
{
fileName: "rcuHome.zip"
size: "496.16 MB"
}
}
-2:
{
product: "Oracle WebCenter Portal"
size: "1.96 GB"
-files: [1]
-0:
{
fileName: "wc.zip"
size: "1.96 GB"
}
}
-3:
{
product: "Oracle WebLogic Server"
size: "1,019.01 MB"
-files: [1]
-0:
{
fileName: "wls1036_generic.jar"
size: "1,019.01 MB"
}
}
}
Table 28-14 DELETE Request Configuration for Deleting the WebLogic Domain
Profile
Feature Description
URL /em/websvcs/restful/extws/cloudservices/fmw/
provisioning/profile/describe/
oracle:defaultService:em:provisioning:
1:cmp:COMP_Component:SUB_FMWBundle:FD8AEF47A15C08
74E04373B1F00AEC31:0.1
Body NA
Parameters/Constraints NA
Example Response:
After the deletion is successful, you might see a message along the lines of the
following:
Profile Fusion Middleware Provisioning/Profiles/MyProfile deleted successfully.
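A minimal curl sketch of the DELETE request configured in Table 28-14, assuming
basic authentication and a placeholder host and port; the profile URN is the one
shown in the table:
curl -X DELETE -u sysman "http://emhost.example.com:7802/em/websvcs/restful/extws/cloudservices/fmw/provisioning/profile/describe/oracle:defaultService:em:provisioning:1:cmp:COMP_Component:SUB_FMWBundle:FD8AEF47A15C0874E04373B1F00AEC31:0.1"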
Table 28-15 DELETE Request Configuration for Deleting the Oracle Home Profile
Feature Description
URL /em/websvcs/restful/extws/cloudservices/fmw/
provisioning/profile/describe/
oracle:defaultService:em:provisioning:
1:cmp:COMP_Component:SUB_FMWBundle:FDB228F176CA1B
47E04373B1F00AF708:0.1
Body NA
Parameters/Constraints NA
Example Response:
After the deletion is successful, you might see a message along the lines of the
following:
Profile Fusion Middleware Provisioning/Profiles/MyProfile deleted successfully.
Table 28-16 DELETE Request Configuration for Deleting the Installation Media
Profile
Feature Description
URL /em/websvcs/restful/extws/cloudservices/fmw/
provisioning/profile/describe/
oracle:defaultService:em:provisioning:
1:cmp:COMP_Component:SUB_InstallationMedia:FD8AA7
EEEEA369A5E04373B1F00AE77C:0.1
Body NA
Parameters/Constraints NA
Example Response:
After the deletion is successful, you might see a message along the lines of the
following:
Profile Fusion Middleware Provisioning/Profiles/MyProfile deleted successfully.
This chapter explains how you can scale up and scale out a SOA Domain, a Service
Bus Domain, a WebLogic Domain, and a WebCenter Domain using Oracle Enterprise
Manager Cloud Control (Cloud Control). In particular, this chapter covers the
following:
• Getting Started
• Prerequisites
Note:
If you have a Fusion Middleware domain with multiple products, such as SOA
and Service Bus, configured, then you must scale out one product at a time. If
you try to scale out more than one cluster (for example, SOA and Service Bus
clusters) in one session, then the procedure performs product-specific
configurations for only one of them.
If you have a Fusion Middleware domain with multiple products, such as SOA,
Service Bus, and WebCenter, configured, then you must scale up one product
at a time. Scaling up all the products simultaneously is not supported.
• Scale out a domain by adding or cloning a managed server to a host that is not
present in the domain or cluster.
29.2 Prerequisites
Before running the Scale Up / Scale Out Middleware deployment procedure, you
must meet the prerequisites listed in this section.
Note:
Meet the following prerequisites before you start extending the WebLogic Domain:
• The WebLogic Domain that is scaled up / scaled out must be an existing domain
that has been discovered with Cloud Control.
• If you are scaling out a domain, ensure that the destination machine has sufficient
space. If the size of the Middleware Home on the source machine is 3 GB, you need
approximately 3 GB in the working directory on the source and destination
machines. Additionally, the destination machine should also have 3 GB of space for
the Middleware Home. The working directory is cleaned up after the deployment
procedure has completed successfully.
• The Middleware Home directory you specify on the destination machine must be a
new directory or must be empty.
• The Management Agent must be installed on the source (where the Administration
Server is running) and the destination machines. The Administration Server for the
domain must be up and running.
• The Administration Server and Managed Server (being cloned) must be up and
running before you run the deployment procedure.
• Ensure that the user has the necessary permissions on the stage directory inside the
work directory chosen in the template. The scale up may fail with an error if
sufficient permissions are not granted to the user. In such an event, assign 700
permissions to the stage directory and retry (see the sketch after this list).
• For scaling out a domain, the user must have the following permissions:
• For scaling up a domain, the user must have the following permissions:
• The domain being scaled up / out should not be in Edit mode. Ensure that there is
a running WebLogic Console for this domain.
• If you choose to associate a new managed server with an existing Node Manager or
a machine, ensure that the Node Manager is up and running. If not, the
deployment procedure will fail.
• Ensure that you have discovered an existing OHS target in Enterprise Manager, if
you want to front end the Oracle HTTP Server.
• Ensure that the target machine that is being scaled up and the Enterprise Manager
host where the OMS is running are in the same time zone before you begin the
scale up process.
• Ensure that you do not delete the Aggregation Server when you are scaling down a
domain. If the Aggregation Server is deleted, then many dependent applications
will stop running.
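For the stage directory permissions prerequisite referenced in the list above, a
minimal shell sketch; the stage directory path shown is a hypothetical example and
should be replaced with the stage directory inside your chosen work directory:
chmod 700 /u01/app/work/stage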
Note:
Java Object Cache is always configured on a scaled out or scaled up
middleware managed server whether it was available before scale up or not.
2. Specify the source domain. See WebLogic Domain Scaling Up: Select Source Page.
3. Add a new managed server or clone an existing one. See WebLogic Domain
Scaling Up: Managed Servers Page.
4. Add a new server to be front ended with the selected OHS. See WebLogic Domain
Scaling Up / Scaling Out: Web Tier.
6. Specify when the scale up or scale out operation should be performed. See
WebLogic Domain Scaling Up / Scaling Out: Schedule Page.
7. Review the inputs and submit the deployment procedure. See WebLogic Domain
Scaling Up / Scaling Out: Review Page.
Note:
For scaling out a WebCenter Domain, you must perform the following steps
manually:
2. If the Spaces Server is scaled out, then you must run the following
commands to attach the WebService Policy:
attachWebServicePolicy(application='WC_Spaces2/webcenter',
moduleName='webcenter', moduleType='web',
serviceName='SpacesWebService',
subjectName='SpacesWebServiceSoapHttpPort',
policyURI='oracle/
wss11_saml_token_with_message_protection_service_policy')
3. If the Discussion Server is scaled out, then you must run the following
commands to attach the WebService Policy:
attachWebServicePolicy(application='WC_Collaboration2/owc_discussions',
moduleName='owc_discussions', moduleType='web',
serviceName='OWCDiscussionsServiceAuthenticated',
subjectName='OWCDiscussionsServiceAuthenticated',
policyURI='oracle/wss10_saml_token_service_policy')
• Cloning a managed server. If the source server is clustered, when you clone an
existing Managed Server, another Managed Server will be created in the same
cluster.
A wizard guides you through the process.
2. A list of Middleware targets is displayed. Find the WebLogic Domain that you
want to use as the source for the cloning operation. Right click on that WebLogic
Domain to access the context sensitive menu. From the menu, select Scale Up /
Scale Out WebLogic Domain.
Alternatively, you can click the WebLogic Domain link. On the domain home page,
from the WebLogic Domain menu, select Provisioning, and click Scale Up/Scale
Out WebLogic Domain.
The WebLogic Domain Scale Up: Source page is displayed.
You can also launch this as a procedure from the Middleware Provisioning page. To
do so, from the Enterprise menu, select Provisioning and Patching, then select
Middleware Provisioning. On the Middleware Provisioning page, from the
deployment procedures table, select Scale up/Scale out Middleware, and click
Launch. In this case, you will need to provide the source information by selecting
the WebLogic domain that needs to be extended.
3. In the Working Directory field, specify the directory on the Administration Server
machine where the domain scale up related files are temporarily stored. A
minimum of one GB of directory space is required to store the temporary files. If
this directory is not present, it will be created. When the scale up operation has
been completed, the directory and its contents will be deleted.
Note:
The Working Directory must not be created under the Middleware Home or
the WebLogic Domain Home directory.
4. In the Source Information section, details of the source domain including the
Middleware Home, WebLogic Server Home, and the Middleware Domain Location
are displayed.
5. In the Select destination Hosts section, click Add Hosts to select the host you want
to add managed servers to.
6. Click Next.
• Cloning an Existing Managed Server: To do so, select the host you want to add the
new server to and click Clone. This option adds a new managed server to the
domain, and copies a pre-determined set of the attributes from the existing server.
For information about updating the Managed Server details, see Adding a New
Managed Server.
• Deleting a Managed Server: To do so, select the managed server and click Delete
Server.
2. On the Managed Server Page, enter configuration details for a new Managed
Server like a unique name for the new Managed Server, listen address, and the
SSL port.
• Do not associate with any machine (Node Manager): If you select this option,
the managed server is not associated with the machine (Node Manager) and
therefore, you cannot use the Node Manager console to start the Managed
Server Host.
• Use an existing machine (Node Manager): Select this option to associate the
managed server with an existing machine (Node Manager). You must select
the machine with which the managed server is to be associated from the
Machine Name menu.
• Create a new machine (Node Manager): Select this option to create a new
machine and specify the machine name, node manager address, and port
number. If the Node Manager is not up and running, this operation will be
timed out.
Note:
4. In the Configure Keystore for the Managed Server section, you can select one of
the following options:
b. Use Custom Certificates: If you select this option, you will need to provide
valid inputs for all the KeyStore fields.
2. From the Target selector dialog box, select any target server, then click Select.
3. The newly added OHS component appears in the Web Tier table.
4. Click Next.
• Host Credentials: In this section, you must provide all the credentials for all the
hosts on which the Managed Servers are running. To do so, click the add icon for
each host, and in the Add New Credentials dialog box, enter a valid username and
password. If you want the job to use these credentials for each target when the job
runs, select the Set As Preferred Credentials check box.
Note that if the OHS instance is running on another host, you will need to
additionally provide the credentials for the OHS host.
• WebLogic Administrator Credentials: In this section, you must set the credentials
to access the WebLogic Administration Server. To do so, click Add for each of the
WebLogic Domains, and in the Add new administrator credentials dialog box,
enter a valid administrator name and password. If you want the job to use these
credentials for each target when the job runs, then select the Set As Preferred
Credentials check box.
• Provisioning on the same machine: If you are using the Deployment Procedure to
provision or scale up to the same machine as the source, the working directory on
the source and target machines is populated by default. If these values are changed,
you must ensure that the working directory on the source and the destination
machines are different. For example, if the working directory is /tmp/source for
the source machine, it could be /tmp/dest on the destination directory. You must
also ensure that the listen port number and SSL port numbers (if enabled) for the
Administration Server and Managed Server are different on the source and
destination servers.
• JDBC Configuration: While configuring the JDBC data sources, the database user
and schema owner must enter appropriate passwords.
• Multi NIC Machines: If the destination machine is a multi NIC system, enter a
listen address that is accessible to both the Administration Server and Managed
Server.
2. Security differences
3. Data Co-Mingling
4. Administrative challenges
A domain partition is an administrative portion of a domain that can be managed
independently and can share runtime capacity in a domain, that is, the managed
servers and clusters. Each domain partition has its own runtime copy of the
applications and resources. It provides application isolation and efficiency. Multiple
instances of an application can run in different domain partitions without modifying
the application.
Domain partitions use fewer server and domain resources. This enables you to
simplify the management of Software as a Service (SaaS) and Platform as a Service
(PaaS) applications.
• The applications and services running in the domain are cloned along with the
partition. They can neither be deleted nor can new ones be added during
provisioning.
Note:
Ensure that there is at least one partition in the domain prior to importing the
partition.
Related Topics:
Note:
See section Exporting and Importing Partitions in the Oracle® Fusion Middleware
Using WebLogic Server Multitenant document for more information.
Note:
Exporting WebLogic domain partition functionality is supported from WLS
version 12.2.1.0.0 onwards.
2. Expand the Target Type and check the option against WebLogic to view a list of
WebLogic targets.
3. Right click on the WebLogic Domain target, and from the Provisioning menu, click
Create Provisioning Profile and select By Exporting WebLogic Domain Partition
option and click Next.
4. Enter a profile name in the Name field under the Profile tab.
Note:
As part of the export process, Enterprise Manager creates a profile of the
partition and stores it in the Software Library. The profile contains the exported
partition archive and some metadata that helps Enterprise Manager during the
import process.
7. Verify the values in the fields Domain Home and Oracle Home.
8. Select the values for the remaining fields Oracle Home Credential, WebLogic
Administrator Credential, and Working Directory.
Note:
9. In the Software Library Upload Location, select the Software Library storage
details. Ensure that you provide a valid storage type and upload location to update
the profile.
Note:
Related Topics:
Note:
Importing WebLogic domain partition functionality is supported from WLS
version 12.2.1.0.0 onwards.
To import a domain partition into an existing WebLogic domain, follow these steps:
2. Expand the Target Type and check the option against WebLogic to view a list of
WebLogic targets.
3. To import the WebLogic Domain Partition, right click on the domain where you
want to import the partition.
You can also import WebLogic Domain Partition from the Middleware
Provisioning page. From the Enterprise menu, select Provisioning and Patching
and click Middleware Provisioning. Select a profile under the Profiles table and
click the Import Partition button.
5. Under the General tab, select a profile under the WebLogic Domain Partition
Profile field.
7. Upon selecting a profile, the values in the following fields are prefilled:
• Virtual Targets
• Database
Note:
The Database field is filled only if the source partition has data sources.
8. The details of the profile can be viewed by clicking on the view icon available next
to the WebLogic Domain Partition Profile field. The details include information
on Data Sources, JMS Servers, Applications, and Virtual Targets.
If you would like to make an advanced configuration, click Use Custom JSON file and
enter the location of a custom, well-formed JSON file in the JSON File Location field.
Note:
You can download the JSON file from Profile Viewer. Click the View icon
next to the profile search icon and download it from the attachments table.
You can then modify it and copy it to the Administration Server host of the
domain and provide that location in the JSON File Location field.
11. Under the Virtual Targets tab, in the Targets field choose the virtual targets from
the drop down menu. The host name specified must have DNS entries that resolve
to the correct server or load balancer. If the host name is not specified, then it is
equivalent to wildcarding the host name to match all incoming requests.
Note:
12. Choose a database under the Database tab against the Connection String field.
Note:
This is optional and has to be executed only if the source partition has a data
source. Ensure that the schema is already created, the database services are up
and running, and the database service has the same name as the one mentioned
in the data source.
While selecting the target in the Select target window the following database types
are available:
• Cluster Database
• Database Instance
• Pluggable Database
The import WebLogic Domain Partition operation can also be performed using an EM
CLI verb. Use the following steps.
4. Click Save.
Note:
For more details on the verbs, refer to the Deployment Procedure Verbs section
in the Enterprise Manager Command Line Interface document.
Related Topics:
This chapter explains how you can deploy, undeploy, and redeploy Java EE
Applications using Oracle Enterprise Manager Cloud Control (Cloud Control).
In particular, this chapter covers the following:
• Deploy
• Undeploy
• Redeploy
Consider this section to be a documentation map to understand the sequence of
actions you must perform to successfully provision a Java EE Application. Click the
reference links provided against the steps to reach the relevant sections that provide
more information.
Table 31-1 Getting Started with Deploying, Undeploying, or Redeploying a Java EE Application
Step 4: Running the Deployment Procedure
Run the Deployment Procedure to successfully deploy, redeploy, or undeploy one or
more Java EE applications.
To run the Deploy / Undeploy Java EE Applications Deployment Procedure, follow
the steps explained in Java EE Applications Deployment Procedure.
• Ensure that the Software Library is configured. See Setting Up Oracle Software
Library for details.
• The Java EE Application component must have been created in the Software
Library.
Note:
• The plug-ins required for this deployment procedure must be deployed to the
Management Agent on the destination machines.
Note:
If you create a Java EE Application component on the host where the Software
Library is configured, then you must ensure that this host name and the
Management Agent host name match. If they do not match, you will see the
following error while creating the component:
An error was encountered during saving entity samples.gar.
Please see the log for details.
oracle.sysman.emSDK.app.exception.EMSystemException
1. From the Enterprise menu, select Provisioning and Patching, then select Software
Library.
2. Create a folder or select a folder from the Software Library, select Create Entity,
then select Component.
3. From the Create Entity: Component dialog box, select Java EE Application and
click Continue.
4. In the Create Java EE Application: Describe page, enter the Name, Description, and
click Next.
5. In the Create Java EE Application: Select Files page, select one or more files to be
associated with the Java EE Application. You can upload files from a storage
location in the Software Library. For Software Library to become usable, at least
one upload file location must be configured. In the Specify Destination section,
click the Browse button in the Upload Location field. Select either of the following:
• OMS Shared File System: An OMS Shared File System location is required to
be shared (or mounted) across all the Oracle Management Server (OMS) hosts.
This option is ideal for UNIX systems.
For single OMS environments, you can configure the Software Library either on
the host where the OMS is running or in a shared location, so that it is accessible
to all the OMS hosts. For multiple OMS environments, Oracle recommends that
you configure the Software Library in a shared location so that the storage is
accessible through NFS mount points to all Oracle Management Servers in the
environment.
• OMS Agent File System: An OMS Agent File System location is a location that
is accessible to one of the OMS host's Agent. This option is ideal for OMS
installed on Windows hosts. By selecting this option for uploading files, you can
avoid sharing a location between all participating OMS hosts.
Credentials must be set before using an OMS Shared File System or OMS Agent
File System. For an OMS Shared File System, normal host credentials must be set
before configuring a storage location. However, for OMS Agent File System
location configuration, a credential (preferred or named) has to be specified.
6. In the Specify Source section, you can add the standard Java EE archive files such
as .ear, .war, .jar, .rar, and .gar, and other optional files such as pre-deploy and
post-deploy scripts, a target execution script, an execution plan, and additional
files. You can either upload each file separately (Individual Files) or upload a zip
file (Zip File) that contains the JavaEEAppComp.manifest file. You can upload the
files from:
• Local Filesystem: Click Browse and upload the files from your local system.
• Agent Filesystem: You can upload the files from a remote filesystem monitored
by the Management Agent. Click Browse and select a host machine from the list
and click Select. Click Add. The Remote File Browser window is displayed.
Click the Login As button and enter the credentials for the host machine.
Specify the location in which the files are present, select one or more archive
related files and click Add. The selected files are listed in the Current Selection
section. Click OK to return to the Create Entity: Select Files page.
7. The files are listed in the table. Specify the type of the file by selecting the options
in the Type field. Click Next.
8. Review and verify the information entered so far. Click Save and Upload to upload
the files and create the Java EE Application component.
1. From the Enterprise menu, select Provisioning and Patching, then select
Middleware Provisioning.
2. Select the Java EE Application procedure from the list and click Launch. You can
also use the following method to launch the deployment procedure:
• Right click on a WebLogic Domain from the list and from the context sensitive
menu, select Provisioning, then select Deploy / Undeploy Java EE
Applications.
4. Select WebLogic Domains and select the targets on which the Java EE application is
to be deployed. Click Add WebLogic Domains. Choose one or more WebLogic
domains from the list and click Select.
Note:
You can customize the deployment procedure by locking certain features. You
can lock an operation, a target, or an application. Before you proceed with the
deployment, you must ensure that the selected domains do not have an active
configuration lock. If the selected domains are locked, click the Lock icon to
unlock the configuration lock.
5. The selected WebLogic domains are listed in the Targets table. Select the targets
(clusters or managed servers) for each domain and click Next.
6. In the Deploy / Undeploy Java EE Applications: Select Applications page, add the
archives and other related files that are to be deployed from the Software Library.
Click Add to select one or more archives and other application related files or
components from the Software Library. The Add Application popup is displayed.
In the Component Name field, enter a file name or a wild card pattern to search for
and retrieve components from the Software Library. Select the Show Java EE
Application Components Only checkbox to list only the Java EE Application
components in the Components in Software Library column. Select the archives
and click the right arrow to move them to the Components Selected for
Deployment section.
Note:
Application Version and Plan Version will not be visible if you are redeploying a
non-versioned application.
7. In the Type field, the type of each component is displayed. The Type can be:
• Archive: This is the archive file which can be a .ear, .war, .jar, .rar, or .gar
file.
Note:
If you have selected a .gar archive file, the WebLogic Domain to which the
application is being deployed must be 12.1.2.0.0 or higher.
• Plan: This is an .xml file containing the deployment options for this
application.
• Post Deploy Script: This is a WLST script that is executed by the Management
Agent on the Administration Server after the application is deployed. You can
use this script to perform any post deployment configuration. For example, if
you need to roll back and undo the changes made by the pre deploy script, you
can select this option.
Note:
The archive, plan, predeploy, and postdeploy scripts can be moved only to the
Administration Server.
• Additional File: You can add one or more files that will be required by the
application that are not part of the application archive. These files can be of any
type and can be moved only to the selected targets (managed servers and
clusters).
• Target Execution Script: These scripts can be used to set up the required
environment or replace tokens in the additional files like property files. These
scripts will be executed on selected targets.
8. In the Location On Target field, for each component, specify the location on the
WebLogic Server Host on which the application is to be deployed. This can be an
absolute path or relative to the $WLS_HOME for the selected targets.
9. After selecting the required files for deployment, enter a unique name for the
application and specify the Staging Mode which can be:
• Default: Each server in the WebLogic Domain maintains two attributes which
are Staging Mode and StagingDirectoryName. The Staging Mode is the default
staging mode for the server and StagingDirectoryName is the location on which
the staged files are stored. Select this option to use the default staging mode for
all the targets.
• Stage: Select this option if the archive files should be moved to the destination
machine.
• No Stage: Select this option if the archive files should not be moved to the
destination machine.
10. Select the Deploy this archive as library option if the application needs to be
deployed as a shared library. You can select this option if one or more applications
need the same set of files.
11. Select the Start Mode for deployment which can be:
• Start in full mode (servicing all requests): Select this option to make the
deployed application available to all users.
• Do not start: The application is deployed but not started. You can select this
option if any manual post-deployment configuration is required.
12. Click OK to add the archive and return to the Select Applications page. You can
add more archives or click Next to proceed. If you have added more than one
archive, select the Skip on Failure checkbox to skip any failed deployments and
continue deploying the remaining applications.
13. Click the Lock icon to lock the fields you have configured.
Note:
The Designer can lock the fields after configuring them. This ensures that the
Operator can run the deployment procedure with minimal input.
14. Click Next. Specify the credentials for each domain you have selected, the host on
which the Administration Server is running, and the hosts to which the additional
files or execution scripts are to be moved. You can choose:
15. Click the Lock icon to lock the fields you have configured. These fields cannot be
edited once they are locked.
16. In the Schedule Deployment page, you can schedule the date on which the Java EE
Application deployment procedure should be executed.
17. Click Next. On the Review page, review the details you have provided for the
Deployment Procedure. If you are satisfied with the details, then click Submit to
run the Deployment Procedure according to the schedule set. If you want to
modify the details, click the Edit link in the section to be modified or click Back
repeatedly to reach the page where you want to make the changes.
After you submit the deployment procedure, you will return to the Procedure
Activity page where you can view the status of the Deployment Procedure. After
the Java EE Application has been deployed, you can search for the target and
navigate to the Target Home page.
2. Select the Java EE Application procedure from the list and click Launch. You can
also use the following method to launch the deployment procedure:
• Right click on a WebLogic Domain from the list and from the context sensitive
menu, select Provisioning, then select Deploy / Undeploy Java EE
Applications.
Note:
Click the Lock icon to lock an operation or the fields you are configuring in
any of the pages in the wizard. Once the fields have been locked, the Operator
needs to provide minimal input while running the deployment procedure.
4. Click Add WebLogic Domains to add one or more WebLogic domains. In the list
of targets displayed, choose a target and click Select.
5. The deployment targets are listed in the Targets table. Select the applications that
need to be redeployed and click Next.
6. In the Select Applications page, a list of applications that can be redeployed are
displayed. Select an application and click Edit to modify the archive details and
other application related files. In the Application Details window, enter a file name
or a wild card pattern to search for and retrieve files from the Software Library.
Select the archives and click the right arrow to move them to the Components
Selected for Deployment section.
7. In the Type field, the type of each component is displayed. The Type can be:
• Archive: This is the archive file which can be a .ear, .war, .jar, or .rar file.
• Plan: This is an .xml file containing the deployment options for this application.
Note:
The archive, plan, predeploy, and postdeploy scripts can be moved only to the
Administration Server.
• Additional File: You can add one or more files that will be required by the
application that are not part of the application archive. These files can be of any
type and can be moved only to the selected targets (managed servers and
clusters).
• Target Execution Script: These scripts can be used to set up the required
environment or replace tokens in the additional files like property files. These
scripts will be executed on selected targets.
8. Review the default location on the target machine on which the component will
reside. This can be an absolute path or relative to the $WLS_HOME for the selected
targets.
9. After selecting the required files for deployment, enter a unique name for the
application and specify the Staging Mode which can be:
• Default: Each server in the WebLogic Domain maintains two attributes which
are Staging Mode and StagingDirectoryName. The Staging Mode is the default
staging mode for the server and StagingDirectoryName is the location on which
the staged files are stored. Select this option to use the default staging mode for
all the targets.
• Stage: Select this option if the archive files should be moved to the destination
machine.
• No Stage: Select this option if the archive files should not be moved to the
destination machine.
10. Select the Start Mode for deployment which can be:
• Start in full mode (servicing all requests): Select this option to make the
deployed application available to all users.
• Do not start: The application is deployed but not started. You can select this
option if any post-deployment configuration is required.
11. Specify the Retirement Policy for the application. You can select:
• Allow the application to finish its current sessions and then retire: Select this
option if all the current sessions should be completed before retirement.
• Retire the previous version after retire timeout: Specify a timeout period after
which the application will be automatically retired.
Note:
The Retirement Policy field is applicable only when you are redeploying
versioned application.
12. Click OK to add the archive and return to the Select Applications page. You can
add more archives or click Next to proceed. If you have added more than one
archive, select the Skip on Failure checkbox to skip any failed deployments and
continue deploying the remaining applications.
13. Click the Lock icon to lock the fields you have configured. These fields cannot be
edited once they are locked.
14. Click Next. Specify the credentials for each domain you have selected, the host on
which the Administration Server is running, and the hosts to which the additional
files or execution scripts are to be moved. You can choose:
15. Click the Lock icon to lock the fields you have configured. These fields cannot be
edited once they are locked.
16. In the Schedule Deployment page, you can schedule the date on which the Java EE
Application deployment procedure should be executed.
17. Click Next. On the Review page, review the details you have provided for the
Deployment Procedure. If you are satisfied with the details, then click Submit to
run the Deployment Procedure according to the schedule set. If you want to
modify the details, click the Edit link in the section to be modified or click Back
repeatedly to reach the page where you want to make the changes.
After you submit the deployment procedure, you will return to the Procedure
Activity page where you can view the status of the Deployment Procedure. After
the Java EE Application has been deployed, you can search for the target and
navigate to the Target Home page.
1. From the Enterprise menu, select Provisioning and Patching, then select
Middleware Provisioning.
2. Select the Java EE Application procedure from the list and click Launch. You can
also use the following method to launch the deployment procedure:
• Right click on a WebLogic Domain from the list and from the context sensitive
menu, select Provisioning, then select Deploy / Undeploy Java EE
Applications.
4. Click Add WLS Domains to add one or more WebLogic domains. In the list of
targets displayed, choose a target and click Select.
5. The deployment targets are listed in the Targets table. Select the applications that
need to be undeployed from the WebLogic domain and click Next.
6. Click Next. Specify the credentials for each domain you have selected, the host on
which the Administration Server is running, and the hosts to which the additional
files or execution scripts are to be moved. You can choose:
8. Review the details and click Undeploy. You will return to the Procedure Activity
page where you can check the status.
domains.0.javaeeApps.0.archiveSwLibPath=demo_folder/calendar_app
domains.0.javaeeApps.0.archiveFileName=Calendar.war
domains.0.javaeeApps.0.archiveStagingLocation=/my_arch/mystage
• stopOnError: Set this flag to true if the deployment procedure should stop when an
error is encountered while a Java EE application is being deployed.
• Domain Details: Specify the details of the WebLogic domains on which these
applications are being deployed.
– domains.0.domainTargetName: where:
• Host Details:
Specify the Administration Server host details here.
– hosts.0.hostName: where:
⁎ 0 is the index for the Administration Server host. If additional files, and
target execution scripts are present, indexes 1, 2, 3, and so on must be used
for all the other hosts in the domain.
• Application Details:
Specify the details of the Java EE applications that are being deployed.
– javaeeApps.0.appName: where:
Note: In the remaining optional sections, you can specify the Software
Library path, the file name, and the staging location as necessary.
#domains.0.javaeeApps.0.isSharedLib=false
#domains.0.javaeeApps.0.stageMode=DEFAULT
#domains.0.javaeeApps.0.startMode=full
#domains.0.javaeeApps.0.retirementPolicy=true
#domains.0.javaeeApps.0.retirementTimeout=0
domains.0.javaeeApps.0.targets=Cluster_1
domains.0.javaeeApps.0.deleteTarget=true
#domains.0.javaeeApps.0.archiveSwLibPath=my_folder/calpp
#domains.0.javaeeApps.0.archiveFileName=file.jar
#domains.0.javaeeApps.0.archiveStagingLocation=/my_arch/mystage
• stopOnError: Set this flag to true if the deployment procedure should stop when an
error is encountered while a Java EE application is being undeployed.
• Domain Details: Specify the details of the WebLogic domains on which these
applications are being undeployed.
– domains.0.domainTargetName: where:
• Host Details:
Specify the Administration Server host details here.
– hosts.0.hostName: where:
⁎ 0 is the index for the Administration Server host. If additional files, and
target execution scripts are present, indexes 1, 2, 3, and so on must be used
for all the other hosts in the domain.
• Application Details:
Specify the details of the Java EE applications that are being undeployed.
– javaeeApps.0.appName: where:
⁎ deleteTarget:
Set this flag to true if the target is to be deleted after the application has been
undeployed.
Note: In the remaining optional sections, you can specify the Software
Library path, the file name, and the staging location as necessary.
domains.0.javaeeApps.0.archiveSwLibPath=demo_folder/calendar_app
domains.0.javaeeApps.0.archiveFileName=Calendar.war
domains.0.javaeeApps.0.archiveStagingLocation=/my_arch/mystage
#domains.0.javaeeApps.0.preDeployScriptFileName=xyz.py
#domains.0.javaeeApps.0.preDeployScriptStagingLocation=/my_prescript/mystage
Note: See Deploying a Java EE Application Using EMCLI for details on the
input details.
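For illustration only, a properties file like the ones shown above is typically submitted
through EMCLI. The procedure name and file path below are assumptions; see the
EMCLI section referenced in the note for the exact procedure name and options
supported by your release.
emcli get_procedures | grep -i "Java EE"
emcli submit_procedure -name="Deploy/Undeploy Java EE Applications" -input_file=data:"/tmp/deploy_javaee.properties"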
This chapter explains how you can provision Coherence nodes or clusters across
multiple targets in a farm using Oracle Enterprise Manager Cloud Control (Cloud
Control). In particular, this chapter covers the following:
• Getting Started
• Supported Releases
• Troubleshooting
• Add one or more nodes to a new cluster and add this cluster as a Cloud Control
monitored target.
• Add a management node to an existing cluster and add this cluster as a Cloud
Control monitored target.
• Add one or more nodes to a cluster that is already being monitored by Cloud
Control.
• Prerequisites
• Deployment Procedure
32.3.1 Prerequisites
Before running the Deployment Procedure, meet the following prerequisites:
• A zip file with the Coherence software, default configuration files and start scripts
must be created. This zip file must be added as a software component to the Oracle
Software Library. If the size of the zip file is more than 25 MB, it must be uploaded
from a host monitored by the Management Agent. You can add specific
configuration files as components to the Software Library which will override the
default configuration files. These configuration files can be different depending on
the type of node (storage, management, etc.). While adding a software component,
it is recommended that you specify the Product Name as Coherence.
• If you are provisioning a new node on a host on which the Coherence binaries are
not present, you must upload the coherence.zip file and the default-start-
script.pl to the Oracle Software Library. Each file required for provisioning must be
uploaded as an individual software component.
1. From the Enterprise menu, select Provisioning and Patching, then select
Software Library.
2. Create a folder in which the components are to be stored. After the folder has been
created, right click on the folder and select Create Entity and Component from
the Actions menu.
3. A Create Component popup window appears. From the Select Subtype drop
down list, select the Generic Component and click Continue.
4. In the Create Generic Component: Describe page, enter the Product as Coherence.
This is helpful when you are searching for files during Coherence Provisioning.
Click Next.
5. In the Create Generic Component: Select Files page, check the Upload Files
option.
6. In the Specify Source section, select Agent Machine in the File Source drop down
box and upload the following files:
• $AGENT_ROOT/plugins/
oracle.sysman.emas.agent.plugin_12.1.0.1.0/archives/
coherence/bulkoperationsmbean_11.1.1.jar
• $AGENT_ROOT/plugins/oracle.sysman.emas.agent.plugin_12.1.0.1.0/
archives/coherence/coherenceEMIntg.jar
• $AGENT_ROOT/plugins/
oracle.sysman.emas.agent.plugin_12.1.0.1.0/scripts/
coherence/default-start-script.pl
If you are uploading a large zip file such as Coherence.zip, you must save it on
the Agent machine. Specify the path on the Agent to upload the file. This zip file
must be downloaded from http://www.oracle.com/technetwork/
middleware/coherence/downloads/index.htm
7. Click Save and Upload to submit a file transfer job to upload the remote files to
the specified upload location.
1. From the Enterprise menu, select Provisioning and Patching, then select
Middleware Provisioning.
2. Select the Coherence Node Provisioning deployment procedure and click Launch.
Note:
You can also use the following methods to launch the deployment procedure:
• From the Enterprise menu, select Provisioning and Patching, then select
Procedure Library. Select the Coherence deployment procedure from the
list and click Launch.
• Select the Coherence Node Provisioning option from the Coherence Home
page menu.
3. The Source Selection page, which is the first page of the Coherence Node
Provisioning wizard, is displayed.
4. You can add all the software components needed to add or update a Coherence
cluster. If the Coherence Home has already been created, you can click Next to go
to the next page.
If the Coherence Home does not exist, click Add to add the Coherence binaries
and the Start script from the Software Library. The Select Source popup is
displayed. All software components with the product name Coherence that are
present in the Software Library are displayed. Select required components and
click Select.
5. For each component you have selected, specify the destination directory on the
target machine. This can be an absolute path or a relative path from
${INSTALL_DIR} or ${COHERENCE_HOME}. The contents of the coherence.zip
file will be extracted to this directory.
Note:
• If the software components are available in the target machine, this step
can be skipped.
6. Click Next. The Target Selection page is displayed. On this page, you can:
• Add Nodes: You can add a new node or make a copy of an existing node.
Click the Search icon in the Target Name field and select a Coherence cluster
from the list. The following details are displayed:
– License Mode: The mode in which the cluster has been deployed.
Click Add in the New Nodes section to add new nodes to an existing
Coherence cluster monitored by Cloud Control. The Add Coherence Node
page is displayed. Specify the details of the node and click Continue to add the
node and return to the Coherence Node Provisioning: Select Target page. See
Adding a Coherence Node for details.
• Create Cluster: Click Create Cluster to create a new Coherence cluster. Enter the
Cluster Name along with the following details:
7. Click Next to go to the next step in the wizard. In the Coherence Node
Provisioning: Set Credentials page, you can set credentials for each host. You can
apply the same credentials for multiple hosts by selecting multiple hosts from the
list.
8. Select the host and specify the credentials which can be:
• Named Credentials: You can override the preferred credentials and select a
common set of credentials that will be used for all the hosts and WebLogic
domains.
• New Credentials: You can override the preferred credentials and specify a
separate set of credentials for each host.
Select the credentials and click Apply to apply the credentials to the selected
hosts. For more information on setting up credentials, see the Enterprise Manager
Security chapter in the Enterprise Manager Security Guide.
9. Click Next. The Schedule page is displayed. On this page, you can specify the
schedule for deploying the node. You can choose to deploy the node immediately
or at a later date.
Note:
If you set the Grace Period as Indefinite, Cloud Control will keep trying to
deploy the node for an indefinite period. If you specify a date / time in this
field, the deployment process will be aborted after this period.
10. Click Next. The Review page is displayed. You can review the details you have
provided for deploying the node. If you are updating a node, you can view the
node processes that will be stopped on this page. Click Finish to deploy or update
the node.
After the new Management Node has been created, you must wait for the first
collection before you add nodes to the cluster.
1. Click Add in the Coherence Node Provisioning: Target Selection page. The Add
Coherence Node page is displayed. Enter the following details:
Number of Nodes: Specify the number of nodes that need to be added. By default, this
field has the value 1, but you can add as many nodes as required depending on the
machine and node configuration. If the value is more than 1, then all the nodes will
have the following properties:
• Each node will use the same COHERENCE_HOME and start script.
• The Node Name value will be added as the prefix and a number will be appended to
each node, for example, <node_name>_1, <node_name>_2, and so on. Each name
must be unique in the cluster.
• The JMX Remote Port value will be increased by 1 for each additional node. For
example, if the value of the JMX Remote Port for the first node is 8088, the value for
the second node will be 8089, and so on.
Site Name: The location of the Coherence node. This is the geographical or physical
site name that identifies the racks and machines on which the node is running.
Rack Name: The name of the rack in the site on which the machine is located.
Role Name: The role could be a storage/data, application/process, proxy, or
management node.
Note: The Node Name, Site Name, Rack Name, and Role Name cannot exceed 32
characters.
Well Known Address (WKA): If Cluster Communication has been set to WKA in the
Coherence Node Provisioning: Target Selection page, enter the host and port number
in the format host1:port1, host2:port2, and so on.
JVM Diagnostics Details:
JVM Manager Host and Port: If this node is to be monitored by JVM Diagnostics
Manager, specify the address and port number of the JVM Diagnostics Console.
Management Node with MBeanServer: You can define multiple management nodes
in the cluster, but only one management node can be marked as the Primary
Management Node. We recommend that you add at least two management nodes,
preferably running on different hosts or machines, to support failover.
JMX Remote Port: The port number of the EMIntegration MBean server.
JMX User Name: The user name for the JMX server if authentication is enabled.
JMX Password: The password for the JMX server if authentication is enabled.
Note: To enable JMX authentication, you need to set
com.sun.management.jmxremote.authenticate=true. The JMX User Name and JMX
Password need to be set in the $JDK_HOME/jre/lib/management/jmxremote.password
and $JDK_HOME/jre/lib/management/jmxremote.access files.
Primary Management Node used for Monitoring: Select this check box to mark the
management node you are adding as the Primary Management Node used for
Monitoring. This node is used to discover the Coherence cluster, and any nodes added
later will be added to the newly discovered cluster. If several nodes are being added to
a cluster, only one management node can be marked as the primary one. If the primary
management node fails, you can configure any of the other management nodes for
monitoring. If no other management node is available, you can add a new primary
management node to an existing cluster and use it for monitoring.
Use Bulk Operations MBean: This check box is selected by default. When this option is
selected, a new management node with BulkOperationsMBean will be started.
Install Directory: Enter the absolute path to the folder under which the Coherence
software components reside. The path specified here will be used as the Destination
Directory specified on the Coherence Node Provisioning: Source Selection page. This
value could be different for each node or the same for one or more nodes.
Start Script: This script is used to bring up the Coherence node. The script is operating
system specific and sets the proper environment required for the node by specifying
the relevant system parameters. See the sample script for an example.
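The JMX authentication note above relies on the standard JDK remote JMX files. A
minimal sketch is shown below; the user name and password are placeholders, and the
password file must be readable only by its owner (for example, chmod 600
jmxremote.password) or the JVM will refuse to start with authentication enabled.
# $JDK_HOME/jre/lib/management/jmxremote.password
coherence_admin welcome1
# $JDK_HOME/jre/lib/management/jmxremote.access
coherence_admin readwrite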
The following table summarizes how the values specified during deployment will
be used by the environment variables specified in the start script. The deployment
procedure also sets the JAVA_HOME and AGENT_HOME variables by using the
2. After adding the node, click Continue to return to the Coherence Node
Provisioning: Target Selection page.
32.3.3.2.1 default-start-script.pl
This script is the default start script used to start a Coherence node. A sample script is
shown below:
#!/usr/local/bin/perl
# This script sets all user entered options as environment variables. Based on the
# values of these environment variables, you can start different types of Coherence nodes:
#
# - Management Node with Oracle Bulk Operation MBean is started when
# "bulk_mbean" and "jmx_remote_port" variables are set. For this option,
# oracle.sysman.integration.coherence.EMIntegrationServer Java class is executed
# that starts a MBeanServer in this node and registers Oracle Bulk Operation
# MBean. You need coherenceEMIntg.jar and bulkoperationsmbean_11.1.1.jar in the
# classpath.
#
# - Management Node is started when "jmx_remote_port" is set, but "bulk_mbean" is
# NOT set.
#
# - Managed node when "jmx_remote_port" is not set.
#
#
# Following variables are set from the deployment procedure. Use these values to
# define required system parameters to override Coherence default settings.
my $coherence_home=$ENV{'COHERENCE_HOME'};
my $start_script=$ENV{'START_SCRIPT'};
my $java_home=$ENV{'JAVA_HOME'};
my $agent_home=$ENV{'AGENT_HOME'};
my $wka_port=$ENV{'WKA_PORT'};
my $license_mode=$ENV{'LICENSE_MODE'};
my $jamhost=$ENV{'JAM_CONSOLE_HOST'};
my $jamport=$ENV{'JAM_CONSOLE_PORT'};
my $member=$ENV{'tangosol_coherence_member'};
my $site=$ENV{'tangosol_coherence_site'};
my $rack=$ENV{'tangosol_coherence_rack'};
my $machine=$ENV{'tangosol_coherence_machine'};
my $jmxport=$ENV{'jmx_remote_port'};
my $cluster=$ENV{'tangosol_coherence_cluster'};
my $clusteraddr=$ENV{'tangosol_coherence_clusteraddress'};
my $clusterport=$ENV{'tangosol_coherence_clusterport'};
my $bulkmbean=$ENV{'bulk_mbean'};
my $jmx_auth=$ENV{'jmx_enable_auth'};
my $SYS_OPT="";
my $JVM_OPT="";
my $psep="";
my $dsep="";
if ( !&IsWindows() ) {
$psep=":";
$dsep="/";
}
else
{
$psep=";";
$dsep="\\";
}
print
"\n\n*************************************************************************\n";
# you may run a local script as part of this script and override those
# settings.
# Override JAVA_HOME variable by setting it locally
#
#. ./set-env.sh
#echo "After setting JAVA_HOME locally, JAVA_HOME: $JAVA_HOME"
# Options for Java Virtual Machine.
$JVM_OPT="-server -Xms512m -Xmx512m -Xincgc -verbose:gc";
#
# Set system parameters to Coherence node
$SYS_OPT="-Djava.net.preferIPv4Stack=true";
# This param allows the mbeans on this node to be registered to mbean servers
# running on management nodes
$SYS_OPT="$SYS_OPT -Dtangosol.coherence.management.remote=true";
$SYS_OPT="$SYS_OPT -Dcom.sun.management.jmxremote.ssl=false";
$SYS_OPT="$SYS_OPT -Dtangosol.coherence.cluster=$cluster";
$SYS_OPT="$SYS_OPT -Dtangosol.coherence.member=$member";
$SYS_OPT="$SYS_OPT -Dtangosol.coherence.site=$site";
$SYS_OPT="$SYS_OPT -Dtangosol.coherence.rack=$rack";
$SYS_OPT="$SYS_OPT -Dtangosol.coherence.machine=$machine";
# set this if machine name > 32 chars, tangosol.coherence.machine has a limitation
# of 32 chars
$SYS_OPT="$SYS_OPT -Doracle.coherence.machine=$oracle_coherence_machine";
$SYS_OPT="$SYS_OPT -Dtangosol.coherence.role=$role";
# Set coh home and start script so they will be part of input args
$SYS_OPT="$SYS_OPT -Doracle.coherence.home=$coherence_home";
$SYS_OPT="$SYS_OPT -Doracle.coherence.startscript=$start_script";
# $JDK_HOME/jre/lib/management/jmxremote.access files.
# Uncomment the following block after adding these files.
#
if ($jmx_auth ne "") {
$SYS_OPT="$SYS_OPT -Dcom.sun.management.jmxremote.authenticate=$jmx_auth";
}
else {
$SYS_OPT="$SYS_OPT -Dcom.sun.management.jmxremote.authenticate=false";
}
if("$clusterport" ne "") {
$SYS_OPT="$SYS_OPT -Dtangosol.coherence.clusterport=$clusterport";
}
if("$license_mode" ne "") {
$SYS_OPT="$SYS_OPT -Dtangosol.coherence.mode=$license_mode";
}
#This is used to generate WKA override file. If you choose to use an existing
#override file, you can comment this out.
#Make sure you set "-Dtangosol.coherence.override" to the appropriate file name.
if("$wka_port" ne "") {
$wka_port = "\"".$wka_port."\"";
$wka_script = $agent_home.$dsep."sysman".$dsep."admin".$dsep."scripts".$dsep."coherence".$dsep."generate-wka-override.pl";
print "executing $wka_script $wka_port\n";
if ( !&IsWindows() ) {
system("chmod 0700 $wka_script");
}
if(fork() == 0) {
exec("$wka_script $wka_port") or die "Could not execute
generate-wka-override.xml\n";
}
$SYS_OPT="$SYS_OPT -Dtangosol.coherence.override=em-coherence-override.xml";
}
my $startup_class="";
my $cmd="";
# Note that Coherence lib is under $COHERENCE_HOME/coherence. Add any application
# specific jars to this classpath, if needed.
my $CLASSPATH=$coherence_home.$dsep."lib".$dsep."coherence.jar".$psep.$coherence_home.$dsep."lib".$dsep."reporter.jar";
print "CLASSPATH: $CLASSPATH\n";
if($jamhost ne "" && $jamport ne "") {
$CLASSPATH=$CLASSPATH.$psep.$agent_home.$dsep."archives".$dsep."jlib".$dsep."jamagent.war";
my $jamjvmid="$cluster/$member";
print "Using Oracle JVMD - $jamjvmid\n";
$SYS_OPT="$SYS_OPT -Doracle.coherence.jamjvmid=$jamjvmid";
$SYS_OPT="jamconshost=$jamhost $SYS_OPT";
$SYS_OPT="jamconsport=$jamport $SYS_OPT";
$SYS_OPT=" oracle.ad4j.groupidprop=$jamjvmid $SYS_OPT";
}
if ($bulkmbean ne "" && $jmxport ne "") {
# Management node with Bulk Operation MBean.
# add Oracle supplied jars for Bulk Operation MBean
$CLASSPATH=$CLASSPATH.$psep.$agent_home.$dsep."..".$dsep."..".$dsep."lib".$dsep."coherenceEMIntg.jar".$psep.$agent_home.$dsep."..".$dsep."..".$dsep."dependencies".$dsep."bulkoperationsmbean_11.1.1.jar";
# Start MBeanServer
$SYS_OPT="$SYS_OPT -Dtangosol.coherence.management=all";
print "Starting a management node with Bulk Operation MBean \n";
$startup_class="oracle.sysman.integration.coherence.EMIntegrationServer";
$cmd=$java_home.$dsep."bin".$dsep."java -cp $CLASSPATH $JVM_OPT $SYS_OPT $startup_class";
} elsif ($jmxport ne "") {
# Management Node with out Bulk Operation MBean
# Start MBeanServer
$SYS_OPT="$SYS_OPT -Dtangosol.coherence.management=all";
print "Starting a management node ...\n";
$startup_class="com.tangosol.net.DefaultCacheServer";
$cmd=$java_home.$dsep."bin".$dsep."java -cp $CLASSPATH $JVM_OPT $SYS_OPT $startup_class";
} else {
# A simple managed node. Do not start MBeanServer.
$SYS_OPT="$SYS_OPT -Dtangosol.coherence.management=none";
print "Starting a simple managed node ...\n";
$startup_class="com.tangosol.net.DefaultCacheServer";
$cmd=$java_home.$dsep."bin".$dsep."java -cp $CLASSPATH $JVM_OPT $SYS_OPT $startup_class";
}
if ( !&IsWindows() ) {
if (fork() == 0) {
print "Executing start script from child process... $cmd \n";
exec("$cmd") or die "Could not execute $cmd\n";
}
} else {
print "Command used to start node = $cmd\n";
exec($cmd);
}
print "exiting default start script\n";
exit 0;
sub IsWindows {
$osname = $^O;
if ( $osname eq "Windows_NT"
|| $osname eq "MSWin32"
|| $osname eq "MSWin64" )
{
return 1;
}
else {
return 0;
}
}
32.3.3.2.1.1 generate-wka-override.pl
If Cluster Communication has been set to WKA in the Coherence Node Provisioning:
Target Selection page, this script is launched by the default-start-script.pl.
The generate-wka-override.pl is used to generate the override file. If you have
your own override file, you can comment out the part that uses the generate-wka-
override.pl script in the default-start-script.pl.
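For illustration, the script takes the WKA list as a single comma-separated argument and
writes em-coherence-override.xml to the current directory; the host names below are
placeholders:
perl generate-wka-override.pl host1.example.com:8088,host2.example.com:8088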
#!/usr/local/bin/perl
#
# $Header: emas/sysman/admin/scripts/coherence/generate-wka-override.pl /main/1
# 2011/02/01 16:51:33 $
#
# generate-wka-override.pl
#
# Copyright (c) 2011, 2011, Oracle and/or its affiliates. All rights reserved.
#
# NAME
# generate-wka-override.pl - <one-line expansion of the name>
#
# DESCRIPTION
# <short description of component this file declares/defines>
# expects input args as:
# host1:port1,host2:port2,host3:port3
# writes the wka information to em-coherence-override.xml
# Sample xml file:
#<coherence xml-override="/tangosol-coherence-override-{mode}.xml">
# <cluster-config>
# <unicast-listener>
# <well-known-addresses>
# <socket-address id="1">
# <address>10.232.129.69</address>
# <port>8088</port>
# </socket-address>
# <socket-address id="2">
# <address>10.232.129.69</address>
# <port>8089</port>
# </socket-address>
# </well-known-addresses>
# <port>8088</port>
# </unicast-listener>
# </cluster-config>
#</coherence>
#
use Cwd;
use IPC::Open3;
my $host_port = $ARGV[0];
@host_port_array = split(',', $host_port);
$size = @host_port_array;
my $xmlfile="em-coherence-override.xml";
print "$xmlfile\n";
open(XMLFL,"> $xmlfile");
print XMLFL "<coherence
xml-override=\"/tangosol-coherence-override-{mode}.xml\">\n";
print XMLFL "<cluster-config>\n";
print XMLFL "<unicast-listener>\n";
print XMLFL "<well-known-addresses>\n";
my $id = 1;
for($i = 0; $i < $size; $i++) {
$single_host_port = $host_port_array[$i];
$single_host_port =~ s/^\s+|\s+$//g;
32.4 Troubleshooting
We recommend that you have at least two management nodes running on different
machines in a Coherence cluster. If a monitoring failure occurs, the second
management node can be used. Some of the common failure scenarios are listed
below:
This chapter explains how you can provision SOA Artifacts and Composites using
Oracle Enterprise Manager Cloud Control (Cloud Control). In particular, this chapter
covers the following:
Table 33-1 Getting Started with Provisioning SOA Artifacts and Composites
service. As a result, a consumer does not have to know anything more about a Web
service than the WSDL file that describes what it can do.
A Web service consumer (such as, a desktop application or a Java Platform, Enterprise
Edition client such as a portlet) invokes a Web service by submitting a request in the
form of an XML document to a Web service provider. The Web service provider
processes the request and returns the result to the Web service consumer in an XML
document.
WS Policies and Assertions
Policies describe the capabilities and requirements of a Web service such as whether
and how a message must be secured, whether and how a message must be delivered
reliably, and so on. Policies belong to one of the following categories: Reliable
Messaging, Management, WS-Addressing, Security, and MTOM.
Policies are comprised of one or more assertions. A policy assertion is the smallest unit
of a policy that performs a specific action. Policy assertions are executed on the request
message and the response message, and the same set of assertions is executed on both
types of messages. The assertions are executed in the order in which they appear in the
policy. Assertions, like policies, belong to one of the following categories: Reliable
Messaging, Management, WS-Addressing, Security, and MTOM.
Policy Stores
The Policy Store is a repository of system and application-specific policies and roles.
Application roles can include enterprise users and groups specific to the application
(such as administrative roles). A policy can use any of these groups or users as
principals. A policy store can be file-based or LDAP-based. A file-based policy store is
an XML file, and this store is the out-of-the-box policy store provider. An LDAP-based
policy store can use either of the following LDAP servers: Oracle Internet Directory or
Oracle Virtual Directory (with a local store adapter, or LSA).
Credential Stores
A Credential Store is a repository of security data (credentials) that certify the
authority of users, Java components, and system components. A credential can hold
user name and password combinations, tickets, or public key certificates. This data is
used during authentication, when principals are populated in subjects, and, further,
during authorization, when determining what actions the subject can perform.
Human Workflow
Human Workflow component is responsible for managing the lifecycle of human
tasks, including creation, assignment, expiration, deadlines, and notifications, as well
as its presentation to end users. It supports sophisticated dynamic task routing
leveraging declarative patterns and tight integration with business rules. The three
main sub-components of Human Workflow are a Task editor, Task Service Engine,
and a Worklist application.
Oracle B2B
Oracle B2B provides the secure and reliable exchange of documents between
businesses, for example, between a retailer, a supplier, and a manufacturer. This type
of eCommerce, B2B, represents mature business documents, classic business
processes, and industry-specific messaging services, and requires an architecture to
manage the complete end-to-end business process. Together with Oracle SOA Suite,
Oracle B2B meets this challenge and provides an architecture enabling a unified
business process platform, end-to-end instance tracking, visibility, auditing, process
intelligence, governance, and security.
Note:
Provisioning of a gold image from the Software Library is not supported for
Microsoft Windows Vista.
Note:
Cloning of human workflow artifacts and B2B artifacts is not supported. For
information about cloning human workflow artifacts, see the Oracle Fusion
Middleware Administrator's Guide for Oracle SOA Suite and Oracle Business
Process Management Suite. For information about cloning B2B artifacts, see the
Oracle Fusion Middleware User's Guide for Oracle B2B. These guides are
available at:
http://docs.oracle.com/docs/cd/E14571_01/index.htm
• Ensure that you meet the prerequisites described in Provisioning SOA Artifacts
and Composites.
• Ensure that you have already provisioned Oracle SOA Suite 11g and its underlying
Oracle WebLogic Server Domain.
• Ensure that all the components (not only the soa-infra domain) within the source
and target Oracle WebLogic Server Domains are up and running.
• Ensure that the source and the destination soa-infra domains are of the same
version.
To provision SOA artifacts (composites, web service policies, JPS configuration) from a
reference installation, follow these steps:
1. From the Enterprise menu, select Provisioning and Patching, then select
Procedure Library.
a. Retain the default selection, that is, Provision from reference environment.
b. Click on the torch icon against the Domain Name field. Search for the Oracle
WebLogic Server Domain that you want to deploy the SOA artifacts from and
select it. Ensure that the source Oracle WebLogic Server Domain is up and
running.
c. In the Credentials section, retain the default selection, that is, Preferred
Credentials so that the preferred credentials stored in the Management
Repository can be used.
To override the preferred credentials with another set of credentials, select
Override Preferred Credentials. You are prompted to specify the Oracle
WebLogic Server Domain credentials and the Oracle WebLogic
Administration Server host credentials. In the Oracle WebLogic Server
Domain Credentials section, specify the administrator credentials that can be
used to access the WebLogic Server Administration Console. In the Oracle
WebLogic Administration Server Host Credentials section, specify the
operating system credentials of the user who installed the Admin Server.
d. Optionally, if you want to save the SOA artifacts as an image in the Software
Library, select Save SOA Artifacts Gold Image in Software Library.
For example, if in the future you want to provision this particular version to
other Oracle WebLogic Server Domains, then instead of using the reference
installation, which could potentially be down, you can use the gold image
you saved in the Software Library.
e. Click Next.
a. Click on the torch icon against the Domain Name field. Search for the Oracle
WebLogic Server Domain that you want to deploy the SOA artifacts to and
select it. Ensure that the destination Oracle WebLogic Server Domain is up
and running.
b. In the Credentials section, retain the default selection, that is, Preferred
Credentials so that the preferred credentials stored in the Management
Repository can be used.
To override the preferred credentials with another set of credentials, select
Override Preferred Credentials. You are prompted to specify the Oracle
WebLogic Server Domain credentials and the Oracle WebLogic
Administration Server host credentials. In the Oracle WebLogic Server
Domain Credentials section, specify the administrator credentials that can be
used to access the WebLogic Server Administration Console. In the Oracle
WebLogic Administration Server Host Credentials section, specify the
operating system credentials of the user who installed the Admin Server.
c. Click Next.
a. In the Choose the type of SOA artifacts to provision section, select SOA
Composites, Web Services Policies, and Java Platform Security
Configuration.
b. Click Next.
a. Select the composites you want to provision and specify a configuration plan
from the Software Library or a directory.
It is recommended that you place the configuration plan in a directory on the
destination machine. Alternatively, you can also place the configuration plan
in any other shared location which is accessible from the destination machine.
If the composite already exists on the destination host, then select Overwrite
to overwrite that existing composite with the composite from the source
domain.
b. Click Next.
c. Click Next.
a. In the Migrate Policy Store and Credential Store section, select Migrate Policy
Store and Migrate Credential Store check boxes.
To view a list of providers for the source and target, click Provider details
link.
b. Click Next.
9. On the Human Workflow page, select all the workflow artifacts that you want to
migrate like Views, Flex Field Mappings, and Attribute Labels, then click Next.
10. On the B2B artifacts page, select all the B2B artifacts that you want to migrate like
Trading Partners, Trading Agreements, and Document Protocols, then click
Next.
11. On the Schedule page, schedule the Deployment Procedure to run either
immediately or later.
12. On the Review page, review the details you have provided for provisioning SOA
artifacts, and click Submit.
• Ensure that you meet the prerequisites described in Setting Up Your Infrastructure.
• Ensure that you have already provisioned Oracle SOA Suite 11g and its underlying
Oracle WebLogic Server Domain.
• Ensure that the source and the destination soa-infra domains are of the same
version.
• Ensure that you have already saved the gold image in the Software Library while
provisioning the SOA artifacts from a reference installation.
To provision SOA artifacts from a gold image, follow these steps:
1. From the Enterprise menu, select Provisioning and Patching, then click
Procedure Library.
b. Click on the torch icon against the Gold Image Name field. Search for the
gold image you want to provision the SOA artifacts from and select it.
c. Click Next.
a. Click on the torch icon against the Domain Name field. Search for the Oracle
WebLogic Server Domain that you want to deploy the SOA artifacts to and
select it.
b. In the Credentials section, retain the default selection, that is, Preferred
Credentials so that the preferred credentials stored in the Management
Repository can be used.
To override the preferred credentials with another set of credentials, select
Override Preferred Credentials. You are prompted to specify the Oracle
WebLogic Server Domain credentials and the Oracle WebLogic
Administration Server host credentials. In the Oracle WebLogic Server
Domain Credentials section, specify the administrator credentials that can be
used to access the WebLogic Server Administration Console. In the Oracle
WebLogic Administration Server Host Credentials section, specify the
operating system credentials of the user who installed the Admin Server.
c. Click Next.
a. In the Choose the type of SOA artifacts to provision section, select SOA
Composites, Web Services Policies, and Java Platform Security
Configuration.
b. Click Next.
a. Select the composites you want to provision and specify a configuration plan
from the Software Library or a directory.
If the composite already exists on the destination host, then select Overwrite
to overwrite that existing composite with the composite from the source
domain.
b. Click Next.
b. In the Web Services Policies section, select the policy assertions to migrate.
c. Click Next.
a. In the Migrate Policy Store and Credential Store section, select Migrate Policy
Store and Migrate Credential Store check boxes.
To view a list of providers for the target, click Provider details link.
b. Click Next.
9. On the Human Workflow page, select all the workflow artifacts that you want to
migrate like Views, Flex Field Mappings, and Attribute Labels, then click Next.
10. On the B2B artifacts page, select all the B2B artifacts that you want to migrate like
Trading Partners, Trading Agreements, and Document Protocols, then click
Next.
11. On the Review page, review the details you have provided for provisioning SOA
artifacts, and click Submit.
• Ensure that you meet the prerequisites described in Provisioning SOA Artifacts
and Composites.
• Ensure that you have already provisioned Oracle SOA Suite 11g and its underlying
Oracle WebLogic Server Domain.
• Ensure that the source and the destination soa-infra domains are of the same
version.
The domain should have at least one managed server with the SOA Infrastructure
application running. In the case of a SOA Cluster, the composites will be deployed
to any one managed server in the cluster.
• Ensure that you have the SOA Composites either in the Software Library or in a file
system accessible from the Admin Server host.
To provision SOA composites, follow these steps:
1. From the Enterprise menu, select Provisioning and Patching, then select
Procedure Library.
a. Click on the torch icon against the Destination Domain Name field. Search
for the Oracle WebLogic domain that you want to deploy the SOA composites
to, and select it.
b. In the Credentials section, retain the default selection, that is, Preferred
Credentials so that the preferred credentials stored in the Management
Repository can be used.
To override the preferred credentials with another set of credentials, select
Override Preferred Credentials. You are prompted to specify the Oracle
WebLogic Server Domain credentials and the Oracle WebLogic
Administration Server host credentials. In the Oracle WebLogic Server
Domain Credentials section, specify the administrator credentials that can be
used to access the WebLogic Server Administration Console. In the Oracle
WebLogic Administration Server Host Credentials section, specify the
operating system credentials of the user who installed the Admin Server.
c. Click Next.
c. Click Next.
6. On the Review page, review the details you have provided for provisioning SOA
composites, and click Submit.
Service Bus is an enterprise-class service bus that connects, manages, and mediates
interactions between heterogeneous services. Service Bus accelerates service
configuration, integration, and deployment, thus simplifying management of shared
services across the Service-Oriented Architecture (SOA).
The resources of Service Bus can be organized into individual projects. Projects are
non-hierarchical, disjointed, top-level grouping constructs. All resources (such as
business services, proxy services, WS-Policies, WSDLs, schemas, XQuery
transformations, JARs, and so on) reside in exactly one non-overlapping project.
Resources can be created directly under a project or be further organized into folders.
Folders may be created inside projects or inside other folders, and the folders are
similar to directories in a file system, with the project level being the root directory.
While Oracle Enterprise Manager Cloud Control (Cloud Control) allows you to
discover and monitor these Service Bus targets, it also provides Deployment
Procedures that help you provision Service Bus resources.
This chapter explains how you can provision Service Bus resources. In particular, this
chapter covers the following:
• Supported Releases
Table 34-1 Getting Started with Provisioning Service Bus Resources
Step 4: Running the Deployment Procedure. Run the Deployment Procedure to
successfully provision Service Bus resources.
• To provision Service Bus resources from the source Service Bus domain, follow
the steps explained in Provisioning Service Bus Resources from Service Bus
Domain.
• To provision Service Bus resources from the Software Library, follow the steps
explained in Provisioning Service Bus Resources from Oracle Software Library.
• Ensure that you meet the prerequisites described in Setting Up Your Infrastructure.
• Ensure that the source Service Bus (from where you want to export the resources)
is already discovered and monitored in Cloud Control.
• If you have PAM/LDAP enabled in your environment, then ensure that the target
agents are configured with PAM/LDAP. For more information, see My Oracle
Support note 422073.1.
• Ensure that you use an operating system user that has the privileges to run the
Deployment Procedure, and that can switch to root user and run all commands on
the target hosts. For example, commands such as mkdir, ls, and so on.
If you do not have the privileges to do so, that is, if you are using a locked account,
then request your administrator (a designer) to either customize the Deployment
Procedure to run it as another user or ignore the steps that require special
privileges.
For example, user account A might have the root privileges, but you might use user
account B to run the Deployment Procedure. In this case, you can switch from user
account B to A by customizing the Deployment Procedure.
For information about customization, see Setting Up Credentials.
To provision Service Bus resources from a source Service Bus domain, follow these
steps:
1. From the Enterprise menu, select Provisioning and Patching, then select
Middleware Provisioning.
2. From the Deployment Procedures section, select the Service Bus Resource
Provisioning procedure from the list and click Launch.
3. On the Select Source page, in the Source section, select Service Bus Domain.
a. For Domain, click the torch icon and select the Service Bus domain from
where the resources can be exported and deployed to a target Service Bus
domain. In the following page of the wizard, you will be allowed to select the
domain's projects that you want to export.
b. For BEA Home Directory, specify the full path to the BEA home directory
where all BEA product-related files are stored. For example, /home/mark/
bea.
c. Click Next.
a. In the Resource Summary section, select the projects you want to export and
deploy to the target Service Bus domain. The selected projects are exported to
a JAR file, and the JAR file is moved to the host where the target Service Bus
domain is running.
Note that the resources of the selected projects that exist in the target Service
Bus domain but not in the exported JAR file will be deleted.
c. (Optional) In the Security Options section, if the projects you want to export
contain any resources with sensitive data, then specify a pass-phrase to
protect them. The same pass-phrase will be used to import the protected
resources during deployment.
b. (Optional) In the Advanced Options section, select the settings you want to
retain if you have done some customization to the resources selected for
deployment, and if you want to preserve those changes in the target Service
Bus domain.
Note that for Service Bus 2.6.x, Security and Policy Configuration,
Credentials, and Access Control Policies cannot be preserved.
c. In the Customization section, provide details about the customization file that
can be used to modify the environment settings in the target Service Bus
domain.
If you do not want to use a customization file, select None.
If you are using a customization file and if it is available on the host where the
target Service Bus domain is running, then select Use the Customization file
on the target host and specify the full path to the location where the file is
present.
If the customization file is stored as a generic component in Oracle Software
Library, then select Select the customization file from the Software Library
and specify the full path to the location in Oracle Software Library where the
generic component is stored.
d. Click Next.
6. On the Set Credentials page, specify the following and click Next.
a. Specify the login credentials of the source and target Service Bus domains.
b. Specify the credentials of the hosts where the Management Agents, which are
monitoring the administration servers of the Service Bus domains, are
running.
7. In the Schedule page, specify a Deployment Instance name. If you want to run the
procedure immediately, then retain the default selection, that is, One Time
(Immediately). If you want to run the procedure later, then select One Time
(Later) and provide time zone, start date, and start time details. You can set the
notification preferences according to deployment procedure status. If you want to
run only prerequisites, you can select Pause the procedure to allow me to analyze
results after performing prerequisite checks to pause the procedure execution
after all prerequisite checks are performed. Click Next.
8. On the Review page, review the details you have provided for the Deployment
Procedure. If you are satisfied with the details, then click Submit to run the
Deployment Procedure according to the schedule set. If you want to modify the
details, click the Edit link in the section to be modified or click Back repeatedly to
reach the page where you want to make the changes.
9. In the Procedure Activity page, view the status of the execution of the job and
steps in the deployment procedure. Click the Status link for each step to view the
details of the execution of each step. You can click Debug to set the logging level
to Debug and click Stop to stop the procedure execution.
The following examples show the difference between exporting at the project level and
exporting at the resource level:

Source Domain: You have selected Project_1 from the source domain, and this project
has Resource_1, Resource_2, and Resource_3.
Target Domain: The target domain has no projects at all.
Export at Project Level: The entire Project_1 will be deployed to the target domain.
Export at Resource Level: The entire Project_1 will be deployed to the target domain.

Source Domain: You have selected Project_1 from the source domain, and this project
has Resource_1, Resource_2, and Resource_3.
Target Domain: The target domain has Project_1, and this project has Resource_1.
Export at Project Level: The entire Project_1 will be deployed to the target domain,
wherein Resource_1 will be overwritten because it is already available in the target
domain, and Resource_2 and Resource_3 will be ADDED.
Export at Resource Level: Only the resources of Project_1 will be deployed to the
target domain, wherein Resource_1 will be overwritten because it is already available
in the target domain, and Resource_2 and Resource_3 will be ADDED.

Source Domain: You have selected Project_1 from the source domain, and this project
has Resource_1.
Target Domain: The target domain has Project_1, and this project has Resource_1,
Resource_2, and Resource_3.
Export at Project Level: The entire Project_1 will be deployed to the target domain,
wherein Resource_1 will be overwritten because it is already available in the target
domain, and Resource_2 and Resource_3 will be DELETED.
Export at Resource Level: Only the resources of Project_1 will be deployed to the
target domain, wherein only Resource_1 will be overwritten because it is already
available in the target domain. The other two resources already available in the target
domain, that is, Resource_2 and Resource_3, will NOT be affected.
• Ensure that you meet the prerequisites described in Setting Up Your Infrastructure.
• Export the resources of a Service Bus domain as a JAR file. Use the Service Bus
console for this.
• Ensure that the JAR file is available as a generic component in Oracle Software
Library. For instructions to create generic components, see Setting Up Oracle
Software Library.
• If you have PAM/LDAP enabled in your environment, then ensure that the target
agents are configured with PAM/LDAP. For more information, see My Oracle
Support note 422073.1.
• Ensure that you use an operating system user that has the privileges to run the
Deployment Procedure, and that can switch to root user and run all commands on
the target hosts. For example, commands such as mkdir, ls, and so on.
If you do not have the privileges to do so, that is, if you are using a locked account,
then request your administrator (a designer) to either customize the Deployment
Procedure to run it as another user or ignore the steps that require special
privileges.
For example, user account A might have the root privileges, but you might use user
account B to run the Deployment Procedure. In this case, you can switch from user
account B to A by customizing the Deployment Procedure.
For information about customization, see Customizing Deployment Procedures .
To provision Service Bus resources from the Oracle Software Library, follow these
steps:
1. From the Enterprise menu, select Provisioning and Patching, then select
Middleware Provisioning.
2. From the Deployment Procedures section, select the Service Bus Resource
Provisioning procedure from the list and click Launch.
3. On the Select Source page, in the Source section, select Oracle Software Library.
a. For Component, click the torch icon and select the generic component that
contains the resources to be deployed to a target Service Bus domain.
b. (Optional) For Pass Phrase, specify a pass-phrase if any of the resources in the
JAR file contain sensitive data and are protected. The same pass-phrase is
used while importing these resources to the target domain.
b. (Optional) In the Options section, select the settings you want to retain if you
have done some customization to the resources selected for deployment, and
if you want to preserve those changes in the target Service Bus domain.
Note that for Service Bus 2.6.x, Security and Policy Configuration,
Credentials, and Access Control Policies cannot be preserved.
c. In the Customization section, provide details about the customization file that
can be used to modify the environment settings in the target Service Bus
domain.
If you do not want to use a customization file, select None.
If you are using a customization file and if it is available on the host where the
target Service Bus domain is running, then select Use the Customization file
on the target host and specify the full path to the location where the file is
present.
If the customization file is stored as a generic component in Oracle Software
Library, then select Select the customization file from the Software Library
and specify the full path to the location in Oracle Software Library where the
generic component is stored.
d. Click Next.
5. On the Set Credentials page, specify the following and click Next.
a. Specify the login credentials of the source and target Service Bus
domains.
b. Specify the credentials of the hosts where the Management Agents, which are
monitoring the administration servers of the Service Bus domains, are
running.
6. In the Schedule page, specify a Deployment Instance name. If you want to run the
procedure immediately, then retain the default selection, that is, One Time
(Immediately). If you want to run the procedure later, then select One Time
(Later) and provide time zone, start date, and start time details. You can set the
notification preferences according to deployment procedure status. If you want to
run only prerequisites, you can select Pause the procedure to allow me to analyze
results after performing prerequisite checks to pause the procedure execution
after all prerequisite checks are performed. Click Next.
7. On the Review page, review the details you have provided for the Deployment
Procedure. If you are satisfied with the details, then click Submit to run the
Deployment Procedure according to the schedule set. If you want to modify the
details, click the Edit link in the section to be modified or click Back repeatedly to
reach the page where you want to make the changes.
8. In the Procedure Activity page, view the status of the execution of the job and
steps in the deployment procedure. Click the Status link for each step to view the
details of the execution of each step. You can click Debug to set the logging level
to Debug and click Stop to stop the procedure execution.
This chapter explains how you can provision Linux on bare metal servers using Oracle
Enterprise Manager Cloud Control (Cloud Control). In particular, this chapter covers
the following:
• Using Saved Plans for Provisioning Linux Operating Systems on Bare Metal
Servers
Note:
Before starting the provisioning Linux operations, ensure that you configure
sudo privileges. For more information about configuring sudo privileges, see
Setting Up Credentials.
Step 2: Knowing the Use Case. This chapter covers provisioning Linux. Understand
the use case for Linux provisioning.
• To learn about provisioning bare metal boxes, see Provisioning Bare Metal
Servers.
• Powering up the bare metal machine on the network to begin the PXE-based OS
boot and install process. For information about PXE Booting and KickStart, see
Understanding PXE Booting and Kickstart Technology.
• Oracle Linux 4.x, Oracle Linux 5.x, Oracle Linux 6.x, Oracle Linux 7
• RedHat Enterprise Linux (RHEL) 4.x, RedHat Enterprise Linux (RHEL) 5.x, RedHat
Enterprise Linux (RHEL) 6.x
• Oracle VM Server (OVS) 3.0.x, Oracle VM Server 3.1.x, Oracle VM Server 3.2.x
• Checklist for Boot Server, Stage Server, RPM Repository, and Reference Host
• The user or role used to create the top-level directory (stage directory) where you
stage the Agent rpms should have Sudo access to root. To ensure that you have
sudo access on the stage storage, log in to the Cloud Control console and set the
sudo privileges.
Note:
Oracle recommends that the stage server have very limited access due to
the criticality and sensitivity of the data it hosts. The super administrator can
enforce this by creating one account on the stage server and setting it as the
preferred credential to be used by all the provisioning users in Cloud Control.
This preferred credential should also be a valid ORACLE_HOME credential
(belonging to the ORACLE_HOME owner's group).
• The user creating the top-level directory must have write permissions on it. To
ensure that you have write access on the stage server, log in to the Cloud Control
console and set the privileged preferred credentials for the stage server host.
• The minimum space requirement for the stage directory is 100 MB.
35.4.1.2 Setting up a Stage Server and Accessing the Management Agent files
To set up a stage server, and access the Management Agent RPM files, follow these
steps:
2. Log in to the stage server on which the Management Agent is running, and create
a top-level directory to store all the Management Agent installation files.
In this section, the variable STAGE_TOP_LEVEL_DIRECTORY is used to refer to
the top level directory on the stage server.
For example:
User: aime
Stage Server: upsgc.example.com
Stage Directory: /scratch/stage
Note, in this case, the aime user should have sudo access to root, and should have
write permissions on the /scratch/stage directory.
3. To create and copy the Management Agent Files to Stage location, run the
following commands on the OMS:
For using the NFS Stage Server:
STAGE_TOP_LEVEL_DIRECTORY=/scratch/stage
Note:
1. If NFS is used, the staging process will automatically discover the
agent rpm and there is no requirement for you to provide a URL for the
rpm.
2. If HTTP is used, a URL will be required to reference the Agent rpm, for
example, http://host.example.com/agent_dir/oracle-
agt-12.1.0.4.0-1.0.x86_64.rpm. For more information on setting
up an HTTP Stage Server, see Setting up a HTTP Stage Server.
During the installation, hardware servers mount the stage directory so that all the files
required for installation appear as local files. In such a scenario, the stage server
functions as the NFS server, and the hardware servers as its clients. If the stage server
is an NFS server, then any files that it NFS exports must be available to its clients; for
files on NAS storage it might be necessary to configure the NAS to allow this to
happen.
Ensure that you perform the following steps on the stage server:
2. Run the following commands to configure NFS to export the stage server's top
level directory (STAGE_TOP_LEVEL_DIRECTORY):
STAGE_TOP_LEVEL_DIRECTORY=/scratch/stage
echo "${STAGE_TOP_LEVEL_DIRECTORY}*(ro,sync)" >>/etc/exports
3. To reflect these changes on the NFS daemons, run the following command:
service nfs restart
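Optionally, you can verify that the stage directory is exported as expected. The host
name below is the example stage server used earlier:
exportfs -v
showmount -e upsgc.example.com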
See Also:
Oracle Enterprise Manager Cloud Control Basic Installation Guide to install a
12.1.0.1.0 or higher version of Management Agent.
1. Run the following commands to install a stage server and start it:
rpm --quiet -q httpd || yum -y install httpd
service httpd restart
chkconfig httpd on
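For example, assuming the default Apache document root of /var/www/html, the
Agent rpm staged earlier could be published over HTTP along the following lines; the
agent_dir name matches the example Agent URL shown earlier:
mkdir -p /var/www/html/agent_dir
cp ${STAGE_TOP_LEVEL_DIRECTORY}/oracle-agt-12.1.0.4.0-1.0.x86_64.rpm /var/www/html/agent_dir/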
Note:
Ensure that you have 2 GB of RAM available for the boot server, stage server, and
RPM repository server.
If you have the required boot server, stage server, and RPM repository already
created, then set up the preferred credentials.
Complete the following steps to set up a machine as the boot server:
The two servers could be running either on the same machine, or on different
machines. Oracle recommends running the TFTP server on the same host machine
as the DHCP server. In case the two servers are installed and configured on
different machines, the machine running the TFTP server will be referred to as the
boot server.
• Ensure that the pxelinux boot loader (pxelinux.0) exists in the directory that is
configured for your TFTP server (/tftpboot/linux-install in the given examples).
Edit the dhcpd.conf (/etc/dhcpd.conf) file. A sample dhcpd.conf file for PXE setup is
shown below:
allow booting;
allow bootp;
group {
next-server <TFTP_server_IP_address>;
filename "linux-install/pxelinux.0";
host <hostname> {
hardware ethernet <MAC address>;
fixed-address <IP address>;
}
}
The next-server option in the DHCP configuration file specifies the host name or IP
Address of the machine hosting the TFTP server. Oracle recommends running the
TFTP Server on the same host machine as the DHCP Server. Therefore, this address
should be the IP Address or host name for the local machine.
The filename option specifies the boot loader location on the TFTP server. The
location of the file is relative to the main TFTP directory.
Any standard DHCP configuration file is supported. The sample file format above
shows one host entry for each target host. The DHCP service must be restarted
every time you modify the configuration file.
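For illustration, a host entry filled in with placeholder values, followed by the restart
of the DHCP service:
host bmhost1 {
hardware ethernet 00:11:22:33:44:55;
fixed-address 192.0.2.10;
}
service dhcpd restart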
4. Enable the tftp service. Edit the /etc/xinetd.d/tftp file and change the disable flag
to no (disable=no).
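For example (a sketch): after editing /etc/xinetd.d/tftp so that it contains the line
disable = no, restart xinetd for the change to take effect:
service xinetd restart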
6. Install Oracle Management Agent. This step is not necessary if the DHCP and Boot
servers are installed on the Cloud Control server.
Note:
Refer to Oracle Enterprise Manager Cloud Control Basic Installation Guide to
install a 12.1.0.1.0 or higher version of Management agent on the boot server.
Note:
It is recommended that you use 2 GB of RAM.
There are multiple ways to create an RPM repository. If Red Hat Enterprise Linux CDs
are available, do the following:
a. If there are custom RPMs installed on the reference host that need to be
provisioned on the bare metal machine, make sure to copy them to the
following repository location:
<RPM_REPOS>/Redhat/RPMS
4. Run yum-arch:
This should create a headers directory. Make sure this directory contains a
header.info file.
If yum is not installed, download it from the Linux vendor's Web site.
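A sketch of the yum-arch step, assuming the repository top-level directory described
above; the exact directory level at which you run yum-arch can differ in your setup:
yum-arch <RPM_REPOS>
ls <RPM_REPOS>/headers/header.info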
Note:
If the Apache server that comes with Enterprise Manager Cloud Control is
used, enable the Apache directory index page using the "Options Indexes"
directive in the Apache configuration (httpd.conf) file.
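A sketch of the corresponding httpd.conf fragment, assuming the repository is served
from the default document root:
<Directory "/var/www/html">
Options Indexes
</Directory>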
You can set up Oracle Linux Repository by using the Oracle Linux installation media
as follows:
a. If there are custom RPMs installed on the reference host that need to be
provisioned on the bare metal machine, make sure to copy them to the
following repository location:
<RPM_REPOS>/Enterprise/RPMS
5. Run yum-arch:
This should create a headers directory. Make sure this directory contains a
header.info file.
You can set up Oracle Linux Repository by using the Oracle Linux installation media
as follows:
2. Copy all the contents of the first CD to a directory, say, Root Directory.
3. Copy all contents from the Cluster, ClusterStorage, Server, and VT directories in
the other CD to the respective directories.
a. If there are custom RPMs installed on the reference host that need to be
provisioned on the bare metal machine, make sure to copy them to the
directory containing the RPMS, such as Cluster, VT, ClusterStorage,
and Server.
1. Ensure that Apache Web Server is installed and HTTP service is running.
2. From the Enterprise menu, select Provisioning and Patching and then select Bare
Metal Provisioning.
3. In the Infrastructure tab, in the Stage Servers section, click Add Server.
4. In the Add Staging Server dialog, select a Stage Server, specify a Stage Directory,
for example, /scratch/stage, and Base URL, for example, file://
stgserver.example.com/scratch/stage. Click OK.
1. Read about DHCP, PXE, and Redhat Kickstart technology before going through the
boot server setup.
3. From the Enterprise menu, select Provisioning and Patching, then select Bare
Metal Provisioning.
5. In the Add Boot Server dialog, select a Boot Server and specify a TFTP Boot
Directory, for example, /tftpboot/linux-install/. Click OK.
2. From the Enterprise menu, select Provisioning and Patching, then select Bare
Metal Provisioning.
4. In the Add DHCP Server dialog, select a DHCP Server and specify a DHCP
Configuration File, for example, /etc/dhcpd.conf that has been modified to
support your target hosts. Click OK.
2. From the Enterprise menu, select Provisioning and Patching, then select Bare
Metal Provisioning.
4. In the Add RPM Repository Server dialog, specify a Repository Name and URL.
For an RPM repository that is accessible over HTTP or hosted on a local server, specify the URL in HTTP format, for example, http://example.com/OL5/. For an NFS location, specify the URL in file format, for example, file://example/OEL5/.
Click OK.
35.4.8 Checklist for Boot Server, Stage Server, RPM Repository, and Reference Host
Ensure that the following criteria are met before provisioning:
Table 35-2 Checklist for Boot Server, Stage Server, RPM Repository, and Reference Host
Table 35-2 (Cont.) Checklist for Boot Server, Stage Server, RPM Repository, and Reference Host
Reference Host Agent is installed on local disk and not on NFS mounted directory.
Preferred Credentials are set.
Software Library Shared storage used for Software Library is accessible through NFS mount points to all
OMS servers.
1. From the Enterprise menu, select Provisioning and Patching, then select Software
Library.
2. From the Software Library Home, from the Actions menu, select Create Folder.
3. In the Create Folder popup, specify a Name and Description for the folder and select the folder location. For example, create a folder BMP-OL56 to represent the components you will use to provision a bare metal server with Oracle Linux 5 Update 6. Click Save.
4. From the Actions menu, select Create Entity and then Bare Metal Provisioning
Components.
5. In the Create Entity: Bare Metal Provisioning Components dialog box, select
Operating System Component and click Continue.
6. On the Describe page, enter the Name, Description, and Other Attributes that
describe the entity.
Note: The component name must be unique within the parent folder. Sometimes, even when you enter a unique name, a conflict may be reported; this can happen when an entity with the same name exists in the folder but is not visible to you because you do not have view privilege on it.
Click +Add to attach files that further describe the entity, such as readme, collateral, or licensing files. Ensure that each file is smaller than 2 MB.
In the Notes field, include information related to the entity, such as changes being made to it or the modification history that you want to track.
7. In the Basic Operating System page, select a Time Zone and specify the Root
Password.
In the Operating System Users List, add the users for the operating system by
specifying the User Name, Password, Primary Group, and Additional Groups.
Specify if you want to Enable Sudo Access for the user.
In the Fetch Configuration properties from Reference Enterprise Manager Host
target section, select Fetch Properties to apply the host properties. Select the
reference host and select the Configurations you want to fetch.
Click Next.
The Configure Package Selection section displays the packages from the operating
component or reference host you specified in the previous screen. You can retain or
remove these packages from the component.
Click Next.
The operating system component will be saved in Software Library with the status
Ready.
Element Description
Install User User name for installing the agent.
Agent Registration Password Specify the password to be used to register the agent with Oracle Management
Server.
Element Description
Require TTY Select this option if you want the sudo user to log in to a separate terminal.
Mount Point Settings Specify entries for the /etc/fstab file. You can specify mount points on the
newly provisioned Linux machine. By default, mount point settings from the
reference Linux machine are inherited.
NIS Settings Specify entries for the /etc/yp.conf file. You can specify NIS settings for the
newly provisioned Linux machine. By default, NIS settings from the reference
Linux machine are inherited.
Element Description
NTP Settings Specify entries for the /etc/ntp.conf file. You can specify NTP settings for the
newly provisioned Linux machine. By default, NTP settings from the reference
Linux machine are inherited.
Inittab Settings Specify settings for the /etc/inittab file. All processes are started as part of the init operation in the boot process. The init operation decides which processes start when the machine boots or when the runlevel changes.
Firewall Settings Specify firewall settings for the Linux target. Firewall settings are disabled by
default and can be configured. Make sure that the port used by Management
Agent is open for its communication with the Management Service. By default,
Management Agent uses port 3872 or a port number in the range 1830-1849,
unless configured to use some other port.
Element Description
Advanced Configuration & Power Interface Specify settings for the boot-time kernel parameter (acpi) in the /boot/grub/grub.conf file.
Post Install Script Specify any set of commands that need to be executed on the newly
provisioned machine. These commands will be appended to the post section of
the kickstart file.
First Boot Script Specify any set of commands that need to be executed on the newly
provisioned machine when it starts up for the first time.
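Purely as an illustration of the kinds of values that could be supplied for the Mount Point Settings and NTP Settings elements described above, entries of the following form might be used; the device name, mount point, and NTP server shown are assumptions for the example:
# /etc/fstab entry (Mount Point Settings)
/dev/sda2   /u01   ext3   defaults   1 2
# /etc/ntp.conf entry (NTP Settings)
server ntp.example.com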
1. From the Enterprise menu, select Provisioning and Patching, then select Software
Library.
2. From the Software Library Home, from the Actions menu, select Create Folder.
3. In the Create Folder popup, specify a Name and Description for the folder and
select the folder location. Click Save.
4. From the Actions menu, select Create Entity and then Bare Metal Provisioning
Components.
5. In the Create Entity: Bare Metal Provisioning Components dialog box, select Disk
Layout Component and click Continue.
6. On the Describe page, enter the Name, Description, and Other Attributes that
describe the entity.
Note: The component name must be unique within the parent folder. Sometimes, even when you enter a unique name, a conflict may be reported; this can happen when an entity with the same name exists in the folder but is not visible to you because you do not have view privilege on it.
Click +Add to attach files that further describe the entity, such as readme, collateral, or licensing files. Ensure that each file is smaller than 2 MB.
In the Notes field, include information related to the entity, such as changes being made to it or the modification history that you want to track.
7. In the Configure page, specify the hard disk, RAID, partition, and logical
configurations.
To specify the hard disk profile, click Add. Specify the Mount Point, RAID Level,
Partitions, and File System Type.
To specify the Partition Configuration, click Add. Specify the Mount Point, Device
Name, File System Type, and Size (MB).
To specify RAID configuration, click Add. Specify the Device Name and Capacity.
To specify the Logical Volume Group Configuration, click Add. Specify the Group
Name, Partitions, and RAIDs.
To specify the Logical Volume Configuration, click Add. Specify the Mount Point,
Logical Volume Name, Logical Group Name, File System Type, and Size (MB).
Click Next.
The disk layout component will be saved in Software Library with the status
Ready.
1. From the Enterprise menu, select Provisioning and Patching, then select Software
Library.
2. From the Software Library Home, from the Actions menu, select Create Folder.
3. In the Create Folder popup, specify a Name and Description for the folder and
select the folder location.
4. From the Actions menu, select Create Entity and then Bare Metal Provisioning
Components.
5. In the Create Entity: Bare Metal Provisioning Components dialog box, select Oracle
Virtual Server Component and click Continue.
6. On the Describe page, enter the Name, Description, and Other Attributes that
describe the entity.
Note: The component name must be unique within the parent folder. Sometimes, even when you enter a unique name, a conflict may be reported; this can happen when an entity with the same name exists in the folder but is not visible to you because you do not have view privilege on it.
Click +Add to attach files that further describe the entity, such as readme, collateral, or licensing files. Ensure that each file is smaller than 2 MB.
In the Notes field, include information related to the entity, such as changes being made to it or the modification history that you want to track.
7. In the Basic Operating System page, select a Time Zone and specify the Root
Password and the OVM Agent Password.
In the Operating System Users List, add the users for the operating system by
specifying the User Name, Password, Primary Group, and Additional Groups.
Specify if you want to Enable Sudo Access for the user.
Click Next.
Click Next.
The Oracle Virtual Server component will be saved in the Software Library with the status Ready.
Element Description
Mount Point Settings Specify entries for the /etc/fstab file. You can specify mount
points on the newly provisioned Linux machine. By default,
mount point settings from the reference Linux machine are
inherited.
NIS Settings Specify entries for the /etc/yp.conf file. You can specify NIS
settings for the newly provisioned Linux machine. By
default, NIS settings from the reference Linux machine are
inherited.
NTP Settings Specify entries for the /etc/ntp.conf file. You can specify
NTP settings for the newly provisioned Linux machine. By
default, NTP settings from the reference Linux machine are
inherited.
Inittab Settings Specify settings for the /etc/inittab file. All processes are started as part of the init operation in the boot process. The init operation decides which processes start when the machine boots or when the runlevel changes.
Firewall Settings Specify firewall settings for the Linux target. Firewall
settings are disabled by default and can be configured. Make
sure that the port used by Management Agent is open for its
communication with the Management Service. By default,
Management Agent uses port 3872 or a port number in the
range 1830-1849, unless configured to use some other port.
Element Description
Post Install Script Specify any set of commands that need to be executed on the
newly provisioned machine. These commands will be
appended to the post section of the kickstart file.
First Boot Script Specify any set of commands that need to be executed on the
newly provisioned machine when it starts up for the first
time.
• Ensure that you set up the bare metal provisioning infrastructure described in
Setting Up Infrastructure for Bare Metal Provisioning.
Note:
For information about downloading the Management Agent RPM kits, access
the following URL:
http://www.oracle.com/technology/software/products/oem/
htdocs/provisioning_agent.html
For instructions to install a Management Agent RPM kit, read the README
file associated with the Management Agent RPM kit you are downloading.
1. From the Enterprise menu, select Provisioning and Patching, then select Bare
Metal Provisioning.
2. In the Server Image section, from the Provision menu, select Operating System.
• MAC Addresses if you want to provision the bare metal systems by specifying MAC addresses. Click Add to specify the list of MAC addresses. In the Add MAC dialog box, specify the MAC addresses. Click OK.
Optionally, click Add from File to add the MAC address from a file. In the
Add from File dialog box, click Browse and select the file from the location
where you have stored it.
• Subnet to specify the subnet for the bare metal provisioning. In the Subnet to
be Provisioned section, specify the Subnet IP, Netmask, Number of Network
Interfaces, and Bootable Network Interface.
a. Stage Server and select the Storage. Select Run Stage Server Pre-requisite
checks to check if the stage server is configured properly.
b. Boot Server and select Run Boot Server Pre-requisite checks to check if the
Boot server is configured properly.
c. DHCP Server and select Run DHCP Server Pre-requisite checks to check if
the DHCP server is configured properly.
5. In the Basic OS Details page, set the Time Zone and OS Root Password. In the
Add Operating System Users list section, click Add. Specify the User Name,
Password, Primary Group, and Additional Groups to add the operating system
users. Enable or Disable sudo access. Click OK.
If you have a reference host from which you want to provision your bare metal
servers, then in the Fetch Properties from Reference Enterprise Manager Host
Target section, select Fetch Properties to select reference host properties. Select
the reference host and the configurations you want to fetch. Specify reference host
credentials. The credentials you specify must have root access or you must have
sudo privileges set up for the target.
You can choose to use preferred credentials, named credentials, or enter your
credentials. If you choose to enter your credentials, specify the user name and
password and select the Run Privilege. Choose to Save Credentials to use these
credentials in future deployments.
Click Next.
7. In the Disk Layout page, specify hard disk profile, partition configuration, RAID
configuration, Logical Volume Group configuration, and Logical Volume
configuration.
To specify the hard disk profile, click Add. Specify the Device Name and
Capacity.
To specify the Partition Configuration, click Add. Specify the Mount Point,
Device Name, File System Type, and Size (MB).
To specify RAID Configuration, click Add. Specify the Mount Point, RAID Level,
Partitions, and File System Type. To configure RAID, ensure that your hard disk
has two partitions at the minimum.
To specify the Logical Volume Group Configuration, click Add. Specify the
Group Name, Partitions, and RAIDs.
To specify the Logical Volume Configuration, click Add. Specify the Mount Point,
Logical Volume Name, Logical Group Name, File System Type, and Size (MB).
If you selected a Disk Layout component in step 4, these settings will be displayed
here. You can edit, remove, or retain these values.
Click Next.
8. In the Network page, the network properties for the MAC Address or Subnet specified during target selection are displayed.
Click Add to configure the network interfaces. In the Input Network Interface
Properties dialog box, specify the Interface name. Select the Configuration Type
as:
Click Next.
10. In the Review page, verify that the details you have selected are correctly
displayed and submit the job for the deployment. If you want to modify the
details, click Back repeatedly to reach the page where you want to make the
changes. Click Save As Plan to save the configuration details you have specified.
Specify a name and description and click OK to save the plan. You can later use
the saved plan to provision bare metal boxes. For more information, see Using Saved Plans for Provisioning Linux Operating Systems on Bare Metal Servers.
Click Submit.
11. The Deployment Procedure is displayed in the Bare Metal Provisioning page with
Status Running. Click on the Status message.
12. In the Procedure Activity page, view the job steps and verify that Status is
Success. If the status is Failed, view the steps that have failed, and fix them and
resubmit the job.
13. After bare metal systems have been provisioned, verify that they appear in the All
Targets page.
Element Description
Agent Registration Password Specify the password to be used to register the agent with Oracle Management
Server.
Element Description
Require TTY Select this option if you want the sudo user to log in to a separate terminal.
Mount Point Settings Specify entries for the /etc/fstab file. You can specify mount points on the
newly provisioned Linux machine. By default, mount point settings from the
reference Linux machine are inherited.
NIS Settings Specify entries for the /etc/yp.conf file. You can specify NIS settings for the
newly provisioned Linux machine. By default, NIS settings from the reference
Linux machine are inherited.
NTP Settings Specify entries for the /etc/ntp.conf file. You can specify NTP settings for the
newly provisioned Linux machine. By default, NTP settings from the reference
Linux machine are inherited.
Element Description
Inittab Settings Specify settings for the /etc/inittab file. All processes are started as part of the init operation in the boot process. The init operation decides which processes start when the machine boots or when the runlevel changes.
Firewall Settings Specify firewall settings for the Linux target. Firewall settings are disabled by
default and can be configured. Make sure that the port used by Management
Agent is open for its communication with the Management Service. By default,
Management Agent uses port 3872 or a port number in the range 1830-1849,
unless configured to use some other port.
Element Description
Advanced Configuration & Power Interface Specify settings for the boot-time kernel parameter (acpi) in the /boot/grub/grub.conf file.
Post Install Script Specify any set of commands that need to be executed on the newly
provisioned machine. These commands will be appended to the post section of
the kickstart file.
First Boot Script Specify any set of commands that need to be executed on the newly
provisioned machine when it starts up for the first time.
Note:
1. From the Enterprise menu, select Provisioning and Patching, then select Bare
Metal Provisioning.
2. In the Server Image section, from the Provision menu, select Oracle VM Server.
• MAC Addresses if you want to provision the bare metal systems by specifying
MAC addresses. Click Add to specify the list of MAC Address. Alternately, to
add the addresses from a file, click Add from File. In the Add from File dialog
box, select the file that contains the addresses and click OK.
• Subnet to specify the subnet for the bare metal provisioning. In the Subnet to
be Provisioned section, specify the Subnet IP, Netmask, Number of Network
Interfaces, and Bootable Network Interface.
The Oracle VM Registration section allows you to select an OVM Manager registered in Enterprise Manager to manage the Oracle VM servers you are provisioning. To do so, click the search icon. From the Select Target dialog box, select a target VM machine, and click Select.
Click Next.
a. Select Stage Server, and a location on the stage server for preparing images to
be installed over the network. Select Run Stage Server Pre-requisite checks
to check if the stage server is configured properly.
b. Select Boot Server, and select Run Boot Server Pre-requisite checks to check
if the Boot server is configured properly.
c. Select DHCP Server, and select Run DHCP Server Pre-requisite checks to check if the DHCP server is configured properly.
5. In the Basic OS Details page, set the Time Zone, OS Root Password, and the
Oracle VM Agent password. In the Operating System Users list section, click Add.
Specify the User Name, Password, Primary Group, and Additional Groups to
add the operating system users. Enable or Disable sudo access. Click OK.
Click Next.
7. In the Disk Layout page, specify Hard Disk Profile, RAID configuration, and
Logical Configuration.
To specify the hard disk profile, click Add. Specify the Device Name and
Capacity.
To specify the Partition Configuration, click Add. Specify the Mount Point,
Device Name, File System Type, and Size (MB).
To specify RAID Configuration, click Add. Specify the Mount Point, RAID Level,
Partitions, and File System Type. To configure RAID, ensure that your hard disk
has two partitions at the minimum.
To specify the Logical Volume Group Configuration, click Add. Specify the
Group Name, Partitions, and RAIDs.
To specify the Logical Volume Configuration, click Add. Specify the Mount Point,
Logical Volume Name, Logical Group Name, File System Type, and Size (MB).
If you selected a Disk Layout component in step 4, these settings will be displayed
here. You can edit, remove, or retain these values.
Click Next.
8. In the Network page, the network properties for the MAC Address or Subnet specified during target selection are displayed.
Click Add to configure the network interfaces. In the Add Network Interface
dialog box, specify the Interface name. Select the Configuration Type as:
10. In the Review page, verify that the details you have selected are correctly
displayed and submit the job for the deployment. If you want to modify the
details, click Back repeatedly to reach the page where you want to make the
changes.
Click Submit.
11. The Deployment Procedure is displayed in the Bare Metal Provisioning page with
Status Running. Click the Confirmation message.
12. In the Procedure Activity page, view the job steps and verify that Status is
Success. If the status is Failed, view the steps that have failed, and fix them and
resubmit the job.
13. After bare metal systems have been provisioned, verify that they appear in the All
Targets page.
Element Description
Mount Point Settings Specify entries for the /etc/fstab file. You can specify
mount points on the newly provisioned Linux machine. By
default, mount point settings from the reference Linux
machine are inherited.
NIS Settings Specify entries for the /etc/yp.conf file. You can specify NIS
settings for the newly provisioned Linux machine. By
default, NIS settings from the reference Linux machine are
inherited.
NTP Settings Specify entries for the /etc/ntp.conf file. You can specify
NTP settings for the newly provisioned Linux machine. By
default, NTP settings from the reference Linux machine are
inherited.
Inittab Settings Specify settings for the /etc/inittab file. All processes are started as part of the init operation in the boot process. The init operation decides which processes start when the machine boots or when the runlevel changes.
Firewall Settings Specify firewall settings for the Linux target. Firewall
settings are disabled by default and can be configured.
Make sure that the port used by Management Agent is open
for its communication with the Management Service. By
default, Management Agent uses port 3872 or a port
number in the range 1830-1849, unless configured to use
some other port.
Element Description
Post Install Script Specify any set of commands that need to be executed on
the newly provisioned machine. These commands will be
appended to the post section of the kickstart file.
First Boot Script Specify any set of commands that need to be executed on
the newly provisioned machine when it starts up for the
first time.
1. From the Enterprise menu, select Provisioning and Patching, then select Bare
Metal Provisioning.
Note:
To edit the saved plans, see Using Saved Plans for Provisioning Linux
Operating Systems on Bare Metal Servers.
1. From the Enterprise menu, select Provisioning and Patching, then select Bare
Metal Provisioning.
2. In the Server Image section, from the Provision menu, select Using Saved Plan.
3. From the Saved Plans dialog box, select any template to pre-populate the
provisioning wizard with the saved values, and click Continue.
5. Follow step 4 through step 10 listed in the section Provisioning Bare Metal Servers.
7. In the Review page, verify all the details you have selected, and click Submit to
submit the job for the deployment.
• Monitoring Hosts
• Administering Hosts
36
Overview of Host Management
A host is a computer where managed databases and other services reside. A host is one of many components or targets that can be monitored and managed by Oracle Enterprise Manager.
Monitoring refers to the process of gathering information and keeping track of
activity, status, performance, and health of targets managed by Cloud Control on your
host. A Management Agent deployed on the host in conjunction with plug-ins
monitors every managed target on the host. Once hosts are discovered and promoted
within Enterprise Manager, you can monitor these hosts.
Administration is the process of managing and maintaining the hosts on your system.
To view all the hosts monitored by Oracle Enterprise Manager, select Hosts on the
Targets menu of the Enterprise Manager Cloud Control.
Note: Refer to the Oracle Enterprise Manager Cloud Control Administrator's Guide for
information on discovering and promoting hosts, discovering unmanaged hosts,
converting unmanaged hosts to managed hosts, and so on.
• Determine whether a particular host is available and whether there are incidents
and problems associated with that host.
• View statistics (metrics) applicable to each host. More than 40 metrics are available, including CPU, memory utilization, file system, and network statistics. See the Oracle Enterprise Manager Framework, Host, and Services Metric Reference Manual for details about each of the host metrics.
• Analyze job activity statistics including problem job executions, suspended job
executions, and running jobs
• Determine whether the statistics reported for CPU utilization, memory utilization,
file system usage, and network utilization are within acceptable levels for different
periods.
• Determine whether there are any compliance violations against this target.
– Ensure that the Management Agent is up when you are removing a target. If the
Management Agent is down when the target is deleted, the target will be
removed from the Management Repository only and not from the Management
Agent. Therefore when the Management Agent is brought back up, the target
will be back again.
– Be aware that the Management Agent cannot be deleted unless it is the only
target remaining on the host.
calculate history data depends on the amount of storage data associated with the host
target.
Before you start monitoring and administering hosts, it is recommended that you set
up credentials and install the needed software. This chapter describes:
• Required Installations
• Setting Up Credentials
Note:
2. Either type the name of the desired host in the Search field or scroll down to the
name of the host in the Name column.
5. The Required Installations page appears listing the software applications you need
to install on your Linux machine before you can perform any of the tasks available
from the Administration menu.
For example, for a Linux host, you must install Yet Another Setup Tool (YAST) and
EM Wrapper Scripts.
to run scripts. For Oracle Linux and RHEL4 (Red Hat), the YAST rpm contains the Enterprise Manager scripts. Therefore, installing the YAST rpm from the following location also installs the Enterprise Manager scripts:
http://oss.oracle.com/projects/yast
For SUSE, you need to download the Enterprise Manager scripts and additional
remote access module from the following location:
http://oss.oracle.com/projects/yast/files/sles9
Before you install YAST, you need to determine the following:
1. Determine the version of Linux on your machine. For example, the uname -a command lists the RHEL (RedHat), Oracle Linux, or SUSE version, and the architecture of the machine, for example, 32-bit versus 64-bit. YAST is supported on RHEL4 and later and on Oracle Linux.
2. Click the 'here' link. The Project Downloads: Yast page appears. Click the link that
coincides with your version of Linux, for example, EL5.
3. On the EL5 page, click the link associated with the bits on your machine, either i386
for 32 bits or x86-64 for 64 bits.
5. Once the tar is downloaded, go to the directory where the tar file is available.
9. To verify that YAST is installed, type /sbin/yast2. This should display the YAST
control center. If it does not, the YAST installation has failed.
10. When you return to the Administration menu, the options should now display the
available Linux administration features.
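A rough sketch of the extract-and-install steps is shown below. The archive name and the install.sh script are assumptions for the example; follow the instructions shipped with the archive you actually download:
# Extract the downloaded YAST archive and run its installer
tar -xzf yast_el5_x86_64.tar.gz
cd yast_el5_x86_64
sh ./install.sh
# Verify the installation; this should open the YAST control center
/sbin/yast2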
For a demonstration of how to install YAST, see the YouTube video Oracle Enterprise
Manager 12c: Install YAST located at http://www.youtube.com/watch?
v=7ZiwmxZVmAw.
• Named Credentials are used for the Management Agent install. Named credentials
explicitly grant you privileges on the host.
• Preferred Credentials
If a target has preferred credentials set, applications that log in to that target will
automatically use the preferred credentials. Using preferred credentials simplifies
access to managed targets.
Default credentials can be set for each target type. Default credentials are used for
any targets that do not have preferred credentials explicitly set.
2. Either type the name of the desired host in the Search field or scroll down to the
name of the host in the Name column.
4. From the Host menu, select Target Setup, then select Monitoring Configuration.
5. The Monitoring Configuration page appears. Details can include, for example, Disk
Activity Metrics Collection Max Rows Upload.
1. On the Enterprise Manager page, locate the Setup menu located at the top right of
the page.
2. From the Setup menu, select Security, then select Monitoring Credentials.
3. On the Monitoring Credentials page, select Host and click Manage Monitoring
Credentials.
4. On the Host Monitoring Credentials page, select a row and click Set Credentials
to edit the credentials.
• Using Groups
• Summary
• Configuration
• Job Activity
• Compliance Summary
• Configuration Details
• Incident List
1. From the Enterprise Manager Console, choose Targets then choose Groups.
3. On the General tab of the Create Group page or the Create Dynamic Group page,
enter the Name of the Group you want to create. If you want to make this a
privilege propagating group, then enable the Privilege Propagation option by
clicking Enabled. If you enable Privilege Propagation for the group, the target
privileges granted on the group to an administrator (or a role) are propagated to
the member targets.
Note: The Full any Target privilege is required to enable privilege propagation
because only targets on which the owner has Full Target privileges can be
members, and any target can potentially match the criteria and a system wide Full
privilege is required. To create a regular dynamic group, the View any Target
system wide privilege is required as the group owner must be able to view any
target that can potentially match the membership criteria.
4. Configure each page, then click OK. You should configure all the pages before
clicking OK. For more information about those steps, see the online help.
After you create the group, you always have immediate access to it from the Groups
page.
You can edit a group to change the targets that comprise the group, or change the
metrics that you want to use to summarize a given target type. To edit a group, follow
these steps:
1. From the Enterprise Manager Console, choose Targets then choose Groups.
2. Click the group Name for the group you want to edit.
3. From the Group menu, click Target Setup, then choose Edit Group.
Alternatively, you can select the group you want to edit from the list of groups on the
Groups page and click Edit from the top of the groups table.
As a host administrator, you must have a grasp of how your host is functioning. Host monitoring enables you to:
• Determine the commands that are taking the most CPU resources and perform the
appropriate action on the target host to reduce contention by using an
administrative tool of your choice.
• View trends in CPU Usage over various time periods including last 24 hours, last
week and last month.
• Monitor all CPUs, that is, not an aggregate view but a view of all the CPUs in the
system.
Note: You can use the Execute Host Command feature in Enterprise Manager to
perform actions on the host.
Storage Details are relevant to Enterprise Manager targets that are associated with one
or more hosts. In particular:
• Summary attributes presented are rolled up for one or multiple associated hosts.
– Explicit membership, or
Note that Writeable NFS is shown in Provisioning Summary to account for the storage
attached to the host over NFS. These layers are managed by IT administrators who are
responsible for provisioning space to applications.
Allocation-related attributes do not change frequently; a change typically results from an administrative action taken by an IT administrator. See the Provisioning Summary section in the About Storage Computation Formula help topic for details on how this information is calculated.
The bar chart summarizes the allocated, unallocated, and overhead space for all entities present in the Disk, Volume, Oracle ASM, and writeable Network File Systems (NFS) portion of the File System layer for the host or the associated hosts of the group.
If a specific layer is not deployed, the corresponding bar is omitted from the chart. The bar chart answers the following questions:
• How much space is available for allocation from the entities present in the given
layer?
• How much space was allocated from the entities present in the given layer?
39.2.5 ASM
Oracle Automatic Storage Management (ASM) is a simple storage management
solution that obviates the need for using volumes layer technologies for Oracle
databases.
39.2.6 Databases
Databases refer to Oracle databases (including Real Application Cluster (RAC)
databases) on top of which other applications may be running. Databases can consume
space from disks, volumes, file systems, and Oracle Automatic Storage Management
(ASM) layers.
39.2.7 Disks
Disks statistics provide the allocated and unallocated storage for all the disks and disk
partitions on a host. All disks are listed including virtual disks from external storage
systems such as EMC Storage Array.
Note: Overhead information for virtual disks is neither instrumented nor presented.
For a disk to be deployed for usage, the disk must first be formatted. After formatting,
the disk can be configured (using vendor-specific configuration utilities) to have one
or more partitions.
A disk or disk partition can be associated (using vendor-specific configuration
utilities) with exactly one entity from one of the upper layers (volumes, Oracle ASM,
databases, and file systems) on the host. When an association exists for a disk or disk
partition to an upper layer entity, it is reported as allocated space in Enterprise
Manager.
tmpfs Solaris
NFS
Network File Systems (NFS) are accessible over the network from the host. A remote
server (NFS Server) performs the I/O to the actual disks. There are appliances that
provide dedicated NFS Server functionality, such as Network Appliance Filer. There
are also host systems, for example, Solaris and Linux, that can act as both NFS Server
and Client.
Writeable NFS refers to the NFS mounted on a host with write privilege.
Suggestions for Monitoring NFS Mounts
The following are suggestions on monitoring NFS mounts.
• Monitor the remote host if NFS exports are coming from another host supported by
Enterprise Manager. The Filesystems metric will monitor the local file systems on
the remote host.
• Monitor the Netapp Filer if NFS exports are coming from a remote Netapp Filer.
Volumes and Qtrees metrics will monitor the exports from the remote Netapp Filer.
• Use the 'File and Directory Monitoring' metric if any of the previous choices do not
meet the need. Set the threshold against the 'File or Directory Size' metric to
monitor specific remote mounts.
39.2.9 Volumes
Various software packages are available in the industry that are either generically
known as Volume Manager technology or Software*RAID (Redundant Arrays of
Independent Disks) technology. These technologies are deployed to improve the RAS
(Reliability, Availability, and Scalability) characteristics of the underlying storage. For
example, Veritas Volume Manager is a popular product used across multiple
operating systems. Such technologies are referred to as Volumes in Enterprise
Manager.
The Volumes option displays the allocated and unallocated storage space for all the
entities present in the Volumes layer, including relevant attributes for the underlying
Volumes layer technology.
Types of Entities
The Volumes layer can have entities of various types present internally. Entity type
shown in Enterprise Manager is based on the terminology as defined by the deployed
Volumes layer technology. For example, a Veritas volume manager defines and
supports the following entity types: Volume, Plex, Sub Disk, VM Disk, VM Spare Disk,
and Diskgroup. Refer to the vendor documentation for more details about the
Volumes technology deployed on your system.
Top-Level Entities
Top-level Volumes layer entities provide storage space to the upper layers for usage. If
a top-level entity does not have an association to an entity from an upper layer, the
top-level entity is unallocated and it is available for further allocation related activity.
For each vendor technology, entities of specific types from their layer can be
associated with entities from the upper layers. File Systems, Databases, and ASM are
examples of upper layers. For example, entities of type 'Volume' in Veritas Volume
Manager are such entities. These entities are referred to as top-level Volumes layer
entities in this documentation.
Bottom-Level Entities
For each vendor technology, entities of specific types from their layer can be
associated with entities from the disk layer. For example, VM Disk and VM Spare Disk
entities in Veritas Volume Manager are such entities. These entities are considered to
be bottom-level Volumes layer entities in this documentation.
Bottom-level Volumes layer entities consume storage space from the disk layer and
provide storage space to the rest of the entities in the Volumes layer. Bottom-level
entities of 'reserve' or 'spare' type are always allocated and no space is available from
them for allocation purposes. Note that spare entities are utilized by the Volumes
technology for handling disk failures and they are not allocated to other entities
present in the Volumes layer by way of administrator operations.
Non-spare bottom-level entities can have an association to an intermediate or top-level
entity configured using respective vendor administration utilities. If no association
exists for a non-spare bottom-level entity, then it is unallocated. If one or more
associations exist for the non-spare bottom-level entity, then the space consumed
through the existing associations is allocated. It is possible that some space could be
left in the bottom-level entity even if it has some associations defined for it.
Storage space in non-spare bottom-level entities not associated with intermediate or
top-level entities is available for allocation and it is accounted as unallocated space in
the bottom-level entity.
Intermediate Entities
Non top-level and bottom-level entities are considered to be intermediate level entities
of the Volumes layer. For example, Volume (layered-volume case), Plex and Sub Disk
entities in Veritas Volume Manager are such entities.
If an intermediate entity has association to another intermediate or top-level entity, the
storage space consumed through the association is allocated. Space present in the
intermediate entity that is not consumed through an association is unallocated.
The following vendor products are instrumented:
Platform Product
Solaris Solaris Volume Manager
ASM Database
As you monitor your host, you will find that the host needs to be fine-tuned to
perform at optimum levels. This chapter explains how to administer your host to reap
the best performance. In particular, this chapter explains:
• Administration Tasks
• Miscellaneous Tasks
2. Either type the name of the desired host in the Search field or scroll down to the
name of the host in the Name column.
4. From the Host menu, select Monitoring, then Metric and Collection Settings.
Follow the instructions for each configuration explanation.
1. On the Metric and Collection Settings page, select All metrics in the View menu.
Locate the File and Directory Monitoring metrics. The metrics are:
2. After reviewing each metric, decide which metrics need to change. Click the pencil
icon to navigate to the corresponding Edit Advanced Settings page.
3. You can specify new criteria for monitoring by clicking Add on this page. Refer to
Notes about Specifying Monitored Objects for details on configuring the criteria.
4. You can edit or remove existing criteria by selecting the row from the Monitored
Objects table and clicking Edit or Remove.
1. Search for Log File Pattern Matched Line Count in the table displayed for Metrics
with Thresholds filter. Click the pencil icon in this row to navigate to the Edit
Advanced Settings: Log File Pattern Matched Line Count page.
2. You can edit or remove existing criteria by selecting the row from the Monitored
Objects table and clicking Edit or Remove. Refer to Notes about Specifying
Monitored Objects for details on configuring the criteria.
2. For security purposes, you may want to permanently disable monitoring of sensitive files by Enterprise Manager by adding the names of the sensitive files to the $ORACLE_HOME/sysman/admin/lfm_efiles file. The $ORACLE_HOME/sysman/admin/lfm_efiles.template file is a template for creating the $ORACLE_HOME/sysman/admin/lfm_efiles file.
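For example, the exclusion file could be created from the supplied template and the sensitive file names appended to it along the following lines; the /etc/shadow entry is only an illustration, and the one-name-per-line format is an assumption based on the template:
# Create the exclusion file from the template
cp $ORACLE_HOME/sysman/admin/lfm_efiles.template $ORACLE_HOME/sysman/admin/lfm_efiles
# Add the sensitive file names, for example:
echo "/etc/shadow" >> $ORACLE_HOME/sysman/admin/lfm_efiles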
Column Description
Log File Name In this column, specify the absolute path of the log file to be monitored. SQL wildcard characters can be used to specify multiple file names.
Examples:
(a) /orahome/log/f1.log: this value monitors a single log file.
(b) /orahome/log/%.log: this value monitors all files with the suffix .log in the /orahome/log directory.
Match Pattern in Perl In this column, specify the pattern to be matched. Perl expressions are supported.
This column specifies the pattern that should be monitored in the log file. During each scan, the file is scanned for occurrences of the specified match pattern (case is ignored).
Example:
(a) Pattern Value = ERROR: this pattern is true for any line containing error.
(b) Pattern Value = .*fan.*error.*: this pattern is true for lines containing both fan and error.
Ignore Pattern in Perl This column specifies the ignore pattern. In the given log file, a line containing the match pattern is ignored if the line also contains the ignore pattern.
In this column, specify any pattern that should be ignored. Perl expressions are supported.
If nothing needs to be ignored, specify %.
1. On the Metric and Collection Settings page, select All metrics in the View menu.
Locate the Program Resource Utilization metrics. The metrics are:
2. After reviewing each metric, decide which metrics need to change. Click the pencil
icon to navigate to the corresponding Edit Advanced Settings page.
3. You can edit or remove existing criteria by selecting the row from the Monitored
Objects table and clicking Edit or Remove. Refer to Notes about Specifying
Monitored Objects for details on configuring the criteria.
Column Description
Program Name Program name is the name of the command being executed on the host operating system. On UNIX systems, the ps command displays the name of each process being executed. Either the exact name or a name with SQL wildcards (% and _) can be specified for the program name. The SQL wildcard % matches zero or more characters; the SQL wildcard _ matches exactly one character.
Owner Owner is the name of the user running the given process on the host operating system. On UNIX systems, the ps command displays the owner of each process being executed. Either the exact name or a name with SQL wildcards (% and _) can be specified for the owner. The SQL wildcard % matches zero or more characters; the SQL wildcard _ matches exactly one character.
• Services
View statistics for individual services and edit their properties.
Note: This feature is only available on hosts running Oracle Linux, Red Hat Linux
and SUSE Linux Operating Systems (x86 and x64 architectures only).
• Network Cards
Manage routing configuration, view configuration statistics, and view network file
system clients.
• NFS Client
2. Either type the name of the desired host in the Search field or scroll down to the
name of the host in the Name column.
4. From the Host menu, select Administration, then the entity on which you want to
make changes.
Note: To see a video showing the navigation in the Administration menu option, see
http://www.youtube.com/watch?v=ROfqR2GhQ_E.
40.2.1 Services
The Services page provides a list of all the services and their statistics for this host.
This page enables you to:
• Access the page that allows you to edit the properties of individual services.
• View the current system run level. When no run level is defined for the service, the
service uses the current system run level.
• Determine whether the service is enabled and view the service run levels:
6 System reboot
Note: Be aware that you must restart the system for the run level to take effect.
• Click Change to change the host credentials. You must have SUDOER credentials
to complete the Default System Run Level operation. If you do not have SUDOER
credentials, this button provides the opportunity to change credentials.
• Click Cancel to abandon the changes and return to the Host Administration page.
• Click Save to keep the changes made to the default system run level and return to
the Host Administration page.
• You need to install the YAST toolkit to use the Default System Run Level feature.
See Required Installations.
The run levels are not immediately affected and hence the default run level and
current run level may be different if the system has not been rebooted.
Note:
The default run level is a powerful tool. You should only change the default
system run level if you have sufficient knowledge and experience. Changing
the default system run level inappropriately could result in improper system
functionality after rebooting.
• View the device name and IP address of the network card used by Enterprise
Manager
• View the DNS settings and click Edit to change the global DNS settings for the
listed domains.
• Opt to use either of the following setup methods: Static Address Setup or
Automatic Address Setup using DHCP.
• Add a lookup table entry for a host by accessing the Add Host Configuration page.
Note the following:
• Click Change to edit the credentials used for this page. You do not need to reboot
for changes to take effect.
• Access the page that allows you to edit the properties of individual clients
• Access the page that allows you to add NFS clients to the host
• Click Done to exit the page without making any changes and returning to the
previous page.
• Click Change to edit the host credentials used for this page.
• Mount options
Options
ro
rsize=32768
wsize=32768
acregmin=1200
acregmax=1200
acdirmin=1200
acdirmax=1200
hard
intr
tcp
lock
rw
nosuid
nodev
Note the following:
• Click Cancel to ignore all changes and return to the NFS Client page.
• Click OK to accept all changes made. All changes are implemented immediately.
• Check Persist Over Reboot to ensure mounts are available between reboots
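Purely as an illustration of how the mount options listed above might appear when a mount is persisted across reboots, an NFS entry in /etc/fstab could look like the following; the server, export path, and mount point are assumptions for the example:
# Example /etc/fstab entry for a persistent NFS mount
nfsserver.example.com:/export/data  /mnt/data  nfs  rw,hard,intr,tcp,nosuid,nodev,rsize=32768,wsize=32768,acregmin=1200,acregmax=1200,acdirmin=1200,acdirmax=1200  0 0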
Password In conjunction with the username, a set of characters that allows access to this host.
The password must be no shorter than 5 characters and no longer than
72 characters. If you have changed the user's password, ensure you
inform the user of this change.
Confirm Password The password typed in this field must exactly match the password typed in the Password field.
If the confirm password does not match the password typed in the
Password field, either retype the password or define a new password
and confirm it.
Home Directory Ensure the home directory begins with a slash (/)
Login Shell Select the Login Shell from the list of available shells from the drop-
down list
Default Group Select the default group from the drop-down list of available groups
Group Members Groups that belong to the group. Group shares the permissions given
to the subordinate groups.
Group Members Groups that belong to this group. Group shares the permissions given
to the subordinate groups. Group names are separated by a comma.
Do not include any spaces: for example, adm,daemon,root
• Host Command
1. From the Setup menu located at the top-right of the page, select Security then
select Named Credentials.
3. On the Create Credential page, provide host credentials with root privileges.
5. Provide the details for Sudo or PowerBroker and the system performs the
administrative task.
• Provide host credentials with root privileges. Provide information for Sudo (runas)
support or Power Broker (profile) support.
• Provide the details and the system performs the administrative task.
From the Targets menu, select Hosts. On the Hosts page, highlight the row
containing the name of the host in which you are interested.
3. Provide host target credentials. Provide information for Sudo (runas) support or
PowerBroker (profile) support.
Note: On the target host, the /etc/sudoers file needs to be present with the target
user information inserted.
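As a rough illustration, an /etc/sudoers entry for a hypothetical target user named emuser that permits running commands as root might look like the following; adjust the entry to your site's security policy:
# /etc/sudoers entry for the target user (hypothetical user name)
emuser   ALL=(root)   ALL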
4. Type the specific command to be run on the system and view the command output.
• With the appropriate privileges, view and edit any text file present on the remote
host.
• Save a file that has been edited on the remote host by clicking Save.
• Save the contents to a different file on the remote host by clicking Save a Copy.
• Change to another user account or use another set of Host Preferred Credentials by
clicking Change next to User.
• After you have opened a file for editing, select a new file for editing by clicking
Change next to File Name.
• Revert to text at the time of the last successful save operation by clicking Revert.
Accessing Remote File Editor
To navigate to the Remote File Editor, perform the following steps:
1. From the Targets menu, select Hosts. On the Hosts page, click the name of the host
in which you are interested.
2. On the resulting Host home page, select Remote File Editor from the Host menu
(located at the top-left of the page).
3. If the preferred credentials are not set for the host target, the Host Credentials page
appears. Three options are available: Preferred, Named, and New.
4. Once credentials are set, you can perform the following on the Remote File Editor
page:
Notes
Note the following:
• The file must be an ASCII text file and cannot be larger than 100 KB.
• If the credential check is successful, the file exists, and you have read privilege on the file, the file content is loaded for editing.
• If you do not have write privilege, you will not be able to save the file. Click Save a
Copy and save the file to a directory on which you have write privilege.
• If the file does not exist but you have write privilege on the directory, a new empty file is opened for text input.
2. Either type the name of the desired host in the Search field or scroll down to the
name of the host in the Name column.
3. Click the name of the host. Enterprise Manager displays the Home page for the
host.
5. On the Host Credentials page, type the user name and password for the host.
3. Provide host target credentials. Provide information for Sudo (runas) support or
Power Broker (profile) support.
4. Type a specific command to be run on the system and view the command output.
Note: If the credentials (with runas Sudo/PowerBroker) are not set, then you will be
prompted to create credentials. To create credentials, select Security from the Setup
menu, then select either Named Credentials or Preferred Credentials.
• Refine the command, reexecute the command, and view the execution results, all
on the same page.
• Either type operating system commands, load the commands from a script on the
browser machine or on the host, or load host commands from a job defined in the
job library.
• Select hosts individually or through the use of a group. You can also switch to
Single Target Mode where only one host target is acted upon.
• Cancel execution of the command when the Processing: Executing Host Command page appears.
• Execution history reflects the host commands that have been executed in the
current Execute Host Command session, as well as any host commands executed in
previous sessions that were executed with the 'Keep the execution history for this
command' option chosen.
• Clicking Add launches the Target Selection page with the target type list limited to
host targets and any groups that contain host targets.
• When saving the OS script or execution results, the saved file is located on the
browser machine.
• When switching from multiple to single target mode, the first host in the targets
table will be used.
• To use the Database Instance target type, launch Execute Host Command from a
group containing one or more host targets and switch the Target Type.
Examples:
To execute a Perl script, passing in the target name as an argument, enter the following
in the Command field: %perlbin%/perl myPerlScript %TargetName%
To execute a program in the directory identified by the TEMP environment variable on
a Windows host: %%TEMP%%/myProgram
• Choose the target type. You can choose hosts directly, or choose hosts by way of
database instance targets.
• Refine the command, reexecute the command, and view the execution results, all
without leaving the page.
• Either type operating system commands, load the commands from a script on the
browser machine or on the host, or load host commands from a job defined in the
job library.
• Select hosts individually or through the use of a group. You can also switch to
Single Target Mode where only one host target is acted upon.
• If the current target type is Host, clicking Add launches the Target Selection page
with the target type list limited to host targets and any groups that contain host
targets.
• If the current target type is Database Instance, clicking Add launches the Target
Selection page with the target type list limited to database targets and any groups
that contain database targets.
• Execution history reflects the host commands that have been executed in the
current Execute Host Command session, as well as any host commands executed in
previous sessions that were executed with the 'Keep the execution history for this
command' option chosen.
• Changing the target type reinitializes the host credentials, command, OS script, and
targets table.
• When saving the OS script or execution results, the saved file is located on the
browser machine.
• When switching from multiple to single target mode, the first host in the targets
table will be used.
• Refine the command, reexecute the command, and view the execution results, all
without leaving the page
• Switch to Multiple Target Mode where multiple host targets are acted upon
• Change credentials
Notes
Note the following:
• Context will be preserved if you switch to Multiple Target Mode, including the
host command, host, and credentials.
• Click Browse to launch the browser's file selector window to locate and choose a
script file.
• Click the Host field's search icon to choose which host to search, then click the Host
File field's search icon to locate and choose a script file on that host.
• Click the icon in the 'Load' column of any row to return to the Execute Host
Command page loading the complete execution context of the host command in
that row. The complete execution context includes the host command, OS script,
targets, credentials, and results.
• Click the icon in the 'Load Command And OS Script Only' column of any row to
return to the Execute Host Command page loading only the host command and the
OS script in that row. Any targets, credentials, and most recent results will remain.
• Click the icon in the 'Remove' column of any row to remove the host command in
that row, along with all its execution context, from the Execution History. This will
delete the job that was used to execute the host command.
• Cut the text from the listing and paste it into another script.
• Study the results uninterrupted and separate from all the other executions.
Note: The extent of the editing features is dependent upon the browser displaying the
results.
To configure the collections on systems with WBEM compliant CIM Object Managers,
use the following steps:
2. From the Host menu, select Target Setup, then Monitoring Configuration.
3. Verify that the installation was successful by performing the following steps:
4. Verify the Dell OMSA software is functioning correctly by verifying that the Dell
OpenManage Server Administrator website is up and running.
• Log in using an operating system account. Check that you are able to
successfully log in and navigate in the website.
5. If the SNMP Community String for the SNMP daemon running on the Linux host
is not public, set the SNMP Community String property in Enterprise Manager
using the following steps:
c. From the Host menu, select Target Setup, then Monitoring Configuration.
d. Set the SNMP Community String property to the correct value on this page
7. Note: The following step is not required if the Dell OMSA software was functional
prior to the previous startup of the Management Agent.
Verify that hardware monitoring is working correctly using the following steps:
d. Verify that the following metrics are present on this page: Fans, Memory
Devices, PCI Devices, Power Supplies, Processors, Remote Access Card,
System BIOS, and Temperature
e. You can navigate to the metric data page by clicking on one of the metrics
listed in the previous step and view the data that Enterprise Manager is able
to fetch.
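If the hardware metrics do not appear, a quick check outside Enterprise Manager is to query the SNMP daemon on the host directly. The following is a minimal sketch, assuming the net-snmp command-line utilities are installed and <community> is a placeholder for the community string configured for the daemon:
snmpwalk -v 2c -c <community> localhost system
If the daemon responds with the system MIB values, the SNMP layer is working and any remaining issue is more likely with the Dell OMSA installation or the monitoring configuration in Enterprise Manager.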
Patching is one of the important phases of the product lifecycle that enables you to
keep your software product updated with bug fixes. Oracle releases several types of
patches periodically to help you maintain your product. Patching is complex and
involves downtime. You can use several approaches to identify and apply the patches.
This chapter describes how Oracle Enterprise Manager Cloud Control's (Cloud
Control) new patch management solution addresses these patch management
challenges. In particular, this chapter covers the following:
• Applying Patches
Custom Scripts: User-created scripts developed around OPatch, SQL*Plus, and so on. Challenges with this approach:
• Difficult to identify the patches to be rolled out
• Can be used only on a single server
• Requires significant maintenance overhead to meet the new version and configuration needs
• Integrated patching workflow with My Oracle Support (MOS), so you can view recommendations, search for patches, and roll out patches, all from the same user interface.
Note:
However, Cloud Control does not upload any data to MOS. It only uses MOS to download the latest updates.
• Flexible patching options such as rolling and parallel, both in offline and online mode.
Figure 41-1 shows how you can access the Patches and Updates screen from within the Cloud Control console.
• Patches (One-Off)
– Interim Patches, which contain a single bug fix or a collection of bug fixes provided as required. They can also include customer-specific security bug fixes.
– Patch Set Updates (PSU), which contain a collection of high-impact, low-risk, and proven fixes for a specific product or component.
• Patch Sets
Note:
– Patch Sets are available for Oracle Database 10g Release 2 (10.2.0.x) and
Oracle Database 11g Release 1 (11.1.0.x). You can apply these patches using
a patch plan. However, Patch Sets for Oracle Database 11g Release 2
(11.2.0.x) are complete installs (full releases), and you must use the
Database Upgrade feature to apply them. The Database Upgrade feature
follows the out-of-place patching approach.
– Using Oracle Enterprise Manager Cloud Control to apply Patch Sets to any
Oracle Fusion Middleware software (including Oracle WebLogic Server,
Oracle SOA Infrastructure, and Oracle Service Bus) and Oracle Fusion
Application software is not supported. Similarly, using Oracle Enterprise
Manager Cloud Control to apply Bundle Patches to Oracle Identity
Management software is not supported.
– Patch Sets are supported for only one specific version of Oracle Database.
Note:
• You cannot add both patch sets and patches to a patch plan. Instead, you
can have one patch plan for patch sets, and another patch plan for patches.
• You cannot add patches that require different database startup modes
(upgrade or normal) in a single patch plan.
A patch can be added to a target in a plan only if the patch has the same release and
platform as the target to which it is being added. You will receive a warning if the
product for the patch being added is different from the product associated with the
target to which the patch is being added. The warning does not prevent you from
adding the patch to the plan.
You can patch any target by using a patch plan. The plan also validates Oracle
Database, Fusion Middleware, and Cloud Control patches against your environment
to check for conflicts with installed patches.
The patch plan, depending on the patches you added to it, automatically selects an
appropriate deployment procedure to be used for applying the patches. For
information on the patching deployment procedures used for various database target
types, see Supported Targets, Releases, and Deployment Procedures for Patching.
Note:
• Patch plans are currently not available for hardware or operating system
patching.
• Any administrator or role that has view privileges can access a patch plan.
For information on roles and privileges required for patch plans and patch
templates, see Creating Administrators with the Required Roles for
Patching.
Note:
– You cannot add both patch sets and patches to a patch plan. Instead, you
can have one patch plan for patch sets, and another patch plan for patches.
– You cannot add patches that require different database startup modes
(upgrade or normal) in a single patch plan.
• It contains targets that are supported for patching, similarly configured, and are of
the same product type, platform, and version (homogeneous targets).
Error Plans
If your patch plan consists of Oracle Management Agent (Management Agent) targets
and patches, and the analysis fails on some Management Agents, then the patch plan
is split into two plans. The Management Agents and their associated patches on which
the analysis was successful are retained in the original plan. The failed targets and
their associated patches are moved to a new error plan. The deployment options are
also copied into the error plan. The error plan is accessible from the Patches & Updates
page.
checks using the patch information from Oracle, the inventory of patches on your
system (gathered by the configuration manager), and the information from candidate
patches.
– Conflict between the patches added to the patch plan and the patches already
present in the Oracle home
Note:
For Oracle WebLogic targets, instead of the OPatch and OUI check, you must
perform the SmartUpdate version check.
In addition to checking for conflicts, it enables you to check for patch conflicts between
the patches listed in the plan.
Screen 5: Review & Deploy
Enables you to review the details you have provided for the patch plan, and then
deploy the plan. The page also enables you to review all the impacted targets so that you understand which targets are affected by the action you are taking.
patch plans out of the templates, add another set of targets, and roll out the patches to
the production environment in a recursive manner.
This way, you reduce the time and effort required to create new patch plans, and as a
Patch Designer, you expose only the successfully analyzed and approved plans to Patch
Operators.
Note:
An administrator or role that has the privileges to create a patch template and
view a patch plan, which is being used to create a template, can create a patch
template.
When you view or modify a patch plan template, the Edit Template Wizard opens.
This wizard has the following screens:
Screen 1: Plan Information
Enables you to view general information about the template, and modify the
description and the deployment date.
Screen 2: Patches
Enables you to view a list of the patches that are part of the patch plan template. The patches listed
here are the patches copied from the source patch plan that you selected for creating
the template.
Screen 3: Deployment Options
Enables you to view the deployment options configured in the patch plan template.
exposed in the Deployment Procedure Manager page, you cannot select and run them
independently; you must always create a patch plan to run them.
Note:
1. Ensure that you have applied the Enterprise Manager for My Oracle Support 13.2.0.0.0 plug-in on the OMS. This must be applied to all of the OMS instances in a multi-OMS environment.
2. Ensure that you have applied the Enterprise Manager for Oracle Fusion Middleware 13.2.0.0.0 plug-in on the OMS and the Management Agent monitoring the Oracle WebLogic Server targets.
Table 41-2 Supported Targets and Releases for Patching Oracle Database
(Columns: Target Type and Release | Patching Mode | Method | Supported Platforms | Deployment Procedure. The product for all rows is Oracle Database. Numbers in parentheses refer to the notes that follow the table.)
• Oracle Database (Standalone) 10g to 12c (4), Multitenant Database (Container and Pluggable Databases) | In-Place | N/A | All Platforms | Patch Oracle Database
• Oracle Database (Standalone) 10g to 12c (4), Multitenant Database (Container and Pluggable Databases) | Out-of-Place | N/A | All Platforms | Clone and Patch Oracle Database
• Oracle RAC One Node Database 10g to 12c (4) | In-Place | Rolling | All Platforms except Microsoft Windows | Patch Oracle RAC Database - Rolling
• Oracle RAC One Node Database 10g to 12c (4) | In-Place | Parallel | All Platforms except Microsoft Windows | Patch Oracle RAC Database - Parallel
• Oracle RAC Database 10g to 11.2.0.1 | In-Place | Rolling | All Platforms except Microsoft Windows | Patch Oracle RAC Database - Rolling
• Oracle RAC Database 10g to 11.2.0.1 | In-Place | Parallel | All Platforms except Microsoft Windows | Patch Oracle RAC Database - Parallel
• Oracle RAC Database 11.2.0.2 to 12.1.0.2 (4) | In-Place | Rolling / Parallel | All Platforms except Microsoft Windows | Dynamic Deployment Procedures (1)
• Oracle Restart 10g to 11.2.0.4 | In-Place | N/A | All Platforms | Patch Oracle Restart
Note:
The following symbols are used in Table 41-2:
(1) To override and use the following static deployment procedures for In-Place patching mode, set (update/insert) use_static_dp to true in the mgmt_patching_properties table:
• Patch Oracle RAC Database - Parallel
• Patch Oracle RAC Database - Rolling
(2) To override and use the following static deployment procedures for In-Place patching mode, set (update/insert) use_static_dp to true in the mgmt_patching_properties table:
• Patch Oracle Clusterware - Parallel
• Patch Oracle Clusterware - Rolling
(3) To override and use the following static deployment procedures for In-Place patching mode, set (update/insert) use_static_dp to true in the mgmt_patching_properties table:
• Patch Oracle Clusterware (12.1.0.1.0 onwards) - Rolling
• Patch Oracle Clusterware (12.1.0.1.0 onwards) - Parallel
(4) Multitenant Database (Container and Pluggable Databases) is supported.
Table 41-3 Supported Targets and Releases for Patching Other Target Types
(Columns: Product | Target Type and Release | Supported Platforms | Deployment Procedure)
• Oracle Service Oriented Architecture | Oracle SOA Infrastructure 11g Release 1 (11.1.1.1.0 - 11.1.1.7.0), 12c Release 1 (12.1.3.0.0), and 12c Release 2 (12.2.1.0.0) | All Platforms | Patch Oracle SOA Infrastructure In Parallel Mode; Patch Oracle SOA Infrastructure In Rolling Mode
• Oracle Siebel (only Siebel Server and Siebel Gateway Server targets are supported) | Oracle Siebel 8.1.1.9, 8.1.1.10, 8.1.1.11, and 8.2.x | All Platforms | Patching Siebel Targets
• Oracle Fusion Applications | Oracle Fusion Applications 11g Release 1 (11.1.1.5.1) and (11.1.2.0.0) (RUP1) | All Platforms | Patch Oracle Fusion Applications
• Oracle Database | Oracle Automated Storage Management (Oracle ASM) 10g Release 1 to 11g Release 2 | All Platforms | Patch Standalone Oracle ASM
• Oracle Database | Oracle Software Only Patching (Grid Home / DB Home) | All Platforms | Patch Oracle Home
Note:
• You can also patch primary and standby databases configured with Oracle
Data Guard. For information on how to patch these targets, see Patching
Oracle Data Guard Targets.
Offline Mode
Offline mode is useful when Cloud Control cannot connect to My Oracle Support.
Using this mode, you can search patches that were manually uploaded to the Software
Library, and add them to your patch plan. In offline mode, you cannot do the
following:
Note:
• By default, the patching mode is set to online. If you want to switch the
mode to offline, then from the Setup menu, select Provisioning and
Patching, then select Offline Patching. For connection, select Offline.
• Ensure that the necessary OPatch version is available before you apply patches in offline mode. You can do this either by manually uploading the relevant version of OPatch to the Software Library or by running the out-of-the-box OPatch update job while in online mode.
Out-of-Place Mode
Out-of-place patching is a mechanism that clones the existing database home and patches the cloned home instead of the original home. Once the cloned
home is patched, you can migrate the database instances to run from the cloned home,
which ensures minimal downtime.
Note:
• If the cloned home contains certain additional patches (that are not added
to the patch plan) that include SQL statements, the SQL statements for
these patches are not executed automatically when the database instance is
migrated to the cloned home. For these patches, you must execute the SQL
statements manually.
While migrating the database instances, you can choose to migrate all the instances, or
only some of them, depending on the downtime you can afford to have in your data
center. If you choose to migrate only a few instances in one session, then ensure that
you migrate the rest in the next session. This way, you can control the downtime in
your data center as you divide the migration activity. This is particularly useful when
you have multiple database instances running out of an Oracle home.
Note:
You select the database instances that you want to migrate, while creating the
patch plan using the Create Plan Wizard. The selected database instances are
migrated when the patch plan is in the Deploy state. If you selected only a few
database instances for migration, create another patch plan to migrate the
remaining instances. On the Deployment Options page, select the existing
home, then select the remaining database instances that need to be migrated.
In case of a system patch such as a Grid Infrastructure patchset update, edit
and provide the candidate Oracle Home location. This location is validated
during analysis.
Note:
Figure 41-5 illustrates how multiple database instances running from an Oracle home
get patched in out-of-place patching mode.
Note:
Alternatively, to obtain a new Oracle home that has the required patches, you
can provision it directly from a software image (that has the required patches)
using provisioning deployment procedures, and then use a patch plan for the
analysis and post patching steps. For information on how to do this, see
Analyzing, Preparing, and Deploying Patch Plans.
Rolling Mode refers to the patching methodology where the nodes of the cluster are
patched individually, that is, one by one. For example, if you are patching a
clusterware target that has five nodes, then the first node is shut down, patched, and
restarted, and then the process is rolled over to the next node until all the nodes are
patched successfully.
Note:
The ReadMe of the patch clearly states whether or not you can use the Rolling
Mode to apply your patches. Therefore, use this mode only if it is stated in the
ReadMe. This is validated during analysis.
Figure 41-6 illustrates how a two-node Oracle RAC target gets patched when rolling
mode is used.
Parallel Mode
Parallel Mode refers to the patching methodology where all the nodes are patched at once, collectively. In this methodology, all the nodes are shut down and the patch is applied on all of them at the same time.
Figure 41-7 illustrates how a two-node Oracle RAC target gets patched when parallel
mode is used.
Step 2: Identify the Patches
View the recommendations made by Oracle on the patches to be applied, and identify the ones you want to apply. Access community information (from numerous customers).
See: Identifying the Patches to Be Applied
Step 3: Apply Patches
Create patch plans with patches and associated targets, perform prerequisite checks, analyze the patches for conflicts and resolve the issues, and then save the successfully analyzed plan as a patch template. Then, create a new patch plan out of the patch template and use that to deploy the patches in your environment.
See: Applying Patches
Note:
• Setting Up the Infrastructure for Patching in Offline Mode (Not Connected to MOS)
• Analyzing the Environment and Identifying Whether Your Targets Can Be Patched
Note:
If the targets that you want to patch are deployed on Microsoft Windows
hosts, you must ensure that Cygwin is installed on the target hosts, before
patching the targets.
For information on how to install Cygwin on a host, see Enterprise Manager
Cloud Control Basic Installation Guide.
Table 41-4 Roles and Privileges for Using Patch Plans and Patch Templates
41.2.3 Setting Up the Infrastructure for Patching in Online Mode (Connected to MOS)
If you choose to patch your targets when Cloud Control is online, that is, when it is
connected to MOS, then meet the following setup requirements:
Note:
• This is the default mode for patching in Cloud Control. Therefore, you do
not have to manually set this up the first time. However, if you have set it
to Offline mode for a particular reason, and if you want to reset it to Online
mode, then follow the steps outlined in this section.
• Cloud Control does not upload any data to MOS. It only uses MOS to
download the latest updates.
To patch the targets in Online mode, you must set the connection setting in Cloud
Control to Online mode. To do so, log in as a user that has the Patch Administrator
role, then follow these steps:
1. From the Setup menu, select Provisioning and Patching, then select Offline
Patching.
Note:
To register the proxy details for My Oracle Support (MOS), follow these steps:
1. From the Setup menu, select Proxy Settings, then select My Oracle Support.
2. If you want the OMS to connect to MOS directly, without using a proxy server,
follow these steps:
a. Select No Proxy.
c. If the connection is successful, click Apply to save the proxy settings to the
repository.
3. If you want the OMS to connect to MOS using a proxy server, follow these steps:
b. Specify the proxy server host name for HTTPS and an appropriate port value
for Port.
c. If the specified proxy server has been configured using a security realm, login
credentials, or both, select Password/Advanced Setup, then provide values
for Realm, User Name, and Password.
d. Click Test to test if the OMS can connect to MOS using the specified proxy
server.
e. If the connection is successful, click Apply to save the proxy settings to the
repository.
Note:
• If you are using a proxy server in your setup, ensure that it allows
connectivity to aru-akam.oracle.com, ccr.oracle.com, login.oracle.com,
support.oracle.com, and updates.oracle.com.
NTLM (NT LAN Manager) based Microsoft proxy servers are not
supported. If you are using an NTLM based Microsoft proxy server, to
enable access to the above sites, add the above URLs to the
Unauthenticated Sites Properties of the proxy server.
• The MOS proxy server details specified on the MOS Proxy Settings page
apply to all OMSes in a multi-OMS environment.
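As a quick sanity check, you can verify from the OMS host that the required sites are reachable through the proxy. The following is a minimal sketch using curl, where proxyhost and port are placeholders for your proxy details:
curl -I --proxy http://proxyhost:port https://support.oracle.com
curl -I --proxy http://proxyhost:port https://updates.oracle.com
An HTTP response (even a redirect or a login page) indicates that the proxy allows connectivity to the site.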
41.2.4 Setting Up the Infrastructure for Patching in Offline Mode (Not Connected to
MOS)
If you choose to patch your targets when Cloud Control is offline, that is, when it is
not connected to My Oracle Support, then meet the following setup requirements:
• Downloading Enterprise Manager Catalog Zip File From Another Host With
Internet Connectivity
• Uploading Enterprise Manager Catalog Zip File from your Host With No Internet
Connectivity
1. From the Setup menu, select Provisioning and Patching, then select Offline
Patching.
Note:
Ensure that the necessary OPatch version is available before you apply patches in offline mode. You can do this either by manually uploading the relevant version of OPatch to the Software Library or by running the out-of-the-box OPatch update job while in online mode.
Note:
Once Cloud Control is running in offline mode, you must download the latest
Enterprise Manager catalog file from a host that has internet connectivity,
transfer the catalog file to your local host, then upload the catalog file to the
Management Repository. For information on how to do this, see Downloading
Enterprise Manager Catalog Zip File From Another Host With Internet
Connectivity and Uploading Enterprise Manager Catalog Zip File from your
Host With No Internet Connectivity .
41.2.4.2 Downloading Enterprise Manager Catalog Zip File From Another Host With
Internet Connectivity
In Offline mode, you must use another host that has an Internet connection, and
manually download the em_catalog.zip file from My Oracle Support. Use the
following URL to download the latest catalog file:
https://updates.oracle.com/download/em_catalog.zip
Information about the targets affected by the latest patches, the patches that you have
to download manually, and so on is available in the catalog zip file.
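For reference, on the host with internet connectivity you can download the file with a browser or with a command-line tool. The following is a minimal sketch using curl; note that updates.oracle.com typically requires valid My Oracle Support credentials, so a browser download may be simpler:
curl -L -o em_catalog.zip https://updates.oracle.com/download/em_catalog.zip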
41.2.4.3 Uploading Enterprise Manager Catalog Zip File from your Host With No
Internet Connectivity
After downloading the catalog zip file as described in the preceding section, ensure
that you transfer the latest downloaded em_catalog.zip file back to your local host
using FTP or any other file transfer methodology. Then, from your local host, you can
log in to Cloud Control to upload the zip file. To do so, follow these steps:
1. From the Setup menu, select Provisioning and Patching, then select Offline
Patching.
2. On the Patches & Updates page, in the Patch Search section, enter the patch
number you want to search for as shown in Figure 41-8, then click Search.
3. On the Patch Simple Search Results page, select the row that has the patch that
you want to download. Click Download. In the File Download dialog, click the
name of the patch zip file to download it to your local host. Click Download
Patch Metadata, and then in the Download Patch Metadata dialog, click
Download to download the patch metadata file. This step is described in
Figure 41-9.
Note:
Oracle recommends that you transfer the patch ZIP file and the metadata XML
file to the Management Agent host, where the Management Agent could be an
agent on an OMS machine, or on the target host. Upload these files from the
Management Agent host to Software Library.
1. From the Enterprise menu, select Provisioning and Patching, then select Saved
Patches.
2. Click Upload.
3. For Patch Zip File, specify the location of the patch zip file you downloaded onto
your local host. If the patch zip file you downloaded contains the
PatchSearch.xml file (a file containing patch metadata information such as
patch ID, product, platform, language etc.), you do not need to specify a value for
Patch Metadata. However, if the patch zip file you downloaded does not contain
the PatchSearch.xml file, and you downloaded the patch metadata file onto
your local host separately, for Patch Metadata, specify the location of the patch
metadata file.
On a Unix based operating system, run the following command to verify whether
the PatchSearch.xml file is contained within a patch zip file:
unzip -l <patch zip file path> | grep PatchSearch.xml
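For example, for a patch zip file downloaded to /scratch (a sketch that reuses the file name from the EM CLI example later in this section; substitute your own path):
unzip -l /scratch/p13741363_112310_Linux-x86-64.zip | grep PatchSearch.xml
If the command prints a matching line, the metadata file is already part of the zip and you can leave Patch Metadata empty.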
For information on how to download the patch metadata file of a patch, refer
Uploading Patches to Oracle Software Library.
Note:
If you encounter an error mentioning that the patch could not be uploaded as
it is too large, either use EM CLI to upload the patch (as described in
Uploading Patches to Oracle Software Library), or run the following
command, restart the OMS, then retry the patch upload:
emctl set property -name "oracle.sysman.emSDK.ui.trinidad.uploadedfilemaxdiskspace" -sysman_pwd sysman -value 2589934592
Ensure that the value you specify for -value is in bytes, and is larger than the
size of the patch that you want to upload.
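The note above mentions restarting the OMS. For reference, this is typically done with emctl from the OMS home on each OMS host (a sketch; the OMS home path depends on your installation):
$<OMS_HOME>/bin/emctl stop oms
$<OMS_HOME>/bin/emctl start oms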
1. Set up EM CLI on the host on which the downloaded patch files that you want to
upload are located.
EM CLI is set up by default on every OMS host. For information on how to set up
EM CLI on a host that is not running the OMS, refer the Command Line Interface
Concepts and Installation chapter of Oracle Enterprise Manager Command Line
Interface.
2. Log in to EM CLI. For example:
<emcli_install_location>/emcli login -username=sysman -password=2benot2be
Note:
Ensure that the EM CLI log in user has the ADD_TARGET privilege.
3. Synchronize EM CLI:
<emcli_install_location>/emcli sync
4. Upload the patches to the Software Library using the upload_patches verb:
<emcli_install_location>/emcli upload_patches
-location=<patch_location> | -patch_files=<metadata_file_path;ZIP_file_path;second_part_of_ZIP_file_path>
-from_host=<host_name>
[-cred_name=<credential_name> [-cred_owner=<credential_owner>]]
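For instance, a representative command using the -location option might look like the following (a sketch assembled from the syntax above; adjust the EM CLI location, directory, and host name to your environment):
<emcli_install_location>/emcli upload_patches -location=/scratch/aime/patches -from_host=h1.example.com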
This example uploads all the patch zip files and patch metadata files present at /scratch/aime/patches on the h1.example.com host to Software Library.
Use the -patch_files option to provide the absolute path of a patch zip file and
its patch metadata file. If you use this option, you can specify only one patch zip
file. Hence, you can use this option to upload only a single patch at a time. Also,
use the -cred_name option to specify the named credentials that must be used to
access the specified host, and the -cred_owner option to specify the owner of the
specified named credential. If you do not specify the -cred_name option, the
preferred normal credentials are used to access the host. If you do not specify the -
cred_owner option, the credential owner is assumed to be the current user. For
example:
./emcli upload_patches -patch_files="/scratch/p13741363_112310_Linux-x86-64_M.xml;/scratch/p13741363_112310_Linux-x86-64.zip" -from_host=h1.example.com -cred_name=AIMECRED -cred_owner=SYSMAN
Note:
Ensure that you specify either the -location option or the -patch_files
option with the upload_patches verb, but not both. Specifying the -
location option enables you to perform a batch upload of multiple patches,
and is hence the recommended option.
To view more information on the syntax and the usage of the upload_patches
verb, run the following command:
$<OMS_HOME>/bin/emcli help upload_patches
41.2.5 Analyzing the Environment and Identifying Whether Your Targets Can Be
Patched
Before creating a patch plan to patch your targets, Oracle recommends that you view
the patchability reports to analyze the environment and identify whether the targets
you want to patch are suitable for a patching operation. These reports provide a
summary of your patchable and non-patchable targets, and help you create deployable
patch plans. They identify the problems with the targets that cannot be patched in
your setup and provide recommendations for them.
Patchability reports are available for Oracle Database, Oracle WebLogic Server, and
Oracle SOA Infrastructure targets.
Note:
To view the patchability report for Oracle Fusion Middleware targets (Oracle
WebLogic Server and Oracle SOA Infrastructure targets), the Enterprise
Manager for Oracle Fusion Middleware 12.1.0.4 plug-in must be deployed in
your setup.
1. From the Enterprise menu, select Reports, then select Information Publisher
Reports.
2. Click Expand All to view all the branches under Information Publisher Reports.
3. To view the patchability report for Oracle Database targets, under the Deployment
and Configuration branch, and the Patching Automation Reports sub branch, select
EM Target Patchability Report.
4. To view the patchability report for Oracle Fusion Middleware (Oracle WebLogic
Server and Oracle SOA Infrastructure) targets, under the Deployment and
Configuration branch, and the Patching Automation Reports sub branch, click EM
FMW Target Patchability Report.
Note:
If you see any missing property errors, then resolve the errors using the
workarounds described in Workaround for Missing Property Errors. If you
see any unsupported configuration errors, then resolve the errors using the
workarounds described in Workaround for Unsupported Configuration
Errors.
Note:
This section is mainly for Patch Designers who want to keep track of the
various patch releases, look for recommendations from Oracle, and identify
the ones they want to roll out in the environment.
• Oracle Database
• Oracle Siebel
• WebLogic Server
Note:
Starting with Enterprise Manager 12c, the Config Collection that is triggered at
the end of patch application happens asynchronously, which means that
collection may not complete when the plan completes execution. In such cases,
you might need to recalculate the patch recommendations for your enterprise.
Also, if the target collection did not complete properly, you might have to recalculate the patch recommendations.
To do so, follow these steps:
2. Perform the following steps to determine the time when the plan was
deployed on the targets:
b. On the Patches and Updates page, from the Plans section, select the
patch plan.
e. Note down the start time of the job from the Job UI page.
The Recommended Patches region appears by default on the Patch & Updates page.
You can edit this region to filter its contents.
To view details of a recommended patch, follow these steps:
1. From the Enterprise menu, select Provisioning and Patching, then select Patches
& Updates.
2. On the Patches & Updates page, in the Patch Recommendations region, click on
the bar graph pertaining to the desired patches.
Note:
If you do not see the Patch Recommendations region, click Customize Page
from the top-right corner, and then drag the Patch Recommendations region
to the page.
1. From the Enterprise menu, select Provisioning and Patching, then select Patches
& Updates.
2. On the Patches & Updates page, in the Patch Search region, enter the search
parameters you want to use and click Search.
Note:
If you do not see the Patch Search region, click Customize Page from the top-
right corner, and then drag the Patch Search region to the page.
Alternatively, you can use the Saved tab to run any previously saved searches. You can also use the Recent tab to access any recently performed identical (or similar) searches.
Once the patch search is complete, the results appear in the Patch Search Results
page. On this page, you can select a patch and download it either to the local host
(desktop) or to the Software Library.
Note:
Ensure that the necessary OPatch version is available before you apply patches in offline mode. You can do this either by manually uploading the relevant version of OPatch to the Software Library or by running the out-of-the-box OPatch update job while in online mode.
1. From the Setup menu, select Provisioning and Patching, then select Offline
Patching.
Note:
• Access Quicklinks
1. From the Enterprise menu, select Provisioning and Patching, then select Patches
& Updates.
2. On the Patches & Updates page, in the Software Library Patch Search region, enter
the search parameters you want to use and click Search.
Note:
If you do not see the Patch Search region, click Customize Page from the top-
right corner, and then drag the Patch Search region to the page.
Once the patch search is complete, the results appear in the Patch Search Results
page.
Note:
• Switching Back to the Original Oracle Home After Deploying a Patch Plan
• Deploying WebLogic Patches Along with SOA or Oracle Service Bus Patches In A Single Patch Plan
Note:
This section is mainly for Patch Designers who want to create patch plans.
1. From the Enterprise menu, select Provisioning and Patching, then select Patches
& Updates.
2. On the Patches & Updates page, identify the patches you want to apply as
described in Identifying the Patches to Be Applied.
4. From the context menu, click Add to Plan, and select Add to New.
Note:
If you have already created a patch plan, and if you want to add patches to
that patch plan, then you can select Add to Existing.
5. In the Create a New Plan window, enter a unique name for your Patch Plan, and
click Create Plan.
The patch you select and the target it is associated with get added to the plan.
Note:
• If the patch you selected impacts other targets, then you are prompted to
add the impacted targets as well.
• When you create a Patch Plan, you can add a target that is a member of any
system target in Cloud Control. When doing so, the other member targets
of that system are automatically added to the Patch Plan. A system is a set
of infrastructure components (hosts, databases, application servers, and so
on) that work together to host your applications. In Cloud Control, a
system and each of the components within it are modeled as individual
target types that can be monitored.
• For Oracle WebLogic Server, using a single patch plan, you can patch only
one domain. However, if it is a shared domain, then the Administration
Servers and Managed Servers running on different domains, which are
shared with the domain being patched, are automatically added into the
same plan.
• For Oracle WebLogic Server, if you have deployed the Enterprise Manager
for Oracle Fusion Middleware 12.1.0.4 plug-in, you can add any number of
WebLogic domain targets to a single patch plan. If you have not deployed
the Enterprise Manager for Oracle Fusion Middleware 12.1.0.4 plug-in, you
can add only a single WebLogic domain target to a single patch plan. For
information about how to deploy a new plug-in or upgrade an existing
plug-in, see Oracle Enterprise Manager Cloud Control Administrator's Guide.
If the WebLogic domain target that you add to a patch plan is a shared
domain, then all the Administration Servers and the Managed Servers that
are running on the domains that are shared with the domain being patched
are automatically added into the same patch plan.
• For Oracle SOA Infrastructure targets, all the SOA WebLogic domains that
must be shutdown to patch the SOA targets are added to the patch plan as
impacted targets. Therefore, the Administration Server and the Managed
Servers running in each of these domains also are affected, and form the
Other impacted targets when creating a patch plan.
Note:
This section is mainly for Patch Designers who want to access the patch
plans they have created.
To access the patch plan you created in Creating a Patch Plan, use one of the following
approaches.
Approach 1: Accessing Patch Plan from Plans Region
To access the patch plan from the Plans region, follow these steps:
1. From the Enterprise menu, select Provisioning and Patching, then select Patches
& Updates.
2. On the Patches & Updates page, in the Plans region, click the Patch Plan you want
to view. Alternatively, select the Patch Plan, and in the context menu, click View.
The Create Plan Wizard appears.
To filter the plans table, select All Plan Types or Patch depending on your
preference. To search for a plan, enter a plan name or partial plan name in the
search box, then click the search button.
Note:
• If you do not see the Plans region, click Customize Page from the top-right
corner, and then drag the Plans region to the page.
• To view only the plans that you created, click the icon of a person in the
Plans region.
1. From the Enterprise menu, select Provisioning and Patching, then select Patches
& Updates.
2. On the Patches & Updates page, in the Patch Recommendations region, click All
Recommendations.
Note:
If you do not see the Patch Recommendations region, click Customize Page
from the top-right corner, and then drag the Patch Recommendations region
to the page.
3. On the Patch Recommendations page, in the table, a patch that is already part of a
plan is shown with the plan icon in the In Plan column. Click the icon to display all
plans with which the patch is associated, and then click a plan to open the Create
Plan Wizard.
Note:
This section is mainly for Patch Designers who want to analyze the patch
plans and deploy them to roll out the patches.
To analyze the patch plan you created in Creating a Patch Plan and deploy it (or save
it as a patch template), follow these steps:
1. Access the patch plan using one of the approaches described in Accessing the
Patch Plan.
Cloud Control displays the Create Plan Wizard.
a. In the Overview section, validate the Patch Plan name. You can choose to edit
it if you want.
c. (Optional) In the Allow Access For section, click Add to grant patch plan
access permissions to administrators or roles, for the current patch plan.
In the Add Privileges to Administrators window, select an administrator or a
role, the access permission that you want to grant, then click Add Privilege.
Note:
• From within a patch plan, you can only grant patch plan permissions to
administrators or roles that have previously been created in Cloud Control.
You cannot create administrators or roles from within a patch plan. For
information on the roles and privileges required for patch plans and patch
templates, see Creating Administrators with the Required Roles for
Patching.
d. Click Next.
a. Review the patches added to the patch plan. Any recommended patches for
your configuration are automatically added to the patch plan. In addition,
any patches that you added manually are listed.
If you are patching an Oracle Grid Infrastructure target that is part of Oracle
Exadata, then you can add one Exadata Bundle Patch, and any number of
one-off Grid Infrastructure and Oracle Database patches to a single patch
plan, as long as you have the 12.1.0.5 Enterprise Manager for Oracle Database
plug-in deployed. In this scenario, ensure that you add the Exadata Bundle
Patch while creating the patch plan, and then add the one-off Grid
Infrastructure and Oracle Database patches as additional patches.
If you do not have the 12.1.0.5 Enterprise Manager for Oracle Database plug-
in deployed, then you cannot add one-off patches to a patch plan that already
contains an Exadata Bundle Patch.
If you are patching an Oracle Grid Infrastructure target that is not part of
Oracle Exadata, then you can add one Grid Infrastructure Patch Set Update
(PSU), and any number of one-off Grid Infrastructure and Oracle Database
patches to a single patch plan, as long as you have the 12.1.0.5 Enterprise
Manager for Oracle Database plug-in deployed. In this scenario, ensure that
you add the Grid Infrastructure PSU while creating the patch plan, and then
add the one-off Grid Infrastructure and Oracle Database patches as additional
patches.
If you do not have the 12.1.0.5 Enterprise Manager for Oracle Database plug-
in deployed, then you cannot add one-off patches to a patch plan that already
contains a Grid Infrastructure PSU.
To associate additional targets to a patch that is already in your patch plan,
follow the instructions outlined in Associating Additional Targets to a Patch
in a Patch Plan.
To view the details of a patch, select the patch, then from its context menu,
click View. To temporarily remove the patch from analysis and deployment,
click Suppress. This leaves the patch in the patch plan, but does not consider
it for analysis and deployment.
b. Click Next.
a. In the How to Patch section, select the mode of patching that you want to use.
For standalone (single-instance) database targets, Oracle Restart targets,
Oracle RAC targets, Oracle Data Guard targets, and Oracle Grid
Infrastructure targets (that may or may not be a part of Oracle Exadata), you
can choose between in-place patching and out-of-place patching. Out-of-place
patching is available for Oracle RAC targets and Oracle Grid Infrastructure
targets that are not a part of Oracle Exadata, only if you have the 12.1.0.5
Enterprise Manager for Oracle Database plug-in deployed. Out-of-place
patching is available for Oracle Data Guard targets only if you have the
12.1.0.5 Enterprise Manager for Oracle Database plug-in deployed.
For Oracle WebLogic Server, Oracle Fusion Applications, and Oracle SOA
Infrastructure targets, the only patching mechanism offered is in-place
patching. Out-of-place patching is not offered for these targets. For more
information on in-place and out-of-place patching, see Overview of Patching
in In-Place and Out-of-Place Mode.
If you want to clone the existing Oracle home of the database and patch the
cloned Oracle home instead of the original Oracle home, select Out of Place. If
you want to patch the existing original Oracle home of the target directly,
without cloning the original Oracle home, select In Place.
Also, you can choose to patch Oracle RAC targets, Oracle Grid Infrastructure
targets (whether or not they are part of Oracle Exadata), Oracle Data Guard
targets, Oracle WebLogic Server targets, Oracle Fusion Application targets,
and Oracle SOA Infrastructure targets in rolling mode, or in parallel mode.
For more information on patching targets in rolling mode and parallel mode,
see Overview of Patching in Rolling and Parallel Mode.
If you want to patch a single node of the target at a time, select Rolling. It is
the default option, and it involves very little downtime. While patching your
targets in rolling mode, you can choose to pause the execution of the patching
deployment procedure after each node is patched. For information on how to
do so, see Pausing the Patching Process While Patching Targets in Rolling
Mode.
If you want to patch all the nodes of the target simultaneously, select Parallel.
It involves downtime, as your entire system is shut down for a significant
period. However, this process consumes less time, as all the target nodes are
patched simultaneously.
b. (Appears only for standalone database targets, Oracle RAC targets, Oracle Data
Guard targets, and Oracle Grid Infrastructure targets) In the What to Patch
section, do the following:
If you have selected in-place patching, then review the details of the Oracle
homes that will be patched. By default, all of the database instances are
migrated.
For out-of-place patching, select the Oracle home that will be cloned, click
Create New Location against the Oracle home you want to clone, and enter
the full path name to a new directory that will be automatically created
during the cloning process. The directory cannot already exist, and the
specified operating system user must have write permission for its parent
directory.
For standalone database targets, Oracle RAC targets, Oracle Data Guard
targets, and Oracle Grid Infrastructure targets (that may or may not be a part
of Oracle Exadata), you have the option of migrating either all, or only some
of the database instances created from the specified Oracle home. Select the
ones that you want to migrate.
Note:
• After the cloned Oracle home is patched, and after the database instances
are migrated to the cloned Oracle home, you can choose to retain the
original Oracle home, or delete it if there are no other targets running from
that Oracle home.
• If the cloned home contains certain additional patches (that are not added
to the patch plan) that include SQL statements, the SQL statements for
these patches are not executed automatically when the database instance is
migrated to the cloned home. For these patches, you must execute the SQL
statements manually.
Note:
If you are patching a WebLogic Server 10.3.6 target, or an earlier version, and
you provide a custom stage location, the location you provide is disregarded,
and the selected patches are staged to the default directory configured with
SmartUpdate (which is <WEBLOGIC_HOME>/utils/bsu/cache_dir).
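If you want to confirm where the patches were staged in this case, you can list the default SmartUpdate cache directory on the WebLogic Server host (a sketch; substitute your actual WebLogic home path):
ls -l <WEBLOGIC_HOME>/utils/bsu/cache_dir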
In the Stage Root Component section, specify whether or not you want the
wizard to stage the root component. The root component is a set of scripts
Note:
If you have the Enterprise Manager for Oracle Database 12.1.0.6 plug-in (or a
higher version) deployed in your system, dynamically generated deployment
procedures are used to patch Grid Infrastructure targets (those that are a part
of Oracle Exadata, as well as those that are not a part of Oracle Exadata),
Oracle RAC database targets, and Oracle Data Guard targets, in both the in-place and out-of-place patching modes. However, if you do not have the
Enterprise Manager for Oracle Database 12.1.0.6 plug-in deployed in your
system, dynamically generated deployment procedures are only used to
patch Grid Infrastructure targets (those that are a part of Oracle Exadata, as
well as those that are not a part of Oracle Exadata) and Oracle RAC database
targets in out-of-place patching mode. Static deployment procedures are used
in all other patching operations. You can choose to customize both these types
of deployment procedures, and use the customized deployment procedure to
patch your targets.
If the patching procedure consists of a static deployment procedure, click
Create Like and Edit to customize the default deployment procedure. To use
a customized deployment procedure, select the customized procedure from
the list displayed in the Customization section of the Deployment Options
page. For more information, see Customizing a Static Patching Deployment
Procedure.
If the patching procedure consists of a dynamically generated deployment
procedure based on OPlan, select Specify custom steps to be added to
generated patching deployment procedure, then edit the deployment
procedure. For information on how to edit the dynamically generated
deployment procedure, see Customizing a Dynamic Patching Deployment
Procedure.
If you are patching Oracle WebLogic Server or Oracle SOA Infrastructure
targets, you can set a timeout value after which the server is forcefully shut
down, if it was not shut down by default. By default, the shutdown time is 30
minutes. You can change this by entering a value in the Timeout for
Shutdown (in minutes) field. Oracle recommends that you set a timeout
value, and ensure that it is based on monitoring of real time systems. If the
SOA Infrastructure patch that you are applying involves SQL application
functionality, then you must provide the absolute path to the SQL scripts
used, in the Post-Install SQL Script Metadata field. For information about
the SQL script location, refer to the respective readme documents of each
patch. Ensure that the SQL scripts that you provide are JDBC-compliant.
f. In the Notification section, specify whether or not you want to enable email
notifications when the patch plan is scheduled, starts, requires action, is
suspended, succeeds, and fails.
To enable email notifications, select Receive notification emails when the
patching process, then select the required options. If a warning message,
mentioning that the sender or the receiver email address is not set up, is
displayed, perform the action mentioned in the warning.
g. In the Rollback section, select Rollback patches in the plan to roll back the
patches listed in the plan, rather than deploy them.
The roll back operation is supported only for Management Agent, Oracle
WebLogic Server, Oracle SOA Infrastructure, Oracle Restart, single-instance
Oracle database, Oracle RAC database, Oracle RAC One node database,
Oracle Grid Infrastructure, and Oracle Grid Infrastructure targets that are
part of Oracle Exadata.
For more information on how to roll back a patch, see Rolling Back Patches.
h. In the Conflict Check section, specify whether you want to enable or disable
ARU Conflict Check, a check that uses Oracle Automated Release Updates
(ARU) to search for patch conflicts within the patch plan during the analysis
stage. Also, specify the action that the patching procedure must take when a
patch conflict is encountered during deployment.
For Conflicts, select one of the following:
• Stop at Conflicts, if you want the patching procedure to stop the deployment of the plan when a conflict is encountered.
• Force Apply, if you want the patching procedure to roll back the conflicting patches and apply the incoming patches when a conflict is encountered.
• Skip Conflicts, if you want the patching procedure to apply only the non-conflicting patches, and skip the application of the conflicting patches, when a conflict is encountered.
i. Click Next.
5. On the Validation page, click Analyze to check for conflicts. For information
about what checks are performed in the validation screen, see About the Create
Plan Wizard .
6. On the Review & Deploy page, review the details and do one of the following:
• If you are patching your database targets in out-of-place patching mode, then
click Prepare. This operation essentially clones the source Oracle home, and
patches it. While this happens, the source Oracle home and the database
instances are up and running.
Once you click Prepare, a Deploy Confirmation dialog box appears, which
enables you to schedule the Prepare operation. Select Prepare. If you want to
begin the Prepare operation immediately, select Immediately. If you want to
schedule the Prepare operation such that it begins at a later time, select Later,
then specify the time. Click Submit.
After the Prepare operation is successful, click Deploy. This operation
essentially switches the database instances from the source Oracle home to the
cloned and patched Oracle home. The Prepare and Deploy operations enable
you to minimize downtime.
Once you click Deploy, a Deploy Confirmation dialog box appears, which
enables you to schedule the Deploy operation. Select Deploy. If you want to
begin the Deploy operation immediately, select Immediately. If you want to
schedule the Deploy operation such that it begins at a later time, select Later,
then specify the time. Click Submit.
Note:
Instead of patching your Oracle homes in out-of-place patching mode, you can
provision an Oracle home directly from a software image stored in Software
Library (that has the required patches). To do this, follow these steps:
b. Create a patch plan. Ensure that you add all the patches that are a part of
the provisioned Oracle home to this plan.
• If you are patching any other target in any other mode, click Deploy.
Once you click Deploy, a Deploy Confirmation dialog box appears, which
enables you to schedule the Deploy operation. Select Deploy. If you want to
begin the Deploy operation immediately, select Immediately. If you want to
schedule the Deploy operation such that it begins at a later time, select Later,
then specify the time. Click Submit.
Note:
Many patching operations use OPlan, an independent tool that creates and
executes dynamic deployment procedures to patch your targets. The OPlan
readme file for a patch plan contains detailed information about the step wise
execution of the patch plan. If you have the 12.1.0.5 Enterprise Manager for Oracle
Database plug-in deployed, you can view this file. To view this file, on the Review
& Deploy page, click the Download link present beside Patching Steps.
Conflict Patches
When applying WebLogic Server, SOA, or Service Bus patches, there may be conflicts among the patches. These conflicts are detected during the analysis phase. You can select the appropriate patch conflict resolution action on the Deployment Options page, as described below.
Patch conflict resolution is supported from WebLogic Server 12.1.2.0.0 onwards.
When the analysis results display any patch conflicts, you can select one of the following actions:
• Stop at Conflicts: Select this if you want the patching procedure to stop the deployment of the plan when a conflict is encountered.
• Force Apply: Select this if you want the patching procedure to roll back the conflicting patches and apply the incoming patches when a conflict is encountered.
• Skip Conflicts: Select this if you want the patching procedure to apply only the non-conflicting patches and skip applying the conflicting patches when a conflict is encountered.
For WebLogic Server versions earlier than 12.1.2.0.0, the patch conflicts can be
removed using the following steps.
2. Analyze the patch. If there are any conflicting patches, the Run WebLogic Home Prerequisite Checks step indicates which existing patches conflict with the new patches that are part of the newly created Patch Plan.
3. Make a note of the patch number that is conflicting with the newly created Patch
Plan and that needs to be rolled back.
4. Create a new Rollback plan that includes the conflicting patches for all the targets.
7. All the conflicting patches are now removed from the WebLogic Server Home.
Now the Patch Plan that was initially created with the Patch Set Update can be
analyzed.
41.4.4 Switching Back to the Original Oracle Home After Deploying a Patch Plan
If you had patched an Oracle RAC target, Oracle single-instance database target,
Oracle Restart target, or an Oracle Grid Infrastructure target (that may or may not be a
part of Oracle Exadata), in out-of-place patching mode, and you now want to switch
back to the original home for some reason, then you can use the Switchback option
available in the Create Plan Wizard. The advantage of using this option is that you do
not actually roll back the patches from the cloned and patched Oracle home; you only
switch back to the old, original Oracle home that exists without the patches.
Note:
• The Switchback option is available only for Oracle RAC, Oracle single-
instance database, Oracle Restart, and Oracle Grid Infrastructure targets
(that may or may not be a part of Oracle Exadata), and only when these
targets are patched in out-of-place patching mode.
• You can switch back only if you have successfully analyzed and deployed
a patch plan.
To switch back to the original Oracle home after a patch plan is deployed, follow these
steps:
1. From the Enterprise menu, select Provisioning and Patching, then select Patches
& Updates.
2. On the Patches & Updates page, in the Plans region, click the successfully analyzed
and deployed patch plan you used for patching the Oracle RAC, Oracle Restart,
Oracle single-instance database, or Oracle Grid Infrastructure targets.
Alternatively, select the patch plan, and in the context menu, click View. The
Create Plan Wizard appears.
3. In the Create Plan Wizard, on the Review & Deploy page, click Switchback.
Note:
This section is mainly for Patch Designers who want to save the
successfully analyzed or deployed patch plans as patch templates so that
operators can consume them to create fresh patch plans with the approved
patches and predefined deployment options.
To save a patch plan as a patch template, follow Step (1) to Step (5) as outlined in
Analyzing, Preparing, and Deploying Patch Plans, and then for Step (6), on the Review
& Deploy page, click Save as Template. In the Create New Plan Template dialog,
enter a unique name for the patch template, and click Create Template.
Important:
Oracle recommends this as a best practice if you have to roll out patches on a large scale, over a period of time, and with multiple administrators involved. If
you have a large data center, then as a Patch Designer, create a patch plan
and apply the patches on a few databases, test if the patches are being
applied successfully, then save the plan as a template. Later, have your
Patch Operators create patch plans out of these templates so that they can
roll out the same set of patches to the rest of the databases in the data center.
41.4.6 Creating a Patch Plan from a Patch Template and Applying Patches
Once a successfully analyzed or deployed patch plan is saved as a patch template, you
can create patch plans out of the template, associate targets you want to patch, and
deploy the newly created patch plan.
This is purely an optional step. You do not have to save your patch plans as patch
templates to roll out patches. You can roll out patches directly from a patch plan as
described in Analyzing, Preparing, and Deploying Patch Plans.
Note:
This section is mainly for Patch Operators who want to create patch plans
from patch templates for rolling out the patches.
To create patch plans out of the patch templates, use one of the following approaches:
Approach 1
1. From the Enterprise menu, select Provisioning and Patching, then select Patches
& Updates.
2. On the Patches & Updates page, in the Plans region, select the patch template from which you want to create a patch plan.
4. In the Create Plan from Template dialog, enter a unique name for the patch plan, select the targets that you want to patch, and click Create Plan.
5. Return to the Patches & Updates page, and in the Plans region, click the patch
plan you want to use. Alternatively, select the patch plan, and in the context
menu, click View. The Create Plan Wizard appears.
6. In the Create Plan Wizard, go to the Validation page, and click Re-Analyze to
analyze the patch plan with the newly associated targets.
7. After successfully analyzing the patch plan, on the Validation page, click Next.
Approach 2
1. From the Enterprise menu, select Provisioning and Patching, then select Patches
& Updates.
2. On the Patches & Updates page, in the Plans region, do one of the following:
• Select a patch template. From the context menu, select View. The Edit Template
Wizard appears.
• Click the name of a patch template. The Edit Template Wizard appears.
4. In the Create Plan from Template dialog, enter a unique name for the patch plan,
select the targets that you want to patch, and click Create Plan.
5. Return to the Patches & Updates page, and in the Plans region, click the patch plan
you want to use. Alternatively, select the patch plan, and in the context menu, click
View. The Create Plan Wizard appears.
6. In the Create Plan Wizard, go to the Validation page, and click Re-Analyze to
analyze the patch plan with the newly associated targets.
7. After successfully analyzing the patch plan, on the Validation page, click Next.
1. Identify the Oracle Grid Infrastructure Patch Set Update (PSU) or one-off patches
that you want to apply, as described in Identifying the Patches to Be Applied.
3. Analyze and deploy the patch plan as described in Analyzing, Preparing, and
Deploying Patch Plans.
Note:
• You can add one Grid Infrastructure PSU, and any number of one-off Grid
Infrastructure and Oracle Database patches to a single patch plan, as long
as you have the 12.1.0.5 Enterprise Manager for Oracle Database plug-in, or
a higher version, deployed. In this scenario, ensure that you add the Grid
Infrastructure PSU while creating the patch plan, and then add the one-off
Grid Infrastructure and Oracle Database patches as additional patches.
If you do not have the 12.1.0.5 Enterprise Manager for Oracle Database
plug-in, or a higher version, deployed, then you cannot add one-off
patches to a patch plan that already contains a Grid Infrastructure PSU.
Exadata patching can be performed in two modes: in-place patching and out-of-place
patching. Oracle recommends out-of-place patching because the downtime involved is
much less.
For information about the supported Exadata releases, see Supported Targets,
Releases, and Deployment Procedures for Patching.
However, note that patch plans do not patch the Exadata Database Machine's compute
node entities, such as the operating system and firmware, or its storage cells.
They patch only the associated Oracle RAC database targets and the Oracle Grid
Infrastructure (Oracle Clusterware) targets running on the machine.
Note:
Therefore, when you create a patch plan with an Exadata Database Machine
recommended bundle patch, make sure you add the cluster or the Oracle RAC
database target running on the Exadata Database machine. The patch plan
automatically recognizes their association with the Exadata Database machine, and
prompts you to add all the impacted targets running on that machine. For example, if
you select the cluster target, it prompts you to add all the Oracle RAC database targets
and the Oracle Grid Infrastructure targets that are part of that cluster running on the
Exadata Database Machine.
To patch an Exadata Database machine, follow these steps:
1. Identify the Exadata Database Machine recommended bundle patch you need to
apply, as described in Identifying the Patches to Be Applied.
3. Add the cluster or the Oracle RAC database target running on the Exadata
Database machine, and analyze and deploy the patch plan as described in
Analyzing, Preparing, and Deploying Patch Plans.
Note:
• You can add one Exadata Bundle Patch, and any number of one-off Grid
Infrastructure and Oracle Database patches to a single patch plan, as long
as you have the 12.1.0.5 Enterprise Manager for Oracle Database plug-in, or
a higher version, deployed. In this scenario, ensure that you add the
Exadata Bundle Patch while creating the patch plan, and then add the one-
off Grid Infrastructure and Oracle Database patches as additional patches.
If you do not have the 12.1.0.5 Enterprise Manager for Oracle Database
plug-in, or a higher version, deployed, then you cannot add one-off
patches to a patch plan that already contains an Exadata Bundle Patch.
• Full Migration: If you choose to migrate all the database instances running in
your data center in one session, it is termed Full Migration.
• Partial Migration: If you choose to migrate only some of the instances in one
session, depending on the downtime you can afford in your data center, it is
termed Partial Migration. However, you must ensure that you migrate the
remaining instances in the following sessions. This approach is particularly useful
when you have multiple instances running out of an Oracle home.
Note:
For steps on how to perform a Full Migration or a Partial Migration of Oracle Grid
Infrastructure targets and Oracle RAC targets running on Exadata Database
Machine, see Analyzing, Preparing, and Deploying Patch Plans.
Switch Back is an option available for Oracle Grid Infrastructure targets and Oracle
RAC targets running on Exadata machines. It enables you to switch the instances
from the newly cloned Oracle homes back to the original Oracle homes in case of any
problems.
For more information on how to perform a Switch Back, see Switching Back to the
Original Oracle Home After Deploying a Patch Plan.
1. Search for the required patch, then create a patch plan by selecting the cluster
target.
4. (Optional) Perform a database switchover, such that the standby database becomes
the new primary database, and vice versa. Performing a switchover can save
database downtime.
For information on how to perform a database switchover, see Oracle® Data Guard
Broker.
5. Search for the required patch, then create another patch plan by selecting the
cluster target that contains the primary database (or the new standby database, in
case you have performed a switchover).
Note:
• Oracle recommends that you patch the standby database first, then patch
the primary database.
Table 41-5 describes the steps to apply an Oracle Grid Infrastructure PSU or an Oracle
Database PSU, when the primary and standby databases are RAC databases:
5. Search for the required Oracle Grid Infrastructure PSU, or the Oracle
Database PSU, then create another patch plan by selecting the cluster
target that contains the primary RAC database (or the new standby
database, in case you have performed a switchover).
4. Search for the required Oracle Grid Infrastructure PSU, or the Oracle
Database PSU, then create another patch plan by selecting the second
cluster target (C2).
Scenario 2: The primary database is a RAC database, and the standby database is a
single-instance database that is managed by a Cluster Ready Services (CRS) target
or a Single Instance High Availability (SIHA) target.
Table 41-6 describes the steps to apply an Oracle Grid Infrastructure PSU or an Oracle
Database PSU, when the primary database is a RAC database, and the standby
database is a single-instance database that is managed by a CRS target or a SIHA
target.
5. Search for the required Oracle Grid Infrastructure PSU, or the Oracle
Database PSU, then create another patch plan by selecting the cluster
target that contains the primary RAC database (or the new standby
database, in case you have performed a switchover).
4. Search for the required Oracle Grid Infrastructure PSU, or the Oracle
Database PSU, then create another patch plan by selecting the second
cluster target (C2).
Scenario 3: The primary and standby databases are single-instance databases that
are managed by CRS or SIHA targets.
Table 41-7 describes the steps to apply an Oracle Grid Infrastructure PSU or an Oracle
Database PSU, when the primary and standby databases are single-instance databases
that are managed by CRS or SIHA targets.
5. Search for the required Oracle Grid Infrastructure PSU, or the Oracle
Database PSU, then create another patch plan by selecting the cluster
target that contains the primary single-instance database (or the new
standby database, in case you have performed a switchover).
4. Search for the required Oracle Grid Infrastructure PSU, or the Oracle
Database PSU, then create another patch plan by selecting the second
cluster target (C2).
1. Search for the required Oracle Grid Infrastructure PSU, or the Oracle Database
PSU, then create a patch plan by selecting the first cluster target (C1).
4. Search for the required Oracle Grid Infrastructure PSU, or the Oracle Database
PSU, then create another patch plan by selecting the second cluster target (C2).
Note:
You can patch the clusters in any order, that is, C1 first and then C2, or C2 first
and then C1.
1. Identify the patches that you want to apply, as described in Identifying the Patches
to Be Applied.
3. Analyze and deploy the patch plan, as described in Analyzing, Preparing, and
Deploying Patch Plans.
1. Identify the patches that you want to apply, as described in Identifying the Patches
to Be Applied.
3. Analyze and deploy the patch plan, as described in Analyzing, Preparing, and
Deploying Patch Plans.
Before you begin patching Oracle Service Bus, ensure that you meet the following
prerequisites:
• Ensure that Oracle Home collection is complete. Patching throws an error if one or
more Oracle Homes are not discovered.
Note:
• Ensure that the Oracle Home and domain administrator credentials are set.
• For offline patching, ensure that the necessary OPatch is downloaded from My
Oracle Support (MOS) and manually uploaded to Saved Patches.
For patching WebLogic Server 12.2.1.x, Enterprise Manager expects OPatch release
13.3.0.0.0 to be available in the Software Library. At the time Enterprise
Manager was released, this version of OPatch was not available for download from
MOS. Hence, when patching WebLogic Server 12.2.1.x, Enterprise Manager uses the
OPatch installed in the WebLogic Server Oracle Home instead of trying to download
the latest OPatch from MOS.
• The target selector needs metadata from My Oracle Support to filter the targets
based on the patch types. In online mode, run the Refresh From My Oracle Support
job. In offline mode, upload the catalog.zip file.
– 11.1.1.5.0
– 11.1.1.6.0
– 11.1.1.7.0
– 11.1.1.9.0
– 12.1.3.0.0
– 12.2.1.0.0
• A SOA patch containing a SQL script may fail if the metadata is not specified in the
right format. The metadata has to be specified on the Deployment Options page, in
the Location of Post Install SQL Script field.
Ensure that the SQL scripts are specified in the following format:
<patch_number1>:<sql_script1_location>:<sql_script2_location>
:<patch_number2>:<sql_script3_location>:...
Example:
14082705:%ORACLE_HOME%/rcu/integration/soainfra/sql/createschema_soainfra_oracle.sql
(See also the extended example after this list.)
Additionally, this information is available in the Post-Installation Instructions in
the README.txt file. If the Location of Post Install SQL Script field is left
blank or the right SQL script is not specified, the following error message is
written to the log:
One or more SQL files included in this patch are not selected for post-patch
execution. To correct this, update your selection in the "Deployment options"
page. To continue without making any changes, run the required SQL scripts
manually after the patch is deployed successfully.
• The relevant patch, target, and domain details are displayed in the following table:
Patch Targets
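As referenced in the SQL script prerequisite above, the following is an extended
example of the Location of Post Install SQL Script field value when two patches each
require a script. The first entry is the example shown earlier; the second patch
number and script name are hypothetical, for illustration only:
14082705:%ORACLE_HOME%/rcu/integration/soainfra/sql/createschema_soainfra_oracle.sql:26012345:%ORACLE_HOME%/rcu/integration/soainfra/sql/custom_post_patch.sql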
2. From the Enterprise menu, select Provisioning and Patching, then select Saved
Patches.
3. Click Upload.
4. Provide the location of the patch file and the patch metadata, and click Upload.
5. Once the patch has been uploaded, select the patch. This takes you to the patch
home page.
8. Enter Oracle Service Bus or Service Cluster Targets as the target type.
Note:
If a single Service Bus component is selected, all the associated targets
are automatically picked up for patching. Ensure that you select only one
Service Bus component.
11. From the Enterprise menu, select Provisioning and Patching, then select Patches
& Updates.
12. Select the patch plan that you created from the list of patch plans.
14. Enter the rest of the details in the EM User Interface, validate the credentials, and
choose the customization required. Ensure that the data prefilled or selected is
accurate.
16. Once the analysis is complete, click Review so that the tool can review, analyze,
and validate the details.
18. Click Deploy. The deployment procedure is submitted and patching is in progress.
Note:
For Oracle Service Bus 11g targets, patch recommendations are not displayed.
You can search for the recommended patches through the Recommended
Patch Advisor search on the Patches & Updates page.
Related Topics:
2. From the Enterprise menu, select Provisioning and Patching, then select Saved
Patches.
3. Click Upload.
4. Provide the location of the patch file and the patch metadata, and click Upload.
5. Once the patch has been uploaded, select the patch. This takes you to the patch
home page.
8. Enter Oracle Service Bus or Service Cluster Targets as the target type.
Note:
If a single Service Bus component is selected, all the associated targets
are automatically picked up for patching. Ensure that you select only one
Service Bus component.
11. From the Enterprise menu, select Provisioning and Patching, then select Patches
& Updates.
12. Select the patch plan that you created from the list of patch plans.
14. Choose the Rollback option. Enter the rest of the details in the EM User Interface,
validate the credentials, and choose the customization required. Ensure that the
data prefilled or selected is accurate.
16. Once the analysis is complete, click Review so that the tool can review, analyze,
and validate the details.
Related Topics:
41.4.14 Deploying WebLogic Patches Along with SOA or Oracle Service Bus Patches in
a Single Patch Plan
This topic contains the procedure to deploy WebLogic patches as well as SOA or
Oracle Service Bus patches using a single patch plan. Using a single patch plan to
deploy all the required patches reduces downtime during patching.
In a SOA or Oracle Service Bus domain, you can apply WebLogic patches as well as
SOA or Oracle Service Bus product patches. These patches can be applied in different
sessions by creating separate patch plans; however, each patch deployment session
involves downtime. To reduce the system downtime, you can create one patch plan
by selecting the WebLogic platform patches as well as the SOA or Oracle Service Bus
patches, and then apply this patch plan in one session.
Note:
Combined patching is supported only from Oracle Service Bus or SOA
release 12.1.3.0.0 onwards. For earlier versions of Oracle Service Bus and SOA,
create separate patch plans to apply the patches.
Note:
To achieve this, first create the patch plan by selecting the desired WebLogic
patches and the desired domain targets. Then add the Oracle Service Bus or SOA
patches to the same patch plan, and ensure that you reselect the same domain
targets in the target selector window, as detailed in the steps below. This
ensures that all required patches, and the targets to deploy them against, are
selected.
You can apply multiple patches through a single patch plan.
For more information on creating a patch plan, see Creating a Patch Plan.
Note:
An Oracle Service Bus or SOA target must have an Oracle Home associated
with it. Otherwise, the following error message is displayed along with the resolution:
There is no Oracle Home(OH) target associated for the patch being applied.
To rectify this issue the user should promote the OH being patched, by
choosing the "Discover Promote Oracle Home Target" job in Enterprise
Manager. Ensure the OH target is available after this operation is
completed, and the configuration metrics for the OH target is collected.
Note:
You can also include any SQL scripts that need to be executed while applying
the patch.
1. From the Enterprise menu, select Provisioning and Patching, then select Saved
Patches.
2. Click Upload.
3. Provide the location of the patch file and the patch metadata, and click Upload.
4. Once the patch has been uploaded, select the patch. This takes you to the patch
home page.
5. Select the WebLogic patch and the WebLogic domain as the target, and click
Create Plan.
9. The WebLogic Server patch plan is created and can be deployed along with the
Oracle Service Bus or SOA patch.
10. From the Enterprise menu, select Provisioning and Patching, then select Patches
& Updates.
11. Select the patch plan that you created from the list of patch plans.
13. On the Patches & Updates page, under the Patches tab, click Add Patch.
14. Search and select a SOA or Oracle Service Bus patch and select the same domain
target.
15. Enter the rest of the details in the EM User Interface, validate the credentials, and
choose the customization required. Ensure that the data prefilled or selected is
accurate.
17. Once the analysis is complete, click Review so that the tool can review, analyze,
and validate the details.
19. Click Deploy. The deployment procedure is submitted and patching is in progress.
Related Topics:
Problem: Empty target property version.
Workaround: The target is not properly configured, or the target may be unavailable.
Reconfigure the target or check for metric collection errors.
Problem: Inadequate or incomplete target information collected by Oracle Management
Agent.
Workaround: To resolve this issue, recompute the dynamic properties and refresh the
host configuration so that the Management Repository is updated with the latest
configuration of the host. To do so, follow these steps:
1. To recompute the dynamic properties, do one of the following:
Option A: Stop and restart the Management Agent. This option is simpler
because you do not have to understand the target model.
$ emctl stop agent
$ emctl start agent
For a cluster, restart the Management Agent on all the nodes of the cluster.
Option B: Reload the dynamic properties of all the targets. This option is better
because there is no blackout or downtime for monitoring.
To view the list of targets that are monitored by the Management Agent, run
the following command:
$ emctl config agent listtargets
To reload the dynamic properties of a specific target, run the following
command:
$ emctl reload agent dynamicproperties [<Target_name>:<Target_Type>]
For example:
$ emctl reload agent dynamicproperties oradb:oracle_database
$ emctl reload agent dynamicproperties racdb_1:rac_database
$ emctl reload agent dynamicproperties crs:cluster
$ emctl reload agent dynamicproperties wls:weblogic_j2eeserver
$ emctl reload agent dynamicproperties server1.xyz.com:host
Problem: Targets are not properly discovered because of inadequate or incomplete
target information collected during discovery.
Workaround: To resolve this issue, rediscover the domain so that all the targets in the
domain are discovered effectively. To do so, follow these steps:
1. Log in to the Domain Home page using appropriate credentials. For example,
Farm01_base_domain.
2. On the (Farm01_base_domain) home page, from the Farm menu, select Refresh
WebLogic Domain, and click Ok on all the following pages to complete the
process. After successful completion of the process, the domain home page is
refreshed to discover all the targets afresh.
Problem: Oracle RAC instance does not have an associated Oracle RAC database.
Workaround: Rediscover the Oracle RAC target and add the Oracle RAC instance to
the Oracle RAC database.
Problem: The database is not mediated by the OMS.
Workaround: The target discovery is not appropriate. Remove the target from Cloud
Control, and rediscover it on all the Management Agents in the cluster.
• (For Oracle WebLogic Targets only) If there are inherent problems with the
SmartUpdate tool.
• In the header section of the Validation page or the Review page in the Create Plan
Wizard (Figure 41-11)
• In the Plans region of the Patches & Updates page (Figure 41-13).
Figure 41-12 Patch Plan Errors Displayed in the Issues to Resolve Section
Note:
Issues Remain: In the Create Plan Wizard, on the Validation page, review the
issues listed in the Issues to Resolve section. If an error message states
that you must click Show Detailed Results here, then click it. On the
Procedure Activity Status page, in the Status Detail table, review the status
of each of the steps. Click the status to view the log details. If the Issues
to Resolve table is empty, exit and re-enter the plan.
Conflicts Detected: In the Create Plan Wizard, on the Validation page, review the
issues listed in the Issues to Resolve section. If an error message states
that you must click Show Detailed Results here, then click it. On the
Procedure Activity Status page, in the Status Detail table, review the status
of each of the steps. Click the status to view the log details.
Preparation Failed: On the Validation page, click Show Detailed Results here.
On the Procedure Activity Status page, in the Status Detail table, review the
status of each of the steps. Click the status to view the log details.
Deployment Failed: On the Validation page, click Show Detailed Results here.
On the Procedure Activity Status page, in the Status Detail table, review the
status of each of the steps. Click the status to view the log details.
1. From the Enterprise menu, select Provisioning and Patching, then select Patches
& Updates.
2. On the Patches & Updates page, in the Plans region, do one of the following:
• Select a patch template. From the context menu, select View. The Edit Template
Wizard appears.
• Click the name of a patch template. The Edit Template Wizard appears.
Note:
• You can modify only the description and the deployment date in the patch
template.
1. From the Enterprise menu, select Provisioning and Patching, then select Patches
& Updates.
2. On the Patches & Updates page, in the Plans region, select a successfully analyzed,
deployable patch plan.
Note:
You can create a patch template only from one Patch Plan at a time.
3. Select Create Template. The Create New Plan Template dialog appears.
4. Enter a unique name for the template, then click Create Template.
Note:
When you select a plan, the Create Template option is not visible if you:
• Do not have the privileges to view the Patch Plan that you selected.
Approach 2
1. From the Enterprise menu, select Provisioning and Patching, then select Patches
& Updates.
2. On the Patches & Updates page, in the Plans region, click the name of a
successfully analyzed, deployable patch plan. The Create Plan Wizard appears.
3. In the Create Plan Wizard, in the Review & Deploy page, click Save as Template.
4. Enter a unique name for the template, then click Create Template.
Note:
2. On the Patches & Updates page, in the Plans region, select a patch template.
3. From the context menu, select View. The Edit Template Wizard appears.
4. In the Edit Template Wizard, on the Patches page, click a patch number. The patch
details page appears.
2. On the Patches & Updates page, in the Plans region, select the patch plan that you
want to delete. From the context menu, click Remove.
Note:
If you do not see the Plans region, click Customize Page from the top-right
corner, and then drag the Plans region to the page.
1. From the Enterprise menu, select Provisioning and Patching, then select Patches
& Updates.
2. On the Patches & Updates page, in the Plans region select one or more patch
templates. The context menu appears.
Note:
An administrator who created the patch template and the super administrator
of Cloud Control can modify a patch template.
Note:
For more information, see Patch Plan Becomes Nondeployable and Fails.
1. From the Enterprise menu, select Provisioning and Patching, then select Patches
& Updates.
2. On the Patches & Updates page, in the Plans region, select a Patch Plan to which
the patch belongs. From the context menu that appears, select View.
Note:
If you do not see the Plans region, click Customize Page from the top-right
corner, and then drag the Plans region to the page.
3. In the Create Plan Wizard, on the Patches page, click Add Patch. The Edit Search
window appears.
4. In the Edit Search window, search for the patch to which you want to associate
additional targets.
5. Select the patch that you want to add, then click Add to This Plan. The Add Patch
To Plan window appears.
6. In Add Patch To Plan window, search and select the additional targets that you
want to associate with the patch, and then, click Add Patch to Plan.
Note:
Ensure that you select only homogeneous targets.
For Oracle WebLogic Server, using a single patch plan, you can patch only one
domain. However, if it is a shared domain, then the admin servers and
Managed Servers running on different domains, which are shared with the
domain being patched, are automatically added into the same plan.
For Oracle WebLogic Server, if you have deployed the Enterprise Manager for
Oracle Fusion Middleware 12.1.0.4 plug-in, you can add any number of
WebLogic domain targets to a single patch plan. If you have not deployed the
Enterprise Manager for Oracle Fusion Middleware 12.1.0.4 plug-in, you can
add only a single WebLogic domain target to a single patch plan. For
information about how to deploy a new plug-in or upgrade an existing plug-
in, see Oracle Enterprise Manager Cloud Control Administrator's Guide.
If the WebLogic domain target that you add to a patch plan is a shared
domain, then all the Administration Servers and the Managed Servers that are
running on the domains that are shared with the domain being patched are
automatically added into the same patch plan.
Approach 2
1. From the Enterprise menu, select Provisioning and Patching, then select Patches
& Updates.
2. On the Patches & Updates page, in the Patch Recommendations region, click the
graph.
Note:
If you do not see the Patch Recommendations region, click Customize Page
from the top-right corner, and then drag the Patch Recommendations region
to the page.
4. From the context menu, select Add to Plan, then Add to Existing, and select the
plan you want to add the patch to.
The patch you selected and the target it is associated with get added to the
existing plan.
Approach 3
1. From the Enterprise menu, select Provisioning and Patching, then select Patches
& Updates.
2. On the Patches & Updates page, in the Search region, search a patch you want to
add to the patch plan.
Note:
If you do not see the Search region, click Customize Page from the top-right
corner, and then drag the Search region to the page.
3. On the Patch Search page, click the patch to view its details.
4. On the patch details page, click Add to Plan, then Add to Existing, and select the
plan you want to add the patch to.
The patch you selected and the target it is associated with get added to the existing
plan.
• Request a merge patch for the conflicting patches. To do so, click Request Patch
on the Validation page.
• Roll back the conflicting patches in the Oracle home and forcefully apply the
incoming patches. To do so, on the Deployment Options page, in the Advanced
Patching Options section, from the Conflict Resolution list, select Forcefully apply
incoming patches.
• Skip the conflicting patches. To do so, on the Deployment Options page, in the
Advanced Patching Options section, from the Conflict Resolution list, select Skip
conflicting patches.
1. From the Enterprise menu, select Reports, then select Information Publisher
Reports.
2. On the Information Publisher Reports page, in the table, expand Deployment and
Configuration, then expand Patching Automation Reports, and then select EM
Deployable Patch Plan Execution Summary.
3. Edit the deployment procedure (you can even add manual steps to the deployment
procedure), then save it with a unique, custom name.
For information on how to edit and save a deployment procedure, see Customizing
a Deployment Procedure.
For each placeholder step in the deployment procedure, you can add three types of
custom steps: a directive step (which enables you to run a directive stored in the
Software Library), a host command step (which enables you to run a command script
on the patch targets), and a manual step (which enables you to provide manual
instructions to a user).
3. Select all the custom steps that you want to add under the placeholder steps, then
click Enable.
To disable an enabled custom step, select the custom step, then click Disable.
4. Select a custom step that you want to add, then click Edit.
If the custom step you selected is a directive step, follow the instructions outlined
in Adding Steps.
If the custom step you selected is a host command step, an Edit Custom Step dialog
box appears, which enables you to specify the details of a command script that you
want to run on the patch targets. For Script, specify the commands that you want
to run on the patch targets. For Interpreter, specify the complete path of the
interpreter that must be used to execute the specified command script. For
Credential Usage, select the credentials that must be used to run the custom host
command step. Click OK.
If the custom step you selected is a manual step, an Edit Custom Step dialog box
appears, which enables you to provide manual instructions to the user during the
execution of the deployment procedure. For Instructions, specify the instructions
for the manual tasks that a user must perform in this custom step. Click OK. The
specified instructions are displayed when this custom step of the deployment
procedure is executed.
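As an illustration of the host command custom step described above, the following is
a minimal sketch. The script contents, staging directory, and interpreter path are
hypothetical and depend on your environment:
Script:
#!/bin/sh
# Example post-patch cleanup: remove leftover staging files on the patched target (illustrative only)
rm -rf /tmp/patch_stage_tmp
Interpreter: /bin/sh
Credential Usage: the normal host credentials of the patch target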
5. Repeat Step 4 for all the custom steps that you have enabled.
41.6.13 Pausing the Patching Process While Patching Targets in Rolling Mode
While patching your targets in rolling mode, you can choose to pause the execution of
the patching deployment procedure after each node is patched. This feature is useful
when you want to perform a clean up on the host having the patched node, verify
whether the patch is applied successfully on the node, and so on, before starting the
patching process on the next node.
Note:
This feature is available only if you have the 12.1.0.5 Enterprise Manager for
Oracle Database plug-in, or a higher version, deployed.
2. Use the custom deployment procedure while creating a patch plan to patch your
targets in rolling mode (by specifying this custom deployment procedure in the
Customization tab of the Deployment Options page).
The patching deployment procedure is paused after a node is patched, and you can
resume its execution from the Procedure Activity page.
Note:
• Oracle Restart
1. In Cloud Control, from the Enterprise menu, select Provisioning and Patching,
and then click Patches and Updates.
2. On the Patches and Updates page, in the Patch Search region, enter the patch that
you want to roll back.
3. In the Add Patch to Plan dialog box, enter a unique name for the plan, and select
all the targets from which you want to roll back the patches.
6. On the Deployment Options page, in the How to Patch section, select In Place, and
in the Rollback section, select Rollback patches in the plan. Click Next.
7. On the Validation page, click Analyze to validate the plan. After a successful
analysis, click Next.
8. On the Review and Deploy page, review the details you have provided for the
patch plan, and then click Rollback to roll back the selected patch from the
corresponding targets in the plan.
This chapter explains how you can patch Linux hosts using Oracle Enterprise Manager
Cloud Control (Cloud Control). In particular, this chapter covers the following:
Note:
To understand how you can use Enterprise Manager Ops Center to update or
patch Linux hosts, refer to the chapter on updating operating systems in the
Oracle Enterprise Manager Ops Center Provision and Update Guide.
• Set up a Linux Patching group to update a group of Linux hosts and collect
compliance information
• Manage RPM repositories and channels (clone channels, copy packages from one
channel into another, delete channels)
• Manage configuration file channels (create/delete channels, upload files, copy files
from one channel into another)
The following are concepts related to Linux patching:
Linux Host: A host target in Cloud Control that is running the Linux operating
system.
Linux Patching Group: A set of managed Linux hosts that are associated with a
common list of RPM repositories. Every group is configured with an update schedule,
according to which a recurring job is triggered to update the hosts of the group from
the associated RPM repositories.
Unbreakable Linux Network (ULN): A Web site hosted by Oracle to provide updates
for Oracle Linux.
ULN Channel: A channel is a group of RPM packages on ULN. For example, the
el4_latest channel contains all the packages for Oracle Linux 4.
RPM Repository: A directory that contains RPM packages and their metadata
(extracted by running yum-arch and createrepo). The RPM repository is accessible
via http or ftp. An RPM repository can be organized to contain packages from
multiple channels. For example, /var/www/html/yum/Enterprise/EL4/latest
might contain packages from the el4_latest channel on ULN.
Custom Channel: A channel that is created by the user to store a set of custom RPM
packages. Custom channels can be added to the RPM repository.
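For example, an RPM repository directory like the one described above can be
prepared with commands such as the following. This is a sketch only; the channel
directory and the source location of the RPM files are hypothetical:
$ mkdir -p /var/www/html/yum/custom_channel    # hypothetical channel directory under the Apache document root
$ cp /tmp/custom_rpms/*.rpm /var/www/html/yum/custom_channel/    # copy the custom RPM packages into it
$ cd /var/www/html/yum/custom_channel
$ createrepo .    # generate the repodata metadata that yum reads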
• Oracle Linux 4
• Oracle Linux 5
• Oracle Linux 6
• Oracle Linux 7
2. Install yum on all your Oracle Linux 6 target hosts. Install yum and up2date on all
your Oracle Linux 5 target hosts.
• /bin/cp
• /bin/rm
• /bin/chmod
• /sbin/chkconfig
• yum
• up2date
• sed
• rpm
Note:
• Identify a Redhat or Oracle Linux host, install a Management Agent, and point to
the OMS. This host must have the sudo package installed.
• Obtain a valid Customer Support Identifier (CSI) number from your Oracle sales
representative.
After obtaining a valid CSI number, ensure that you create a ULN account. To
create a ULN account, access the following URL:
https://linux.oracle.com/register
Note:
You do not need to upload the up2date packages to Software Library if the
host on which you plan to set up the RPM Repository is running on an Oracle
Linux platform.
Note:
For a multi-OMS setup, the following steps only need to be performed on one
OMS.
3. Edit the Patch Software Library entities metadata file swlib.xml present in
the Oracle home of the OMS to upgrade the ExternalID of the Software Library
entity Up2date Package Component.
To do so, follow these steps:
(1) Open the swlib.xml file present at the following location:
$ORACLE_HOME/sysman/metadata/swlib/patch/
(2) Search for the tag <Entity name="Install up2date RPM">, which in
turn has a subtag ExternalID.
(3) Increase the value of the ExternalID by 0.1.
For example, if the original ExternalID value of the entity in the Software
Library is 2.0, then increase it by 0.1 so that the ExternalID becomes 2.1.
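One way to locate the entry before editing it is to search the file from the OMS
host. This is a sketch and assumes GNU grep is available; the ExternalID subtag
typically appears within a few lines of the matching Entity tag:
$ grep -A 5 'Entity name="Install up2date RPM"' $ORACLE_HOME/sysman/metadata/swlib/patch/swlib.xml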
4. Upload the zip file to Software Library by running the following command:
$ emctl register oms metadata -service swlib -file
$ORACLE_HOME/sysman/metadata/swlib -core
• Ensure that the /var/www/html/ directory on the host on which you plan to set
up the RPM repository has at least 60 GB of free disk space per channel.
• Ensure that Apache is installed, and listening on port 80. To verify this, you can try
connecting to the URL: http://host.
For example: http://h1.example.com. If this works, then it is confirmed that
Apache is installed and listening on port 80.
• Ensure that the createrepo package is installed on the RPM Repository host. To
obtain this package, subscribe to the el*_addon or the ol*_addon channel.
• If the RPM Repository host is not running on Oracle Linux 6 (OL6), but is
subscribed to an OL6 channel whose name is of the format ol6_*, then you must
import the OL6 public key manually. To do so, follow these steps:
• Ensure that the Oracle GPG keys are installed on the host on which you plan to set
up the RPM Repository.
To install the Oracle GPG keys on a host running on the Oracle Linux 5 or Oracle
Linux 6 platforms, run the following command:
rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY
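Before proceeding, you can quickly verify some of the prerequisites above from the
repository host. This is a sketch rather than a required procedure; the host name is
the same illustrative name used earlier:
$ df -h /var/www/html    # confirm at least 60 GB of free space per channel
$ curl -sI http://h1.example.com/ | head -1    # expect an HTTP status line if Apache is listening on port 80
$ rpm -q createrepo    # confirm that the createrepo package is installed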
1. In Cloud Control, from the Setup menu, select Provisioning and Patching, then
select Linux Patching.
2. On the Patching Setup page, in the Linux Patching Setup tab, click Setup RPM
Repository.
3. On the Setup RPM Repository page, in the RPM Repository Server section, select
the RPM Repository server by clicking the search icon. Select the host assigned for
subscribing to ULN.
4. In the Credentials section, ensure that the Normal Host Credential user has write
access to the stage location, and the Privileged Host Credential user can sudo
with root privilege. Click Apply.
6. (Optional) If you want to change the refresh mode to 30 seconds, then from the
View Data list, select Real Time: 30 Second Refresh.
7. In the Steps tab of the Status Detail section, check the status of this step. Wait
until the step Installing Up2date is completed or skipped.
8. Click the status of the manual step Register with ULN to verify if your host has
been registered to ULN.
If you have registered your host to ULN, then select the target and click Confirm,
and then click Done to go to the main flow.
If you have not registered your host to ULN, then perform the following steps on
your Linux host:
b. Check if your host can connect to ULN. If your host cannot connect to the
ULN directly, you can configure up2date to use a proxy server. To configure
access to ULN using a proxy server, follow these instructions:
https://linux.oracle.com/uln_faq.html#9
Note:
While registering, you can choose the user name and password. These
credentials will be used to log in to http://linux.oracle.com
a. Log in to ULN:
http://linux.oracle.com/
b. Click on the Systems tab to manage subscriptions for each subscribed server.
Note:
10. Once the deployment procedure ends successfully, from the Setup menu, select
Provisioning and Patching, then select Linux Patching.
11. On the Patching Setup page, in the Linux Patching Setup tab, click Manage RPM
Repository to verify if the ULN channels are displayed in the Cloud Control
console.
12. On the Manage RPM Repository page, check if all the subscribed channels are
listed and if all the packages are downloaded.
• Install yum on all your Oracle Linux 6 target hosts. Install yum and up2date on all
your Oracle Linux 5 target hosts.
• Ensure that the Enterprise Manager user logs in to the OMS with super user
privileges.
1. In Cloud Control, from the Setup menu, select Provisioning and Patching, then
select Linux Patching.
4. On the Create Group: Properties page, enter a unique name for the group. Select
the maturity level, Linux distribution, and Linux hosts to be added to the group.
Click Next.
5. On the Create Group: Package Repositories page, select the RPM Repositories that
must be associated with the patching group (click the search icon to select
repository).
In the Check GPG Signatures section, select Check GPG signatures to ensure that
yum or up2date performs a GPG signature check on the packages obtained from
the specified repositories. Sometimes, yum or up2date may require a public GPG
key to verify the packages obtained from the repositories. This key may not be
previously imported into the RPM database. To ensure that this key is imported,
select Import GPG key, then specify the GPG Key URL.
In the Stage Location section, specify the location where you want the Linux
patching configuration and log files to be created.
In the Update Hosts section, select Automatically Update Hosts if you want to
auto-update the hosts, that is, to schedule an update job (the schedule is specified
in a subsequent step) to update all non-compliant packages from the selected
package repository.
In the Excluded Packages section, for Excluded Packages, specify the list of
packages that you do not want to update while patching the Linux hosts. If the list
of packages that you do not want to update during the patching process is present
in a file, click Import From File to specify the location of the file. The wizard
obtains the required packages from the specified file.
In the Rollback Last Update Session section, select Enable 'Rollback Last Update
Session' to enable the Rollback Last Update Session feature for the group in the
Undo Patching wizard. If this feature is not enabled here, it is not visible in the
Undo Patching wizard for the group.
In the Package Compliance section, you can choose whether to include Rogue
packages in compliance reporting or not.
In the Packages Updated on Reboot section, for Packages updated on Reboot,
specify the list of packages that must be updated only when the host is rebooted.
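The Check GPG Signatures option described above relies on the relevant public GPG
key being present in the RPM database of each target host. As a quick check, you can
list the keys that are already imported by running the following on the target host
(a sketch; the query format is optional):
$ rpm -q gpg-pubkey --qf '%{NAME}-%{VERSION}-%{RELEASE}  %{SUMMARY}\n'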
6. Click Next.
7. On the Create Group: Credentials page, enter the host credentials or choose to use
preferred credentials. Click Next.
8. On the Create Group: Patching Script page, enter any pre/post patching operations
to be done. This is not a mandatory step. Click Next.
Note:
Steps (8) and (9) will be skipped if Automatically Update Hosts was not
selected.
9. On the Schedule page, set the schedule for the update job. Click Next.
10. On the Review page, validate all the parameters. Click Finish.
11. From the Enterprise menu, select Provisioning and Patching, then select Linux
Patching. Verify the compliance report generated. The group created will have at
least one out-of-date package.
Table 42-1 describes the jobs that are submitted for setting up a Linux patching
group.
Patching Configuration: This job configures all the hosts for patching. It creates
configuration files to be used by the yum and up2date tools on each host. This job is
executed immediately, and only once, on all the hosts contained in the Linux
Patching group.
Package Information: This job collects the metadata information of each package
contained in the selected RPM repositories. This job is executed daily.
Note:
Before patching your Linux hosts, ensure that the Enterprise Manager user has
the EM_PATCH_DESIGNER role and the OPERATOR_ANY_TARGET privilege. If
the Enterprise Manager user does not have these, ensure that the super user
grants them.
1. In Cloud Control, from the Enterprise menu, select Provisioning and Patching,
then select Linux Patching.
2. On the Linux Patching page, in the Compliance Report section, select the Linux
patching group that you want to patch, then click Schedule Patching.
3. On the Package Repository page, in the LINUX Distribution section, select the tool
that you want to use to update the RPMs. Yellowdog Updater, Modified (yum) and
up2date are two commonly used tools to patch Linux hosts.
Note:
If the Linux host to be patched is running on Oracle Linux 6 (OL6), then you
must use the yum tool for patching. The up2date patching tool is not
supported for this Linux version. If you do not use the yum tool in this
scenario, the patching process fails on the Configure Host For Patching step with
the following error:
You are not selecting 'yum' as the tool to update the RPMs
in this system. 'yum' is the only supported tool for
updating RPMs in Oracle Linux 6 operating system
(Only if you have selected yum as the patching tool) Ensure that you select the
patching mode that you want to use. Select Package update and new package
installation if you plan to update the existing packages, as well as install new
packages. Select Package update only if you plan to only update the existing
packages, and not install any new packages.
In the Stage Location section, specify the location where you want the Linux
patching configuration and log files to be created.
In the Package Repository section, select the RPM repositories that you want to use.
In the Check GPG Signatures section, select Check GPG signatures to ensure that
yum or up2date performs a GPG signature check on the packages obtained from
the specified repositories. Sometimes, yum or up2date may require a public GPG
key to verify the packages obtained from the repositories. This key may not be
previously imported into the RPM database. To ensure that this key is imported,
select Import GPG key, then specify the GPG Key URL.
In the Advanced Options section, by default, the Hide obsolete updates option is
selected. Selecting this option hides the obsolete packages on the Select Updates
page. If you want to view these packages on the Select Updates page, ensure that
you deselect this option.
(Only if you have selected yum as the patching tool) In the Advanced Options
section, select one of the following patch application modes:
• Most suitable architecture, if you want yum to install the latest version of the
selected package, or update the existing version of the package to the latest
version, for the suitable RPM architectures that are installed on the Linux hosts
that you are patching.
If you select this option, Cloud Control runs the following yum command:
yum install|update packagename
• Specific architecture, if you want yum to install the latest version of the selected
package, or update the existing version of the package to the latest version, on
only those Linux hosts that have the RPM architecture of the selected package.
If you select this option, Cloud Control runs the following yum command:
yum install|update packagename.arch
• Specific version and architecture, if you want yum to install only the specific
version of the package selected on the Select Updates page, or update the
existing version of the package to this specific version, on only those Linux
hosts that have the RPM architecture of the selected package.
If you select this option, Cloud Control runs the following yum command:
yum install|update epoch:packagename-ver-rel.arch
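For illustration, with a hypothetical package, the three options above correspond to
commands of the following form (the package name, version, epoch, and architecture
are examples only):
Most suitable architecture: yum update openssl
Specific architecture: yum update openssl.x86_64
Specific version and architecture: yum update 0:openssl-1.0.1e-57.el6.x86_64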
Click Next.
Note:
If the Hide obsolete updates option was selected in the previous step, the
values for Total packages available and Total packages available in this
view may be different. This difference corresponds to the number of obsolete
packages present in the repositories.
Click Next.
5. On the Select Hosts page, select the Linux hosts to be updated. You can also select a
group by changing the target type to group.
By default, every discovered Linux host is displayed on this page, and can be
selected. However, if you want only those hosts that have an older version of at
least one of the packages (that you selected for the update operation in the previous
step) to be displayed on this page, run the following command:
$<OMS_HOME>/bin/emctl set property -name 'oracle.sysman.core.ospatch.filter_uptodate_hosts' -value 'true'
Click Next.
6. On the Credentials page, enter the credentials to be used for the updates.
Click Next.
7. On the Pre/Post script page, enter the scripts that need to be executed before/after
the patching process, if any.
Click Next.
8. On the Schedule page, enter the details of the patching schedule that must be used.
Click Next.
1. In Cloud Control, from the Enterprise menu, select Provisioning and Patching,
then select Procedure Library.
2. On the Deployment Procedure Manager page, in the Procedure Library tab, select
Patch Linux Hosts, then click Launch.
3. On the Package Repository page, in the LINUX Distribution section, select the tool
that you want to use to update the RPMs. Yellowdog Updater, Modified (yum) and
up2date are two commonly used tools to patch Linux hosts.
Note:
If the Linux host to be patched is running on Oracle Linux 6 (OL6), then you
must use the yum tool for patching. The up2date patching tool is not
supported for this Linux version. If you do not use the yum tool in this
scenario, the patching process fails on the Configure Host For Patching step with
the following error:
You are not selecting 'yum' as the tool to update the RPMs
in this system. 'yum' is the only supported tool for
updating RPMs in Oracle Linux 6 operating system
(Only if you have selected yum as the patching tool) For the tool operation mode,
ensure that you select Package update and new package installation. Since this
method of patching Linux hosts without using a Linux patching group is meant for
emergencies and is not based on a compliance report, you can only use it to install
new packages, and not update existing packages.
In the Stage Location section, specify the location where you want the Linux
patching configuration and log files to be created.
In the Package Repository section, select the RPM repositories that you want to use.
In the Check GPG Signatures section, select Check GPG signatures to ensure that
yum or up2date performs a GPG signature check on the packages obtained from
the specified repositories. Sometimes, yum or up2date may require a public GPG
key to verify the packages obtained from the repositories. This key may not be
previously imported into the RPM database. To ensure that this key is imported,
select Import GPG key, then specify the GPG Key URL.
In the Advanced Options section, by default, the Hide obsolete updates option is
selected. Selecting this option hides the obsolete packages on the Select Updates
page. If you want to view these packages on the Select Updates page, ensure that
you deselect this option.
(Only if you have selected yum as the patching tool) In the Advanced Options
section, select one of the following patch application modes:
• Most suitable architecture, if you want yum to install the latest version of the
selected package, or update the existing version of the package to the latest
version, for the suitable RPM architectures that are installed on the Linux hosts
that you are patching.
If you select this option, Cloud Control runs the following yum command:
yum install|update packagename
• Specific architecture, if you want yum to install the latest version of the selected
package, or update the existing version of the package to the latest version, on
only those Linux hosts that have the RPM architecture of the selected package.
If you select this option, Cloud Control runs the following yum command:
yum install|update packagename.arch
• Specific version and architecture, if you want yum to install only the specific
version of the package selected on the Select Updates page, or update the
existing version of the package to this specific version, on only those Linux
hosts that have the RPM architecture of the selected package.
If you select this option, Cloud Control runs the following yum command:
yum install|update epoch:packagename-ver-rel.arch
Click Next.
Note:
If the Hide obsolete updates option was selected in the previous step, the
values for Total packages available and Total packages available in this
view may be different. This difference corresponds to the number of obsolete
packages present in the repositories.
Click Next.
5. On the Select Hosts page, select the Linux hosts to be updated. You can also select a
group by changing the target type to group.
By default, every discovered Linux host is displayed on this page, and can be
selected. However, if you want only those hosts that have an older version of at
least one of the packages (that you selected for the update operation in the previous
step) to be displayed on this page, run the following command:
$<OMS_HOME>/bin/emctl set property -name 'oracle.sysman.core.ospatch.filter_uptodate_hosts' -value 'true'
Click Next.
6. On the Credentials page, enter the credentials to be used for the updates.
Click Next.
7. On the Pre/Post script page, enter the scripts that need to be executed before/after
the patching process, if any.
Click Next.
8. On the Schedule page, enter the details of the patching schedule that must be used.
Click Next.
1. In Cloud Control, from the Enterprise menu, select Provisioning and Patching,
then select Linux Patching.
4. On the Create Configuration File Channel page, enter a unique channel name and
description for the channel, and click OK.
You will see a confirmation message mentioning that a new configuration file
channel is created.
1. In Cloud Control, from the Enterprise menu, select Provisioning and Patching,
then select Linux Patching.
3. In the Configuration Files tab, select the channel that you want to upload
configuration files to, then click Upload Configuration Files.
4. Select an appropriate upload mode. You can either upload files from local host
(where the browser is running) or from a remote host (a Management Agent
should be installed on that host and the Management Agent must be
communicating with the OMS).
5. In the File Upload section, enter the file name, path where the file will be deployed
on the target host, and browse for the file on the upload host.
6. To upload from a remote machine, click Upload from Agent Machine. Click
Select Target and select the remote machine.
Before browsing for the files on this machine, set the preferred credentials for this
machine.
You will see a confirmation message that states that files have been uploaded.
1. In Cloud Control, from the Enterprise menu, select Provisioning and Patching,
then select Linux Patching.
3. In the Configuration Files tab, select the source channel, and click Import Files.
5. From the source channel section, select the files and copy them to the target
channel section. Click OK.
You will see a confirmation message stating that the selected files have been
imported successfully.
• Ensure that the privileged patching user has write permission on the target
machine location where each configuration file will be staged, and has SUDO
privileges too.
• Ensure that there is at least one channel with some files uploaded.
1. In Cloud Control, from the Enterprise menu, select Provisioning and Patching,
then select Linux Patching.
3. In the Configuration Files tab, select the source channel, and click Deploy Files.
4. In the wizard that appears, select the files you want to deploy, and click Next.
5. Click Add to select the targets where you want to deploy the files.
7. Enter the Pre/Post scripts you want to run before or after deploying the files.
A deploy job is submitted. Follow the job's link until it completes successfully.
1. In Cloud Control, from the Enterprise menu, select Provisioning and Patching,
then select Linux Patching.
3. In the Configuration Files tab, select the channel, and click Delete. Click Yes.
You will see a confirmation message stating that the channel was successfully
deleted.
Table 42-2 Oracle Grid Infrastructure and Oracle RAC Configuration Support
• Ensure that you have View privileges on the Linux host comprising the patching
group.
1. In Cloud Control, from the Enterprise menu, select Provisioning and Patching,
then select Linux Patching.
2. On the Compliance Home page, from the Related Links section, click Compliance
History.
3. On the Compliance History page, the Groups table lists all the accessible Linux
patching groups and the number of hosts corresponding to each group.
4. If there are multiple Linux patching groups, the Compliance History page displays
the historical data (for a specific time period) for the first group that is listed in that
table.
5. To view the compliance history of a Linux patching group, click the View icon
corresponding to that group.
Note:
By default, the compliance data that is displayed is retrieved from the last
seven days. To view compliance history of a longer time period, select an
appropriate value from the View Data drop-down list. The page refreshes to
show compliance data for the selected time period.
1. In the Patch Linux Hosts Wizard, provide the required details in the interview
screens, and click Finish on the Review page.
2. A deployment procedure is submitted to update the host. Check if all the steps
finished successfully.
Note:
42.7.3.1 Prerequisites for Rolling Back Linux Patch Update Sessions or Deinstalling
Packages
Before rolling back patch update sessions or deinstalling packages, meet the following
prerequisites:
• Ensure that the lower versions of the packages are present in the RPM repository.
1. In Cloud Control, from the Enterprise menu, select Provisioning and Patching,
then select Linux Patching.
2. On the Linux Patching page, in the Compliance Report section, select a group, and
click Undo Patching.
• Rollback Last Update Session reverts the effects of the previous patch update
session.
4. Click Next.
5. Provide the required details in the wizard, and on the Review page, click Finish.
7. Examine the job submitted to see if all the steps are successful.
• Ensure that Apache is installed, and listening on port 80. To verify this, you can try
connecting to the URL: http://host.
For example: http://h1.example.com. If this works, then it is confirmed that
Apache is installed and listening on port 80.
• Ensure that the metadata files are created by running the yum-arch and createrepo
commands (sample commands appear after this list).
• Ensure that a Management Agent is installed on the RPM repository host, and that
the Management Agent is communicating with the OMS.
• Ensure that the Enterprise Manager User logs in with Super User privileges for
registering a custom channel.
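For reference, the metadata can be generated with commands like the following sketch, run on the RPM repository host. The channel directory shown here is an assumption; replace it with your actual channel directory under /var/www/html.
cd /var/www/html/custom_channel
yum-arch .
createrepo .
Here, yum-arch builds the headers directory (older metadata format) and createrepo builds the repodata directory for the channel.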
1. In Cloud Control, from the Setup menu, select Provisioning and Patching, then
select Linux Patching.
2. On the Patching Setup page, in the Linux Patching Setup tab, click Manage RPM
Repository.
5. Click Browse and select the host where the custom RPM repository was set up.
6. Enter the path where RPM repository resides. The directory location must start
with /var/www/html/.
7. Click OK.
3. Ensure that the stage location of the source host does not have a directory named
createLikeSrc, and the Directory for the Target Channel does not exist.
4. Ensure that Apache is installed, and listening on port 80. To verify this, you can
try connecting to the URL: http://host.
For example: http://h1.example.com. If this works, then it is confirmed that
Apache is installed and listening on port 80.
5. Ensure that the Enterprise Manager User logs in to the OMS with Super User
privileges.
1. In Cloud Control, from the Setup menu, select Provisioning and Patching, then
select Linux Patching.
2. On the Patching Setup page, in the Linux Patching Setup tab, click Manage RPM
Repository.
3. On the Manage RPM Repository page, select the source channel you want to clone,
and click Create Like.
4. Enter the credentials to use for the source channel. The credentials must have both
read and write access.
7. Enter the directory location of the target channel. This directory should be
under /var/www/html.
8. Enter the credentials to use for the target channel. This credential should have both
read and write access.
9. Click OK.
3. Ensure that the stage location of the source host does not have a directory named
copyPkgsSrc, and the stage location of the Target Host does not have a directory
named copyPkgsDest.
4. Ensure that Apache is installed, and listening on port 80. To verify this, you can
try connecting to the URL: http://host.
For example: http://h1.example.com. If this works, then it is confirmed that
Apache is installed and listening on port 80.
5. Ensure that the Enterprise Manager User logs in to the OMS with Super User
privileges.
1. In Cloud Control, from the Setup menu, select Provisioning and Patching, then
select Linux Patching.
2. On the Patching Setup page, in the Linux Patching Setup tab, click Manage RPM
Repository.
3. On the Manage RPM Repository page, select the source channel, and click Copy
Packages.
5. From the source channel section, select and copy the packages to the target channel
section.
6. Enter credentials for the source and target channels. These credentials should have
read/write access to the machines.
7. Click OK.
A Copy Packages job is submitted. Follow the job until it completes successfully.
2. Ensure that the stage location of the source host does not have a directory named
addPkgsSrc, and the stage location of the destination channel does not have a
directory named addPkgsDest.
2. On the Patching Setup page, in the Linux Patching Setup tab, click Manage RPM
Repository.
3. On the Manage RPM Repository page, select the channel name where you want to
add the RPM, and click Add.
4. Select the source target name and the credentials to be used for the host. The
credential you use must have write access on the emd_emstagedir directory present
on the source host.
5. On the Upload Files section, click the search icon to browse for the RPM files.
6. Select a normal host credential that has write access on the selected channel.
7. Select a privileged host credential that has write access on the selected channel,
and has SUDO privileges as root.
8. Click OK.
An Add Package job is submitted. Follow the job until it completes successfully.
2. Ensure that the Enterprise Manager User logs in to the OMS with Super User
privileges.
1. In Cloud Control, from the Setup menu, select Provisioning and Patching, then
select Linux Patching.
2. On the Patching Setup page, in the Linux Patching Setup tab, click Manage RPM
Repository.
3. On the Manage RPM Repository page, select the channel name you want to delete,
and click Delete.
4. If you want to delete the packages from the RPM Repository machine, select the
check box and enter the credentials for the RPM Repository machine. Click Yes.
5. If you did not select the option to delete the packages from the RPM Repository
machine, you will see a confirmation message stating Package Channel <channel name>
successfully deleted. If you selected the Delete Packages option, a job is submitted to
delete the packages from the RPM Repository machine. Follow the job until it
completes successfully.
• Oracle Restart Homes (Grid Infrastructure for Standalone Server) and associated
databases
• Creating gold images using reference environments and versioning them for
additional changes.
• Deploying the image and switching associated targets from the old Oracle Home to
the new Oracle Home.
This maintenance activity can be performed as and when a new version of the gold
image is available.
'End-state definition' is the logical term; the physical software binary component that
represents the end state is called a gold image. ("Gold" is the qualifier signifying the
ideal standard or level for the software configuration.)
The following figure shows how a gold image can be created.
You can create an image for a database or grid infrastructure home using the emcli
command shown below. This image is stored in the Enterprise Manager Software
Library and used for provisioning.
Prerequisites
• The Enterprise Manager Software Library has been setup and configured.
• The database and cluster targets to be patched have been discovered in Enterprise
Manager.
• Reference environment (Database and Oracle Home) representing the target state
must have been discovered in Enterprise Manager. This reference environment is
required to create the gold image.
Verb
emcli db_software_maintenance -createSoftwareImage -
input_file="data:/home/user/input_rac"
where the input_rac file contains the variables explained below (a sample file appears
after the variable descriptions). For a complete description of the input variables for the
db_software_maintenance family of verbs, refer to the Enterprise Manager Command Line
Reference Guide.
• IMAGE_NAME: The name of the gold image. This name must be unique across
images.
• REF_TARGET_NAME: The Oracle home target that will be used to create this gold
image. This is the database or Grid Infrastructure Oracle Home from the existing
environment on which the 11.2.0.4 PSU and all the one-off patches have been
applied. To find the reference target name, enter the following query on the
Enterprise Manager repository:
SELECT DISTINCT target_name FROM mgmt$target_properties WHERE target_name IN
(SELECT target_name FROM mgmt_targets WHERE target_type='oracle_home' AND
host_name='<Host Name of this Oracle Home>') AND property_name='INSTALL_LOCATION'
AND property_value='<path of Oracle Home>'
– <Named Credential> is the named credentials for the host on which the
reference Oracle Home is located. This user must be the owner of Oracle home.
– <Credential Owner> is the Enterprise Manager user who owns this Named
Credential.
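For illustration, a minimal input_rac file built from the variables described above might look like the following sketch. The values are placeholders, and REF_HOST_CREDENTIALS is shown as an assumed name for the property that carries the <Named Credential>:<Credential Owner> pair; confirm the exact property names in the command line reference mentioned above.
IMAGE_NAME=DB_11204_PSU_GOLD_IMAGE
REF_TARGET_NAME=OraDb11g_home1_2_host1.example.com
REF_HOST_CREDENTIALS=NC_HOST_CREDS:SYSMAN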
Sample Output
The following output is displayed:
'Create Gold Image Profile deployment procedure has been submitted successfully with
the instance name: 'CreateGoldImageProfile_SYSMAN_12_08_2015_05_09_AM' and
execution_guid='25F92D9A00164A45E053D903C40A9B4B
The operation is performed using a deployment procedure, and you must wait for it to
complete before performing the next steps. You can check the status of the gold image
creation by using the following command:
Verb
emcli db_software_maintenance -getImages
Sample Output
Image Id Image Name Description
Sample Output
POSITION  VERSION ID                        VERSION NAME             STATUS   DATE CREATED
1         277DF28F2D30393BE053D903C40AC610  CUSTOMER 112044 VERSION  ACTIVE   2015-12-22 05:54:45.0
          EXTERNAL ID: ORACLE:DEFAULTSERVICE:EM:PROVISIONING:1:CMP:COMP_COMPONENT:SUB_ORACLEDB:277DF28F2D2C393BE053D903C40AC610:0.1
          HASHCODE: C3448035451:B1400395227
2         277E39B74D684C6BE053D903C40A59EB  CUSTOMER 112047 VERSION  CURRENT  2015-12-22 06:14:40.0
          EXTERNAL ID: ORACLE:DEFAULTSERVICE:EM:PROVISIONING:1:CMP:COMP_COMPONENT:SUB_ORACLEDB:277E39B74D664C6BE053D903C40A59EB:0.1
          HASHCODE: C3448035451:B3881579444
TOTAL ROWS: 2
Verb
Syntax: emcli db_software_maintenance -subscribeTarget -
target_name="RACDB1" -target_type=rac_database -
image_id=<image_id>
where:
• target_name is the name of the RAC database target that needs to be patched.
• image_id is the ID of the gold image to which this target is to be patched. You can
get the image_id by running the emcli command specified in the Retrieving a List
of Available Gold Images section.
Sample Output
Target 'RACDB1' subscribed successfully.
Verb subscribeTarget completed successfully
Verb
emcli db_software_maintenance -getImageSubscriptions -
image_id=<image_id>
Sample Output
Total Rows:1
Verb
emcli db_software_maintenance -performOperation -name="Deploy
1120407 GI Home" -purpose=DEPLOY_GI_SOFTWARE -
target_type=input_file -target_list="CLUSTER1" -
normal_credential="NC_HOST_CREDS:TESTSUPERADMIN" -
privilege_credential="HOST_PRIV:TESTSUPERADMIN" -
inputfile="data:/usr/oracle/deploy.txt"
where:
• purpose: The standard purpose of the fleet operation (an example using
DEPLOY_DB_SOFTWARE appears after the sample output below). It can be one of the following:
– DEPLOY_DB_SOFTWARE
– DEPLOY_GI_SOFTWARE
• target_type: The type of target on which this operation is being performed. This can
be "rac_database" for RAC, "oracle_database" for single instance databases, or
"cluster" for cluster and Oracle Restart (SIHA).
• target_list: This is a comma-separated list of targets that need to be patched.
– A unique list of hosts based on this target list is displayed, and the stage of the
Oracle home software is started on those hosts.
– If targets running from the same Oracle home are provided in this list, the stage and
deploy operation is triggered only once, not once for each target.
– <Named Credential>: Named credential for the host where new Oracle home
will be deployed.
– <Credential Owner>: The Enterprise Manager user who owns this Named
Credential.
– <Named Credential>: Named credential for the host where new Oracle home
will be deployed.
– <Credential Owner>: The Enterprise Manager user who owns this Named
Credential.
These credentials are used to run scripts as root.
Sample Output
Processing target "CLUSTER1"...
Check Passed.
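For illustration, a similar deployment of database software (purpose DEPLOY_DB_SOFTWARE) to a RAC database target might look like the following sketch; the operation name, target name, and credential names are placeholders, not values from your environment.
emcli db_software_maintenance -performOperation -name="Deploy DB Software" -purpose=DEPLOY_DB_SOFTWARE -target_type=rac_database -target_list="RACDB1" -normal_credential="NC_HOST_CREDS:SYSMAN" -privilege_credential="HOST_PRIV:SYSMAN"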
Verb
emcli db_software_maintenance -performOperation -name="Update
Listener" -purpose=migrate_listener -target_type=oracle_database
-target_list="DB1" -normal_credential="NC_HOST_CREDS:SYSMAN" -
privilege_credential="HOST_PRIV:SYSMAN" [-start_schedule="<start time>"]
where:
• purpose: The standard task or purpose of the fleet operation. It can be:
– MIGRATE_LISTENER
• target_type: The type of target on which this operation is being performed. This can
be "rac_database" for RAC and "oracle_database" for single instance databases.
– <Named Credential>: Named credential for the host where new Oracle home
will be deployed.
– <Credential Owner>: The Enterprise Manager user who owns this Named
Credential.
– <Named Credential>: Named credential for the host where new Oracle home
will be deployed.
– <Credential Owner>: The Enterprise Manager user who owns this Named
Credential.
These credentials are used to run scripts as root.
Note: This is an optional parameter. If no date is provided, the fleet operation will
start immediately.
Sample Output
Processing target "CLUSTER1"...
Check Passed.
You can monitor the patch operation status with the following command:
Verb
emcli db_software_maintenance -performOperation -name="Update
Cluster" -purpose=UPDATE_GI -target_type=cluster -target_list=
CLUSTER1 -normal_credential="NC_HOST_CREDS:SYSMAN" -
privilege_credential="HOST_PRIV:SYSMAN" [-rolling=<true/false>]
where:
• purpose: The standard purpose of the fleet operation. In this example, it is UPDATE_GI.
• target_type: The type of target being provided in this operation, which can be
"rac_database" or "single instance" database.
– <Named Credential>: Named credential for the host where new Oracle home
will be deployed.
– <Credential Owner>: The Enterprise Manager user who owns this Named
Credential.
– <Named Credential>: Named credential for the host where new Oracle home
will be deployed.
– <Credential Owner>: The Enterprise Manager user who owns this Named
Credential.
These credentials are used to run scripts as root.
• rolling: This is an optional flag with a default value of true. The update
procedure works in "Rolling Patch" mode by default, but you can override this if
necessary (see the example after this list).
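For illustration, to override the default and run the update in non-rolling mode, the command shown above might be issued with -rolling=false, for example:
emcli db_software_maintenance -performOperation -name="Update Cluster" -purpose=UPDATE_GI -target_type=cluster -target_list=CLUSTER1 -normal_credential="NC_HOST_CREDS:SYSMAN" -privilege_credential="HOST_PRIV:SYSMAN" -rolling=false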
Sample Output
Processing target "CLUSTER1"...
Check Passed.
You can monitor the patch operation status with the following command:
specific node will continue to remain shut down after the cluster instance is
switched.
emcli db_software_maintenance -performOperation -name="Update
Cluster" -purpose=UPDATE_GI -target_type=cluster -
target_list=CLUSTER1 -
normal_credential="NC_HOST_CREDS:SYSMAN" -
privilege_credential="HOST_PRIV:SYSMAN" -rolling=true -
node_list="host1.us.oracle.com" -startupDatabase=false
• This step will switch the instances RACDB_112_1 and RACDB_121_1 to the new
home and restart them.
For example, consider RAC databases RACDB_112 and RACDB_121 are running
on this cluster. The instances RACDB_112_1 and RACDB_121_1 running on this
specific node will continue to remain shut down after the cluster instance is
switched.
Verb
emcli db_software_maintenance -performOperation -name="Update
RAC DB" -purpose=UPDATE_RACDB -target_type=rac_database -
target_list= RACDB -normal_credential="NC_HOST_CREDS:SYSMAN" -
privilege_credential="HOST_PRIV:SYSMAN" -rolling=true -
node_list="host1.us.oracle.com"
where:
• purpose: The standard purpose of the fleet operation. In this example, it is UPDATE_RACDB.
• target_type: The type of target being provided in this operation, which can be
"rac_database" or "single instance" database.
• target_list: This is a comma-separated list of targets that need to be patched.
– A unique list of hosts based on this target list is displayed, and the stage of the
Oracle Home software is started on those hosts.
– If targets running from the same Oracle home are provided in this list, the stage
and deploy operation is launched only once, not once for each target.
– <Named Credential>: Named credential for the host where new Oracle home
will be deployed.
– <Credential Owner>: The Enterprise Manager user who owns this Named
Credential.
– <Named Credential>: Named credential for the host where new Oracle home
will be deployed.
– <Credential Owner>: The Enterprise Manager user who owns this Named
Credential.
These credentials are used to run scripts as root.
• node_list: This is a comma separated list of hosts on which the instances need to be
updated.
For example, if RACDB is running on a 4-node cluster (host1, host2, host3, and
host4) and you choose to update the instances on only 2 hosts at a time, specify the
value of this parameter as node_list="host1, host2" (see the example command
after the sample output below).
Note: This is an optional parameter. If no date is provided, the fleet operation will
start immediately.
Sample Output
Processing target "RACDB"...
Check Passed.
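For illustration, the two-hosts-at-a-time scenario described above might be run with a command like the following sketch; the target, credential, and host names are placeholders.
emcli db_software_maintenance -performOperation -name="Update RAC DB" -purpose=UPDATE_RACDB -target_type=rac_database -target_list=RACDB -normal_credential="NC_HOST_CREDS:SYSMAN" -privilege_credential="HOST_PRIV:SYSMAN" -rolling=true -node_list="host1, host2"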
Verb
emcli db_software_maintenance -performOperation -name="Rollback
RAC DB" -purpose=ROLLBACK_RACDB -target_type=rac_database -
target_list= RACDB -normal_credential="NC_HOST_CREDS:SYSMAN" -
privilege_credential="HOST_PRIV:SYSMAN" [-rolling=true/false] [-
node_list="host1.us.oracle.com"]
where:
• purpose: There are standard purposes that can be performed by Fleet Operations
which can be:
– ROLLBACK_DB
– ROLLBACK_RACDB
– ROLLBACK_GI
• target_type: The type of target being provided in this operation, which can be
"rac_database" or "oracle_database".
• target_list: This is a comma-separated list of targets on which the rollback is performed.
– A unique list of hosts based on this target list is displayed, and the stage of the
Oracle home software is started on those hosts.
– If targets running from the same Oracle home are provided in this list, the stage
and deploy operation is started only once, not once for each target.
– <Named Credential>: Named credential for the host where new Oracle home
will be deployed.
– <Credential Owner>: The Enterprise Manager user who owns this Named
Credential.
– <Named Credential>: Named credential for the host where new Oracle home
will be deployed.
– <Credential Owner>: The Enterprise Manager user who owns this Named
Credential.
These credentials are used to run scripts as root.
• rolling: By default, rollback is performed in a rolling fashion. This flag is used when
the current Oracle home has patches that were applied in non-rolling mode
(for example, OJVM patches) and need to be rolled back.
• node_list: This is a comma separated list of hosts on which the instances need to be
updated.
For example, if RACDB is running on a 4-node cluster (host1, host2, host3, and
host4) and you choose to update the instances on only 2 hosts at a time, specify the
value of this parameter as node_list="host1, host2"
Sample Output
Processing target "RACDB"...
Checking if target is already running from the current version of the image...
Check Passed.
This chapter explains how you can use Enterprise Manager Cloud Control 13c to
apply Quarterly Full Stack Download Patch (QFSDP) on Exadata and Exalytics targets.
In particular, this chapter covers the following:
Last QFSDP Check: The QFSDP check happens at an interval of 24 hours. The time when the last check
happened is recorded in this section.
Targets: All the targets configured on the Exadata system and their status are recorded in this
section.
Targets Needing Updates: The targets that are part of the system but are not at the latest patch level are the
ones that need to be patched. The information about the number of targets that must
be patched to bring them up to the recommended patch level is available here.
Updates in Progress: When you deploy or roll back a patch, the deployment procedure is submitted and
the updates are said to be in progress. The number of targets that are involved in the
update process is mentioned here.
Components: The Software Update page enables you to update the following components of the
Exadata targets:
• Oracle Database Server
• Oracle Exadata Storage Server
• Oracle Infiniband Switch
For any of the selected targets, you can view the following details:
• Last QFSDP Applied: This is the latest patch applied on the selected component.
• Update Status: The different statuses of the targets. For details, see Table 44-2.
Status Description
This symbol implies that the selected target(s) of the components require updates to bring
them to the recommended patch level.
This symbol implies that the patch is currently being deployed, and the procedure is in
progress. If it is successful, a tick mark appears against the respective targets; if not, a
cross mark appears.
This symbol implies that the selected target of the component is up-to-date with the latest
patch.
This symbol implies that patching failed, and that the patch could not be applied on the
designated targets.
For a selected component, in addition to details like the target name, status, version,
update status, and when the last analyze and deploy happened, you have the
following details:
• Selected Patch Details: Details such as the bugs fixed and knowledge articles related to
the update are displayed here.
– Summary: All the information that you have entered is available in the
summary section. After the Analyze step completes, you can view the patching
steps. Click Download to download and view the Oplan-based steps that are
generated.
– Problems to Resolve: Any issue encountered during analysis and deploy phase
is recorded in this section.
– Log Files and Diagnostics: When the Analyze, Deploy, or Rollback phase is in
progress, a list of static links are displayed. Click on the link to fetch the on-
demand files and view the details of the log files and trace files that contain
information about the patching steps performed on the targets.
Once the patching process is complete, the log files are automatically fetched
and stored in the OMS Repository. Click the link to download the zip file. The
zip contains log and trace files with information about the patching steps
performed on the targets.
To set up the update options for a particular Exadata component, follow these steps:
3. On the Exadata target home page, from the Database Machine menu, select
Software Update.
5. Click Settings.
• Operation Mode: (only for Oracle Exadata Storage Server and Oracle Infiniband
Switch targets) Select the mode of patching that you want to use. You can choose
to patch the component targets in rolling mode or parallel mode. Note that for
an Infiniband Switch target, only the rolling mode of patching is supported.
Out of the component targets selected for patching, if you want to patch a single
target at a time, select Rolling. This is the default option, and it involves very
little downtime. For information on how to do so, see Pausing the Patching
Operation While Patching Targets in Rolling Mode.
However, if you want to simultaneously patch all the component targets
selected for patching, select Parallel. This option involves downtime, as all the
component targets are shut down for a significant period. However, this process
consumes less time, as all the component targets are patched simultaneously.
• Where to Stage: If you want the wizard to stage the patches from Software
Library to a temporary location before the patch is applied on the component
target, select Yes for Stage Patches, then specify the location where you want
the patches to be staged. However, if you have already manually staged the
patches that you want to apply, select No for Stage Patches, then specify the
location where you staged the patches. If the stage location is a shared location,
select Shared Location.
To manually stage QFSDP, download the patch, navigate to the location (parent
directory) where you want to stage the patch, create a subdirectory with the
same name as the patch zip file, then extract the contents of the patch zip file
into this subdirectory.
For example, if you downloaded patch 699099.zip, and the stage location,
which is the parent directory, is /u01/app/example/em/stagepatch, then,
in this parent directory, create a subdirectory titled 699099 and extract the
contents of the zip file there. Specify /u01/app/example/em/stagepatch as the
stage location. (A sketch of the staging commands appears after this list.)
• Customization: For patching Oracle Database Server targets, you can select the
following options:
– Restart target after deployment: Select this option to restart the Oracle
Database Server target after applying the Quarterly Full Stack Download
Patch (QFSDP).
– Run script before deployment: Select this option to run a script on the
Oracle Database Server target before deploying the QFSDP. Specify the
location where the script is hosted.
– Run script after deployment: Select this option to run a script on the Oracle
Database Server target after deploying the QFSDP. Specify the location
where the script is hosted.
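For illustration, manual staging of the example patch above might be performed with commands like the following sketch; the path of the downloaded patch zip file is an assumption.
mkdir -p /u01/app/example/em/stagepatch/699099
cd /u01/app/example/em/stagepatch/699099
unzip /tmp/699099.zip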
7. Click Apply to save the settings. Once saved, the settings will be available for the
next set of patching operations.
2. Click the name of the Exadata target that you want to patch.
3. On the Exadata target home page, from the Database Machine menu, select
Software Update.
5. Select the targets that you want to patch, then click Select QFSDP.
The Select Quarterly Full Stack Download Patch window displays a list of QFSDPs
that can be applied on the Oracle Database Server target. The recommended (latest)
patches are marked with a tick. Select the QFSDP that you want to apply, then click
Select.
6. Click Analyze.
Specify a schedule for the analysis. If you haven't specified any schedule, the
Deployment Procedure will run immediately.
Verify the provided options. If you haven't already specified the options (as
described in Configuring Options for Exadata Component Software Updates),
provide the required options.
Click Submit.
7. Once the analysis is complete, click Deploy. Deploy uses the options selected for
Analyze. Once deployed, the Last QFSDP Applied field reflects the applied update,
and the current revision shows the latest version.
2. Click the name of the Exadata target that you want to patch.
3. On the Exadata target home page, from the Database Machine menu, select
Software Update.
5. Select the targets that you want to patch, then click Select QFSDP.
The Select Quarterly Full Stack Download Patch window displays a list of QFSDPs
that can be applied on the Oracle Database Server target. The recommended (latest)
patches are marked with a tick. Select the QFSDP that you want to apply, then click
Select.
6. Click Analyze.
Specify a schedule for the analysis. If you haven't specified any schedule, the
Deployment Procedure will run immediately.
Verify the provided options. If you haven't already specified the options (as
described in Configuring Options for Exadata Component Software Updates),
provide the required options.
Click Submit.
Note:
Target-level patch monitoring has been enabled for Oracle Exadata Storage
Server components of an Exadata target. When there are multiple targets
selected for patching, monitoring the patching status of individual targets is
possible, and does not depend on the deployment procedure completion or
failure.
For example, if host 1, host 2, and host 3 are being patched, and the
deployment procedure fails because patching host 3 wasn't successful, target-
level patch monitoring still allows you to monitor the patching statuses of host 1
and host 2 independently.
Note:
2. Click the name of the Exadata target that you want to patch.
3. On the Exadata target home page, from the Database Machine menu, select
Software Update.
5. Select the targets that you want to patch, then click Select QFSDP.
The Select Quarterly Full Stack Download Patch window displays a list of QFSDPs
that can be applied on the Oracle Database Server target. The recommended (latest)
patches are marked with a tick. Select the QFSDP that you want to apply, then click
Select.
6. Click Analyze.
Specify a schedule for the analysis. If you haven't specified any schedule, the
Deployment Procedure will run immediately.
Verify the provided options. If you haven't already specified the options (as
described in Configuring Options for Exadata Component Software Updates),
provide the required options.
Click Submit.
Note:
You cannot select the patch that you want to roll back; you can only
roll back the latest QFSDP applied using Cloud Control.
2. Click the name of the Exadata target on which the Quarterly Full Stack Download
Patch (QFSDP) that you want to roll back is deployed.
3. On the Exadata target home page, from the Database Machine menu, select
Software Update.
5. (Applicable for Oracle Exadata Storage Server and Oracle Infiniband Switch only) From
the Rollback menu, select Prerequisite Only to analyze whether the patch is suitable for a
rollback. If the analysis is successful, you can proceed to the next step.
6. From the Rollback menu, select Rollback to roll back to the last deployed patch.
Specify a schedule for the rollback operation. Provide the required options, then
click Rollback.
1. Identify the patches that you want to apply, using patch recommendations, or by
searching for the patches.
For information on how to search for patches, see Searching for Patches on My
Oracle Support and Searching for Patches in Oracle Software Library.
2. Create, analyze, and deploy a patch plan containing the required patches, as
described in Creating, Analyzing, Preparing, and Deploying Patch Plans.
2. Click the name of the Oracle Exalytics target for which you want to set up the patch
options.
3. On the Oracle Exalytics target home page, from the Exalytics System menu, select
Software Update.
• Operation Mode: Select the mode of patching that you want to use. You can
choose to patch the component targets in rolling mode or parallel mode.
Out of the component targets selected for patching, if you want to patch a single
target at a time, select Rolling. This is the default option, and it involves very
little downtime. While patching your component targets in rolling mode, you
can choose to pause the execution of the patching deployment procedure after
each node is patched. For information on how to do so, see Pausing the Patching
Operation While Patching Targets in Rolling Mode.
However, if you want to simultaneously patch all the component targets
selected for patching, select Parallel. This option involves downtime, as all the
component targets are shut down for a significant period. However, this process
consumes less time, as all the component targets are patched simultaneously.
• Stage Patches: If you want the wizard to stage the patches from Software
Library to a temporary location before the patch is applied on the component
target, select Yes for Stage Patches, then specify the location where you want
the patches to be staged. However, if you have already manually staged the
patches that you want to apply, select No for Stage Patches, then specify the
location where you staged the patches. If the stage location is a shared location,
select Shared Location.
To manually stage the patches, download the patch, navigate to the location
(parent directory) where you want to stage the patch, create a subdirectory with
the same name as the patch zip file, then extract the contents of the patch zip file
into this subdirectory.
• Host Credential (only for Compute Node): Provide the required credentials.
You can choose to use preferred credentials, or override them with different
credentials.
• Oracle Home Credentials (only for BI Instance): Provide the required credentials.
You can choose to use preferred credentials, or override them with different
credentials.
– Restart target after deployment: Select this option to restart the Oracle
Database Server target after applying the Quarterly Full Stack Download
Patch (QFSDP).
– Run script before deployment: Select this option to run a script on the
Oracle Database Server target before deploying the QFSDP. Specify the
location where the script is hosted.
– Run script after deployment: Select this option to run a script on the Oracle
Database Server target after deploying the QFSDP. Specify the location
where the script is hosted.
Note:
When you deploy a Patch Set Update (PSU) on an Oracle Exalytics compute
node, the Oracle Integrated Lights Out Manager (ILOM) firmware installed on
the compute node is also patched by default. You do not need to patch the
ILOM firmware separately.
2. Click the name of the Oracle Exalytics target that you want to patch.
3. On the Oracle Exalytics target home page, from the Exalytics System menu, select
Software Update.
5. Select the compute nodes that you want to patch, then select Add Patch.
The Select Patch window displays a list of Patch Set Updates that can be applied on
the compute nodes. The recommended (latest) patches are marked with a tick.
Select the Patch Set Update (PSU) that you want to apply, then click Select.
6. Click Analyze.
Specify a schedule for the analysis. Verify the provided options. If you haven't
already specified the options (as described in Configuring the Options for Oracle
Exalytics Updates), provide the required options.
Click Submit.
Note:
When you deploy a Patch Set Update (PSU) on an Oracle Exalytics compute
node, the Oracle Integrated Lights Out Manager (ILOM) firmware installed on
the compute node is also patched by default. You do not need to patch the
ILOM firmware separately.
2. Click the name of the Oracle Exalytics target that you want to patch.
3. On the Oracle Exalytics target home page, from the Exalytics System menu, select
Software Update.
5. Select the BI Instance that you want to patch, then select Select PSU.
The Select Patch Set Updates window displays a list of Patch Set Updates that can
be applied on the BI Instances. Select the Patch Set Update (PSU) that you want to
apply, then click Select.
6. Click Analyze.
Specify a schedule for the analysis. Verify the provided options. If you haven't
already specified the options (as described in Configuring the Options for Oracle
Exalytics Updates), provide the required options.
Click Submit.
• Managing Compliance
This chapter explains how Oracle Enterprise Manager Cloud Control (Cloud Control)
simplifies the monitoring and management of the deployments in your enterprise.
This chapter covers the following:
• Overview of Parsers
• Overview of Relationships
Middleware such as WebLogic Server:
• Node Manager, machine, Web service, and Web service port configurations
• Resource Adapter, including outbound
• Web and EJB modules
• Server information
• JDBC Datasource and Multi Datasource
• Resource usage
• Virtual hosts
• Startup and Shutdown classes
• Jolt Connection Pool
• Work Manager
• JMS Topic, Queue and Connection Factory
• Network channels
VM Server Pool:
• Server Pool configuration details (total disk space and memory available, for example)
• VM Guest member details
• VM Server member details
Client:
• Hardware
• Operating system (includes properties, file systems, patches)
• Software registered with the operating system
• Network data (includes latency and bandwidth to the Web server)
• Client-specific data that describes configuration for the browser used to access the client configuration collection applet
• Other client-oriented data items
Non-Oracle Systems:
• Hardware details including vendor, architecture, CPU, and I/O device information.
• Operating system details including name, version, software and package lists, kernel parameters, and file system information.
• OS Registered software including product name, vendor, location, and installation time.
• Compare configurations
• View latest and saved configurations as well as inventory and usage details
1. Expand Search.
Note:
The search name and owner fields recognize containment, so you can specify a
text string as a partial name to find all searches where the name or owner
contains the string.
6. Select the mode that was used to create the search, such as All, Modeler, or SQL.
1. On the Configuration Search library page, select a configuration search from the
table, and then click Run to execute the search.
2. The Edit/Run Search page displays the search parameters applied in the search
execution, the number of selected configuration items, and the search results.
3. You can edit the configuration items by clicking on the edit icon or the link next to
it. The Apply Configuration Items dialog box appears. You can search for
configuration items in the left panel, and then refine the configuration items using
the right panel. Once you have selected the configuration items that you want to
apply to the configuration search, click Apply. Otherwise, click Reset to restore the
previous configuration items, or click Cancel.
4. You can also export, print, and detach the configuration search page.
2. The Run/Edit Search page displays the search parameters and the results of the
search.
4. You can edit the configuration items by clicking on the edit icon or the link next to
it. The Apply Configuration Items dialog box appears. You can search for
configuration items in the left panel, and then refine the configuration items using
the right panel. Once you have selected the configuration items that you want to
apply to the configuration search, click Apply. Otherwise, click Reset to restore the
previous configuration items, or click Cancel.
5. Click Save to overwrite the existing search. Click Save As to save the edited
search under a new name. If you are working with an Oracle-provided search, use
Save As.
1. From the Enterprise menu, select Configuration, and then select Search.
2. On the Configuration Search Library page, select Create..., and then select
Configuration Search.
3. On the Configuration Search page, in the New Search section, select the target
type of the configuration search. The table containing the targets gets refreshed to
display the target type that you have selected.
b. In the Apply Configuration Items dialog box, in the left panel, in the search
field, specify the name of the configuration item. You can view a list of the
configuration items in this panel in a flat or hierarchical view.
c. Select a configuration item from the left panel. You can refine the selected
configuration item using the search criteria options in the right panel. Ensure
that you deselect the configuration items in the right panel that you do not
want to see in the search results.
d. Click Advanced Search Options if you want to further refine your search.
This search method enables you to view the configuration items in groups,
and add configuration items to each group. This search also provides OR and
AND operators, unlike the Simple search which just gives you the option of
OR operator.
You can further refine each group, by clicking the refine icon next to the
group name. This enables you to select any of the following conditions:
None - Displays results based on specified property values.
Exists - Displays targets that contain the configuration item identified by the
specified property values. For example, display database instances that
contain patch 13343438.
Does not exist - Displays targets that do not contain the configuration item
identified by the specified property values. For example, display database
instances that do not contain patch 13343438.
The first selection option returns not only matching entities but also actual
property values. The rest return only the matching entities.
e. Click on Related Target Types link to associate the target type with other
targets. For example, you may want to know the Management Agent that is
monitoring a host you have selected.
f. Click Apply to add the configuration items to the configuration search. Click
Reset if you want to revert to the previously saved configuration items.
The number of configuration items selected will now be displayed next to
Configuration Items. The configuration items get added to New Search
section and are displayed in the table against the targets.
5. As you add criteria, click Search to see the results. Continue to revise the search
by adding and removing filters until the results are satisfactory.
Notice in the search results table that the column names are a concatenation of the
filters you specify and elect to display. So, for example, if you filter on hardware
vendor name for target type host, the column name in the search results table
reads Host Hardware Vendor Name.
6. Click Advanced if you want to specify more specific search criteria such as target
name, member of, and the host that the target is on. You can also choose to Reset
all the changes that you have made.
7. Click Save As. In the Create Configuration Search dialog box, specify a name for
the configuration search, and click OK.
1. From the Enterprise menu, select Configuration, and then select Search.
2. On the Configuration Search Library page, search for and select the configuration
search that you want to copy from the table.
4. In the Copy Configuration Search dialog box, specify a name for the new
configuration search that you are creating.
5. Click OK.
6. Select the new row in the table and click Edit. Make the desired changes to the
search parameters and then, save the configuration search.
1. From the Enterprise menu, select Configuration, and then select Search.
2. On the Configuration Search Library page, click Create and then select Search
Using SQL.
3. On the Search Using SQL page, you can create a SQL query statement and then
click Search to run the search. You can also edit the SQL query statements for an
existing configuration search by selecting the search from the table on the
Configuration Search Library page and then clicking Search Using SQL. (A sample
query appears after these steps.)
Note:
You use views in this case. You cannot access the underlying tables. Your SQL
edits apply only to the current search execution. If you want to preserve the
edited statement, you can choose to Export as an XML file or Print the SQL
statement.
4. Click Save As. In the Create Configuration Search dialog box, specify a name for
the configuration search, and click OK.
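For illustration, a query entered on the Search Using SQL page might look like the following sketch, which lists host targets from the MGMT$TARGET repository view; treat the view and column names as assumptions to verify against the repository views documentation for your release.
SELECT target_name, target_type, host_name
FROM mgmt$target
WHERE target_type = 'host'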
• System configuration data as well as all system members and their configuration
data
• System and target relationships (immediate, member of, uses, used by, and so
forth)
2. In the table of returned targets, right-click in the row of the desired target.
3. In the popup menu, select Configuration, then select Last Collected or Saved. In
the case of saved configurations, select in the table of saved configurations the one
you want to browse, then click View. The browser opens to display the (latest or
saved) configuration data for the selected target.
Note that these same selections (Last Collected and Saved) are available in the
Configuration menu on a target's home page that appears in the top-left corner
and typically takes the name of the target type, for example, Host or Web Cache.
• For standard targets, the tree hierarchy on the left shows the target node at the
top, beneath which appear configuration item categories and nested
configuration items. Select the target node and the tabs on the right show target
properties and various relationships (immediate, member of, uses, used by).
Immediate relationships indicate direction: source and destination. Thus, for
example, a source target type of database has an immediate relationship (hosted
by) with a destination target type of host.
As you traverse the tree on the left, the tab on the right becomes the tree
selection and displays the properties and values for the selection in table rows.
So, for example, if the target type is host and you select Hardware in the tree on
the left, the tab on the right becomes Hardware, and the table row displays
values for Host Name, Domain, Vendor Name, and so forth. As the table view
changes, look to the lower-right corner to see the number of rows the table
contains. For multirow tables, use the search filter to drill down to specific
properties and values. Add additional search filters as needed.
• When target type is a system, the tree hierarchy on the left shows the following:
– A nested node one level down for each configuration item associated with
the root target
– A folder at the same level as the nested node for each member type
– A node for each member within the member type beneath the member folder
Select the root target and the tabs on the right show target properties, a system
topology table, and various relationships (immediate, member of, uses, used
by). Select a configuration item in the tree on the left, and the tab on the right
displays the item's properties and values. Note that this applies only to
configuration items associated with the root target. Select a member target on
the left and the tab on the right displays the member target properties. Note,
however, that configuration data for the target does not display.
To see the member target's configuration data, you have to right-click the
member, and then select Latest Configuration. The browser display then
becomes the same as for a standard target. There is a bread crumb above the
tree hierarchy on the left that enables you to return to the system view. If you
subsequently save the member configuration, the link to the configuration data
changes to Saved Configuration.
• Select a configuration extension file in the tree on the left; separate tabs for a
parsed view and a raw text view of the file appear in the tables on the right.
5. To view the configuration details of all the members of the target, click
Configuration Report. This exports all the configuration details into a zip file
which gets downloaded. Extract the XLS file to view all the configuration details of
the members.
6. (Optional) If you want to save this configuration snapshot, select Save Latest in the
Actions drop-down menu above the tabs. In the dialog that opens, enter a
description by which to distinguish the configuration, then click Submit Job. Click
OK to exit the dialog. The save action is also available on the right-click menu
while selecting a target tree node. Saving a configuration saves all the configuration
and relationship data for the selected target. It also saves the relationship and
configuration data for all member targets.
• Export–opens a dialog where you can browse to a file location and save the
configuration as a CSV file.
• Search–displays the configuration search page where the viewed target is the
search object.
• While viewing a table of all targets, right-click a target and select Configuration,
then select Save.
3. Click Go.
2. In the table of saved configurations, select the configuration you want to browse,
then click View.
• System structures
2. In the table of saved configurations, select the configuration you want to compare
against, then click Compare.
3. In the dialog that opens, browse to the location of the exported configuration data
and click Import.
• See trends in inventory counts charted across a time line. Chart bars are color-
coded to match the view selection.
• Switch to a pie chart to break down the inventory data for the rollup option by
color-coded percentages.
• For Hosts (OS Patches) and Databases (Patches Applied), click a patch indicator to
link to patch details.
• Repeatedly revise selections to refresh chart and details based on new selections.
1. From the Enterprise menu, select Configuration, then select Inventory and Usage
Details.
Alternatively, you can click See Details in the Inventory and Usage region of the
Grid Summary page.
2. Select the entity you want to examine and choose a rollup option. For example,
show all deployed hosts rolled up by platform. Note that the page refreshes
automatically upon selection.
4. Select the radio button to specify how to display the inventory chart.
• The trend chart shows inventory counts across a time line. Use the magnifier
icon to zoom the view. You can adjust the date range by sliding the horizontal
scroll bar under the chart.
• The pie chart breaks down the inventory data for the selected rollup option by
percentages in an appealing color-coded visual.
5. Click Table View to convert the trend chart to table format. Close the table to
return to the chart view.
6. Select one or more rows in the deployments table and click the View Details
button to refresh the chart and details table based on the selected rows.
7. In any given row in the top table there is a count bar next to the count that
represents a percentage of the maximum count. For example, if the maximum
number of hosts by platform is four, the bar for hosts represented on two platforms
would be half as long. Click the bar to refresh the details table and chart for the
row.
Note that you can export either the master (deployments) table or the details table. In
either case, click the Export button to open a dialog where you can browse to a file
location and save the table as a CSV file.
Note:
• You have noticed that an Oracle RAC system has been underperforming for the
past month. As an administrator it would be useful to know what changes have
occurred during that time. Have members been added or removed? Have there
been configuration changes to the system itself?
• The daytime administrator notices that detected changes are the result of a patch
having been applied and adds an annotation to that effect. The overnight
administrator is alerted to the changes and sees the annotation upon follow-up.
• Track changes to targets over time by specifying and refining search criteria.
• Annotate change records with comments that become part of the change history.
Annotations have a timestamp and an owner.
• Schedule a history search to capture future changes based on the same criteria.
• From the Enterprise menu, select Configuration, then select History. Proceed with
a configuration history search.
• Perform a search of all targets. Right-click in a row of returned targets and select
Configuration, then select History in the popup menu. View the results for the
selected target, identified by type and name in the respective search criteria fields;
change the filtering criteria to see a different result. Select a specific configuration
item, for example, or change the date range.
• On a target home page, select Configuration, then select History in the target type-
specific menu (top left corner). View the results for the target, identified by type
and name in the respective search criteria fields; change the filtering criteria to see a
different result. Select a specific configuration item, for example, or change the date
range.
1. From the Enterprise menu, select Configuration, and then select History.
2. In the New Search section, select the target type. The Include Member Target
Changes check box is active only if you select a composite target type (system or
group).
3. Select and specify the search criteria for the target name.
In the Apply Configuration Items dialog box, you can search for configuration
items in the left panel, and then refine the configuration items using the right
panel. Once you have selected the configuration items that you want to apply to
the configuration search, click Apply. Otherwise, click Reset to restore the previous
configuration items, or click Cancel.
5. Limit the scope of the search to a specific type of change, such as Change, Deleted
Item, or New Item. All types of changes are selected by default.
6. Specify the number of days for which you want the changes to be discovered. The
default time is the last 7 days.
• Add Relationship Items by clicking the Add link. Select a relationship type in
the dialog box and click OK. This link is only enabled if a specific target type
has been selected and if the Include Member Target Changes checkbox is not
selected.
• You can refine the time and date range of the changes discovered by specifying
the Before and After time ranges.
8. Click Search to trigger the operation. A progress indicator verifies ongoing search
activity. Results appear in the table at the bottom.
Note:
One search strategy to consider is to perform a gross-level search to see the
volume of changes, then go back and refine the search by adding filters.
Numbers in parentheses on the tabs reflect the number of respective configuration and
relationship changes detected. A search on target name for relationships returns
matches on all source targets, destination targets, and targets that contain the target
name.
• Click Export to save the search results to a CSV file such as a spreadsheet. The
value in each column represents a comma-separated value.
• Click the number in the History Records column to display the changes detected
for the selected target.
In the change details table, select a table row and do any of the following:
• Click Details to see the change details in a pop-up window, including old and new
values, and the specifics of any annotations. The Change link in the Type of
Change column pops up the same window.
• Click Export to save the search results to a CSV file such as a spreadsheet. The
value in each column represents a comma-separated value.
1. Select the change row in the results table. To add the same annotation to multiple
lines, use the multiselect feature (Ctrl+click or Shift+click).
3. In the window that pops up, type your comment and click OK. Your comment
appears in the designated column. Your login name and a timestamp are associated
with your comment and available in the pop-up window that opens when you
view the change details.
Note that you can also remove an annotation, provided you are the one who entered
the comment (or have super administrator privileges). Select the row that contains the
annotation and click the Remove Annotation button. Confirm the removal in the
popup message that opens.
• If not now, when. Click Later to activate the calendar widget where you can
select a date and time.
• How often. Select report frequency in the drop-down list. Default is once-only.
• Wait how long. If the job fails to run as scheduled, cancel within a specified time
frame.
• Keep going. Maintain the job schedule for the specified period.
2. Enter the e-mail addresses of those to be directed to the change history search
results. Use a comma to separate addresses.
• Specifying Rules
• About Comparisons
Templates can be used as is, or as a guideline. So, for example, you might decide that
an existing comparison template, with just a few tweaks, can meet your requirements.
Perhaps the template ignores property differences that you are concerned about. In
this case, use the create-like feature to make the adjustments to an existing template
and save it under another name.
For systems, you design a system template that references member templates, based
on the target types that make up the system. Create the member templates before you
create the system template.
1. From the Enterprise menu, select Configuration, then select Comparison & Drift
Management. Click the Templates tab.
2. To search for a template, click Search. You can search for multiple target types
from the target type list. Select the target types that you want to search for. You
can also specify the Template Name, the name of the Owner, and specify if the
template is a default template or an Oracle provided template. Click Search.
3. Each template has a lock beside the template name. A closed lock represents an
Oracle-provided template; these templates cannot be edited. Templates that have an
open lock are user-defined comparison templates and can be edited.
4. For a new template, click Create and provide a name and target type. To base a
template on an existing one, select the template row, click Create Like, and
provide a name. In either case, the action creates a new template row.
5. Select the appropriate template row in the table and click the Edit button. The
Template Details page appears.
The compared configurations' target type drives the hierarchy of configuration
item types and configuration items on the left. The settings in play for the
respective properties on the right derive from the selected template, unless you
are creating a new template from scratch, in which case there are no settings.
A system comparison takes an overall template and a template for each system
member. Thus there is an additional tab for Member Settings. Edit the tab as
follows:
• Optionally select the member template to use for each system member type.
• For any given member type, you can elect to compare configurations by
checking the check box.
• For member types that you are comparing, select a target property to use as a
matching key. The default is target name, but typically you would want to use
a distinguishable property to align comparison entities, such as department or
location.
6. To create or edit the comparison template for each member, select the Member
Settings tab.
You can choose to view the mapping display in a tree or table format. If you set
the Mapping Display to Tree, the View mapping and Comparison results will
display the system members in a hierarchical tree format. If you set the Mapping
Display to Table, the View mapping and Comparison results will display the
system members in a table format. You can edit the following:
• Optionally select the member template to use for each system member type.
• For any given member type, you can elect not to compare configurations by
clearing the check box.
Note:
When you clear a check box for a system member, the children instances of the
system member will automatically be ignored during the comparison that
uses this template.
• For member types that you are comparing, select a target property to use as a
matching key. The default is target name, but typically you would want to use
a distinguishable property to align comparison entities, such as department or
location.
7. In the Template Settings tab, select a configuration item type or item in the left
pane to expose its properties in the right pane. A key icon denotes a property that
is defined as a key column in the configuration item type's metadata.
Tip:
Notice the Compare check box column on the Template Settings tab. This is a
powerful feature that enables you to streamline the comparison by selecting
only those items you want to compare. When you select the check box, the
comparison engine includes the corresponding configuration item type and all
of its descendants.
Contrast this with the ability to compare individual columns and rows on the
Property Settings tab, in which the settings are stored as part of comparison
results, giving you the option to view the compared properties on the results
page.
So, for example, in comparing host configurations, you may decide that any
differences in CPU properties are immaterial. Simply expand the Hardware
configuration item type and deselect the CPUs check box to exclude all
properties associated with the item.
8. Click the Property Settings tab and check boxes for property differences to be
compared and alerted if different. They are mutually exclusive. When you
compare differences in a property value in this fashion, you are doing so
unconditionally for all differences detected in the property value for the
configuration item type.
Use a value constraint rule to filter the property value. In this case, the
comparison engine compares the property value in the configurations being
compared (the second through n configurations) to the constrained value. A
property value that satisfies the constraint constitutes a difference. For example,
test for a version equal to or greater than 6. Any instance in the compared configurations that satisfies the constraint is flagged as a difference.
10. Optionally, select an item in the left pane and click the Rules for Matching
Configuration Items tab. For a given property, specify a rule expression to be
evaluated to determine when a match exists between configuration instances. In
other words, if the expression resolves to true, compare the instances. See
Specifying Rules for details.
Match rules are column-based; they apply an AND logical operator. If you specify
rules for multiple properties, they must all resolve to true to constitute a match.
11. Optionally, select an item in the left pane and click the Rules for Including or
Excluding Configuration Items tab. For a given property, specify a rule
expression to be evaluated:
• Excludes items that match these rules - Compares everything except the
properties listed.
• Includes items that match these rules - Only compares the properties listed,
that is, ignores everything else.
The rules for including or excluding configuration items are row-based; they
apply an AND logical operator within a subset of rules and an OR logical operator
between rule subsets. So, if you specify two rules for property A and two rules for
property B, either both rules set on property A OR both rules set on property B
must resolve to true to constitute a match.
• Share templates by exporting them in XML file format and importing them into
other Cloud Control systems
1. Select a template in the Comparison Templates page and click the View button.
2. Expand items in the tree on the left and peruse the settings and rules on the
various tabs.
• You cannot delete a comparison template unless you have the proper permissions.
1. Select a template in the Comparison Templates page, select the Actions menu and
click Export.
A platform-specific file dialog opens. For example, if you are using Firefox, the
dialog notes that you have chosen to open the named template, which it identifies
as an XML file. The dialog asks what you want Firefox to do, open the file in an
XML editor or save the file.
3. Browse to the desired location in the file system and save the file, changing the
name if applicable. You cannot change the name of a template provided by Oracle
on export.
1. In the Comparison Templates page, select the Actions menu, and click Import.
An exported template is associated with its owner. A template whose owner is not the
same as the login ID of the person importing the template retains its original
ownership. If you want to be the owner of the imported template, you have to edit the
owner attribute in the template XML file prior to import, changing the value to your
login ID. Or, you can simply remove the attribute, in which case the default owner will
be set to the ID of the person initiating the import operation.
The Template Manager disallows the import of a template that has the same name as a template provided by Oracle. As with the owner attribute, you can change the name attribute in the template XML file prior to import to allow the import to occur.
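For example, the root element of an exported template file might look something like the following (the element name shown here is illustrative; the owner and name attributes are the ones discussed above):

    <ComparisonTemplate name="My Host Template" owner="JSMITH">

Before importing, change the owner value to your own login ID (or remove the attribute), and adjust the name value if it clashes with a template provided by Oracle.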
2. Click the Property Settings tab in the right pane and select the property on which
you want to set a value constraint.
When the Property Settings tab is selected, keys are displayed in the column to the
left of the Property Name.
3. Click the Edit Rule button in the toolbar. In the dialog that opens:
b. Type an operands expression, then click OK. An operand is a value that you want to either include in or exclude from the constraint. For example, to exclude Patch IDs 12, 34, 56, and 78, you would enter the operands as '12','34','56','78'.
To clear a rule, select the table row and click the Remove Rule button in the
toolbar.
See About Rules Expression and Syntax for details on the formation of a rules
expression.
2. Click the Rules for Matching Configuration Items tab in the right pane, then
click New.
3. Select a property in the drop-down list that appears under Property Name.
4. To create the rule, select the table row and click the Edit Rule button in the
toolbar. In the dialog that opens:
See About Rules Expression and Syntax for details on the formation of a rules
expression.
You can enter additional rules for the same or for a different configuration item.
When there are multiple rules, they resolve in the order specified. Matching rules
take an AND logical operator, which means all conditions must resolve to true to
constitute a match.
2. Click the Rules for Including or Excluding Configuration Items tab in the right
pane.
3. Choose one of the following options: Compare all, Exclude those that satisfy rules,
Include only those that satisfy rules. Click New.
4. Select a property in the drop-down list that appears under Property Name.
5. To create the rule, select the table row and click the Edit Rules button in the
toolbar. In the dialog that opens:
6. Select New Or to indicate the end of one rule subset and the beginning of another.
Operator Operands

is equal to*
    An optional literal value to match; string values are case-sensitive; if unspecified, the expression evaluates the value of the property to which the rule applies.
    Note that a matching rule compares the values of the configuration items in the respective configurations to one another, not to a third specified value, so the operator does not take an operand in this case.
    Syntax: [match-literal]

is case-insensitive equal to*
    An optional case-insensitive string literal; if unspecified, the expression evaluates the value of the property to which the rule applies.
    Note that a matching rule compares the values of the configuration items in the respective configurations to one another, not to a third specified value, so the operator does not take an operand in this case.
    Syntax: ['match-literal']

is one of†
    A comma-separated list of literal values, at least one of which must be specified, but only one of which need match.
    Syntax: match-literal-1[,match-literal-n,...]

is between†
    A range specified as start and end literal values; both must be specified; the range is inclusive.
    Syntax: start-range-literal , end-range-literal

replace‡
    A string literal to match and replace with a second string literal.
    Syntax: [FALSE|TRUE,]'pattern-literal'[,'replacement-literal'][,position-integer][,occurrence-integer]
    FALSE (default) means the string must comply with Oracle LIKE operator syntax; TRUE means the string must comply with POSIX regular expression syntax. TRUE enables an optional positional integer argument to indicate where within the column value to extract the string, and an optional occurrence integer argument to indicate the position count to replace.
    The mandatory pattern literal represents the string value to match. If the replacement string literal is unspecified, the matched string literal is replaced with nothing.

The following operands apply to the string-extraction style of operator:
    FALSE (default) means the string must comply with Oracle LIKE operator syntax; TRUE means the string must comply with POSIX regular expression syntax.
    The mandatory positional integer argument indicates where to begin string extraction:
    • If 0 or 1, returns all characters
    • If a positive integer, starts extraction from the beginning
    • If a negative integer, starts extraction counting backwards from the end
    The optional length integer argument indicates the character count starting at the positional integer.
    The pattern literal represents the value to match; optional if the first argument is FALSE; required if TRUE.
    The occurrence integer argument indicates the character count to match; valid only if the pattern literal is specified.
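For example, one way the replace operands might be entered (illustrative only) is:

    TRUE,'[0-9]+$'

Here TRUE selects POSIX regular expression syntax, the pattern matches a trailing run of digits, and, because the replacement literal is omitted, the matched digits are replaced with nothing.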
3. Select the table row and click the Edit Rules button in the toolbar to open the rule
dialog.
• Click OK.
Suppose you want to compare WebLogic Servers, aligning on server name, where the
names are different: ManagedServer1 and ManagedServer2, for example. To ensure
the comparison occurs, you need to fashion a match on server name.
2. In the Rules for Matching Configuration Items tab, click New. In the Property
Name drop-down list, select Machine Name.
3. Select the table row and click the Edit Rules button in the toolbar to open the rule
dialog.
• Click OK.
Effectively, the rule says use the first 13 characters of the name (ManagedServer),
thus excluding the qualifying integer.
• Click OK.
This example uses a regular expression (TRUE) to resolve all characters prior to the
qualifying integer.
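Although the exact dialog entries are not reproduced here, a POSIX pattern along the lines of the following (illustrative) isolates the leading non-numeric characters of the server name, so that ManagedServer1 and ManagedServer2 both resolve to ManagedServer:

    [^0-9]+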
For a more advanced example, consider a database instance comparison that requires
a match on Datafiles file names within a Tablespace, where file names are of the form:
/u01/jblack_abc2d/oracle/dbs/dabc2/mgmt_ad4j.dbf
1. In the Template Settings tab, highlight the Control files configuration item.
Note: For this example, ensure you are using a Database Instance target type
template.
4. Select the table row and click the Edit Rules button in the toolbar to open the rule
dialog.
• Click OK.
Effectively, the rule says use a regular expression (TRUE) to construct a matching key from the value between /u01/ and oracle, combined with what remains of the original file name after dabc2/, that is, jblack_abc2d/mgmt_ad4j.dbf.
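To make the derivation concrete, patterns of roughly this shape isolate the two pieces of the sample path (shown for illustration only; the exact operands depend on the operator you choose in the rule dialog):

    /u01/([^/]+)/oracle    isolates jblack_abc2d
    dabc2/(.+)$            isolates mgmt_ad4j.dbf
    resulting matching key: jblack_abc2d/mgmt_ad4j.dbf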
1. In the Rules for Including or Excluding Configuration Items tab, click New.
3. Select the table row and click the Edit Rules button in the toolbar to open the rule
dialog.
• Click OK.
The rule ensures that the comparison ignores any row in the collection data that
contains either of the specified values.
Now consider an ignore rule that demonstrates how the comparison engine applies
the logical operators AND and OR against the same configuration item type. In this
example the objective is to ignore rows in configuration extension parsed data when
any of three rule sets satisfies the following conditions:
Data Source = 'sqlnet.ora' AND Attribute = 'ADR_BASE'
OR
Data Source = 'tnsnames.ora' AND Attribute = 'HOST'
OR
Data Source = 'resources.xml' AND Attribute = 'authMechanismPreference'
Notice that the comparison engine applies the AND operator to rules within a set and
the OR operator between rule sets. Rules for ignoring instances support inheritance;
thus, in this case, the Data Source property is available in rules creation, as
demonstrated in the example.
1. In the Rules for Including or Excluding Configuration Items tab, click New.
3. Select the table row and click the Edit Rules button in the toolbar to open the rule
dialog.
• Click OK.
5. Select the table row and click the Edit Rules button in the toolbar to open the rule
dialog.
• Click OK.
6. Click New Or to insert a logical OR operator to signal the end of the first rule set.
7. Add two new rules where Data Source is equal to 'tnsnames.ora' and
Attribute is equal to 'HOST'.
8. Click New Or to insert a logical OR operator to signal the end of the second rule
set.
9. Add two new rules where Data Source is equal to 'resources.xml' and
Attribute is equal to 'authMechanismPreference'.
The comparison ignores any row in the collection data that satisfies any of the three
rule sets.
• Configuration drift
Enables you to compare configurations of a target with configurations of another
target of the same type.
• Configuration consistency
Reflects the changes of target members within a system. For example, you would use configuration consistency to ensure that the configuration parameters for all databases within a cluster database are the same.
The comparisons can be done on the current configuration or on configurations
previously saved (for example, just before applying a patch or doing an upgrade).
Comparisons allow you to:
• Select the first configuration in the comparison (the one to compare against)
• For system comparisons, map members as needed. It's a way to selectively indicate
how members of respective systems should match up in a comparison.
• Set up email notification by setting up mail servers and then creating incident rules
• Review BI Publisher Reports (From the Enterprise menu, select Reports, then select
BI Publisher Reports.)
A follow-on step would be to review the results and drill down to differences details.
5. Set up email notification by setting up mail servers and then creating incident
rules. See Creating Notifications for Comparisons.
6. View the results from the Comparison & Drift Management Drift Results tab or
the Comparison & Drift Management Consistency Results tab. You can also view
the results as a BI Report. To view BI Reports: from the Enterprise menu, select
Reports, then select BI Publisher Reports.
1. From the Enterprise menu, select Configuration, then select Comparison & Drift
Management.
– Comparison Template
Template or pattern to be used for the comparison. This template can
contain property settings, rules for matching configuration items, and rules
for including and excluding configuration items.
• The Advanced One-Time Comparison provides more options than the Basic
One-Time Comparison. For example, you can use saved configurations and
perform a consistency comparison within a system.
The fields are:
Use this option to use an existing configuration. The benefit is that you have
(hopefully) already tested this configuration and it meets your
requirements.
– Comparison Template
Select a template that has gone through rigorous testing or use this one-
time comparison to fine tune the comparison template to compare only
what you need.
An example of an existing template is one that you created or a template
provided by Oracle. If you don't supply a template, there will be a one-to-
one comparison between the fields in the reference target and the compared
targets.
4. Click Add to select targets or Add Saved to select saved configurations for the
comparison. Remember that you can only compare targets of the same target type.
Note that the more targets you add, the longer it will take for the compare
operation to complete.
If, after you have added the targets, you decide to reduce their number, highlight the targets to eliminate on the Target menu and click Remove.
5. Click OK. The comparison begins immediately and results are displayed on the
Comparison Results page.
Depending on the options you chose, it may take a while for the results to display.
Click the Enterprise Manager Refresh button until the comparison is completed
and the In Progress icon disappears. Click the i icon located beside the Comparison Results name for a listing of the options chosen for the comparison.
1. From the Enterprise menu, select Configuration, then select Comparison & Drift
Management.
2. In the Drift Management section of the Dashboard page, click Create Definition.
3. On the Compare Configurations dialog box, select the Target Type and Template.
For example, select Database Instance and the template of choice, in this case
Database Instance Template. Then click OK.
On the Drift Definition page, provide the following information:
• Definition Name - Make the name meaningful. The information from the Compare
Configurations popup is replicated.
• Source Configuration
Select either Latest Configuration or Saved Configuration. Either one is the gold
configuration against which the other targets will be compared. When using the
Latest Configuration, choose the source target. When using the Saved
Configuration, choose the appropriate configuration.
• Advanced
Expand this section to provide additional factors on the comparison.
– Choose the severity of the drift. Options are: Minor Warning, Warning, and
Critical.
– Description
Describe this drift definition and provide as much detail as possible. This
information provides information for others who will be using this definition in
the future.
– Rationale
Explain the reason for this comparison, for example: This content will detect
configuration drifts for the targets.
– Keywords
Enables you to categorize this drift definition for quick reference.
After you have provided the information, select one of these options:
• When you are satisfied with the comparison results, select the target and click Associate to permanently associate the target with the drift comparison, enabling automatic re-comparisons, notifications, and visibility of the target in reports.
• Cancel
Aborts the operation. None of the input will be saved.
1. From the Enterprise menu, select Configuration, then select Comparison & Drift
Management.
• Overview tab
3. On the popup, select the Target Type and Template. For example, select Cluster
Database and the template of choice. The template can be one that you have
defined, or as in this case, Cluster Database Template. Then click OK.
• Compliance Standard Name is the same name as used for the Consistency
Definition Name. This is the name to search for when using compliance
standards.
• Choose the severity of the drift. Options are: Minor Warning, Warning, and
Critical. When a consistency is in a Critical state, it needs to be addressed in a
timely manner. For example, if the space on a database is getting very low, it
needs to be addressed before it crashes.
• Description
Describe this consistency definition and provide as much detail as possible.
This information provides information for others who will be using this
definition in the future.
• Rationale
Explain the reason for this comparison, for example: This content will detect
configuration consistency for the systems.
• Keywords
Enables you to categorize this consistency definition for quick reference.
6. For consistency comparisons, Oracle chooses one target of each member target
type as the reference target. All other members will be compared against the
reference target of the same target type.
Note: Since all members of the same type should be the same, it should not matter
which target is selected as the reference target. However, if you would prefer to
choose specific targets as reference targets, click the Edit icon in the Reference
Targets when associating targets, or click the Reference Targets button when
creating a one-time comparison. This allows you to choose your own reference
targets for each member target type.
After you have provided the information, select one of these options:
• Cancel
Aborts the operation. None of the input will be saved.
• Associate targets or groups to a definition (perform the association after you have
verified test results from the Test Association page)
• Delete a definition
1. From the Enterprise menu, select Configuration, then select Comparison & Drift
Management.
4. After you create (or create like) the template, edit the template to delete or modify
configuration items.
When the Save Only Differences option is checked, only differences will be saved when this template is used in drift, consistency, and one-time comparisons. If the box is not checked, then all information will be stored in the comparisons of which this template is a part.
Note: When creating a system template, for example, a cluster database, the
template page provides more information when there is a target with members
like a Cluster Database.
6. After you associate targets to the comparison definition, the comparison will
rerun automatically whenever there is a change to the configuration for a target,
when the system members change, or when the template changes.
7. If you choose Table for the Mapping Display, and do not override the default flat
map when defining the comparison, the members of the systems will be matched
for comparison without regard to their level in the system hierarchy. Thus, when
viewing the results, the original system hierarchy cannot be retrieved.
Choose the Table option only if you are not concerned about how the members of the system are related to each other.
1. From the Setup menu, select Notifications, then select Mail Servers.
2. On the Mail Servers page, provide the Server Identity information then add the
outgoing mail (SMTP) server.
After you have set up the Mail Servers, set up the Incident Rules as follows:
1. From the Setup menu, select Incidents then select Incident Rules.
2. On the Incident Rules - All Enterprise Rules page, click Create Rule Set.
Provide a name and description and apply the rule set to Targets. This should
apply to All Targets.
d. Add a rule - If you selected a compliance standard in the previous step, add a
compliance standard of type Configuration Drift or Configuration
Consistency. If you selected a compliance rule, select either a Configuration
Drift Rule or a Configuration Consistency Rule.
e. Add Actions - Basic Notifications (E-mail to). Enter the e-mail addresses of
those who are to be notified when the comparison detects a difference. Use a
comma to separate addresses. Remember that the properties for which
differences are alerted were specifically selected in the comparison template.
f. Click Next and specify rule name and description. Click Continue.
4. Click Save.
• Select Only Differences in the Show drop-down list to eliminate the "noise" of the
same results.
The icons that appear in the view are mostly intuitive: equal–same, not equal–
different.
The table displays a hierarchy of system and member target types where:
• The Target Type column displays the system and member tree hierarchy.
• The Result column shows comparison results based on the mappings established as
part of comparison setup. A boxed 1 (left only) or 2 (right only) means there was
nothing to compare to the first or second member target, respectively. Note that if
the parent target configurations are the same, but one or the other parent has child
members marked as left only or right only, the parents are marked as different.
• To resolve unmatched members, rerun the comparison, this time ensuring in the
mapping step that the left and right member pairs appear in the mapped members
table. Select an appropriate system comparison template with target matching rules
defined, such that these members are mapped, or map the pairs manually.
• When the Member column displays both an equal and a not equal icon, it indicates
equality at the parent level, but a difference in some member.
• To view a summary of all the differences found when comparing the system target
and any member targets, click Export, located at the top of the table that displays
the system members. An XLS report will be downloaded.
• Select Only Differences in the Show drop-down list to eliminate the "noise" of the
same results.
• Select Left Only to display items that are only present on the target displayed on
the left and NOT present on the target displayed on the right.
• Select Right Only to display items that are only present on the target displayed on
the right and NOT present on the target displayed on the left.
The icons that appear in the view are mostly intuitive: equal–same, not equal–
different. The key icon denotes the key properties of the configuration item type. An
indication of Out of Range means that the property value failed a value constraint set
on the property. A boxed 1 (left only) or 2 (right only) means that the comparison did
not find a matching item to compare to the first or second configuration, respectively.
System Drift Comparison Results
When a system drift comparison is completed, the system results page displays the
system and its members along with its comparison results. Drill down from the system
results to the simple target results to view additional configuration comparison result
details.
Note:
This feature is available only for file-based configuration extensions.
Differences resulting from comparisons of command-based or SQL query-
based configuration extensions cannot be synchronized.
1. From the Enterprise menu, select Configuration, then select Comparison & Drift
Management. Click the Drift Results tab on the left. On the Drift Results page,
click the Drift Definition of interest.
the Configuration Tree. You can select multiple files, indicating that you'll be
updating all of them in the same direction.
Click the Synchronize icon located to the right of the Configuration Item. This icon
is present only for configuration extension (CE) nodes, and only for CE nodes that
are eligible for synchronization.
Note: File synchronization is also available from the results of one-time
comparisons.
2. The Synchronize File page displays the files selected on the Comparison Results
page. If there are files that cannot be synchronized, such as those that have no
differences, they are not submitted for synchronization.
3. Optionally, use the Preview feature to view the effect of the update on a file-by-file
basis. Click the eyeglasses icon to view the file before and after the update in raw
format.
• Specify the login credentials as necessary. You must have login access to the
target destination and write permission on the directory or directories to be
updated.
• Select the appropriate radio button for how to proceed on conflict. The
comparison is performed using data from the repository. A conflict arises when
the file to be updated has changed on the target and is different from the data
used for the comparison. Indicate what you want to do in this case—proceed or
stop.
• Indicate the desired backup options (both are selected by default when the
update target is the original directory):
– Mark the appropriate check box if you want to save a snapshot of the
configuration to be updated prior to synchronizing (give it a descriptive
name so you can easily retrieve the file from saved configurations; defaults
to a generic name—CCS Synchronization Saved Snapshot—which applies
even if you blank the field).
– Mark the appropriate check box if you want to make a backup copy of the
configuration file before it's updated. Browse to a directory on which you
have write permission.
These are not mutually exclusive options. With the former, you are saving time-
stamped collection data in the OMS repository; whereas, with the latter, you are
storing a copy of a file in a file system.
The check box is selected by default when the original destination directory is
the update target. The check box is disabled if you specified an alternate
directory, as there would be nothing to refresh in this case.
On the Synchronize Files popup, click the link to track the synchronization job. When
the job completes, you can rerun the comparison to verify the update, assuming you
requested a refresh. You can also open the configuration extension in the
Configuration Browser and confirm the update there.
Not All Configuration Files Can Be Synchronized
You may notice in the comparison results differences view that some files, though
different, cannot be selected for synchronization (their check boxes are disabled).
There are several possible reasons, including:
• During the configuration extension definition, the file was associated with a parser
that does not support a process called reverse transform, which is, effectively, the
ability to return the parsed form of a file to a syntax tree structure that can then be
rendered back into a physical representation. Not all parsers support reverse
transform.
Note: The File Synchronization page is where files are marked as eligible or ineligible for synchronization, and where you can determine whether your selections are valid.
• Drift Report for Systems - Drift results report for system targets. It includes Drift
definition summary, various roll-ups and Drift Comparison results. Examples of
system targets are databases and fusion applications.
• Drift Report - Drift results report. It includes Drift definition summary, various
roll-ups and Drift Comparison results. Use this report to view simple targets, for
example, host.
– Comparison template for Fusion Instance and its corresponding Oracle Home
target type is available.
– Drift definition was created using the proper Fusion Instance template.
To access the Comparison and Drift Management BI Publisher reports:
1. From the Enterprise menu, select Reports, then select BI Publisher Enterprise
Reports.
• To see information in these reports, you must have run a comparison
• Modify and fine-tune the specification and redeploy, perhaps across a wider
spectrum.
2. In the dialog that opens, provide a name for the custom target type, then click OK.
As noted, it may take a while to complete the process.
3. When done, a message confirms target type creation and asks if you want to add a
sample target instance. A sample target provides the basis for collecting
configuration data. Click Yes.
4. A dialog opens associated with the custom target type you just added. Click the
search icon to select a Management Agent to monitor the target you are adding,
then click Add Target.
5. In the dialog that opens, provide target properties appropriate to the instance
target type. In particular, the pertinent target property is the path to install home,
as this is the likely location of configuration files relevant to the custom target type.
Optionally, provide global properties such as cost center and lifecycle status. Click
OK.
The target is now available as a sample target when you create a configuration
extension for the custom target type.
It's not imperative that you add a new target instance during custom target type creation. You can do so subsequently by selecting Add New Custom Target from the
Actions menu and following Steps 4 and 5 in the process above, this time selecting a
custom target type from the drop-down list.
1. In the Configuration Extensions library, click the Create button; or, select an
existing specification in the library and click Create Like or Edit.
2. On the Create Configuration Extension page, enter a name for the configuration
extension and an optional description. The create like action requires minimally
that you rename the specification.
4. Optionally, set up a sample target. A sample target resides on the host from which
you intend to collect configuration data. If you do not set up a sample target, you
cannot browse the file system or use the preview feature in entering your
specifications.
Click the search icon. A dialog opens containing known instances of the target type.
Use the filtering criteria as necessary to locate the instance you want, then click
Select.
5. See Using the Files & Commands Tab for instructions on how to complete the Files
& Commands tab.
6. See Using the SQL Tab for instructions on how to complete the SQL tab.
7. After you complete the specification definition and have mapped credentials to the
target type, use the preview feature to validate your entries, in particular, to ensure
the parsed view is what you expect.
8. Save the new or edited specification. Remember that configuration extensions are
in the public domain. Use the save-as-draft feature to keep the specification private
while you test and refine it. See About Configuration Extensions and Versioning
for more information on the ramifications of save actions.
• Save implies that you are creating the next version of the draft.
When done, you can begin collecting configuration data by deploying the
configuration extension to target instances. See About Configuration Extensions and
Deployment for more information.
1. Click the search icon to browse to a default base directory location. This is where
the configuration files reside, or where the commands you specify are to execute.
Click the Use Property button to open a dialog where you can select a target
property to include as part of the directory path. These properties serve as
variables, denoted by curly braces, to be substituted with actual values at runtime.
You can type additional text in the box to supplement your selection. So, for
example, you might select OracleHome and append a directory–{OracleHome}/
config–to collect files on the target located in the config subdirectory under the
Oracle Home path. Note that the target type definition determines available target
properties. User-defined properties do not appear in the list, as they are not
available at the Management Agent.
For a file specification, enter a file name in the space provided or browse the base
directory to select a file on the target. Use of wildcards (* and **) is allowed, where ** indicates zero or more subdirectories. When using wildcards (and as a general caveat), ensure that collections do not result in too many (or too large) files, and that the files collected are configuration-related, that is, files under administrative control that change relatively rarely, so as not to overload Cloud Control.
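For example, with a default base directory of {OracleHome}/config, a file specification such as the following (illustrative) collects every XML file in that directory and in any of its subdirectories:

    **/*.xml

Use such broad patterns sparingly, for the reasons given above.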
For a command specification, enter command syntax in the space provided or
browse the base directory to a script. You must assign a unique alias to the
command. The alias you assign appears in the Configuration Browser as a link
when viewing the configuration extension hierarchy. When you click the link, it
opens the command specification in the tab on the right. The same caveats as
mentioned for files apply to command output; that is, that their results are
constrained in number and size, and to configuration-related data.
Select a parser to convert the configuration file or command output into a standard
format for storing in the repository. There is no default. If you do not specify a
parser, only the raw data format is stored and available for viewing. See Managing
Parsers for more information.
Optionally, specify post-parser rules to align tree nodes. See Setting Up Rules for
information on entering rules.
1. Select credentials to use to connect to the database. If the customized credential set
does not appear in the drop-down list, click Create to identify the credential set to
use. Note that you must then specify the credentials that map to the credential set
name you create (see Setting Up Credentials When Creating a Configuration
Extension). Configuration extensions only support database credentials with
NORMAL roles, not those with SYSDBA, SYSOPER, or other roles.
2. Specify a JDBC connection to an Oracle database from which to extract data via an
SQL query. The connection string can be either a URL or an abstraction of database
target properties. It cannot be a combination of the two; that is, partial URL and
some target properties.
The URL must contain the name of the target database host, applicable port
number, and the Oracle Service name (SID); for example,
mydatabase.example.com:1521:ORCL.
If you want to use target properties, leave the field blank. At runtime the application will substitute values for these target properties—{MachineName}{Port}{SID}—to make the connection.
3. Click Add and type or paste a SQL query in the provided text box. Ensure that the
query is sufficiently selective to return only pertinent configuration-related data of
manageable size and scope.
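For example, a query along the following lines (illustrative only; tailor it to your own needs and privileges) returns just the initialization parameters that have been explicitly set, which keeps the collected data small and configuration-related:

    SELECT name, value
      FROM v$parameter
     WHERE isdefault = 'FALSE'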
You must assign a unique alias to the query. The alias you assign appears in the
Configuration Browser as a link when viewing the configuration extension
hierarchy. When you click the link, it opens the SQL query in the tab on the right.
Database Query Parser should be preselected in the drop-down list.
Optionally, specify post-parser rules to align tree nodes. See Setting Up Rules for
information on entering rules.
1. From the Setup menu (top right of the page next to the Help menu), select
Security, then select Monitoring Credentials.
2. Select the applicable target type in the table and click Manage Monitoring
Credentials.
3. Select the row with the credential set name you created during the configuration
extension definition for the given target type and click Set Credentials.
4. Enter the username and password for the credential set and click Save (or Test and
Save for database credentials).
5. Return to the Files & Commands tab (Using the Files & Commands Tab) or SQL tab
(Using the SQL Tab) description.
1. Click the Parser Rules button. The Edit Parser Rules page displays.
2. To define a custom rule, click Add. In the table row that appears, enter a condition
and an expression as valid XPath expressions.
You can define multiple rules; they are applied to the parsed content in the order
specified. Click Return when you are done.
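As a simple illustration of the general shape of a rule (the node and attribute names here are hypothetical, and the effect depends entirely on your parsed content), a condition might select the containers to act on while the expression derives a stable name for them:

    Condition:  //Server
    Expression: @name

The intent of a rule like this would be to align Server containers by the value of their name attribute rather than by position.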
Select a table row to delete a custom rule.
Rules appear in table rows, provided the parser you selected has default parser
rules. Edit and delete default rules as appropriate to your purposes. Remember that
you are working with a copy of these rules; the originals remain safely intact.
Note that if you delete all rules, you are merely removing the copies you imported.
Default parser rules will still fire unless overridden by custom rules.
• Synchronize the selected specification with facets in the Compliance Library for
real-time facet monitoring
1. In the Configuration Extensions library, select the specification table row and click
View Details.
1. In the Configuration Extensions library, select the specification table row, then
select Enable Facet Synchronization from the Actions menu.
2. The Facet Synchronization column displays a Use Facet link in the configuration
extension table row. Click the link to go to the Real-time Monitoring Facets tab in
the Compliance Library where you can manage the synchronization of facets with
the configuration extension.
1. In the Configuration Extensions library, select the specification table row, then
select Export from the Actions menu.
2. Browse to a file system location where you want to save the specification as an
XML file. The saved file takes the name of the configuration extension by default.
1. In the Configuration Extensions library, select the specification table row, then
select Import from the Actions menu.
2. Browse to the file location. Select the file and click the Import button on the
dialog.
The imported specification appears in the Configuration Extensions library.
1. In the Configuration Extensions library, select the specification table row and click
Delete.
2. The system validates permissions and otherwise checks for dependencies that might prevent the deletion, although some dependencies cannot be verified until a job involving the configuration extension is submitted.
• You create and save a configuration extension; this is public version 1. You
subsequently edit public1 and save as a draft; this becomes draft1. Public1 is still
generally available. You edit draft1 and publish; this becomes public2. Note that in
parallel, someone else with the proper permissions can also edit public1 and save
as a draft to create version 1 of draft2.
• You create and save a configuration extension as a draft; this is version 1 of draft1.
You edit and save again; this becomes version 2 of draft1. Repeat the edit-and-save
operation; this becomes version 3 of draft1. Edit version 3 of draft1 and publish;
this becomes public version 1.
Create or import configuration extension
    "Manage configuration extensions owned by user" (or the more powerful "Manage configuration extensions owned by any user")

Edit or delete configuration extension
    Differs, depending on the specific activity within the realm of editing:
    • Configuration extension owner requires "Manage configuration extensions owned by user"; nonowner requires "Manage configuration extensions owned by any user"
    • Schedule redeployment jobs for already deployed targets requires "Create" privilege for Job System resource type
    • For configuration extensions associated with a real-time monitoring facet, EM_COMPLIANCE_DESIGNER

Deploy or undeploy configuration extension on a target
    "Manage target metrics" privilege on the target instance; "Create" privilege for Job System resource type (to schedule deployment/undeployment); EM_PLUGIN_AGENT_ADMIN (to deploy a plug-in to a Management Agent)

View configuration extension collected data
    Regular "target instance view" privilege
Note that editing an imported configuration extension may be restricted to edits that
do not change the version, depending on options set during export. One such
permissible edit would be to credential set information.
targets. You must have sufficient privileges to deploy and undeploy configuration
extensions.
To deploy a configuration extension:
1. In the Configuration Extensions library, select the specification table row and click
Manage Deployments.
2. On the Deployments page, click Add. In the dialog that opens, search for and
select targets of the specified target type where you want to deploy the
configuration extension.
3. When you close the dialog (click Select), a new column appears denoting a
pending action of Deploy and the status becomes Selected for Deployment.
4. Proceed as follows:
• Click Apply to confirm the action while remaining on the Deployments page.
The action column disappears, and the status becomes Deployment job in
progress.
3. Proceed as follows:
• Click Apply to confirm the action while remaining on the Deployments page.
The action column disappears, and the status becomes Undeployment job in
progress.
When viewing configuration extensions in the library, a green check mark in the
Deployments column denotes currently deployed configuration extensions. The
number in the column indicates how many targets the configuration extension has
been deployed to. Click the number to navigate to the relevant deployments page.
1. In the Configuration Extensions library, locate the appropriate table row and click
the numerical link in the Deployments column. In addition, you can, after selecting
the configuration extension table row, click the Manage Deployments button in the
toolbar.
2. On the Deployments page, select the deployment in the table and click Edit.
• Click Apply to confirm the action while remaining on the Deployments page.
The action column disappears, and the status becomes Redeployment job in
progress.
• Click Save to initiate the redeployment and navigate back to the Configuration
Extension library page
Note that the edit applies to the deployment of the specification; it does not change the
configuration extension definition.
1. In the Configuration Extensions library, locate the appropriate table row and click
the deployments link.
2. On the Deployments page, select the deployment in the table and click View
Configuration.
• The root node represents the target instance being monitored. The right pane
displays target properties and immediate relationships.
• The next level down in the tree represents a template for the specification. The
right pane displays specification details such as configurations being collected
and the base directory from which they are collected.
• The remaining leaf nodes in the tree represent the configuration data collected.
The right pane displays the configuration data in both parsed and raw format.
You can also view the collected data from the target home page: from the target type
menu, select Configuration, then select Last Collected.
3. Give the configuration extension an appropriate name and select Listener as the
target type.
4. Click Select Target to choose a listener instance that is already deployed, so you
can browse to the file location. Note that clicking this link selects a Sample Target
for the configuration extension.
6. You are now ready to build the collection data specifications. Click Add, then click
the search icon to log in to the remote file browser. Set the credentials
appropriately.
7. In the Oracle home directory of the listener instance, browse to the network/admin
subdirectory and select the sqlnet.ora file. Add it to the selection table and click
OK.
8. With the file added to the Files & Commands tab, select an appropriate parser
from the drop-down list, in this case, the Oracle ORA parser. Click Preview if you
want to see the file attributes in parsed and raw form as it will appear in the
collected data.
9. In the Configuration Extension library, select the new configuration extension and
click Manage Deployments.
10. On the Manage Deployments page, click Add. In the dialog that opens, select the
targets where you want to deploy the configuration extension.
11. When the status displays Selected for Deployment, click Apply. Refresh the view
until status is successful, then click Save.
12. To verify the added data collection, go to the target instance home page. From the
Oracle Listener menu, select Configuration, then select Latest Collected.
The Configuration Browser should display the configuration extension in the tree
structure on the left, where you can drill down the directory structure to display
the parsed and raw forms of the sqlnet.ora attributes and values on the right.
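If you are curious what the parser consumes, a typical sqlnet.ora contains simple name value entries such as the following (contents vary by installation):

    NAMES.DIRECTORY_PATH = (TNSNAMES, EZCONNECT)
    SQLNET.EXPIRE_TIME = 10

Entries like these appear as the parsed attributes and values described in the previous step.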
Use this description as a template for extending existing configuration data collections.
3. In the dialog that opens, enter a target type name, MyApache, for example. Click
OK.
4. After a while, a message confirms target type creation. Click Yes to add a sample
target instance.
5. Click the search icon to select a Management Agent on a host where the application
(Apache Tomcat) already resides. Choose the Management Agent and click Select
to close the dialog, then click Add Target.
6. In the target properties dialog that opens, enter the name (MyApache) and set the
install home path to the start location of the application (Apache Tomcat) on the
Management Agent. Click OK.
• Select the custom target type MyApache from the drop-down menu.
8. You are now ready to build the collection data specifications. Note that the
{INSTALL_LOCATION} variable populates the Default Base Directory field. Click
Add, then click the search icon to log in to the remote file browser. Set the
credentials appropriately.
9. In the Apache install home on the Management Agent, browse to the conf
directory and select the httpd1.conf file. Add it to the selection table and click
OK.
10. With the file added to the Files & Commands tab, select an appropriate parser
from the drop-down list, in this case, the Apache HTTPD parser. Click Preview if
you want to see the file attributes in parsed and raw form as it will appear in the
collected data.
11. In the Configuration Extensions library, select the new configuration extension and
click Manage Deployments.
12. On the Manage Deployments page, click Add. In the dialog that opens, select the
targets where you want to deploy the configuration extension, for example, the
host on which the configuration extension was based.
13. When the status displays Selected for Deployment, click Apply. Refresh the view
until status is successful, then click Save.
14. To verify the new data collection, do an all targets search and locate the custom
target type under the Others category on the left and click it to display all
deployments of that type on the right.
15. Click a target instance (MyApache) in the deployments list on the right. The
Configuration Browser should display the configuration extension in the tree
structure on the left, where you can drill down the directory structure to display
the parsed and raw forms of the httpd1.conf attributes and values on the right.
Use this description as a template for extending configuration data collections through
custom target types.
• Apache Tomcat
• GlassFish
• iPlanet
• JBoss
• JRun
• Tuxedo
You can download these blueprints, also called configuration extensions, from the
Configuration Management Best Practice Center, where you can also check for new
platform support.
• XML
• Format-specific
• Columnar
• Properties
Some parsers have default rules provided by Oracle. These rules address well-known
instances where nodes need to be aligned. Specifically, the WebLogic and WebSphere
parsers contain default rules to address such instances. You can leave these rules as is,
execute a subset of them, or replace them with your own custom rules.
This section covers the following topics:
• Managing Parsers
1. In the Configuration Extensions library, select Manage Parsers from the Actions
menu. A list of available parsers appears in a table. The column on the right (Base
Parsers) denotes a general parser category, Properties for example, which implies
file types that contain name/value pairs.
2. Select a parser and click Details. This dialog also shows default rules, if any.
• Click the Parameters tab to see the parameter defaults in effect. You can then
judge if you need to edit the parser to conform with your file format
conventions.
• Click the Default Rules tab to see the post-parsing rules that ship with certain
parsers. This is a good way to get exposure to rules formation.
b. In the dialog that opens, click Save and navigate to a file system location. Save
the XML file with an appropriate name.
c. In making your edits, be sure to change the parser ID and parser name in the
XML, as you are creating a customized version of a parser provided by
Oracle.
4. Assume you now want to import the new parser you saved for use in creating
configuration extensions.
b. In the dialog that opens, browse to the file location where you saved the
exported parser file. Select it and click Import on the dialog.
The new parser now appears in the Parsers table where it can be used in
configuration extension creation.
• XML elements with no XML attributes or child elements become parsed attributes;
all other elements become containers.
• Element text content becomes a parsed attribute, with its name dependent on
whether or not the tag contains any XML attributes. If the tag contains XML
attributes, the parsed attribute name takes the value specified in the
STORE_CONTENT_AS parameter; otherwise, the parsed attribute name takes the tag
name.
The default XML parser accepts the following parameters:

Parameter Description

MULTIKEY_DELIMITER
    Delimiter that separates a list of XML attribute names in the CONTAINER_NAME parameter; default is tilde (~).

CONTAINER_NAME
    A list of XML attribute names delimited by the value of the MULTIKEY_DELIMITER parameter. If an attribute name in this list appears in a tag in the original file, the tag becomes a container named for the value of the XML attribute. All other XML attributes become parsed attributes as usual. The tag name itself is discarded.
    For example, the list includes attribute names Moe and Larry in this order. The original file contains an XML tag Stooges that has attributes Moe, Larry, and Curly. As Moe appears first in the delimited list, its value, leader, becomes the parsed container name; Larry and Curly become parsed attributes. The tag name Stooges is discarded. The original XML fragment might be as follows:

    <?xml version="1.0" encoding="UTF-8"?>
    <Comedy>
      <Stooges Moe="leader" Larry="zany" Curly="bald">
      </Stooges>
    </Comedy>
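Following the description above, the parsed representation of this fragment would look something like this (indentation indicates containment):

    Comedy
      leader
        Larry = zany
        Curly = bald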
• Element text content becomes a parsed attribute that takes the name text_value,
where the text content becomes the parsed attribute value.
The generic XML parser accepts no parameters.
• As parsed using the default XML parser, with parameter values provided by
Oracle
• As parsed using the default XML parser, with modified parameter values
• The element contents of the AppName and Server tags become parsed attributes.
• Since the AppName tag contains no XML attributes, the parsed attribute name
takes the tag name.
• Contrast with the Server tag, which has XML attributes (name and os). This results
in a container named for the tag (Server), with three parsed attributes, one for each
of the XML attributes, and a third for the text content of the Server tag, which is set
to the value of the STORE_CONTENT_AS parameter (text_value).
When parsed using the default XML parser with modified parameter values, the
parsed version appears as follows:
Application
  AppName = foo
  ajax
    os = linux
    myVal = production
• The AppName tag remains the same; that is, it has no XML attributes so it becomes
a parsed attribute.
• Since the Server tag has an XML attribute that matches the value of
CONTAINER_NAME, the container takes the value of the attribute (ajax), obviating
the name=ajax parsed attribute. Remember that the CONTAINER_NAME parameter
provided by Oracle has a placeholder but no actual default value; thus, the
difference in this version of the parsed representation.
• The remaining Server tag attribute (os) becomes a parsed attribute as usual, and the
text content associated with the tag becomes the value of the attribute myVal, per
the edited STORE_CONTENT_AS parameter.
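For reference, the bullets above imply an original fragment of roughly this shape (reconstructed here for illustration; the actual sample file is not shown in this section), parsed with CONTAINER_NAME set to name and STORE_CONTENT_AS set to myVal:

    <Application>
      <AppName>foo</AppName>
      <Server name="ajax" os="linux">production</Server>
    </Application>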
Refer to About the Default XML Parser for a reminder of how parsing occurs.
Parser Description

Blue Martini DNA
    Parser for Blue Martini DNA files (no parameters).

Database Query (see Sample SQL Query Parsing and Rule Application for an example)
    Parser for configuration extension database query output. Cloud Control automatically transforms query results into a format the parser accepts, organizing results into sections similar to a Windows .ini file. Each section represents one record; each line in a section contains a table column name and a value. See Database Query Parser Parameters.

Database Query Paired Column
    Parser for configuration extension database query output. Cloud Control automatically transforms query results into a format the parser accepts, organizing results into sections similar to a Windows .ini file. Each section represents one record; each line in a section contains a name and value, where both the name and the value are values of returned columns. As such, the parser requires an even number of columns to be returned by the query in order to parse the data. A query which returns an odd number of columns will result in a parsing error. See Database Query Paired Column Parser Parameters.

Db2
    Parser for the output of the DB2 GET DATABASE CONFIGURATION command (no parameters).

Directory
    Parser for files containing multiple name value pairs on the same line, where each line may have varying numbers of pairs. For example, the first line might be a=b j=k, the second line c=d m=n y=z, and so forth. See Directory Parser Parameters.

E-Business Suite
    Parser for E-Business Suite .drv files. The parser converts IF...THEN...ELSE structures in the file into containers in the parsed representation, and the rest of the lines into a container with a fixed number of parsed attributes. These lines can be of two types: directory specifications, whose parsed attribute names are specified in the DIR_HEADER parser parameter; configuration file specifications, whose parsed attribute names are specified in the HEADER parser parameter. See E-Business Suite Parser Parameters.

Galaxy CFG
    Parser for Galaxy .cfg files. See Galaxy CFG Parser Parameters.

Oracle ORA
    Parser for Oracle .ora files, such as tnsnames.ora (no parameters).

Siebel
    Parser for Siebel siebns files. The parser creates a container for each unique path in the file, and attributes for name value pairs, except where a line contains the string Type=empty, in which case the parser does not create a parsed attribute for the line. See Siebel Parser Parameters.

UbbConfig
    Parser for BEA Tuxedo files (no parameters). The parser converts sections prefixed with an asterisk (*), and names in double quotes at the start of a new line, into containers. It converts all other data into attributes.

Unix Installed Patches
    Parser for Unix installed patches data. The parser creates one container per (non-comment) line of the file. It treats every field ending with a colon (:) on each line as a property name field and the value following, if any, as the property value. Note that a property does not have to have a value. See Unix Installed Patches Parser Parameters.

Unix Recursive Directory List
    Parser for output of Unix recursive directory listing (ls -l -R). The parser converts each subdirectory line into a container, and each file information line into a container with a fixed set of attributes. See Unix Recursive Directory List Parser Parameters.
Parameter Description
CELL_DELIMITER Character that separates name value pairs; default is =.
PROPERTY_DELIMITER Character that separates the length of a name or value from the value itself; default is _.
COMMENT Character that tells the parser to ignore the line that follows;
default is #.
USE_INI_SECTION Flag that tells the parser to use Windows .ini type sections;
default is true.
Parameter Description
CELL_DELIMITER Character that separates name value pairs; default is =.
PROPERTY_DELIMITER Character that separates the length of a name or value from the value itself; default is _.
COMMENT Character that tells the parser to ignore the line that follows; default is #.
USE_INI_SECTION Flag that tells the parser to use Windows .ini type sections; default is true.
Parameter Description
CELL_DELIMITER Character that separates one property from another; default is a
space.
EXTRA_DELIMITER Character that separates a property name from its value; default is
=.
COMMENT Character that tells the parser to ignore the line that follows;
default is #.
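To illustrate how CELL_DELIMITER, EXTRA_DELIMITER, and COMMENT interact for a format such as the one the Directory parser handles (lines like a=b j=k), the following Python sketch splits each non-comment line on the cell delimiter and each resulting cell on the extra delimiter. It is a minimal approximation written for this guide, not the Enterprise Manager parser, and the defaults shown are assumptions taken from the table above.

# Minimal sketch of Directory-style parsing (not the EM parser itself).
# Assumed defaults from the table above: CELL_DELIMITER = " ",
# EXTRA_DELIMITER = "=", COMMENT = "#".

CELL_DELIMITER = " "
EXTRA_DELIMITER = "="
COMMENT = "#"

def parse_directory_lines(text):
    """Return a list of dicts, one per non-comment line."""
    records = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith(COMMENT):
            continue                      # skip comment and blank lines
        pairs = {}
        for cell in line.split(CELL_DELIMITER):
            if EXTRA_DELIMITER in cell:
                name, value = cell.split(EXTRA_DELIMITER, 1)
                pairs[name] = value
        records.append(pairs)
    return records

sample = "a=b j=k\nc=d m=n y=z\n# a comment line"
print(parse_directory_lines(sample))
# [{'a': 'b', 'j': 'k'}, {'c': 'd', 'm': 'n', 'y': 'z'}]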
Parameter Description
DIR_HEADER A tilde-delimited list of attribute names for directory specifications.
LAST_FREE_FORM Flag that tells the parser to ignore cell delimiters in the last value of
a directory or file specification; default is true.
Parameter Description
COMMENT Character that tells the parser to ignore the line that follows;
default is !.
Parameter Description
LINES_TO_SKIP Tells the parser the number of lines to ignore at the beginning of
the file; default is 4.
USE_INI_SECTION Flag that tells the parser to use Windows .ini type sections;
default is true.
Parameter Description
CELL_DELIMITER Character that separates name value pairs; default is a space.
EXTRA_DELIMITER Character that separates a property name from its value; default
is :.
COMMENT Character that tells the parser to ignore the line that follows;
default is #.
Parameter Description
LINES_TO_SKIP Tells the parser the number of lines to ignore at the beginning of
the file; default is 4.
LAST_FREE_FORM Flag that tells the parser to ignore cell delimiters in the last value of
a line; default is true.
Parser Description
Cron Access Parser for cron.allow and cron.deny files.
Hosts Access Parser for hosts.allow and hosts.deny files.
Linux Directory List Parser for Linux directory listing data format (for example, the output of an ls -l
command).
Unix Directory List Parser for Unix directory listing data format (for example, the output of an ls -l
command).
Unix Groups Parser for Unix etc/group files. The parser ignores group name and password
information.
Unix Passwd Parser for Unix etc/passwd files. The parser ignores password values.
Unix System Crontab Parser for Unix system crontab files. System crontab files are very similar to crontab
files, but may contain name value pairs such as PATH=/a/b.
Parameter Description
COMMENT A tilde-delimited list of regular expressions that denote comment
characters or sequences. For example, #[^\r\n]* specifies that
everything on a line following the # character is a comment.
Default is an empty list; that is, parse all file contents.
ALTERNATE_DELIMITER An alternate delimiter for property names and values. Default is '/' (used only if the ALTERNATE_FIELD parameter is nonempty).
HEADER_FLAG A flag specifying whether or not the file has a header line that
specifies the column names. Default is false.
LAST_FREE_FORM A flag that specifies whether the last column is free form. The
parser ignores all delimiters in a free form column value. Default is
false.
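As a small illustration of the COMMENT parameter described above, the following Python snippet applies the example pattern #[^\r\n]* to strip comment text before any further parsing. This is not Cloud Control code; it simply shows the effect of that regular expression.

import re

# Illustrative only: strip comments matching the example pattern from the
# COMMENT parameter above, where everything after # on a line is a comment.
COMMENT_PATTERN = re.compile(r"#[^\r\n]*")

text = "name=value   # trailing comment\n# whole-line comment\nport=1630"
stripped = COMMENT_PATTERN.sub("", text)
print(stripped)
# The comments are removed; 'name=value' and 'port=1630' remain,
# separated by a now-empty line.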
Parser Description
AIX Installed Packages Parser for AIX installed packages files.
Custom CFG Parser for custom .cfg files. This format defines an element with E =
{} syntax, where the braces may contain name value pairs, nested
elements, or both.
Sectioned Properties Parser for files containing name value pairs organized into sections,
such as a Windows .ini file.
Unix PROFTPD Parser for Unix etc/proftpd.conf files.
Windows Checksum Parser for Windows checksum output generated with fciv.exe.
Parameter Description
COMMENT A tilde-delimited list of regular expressions that denote comment
characters or sequences. For example, #[^\r\n]* specifies that
everything on a line following the # character is a comment.
Default is an empty list; that is, parse all file contents.
QUOTE_DELIMITER A tilde-delimited list of regular expressions that define how
quoted values begin and end (usually either a single or double
quote character). The beginning and end quote delimiter must be
the same. Default is an empty list; that is, parser does not recognize
quoted values.
ALLOW_NAME_ONLY_PROPERTIES A flag that indicates whether the parser allows property names without a delimiter or a value. Default: false.
REVERSE_PROPERTY A flag that indicates whether the parser allows the value to come
before the delimiter and property name. Default: false.
PROPERTY_DELIMITER A tilde-delimited list of regular expressions denoting property
delimiters. For example, the text "a=b : x=y" could be interpreted in
either of two ways:
• As a single property "a" with value "b : x=y"
• As two separate properties, "a=b" and "x=y"
If a colon (:) is the property delimiter, the parsing engine interprets
the text as containing two separate properties. Default is an empty
list; that is, parser does not recognize property delimiters.
STRUCTURE_END A tilde-delimited list of regular expressions denoting the end of a
structure. Default is an empty list.
XML_STYLE_TAG A flag that indicates whether structures in the file are XML style
tags. Default: false.
USE_INI_SECTION A flag that indicates whether INI style sections are present.
Default: false.
ALLOW_ELEMENT_CELL A flag that indicates whether the file format supports element cell structures. Default: false.
In these constructs, the name value pairs p1 v1 and p2 v2 are explicit properties. A Sun
ONE Obj file typifies this data format.
Implicit Property
An implicit property is a property value without an associated property name. Like an
explicit property, an implicit property is bound to a container construct, usually a
reserved directive. The DIRECTIVE_PROPERTIES parser parameter contains the
property names of implicit properties.
Examples:
[SectionName myName myPath]
In these constructs, the implicit properties have the values myName and myPath, with
the presumed property names name and path, as declared in the
DIRECTIVE_PROPERTIES parser parameter. An Apache HTTPD file typifies this data
format.
Reserved Function
A reserved function is a keyword followed by one or more explicit properties. The
RESERVED_FUNCTIONS parser parameter specifies keywords that denote reserved
functions.
Example: Error fn="query-handler" type="forbidden", where Error is the
reserved function keyword specified in the RESERVED_FUNCTIONS parser parameter.
A Sun ONE Magnus file typifies this data format.
Reserved Directive
A reserved directive is a keyword followed by one or more implicit properties. The
RESERVED_DIRECTIVES parser parameter specifies keywords that denote reserved
directives.
Example: LoadModule cgi_module "/bin/modules/std/cgi", where
LoadModule is the reserved directive keyword specified in the
RESERVED_DIRECTIVES parser parameter. An Apache HTTPD file typifies this data
format.
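To make the distinction concrete, the following Python sketch shows how a reserved directive line could be split into its keyword and implicit property values, pairing those values with presumed property names. It is a hypothetical illustration only: in the actual parser the keywords come from RESERVED_DIRECTIVES and the property names from DIRECTIVE_PROPERTIES, whereas the single dictionary and the names "name" and "path" below are assumptions made for this example.

# Hypothetical illustration of implicit properties for a reserved directive.
# The property names ("name", "path") stand in for what a
# DIRECTIVE_PROPERTIES-style list would declare; they are assumptions for
# this example, not values shipped with the parser.

RESERVED_DIRECTIVES = {"LoadModule": ["name", "path"]}

def parse_reserved_directive(line):
    keyword, *values = line.split()
    names = RESERVED_DIRECTIVES.get(keyword)
    if names is None:
        return None                       # not a reserved directive
    # Pair each implicit property value with its presumed property name.
    return keyword, dict(zip(names, (v.strip('"') for v in values)))

print(parse_reserved_directive('LoadModule cgi_module "/bin/modules/std/cgi"'))
# ('LoadModule', {'name': 'cgi_module', 'path': '/bin/modules/std/cgi'})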
XML Structure
An XML structure is a standard XML tag that can contain a name only, a name
followed by explicit properties, or a name followed by implicit properties.
Examples:
<Name>
...
</Name>
Delimited Structure
A delimited structure consists of the following (in the specified order):
• Structure name
• Delimiter
• Structure contents
Explicit and implicit properties are not allowed. Java Policy and Custom CFG files
typify this data format.
Structure
A structure consists of the following (in the specified order):
• Structure name
• Structure contents
Explicit and implicit properties are not allowed. A Unix XINETD file typifies this data
format.
INI Section
An INI section resembles a section heading in a Windows .ini file, characterized
by:
• Section name
HKEY_LOCAL_MACHINE\SOFTWARE\X\Y\Z=123
These are two delimited section headings where the common pattern is HKEY_.
SiteMinder Registry and LDAP files typify this data format.
Element Cell
An element cell consists of an element cell name and a property name value pair of the
form A = B = C. Element cells typically use line continuation sequences and nesting
to clarify the structure. An element cell that has multiple properties uses a property
delimiter to separate them.
Example 1:
EC = \
B = C, D = F
This example is an element cell named EC with two property name value pairs, B = C
and D = F, separated by a comma. The structure uses the backslash character (\) to
indicate line continuation. The advanced properties parser parameters
PROPERTY_DELIMITER and CONTINUE_LINE define the respective format
characters.
Example 2:
EC = \
EC2 = \
A = B, \
C = D
This example is an element cell named EC that has a nested element cell named EC2
that contains two property name value pairs, A = B and C = D. This example uses
the same delimiter and line continuation characters.
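The following Python sketch is a rough illustration of reading an element cell such as the one in Example 1. It assumes the continuation character is a trailing backslash (CONTINUE_LINE) and the property delimiter is a comma (PROPERTY_DELIMITER), as in the examples above; it does not handle the nesting shown in Example 2 and is not the Enterprise Manager parser.

# Rough illustration only; the real parser is driven by the
# PROPERTY_DELIMITER and CONTINUE_LINE parameters, assumed here to be
# "," and a trailing backslash respectively.

def parse_element_cell(text):
    # Join lines that end with the continuation character.
    joined = text.replace("\\\n", " ")
    cell_name, _, rest = joined.partition("=")
    properties = {}
    for prop in rest.split(","):
        name, _, value = prop.partition("=")
        properties[name.strip()] = value.strip()
    return cell_name.strip(), properties

example = "EC = \\\nB = C, D = F"
print(parse_element_cell(example))
# ('EC', {'B': 'C', 'D': 'F'})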
Its parsed form, using the default XML parser, appears in the user interface in the
following tree structure:
dir
name = /a/b/c
file
name = file1
size = 120
file
name = file2
size = 350
Notice that two containers have the same name (file), which makes it impossible to
distinguish between them, at least at the container level. Thus, this file is a
candidate for a post-parsing rule. As mentioned, there is a special internal XML format
against which to apply a rule's XPath condition and expression. This format treats
nodes and attributes as XML elements, and converts attribute values into
corresponding element text content. It also adds a root element that doesn't appear in
the original file:
<root>
<dir>
<name>/a/b/c</name>
<file>
<name>file1</name>
<size>120</size>
</file>
<file>
<name>file2</name>
<size>350</size>
</file>
</dir>
</root>
Given the problem in the parsed form of having two containers with the same name, a
rule resolution might consist of the following:
Condition: /root/dir/file
Expression: name/text()
Effectively, this says: for each file evaluate name/text() to produce an identifier
that distinguishes one file from another within the dir node.
After applying the post-parsing rule, the parsed tree structure appears as follows:
dir
name = /a/b/c
file[file1]
name = file1
size = 120
file[file2]
name = file2
size = 350
The rule resolves to an identifier appended in square brackets to the container name.
The combination (file[file1], for example) enables various operations such as
compare, search, change history, and so forth, to distinguish between file containers.
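The rule mechanics can be approximated outside of Cloud Control. The Python sketch below is a simplified stand-in, not the Enterprise Manager implementation: it uses the standard library's limited XPath support, so the condition /root/dir/file is written as .//file and the expression name/text() is approximated by reading the text of the name child element. Each matched container has the resulting identifier appended in square brackets, mirroring the tree shown above.

import xml.etree.ElementTree as ET

# Simplified stand-in for the post-parsing rule mechanism described above.
INTERNAL_XML = """
<root>
  <dir>
    <name>/a/b/c</name>
    <file><name>file1</name><size>120</size></file>
    <file><name>file2</name><size>350</size></file>
  </dir>
</root>
"""

def apply_rule(xml_text, condition, expression_child):
    root = ET.fromstring(xml_text)
    for node in root.findall(condition):
        identifier = node.findtext(expression_child)
        if identifier is not None:
            # Mimic the rule result: append [identifier] to the container name.
            node.tag = "{0}[{1}]".format(node.tag, identifier)
    return root

result = apply_rule(INTERNAL_XML, ".//file", "name")
print([child.tag for child in result.find("dir")])
# ['name', 'file[file1]', 'file[file2]']

The same approach carries over to the Oracle ORA and database query examples that follow, with .//ADDRESS and HOST, or .//row and SERVER_NAME, standing in for the condition and expression.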
Its parsed form, using the Oracle ORA parser, appears in the user interface in the
following tree structure:
acme
DESCRIPTION
SOURCE_ROUTE yes
ADDRESS
PROTOCOL tcp
HOST host1
PORT 1630
ADDRESS_LIST
FAILOVER on
LOAD_BALANCE off
ADDRESS
PROTOCOL tcp
HOST host2a
PORT 1630
ADDRESS
PROTOCOL tcp
HOST host2b
PORT 1630
ADDRESS
PROTOCOL tcp
HOST host3
PORT 1630
CONNECT_DATA
SERVICE_NAME Sales.us.acme.com
Notice that the address containers, both standalone and within ADDRESS_LIST, are
indistinguishable. Thus, this file is a candidate for a post-parsing rule. As mentioned,
there is a special internal XML format against which to apply a rule's XPath condition
and expression. This format treats nodes and attributes as XML elements, and converts
attribute values into corresponding element text content. It also adds a root element
that doesn't appear in the original file:
<root>
<acme>
<DESCRIPTION>
<SOURCE_ROUTE>yes</SOURCE_ROUTE>
<ADDRESS>
<PROTOCOL>tcp</PROTOCOL>
<HOST>host1</HOST>
<PORT>1630</PORT>
</ADDRESS>
<ADDRESS_LIST>
<FAILOVER>on</FAILOVER>
<LOAD_BALANCE>off</LOAD_BALANCE>
<ADDRESS>
<PROTOCOL>tcp</PROTOCOL>
<HOST>host2a</HOST>
<PORT>1630</PORT>
</ADDRESS>
<ADDRESS>
<PROTOCOL>tcp</PROTOCOL>
<HOST>host2b</HOST>
<PORT>1630</PORT>
</ADDRESS>
</ADDRESS_LIST>
<ADDRESS>
<PROTOCOL>tcp</PROTOCOL>
<HOST>host3</HOST>
<PORT>1630</PORT>
</ADDRESS>
<CONNECT_DATA>
<SERVICE_NAME>Sales.us.acme.com</SERVICE_NAME>
</CONNECT_DATA>
</DESCRIPTION>
</acme>
</root>
Given the problem in the parsed form of having containers with the same name, a rule
resolution might consist of the following:
Condition: //ADDRESS
Expression: HOST/text()
Effectively, this says: for each ADDRESS element, evaluate HOST/text() to extract the
host name as the address identifier.
After applying the post-parsing rule, the parsed tree structure appears as follows:
acme
DESCRIPTION
SOURCE_ROUTE yes
ADDRESS[host1]
PROTOCOL tcp
HOST host1
PORT 1630
ADDRESS_LIST
FAILOVER on
LOAD_BALANCE off
ADDRESS[host2a]
PROTOCOL tcp
HOST host2a
PORT 1630
ADDRESS[host2b]
PROTOCOL tcp
HOST host2b
PORT 1630
ADDRESS[host3]
PROTOCOL tcp
HOST host3
PORT 1630
CONNECT_DATA
SERVICE_NAME Sales.us.acme.com
The rule resolves to an identifier appended in square brackets to the container name.
The combination (ADDRESS[host2a], for example) enables various operations such
as compare, search, change history, and so forth, to distinguish between address
containers.
webserver-200 PERFORMANCE 6
webserver-500 PRODUCTION 3
The SQL query expressed as part of the configuration extension creation is as follows:
select * from SERVER_DETAILS
The Configuration Browser Source tab renders the data the same way.
Its parsed form, using the Database Query parser, appears in the user interface in the
following tree structure:
row
SERVER_NAME=webserver-100
ENVIRONMENT=QA
HOSTED_APPLICATIONS=5
row
SERVER_NAME=webserver-200
ENVIRONMENT=PERFORMANCE
HOSTED_APPLICATIONS=6
row
SERVER_NAME=webserver-500
ENVIRONMENT=PRODUCTION
HOSTED_APPLICATIONS=3
Notice that the row containers are indistinguishable. Thus, this query result is a
candidate for a post-parsing rule. As mentioned, there is a special internal XML format
against which to apply a rule's XPath condition and expression. This format treats
nodes and attributes as XML elements, and converts attribute values into
corresponding element text content. It also adds a root element that doesn't appear in
the original file:
<root>
<row>
<SERVER_NAME>webserver-100</SERVER_NAME>
<ENVIRONMENT>QA</ENVIRONMENT>
<HOSTED_APPLICATIONS>5</HOSTED_APPLICATIONS>
</row>
<row>
<SERVER_NAME>webserver-200</SERVER_NAME>
<ENVIRONMENT>PERFORMANCE</ENVIRONMENT>
<HOSTED_APPLICATIONS>6</HOSTED_APPLICATIONS>
</row>
<row>
<SERVER_NAME>webserver-500</SERVER_NAME>
<ENVIRONMENT>PRODUCTION</ENVIRONMENT>
<HOSTED_APPLICATIONS>3</HOSTED_APPLICATIONS>
</row>
</root>
Given the problem in the parsed form of having three containers with the same name,
a rule resolution might consist of the following:
Condition: /root/row
Expression: SERVER_NAME/text()
The rule resolves to an identifier appended in square brackets to the container name.
The combination (row[webserver-100], for example) enables various operations
such as compare, search, change history, and so forth, to distinguish between row
containers.
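To see how tabular query results map onto the row-per-section parsed form shown above, the following self-contained Python sketch loads the SERVER_DETAILS example data into an in-memory SQLite database, runs the same SELECT statement, and prints each row as a section of name=value lines. The output format is only an approximation for illustration; it is not the actual transformation Cloud Control performs.

import sqlite3

# Approximation only: recreate the SERVER_DETAILS example data in memory and
# print each query row as a section of name=value lines, similar to the
# parsed "row" containers shown above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE SERVER_DETAILS "
             "(SERVER_NAME TEXT, ENVIRONMENT TEXT, HOSTED_APPLICATIONS INTEGER)")
conn.executemany(
    "INSERT INTO SERVER_DETAILS VALUES (?, ?, ?)",
    [("webserver-100", "QA", 5),
     ("webserver-200", "PERFORMANCE", 6),
     ("webserver-500", "PRODUCTION", 3)])

cursor = conn.execute("select * from SERVER_DETAILS")
columns = [desc[0] for desc in cursor.description]
for row in cursor:
    print("[row]")                           # one section per record
    for name, value in zip(columns, row):
        print("{0}={1}".format(name, value))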
• From the Setup menu, select Add Target, then select Generic System
• From the Targets menu, select Systems, then click the Add button
General
Provide general details of the generic system:
• Set system properties such as cost center and life cycle status
1. Click Add.
a. Select a member target in the left table. This populates the right table.
b. Select an associated target in the right table. This populates the association
drop-down list.
Availability Criteria
Use this page to declare the key members of the system; that is, the members that must
be running for the system to be considered available. You can select any one, some, or
all members, but you must select at least one.
When done, click Next.
Charts
Customize how you want charts to appear on the System Charts page:
• Deselect the suggested charts check box and customize the page entirely
• Alter the appearance of the Members page by adding and removing columns and
abbreviations
When done, click Next.
Review
Verify the makeup of the generic system target. If everything appears in order, click
Finish.
Upon confirmation that the target was successfully created, use the Configuration
Topology Viewer to review and traverse the relationships you created.
• Determine the source of a target's health problems, that is, detect which targets
might be causing the failure. For example, a database is down because its host is
down.
• Analyze the impact of a target on other targets. For example, the payroll and
finance applications will be impacted if the database goes down.
• Determine the system's structure by viewing the members of a system and their
interrelationships.
• Customize your configuration topology views to focus on the targets for which you
have responsibility.
• Share custom topology views that you have created with other Cloud Control
users.
• Clicking the small arrow icon in the bottom right corner of the window to bring up
a navigator, which allows you to select which portion of the topology is in view.
• Decreasing the size of the nodes in the display using the zoom control in the top
left of the display.
Perform the following steps:
From the Targets menu on the Cloud Control home page, select All Targets. In the
table, click the appropriate target. On the resulting page, select Configuration then
select Topology from the dynamic target menu.
• Uses
This view helps you determine the targets that the selected target depends on. If
a target is having problems, this view can be useful in helping you determine
whether its problems have been caused by another target it depends on.
• Used By
This view shows you the targets that depend on the selected target. This can be
useful, for example, if you are planning on shutting down the target and need to
know what other targets will be affected.
• System Members
This view shows the members of the system (available only for targets that are
systems).
• Custom views that have been defined and shared by end users (custom views
must be explicitly shared before they are available to others).
The Uses, Used By, and System Members views are topology views provided by
Oracle. They cannot be modified.
2. In the View menu, select System Members (available only if the target is a system).
The view displays the relationships between the targets. The target type controls
the default view that is shown.
To see the specific relationship between two targets, hover over the link between them
and the relationship name will pop up.
Note the following:
• The topology feature is available any time you are in the context of a target: select
Configuration from the target type menu, then select Topology.
• Not all target types have configuration data. For these target types, the
Configuration menu and topology graphs are not available.
2. In the Uses view on the Configuration Topology Viewer page, icons indicate
whether the target is down. You can choose a particular view, for example, Uses or
Used By. In addition, icons indicate whether targets have associated incidents.
2. Zoom in on the target that has problems. Problems are represented by icons
indicating a problem target status, and icons indicating target incidents. The target
you selected in the All Targets page will always be highlighted.
3. When you click a target, properties for the target are available in the Configuration
tab in the Properties section. The Configuration tab shows information about target
compliance, configuration changes in the past week, and recommended patches.
Links from these values lead to more detailed reports.
If incidents are reported in the Incident Summary tab, resolve the reported events
and incidents. Compliance information is available through the Configuration tab.
If the target is not compliant, resolve the issue. Also if patches are missing, apply
them.
4. Repeat the process of analyzing the various targets until all targets are functioning
properly.
From the Targets menu on the Cloud Control home page, select All Targets. In the
table, click the appropriate target. On the resulting page, select Configuration then
select Topology from the dynamic target menu.
To view target data, place the mouse over the node and continue to move the
mouse to >>. The popup containing data appears. For additional information,
select Properties located at the right. The links associated with the data lead to the
detail pages.
From the dynamic target menu, select Configuration, then select History. On the
Configuration History page, determine whether there has been a history change in
the last 24 hours. If so, view those changes in detail for that particular target.
Another way to access Configuration Changes from a node is to select the node,
click on Properties, click the Configuration tab, and click the value associated with
Configuration Changes.
On the Topology page, select a node. Select Properties then select Configuration.
Click the value associated with Patch Advisory.
2. In the View list, select Uses. This shows a topology of the targets that the selected
target depends on.
Paths to the target or targets potentially causing the problem are colored.
If your target is not up, paths to the target or targets that may be causing the
problem are colored. Red links lead from your target to targets that are down, and
yellow links lead to targets whose status is not known.
By default the topology includes all depths of the tree, including the dependency
relationships between those targets.
2. On the Topology page, analyze the Used By view. The topology will show the
targets that depend on the selected target.
2. From the Customize menu, select Create Custom View.... Provide the name and
description for the topology and select one of the Initial Contents:
• Copy Current View to create a topology view similar to the one you are
viewing.
• Create Empty View to create a topology view that starts with the root node.
Also, choose one of the following expose options:
• Expose the custom view for all targets of the current target type. For example, if
you are creating a topology view for a database target, the new view will be
available for all database targets.
3. Reduce the unwanted information in the topology by highlighting the target and
selecting Hide Relationships... in the Customize menu.
You can also display relationships that are not being displayed by selecting a
target. From the Customize menu, select Target, then select Show More
Relationships to Target Type....
Privileged users can also choose to share their custom views with other users. To
share a custom view, select the checkbox labeled Share this custom view with
other users.
4. Click OK.
2. From the View list, select the topology view you want to delete.
2. From the View list, select the topology view you want to change then select the
target.
4. The relationships displayed in the graph are listed on the Hide Relationships
page. You can multi-select the relationships to exclude from the graph. Click OK.
2. From the View list, select the custom topology view you own or have the privileges
to change. System views such as Uses, Used By, and System Members cannot be
modified.
3. Highlight a target from which to expand the topology. From the Customize menu,
select Target, then select Show More Relationships to Target Type....
4. The resulting dialog shows a list of the relationships that the selected target type
can participate in. Select the relationships of interest, and click OK. Any targets
that are related to the selected target type using the selected relationships will be
added to the topology view.
2. From the View list, select the topology view you want to change then select the
target.
4. On the Create Custom View page, provide a name and description, choose the
initial contents, and determine how this custom view should be exposed. Click OK.
5. From the Customize menu, select Target, then select Create Relationship to
Target....
6. On the Create Relationship to Target page, select the related target and the
relationship between targets. Only relationships that the target type can participate
in are shown in the list. Not all target types can be related to each other.
Note: Created relationships are independent of the view. You can see and use
created relationships in other areas of Cloud Control, such as System templates,
topology views, and configuration comparisons. Deleting a custom view will not
delete the new relationship.
3. Select the link to the relationship you want to delete. You can either right click the
node to view the context menu that allows you to delete the relationship or, from
the Customize menu, select Relationship, then select Delete Relationship....
Relationships are used in various places in Cloud Control, such as System templates,
topology views, configuration comparisons, and so on. Deleting a relationship from
this topology can impact these other areas.
If you create a relationship, you can later delete it by using the Delete Relationship...
menu item.
3. To control highlighted paths to targets that are down, toggle the "Highlight
'Down' Root Cause" menu item.
When this menu item is selected and the root target is down, paths from the root
node to other down targets are highlighted. By visually following the highlighted
paths, you may determine which targets are causing the root target's down status.
Note:
When this option is selected, you will not be able to group nodes together.
4. To manipulate tiers:
b. Select either Specify Tiers or Use Default Tiers. If you choose to specify tiers,
drag target types to their desired tier.
5. To turn the coloring of the links on and off, on the Customize menu, select
Highlight "Down" Root Cause.
6. To group targets:
a. Select a link that represents one or more associations between the source and
destination target types.
Note:
• Overview of Compliance
• Evaluating Compliance
• Examples
• Provides real-time monitoring of a target's files, processes, and users to let Oracle
Enterprise Manager Cloud Control (Cloud Control) users know where configuration
changes or unauthorized actions are taking place in their environment.
• Compliance Framework
A compliance framework is an organized list of control areas that need to be
followed for a company to stay in compliance in their industry. Enterprise Manager
uses compliance frameworks as a foldering structure to map standards and rules to
the control areas they affect. Compliance frameworks are hierarchical to allow for
direct representation of these industry frameworks.
A single framework control area maps to one or more compliance standards. The
outcome of these compliance standard evaluations results in a score for the given
framework area.
• Compliance Standard
A compliance standard is a collection of checks or rules that follow broadly
accepted best practices. It is the Cloud Control representation of a compliance
control that must be tested against some set of IT infrastructure to determine if the
control is being followed. This ensures that IT infrastructure, applications, business
services and processes are organized, configured, managed, and monitored
properly. A compliance standard evaluation can provide information related to
platform compatibility, known issues affecting other customers with similar
configurations, security vulnerabilities, patch recommendations, and more. A
compliance standard is also used to define where to perform real-time change
monitoring.
A compliance standard is mapped to one or more compliance standard rules and is
associated to one or more targets which should be evaluated.
– Repository Rule
Used to perform a check against any metric collection data in the Management
Repository.
– Agent-Side Rule
Used to perform configuration checks on the agent and upload violations into
the Management Repository.
– Manual Rule
Checks that must be performed but cannot be automated. For example: "Plans
for testing installations, upgrades, and patches must be written and followed
prior to production implementation."
• Importance
Importance is a setting that the user can make when mapping compliance
frameworks, standards, and rules. The importance is used to calculate the effect a
compliance violation will have on the compliance score for that framework control
area or compliance standard.
For compliance frameworks, when mapping a compliance standard, the
importance for this compliance standard indicates the relative importance to other
compliance standards in this framework.
For compliance standards, when mapping a compliance standard rule, importance
indicates the relative importance of a compliance standard rule to all other
compliance standard rules in the compliance standard.
• Score
A target's compliance score for a compliance standard is used to reflect the degree
of the target's conformance with respect to the compliance standard. The
compliance score is in the range of 0% to 100% inclusive. A compliance score of
100% indicates that a target fully complies with the compliance standard.
• Real-time Facets
The real-time monitoring rule definition includes facets that specify what is
important to monitor for a given target type, target properties, and entity type. A
facet is a collection of patterns that make up one attribute of a target type. For
example, the networking configuration files for your operating system could be
defined by one facet containing multiple file names or file patterns.
• Real-Time Observations
Observations are the actions that were seen on a host or target that were configured
to be monitored through real-time monitoring rules. Each distinct user action
results in one observation.
Every observation has an audit status that determines whether the observation was
authorized, unauthorized, or neither (unaudited). The audit status can be set
manually or automatically through the real-time monitoring compliance standard
rule configuration.
• Observation Bundles
Single observations are not reported from the Management Agent to the server.
They are instead bundled with other observations against the same target, rule, and
user performing the action. Bundles help combine like observations and make it
easier to manage the observations in Cloud Control.
• Dashboard
The dashboard provides a very high level view of results that show how compliant
or at risk your organization or your area is. The dashboard contains dials
representing the compliance score for a selected framework, least compliant
systems and targets, and unmanaged discovered hosts.
• Results
Compliance results include evaluation results and errors for compliance
frameworks and compliance standards, as well as target compliance.
• Library
The Compliance Library page contains the entities used for defining standards.
From the Compliance Library page you can manipulate compliance frameworks,
compliance standards, compliance standard rules, and real-time monitoring facets.
Note: The real-time monitoring facets are only for real-time monitoring rules.
• Real-time Observations
Observations are the actions that were seen on a host or target that were configured
to be monitored through real-time monitoring rules. Each distinct user action
results in one observation. Observations are additionally bundled if there are
multiple observations done in a short period of time by the same user on the same
target and against the same real-time monitoring rule. Multiple UI-based reports
are provided to allow users to analyze the actions that are being observed.
Manage any Target Metric (Target privilege; role: EM_COMPLIANCE_DESIGNER). Enables you to manage a metric for any target.
View any Target (Target privilege; role: EM_COMPLIANCE_DESIGNER). Allows you to view all managed targets in Enterprise Manager.
View any Compliance Framework (Target privilege; roles: EM_COMPLIANCE_OFFICER, EM_COMPLIANCE_DESIGNER). Allows you to view compliance framework definitions and results. NOTE: This privilege is part of the Compliance Framework resource privilege. It is granted by default for the EM_COMPLIANCE_OFFICER role but not for the EM_COMPLIANCE_DESIGNER role.
Create Compliance Entity (Resource privilege; role: EM_COMPLIANCE_DESIGNER). Allows you to create compliance frameworks, compliance standards, compliance standard rules, and real-time monitoring facets. This privilege is part of the Compliance Framework resource privilege.
Full any Compliance Entity (Resource privilege; role: EM_COMPLIANCE_DESIGNER). Allows you to edit and delete compliance frameworks, compliance standards, compliance standard rules, and real-time monitoring facets. This privilege is part of the Compliance Framework resource privilege.
Job System (Resource privilege; role: EM_COMPLIANCE_DESIGNER). A job is a unit of work, which may be scheduled, that an administrator defines to automate commonly run tasks. This privilege contains the following privileges:
• Create (granted by default)
• Manage View Access
The following table lists the compliance tasks with the roles and privileges required.
Edit and delete compliance framework: Full any Compliance Entity privilege, View
any Compliance Framework privilege.
Note: In addition, ensure you have privileges to access the target you will be
associating with a compliance standard. In particular, you need the Manage any
Target Compliance privilege on the target.
• Regularly monitor the compliance dashboard to find areas that may indicate your
organization has a low compliance score or is at risk
3. On the Evaluation Results page, choose the compliance standard you want to
investigate and click Show Details.
• Ensure your environments match baselines (or each other) by creating rules on top
of configuration compare capabilities. Then monitor for configuration drift using
real-time monitoring.
• Continually test your systems, services, and targets, ensuring the best possible
protection and performance your system can have
• Keep an eye on hosts in your environment that are not monitored for compliance,
as these introduce a large amount of compliance risk.
The following sections provide additional details:
• Managing Violations
is part of the Secure Configuration for Host compliance standard. But you will not
know the details just by looking at the Compliance Summary regions.
2. Select Dashboard.
The Compliance Dashboard is also one of the pages available from the "Select Your
Home" page and can be set as your home page when you log in to Cloud Control.
The Compliance Dashboard includes the following regions:
• Compliance Summary
This region has a view for frameworks and a view for standards. In the Framework
view, this region shows you the list of all defined compliance frameworks and their
overall score and violation details. In the standard view, this region will list the
worst scoring compliance standards along with their violation details. Clicking on
a framework or standard name will take you to a screen showing you more details
of that framework or standard.
From this region, you can also click the View Trends link to see a historic trend
graph of the compliance score.
intent of this region is to highlight the hosts that have recently been discovered but
may not be under compliance control.
1. From the Targets menu, select the target type, and click the target.
2. On the target's home page, select the target menu located at the top-left of the page.
3. Select Compliance, then select Results. On the Results page, click Target
Compliance.
Compliant: Target meets the desired state and there are no unauthorized real-time
monitoring observations.
Non-Compliant: Target does not meet the desired state. At least one test in the
compliance standard detected a deviation from the desired state or there is at least
one unauthorized real-time monitoring observation.
To view results using Cloud Control home page, follow these steps:
2. Click the Target Compliance tab. The page displays the targets with their
Average Compliance Score.
To view compliance evaluation results from a target's home page, follow these steps:
3. On the target menu located at the top-left of the page, select Compliance, then
select Results.
4. Click the Target Compliance tab. The page displays the targets with the Average
Compliance Score.
Use the page or region to get a comprehensive view of a target's compliance over a
period of time. Using the tables and graphs, you can easily watch for trends in
progress and changes.
Note: Trend overview data might take up to six hours after the initial compliance
standard to target association to appear in the time series charts.
2. On the Compliance Results page, click the Compliance Frameworks tab and
highlight the compliance framework of interest.
Since compliance frameworks are hierarchical structures, each folder or node of the
framework has its own score. The scores of the bottommost children of the hierarchy
roll up to their parent folders, and so on. If a person viewing these reports is primarily
interested in one control area of the framework they follow, they can focus on the
score for that specific control area as represented by its folder under the framework.
• Unsuppressed Violations
• Suppressed Violations
3. On the Violation Suppressed Confirmation popup, you can suppress the violation
indefinitely or provide a date by which the suppression will end. Optionally, you
can provide an explanation for the suppression.
4. Click OK.
This submits a job to do the suppression asynchronously and returns you to the Result
Library page. A suppression adds an annotation to the underlying event stating that
the violation is suppressed along with the reason (if a reason was provided). Note: The
job results are not instantaneous. It may take a few minutes for the results to be
displayed.
Suppressed Violations Tab
Use this tab to unsuppress violations.
4. Click OK.
This submits a job to do the unsuppression asynchronously and returns you to the
result library. An unsuppression adds an annotation to the underlying event stating
that the violation is unsuppressed, along with the reason (if a reason was provided). Note: The
job results are not instantaneous. It may take a few minutes for the results to be
displayed.
Manual Rule Violations Tab
3. On the Clear Violations Confirmation popup, you can clear the violation
indefinitely or provide a date by which the clear will end. Optionally, you can
provide an explanation for the clear.
4. Click OK.
This submits a job to do the manual rule violations clearing asynchronously and
returns you to the Result Library page. Clearing manual rule violations also clears the
underlying violation event. Note: The job results are not instantaneous. It may take a
few minutes for the results to be displayed.
• Monitor the compliance framework scores along with the systems and targets that
have the lowest scores on the compliance dashboard.
• Ensure that recently discovered hosts are either being monitored using Cloud
Control for compliance risk or are confirmed not to be introducing risk into your IT
compliance.
• Study the statistics on the Enterprise Summary Home page. In particular, look at
the statistics in the Compliance Summary region. The compliance violations with
"Critical" severity should be dealt with first.
• Address generic systems (IT business applications) and targets that have the lowest
compliance scores.
• For the compliance violations of a particular target, examine the home page for that
target. The Compliance Standard Summary region provides overview information,
but it also gives you access to the Trend for that target.
• Navigate to the Trend Overview page to see charts relating to the number of
targets evaluated, the average violation count per target, number of targets by
compliance score, and the average compliance score.
Note:
Only results from those targets for which you have View privilege will be
available for viewing.
2. In the Evaluations Results tab for Compliance Standards, highlight the Secure
Configuration for Host compliance standard. Click Show Details.
3. In the Summary tab on the Compliance Standard Result Detail page, you can look
at the results either by target or compliance standard rule. For this example, we
will use Result by Compliance Standard Rule.
4. In the navigational list, click the Secure Ports compliance standard rule. In the
resulting Secure Ports Summary tab, you will get a list of all the targets that are
violating the Secure Ports rule. This is a security issue that needs to be addressed.
– Click the Target Compliance tab for a roll-up view of all violations across all
targets, that is, all those targets that are out of compliance.
– Click the Compliance Standards tab to view the list of compliance standards
against which there are violations. From this tab, you can also access the Errors
tab to view the errors against the compliance standard.
• Navigate to the Home page for a particular target. The Compliance Standard
Summary region lists the compliance violations according to severity level. Click
the name of the compliance standard of interest to view the details of the
violations.
• Compliance Results page. From the Enterprise menu, select Compliance, then
select Results.
The following are examples of how to find violation details.
Example 1 - Accessing Violation Details of a Compliance Framework
Again, click the number in the Violation Count column and the Violations pop-up
appears. All the Compliance Standard Rules, for example Security Recommendations,
are listed.
You continue the process by clicking the number in the Violation Count column again
in the Violations pop-up. The subsequent pop-up displays the Violations Details. For
example, the Violations Details pop-up displays the name of the patch that is causing
the problem.
Example 3 - Accessing Violations of a Target
When you click the Target Compliance tab, the Violations columns report how many
violations exist for each target.
When you click the number in a Violations column, the Violations pop-up appears
listing all the targets violating the standard. See Figure 46-2.
Again, click the number in the Violation Count column and the Violations pop-up
appears. All the Compliance Standard Rules, for example Security Ports, are listed.
You continue the process by clicking the number in the Violation Count column again
in the Violations pop-up. The subsequent pop-up displays Violations Details. For
example, the Violations Details pop-up displays the numbers of the ports violating the
compliance standard.
Example 4 - Violations Using Show Details on Compliance Standards Page
You can also drill-down on violations using the Show Details option on the
Compliance Results page. Highlight a standard and click Show Details. See
Figure 46-3.
On the resulting page, you have the option of seeing violations by target or by
compliance standard rule.
When you click the Violations tab, details regarding the compliance standard are
listed including Event Details and Guided Resolution. See Figure 46-4.
On the Compliance Standard Result Detail page, when you click the Summary tab
then the Result By Target tab, the number of violations against the target display.
When you click a number in the violations columns, the Violations pop-up appears
listing the compliance standard rules that are causing the violation. In turn, when you
click the number in the Violation Count column, the name of the offending metric or
patch displays.
Note: Similar drill-downs are available from the Target Compliance tab.
Tip: To get to the end result of a Violation, continue clicking the number in the
Violation Count column. More and more details are presented, narrowing the cause of
the problem.
Note: Target evaluations are only one level while Violations are multi-level.
• Use the Evaluation Errors page to view the errors that occurred as a result of metric
collection, as well as those that occurred during the last evaluation.
• Use the search filter to view only those evaluation errors that meet a set of search
criteria that you specify.
• Click the message in the Message column to decide what your course of action
should be to resolve the error.
• Descriptions reports
The Descriptions reports list all the available compliance standards, compliance
frameworks, and compliance standard rules available in the Compliance Library.
These reports enable you to decide whether additional compliance standards and
compliance frameworks need to be defined for your enterprise to attain and
maintain its compliance to the standards.
• Results reports
The Results reports provide details of the various evaluations against compliance
standards and compliance frameworks. Using the Results reports you can view, in
one place, all the statistics regarding the compliance of your enterprise against the
defined standards. To view the target that is most likely in need of your immediate
attention, view the Target with Lowest AVG COMPLIANCE SCORE report. The
following are examples of the reports provided:
The following table provides the combination of the severity and importance values
used to calculate a compliance score.
Importance Critical Severity (1) Warning Severity (1) Minor Warning Severity (1)
High 0-25 (2) 66-75 95-96
In Figure 46-8:
was good or bad. The authorized status means that some review has happened for the
observation and it should be treated as expected to occur (it was a good change). The
unauthorized status means that this observation has been reviewed and has been
found to be against policy. This may result in either a corrective fix, a change to policy,
or a compensating control being put in place. The audit status for observations can be
automatically set by a rule so that all observations triggered by the rule get a default
audit status. The status can also be set manually through the UI reports discussed
below. The most advanced capability involves integrating with a Change Management
Request server through a Cloud Control connector to automatically determine on a
per-observation basis if that action was supposed to happen.
The following sections provide additional details regarding real-time monitoring
observations:
• Viewing Observations
(generic systems) because an IT manager and compliance auditor may not know what
a target is used for. A business application is modeled in Cloud Control as a generic
system.
If you are more technical, you still may want to start at this business application level
if this is the business application you are working on.
To view observations by systems, follow these steps:
1. From the Enterprise menu, select Compliance, then select Real-time Observations.
Cloud Control displays the Select Root Target(s) page that lists the Target Name for
each system target. There is also a link for all targets not belonging to a system
target.
3. You can begin viewing a report for a given system target by selecting one or more
system targets and clicking on the View Details for Selected Systems button.
You will see counts for each selected system target over the selected time range. For
instance, if you are looking at the monthly time range, each column in the table will
represent one day of the month. The count will be the count of observations for
that day and system target.
Click on the system target name to drill down and show the counts by each target
that comprises the system target. You can continue to click on the links in the first
column of the table to drill down until you get to the entities that had observations
(for example: file names, process names, user accounts, and so on).
Clicking on the count displays a screen that shows the actual observations that
occurred during that time period.
Cloud Control displays the Select Compliance Frameworks page that lists each
defined Compliance Framework.
3. You can begin viewing a report for a given framework by selecting one or more
frameworks and clicking on the View Details for Selected Frameworks button.
You will see counts for each selected framework over the selected time range. For
instance, if you are looking at the monthly time range, each column in the table will
represent one day of the month. The count will be the count of observations for
that day and framework. Click on the framework name to drill down and show the
counts at the next level of detail.
4. Clicking on the count displays a screen that shows the actual observations that
occurred during that time period.
The drill-down capability provided by these screens makes it easy to find where
observations are occurring. When you have an environment with tens of thousands of
targets across hundreds of business applications, it is impossible to view observations
simply using a table and search unless you know exactly the search conditions you
are looking for. In an environment of this size, even with little activity, there can be
thousands of observations in a matter of an hour.
Cloud Control displays the Search observation page, which has search filters on the
top half of the page and search results on the bottom half.
3. You can set any number of filters in the search area. You can also click on the Add
Fields button to add any fields that are available in the search results table.
4. With the options available in search, you can find observations performed over a
time range, by a specific user, against a specific target, changes to a specific entity,
and so on. Nearly every use case for finding observations can be solved using a
combination of search fields.
• Authorized: The observation has been determined to be good, some action that was
desired to occur.
• Unauthorized: The observation has been determined to be bad, some action that
was not wanted.
• When you manually set the observation to be authorized and enter a change
request ID and the rule has change management integration enabled, no attributes
of the change request are compared with the observation. The change request is
simply updated with the observation details.
• When annotations are rolled back in the change management server, the observation
annotations are marked as rolled back instead of actually being removed. This avoids
confusing users who might otherwise not know why the annotations were removed.
Also, if the observation later becomes authorized again, the rolled-back marking can
simply be removed to bring the annotation back.
3. Highlight the compliance framework you want to manage and choose the action
you want to perform.
Frameworks Provided by Oracle and User-Defined Compliance Frameworks
There are compliance frameworks provided by Oracle and user-defined compliance
frameworks.
• Help in IT audits by identifying which compliance controls are at risk and may
need compensating controls based on the violations. Without mapping your
compliance checks to the control areas affected, it is hard to identify what the real
impact would be in a compliance audit.
5. Once you have provided the information on the definition page, look at the options
available when you right-click the name of the compliance framework (located at
the top-left of the page). From this list you can create subgroups, include
compliance standards, and so on.
6. Click Save.
Usage Notes
– Development
Indicates a compliance framework is under development and that work on its
definition is still in progress. While in development mode, all management
capabilities of compliance frameworks are supported including editing of the
compliance framework and deleting the compliance framework. Results of
development compliance standards will NOT be viewable in target and console
home pages, and the compliance dashboard.
Lifecycle status default is Development. It can be promoted to Production only
once. It cannot be changed from Production to Development.
– Production
Indicates a compliance framework has been approved and is of production
quality. When a compliance framework is in production mode, its results are
rolled up into a compliance dashboard, target and console home page.
Production compliance frameworks can only refer to Production compliance
standards. A production compliance framework can be edited only to add or
delete references to production compliance standards.
• All compliance frameworks with the same keyword will be grouped together when
sorted by the Keyword column.
Ensure that the Compliance Framework name is different from the original
compliance framework and any other existing compliance frameworks.
5. Click Save.
6. You can then edit this newly created framework and add or remove standards,
subfolders, or modify importance levels.
3. Highlight the compliance framework you want to edit and click the Edit button.
To add standards and subgroups, right-click the name of the framework located at
the top left of the page.
5. Click Save.
Usage Notes
• The importance impacts the way the compliance score is calculated for this
compliance standard in this framework folder.
• A compliance standard can be added to more than one compliance framework, and
can have a different importance when added to a different compliance framework.
For example, you could have a compliance standard called Check Password
Expired which flags user accounts with expired passwords. This compliance
standard may be a member of two compliance frameworks: All System Passwords
Secure and 30-day Password Validation. The All System Passwords compliance
framework verifies a password's security, whereas the 30-day Password Validation
compliance framework checks the date that this password was last set.
– The Check Password Expired compliance standard could have Extremely High
importance for the 30-day Password Validation compliance framework, since
this check is warning users that their passwords are about to expire.
other added compliance standards that do security checks could have a higher
importance within the compliance framework.
3. Highlight the compliance framework you want to delete and click the Delete button.
4. Confirm that you want to delete the compliance framework by clicking OK.
Usage Notes
Note:
4. Provide the name of the file from which the compliance framework definition (as per
the Compliance Framework XSD) will be imported. Specify whether to override an
existing definition if one already exists, and whether to import referring content as
well, in which case all leaf-level rules and compliance standards are imported. Real-time
monitoring facets are also imported for real-time monitoring rules.
5. Click OK.
3. In the Search portion of the page, provide criteria to use to narrow the search.
4. Click Search.
2. Click the Compliance Frameworks tab and then the Evaluation Results tab.
3. Highlight the compliance framework and click Show Details to view the details of
a particular compliance framework.
• Average compliance score for different targets evaluated for compliance standards
referred to by the compliance framework
2. Click the Compliance Frameworks tab and then the Evaluation Results tab.
3. In the Search portion of the page, provide criteria to use to narrow the search.
4. Click Search.
2. Click the Compliance Frameworks tab and then the Errors tab.
Usage Notes
The error may be an unexpected internal error or an error in the test.
Evaluation errors can often be due to configuration and installation issues. See the
following manuals for information:
• Oracle Enterprise Manager Cloud Control Advanced Installation and Configuration Guide
If the installation and configuration are correct and the errors persist, call Oracle for
assistance.
2. Click the Compliance Frameworks tab and then the Errors tab.
3. In the Search portion of the page, provide criteria to use to narrow the search.
4. Click Search.
Usage Notes
• Oracle Enterprise Manager Cloud Control Advanced Installation and Configuration Guide
If the installation and configuration are correct and the errors persist, call Oracle for
assistance.
He then enables and disables the appropriate compliance standard rules and
creates new compliance standard rules.
5. IT Administrator logs in to Cloud Control and associates the targets for which he
has target privileges with the appropriate compliance standards.
6. IT Administrator sets up the correct configuration parameters and settings for the
compliance frameworks, compliance standards, and compliance standard rules for
a particular target.
He then creates a monitoring template from this target and applies it to the other
targets, to which he has privileges, that require compliance standards.
7. Compliance Auditor logs in to Cloud Control to view the violations and errors at
the Enterprise level, for which he has view privileges, and at each target level.
He would then take the necessary actions to rectify the errors and violations.
• Rule folders that can include nested rule folders and individual compliance
standard rules.
Rule Folders are hierarchical structures that contain compliance standard rules. A
rule folder has an importance attribute that denotes the importance of the rule
folder relative to its siblings at the same level. This importance is considered when
determining compliance scores being rolled up across sibling rule folders. A rule folder may contain multiple tests; in this way, a particular test can be given more weight than the other tests.
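For illustration only (hypothetical folder names; the exact scoring formula is not reproduced here): a compliance standard might contain two sibling rule folders, Security Checks with a higher importance and Informational Checks with a lower importance. Violations under Security Checks would then reduce the rolled-up compliance score more than the same number of violations under Informational Checks.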
[Figure: compliance standard structure, showing a compliance standard (CS) containing rule folders, nested rule folders, and CS rules. Key: CS - compliance standard.]
You can override an existing compliance standard by checking the Overwrite existing compliance standards check box. Evaluation of compliance standards requires that the compliance standard is associated to one or more targets.
5. IT Administrator logs in to Cloud Control and associates the targets for which he
has target privileges with the appropriate compliance standards.
6. IT Administrator sets up the correct configuration parameters and settings for the
compliance frameworks, compliance standards, and compliance standard rules
for a particular target.
He then creates a monitoring template from this target and applies it to the other
targets, to which he has privileges, that require compliance standards.
He would then take the necessary actions to rectify the errors and violations.
3. Click the Create button. You will be prompted for the Name, Author, target type to which the standard is applicable, and the standard type. The standard types are:
• Repository
• Real-time Monitoring
• Agent-side
Click Continue.
5. To further define the compliance standard, right-click the name of the compliance
standard located at the top left of the page. From this menu, you can create rule
folders, add rules, and include compliance standards.
By using rule folders, you can view the summary of results, categorized by the
targets that were evaluated against the selected rule folder and the Compliance
Standard Rules evaluated for the selected rule folder.
6. Click Save.
Once you define the compliance standard, associate the standard with a target and
define the target type-specific settings.
1. While on the Compliance Standards Library page, ensure the correct compliance
standard is highlighted.
3. On the Target Association for Compliance Standard page, click Add to choose
the target to be evaluated against the standard.
4. In the Search and Select: Targets popup, choose the appropriate targets.
5. Click Select.
After you associate the targets with the compliance standard, you can edit the
parameters associated with the target.
1. While on the Target Association for Compliance Standard page, click Edit.
Note:
You can also associate a compliance standard with a target from the target
home page. At the top left of the target's home page, right click the name of
the target. On the resulting menu, select Compliance, then select Standard
Associations.
1. From the Compliance Standard Library page, highlight the compliance standard to
which you want to add another compliance standard.
3. On the Properties page, right-click the node, located at the top left of the page.
When you include a compliance standard within another top level compliance
standard, the included standard must be of the same target type as the top level
compliance standard. For composite target types, the target type of the included standard must be one of the member target types of the top-level standard's composite target type.
Note that a root compliance standard is associated to a root target (of composite
target type). Compliance standards are associated to member targets of the same
applicable target type and target filter criteria.
6. On the Properties page, choose the Importance for the compliance standard you
just included. Click Save.
7. After the compliance standard is included, highlight the root compliance standard.
The Properties page displays a set of parameters.
Usage Notes
• Because compliance standards are hierarchical, the top node in the tree is known as
the root node.
– Development
Indicates a compliance standard is under development and that work on its
definition is still in progress. While in Development mode, all management
capabilities of compliance standards are supported including complete editing
of the compliance standard, deleting the compliance standard, and so on.
However, while the compliance standard is in Development mode, its results
are not viewable in Compliance Results nor on the target or Cloud Control
home page.
– Production
Indicates a compliance standard has been approved and is of production
quality. When a compliance standard is in Production mode, you have limited editing capabilities; that is, you can only add references to production rules or delete rule references from the compliance standard. All other
management capabilities such as viewing the compliance standard and deleting
the compliance standard will be supported. Results of production compliance
standards are viewable in target and console home pages, and the compliance
dashboard. Production compliance standards can only refer to production
compliance standards and production compliance standard rules.
Once the mode is changed to Production, its results are rolled up into the compliance dashboard, the target home page, and the Cloud Control home page.
Production compliance standards can only refer to other production compliance
standards and production compliance standard rules. A production compliance
5. Click Save.
3. Highlight the standard you want to edit and click the Edit button.
5. Click Save.
3. Highlight the compliance standard you want to delete, then click the Delete button.
5. Provide the file name to which the standard definition is to be exported. All leaf
level rules and compliance standards are exported.
6. The XML representation of the compliance standard is generated. The file is located
in the directory you specify.
4. Provide the file name from which the compliance standard definition (as per Compliance Standard XSD) will be imported. Specify whether to override an
existing definition if one already exists. Specify whether to import referring content
as well.
5. Click OK.
You can override an existing compliance standard by checking the Overwrite existing
compliance standards check box. As a result:
• If you override a compliance standard, the override deletes all target and template
associations, as well as evaluation results for that compliance standard.
3. To view the details of a particular standard, highlight the standard and click Show
Details.
3. In the Search portion of the page, provide criteria to use to narrow the search.
4. Click Search.
2. Click the Compliance Standards tab and then the Evaluation Results tab.
3. Highlight the compliance standard and click Show Details to view the details of a
particular standard.
2. Click the Compliance Standards tab and then the Evaluation Results tab.
3. In the Search portion of the page, provide criteria to use to narrow the search.
4. Click Search.
2. Click the Compliance Standards tab and then the Errors tab.
2. Click the Compliance Standards tab and then the Errors tab.
3. In the Search portion of the page, provide criteria to use to narrow the search.
4. Click Search.
Usage Notes
• Use the Evaluation Errors page to view the errors that occurred as a result of metric
collection, as well as those that occurred during the last evaluation.
• Use the search filter to view only those evaluation errors that meet a set of search
criteria that you specify.
• Click the message in the Message column to decide what your course of action
should be to resolve the error.
• On initial display, the Evaluation Errors page shows all the evaluation errors.
Before you associate a compliance standard with a target, ensure you have privileges
to access the targets you want to associate compliance standards to.
To associate a compliance standard with a target, follow these steps:
3. Highlight the compliance standard you want to associate with various targets.
Click the Associate Target button.
4. Select the targets you want to associate with this compliance standard. Click OK.
5. With the compliance standard still highlighted, click the Override Target Type
Settings button.
By changing the critical and warning thresholds, you control how the compliance standard score event is generated. For example, if the actual score is less than the critical threshold, then a critical score event is raised.
Changing the importance can change the compliance score. The importance
denotes how important the compliance standard is in the hierarchy.
7. Click OK.
To further customize the evaluation of a compliance standard against a target, you can
alter compliance standard parameters: importance, critical threshold, and warning
threshold. Customizations can also be made on the compliance standard rules used
within the compliance standards. For example, for the Secure Ports compliance
standard rule, DFLT_PORT is an override parameter. You can change the default
value of the port. You can also exclude objects from the evaluation, for example a
particular port from the evaluation.
Note: For real-time monitoring, you can change parameters that are used in facet
patterns. You can also change Automatic Change Management reconciliation settings.
By changing the critical and warning thresholds, you control how the compliance standard score event is generated. For example, if the actual score is less than the critical threshold, then a critical score event is raised.
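As an illustration with hypothetical threshold values: if the critical threshold is 60, a compliance score of 55 raises a critical score event because it falls below that threshold; with a warning threshold of 80, a score of 72 would, by the same logic, be expected to raise a warning score event, while a score of 90 would raise neither.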
Best Practices
You can perform compliance association in two ways: individually, for testing and editing purposes, or through administration groups, for production and mass associations.
• For testing and editing purposes, that is, to test a standard/target association or to edit standard rule or rule folder settings for a target, associate the target directly with the compliance standard as previously described in this section.
• For production and mass associations, associate the target using the
Administration Groups and Template Collections page:
From the Setup menu, select Add Target, then select Administration Groups.
Click the Associations tab.
Because each Administration Group in the hierarchy is defined by membership
criteria, a target is added to the group only if it meets the group's membership
criteria. Therefore, when a target is successfully added to a group, it is
automatically associated with the eligible compliance standards for that group.
This makes it easier to associate a target to a large number of compliance standards.
• You have privileges to access the group target you want to associate the
compliance standards to
See Privileges and Roles Needed to Use the Compliance Feature.
Perform the following steps:
3. Highlight the compliance standard you want to associate with the group target.
Click the Associate Groups... button.
4. Select the group target you want to associate with this compliance standard. Click
OK.
After you click OK, the group target is associated to the compliance standard and
all eligible targets with the group are associated to the compliance standard. In the
future when new targets are added to the group target, and if they have the same
target type and match the target property filter criteria, they will then be
automatically associated to the compliance standard.
You can pick one of three types of reports to view your observations. The bottom half of
the screen shows all active warnings across all targets and compliance standards
related to real-time monitoring.
1. From the Enterprise menu, select Monitoring, then select Monitoring Templates.
2. In the Search area, select Display Oracle provided templates and Oracle Certified
templates and click Go.
5. On the Search and Select: Targets page, select the database instances in which you
are interested and click Select.
After you click OK, a confirmation message on the Monitoring Templates page
appears.
If compliance standards are structured in a granular way so that they can map to
existing and future compliance frameworks, then violations in a rule can be rolled up
to impact the score of the compliance framework properly.
3. On the Compliance Standard Library page, highlight the compliance standard and
click Edit.
4. On the Properties page, right-click the name of the compliance standard. The name
of the standard is located in the top-left corner of the page.
• Edit the tree structure by re-ordering the Rule Folder, Rule Reference, and
Compliance Standard Reference nodes in the tree or by deleting any of these nodes.
• Select any node (except the top-level Compliance Standard node) and then click the Remove menu item in the context menu. The Remove option is disabled on the root node. You can also select multiple nodes and click Remove to delete them.
• Agent-side Rules
Used for detecting configuration problems on the agent. This enables the
implementation of the Security Technical Implementation Guide (STIG) security
specifications. Agent-side rules generate violations for a target based on the results data collected for the underlying configuration extension target.
• Manual rule
Enables you to account for checks that cannot be performed automatically, thus
allowing you to account for these types of checks in the compliance framework.
For example, a common security check is "To ensure secure access to the data
center". When a standard is associated to a target, each manual rule will have one
violation. A user must manually attest to the positive status of the rule. In other
words, a person responsible for the task ensures he has performed the task. The
compliance framework records when and who clears the violation of the manual
check so it can be reported.
– If the rule is based on a list of patches, then the rule checks if none of the patches
are applied to the target. If any of the patches are applied, then no violation is
generated. If none of the patches are applied, then one violation is generated
listing the patches that are not applied.
– After you create the Missing Patches rule, you can add missing patches rules to
compliance standards of type Repository. You can then associate the standard to
targets by selecting a standard, and clicking the Associate Target button. Upon
association, the missing patch rule will be evaluated on the applied targets.
– If a standard with the missing patches rule is associated to a group, when new
targets are added to the group, the new target is automatically evaluated for
missing patches.
• Repository Rules
Used to perform a check against any metric collection data in the Management
Repository.
Used for checking the configuration state of one or multiple targets. A rule is said
to be compliant if it is determined that the configuration items do in fact meet the
desired state and the rule test failed to identify any violations. Otherwise, a rule is
said to be non-compliant if it has one or more violations. The data source evaluated by a compliance standard rule's test condition can be based on a query against the Cloud Control Management Repository. The test condition can be implemented as a threshold condition on the underlying metric's (or query's) column value, as a SQL expression, or as a PL/SQL function. To use a rule, it must be associated to one or more compliance standards. The compliance standard is then associated to one or more targets, which effectively enables the rule to be evaluated against those targets.
5. Click Continue.
6. On the next screen, you are asked to fill out several key attributes of the rule:
• Rule Name
Provide a unique name for the rule.
• Severity
The rule can have a severity level, which could be Critical (serious issue if this
rule is violated), Warning (not a serious issue if violated), or Minor Warning (a
minor issue if violated). Severity impacts the compliance score along with the
importance that may be set for this rule when it is added to a compliance
standard.
• Applicable To
Target type this rule works against.
• Description
Description of the rule
• Rationale
Text describing what this rule is checking and what the effect of a violation of
this rule may be.
• Recommendation
Recommendation text describing how to fix a problem when a violation occurs.
• Reference URL
URL to a document that describes the compliance control in more detail. Many
times these documents may be stored in a content management system.
• Keywords
Keywords can be assigned to a rule so that you can control how data is
organized in various reports.
7. Click Next.
8. On the next screen, you need to provide a SQL query that will execute against the Cloud Control Management Repository. You can directly enter the SQL query, or click the Model Query button to enter a screen that will guide you through choosing the query content. (An illustrative example query is provided after the usage notes that follow this procedure.)
9. Enter Compliant and Non-Compliant Message. These are the messages that will be
shown in regards to the evaluation. When a violation occurs, the Non-Compliant
message will be the string describing the event under the Incident Management
capabilities.
10. Enter the Recommendation. The recommendation describes how to fix a problem
when a violation occurs.
12. On the next screen, you will see the columns that will be returned from this query
as part of the evaluation results. You can modify the display name of each column
as needed.
13. On this screen, you also need to set the condition you are checking against the
returned query results to look for a violation. Your condition check can be a simple one based on a column name, a comparison operator, and a value, or you can
compose a SQL condition by providing parameter names and providing a where
clause to add to the evaluation query.
14. If you are using the SQL condition, you can click the Validate Where Clause
button to check for any issues with your condition.
16. The next screen will allow you to test your rule. You can choose a target in your
environment and click the Run Test button. Any issues with the rule will be
displayed and you can resolve them before saving the rule.
18. The final page allows you to review everything you have configured for this rule.
Ensure that everything is correct and click the Finish button to save the rule.
• All rules are visible in the global rule library and are visible to all users.
• To share this user-defined compliance standard rule with other privileged users,
provide the XML schema definition (using the Export feature) so they can import
the compliance standard rule to their Management Repository.
• You can minimize scrolling when reading the Description, Impact, and
Recommendation information by restricting the text to 50 characters per line. If
more than 50 characters are needed, start a new line to continue the text.
• Look at the context-sensitive help for information for each page in the Compliance
Standard Rule wizard for specific instructions.
• If you manually type a WHERE clause in the compliance standard rule XML
definition, then the < (less than) symbol must be expressed as &lt; to create a valid
XML document. For example:
<WhereClause>:status &lt; 100</WhereClause>
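To make steps 8 and 13 of the preceding procedure more concrete, the following is a minimal sketch of a repository rule query and its violation condition. It assumes the MGMT$METRIC_CURRENT repository view, and the metric name and metric column used here (open_ports, port_status) are hypothetical placeholders; substitute the metric collection your rule actually evaluates.
-- Hypothetical repository rule query: one row per open port on each host target
SELECT target_guid,
       key_value AS port,
       value     AS status
FROM   mgmt$metric_current
WHERE  target_type   = 'host'
AND    metric_name   = 'open_ports'    -- hypothetical metric name
AND    metric_column = 'port_status'   -- hypothetical metric column
The violation condition in step 13 could then be a simple comparison on the returned status column, or a SQL WHERE clause such as :status < 100, as shown in the escaping example above.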
4. In the Create Rule popup, select the WebLogic Server Signature rule type.
5. Click Continue.
6. On the next screen, you are asked to fill out several key attributes of the rule:
• Rule Name
Provide a unique name for the rule.
• Severity
The rule can have a severity level, which could be Critical (serious issue if this
rule is violated), Warning (not a serious issue if violated), or Minor Warning (a
minor issue if violated). Severity impacts the compliance score along with the
importance that may be set for this rule when it is added to a compliance
standard.
• Applicable To
Target type this rule works against.
• Description
Description of the rule
• Rationale
Text describing what this rule is checking and what the effect of a violation of
this rule may be.
• Recommendation
Recommendation text describing how to fix a problem when a violation
occurs.
• Reference URL
URL to a document that describes the compliance control in more detail.
Many times these documents may be stored in a content management system.
• Keywords
Keywords can be assigned to a rule so that you can control how data is
organized in various reports.
7. Click Next.
8. On the next screen, you select the method of providing the signature definition
file. You can either load it by uploading a file, or enter the text directly into the UI.
9. Enter Compliant and Non-Compliant Message. These are the messages that will
be shown in regards to the evaluation. When a violation occurs, the Non-
Compliant message will be the string describing the event under the Incident
Management capabilities.
10. Choose the columns that will be displayed along with violations. These columns
should be defined as return columns in the signature definition.
12. The next screen will allow you to test your rule. You can choose a target in your
environment and click the Run Test button. Any issues with the rule will be
displayed and you can resolve them before saving the rule.
14. The final page allows you to review everything you have configured for this rule.
Ensure that everything is correct and click the Finish button to save the rule.
This newly created rule does not function until it is associated to one or more
compliance standards and those compliance standards are associated to targets. Once
this association happens, the following is the workflow of this rule:
• Violations are uploaded to the Cloud Control server, from where they are
subsequently processed into violations in Management Repository tables.
• Violations are then viewable in compliance results pages and the Compliance
Dashboard.
Example WebLogic Server Signature
Using the rule creation wizard makes it simple to add a new rule, but the important
part of the WebLogic Server signature rule is the signature definition. A signature
definition consists of a list of managed beans (MBeans) and an XQuery expression.
Managed beans represent the configuration data to collect. They define a type and the
attributes within the type to collect. They also declare which attributes to consider in
determining whether there are violations. The XQuery expression defines the logic to
use in evaluating the collected data for compliance. An XML example signature
definition follows.
<SignatureDefinition>
  <MBeanList>
    <MBean scoreBase="true" mBeanType="ServerRuntime">
      <AttributeName>Name</AttributeName>
      <AttributeName>WeblogicVersion</AttributeName>
    </MBean>
  </MBeanList>
  <XQueryLogic>declare function local:getServerRuntimesEqualToVersionWithPatch($targetData, $major as xs:integer, $minor as xs:integer, $servicePack as xs:integer, $crNumber as xs:string) {
    for $ServerRuntime in $targetData/DataCollection/ServerRuntime
    let $weblogicVersion := fn:replace($ServerRuntime/@WeblogicVersion, "WebLogic Server Temporary Patch", "")
    let $majorVersion :=
      let $spaceParts := fn:tokenize(fn:substring-after($weblogicVersion, "WebLogic Server "), " ")
      let $majorVersionParts := fn:tokenize($spaceParts[1], "\.")
      return
        $majorVersionParts[1] cast as xs:integer
    let $SP_MP :=
      if ($majorVersion = 8) then
        "SP"
      else
        if ($majorVersion >= 9) then
          "MP"
        else " "
    let $minorVersion :=
      let $spaceParts := fn:tokenize(fn:substring-after($weblogicVersion, "WebLogic Server "), " ")
      let $minorVersionParts := fn:tokenize($spaceParts[1], "\.")
      return
        $minorVersionParts[2] cast as xs:integer
    let $servicePackVersion :=
      let $spaceParts := fn:tokenize(fn:substring-after($weblogicVersion, "WebLogic Server "), " ")
      let $servicePackParts := fn:substring-after($spaceParts[2], $SP_MP)
      return
        if ($servicePackParts = "") then
          0
        else
          $servicePackParts cast as xs:integer
    where $majorVersion = $major and $minorVersion = $minor and $servicePackVersion = $servicePack and
      fn:contains(fn:upper-case($ServerRuntime/@WeblogicVersion), fn:upper-case($crNumber))
    return
      $ServerRuntime
  };
  for $server in local:getServerRuntimesEqualToVersionWithPatch(/, 10,0,1,"CR366527") |
    local:getServerRuntimesEqualToVersionWithPatch(/,10,0,0,"CR366527")
  return <Server Name="{fn:data($server/@Name)}"/></XQueryLogic>
</SignatureDefinition>
Effectively, this definition collects the server name and WebLogic version of all
runtime servers. Much of the definition handles the preciseness of the version: major and minor version, service pack, and CR number. A violation occurs if any server has either of the stated patches (10.0.1 CR366527 or 10.0.0 CR366527), in which case the name of the server is returned so it can be reported in violation. Hence, the rule
definition must include a column to account for display of the server name. The
version is irrelevant in the context of the display. Those alerted are interested only in
which servers are in violation.
Important Prerequisite Steps to Use WebLogic Server Signature Rules
The following are some required steps that are specific to the version of WebLogic you
are trying to monitor:
1. WebLogic versions earlier than 10.3.3: To enable data collection for the WebLogic
Server signature-based rules on WebLogic Server targets earlier than v10.3.3, you
need a copy of bea-guardian-agent.war. You can find a copy of this war file in your
OMS installation's work directory:
$T_WORK/middleware/wlserver_10.3/server/lib/bea-guardian-agent.war
3. WebLogic Server v10.3 up to and including v10.3.2: Copy the war file from your
OMS installation into each target's $WL_HOME/server/lib directory. Restart all
the servers in the target domain.
• All rules are visible in the global rule library and are visible to all users.
• To share this user-defined compliance standard rule with other privileged users,
provide the XML schema definition (using the Export feature) so they can import
the compliance standard rule to their Management Repository.
• You can minimize scrolling when reading the Description, Impact, and
Recommendation information by restricting the text to 50 characters per line. If
more than 50 characters are needed, start a new line to continue the text.
• Look at the context-sensitive help for information for each page in the Compliance
Standard Rule wizard for specific instructions.
5. Click OK.
6. On the next screen, you are asked to fill out several key attributes of the rule:
• Rule Name
Provide a unique name for the rule.
• Severity
The rule can have a severity level, which could be Critical (serious issue if this
rule is violated), Warning (not a serious issue if violated), or Minor Warning (a
minor issue if violated). Severity impacts the compliance score along with the
importance that may be set for this rule when it is added to a compliance
standard.
• Applicable To
Target type this rule works against.
• Entity Type
A type of object that is part of a target being monitored. For example, for the
Operating System (OS), entity type may be OS File, OS Process, or OS User.
For Database, an entity type may be Database Table, Database Function,
Database Procedure, or Database User.
• Description
Description of the rule
• Rationale
Text describing what this rule is checking and what the effect of a violation of
this rule may be.
• Details URL
URL to a document that describes the compliance control in more detail.
Many times these documents may be stored in a content management system.
• Message
The message that will be used for the violation when an observation is
determined to be unauthorized.
• Clear Message
The message that will be used for a previous violation after it is cleared.
• Keywords
Keywords can be assigned to a rule so that you can control how data is
organized in various reports.
For additional information, see Importance of Target Property Filters for a Real-
time Monitoring Rule.
7. Click Next.
8. On the next page, you select the facets that are to be monitored for this rule. You
can include facets that are already defined or create a new facet inline with this
rule creation. A facet is simply a list of patterns to monitor. For instance, a list of
files, user names, processes, and so on. Facets are discussed later in the section
Real-time Monitoring Facets.
10. On the next screen, you will choose the actions you want to monitor. The actions
you choose will depend on what entity type you chose for the rule. For instance,
for OS File Monitoring, you can watch for actions such as file create, modify,
delete, rename, and so on. For OS User monitoring, you can watch for actions
such as login, logout, SU, SSH, and so on. You must choose at least one action to
monitor for a rule.
For additional information, see Selecting the Types of Actions You Want to
Monitor.
12. On the next screen, you can optionally configure filters for monitoring. Filters are
used to limit when or under what conditions you want an action to be observed.
For instance, if you are monitoring a file facet FILES1, you can add a filter so that
only file changes done by a specific list of users are captured, or if the change
happens during a certain time window, or a certain process is used to modify the
file. Filters are also facets, just of different entity types. If you are monitoring OS
File entity type, you can apply an OS User, OS Process, or Time Window facet as a
filter. You can include an existing facet, or create a new facet inline with the rule
creation. If you cancel the rule wizard, any facet you created inline will still exist
in the facet library.
For additional information, see Using Facets as Filters in Real-time Monitoring
Rules.
14. On the next screen, you can configure several settings related to how the
observations are handled when detected at the Management Agent.
• Collection Settings
For additional information, see Configuring Audit Status and Controlling
Observation Bundle Lifetimes.
16. On this screen you can review the settings of the rule.
17. Click Finish to save the rule and return to the rule listing page.
To implement this IT control, you can create a compliance standard rule with the
following:
1. Create a rule and select the file facet "Critical OS configuration files" for the
monitoring facet that has patterns covering all critical OS configuration files.
4. Add a Time Window filter selecting facet "Production Hours" that lists patterns
describing the times of the week that are considered to be production hours. For
example, Every day 4am-2pm PST.
When the Management Agent sees any content change to the patterns in Critical OS
configuration files, it will only report these changes back to Cloud Control if the
change happened during production hours and if any user described in the
Administrator's facet is the one making the change. Filters can also be inverted to
monitor anyone not in the administrators group or for changes outside of production
hours.
More details on how to use filters are described in the section above on Creating a Real-time Monitoring rule.
Configuring Audit Status
Each observation can have an audit status. This audit status can change over time and
be set manually or automatically by Cloud Control. The way audit statuses are
managed is configured when creating or editing a real-time monitoring rule.
When creating a rule, on the settings page of the wizard, the user has an option of
choosing whether all observations detected against this rule will get their audit status
manually from the user or automatically using connector integration with a Change
Request Management server.
When the user chooses to manually set audit status in a rule, there are two options
available:
• Default Audit status can be set so that all observations that are found against this
rule are by default unaudited, authorized, or unauthorized. Unaudited is the same
as saying they have not been reviewed and there has been no determination of
whether the observation is good or bad.
• The user can choose an informational event during manual authorizations. This is
used to create a new event of informational class in the Incident Manager when a
new observation bundle occurs. Based on this event, an event rule could be created
to send a notification based on the observation bundle or perform any other action
the Incident Manager can perform.
If the user chooses to use automatic reconciliation using a Change Request
Management server, then steps must be taken to set up the Cloud Control connector
for Change Management. This is explained in detail in the later section, Additional Setup
for Real-time Monitoring.
Once the connector has been configured, a drop-down list appears in this settings step of the rule creation wizard so you can choose which connector to use for this rule. Based on the attributes of the observation and the observations defined in any open change requests, the observation is automatically determined to be authorized if there are open matching change requests; otherwise, it is considered unauthorized.
1. Idle timeout: The amount of time that must pass with no further user activity, measured from the user's last observed activity against a specific rule on a given target. The use case for this is that a user
logs into a server, starts making a few file changes and then no more file changes
are made after 15 minutes. This 15 minute waiting period is the idle timeout. After
this idle timeout period is reached, the current observation bundle is closed and
sent to the Cloud Control server. The next time a new observation is detected, a
new group will be started and the process starts over.
2. Maximum lifespan of a group: If a user were to set the idle timeout to 15 minutes
and a user on a host was making one file change every 10 minutes for an
indefinite period of time (say through a script or even manually), the observation
bundle will never close and therefore never get sent to the Cloud Control server
for reporting/processing. Setting the maximum lifespan of a group tells the
Management Agent to only allow a group to accumulate for a maximum specific
time. For example, this maximum lifespan may be 30 minutes or an hour.
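As a worked illustration with hypothetical values: with an idle timeout of 15 minutes and a maximum group lifespan of 60 minutes, a script that changes a monitored file every 10 minutes never lets the idle timeout expire, but the observation bundle is still closed and sent to the Cloud Control server once the 60-minute maximum lifespan is reached; the next detected change then starts a new bundle.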
results immediately. Capturing and bundling results together is more important for
understanding what is happening and making observations easier to manage.
When an observation becomes part of two or more bundles on the Management Agent (because the same facet is used in multiple rules, or because multiple targets on the same host monitor the same facet with shared entities), then whenever the first bundle hits its ending criteria (idle timeout, group maximum life, or maximum group entries), all of the bundles containing these shared observations are closed at the same time.
To control observation bundle lifetimes, see the section above on how to create Real-
time Monitoring Rules and set the appropriate settings on Settings page of the rule
creation wizard.
Selecting the Types of Actions You Want to Monitor
When creating a rule, you can decide which types of observations or user actions are
important to be monitored and reported back to Cloud Control. The Management
Agent has a specific set of observations that are possible for each entity type. Some
options may be specific to certain operating system platforms or versions. You can
select one or more of these options.
The observation types that you may be able to select can also be limited by the target
properties/criteria selected for the rule. For instance, some operating systems may not
have every monitoring capability for files. When building the list of available observation types, the target type, entity type, and target properties are all taken into consideration.
To select the type of observations you want to monitor in a rule, follow these steps:
1. If you want to select observations for a currently existing rule, click on the Real-
time Monitoring rule in the Rules table and then click Edit.
Cloud Control opens the Edit Rule: Real-time Monitoring wizard and displays the
Details page. Move to the Observations page.
If you want to select observations while creating a new rule, click Create to create a
new rule. Cloud Control opens the Create Rule: Real-time Monitoring wizard and
displays the Details page. After entering relevant information on the Details and
Facets pages of the wizard, move to the Observations page.
2. On the Observations page, select one or more activities to be observed from the list
that appears. During target association for this rule, auditing must be enabled to
capture selected details. It is important to note that different operating systems and
different capabilities have specific auditing requirements.
3. In the Parameters section, if there are additional observation parameters, you can
review and update the parameters.
• All Rules are visible in the global rule library and are visible to all users.
• To share this user-defined compliance standard rule with other privileged users,
provide the XML schema definition (using the Export feature) so they can import
the compliance standard rule to their Management Repository.
• You can minimize scrolling when reading the Description, Impact, and
Recommendation information by restricting the text to 50 characters per line. If more
than 50 characters are needed, start a new line to continue the text.
• Look at the context-sensitive help for information for each page in the Compliance
Standard Rule wizard for specific instructions.
• If you choose to monitor OS File entity type, you will notice one action type "File
Content Modified (successful) - Archive a copy of the file [Resource Intensive]". If
you select this option, every time a file modify action is observed, a copy of the file
will be archived locally on the Management Agent. This can be used later to
visually compare what changed between two versions of the file. There is an
additional setting to set how many archived copies to store on the Actions to
Monitor page of the rule creation wizard.
• When you add a facet inline with the create rule wizard either as a monitoring facet
or as a filtering facet, if you cancel the rule wizard, the newly created facets will
still exist and be usable in future rules. You can delete these facets by going to the
facet library. Real-time monitoring facets are discussed in a separate section later in
this document.
5. Click Continue.
6. On the next screen, you are asked to fill out several key attributes of the rule:
• Rule Name
Provide a unique name for the rule.
on targets yet. After you promote a rule to production, you cannot change it
back to development.
• Severity
The rule can have a severity level, which could be Critical (serious issue if this
rule is violated), Warning (not a serious issue if violated), or Minor Warning (a
minor issue if violated). Severity impacts the compliance score along with the
importance that may be set for this rule when it is added to a compliance
standard.
• Applicable To
Target type this rule works against.
• Description
Description of the rule
• Rationale
Text describing what this rule is checking and what the effect of a violation of
this rule may be.
• Recommendation
Recommendation text describing how to fix a problem when a violation occurs.
• Reference URL
URL to a document that describes the compliance control in more detail. Many
times these documents may be stored in a content management system.
• Keywords
Keywords can be assigned to a rule so that you can control how data is
organized in various reports.
7. Click Next.
9. Enter Compliant and Non-Compliant Message. These are the messages that will be
shown in regards to the evaluation. When a violation occurs, the Non-Compliant
message will be the string describing the event under the Incident Management
capabilities.
11. The Test screen allows you to test your rule. You can choose a target in your
environment and click the Run Test button. Any issues with the rule will be
displayed and you can resolve them before saving the rule.
13. The final page allows you to review everything you have configured for this rule.
Ensure that everything is correct and click the Finish button to save the rule.
5. Click Continue.
6. On the next screen, you are asked to fill out several key attributes of the rule:
• Rule Name
Provide a unique name for the rule.
• Severity
The rule can have a severity level, which could be Critical (serious issue if this
rule is violated), Warning (not a serious issue if violated), or Minor Warning (a
minor issue if violated). Severity impacts the compliance score along with the
importance that may be set for this rule when it is added to a compliance
standard.
• Applicable To
Target type this rule works against.
• Description
Description of the rule
• Rationale
Text describing what this rule is checking and what the effect of a violation of
this rule may be.
• Recommendation
Recommendation text describing how to fix a problem when a violation occurs.
• Compliant Message
This message displays when the target is compliant.
• Non-Compliant Message
When a violation occurs, the Non-Compliant message will be the string
describing the event under the Incident Management capabilities.
• Reference URL
URL to a document that describes the compliance control in more detail. Many
times these documents may be stored in a content management system.
• Keywords
Keywords can be assigned to a rule so that you can control how data is
organized in various reports.
7. Click Finish.
4. In the Create Rule popup, select Missing Patches Rule as the type.
5. Click Continue.
6. On the next screen, you are asked to fill out several key attributes of the rule:
• Rule
Provide a descriptive name for the rule, for example, DBMS Patches.
This is a required field.
– Development
Indicates a compliance standard rule is under development and that work on
its definition is still in progress. While in development mode, a rule cannot
be referred from production compliance standards. Use Development until
the rule has been developed and tested.
– Production
Indicates a compliance standard rule has been approved and is of
production quality.
You can edit a production rule to create a draft, update and test the draft rule, promote it to production, and then overwrite/merge it back into the original production rule. All compliance standards that refer to the original production rule then see the new definition of the rule (after the overwrite).
• Severity
Minor Warning, Warning, Critical
• Applicable To
Type of target the rule applies to, for example, Database Instance. This is a
required field.
– Version Name
– Platform Name
– Lifecycle State
• ReferenceUrl
This URL should reference information that is pertinent to this rule.
• Keywords
Add keywords to further categorize the compliance standard rule. Choose one or more keywords that closely match your rule's intent.
7. Click Next.
Element: Compliant Message
Description: A compliance standard rule is compliant when the SQL query does not return result data. If a user has preferences to be notified when a compliance standard rule is cleared, this is the message he or she will receive for compliance.
Default: Compliance standard rule <name of compliance standard rule> is compliant.
You can override the default text.

Element: Non-Compliant Message
Description: A compliance standard rule is non-compliant when the SQL query returns result data. If no data is returned, the compliance standard rule is compliant. This message is used in notification rules. If a user has preferences to be notified for compliance standard rule violations, this is the message he or she will receive for violation.
Default: Compliance standard rule <name of compliance standard rule> is not compliant.
You can override the default text.
9. Click Next.
10. On the Test page, validate whether a patch was applied to a particular target. This
test evaluation is not stored in the Management Repository and is a one-time run. If
there are no errors, the compliance standard rule is ready for publication or
production.
Note: You can have test results that intentionally show violations. For example, if
you are testing target_type equal to host and you are evaluating a host target, then
you will see violation results.
Rule Violations
Provides the details of a compliance standard rule violation. This is the same
information you see on the Violation Details drill-down page in the Compliance
Standard Rules Errors page.
12. On the Review page, verify that the information on the page reflects what you
intended to supply in the definition.
If corrections are needed, click Back and make the needed corrections.
Note: The compliance standard rule is not defined until you click Finish.
Tips
• Once the compliance standard rule has been created, it is not automatically
evaluated. Consider adding the compliance standard rule to a compliance
standard.
• Assign a corrective action to the rule after the rule has been created.
– On the Compliance Standard Rules tab, highlight the rule you just created.
– From the Assign Corrective Action popup, select an existing corrective action and
click OK.
4. In the Create Rule popup, select Configuration Consistency Rule as the type.
5. Click Continue.
6. On the next screen, you are asked to fill out several key attributes of the rule:
• Rule
Provide a descriptive name for the rule, for example, DBMS Consistency.
This is a required field.
– Development
Indicates a compliance standard rule is under development and that work on
its definition is still in progress. While in development mode, a rule cannot
be referred from production compliance standards. Use Development until
the rule has been developed and tested.
– Production
Indicates a compliance standard rule has been approved and is of
production quality.
You can edit a production rule to create a draft, update and test the draft rule, promote it to production, and then overwrite/merge it back into the original production rule. All compliance standards that refer to the original production rule then see the new definition of the rule (after the overwrite).
• Severity
Minor Warning, Warning, Critical
• Description
Provide complete and descriptive information.
• Applicable To
Type of target the rule applies to, for example, Database Instance. This is a
required field.
• Comparison Template
This is a required field.
– Operating System
– Version
– Platform
• Rationale
Provide complete and descriptive information about the importance of the rule.
• Keywords
Add keywords to further categorize the compliance standard rule. Choose one or more keywords that closely match the rule's intent.
7. Click Finish.
4. In the Create Rule popup, select Configuration Drift Rule as the type.
5. Click Continue.
6. On the next screen, you are asked to fill out several key attributes of the rule:
• Rule
Provide a descriptive name for the rule, for example, DBMS Drift.
This is a required field.
– Development
Indicates a compliance standard rule is under development and that work on
its definition is still in progress. While in development mode, a rule cannot
be referred from production compliance standards. Use Development until
the rule has been developed and tested.
– Production
Indicates a compliance standard rule has been approved and is of
production quality.
You can edit a production rule to create a draft, update and test the draft rule, promote it to production, and then overwrite/merge it back into the original production rule. All compliance standards that refer to the original production rule then see the new definition of the rule (after the overwrite).
• Severity
Minor Warning, Warning, Critical
• Applicable To
Type of target the rule applies to, for example, Database Instance. This is a
required field.
• Comparison Template
This is a required field.
• Source Configuration
– Latest Configuration
– Saved Configuration
– Operating System
– Version
– Platform
• Keywords
Add keywords to further categorize the compliance standard rule. Choose one or more keywords that closely match the rule's intent.
7. Click Finish.
6. Click Save.
3. Highlight the rule you want to edit and click the Edit button.
4. Step through the screens of the rule creation wizard as previously described when
creating a rule.
5. Click Save.
Usage Notes
• For repository rules, you can change all the rule properties except the Rule Name,
State (if it is already production), and Applicable To.
For real-time monitoring rules, you cannot change Rule Name, State (it is already
production), Applicable To, Target Property Filters, and Entity Type.
• If you change the critical rule properties for a repository rule, for example, rule
query, violation condition, parameters, or severity, then editing the rule invalidates
the results for compliance standards which refer to the rule. The compliance
standards compliance score will be reevaluated at the next rule evaluation.
• For rules in production mode, you have a choice to create and save a draft of the
rule or to overwrite the existing production rule. If you create a draft, you can edit
the draft rule, at a later point in time, test it, and then overwrite and merge it back
to the original production rule the draft was made from. Note: You cannot include
a draft rule into any compliance standard.
• For WebLogic Server Signature rule or Real-time Monitoring rule, if the rule being
edited is referred to by a compliance standard which is associated with a target,
then the rule definition will be deployed to the Management Agent monitoring the
target, so that the Management Agent can evaluate the latest definition of the rule.
In the case where the Management Agent is down or unreachable, the rule
definition changes will be propagated to the Management Agent as soon as the
Management Agent is available.
6. The XML representation of the compliance standard rule is generated and placed in
the directory and file you specified.
4. Provide the file name from which the rule definition (as per Compliance Standard
Rule XSD) will be imported. Specify whether to override an existing definition if
one already exists. The override option is not available to Real-time monitoring
rules.
5. Click OK.
3. To view the details of a particular standard rule, highlight the rule and click Show
Details.
3. In the Search portion of the page, provide criteria to use to narrow the search.
By default, all the compliance standard rules in the compliance standard rule
library appear in the results table. However, you can specify a set of search criteria
and then perform a search that will display only the compliance standard rules that
meet those criteria in the results table.
For example, if you choose Security in the Category list, contains in the Compliance
Standard Rule list, "port" in the adjacent Compliance Standard Rule text field, Host
in the Target Type list, and then click Go, Cloud Control displays only the
compliance standard rules for the host security category that contain "port" in their
names.
4. Click Search.
1. From the Enterprise menu, select Monitoring, then select Corrective Actions.
a. Select SQL Script in the Create Library Corrective Action field, and click Go.
b. On the General tab, type a name for the corrective action (for example, CA1),
provide a description, and select Compliance Standard Rule Violation as the
Event Type. Select Database Instance as the Target Type.
You can make similar changes to any parameter. Ensure that the parameter
name matches the name of the column in the SQL query.
d. Select the corrective action you just created and click Publish.
3. From the Enterprise menu, select Compliance, then select Library. Choose a
database compliance standard rule with the rule type of agent-side or repository.
In the Actions menu, select Assign Corrective Action. Select a corrective action
and click OK.
You will then see the corrective action in the Show Details page for the
compliance standard rule.
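As a hedged sketch of the kind of SQL script such a corrective action might contain (the profile and limit used here are hypothetical; write the script to remediate whatever condition your rule actually checks):
-- Hypothetical corrective action for a rule that flags database profiles
-- with an unlimited password lifetime: enforce a 60-day limit instead.
ALTER PROFILE default LIMIT PASSWORD_LIFE_TIME 60;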
Automatic Corrective Action
To create a corrective action that is automatically triggered when the violation occurs,
follow these steps:
1. From the Setup menu, select Incidents, then select Incident Rules.
2. On the Incident Rules - All Enterprise Rules page, click Create Rule Set. Provide
a name for the rule, select All targets in the Targets region, and click Create... in
the Rules region.
3. On the Select Type of Rule to Create dialog box, select Incoming events and
updates to events. Click Continue.
5. Select either All events of type Compliance Standard Rule Violation or Specific
events of type Compliance Standard Rule Violation.
7. On the Create New Rule: Add Actions page, click Add. On the Add Conditional
Actions page, click Select corrective action. Select the corrective action. Click
Continue.
8. In the Create New Rule: Add Actions page, click Next. Provide a description on the Create New Rule: Specify Name and Description page and click Next.
10. Click Save. Note that newly added rules are not saved until the Save button is
clicked. After you click Save, verify that the rule set entity has added the new
incident rule by reviewing the details.
• Operations on Facets
• OS File
• OS Process
• OS User
• If two patterns have the same specificity and one is an include while the other is an exclude, the include wins.
• Patterns that are more specific override less specific ones (as in the previous example, where the exclude pattern for dummy.cfg overrides the inherited include of c:\dummy.cfg from the first pattern).
• If there are no patterns at all, exclude * is assumed (that is, the facet matches no entities).
For each pattern that you add to a facet, an optional description field is available to let you document the pattern.
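A hedged illustration with hypothetical patterns: a facet that includes c:\config\* and excludes c:\config\dummy.cfg monitors every file under c:\config except dummy.cfg, because the exclude pattern is more specific than the include it inherits from. If an include and an exclude pattern are equally specific, the include wins, and a facet with no patterns at all behaves as exclude * (it matches no entities).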
• Deleting a Facet
4. You can choose which columns to display in the table by clicking View and then
choosing Columns. You can either choose to Show All columns or you can select
individually the columns you want to appear in the table. You can reorder the
columns by clicking Reorder after you click View and then changing the order in
which the columns appear by moving them up or down using the arrow keys.
5. You can expand the area of the page titled "Search" to choose the search criteria to
apply to the view of facets.
6. You can view a history of a selected facet by choosing it from the table and then
clicking History. The View History page appears.
To view the facet library in browse mode, follow these steps:
Cloud Control displays the Facet Library page that lists all existing facets along
with their target type, entity type, and other details about the facet. From this page
you can perform administrative tasks such as create, create like, view, delete,
import, and export if you have the audit author role.
The Facet Library page that is shown is split into two views. The left side shows the facet folder hierarchy; the right side lists the facets in the folder that is selected on the left. The facet table displays the Facet Name, Author, Target Type, Entity Type, Rules Using the facet, Description, and the Last Updated time of the facet.
You can see the details of any facet by selecting it from the table and clicking Show
Details.
4. You can choose which columns to display in the table by clicking View and then
choosing Columns. You can either choose to Show All columns or you can select
individually the columns you want to appear in the table. You can reorder the
columns by clicking Reorder after you click View and then changing the order in
which the columns appear by moving them up or down using the arrow keys.
5. The only filtering allowed on this screen is by selecting a different folder. You will
always see the facets that are in the selected folder only.
6. You can view a history of a selected facet by choosing it from the table and then
clicking History. The View History page displays.
Cloud Control displays the Facet Library page that lists all existing facets along with
their target type, entity type, and other details about the facet. From this page you
can perform administrative tasks such as create, create like, view, delete, import,
and export. There are two views when looking at this page, search or browse. In
the search view, all facets are listed in a flat list. In the browse view, facets are
grouped in folders to make it easier to find facets.
4. Choose which facet folder this facet should belong to. If you have not yet created
the folder for it, you can add it to the Unfiled folder. This folder always exists and
cannot be removed. Later you can move the facet to a new folder you create using
drag-and-drop in the UI from the Unfiled folder to the new folder.
5. Enter the name you want to assign to the facet in the Facet Name field, then choose
the target type for the facet you are creating from the drop-down list in the Target
Type field. Once you choose the Target Type, you can enter values in the Target
Property Filter fields.
The target properties you add here limit the targets to which this facet can
ultimately be assigned. For instance, you could define a facet to work only for
Linux version 5 on 64-bit servers.
6. Choose the Entity Type from the drop-down. This list will be limited depending on
the target type chosen previously.
8. The Create Facet page contains two tabs you can use to enter the patterns and
parameters for the facet you create. Use the Patterns tab to add patterns to be either
Included or Excluded. Use the Add or Delete buttons to add additional patterns or
to remove a selected pattern from the facet definition. There is also a Bulk Add
button that brings up a popup window where you can paste a text list of patterns
rather than entering each one in the UI manually.
9. If you are defining a facet for the OS File entity type, you can optionally browse a
host to find the files you want to monitor. The right side of the page has
an area where you can choose the host to use as the basis for looking for files. In the
pattern area, you can click the Browse button to interactively browse the files on
the selected host and select the files to include in the pattern. After selecting
patterns from a host, you can continue to manually add more or edit existing ones.
10. Use the Parameters tab to view parameters that are part of the new facet. Oracle
provides a set of predefined parameters based on target parameters (such as
ORACLE_HOME) that are defined out of the box. These parameters do not require
a default value and are always set according to the target's value. Parameters will
appear under this tab when they are used in a pattern. To start using a new
parameter, simply add the parameter to the pattern by enclosing it in curly
brackets {}. For instance, a pattern of {INSTALL_DIR}\config\main.conf would
result in a parameter of INSTALL_DIR being listed under this tab. All parameters
must have a default value that will be automatically used for all targets against
which this facet is used. This value can be overridden when associating a
compliance standard containing a real-time monitoring rule to one or more targets.
The Parameters tab displays the Parameter Name, Default Value, Used in Pattern,
and Description. Used in Pattern indicates whether the parameter is currently in use.
A parameter may have been defined at some point in a pattern and then removed;
the parameter will still be available for use again at a later time even if it is not
currently in use. If the entity for which you are adding a pattern
includes a "{" or "}", you can escape these characters by using "{{}" and "{}}" in the
pattern respectively. These will not be counted as parameters.
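As a brief illustration (the file names below are hypothetical, and INSTALL_DIR is the sample parameter introduced above), an OS File facet might combine a parameterized pattern, a pattern with escaped braces, and an exclude pattern:
Include: {INSTALL_DIR}\config\main.conf
Include: {INSTALL_DIR}\config\override{{}prod{}}.conf
Exclude: {INSTALL_DIR}\config\*.bak
The second pattern matches a file literally named override{prod}.conf, because {{} and {}} stand for the literal { and } characters and are not treated as parameters.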
11. A third tab, Time Window, is only available if the facet being created or edited is of
entity type Time Window. A facet of this entity type is only usable as a filter in a
Real-time monitoring rule. For instance, you can specify in the rule that you only
want to monitor a facet during a specific time, for example, "Production Hours". In
the Duration section, choose either a 24 Hour Interval or Limit Hours to, which
allows you to enter a Start time and an Interval in Hours and Minutes. In the
Repeating section, you can choose either All the time or you can select Repeat and
then choose which days of the week to repeat the operation.
• Create: Allows you to create a new folder. A popup will display asking for the
folder name to create. You will also have the choice of making this new folder a top
level folder or adding it as a child to the currently selected folder.
• Delete: Allows you to delete a user-defined folder. You cannot delete a folder that
has facets or other folders inside of it.
You cannot delete, rename or move out-of-the-box folders that are populated by
Oracle.
There is a default folder called Unfiled. Any time a facet is created or imported
without specifying a folder, it goes into this Unfiled folder.
You can move facets into folders by finding the facet you want to move on the right
side, selecting it, and dragging it to the folder on the left where you want to place it.
The facet moves to that folder. A facet can only belong to one folder at a time, and it
must always belong to a folder (even if it is just the Unfiled folder). You can also
select the facet and click the Move button. A popup window appears letting you
choose which folder to move the facet to.
Folders have no impact on observation analysis or compliance score. They are only
used in the Real-Time Monitoring Facets library screen to make it easier to manage a
very large number of facets.
Cloud Control displays the Facet Library page that lists all existing facets along with
their target type, entity type, and other details about the facet. From this page you
can perform administrative tasks such as create, create like, view, delete, import,
and export.
3. Select the facet from the list of facets in the table on the page.
4. Click Delete to delete the facet. You will be prompted to confirm that you want to
delete the facet.
An important limitation to the Create Like function is that you cannot change the
target type or entity type. The patterns contained in the facet may be dependent on
target type or entity type. If you want to use Create Like and change these attributes,
you should use Export to export the original facet, edit the name, target type, entity
type in the XML, and then import as a new facet.
To use create like to create a new facet, follow these steps:
Cloud Control displays the Facet Library page that lists all existing facets along with
their target type, entity type, and other details about the facet. From this page you
can perform administrative tasks such as create, create like, view, delete, import
and export.
3. Choose the facet from the facet table that you want to use as the basis for the new
facet you want to create.
Cloud Control displays the Create Facet page. All the values from the facet you are
cloning are already entered. Use the page to edit the values for the new facet and
click OK.
It is important to understand that if the original base facet you used in the create
like activity is changed, that change will not be reflected in the newly created facet.
There is no relationship maintained when using Create Like.
5. For more information about using the Create Facet page, see Creating and Editing
Facets.
3. Select one or more facets from the list of facets on the Facet Library page that you
want to export and then click Export.
4. On the Open dialog box, you can choose to open or save the facet XML file using an
XML editor of your choice and then either edit or save the file to another location.
Cloud Control displays the Facet Library page that lists all existing facets along
with their target type, entity type, and other details about the facet. From this page
you can perform administrative tasks such as create, create like, view, delete,
import, and export.
3. Click Import and choose the facet XML file you want to import into the Facet
Library.
4. Cloud Control imports all facets specified in the imported XML file. You can then
edit the facet or use any other action on it as you would any other facet in the
library.
Cloud Control displays the Facet Library page that lists all existing facets along with
their target type, entity type, and other details about the facet. From this page you
can perform administrative tasks such as create, create like, view, delete, import
and export.
3. Choose the facet from which you want to create a new facet with modified
attributes. Click Create Like.
4. Enter a new Facet Name and change whatever attributes you need to create a new
facet based on the previous one.
46.6 Examples
This section provides examples of using compliance. Examples include:
• Suppressing Violations
• Clearing Violations
• Associate a target
• View results
To create a custom configuration:
2. From the Configuration Extensions page, click Create. The Create Configuration
Extension page appears.
b. In the Files & Commands section, type the Default Base Directory. [Use /tmp
as the directory.]
This is an example. For a real target it should be the directory containing the
target's configuration files.
Note: All files collected by custom configurations MUST NOT change on a
daily basis, but should only change very rarely due to an explicit action by an
administrator.
c. Click Add.
- In the Type column, select File.
- In the File/Command column, type foo.xml. The Alias column is
automatically filled in with foo.xml.
Note: You can use any file or files, not just XML and not just "foo.xml". Custom
configuration supports many files and corresponding parsers.
b. On the Search and Select: Targets page, highlight the host target where
file /tmp/foo.xml was created and click Select.
4. On the Submit Pending Deployment Actions popup, select Yes. This action will
submit the deployment action.
On the Deployments page, click Refresh Status to refresh the status of the
deployment until the Status column displays "Successfully deployed".
5. Now that the deployment is submitted, click Cancel to exit the page. (Note:
Clicking Save instead of Apply earlier would have exited the page right after the
submission of the deployment action.)
To create a custom-based repository rule based on custom configuration collection:
2. On the Compliance Library page, click the Compliance Standard Rules tab.
3. Click Create.
a. On the Create Rule popup, select Repository Rule and click Continue.
b. On the Create Rule: Repository Rule: Details page, type in the Rule name, for
our example, compliance_css_rule.
c. For the Compliance Rule State, select Development, then select Minor
Warning for the Severity. For Applicable To: select Host. Click Next located at
the top-right of the page.
4. On the Create Rule: Repository Rule: Check Definition (Query) page, click Model
Query. The New Search Criteria page appears.
a. Select compliance_css (Parsed Data) from the Configuration Item menu under
"Commonly Used Search Criteria".
b. Under the Host section and Parsed Data subsection, type foo.xml in the Data
Source contains field. For the Attribute, select the is exactly comparison
operator and type foo to refer to the "foo" attribute in our sample file. (Note:
The % sign can also be used as a wildcard character in these expressions for
Data Source and Attribute.)
c. Click Search to see the rows returned for this filter. A table displays the data
with value 1 for attribute foo in our file.
d. Click OK.
e. The Create Rule: Repository Rule: Check Definition (Query) page displays
again, this time showing the SQL Source.
f. Click Next. Note: In general, you could also update the query before
proceeding, if needed.
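As a rough orientation, the SQL source generated by Model Query for parsed custom configuration data is a query over the management repository's parsed-data view and has approximately the shape shown below. The view and column names (MGMT$CCS_DATA and its columns) are an approximation for illustration only; rely on the SQL Source displayed in the wizard rather than typing this in:
SELECT value, attr, container, data_source_name, info
  FROM mgmt$ccs_data
 WHERE data_source_name LIKE '%foo.xml%'
   AND attr = 'foo'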
5. The Create Rule: Repository Rule: Check Definition (Violation Condition) page
displays.
a. Check all the columns as Key columns (VALUE, ATTR, CONTAINER, and
DATA SOURCE NAME), except the INFO column.
b. In the Condition Type section of the page, select Simple Condition, and in the
Column Name select VALUE and change the Comparison Operator to equal
sign (=). In the Default Value column, type 1. Click Next.
6. In the Create Rule: Repository Rule: Test page, click the icon next to Target Name
field. The Search and Select: Targets popup appears. Find the host where the
custom configuration was deployed. Select it and click Select.
7. In the Create Rule: Repository Rule: Test page, click Run Test. When the test runs
successfully, you get a confirmation stating that the Run Test - Completed
Successfully.
You should see one violation after running the test because we specified a value of
"1" in step 5 above for the violation condition and our sample file has the value "1"
for attribute foo. Click Close.
9. In the Create Rule: Repository Rule: Review page, ensure that all the information
that you added is correct. Click Finish.
To create a compliance standard:
4. The compliance standard page displays with the information regarding the
compliance_css_cs compliance standard. Right-click on compliance_css_cs on the
left side and select the Add Rules... option in the right-click menu.
5. On the Include Rule Reference popup, select compliance_css_rule. Click OK. Click
Save to save the compliance_css_cs.
6. A confirmation message appears on the Compliance Library page stating that the
compliance standard has been created. Click OK.
To associate targets:
1. Select the compliance_css_cs that was just created. Click Associate Targets.
3. On the Search and Select: Targets page, select a target where /tmp/foo.xml is
present and click Select. Click OK.
You will then be prompted whether you want to Save the association or not. Click
either Yes or No. You will then get an Informational message stating that the
compliance standard has been submitted to the target for processing.
To view results:
2. Click the Violations tab associated with the compliance_css_rule. The target is
associated with one violation.
3. Click the rule node in the tree to see the Violation Events tab, then click this tab to
see the violation details for the rule. Click a row in the violations table to view
details of the violation.
3. Type a name for the extension, for example, DG0142 DBMS Privileged
action audit. You will use this name on the Check Definition page.
$UNAUTHENTICATED',
'CTXSYS','DBSNMP','DIP','DVF','DVSYS','EXFSYS','LBACSYS','MDDATA',
'MDSYS','MGMT_VIEW','ODM','ODM_MTR', 'OLAPSYS','ORDPLUGINS', 'ORDSYS',
'OSE$HTTP$ADMIN','OUTLN','PERFSTAT',
'PUBLIC','REPADMIN','RMAN','SI_INFORMTN_SCHEMA',
'SYS','SYSMAN','SYSTEM','TRACESVR','TSMSYS','WK_TEST','WKPROXY','WKSYS',
'WKUSER','WMSYS','XDB', 'OWBSYS', 'SCOTT', 'ORACLE_OCM', 'ORDDATA',
'APEX_030200',
'OWBSYS_AUDIT', 'APPQOSSYS', 'FLOWS_FILES')
and owner not in (select grantee from dba_role_privs
where granted_role='DBA')
• SQL
select distinct 'Application object owner account '||owner||' is not
disabled.' value
from dba_objects, dba_users where
owner not in ('ANONYMOUS','AURORA$JIS$UTILITY$',
'AURORA$ORB$UNAUTHENTICATED','CTXSYS','DBSNMP','DIP','DVF',
'DVSYS','EXFSYS','LBACSYS','MDDATA','MDSYS','MGMT_VIEW','ODM',
'ODM_MTR','OLAPSYS','ORDPLUGINS','ORDSYS','OSE$HTTP$ADMIN',
'OUTLN','PERFSTAT','PUBLIC','REPADMIN','RMAN',
'SI_INFORMTN_SCHEMA','SYS','SYSMAN','SYSTEM','TRACESVR', 'TSMSYS',
'WK_TEST','WKPROXY','WKSYS','WKUSER','WMSYS','XDB')
and owner in (select distinct owner from dba_objects where object_type <>
'SYNONYM')
and owner = username and upper(account_status) not like '%LOCKED%'
4. Click Continue.
5. On the Create Rule: Agent-side Rule: Details page provide the following
information (see Figure 46-11):
c. Severity: Critical
g. Click Next.
6. On the Create Rule: Agent-side Rule: Check Definition page, search for the
configuration extension and alias you defined earlier. See Figure 46-12.
Note: The configuration extension name and the alias name are concatenated
together to form the name in the Configuration Extension and Name field. For this
example, the complete name is: DG0142 DBMS Privileged action audit-DBMS
application object ownership.
Click Next.
7. On the Create Rule: Agent-side Rule: Test Page, search for a target, and then click
Run Test. A pop-up displays stating that the test is running. Click Close on the
Confirmation pop-up. See Figure 46-13.
Note: You can have test results that intentionally show violations. For example, if
you are testing target type equal to host and you are evaluating a host target, then
you will see violation results.
Click Next.
8. On the Create Rule: Agent-side Rule: Review page, ensure the information is as you
intended. If not, click Back and make the necessary corrections. When the
information is correct, click Finish. See Figure 46-14.
Note: The compliance standard rule is not defined until you click Finish.
Tips
• Once the compliance standard rule has been created, it is not automatically
evaluated. Consider adding the compliance standard rule to a compliance
standard.
• Assign a corrective action to the rule after the rule has been created.
– On the Compliance Standard Rules tab, highlight the rule you just created.
– From the Assign Corrective Action popup, select an existing corrective action
and click OK.
3. Click Continue.
4. On the Create Manual Rule page, provide the following information (see
Figure 46-15).
c. Severity: Warning
l. Click Finish.
3. Click Create. On the Create Compliance Standard pop-up, provide the following
(see Figure 46-16):
• Author: SYSMAN
• Click Continue
4. On the Compliance Standard: CS1 - DB Check page, right-click the standard in the
navigation tree. Select Add Rules. On the Include Rule Reference, select DBMS
application object ownership, DBMS application owner accounts, and DBMS
testing plans and procedures. See Figure 46-17. Click OK.
5. Click Save.
Associating the Compliance Standard to a Target
To associate the compliance standard to a target, perform the following steps:
3. Highlight the newly created standard (CS1 - DB Check) and click the Associate
Targets button.
4. On the Target Association for Compliance Standard: CS1 - DB Check page click
Add.
5. Choose one or more targets, for example, Oemrep_Database. See Figure 46-19.
2. In the Evaluation Results tab, locate the compliance standard named CS1 - DB
Check. Notice that there is a violation against the standard.
3. Select the compliance standard and click the Manage Violations tab.
Figure 46-23 Manage Violations Page Showing the Suppressed Violations Tab
1. From the Compliance menu, select Results. Select the CS1 - DB Check compliance
standard.
3. On the Manage Violations page, highlight the DBMS testing plans and procedures
rule.
5. On the Manage Violations page, highlight the rule and click the Manual Rule
Violations tab.
6. Select the rows and then click Clear Violations. On the Clear Violations
Confirmation pop-up, select either Clear Violations Indefinitely or Clear
Violations Until and specify a date. For completeness, provide a reason for
clearing the violation.
This chapter introduces Enterprise Data Governance and describes how to use the
feature to protect sensitive data. The chapter includes the following sections:
2. Aided by (but not limited to) the results of discovering database candidates, drill
down to the data within the tables and columns of databases to further identify
sensitive data.
3. Armed with the results of this discovery, flag columns as sensitive and identify
them within the context of an Application Data Model (ADM).
4. Select these columns within an ADM and apply masking formats to protect the
data in the testing environment.
• Virtual Private Database (VPD)–A database feature that enforces data access at the
row and column level, using security conditions to protect the data.
• Oracle Label Security (OLS)–A database feature that provides data classification
and controls access using security labels.
Metadata discovery checks for each security feature listed. The scan does not,
however, collect protection policy details, nor does it necessarily scan for all the
policies. Any protection policy found is sufficient to flag the database as potentially
sensitive. This strategy keeps the scan fast and lightweight.
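If you want to spot-check a database manually for the same kind of protection metadata, VPD policies are exposed through the standard DBA_POLICIES data dictionary view. The query below is only a manual illustration run by a suitably privileged user; it is not what the discovery job itself executes:
SELECT object_owner, object_name, policy_name, enable
  FROM dba_policies
 ORDER BY object_owner, object_name;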
• Review the results of sensitive discovery jobs (see Working with Sensitive Database
Discovery Results).
• Manage and review metadata discovery jobs (see Working with Metadata
Discovery Jobs).
• Manage and review data discovery jobs (see Working with Data Discovery Jobs).
• Click a number in a metadata column to see a pop-up list of items found. For
example, click the number in the Data Protections column to see which data
protections are in play for the database candidate.
• Click the database name itself to open the database instance home page.
2. Set the criteria for sensitive column types, application signatures, and data
protections.
For sensitive column type, select a row and click View Search Criteria to see
applicable criteria such as pattern matching, regex formatting and Boolean
condition.
When done, click Next to continue.
3. Select the targets on which you want to perform metadata discovery. First, select
the target type, then click Add to select the targets within a given type. Note that
you can include searches from the configuration search library as part of your
target search criteria.
You cannot select targets of a different type. If you select targets of one type and
then select targets of a different type, targets of the first selected type are
deselected.
When done, click Select to close the selection dialog, then click Next to continue.
4. Schedule the job. Provide a meaningful name and description. Set other parameters
as appropriate. Note that metadata discovery is a job you would typically want to
repeat on a rotating schedule to be vigilant in monitoring your databases for
sensitive data.
5. A confirmation message appears at the top of the page. Click the link to view job
details in the Jobs system. Refresh the Metadata Discovery Jobs page to see the
completed job.
1. Select a job in the top table to see the discovery results at the bottom.
2. Use the Show drop-down list to filter the display based on all databases evaluated
or only those with or without sensitive data.
3. Click View Discovery Results Detail to see matching metadata based on specified
criteria.
4. Click a number in a metadata column to see a pop-up list of items found. For
example, click the number in the Data Protections column to see which data
protections are in play for the database candidate.
5. Click the database name itself to open the database instance home page.
2. Click the search icon to select the database candidate on which you want to
perform data discovery. Note that you can include searches from the configuration
search library as part of your target search criteria.
3. Set the criteria for sensitive column types, application signatures, and data
protections.
For sensitive column type, select a given column row and click View Search
Criteria to see applicable criteria such as pattern matching, regex formatting and
Boolean condition. Set the number of rows you feel constitutes an adequate sample
size. Indicate whether to scan empty tables.
The data discovery job ignores empty tables on the basis that data is what makes a
column sensitive. You may, however, want to include empty tables in the
discovery search based on other factors such as column name and comment
patterns. While an empty table is defined as a table without data values, the
metadata discovery job might report some nonempty tables as empty, if the
statistics collection job has yet to run.
4. Specify schema and table parameters (those to include or exclude). Use pattern
matching to scope the searches. Alternatively, you can opt to include all of either or
both entities.
5. Schedule the job. Specify a meaningful name and description. Provide credentials
to access the database. Set the job schedule.
6. A confirmation message appears at the top of the page. Click the link to view job
details in the Jobs system. Refresh the Data Discovery Jobs page to see the
completed job.
1. Click the database name link in the job row to open the database instance home
page; click the job status link to open the job summary page in the Jobs system.
2. Optionally associate a database with either a new or existing ADM. Select a data
discovery job row, then click Assign Application Data Model and choose the
appropriate option.
3. Select a job in the top table to see the discovery results at the bottom. Review job
results by clicking the job criteria tabs. Expand tab contents as necessary to drill
down to the details.
4. Click the Sensitive Data Columns tab to see the origin and nature of the data in
the sensitive columns. As noted, if there is an ADM assigned, you can
interactively set the sensitivity status by selecting a row and choosing a status
from the Set Sensitive Status drop-down menu.
Use the information in the table to inform your decision to declare a column
sensitive. For example, the sample data and columns matching the criteria both in
name and as a percentage of data are strong indicators of the column's sensitivity.
If there is no ADM assigned to the data discovery job, sensitivity status is
disabled, and the relevant schema is displayed in place of an application.
5. Click the Application Signatures tab to see database objects that uniquely identify
the application.
6. Click the Objects with Data Protection Policies tab to see the specific objects the
job discovered that are protected by supported protection policies.
Set sensitive column status on the discovered objects:
c. Click the List Columns button to display all the columns in the table covered
by the protection policy.
d. Set status to sensitive and select an associated sensitive column type for those
columns you consider sensitive within the application.
1. Open the Application Signature link from the Enterprise Data Governance
dashboard.
3. Click Add and select from the available objects to include in the signature. The
name provided for any of these object types can be specified explicitly or with a
pattern (for example, HR%).
4. Repeat Step 3 to include additional objects in the signature. Remember that all
signature objects must be found in the database for there to be a match.
The editor window closes and the signature appears in the table on the Application
Signature page. The signature can now be used as search criteria for metadata
discovery and data discovery jobs.
The following are core capabilities of Change Management that allow developers and
database administrators to manage changes in database environments:
• Schema Baseline—A point-in-time capture of the definitions of a database and its
associated database objects.
• You can specify schemas and object types to capture. For example, you can capture
all Tables, Indexes and Views in schemas APPL1 and APPL2. This form of scope
specification is appropriate when there is a well-defined set of schemas that contain
your application objects. In addition to schema objects, you can also capture non-
schema objects (such as Users, Roles and Tablespaces) and privilege grants to Users
and Roles.
• You can specify object types to capture and schemas to exclude. This form of scope
specification captures objects that are contained in all schemas other than those you
specify. For example, you can capture all object types in schemas other than
SYSTEM and SYS. This form of scope specification is appropriate when you want
to capture all database objects, with the exception of objects contained in Oracle-
provided schemas. As with the first form of scope specification, you can also
capture non-schema objects and privilege grants.
• Finally, you can capture individual schema objects by specifying the type, schema
and name of each object. This form of scope specification is appropriate when you
want to capture a few specific objects, rather than all objects contained within one
or more schemas. While capturing individual schema objects, you can also capture
non-schema objects and privilege grants.
If you include a non-schema object type, such as User or Role, in a scope specification,
all objects of that type are captured. There is no way to capture individual non-schema
objects.
• Selecting a baseline object and specifying "Generate DDL" displays the DDL used
to create the object.
• Selecting a baseline version and specifying "Generate DDL" generates the DDL for
all objects in the baseline version. While an effort is made to create objects in the
correct order (for example, creating tables before indexes), the resulting DDL
cannot necessarily be executed on a database to create the objects in the baseline
version. For example, if you capture all the schema objects contained in schema
APPL1, then try to execute the baseline version DDL on a database that does not
contain User APPL1, the generated DDL will fail to execute.
Baseline versions are also used with other Database Lifecycle Management Pack
applications, such as Compare and Synchronize. You can compare a baseline version
to a database (or to another baseline version). You can also use a baseline version as
the source of object definitions in Synchronize, allowing you to re-create the
definitions in another database.
• To see what has changed between a version and the version that precedes it, select
the version and specify "View Changes Since Previous Version." The display shows
which objects have changed since the previous version, which have been added or
removed, and which are unchanged. Selecting an object that has changed displays
the differences between the object in the two versions.
• To see how an individual object has changed over all the versions of the baseline,
select the object and specify "View Version History." The display identifies the
versions in which the object was initially captured, modified, or dropped. From
this display, you can compare the definitions of the object in any two baseline
versions.
• Transferring baselines between two Cloud Control sites with different repositories.
• Offline storage of baselines. Baselines can be exported to files, deleted, and then
imported back from files.
You can select a schema baseline or a version and then export it to a file. The system
uses Data Pump for export and import. The dump files and log files are located in the
Cloud Control repository database server host. They can be located in directories set
up on NFS file systems, including file systems on NAS devices that are supported by
Oracle.
2. Create a directory object as the alias for a directory on the repository database
server's file system where the baselines are to be exported or where the import
dump file is stored.
The newly created directory will be available for selection by Cloud Control
administrators for export and import of schema baselines. Data pump log files from
the export and import operations are also written to the same directory.
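If you prefer to create the directory object directly in SQL rather than through the console, the standard statements look like the following. The directory name, file system path, and grantee are placeholders; grant read and write to whichever repository account performs the export and import:
CREATE OR REPLACE DIRECTORY baseline_exp_dir AS '/u01/app/oracle/baseline_exports';
GRANT READ, WRITE ON DIRECTORY baseline_exp_dir TO sysman;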
During import, new values can be set for name, owner, and source database. Super
administrators can set another administrator as the owner at the time of import.
The export operation does not export job information associated with a baseline.
Hence, on import, the job status is unknown.
For non-super administrators, the following applies:
• Non-super administrators can export their own baselines. They can also export a
version of baseline owned by another administrator, provided they have the
privilege to view the version and see the list of schema objects in that version.
• At the time of import, a non-super administrator must become the owner of the
baseline being imported. A non-super administrator cannot set another
administrator as the owner. If the baseline in the import dump file was owned by
another administrator, its new owner is set to the logged-in non-super
administrator at the time of import.
• View privileges granted on the baseline to non-super administrators are lost during
import and cannot be re-granted after the import, since there is no associated job
information.
• When the source is a database, object definitions are taken directly from the
database at the time the comparison version is created.
• When the source is a baseline, object definitions are taken from the specified
baseline version. Using a baseline version allows you to compare a database (or
another baseline version) to the database as it existed at a previous point in time.
For example, a baseline version might represent a stable point in the application
development cycle, or a previous release of the application.
For baseline sources, there are various ways to specify the version to be used.
• If you want a specific baseline version to be used in all versions of the comparison,
specify the baseline version number. This is appropriate for comparing a well-
defined previous state of the database, such as a release, to its current state.
• You can also request that the latest or next-to-latest version be used in the
comparison. If you specify "Latest," you can also request that the baseline version
be captured before the comparison takes place. This option allows you to capture a
baseline and compare it to the other source in a single operation. For example,
every night, you can capture a baseline version of the current state of a
development database and compare it to the previous night's baseline, or to a fixed
baseline representing a stable point in development.
Scope Specification
The scope specification for a schema comparison identifies the objects to compare in
the left and right sources. Creating a comparison scope specification is the same as
creating a baseline scope specification, described in the "Schema Baselines" section. As
with baselines, you can specify object types and schemas to compare, or individual
objects to compare.
Schema Map
Normally, schema objects in one source are compared to objects in the same schema in
the other source. For example, table APPL1.T1 in the left source is compared to
APPL1.T1 in the right source.
However, there may be cases where you want to compare objects in one schema to
corresponding objects in a different schema. For example, assume that there are two
schemas, DEV1 and DEV2, which contain the same set of objects. Different application
developers work in DEV1 and DEV2. The optional schema map feature allows you to
compare objects in DEV1 to objects with the same type and name in DEV2.
To add entries to the schema map, expand the "Mapped Objects" section of the
comparison "Objects" page. You can create one or more pairs of mapped schemas.
Each pair designates a left-side schema and a corresponding right-side schema.
When using a schema map, you can compare objects within a single database or
baseline version. In the example above, DEV1 and DEV2 can be in the same database.
You specify that database as both the left and right source, and supply the schema
map to compare objects in DEV1 to those in DEV2.
Comparison Options
You can select several options to determine how objects are compared. These options
allow you to disregard differences that are not significant. The options include the
following:
• "Ignore Tablespace" and "Ignore Physical Attributes" – These two options allow
you to compare stored objects without regard to the tablespaces in which they are
stored or the settings of their storage-related attributes. This is useful when you are
comparing objects in databases having different size and storage configurations,
and you are not interested in differences related to the storage of the objects.
• "Partitioned Objects: Ignore High Values" — Tables that are otherwise the same
might have different partition high values in different environments. Choose this
option to ignore differences in high values.
• "Compare Statistics" — Choose this option to compare optimizer statistics for tables
and indexes.
• "Ignore Table Column Position" — Choose this option if tables that differ only in
column position should be considered equal.
• Identical – The object is present in both left and right sources, and is the same.
• Not Identical – The object is present in both left and right sources, and is different.
The page lists all versions of a comparison and shows the number of objects in each
state within each version. On the Comparison version page, you can see the objects in
each state individually. Objects that are "Not Identical" can be selected to view the
differences, and to generate DDL for the left and right definitions.
You can take two further actions to record information about objects in a comparison
version:
• You can add a comment to an object. For example, the comment might explain why
two objects are different.
• You can ignore the object. Ignoring the object removes it from lists of comparison
version objects. You might ignore an object that is different if you decide that the
difference is not important.
Scope Specification
Defining a scope specification for a schema synchronization is similar to defining a
scope specification for a schema baseline or comparison. However, there are
restrictions on what you can include in the scope specification.
• You cannot specify individual objects for synchronization. You must specify object
types and either schemas to include or schemas to exclude.
• You cannot directly include User and Role objects for synchronization. (However,
Users and Roles are automatically included as needed during the synchronization
process.)
• Oracle recommends that the following object types be selected as a group: Table,
Index, Cluster, Materialized View, and Materialized View Log.
The scope specification for a synchronization should be carefully tailored to the
application. Do not include more schemas than are needed. For example, if you are
synchronizing a module of a large application, include only those schemas that make
up the module. Do not attempt to synchronize a large application (or the entire
database) at one time.
Schema Map
The definition and use of the schema map is the same in schema synchronizations as
in schema comparisons. When you use a schema map, object definitions in each
schema map are synchronized to the mapped schema in the destination, rather than to
the schema with the same name. In addition, schema-qualified references (other than
those contained in PL/SQL blocks and view queries) are changed according to the
schema map.
For example, assume the schema map has two entries, as follows:
• Table DEV2B.T2
Synchronization Options
Schema synchronization options are similar to the options you can specify with
schema comparisons. In synchronization, the options perform two functions:
• During initial comparison of source and destination objects, the options determine
whether differences are considered meaningful. For example, if the "Ignore
Tablespace" option is selected, tablespace differences are ignored. If two tables are
identical except for their tablespaces, no modification to the destination table will
occur.
• When generating the script that creates objects at the destination, some options
control the content of the script. For example, if "Ignore Tablespace" is selected, no
TABLESPACE clauses are generated.
In addition to the options provided with schema comparison, the following options
are specific to Synchronize:
• "Preserve Data In Destination" and "Copy Data From Source"—These two options
control whether table data is copied from the source database to the destination
database. (The option is not available if the source is a baseline.) By default,
Synchronize preserves data in destination tables. Choosing "Copy Data From
Source" causes Synchronize to replace the destination data with data from the
source table.
Synchronization Mode
The next step in defining a synchronization is to choose the synchronization mode.
There are two options:
• Interactive synchronization mode pauses after initial comparison of the source and
destination objects, and again after generation of the synchronization script.
Interactive mode gives you a chance to examine the results of comparison and
script generation, and to take appropriate action before proceeding with the next
step.
• Source and destination, from the comparison's left and right sources, respectively.
This means that you cannot create a synchronization from a comparison whose
right source is a baseline.
• Scope specification. Note that some comparison scope specification options are not
available in a synchronization. For example, you cannot synchronize individual
schema objects, User objects, or Role objects.
• Comparison options
• Source Only
• Destination Only
• Identical
• Not Identical
In interactive mode, you can view the objects that are in each state, or all objects at
once. For objects that are not identical, you can view the differences between the
objects. At this stage, you can anticipate what will happen to each destination object:
• Identical objects will be unaffected. However, if you chose the "Copy Data From
Source" option, the data in tables that are identical will be replaced with data from
the source.
• Messages are placed in the impact report for the synchronization version. The
messages provide information about the synchronization process, and may report
one or more error conditions that make it impossible to generate a usable script.
• The DDL statements needed to carry out the synchronization are generated and
recorded in the synchronization version.
Dependency analysis examines each object to determine its relationships with other
objects. It makes sure these other objects exist (or will be created) in the destination.
Some examples of these relationships are:
• A schema object depends on the User object that owns it. For example, table
DEV1.T1 depends on user DEV1.
• An index depends on the table that it is on. For example, index DEV1.T1_IDX
depends on table DEV1.T1.
• A table that has a foreign key constraint depends on the table to which the
constraint refers.
• A source object such as a package body depends on other source objects, tables,
views, and so on.
• A stored object depends on the tablespace in which it is stored, unless you choose
"Ignore Tablespace."
The relationships established during dependency analysis are used later in the script
generation process to make sure script statements are in the correct order.
Dependency analysis may determine that a required object does not exist in the
destination database. For example, a schema object's owner may not exist in the
destination database. Or, a table may have a foreign key constraint on another table
that is in a different schema. There are several possible outcomes.
• If the required object is in the source and is selected by the scope specification,
Synchronize creates the object. For example, if DEV1.T1 has a foreign key
constraint that refers to DEV2.T2, and both DEV1 and DEV2 are in the scope
specification, Synchronize creates DEV2.T2.
• If the required object is a user or role, and the object is available in the source,
Synchronize automatically includes the user or role and creates it at the destination.
This occurs even though User and Role objects are not part of the Synchronize
scope specification.
• If a required schema object is in the source but is not selected by the scope
specification, Synchronize does not automatically include the object; instead, it
places an Error-level message in the impact report. This restriction prevents
uncontrolled synchronization of objects outside the scope specification. It is for this
reason that scope specifications should include all the schemas that make up the
application or module.
• If the source is a baseline version, it may not include the required object. For
example, a baseline might not capture Users and Roles. Synchronize cannot look
for objects outside the baseline version, so it places an Error-level message in the
impact report. This is why it is important to include Users, Roles, and privilege
grants in any baseline that will be used for synchronization.
At the end of the script generation step, Synchronize has added the impact report and
the script to the synchronization version. In interactive mode, you can examine the
script and impact report before proceeding to script execution.
The impact report contains messages about situations that were encountered during
script generation. There are three types of messages:
• Warning messages report a situation that requires your attention, but that may not
require action. For example, if Synchronize is unable to determine if a reference in a
source object can be resolved, it adds a warning message to the impact report. You
need to verify that situations reported in warning messages will not prevent script
execution.
• Error messages indicate a situation that will prevent script execution if not
corrected. For example, if Synchronize is unable to locate a required dependency
object, it adds an error message to the impact report. Depending on the message,
you may be required to create a new synchronization. For example, if the
dependency object is not in the synchronization scope, or if the source is a baseline
that does not contain the dependency object, you will need to create a new
synchronization with an expanded scope or a different source baseline. In other
cases, you can resolve the situation by excluding one or more objects from the
synchronization and regenerating the script.
The script display contains the statements of the generated script in the order they will
be executed. You can examine the script if you have any concerns about its
correctness. The display allows you to locate statements that are associated with a
particular object or object type.
Following script generation, you can continue to script execution unless an error was
encountered during the script generation step. In this case the impact report will
contain one or more Error-level messages detailing the problem and solution. In some
cases, you may be able to solve the problem by selecting "Regenerate Script,"
excluding an object from the synchronization, and regenerating the script.
There may be cases where you need to create a new version of the synchronization in
order to correct the problem. For example, if you need to modify the definition of an
object in the source or destination or add an object in the destination, you will need to
create a new version. This allows the new or modified object to be detected during the
comparison step. In this case, the old version becomes "abandoned" since you cannot
continue to script generation.
Script Execution Step
Following successful script generation, the script is ready to execute. In unattended
mode, the script executes as soon as script generation completes. In interactive mode,
you proceed to script execution as soon as you have reviewed the impact report and
the script.
The script executes in the Cloud Control job system. Once script execution is complete,
you can view the execution log. If the script fails to execute successfully, you may be
able to correct the problem and re-start the script execution from the point of failure.
For example, if the script fails due to lack of space in a tablespace, you can increase the
size of the tablespace, then re-start the script.
• Change Plan change requests that create objects can get the object definitions from
Change Management Schema Baselines.
• Change requests that modify objects can use the contents of an object in a Change
Management Schema Comparison to specify the change.
Figure 48-1 shows the steps in a change plan. A change plan is a named container for
change requests. You can define change requests to reproduce or modify object
definitions at a destination database. A destination database is a database where you
want to apply the change requests in a change plan. After you finish planning and
defining the changes, evaluate the impact of the changes that you want to make.
To evaluate the impact of the change requests at a particular database, generate a
script and an impact report for a change plan and that destination database. The
impact report explains the changes that will be made by the script when it executes at
the destination database. It also describes any change requests that cannot be applied
at the destination database.
To implement the change requests in a change plan at a destination database, execute
the script at the destination database.
• Using External Clients to Create and Access Change Plans in Cloud Control
• Ensure that the Application Developer (AD) is a Cloud Control user who has the
following privileges:
– EM_ALL_OPERATOR privilege
• Ensure that the Database Administrator (DBA) is a Cloud Control user who has
the following privileges:
– EM_ALL_OPERATOR privilege
3. Use Metadata Baselines wizard to define a baseline that includes the schemas of
interest. Schedule a job to capture the first version of the baseline.
6. Schedule a job to create the first version of the comparison and save the
comparison.
8. Specify a Name and Description for the change plan and click OK to save the
change plan.
10. In the Create Change Items from Schema Comparison page, select the
Comparison Version created earlier, specify the development database as the
Change To side and the production-staging database as the Change From side in
the Conversion Assignment and click OK.
11. In the Create Change Items from Schema Comparison: Select Differences page,
select:
12. Submit request to apply the Change Plan on the destination database.
2. In the DBA role, examine the Change Plan, evaluating its suitability for application
to the proposed database. Remove individual Change requests if required.
3. From the Schema Change Plans page, select Create Synchronization from Change
Plan.
4. Specify the details in the Schema Synchronization wizard with the source as the
Change Plan instance created earlier. For information about using the Schema
Synchronization wizard, see Synchronizing with Production Staging. By default,
the synchronization created from change works in the interactive mode.
7. Check the completed script execution job for errors. If the change plan job failed, do the
following:
• If the failure is due to a condition in the source or destination database that can
be fixed manually, fix the problem and perform the operation again.
• If the failure is in the script execution phase, view the script output in the job
details. If the problem can be resolved by actions such as issuing missing grants,
fix the problem manually in the database and then click Retry Script Execution.
8. Fix the errors and submit the change plan creation job again.
48.5.2.2 Using External Clients to Create and Access Change Plans in Cloud Control
Cloud Control provides support for external clients such as SQL Developer to create
and access change plans. You can use these applications to connect to the Cloud
Control repository and create change plans and add and update change items in them.
Client users are of two types:
• Users who can access (view and possibly edit) specific change plans
Following are the steps:
2. From the Setup menu, click Security and then select Administrators.
4. In the Create Administrator: Properties page, specify the Name and Password for
the user. This creates a database user with the specified name and password, as
well as creating the Cloud Control administrator. Click Next.
• If you want to create an administrator who has all access to all change plans,
select Manage Change Plans in the Resource Type Privileges section.
• If you want to create an administrator who has specific access to one or more
change plans, click Add in the Resource Privileges section. In the list of change
plans that have been created already, select one or more and click Select. The
selected plans are added to the Resource Privileges section. By default, the
administrator is granted View Change Plan privilege; you can edit this to grant
Edit Change Plan privilege.
9. Click Continue.
10. In the Create Administrator: Review page, click Finish to create the new
administrator.
11. For an external client to be able to access change plans using any of these privilege
types, follow these steps:
1. Ensure that the repository administrator has configured the repository database to
accept remote database connections from SQL Developer. You can do this by
configuring the repository listener process.
3. Grant the repository user of the local OMS account the privileges of a change plan
user by running the following SQL commands on the repository database as user
SYS:
or:
grant CHANGE_PLAN_USER to <repos_user>;
4. Edit the OMS user's resource privileges to give the user access to edit the change
plans.
• Tables
• Single-table views
• Materialized views
index that includes a NUMBER column and an NCHAR column, then the data
comparison does not support them.
The index columns in a comparison must uniquely identify every row involved in the
comparison, for example through a primary key constraint or a unique constraint or
index on NOT NULL columns. In addition, the index columns must be of one of the
following datatypes:
• VARCHAR2
• NVARCHAR2
• NUMBER
• FLOAT
• DATE
• BINARY_FLOAT
• BINARY_DOUBLE
• TIMESTAMP
• RAW
• CHAR
• NCHAR
If a column with datatype TIMESTAMP WITH LOCAL TIME ZONE is compared,
then the two databases must use the same time zone. Also, if a column with datatype
NVARCHAR2 or NCHAR is compared, then the two databases must use the same
national character set.
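To confirm that two databases satisfy these requirements before running a comparison, you can query each database directly. These are standard data dictionary queries, shown here only as a convenience:
SELECT dbtimezone FROM dual;
SELECT value FROM nls_database_parameters WHERE parameter = 'NLS_NCHAR_CHARACTERSET';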
The data comparison feature cannot compare data in columns of the following datatypes:
• LONG
• LONG RAW
• ROWID
• UROWID
• CLOB
• NCLOB
• BLOB
• BFILE
• User-defined types (including object types, REFs, varrays, and nested tables)
• Oracle-supplied types (including any types, XML types, spatial types, and media
types)
You can compare database objects that contain unsupported columns by excluding the
unsupported columns when providing comparison specification. Edit the comparison
item and include only the supported columns in the Columns To Include list of
column names.
Since data comparison cannot compare LOB column values directly, their
cryptographic hashes will instead be used for comparison. If you include LOB type
columns to be compared, make sure that the database users connecting to the
reference and candidate databases have EXECUTE privilege on SYS.DBMS_CRYPTO
package. For more information about DBMS_COMPARISON, see Oracle Database
PL/SQL Packages and Types Reference for the database version of your reference
database.
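For example, assuming the comparison connects to each database as a user named CMP_USER (a placeholder for your own account), the required grant would be issued as follows on both the reference and candidate databases:
GRANT EXECUTE ON SYS.DBMS_CRYPTO TO cmp_user;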
Note:
A Data Comparison job may fail with the error "ORA-28759: failure to
open file error."
This failure occurs when data comparison tries to get data from the candidate
database into the reference database over a database link in the reference
database for comparing them.
The database server (candidate/source database) requires the use of TCPS
protocol for connections, but the client (reference/destination database) does
not have a valid wallet location. Connection over the database link fails since
no wallet was specified on the client side.
This problem can be fixed by specifying a valid WALLET_LOCATION entry in
sqlnet.ora file (which is by default located in the $ORACLE_HOME/network/
admin directory). The following wallet location must be specified at the
reference database:
WALLET_LOCATION = (SOURCE=(METHOD=FILE)
(METHOD_DATA=(DIRECTORY=/net/slc05puy/scratch/dbwallets/swallets)))
1. From the main Data Comparisons page, click Create. The Create Data
Comparison page appears.
a. If you want to compare objects residing in two databases, select one database
as the Reference and the other as the Candidate.
b. Click OK when you have finished. The Data Comparison Specification page
appears.
Tip:
It is recommended that you define the comparison specification once and run
it many times.
3. Open the Actions menu, then select Add Object Pair or Add Multiple Objects. If
you select Object Pair, continue with the following sub-steps. If you select
Multiple Objects, go to the next step.
• Adding an object pair consists of selecting one object from the reference
database and one object from the candidate database. You can compare
dissimilar object types, if desired, such as a table in the reference database and
a materialized view in the candidate database.
a. Specify the reference and candidate objects. The reference database can be the
same as the candidate database. In this case, the objects are from the same
database.
e. Either specify or let the system compute the maximum number of buckets
and minimum rows per bucket.
a. Specify the schema name, one or more object types, then click Search.
The table populates with object names.
5. Select your comparison name from the list, open the Actions menu, then select
Submit Comparison Job. For information about privileges required for user
credentials for the reference and candidate databases, see Overview of Change
Management for Databases.
6. Provide the required credentials in the page, schedule the job, then click OK.
The Data Comparisons page reappears and displays the following confirmation
message:
"The job was submitted successfully. Click the link in the Job Status column to
view job status."
After the Job Status column shows Succeeded, go to the next step.
7. Select your comparison name from the list, open the Actions menu, then select
View Results. The Data Comparison Results page appears.
8. Look for rows in the Result column with the =/= symbol, indicating that there are
differences between reference row and candidate row data.
• Data comparison attempts to compare all tables. If there is an error, you can
see the error message by selecting the Messages tab. An error message is
indicated with an X instead of the = or =/= symbol.
• You can see the SQL statements that are running to perform the comparison by
clicking the Executed Statements tab.
• The Row Source column indicates the origin of each row of data as a whole.
Furthermore, data in a row differing between reference and candidate are
displayed in contrasting colors, indicating whether the source of the data is the
reference or candidate database.
Schema Mapping
By default, a reference object is compared with a candidate object in a schema with
the same name as the reference schema. Using schema mapping, you can optionally
compare objects in a reference schema with objects in a different candidate schema.
A schema can be mapped only once. Provide reference and candidate schema
names for mapping under the Schema Mapping section of the Data Comparison
Specification page. The default candidate schema is then taken from the schema
mapping you specified.
You can further override the candidate schema of an individual item by editing the item,
clicking the Override button next to the Candidate Object field, and explicitly
specifying a candidate object belonging to any schema. For items whose
candidate objects are overridden in this way, the schema mapping is ignored.
Usage of Buckets
A bucket is a range of rows in a database object that is being compared. Buckets
improve performance by splitting the database object into ranges and comparing the
ranges independently. Every comparison divides the rows being compared into an
appropriate number of buckets. The number of buckets used depends on the size of
the database object and never exceeds the maximum number of buckets specified
for the comparison.
When a bucket is compared, the following results are possible:
Note:
If an index column for a comparison is a VARCHAR2 or CHAR column, the
number of buckets might exceed the value specified for the maximum number
of buckets.
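For reference, the same bucket controls surface as parameters of the DBMS_COMPARISON package mentioned above. The following is a minimal PL/SQL sketch of creating a comparison directly; the comparison, schema, object, and database link names are placeholders, and the parameter names should be confirmed against the PL/SQL Packages and Types Reference for your release:
BEGIN
  DBMS_COMPARISON.CREATE_COMPARISON(
    comparison_name    => 'CMP_EMPLOYEES',     -- placeholder comparison name
    schema_name        => 'HR',                -- reference schema (placeholder)
    object_name        => 'EMPLOYEES',         -- reference object (placeholder)
    dblink_name        => 'CANDIDATE_DB_LINK', -- link to the candidate database (placeholder)
    max_num_buckets    => 1000,                -- maximum number of buckets
    min_rows_in_bucket => 10000);              -- minimum rows per bucket
END;
/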
This section describes Oracle Enterprise Manager Cloud Control's (Cloud Control)
Compliance features, which include the ability to monitor certain elements of your
targets in real time to watch for configuration changes or actions that may result in
configuration changes.
These features include Operating System level file change monitoring, process starts
and stops, Operating System user logins and logouts, Oracle database changes and
more.
The real-time monitoring for these features takes place from the Cloud Control agent.
Some of these monitoring capabilities require specific setup steps depending on the
type of monitoring you will do and what Operating System is being monitored.
This chapter outlines the specific requirements and prerequisites for using the
Compliance Real-time Monitoring features. For details on how to use Real-time
Monitoring from Cloud Control, see the Compliance Management chapter in this
document. This chapter covers the following topics:
After the Compliance Standard to target association is complete, the set of monitoring
rules are sent to the agent to enable real-time monitoring. All monitoring for Real-time
monitoring occurs on the agents and all observed action data is sent from the agent to
the Cloud Control server for reporting and data management.
1. Ensure that the agent's root.sh script is run after agent installation.
After installing the agent, the root.sh script must be run as the root user. This
script must be run before configuring the rest of these credential steps.
Privilege Delegation settings are found from the Setup menu by choosing Security,
then Privilege Delegation. On this page you can either set privilege delegation for
each host manually or you can create a Privilege Delegation Setting Template.
Privilege delegation for each host that will have real-time monitoring must have
SUDO setting enabled with the appropriate SUDO command filled in (for
example, /usr/local/bin/sudo).
Monitoring Credential settings are found from the Setup menu. Choose Security
then Monitoring Credentials. From this page, select the Host target type and click
Manage Monitoring Credentials.
For each entry with the credential "Host Credentials For Real-time Configuration
Change Monitoring", select the entry and click Set Credentials. You will be asked
for a credential set to use. Ensure you also add "root" to the Run As entry. If "Run
As" is not visible, then the privilege delegation was not set properly in the previous
step.
To set monitoring credentials in bulk on multiple hosts at once, you can use
EMCLI. For more information on using EMCLI to set monitoring credentials, see
the section Managing Credentials Using EMCLI in the Security chapter of Oracle
Enterprise Manager Administration. Likewise, for more information about
configuring monitoring credentials in Cloud Control, see the Security chapter of
Oracle Enterprise Manager Administration.
2. Build your own kernel module. To build your own kernel module, you can
download the following RPM from the Downloads link:
Fileauditmodule-emversion-revision-noarch.rpm
You should always retrieve the latest revision available at the time you are
installing this module. The emversion field must match the version of Cloud
Control agent and server you are using.
As root, install this RPM on the host you want to monitor. The installation of this
RPM requires that the kernel-devel package matching your running kernel is also
present on the host. This kernel-devel package comes with the same media as the
Linux installers.
In addition to installing this package, you must ensure that the version of gcc
available on your host matches the version with which the kernel was built. To do
this, view the /proc/version file to see which gcc version the kernel was built with and
then run the command gcc -v to see which version of gcc is in use. These two
versions should match.
Also check that the file /boot/System.map-{version} exists where {version} must match
the kernel version you see when you run the uname -r command. This file
contains system symbols that are required to decode the kernel symbols we are
monitoring for real-time changes. Without this file, real-time file monitoring will
not function. This file is standard on all default Linux installations.
After installing this package and checking prerequisites successfully, go to the
directory where the package contents were installed (defaults to /opt/
fileauditmodule) and run the following script:
compmod.sh
This will build the kernel module file (.ko, .k64, or .o extension depending on the
OS version) and place it in the /opt/fileauditmodule directory.
If the audit module file is not created, check the make.log and build.log files for any
errors in building the module.
If all of your hosts have the exact same kernel version, as shown by the command
uname -r, then you only need to compile the module on one machine. You can then
copy the .ko, .k64, or .o file to the other servers without having to build it on each
specific host.
Deploying the Kernel Module
Once you have either the prebuilt .ko file or a .ko file that exists from building it from
the source RPM, the .ko file must be located in the proper directory. The default
location for this file is in the bin folder under the agent home directory. You can also
place the file in any location on the host and change the nmxc.properties file under the
AGENT_INST/sysman/config directory of the agent home. The property
nmxcf.kernel_module_dir specifies the absolute path to the .ko directory.
Install Kernel Module Job
In addition to manually placing the .ko file on the agent, there is a Cloud Control job
named Real-time Monitoring Kernel Module Installation. This job is configured with a list
of Linux hosts on which you can install the kernel module. It searches a directory
locally on the Cloud Control server disk for prebuilt .ko files or the source RPM file. If
it finds a matching prebuilt .ko file, it sends this file to the matching agents; otherwise
it sends the RPM to the agent, where it is installed and compiled, resulting in a new .ko file.
Prior to using this job, files from OSS.ORACLE.COM must be manually retrieved by
the user and placed into the %ORACLE_HOME%/gccompliance/fileauditmodule/
resources/linux directory. This directory already exists on the server and contains a
README file indicating that this is the location to place these files. The files that must
be placed here are either prebuilt .ko files or the source RPM file. If you have built
your own .ko files in your environment, you can also place those .ko files into this
directory on the server and deploy them to other hosts in your environment.
Special Considerations for Enterprise Linux 5 and Greater
For Enterprise Linux 5 and greater, the kernel audit module is not required. The
monitoring will use the built-in audit subsystem if a kernel module is not detected at
startup time. However, the functionality of the audit subsystem is not as robust as the
capability that the kernel audit module can provide.
You will lose the granularity of knowing what type of change was made to a file,
that is, whether it was a create action or a modify action. Without the kernel
module, all changes to a file appear as modify actions. Additionally, monitoring a
directory that does not exist yet, or a directory that exists now but is removed later,
may be disrupted because the underlying Linux audit subsystem does not handle
these cases.
It is recommended that you use the kernel audit module even with the newer versions
of Linux, if possible.
1. You may have noticed that you do not receive real-time observations in the
Enterprise Manager console for file changes that you know should have occurred.
3. When examining the nmxcf.log file under AGENT_INST/sysman/logs, you may see
errors indicating that the kernel module could not be loaded or used for various
reasons.
If you encounter any of these issues, most likely there was a problem with compiling
or inserting the Linux kernel module at run time.
You can confirm whether the auditmodule was loaded properly by running the
following command.
grep -i auditmodule /proc/modules
If you do not receive any output, then the auditmodule is not loaded and the agent
will not perform real time file monitoring.
If the audit module file was generated properly and it does not show up in the module
list above, you can attempt to manually load the module to see if there are any errors.
Use the following command where you replace {audit module file name} with the
entire name of the .ko file that was created from compmod.sh:
insmod {audit module file name}
If this command produces no errors, check the module list again using the grep
command above. If the audit module now appears, then the file monitoring
capability should work. An agent restart is necessary; however, there may still be a
problem with the file monitoring process finding the .ko file, which you will
encounter again the next time the host is rebooted.
One additional step to debug any issues with the file monitoring process is to try to
run it manually. To do this, follow these steps:
2. Execute the nmxcf process using the following command replacing the values in {}
with the proper path elements or the process ID from the previous command:
sudo {agent_home}/core/{agent_version}/bin/nmxcf -e {agent_home}/agent_inst/ -m {agent_home}/agent_inst/sysman/emd/state/fetchlet_state/CCCDataFetchlet -w {process id of TMMAIN}
Running the nmxcf process this way will not work in the long term since it will not
start up again when the agent is restarted, but this can help in trying to debug any
issues as to why the process cannot start.
If the module still cannot be loaded and you need to contact Oracle Support about
the issue, be sure to include the following information with your support ticket:
• The make.log and build.log files from the /opt/fileauditmodule directory where you
ran compmod.sh, if you built your own .ko file
• Any warnings or errors you received when trying to start nmxcf manually.
This information will help Oracle Support determine whether the real-time file
monitoring audit module of the agent can be built in your environment.
Note:
1. From Windows Explorer, select the directory that is being monitored by a Real-
time Monitoring Rule, right-click and select Properties.
3. Click Advanced.
5. Click Add. (In Microsoft XP, double-click the Auditing Entries window).
6. Select the Name Everyone, then click OK. You can also choose specific users if you
are only monitoring for changes by specific users in Configuration Change Console
rules. The rules filter the results by user as well, so even if you enable audit for
everyone, only users that you want to monitor changes of in your rules will be
captured.
7. Select the following options (Successful and/or Failed) from the Access window.
For Windows XP and Windows 2003:
• Delete
For Windows 2008 and Windows 7:
• Write Attributes
• Delete
• Change Permissions
• Take Ownership
8. Click OK to exit.
9. Repeat steps 1 through 7 for all other monitored directories and/or files.
10. From the Start menu, select Settings, then Control Panel, then Administrative
Tools , then Local Security Policy, then Local Policies, then Audit Policy. Double-
click and turn on the following policies (Success and/or Failure):
12. From the Start menu, select Settings, then Control Panel, then Administrative
Tools, and finally Event Viewer.
13. Select System Log, then click Action from the menu bar and select Properties.
14. From the System Log Properties panel, on the General tab, set the Maximum log
size to at least 5120 KB (5 megabytes) and select Overwrite Events as Needed. Note
that the log size depends on the number of events generated in the system during a
two-minute reporting interval. The log size must be large enough to accommodate
those events. If you extend the monitoring time for file events because you expect
the change rate to be lower, you need to ensure that the audit log in Windows is
large enough to capture the events.
If Windows auditing is not configured properly, you will see warnings on the
Compliance Standard Target Association page in the Cloud Control user interface.
This is the same page where you associated your Real-time Monitoring Compliance
Standards with your targets.
1. Log out of the host and then log back into the host.
2. From Start, select Settings, then Control Panel, then Administrative Tools, and
finally Event Viewer.
3. Select Security Log and choose Filter from the View menu. Select Security for the
Event Source and Logon/Logoff for the Category fields.
4. Click OK.
The Event Viewer should have the activity recorded as Event 528.
See the Solaris BSM Auditing manuals for additional details on setting up BSM
auditing.
If auditing is already enabled on the server, simply verify that the audit system
configuration matches the configurations detailed below.
The audit file can be configured to include specific events. The /etc/security/
audit_control file controls which events will be included in the audit file. This section
summarizes the configuration; for further details, refer to the Sun Product Online
Documentation site.
For monitoring entity types OS FILE (file changes) and OS USER (user logins/
logouts), the flags line in the file /etc/security/audit_control should be set as follows:
flags: +fw,+fc,+fd,+fm,+fr,+lo
This configuration enables auditing for file writes (fw), file creates (fc), file
deletes (fd), file attribute modifications (fm), file reads (fr), and login/logout events (lo),
where '+' means that only successful events are logged.
If you are interested in logging the failed events as well, remove the "+" sign before
each event in the flag.
Note:
Installing BSM on an existing host has the requirement that the host is
rebooted.
Auditing Users: The audit_user file controls which users are being audited.
The settings in this file are for specific users and override the settings in the
audit_control file, which applies to all users.
Audit Logs and Disk Space: The audit_control file uses entries to control
where the audit logs are stored and the maximum amount of disk space used
by the audit system. The minimum requirement for file monitoring is
approximately 10 minutes worth of data stored on the hard drive or the
configured reporting interval time.
Note:
This configuration does not affect existing sessions in which users are already
logged into the host, so you must terminate all existing sessions and log in again,
or simply reboot the machine, to ensure this change takes effect.
As the bsmconv command has been removed on Solaris 11, you can use the
following command to enable the auditing feature, if needed:
audit -s
1. The auditing policy can be set to automatically drop new events (keeping only a
count of the dropped events) rather than suspending all processes by running the
following command:
2. Run the following command to force the audit daemon to close the current audit
log file and use a new log file:
/usr/sbin/audit -n
3. Run the following command to merge all existing closed auditing log files into a
single file with an extension of .trash and then delete the files:
/usr/sbin/auditreduce -D trash
4. Create a cron job to periodically run the commands in Steps 2 and 3 above. The
frequency at which these two commands are run can be adjusted based on the
anticipated event volume and the amount of disk space allocated to auditing. The
only requirement is that the time between the audit -s command and the
auditreduce -D trash command is at least 15 minutes, or twice the reporting
interval if that has been changed.
4. Locate the following sections, and update or add the listed values:
start:
binmode = off
streammode = on
stream:
cmds = /etc/security/audit/cccstream
classes:
…
filewatch = PROC_Create,PROC_Delete,FILE_Open,FILE_Write,FILE_Close,FILE_Link,FILE_Unlink,FILE_Rename,FILE_Owner,FILE_Mode,FILE_Fchmod,FILE_Fchown,FS_Chdir,FS_Fchdir,FS_Chroot,FS_Mkdir,FS_Rmdir,FILE_Symlink,FILE_Dupfd,FILE_Mknod,FILE_Utimes
users:
root = filewatch
default = filewatch
Note:
In this case default refers to all users that are not root. Further note that the
last line of the config file should be a blank line.
Note:
7. Clear all text from the file. The default configuration for this file is not necessary, as
the File Monitoring agent module (nmxcf process) will operate as a direct audit
reader. Clearing the file helps to reduce CPU usage and improve overall auditing
performance.
10. At the prompt, restart auditing using the commands /usr/sbin/audit shutdown
and /usr/sbin/audit start, or reboot the host, for the audit changes to take
effect.
11. At the prompt, use the command audit query to view the configuration the
audit system is using. Ensure that the properties are set correctly and that the
required settings for filewatch are set.
49.7.3 Verifying AIX System Log Files for the OS User Monitoring Module
The OS User monitoring module relies on the following system log files:
• /etc/security/failedlogin
• /var/adm/wtmp
• /var/adm/sulog
Be sure the log files exist before running the OS User monitoring module on an AIX
host. If any of the log files is missing, refer to the AIX System documentation for more
information about how to generate it.
1. In Cloud Control, go to the target home page for the Oracle Database target for
which you want to enable Real-time Monitoring.
4. Find the parameter audit_trail and ensure it is set to DB. If not, this parameter needs
to be changed in the Oracle Database.
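As an illustration, a DBA could verify the parameter and, if necessary, change it outside of Cloud Control with statements such as the following; audit_trail is a static parameter, so the change takes effect only after the database is restarted:
SELECT value FROM v$parameter WHERE name = 'audit_trail';
ALTER SYSTEM SET audit_trail = DB SCOPE = SPFILE;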
Level Effect
Statement Audits specific SQL statements or groups of statements that affect a particular type of database object. For example, AUDIT TABLE audits the CREATE TABLE, TRUNCATE TABLE, COMMENT ON TABLE, and DELETE [FROM] TABLE statements.
Privilege Audits SQL statements that are executed under the umbrella of a specified system privilege. For example, AUDIT CREATE ANY TRIGGER audits statements issued using the CREATE ANY TRIGGER system privilege.
To use the AUDIT statement to set statement and privilege auditing options, a DBA must
be granted the AUDIT SYSTEM privilege. To use the AUDIT statement to set object audit
options, the DBA must own the object to be audited or be granted the AUDIT ANY
privilege within Oracle. Privilege assignments are covered in the following section.
Audit statements that set statement and privilege audit options can also include a BY
clause to supply a list of specific users or application proxies to audit, and thus limit
the scope of the statement and privilege audit options.
Some examples of audit statements are shown below. Feel free to use these as a basis
for the audit settings you specify within your database. Once all audit settings are in
place, you can create application policies using the Oracle (SQL Trace) agent module
to monitor the Oracle database instance.
Statement Audit Options (User sessions)
The following statement audits the sessions of users scott and lori:
AUDIT SESSION BY scott, lori;
Privilege Audit Options
The following statement audits all successful and unsuccessful uses of the DELETE
ANY TABLE system privilege:
AUDIT DELETE ANY TABLE BY ACCESS WHENEVER NOT SUCCESSFUL;
Object Audit Options
The following statement audits all successful SELECT, INSERT, and DELETE
statements on the dept table owned by user jward:
AUDIT SELECT, INSERT, DELETE ON jward.dept BY ACCESS WHENEVER
SUCCESSFUL;
Example Oracle Audit Monitor Configurations
The following command audits all basic statements. Extra statements are not audited.
Audit all by access;
The following statement audits all extra statements:
audit ALTER SEQUENCE, ALTER TABLE, DELETE TABLE, EXECUTE
PROCEDURE, GRANT DIRECTORY, GRANT PROCEDURE, GRANT SEQUENCE,
GRANT TABLE, GRANT TYPE, INSERT TABLE, LOCK TABLE, UPDATE TABLE
by access;
The following command displays audit settings for statements:
SELECT * FROM DBA_STMT_AUDIT_OPTS;
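The corresponding privilege and object audit settings can be checked with similar data dictionary queries, for example:
SELECT * FROM DBA_PRIV_AUDIT_OPTS;
SELECT * FROM DBA_OBJ_AUDIT_OPTS;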
Once you have specified your audit configuration, you can create real-time
monitoring rules from the Cloud Control server that use the Oracle Database entity
types.
1. Install Remedy ARS 7.1. Ensure the following components are all installed and
properly licensed:
ARS 7.1.00 Patch 011
Midtier 7.1.00 Patch 011
Flashboard Server 7.0.03
Assignment Engine 7.1
Asset Management 7.0.03*
CMDB 2.1.00 Patch 4
CMDB Extension Loader
Approval Server 7.1
Change Management Server 7.0.03 Patch 008*
Problem Management Server 7.0.03*
Incident Management Server 7.0.03*
User Client
Administrator Client
These packages all come with the IT Service Management Pack. Oracle provides
example customizations for Remedy under the ITSM 7.0.03 Patch 008
environment. For different versions, the customizations may need to be adjusted
to account for changes in the version of Remedy.
2. Install the Cloud Control EMCLI client on the same host on which Remedy is
installed. The EMCLI client must be able to communicate with your Cloud Control server.
b. Choose Setup, then select Command Line Interface from the My Preferences
menu.
c. Click Download the EM CLI kit to your workstation and download the jar to
your Remedy server.
d. Follow the steps given on the page to install the EMCLI client on the Remedy
server.
3. Get the latest version of the Change Request Management connector self-update
package. Also acquire the latest version of the example Remedy ARS
customizations for Cloud Control version 12c.
These definition files provide a guideline of customizations that must be made in
your environment for the integration. These customization files assume a fresh
install of Remedy ARS. When integrating with a production instance of Remedy,
care should be taken to make sure these customizations are compatible with any
previous customizations that have been made to the Remedy instance.
• ActiveLinks_Customization.def
• Forms_Customization.def
• Menus_Customization.def
• Webservices_Customization.def
To get these definition files, in the Enterprise Manager Self Update user interface,
export the connector. The definition files are inside this connector package.
4. Install the four definition files (.DEF) files in the running Remedy environment by
completing these steps:
c. From the Tools menu, select Import Definitions, then select From Definition
File...
g. Click Import.
h. You should not encounter any errors during this process. At the end of
import there should be an Import Complete message.
d. In the input on top, modify the midtier_server and servername values in the
WSDL Handler URL.
f. If the midtier uses port 80, you can omit the port, otherwise include the port
after the server name.
g. For the servername after "public/", enter the name of the Remedy server.
h. Click View.
i. You should see an XML representing the webservice WSDL file for the
webservice.
j. If you see an error, check the midtier_server name, port, or servername. Also,
you can try adding or removing the domain part of the servername. Another
possible issue is that the midtier password set in Remedy's System >
General > Serverinfo > Connection Settings is not set correctly. Be sure
to check this as well if the WSDL XML is not returned.
k. If you see the XML content after clicking View, then close this window and
save the changes.
b. Select active links and then select the active link EMCCCC_ApprovedCR.
Right click, then select Open.
d. Click the Current Action Run Process at the end of the list of actions.
e. In the Command Line field, change the path to emcli.bat to match the location
where you installed EMCLI on the local host.
7. Create a user in Remedy that will be used for creating requests that will be used
for automatic observation reconciliation:
d. When adding the person, add the support group under the Support Groups
tab.
e. Under the Support Groups Tab, select sub tab Support Group Functional
Roles.
h. On the left side bar, select Application, then Users/Groups/Roles, then Select
Users.
i. This will load the user search page. Click Search at the top right.
j. Double-click the newly created user above to bring up the user form.
k. Click the down arrow next to "Group List" field and select Infrastructure
Change Master.
l. Repeat the previous step and add the following Groups to this user as well.
Infrastructure Change Submit
Infrastructure Change User
Infrastructure Change Viewer
m. Save the changes to this user by clicking the Save button in the upper right
hand corner of the window.
b. From the Setup menu, select Provisioning and Patching, then choose
Software Library.
d. Click Add.
f. Provide a location where the swlib files will be located on the Cloud Control
server. This can be anywhere, but must be a path that the Cloud Control user
can access. You must put the full absolute path in this input.
j. If you have errors with the previous step, make sure the user you run emcli as
has permissions to access this directory and file. Also, be sure you are using
absolute path for the -file switch.
Operation completed successfully. Update has been uploaded to Cloud Control. Please
use the Self Update Home to manage this update.
m. From the Setup menu, select Extensibility, then select Self Update.
n. Find the type "Management Connector" and click the link "1" under
"Downloaded Updates" for this entry.
d. Provide a name and description for the connector. This name is used to
choose the connector when creating a Real-time Monitoring Compliance
Standard Rule.
e. After returning to the management connector listing page, select the newly
added row, then click Configure.
f. Under the Web Service End Points label, change the [servername] and [port]
to match that of your Remedy instance Web Services. The values you put here
will be similar to what you configured in the Web Services step earlier in
these instructions.
g. Enter the Remedy username and password you are using for the connector
integration.
i. Enter the time zone offset of the remedy server from UTC, ('-08:00', for
example).
j. Enter the Change ID to use as a test. This should be a valid Change Request
ID currently existing in Remedy that is used to test the connectivity between
Cloud Control and Remedy.
49.9.1.1.1.1 Using Automatic Reconciliation Rules
Once Remedy is customized and the Cloud Control connector is configured, to utilize
the automatic reconciliation features you need to create Real-time Monitoring Rules
that are configured to use automatic reconciliation. Use the following steps:
e. Continue to save the rule after this. The Real-time Monitoring Rule can be
used like any other Real-time Monitoring rule. The integration with a new
Change Management server will not begin until at least one Real-time
Monitoring Standard with a rule using Automatic Reconciliation is associated
to a target. Create a Compliance Standard, add this rule to the Compliance
Standard, and associate this compliance standard to one or more targets.
The configuration of rules is discussed in more detail in the Compliance Management
section.
49.9.1.1.1.2 Creating Change Requests for Upcoming Changes
Now that integration is set up and Real-time monitoring rules have been created,
Change Requests can be created by Remedy users in the Remedy interface. These
Change Requests will be compared to observations that occur to automatically
determine if these observations are from actions that were authorized by change
requests or not.
To make this correlation, some new fields that have been added to the Change Request
form must be filled out by the change request filer. Not all fields are required;
correlation only occurs on the fields that are present in the Change Request.
For instance, the following fields have been added to the Change Request form under
the Oracle Enterprise Manager Integration tab:
• Connector: Choose the Cloud Control connector this Change Request will use to
integrate with Cloud Control.
• Hostname: the hostname(s) this change request is for. These are the hosts that this
change request is specifying someone needs to make changes to. An empty value in
this field indicates that all hosts will be correlated to this change request.
• Target User List: the user name(s) this change request is for based on target users.
These are the target users you expect to log in to the target to make a change. An
empty value in this field means that all users on the target will be correlated to this
change request.
• Target Type: the target type this change request is against. An empty value in this
field means that any target type will be correlated to this change request.
• Target: The target this change request is specifically for. An empty value in this
field indicates that any target will be correlated to this change request.
• Facet: The facet this change request is specifically for. An empty value in this field
indicates that all facets on the above target type and target will be correlated for
this change request.
When creating a change request that you want to use to authorize changes detected by
Real-time monitoring rules, follow these steps in addition to whatever requirements
your organization implements for creation of Change Requests:
1. Under the Dates tab of the Change Request form, fill out the Scheduled Start date
and Scheduled End Date. These are the date ranges the request is valid for
reconciliation. If an action occurs outside this time, it is marked as unauthorized by
the Real-time Monitoring feature.
4. Optionally select values for the five reconciliation criteria described above:
Hostname, Target User List, Target Type, Target, and Facet. The last three
(Target Type, Target, and Facet) will be choice lists based on content in Real-time
Monitoring Rules that have been created in Cloud Control and belong to
Compliance Standards that are associated with targets. You can add multiple
values separated by commas.
Note:
This form can be customized in Remedy to look different. The form
elements from the customizations loaded earlier are only examples.
5. Change the auditable status to True. This configures Remedy to allow Cloud
Control to use this change request for reconciliation of Real-time Observations that
are detected.
7. A popup displays, notifying you that active links will send the content to Cloud
Control. You will see a DOS command window open and then close.
After creating a change request that references a target and/or facet that is being
monitored by Real-time Monitoring rules, any observations that happen against that
rule will be correlated to all open and matching change requests.
When the observation arrives at the Cloud Control server, all open change requests
that were active (based on Scheduled Start/Stop time) and have matching correlation
criteria from the Cloud Control Integration tab will be evaluated. If any change
request exists that matches the criteria of the observation, the observation is
marked with an "authorized" audit status. If the annotation check box was checked in
the rule configuration, details of these authorized observations are put into a table
in the Enterprise Manager Integration tab of the Remedy Change Request.
If no open change requests can be correlated to the observation and the rule was
configured to use automatic reconciliation, then this observation is set to an
Unauthorized audit status. The Observation bundle to which this observation
belonged will be in violation and results in a Cloud Control event being created. This
event can further be used through creation of a Cloud Control Event Rule.
An observation's audit status can be seen whenever you look at observation details,
either by selecting Compliance, then Real-time Observations, then Observation Search,
or through either of the Browse By screens. A user with the proper role can also override the
audit status for individual observations from these pages.
Any bundles that are in violation because they contain unauthorized observations are
reflected as violations in the Compliance Results page. These violations cause the
compliance score to skew lower. If these violations are cleared, the score becomes higher;
however, the history of these audit status changes is retained for the given
observation.
Field Description
OBSERVATION_ID Unique ID given to the observation when detected by the agent
ACTION_PARENT_PROCESS_NAME Name of the parent process of the process that performed the action (only applicable to some entity types)
View: mgmt$ccc_all_obs_bundles
Description: This view returns a summary of all observation bundles. Any query against this view should ensure that filtering is done on appropriate fields, with bundle_start_time being the first, to take advantage of partitions.
Fields:
Field Description
BUNDLE_ID Bundle to which this observation belongs based on rule bundle settings
BUNDLE_CLOSE_REASON Explanation of why this bundle was closed
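For example, the following illustrative query (the seven-day window and the selected columns are arbitrary choices) filters on BUNDLE_START_TIME first, as recommended above:
SELECT bundle_id, bundle_close_reason
  FROM mgmt$ccc_all_obs_bundles
 WHERE bundle_start_time > SYSDATE - 7;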
View: mgmt$ccc_all_violations
Description: This view returns all real-time monitoring violations caused by an observation bundle having at least one unauthorized observation in it.
Fields:
Field Description
ROOT_CS_ID Root Compliance Standard GUID. This is used for internal representation of the violation context.
BUNDLE_CLOSE_TIME Time the Observation Bundle closed
View: mgmt$compliant_targets
Description: This view returns all evaluation and violation details for all targets. This
is the same data that is shown in the Compliance Summary dashboard regions for
targets.
Fields:
Field Description
TARGET_ID Internal representation of the Target
View: mgmt$compliance_summary
Description: This view returns all evaluation and violation details for Compliance
Standards and Frameworks. This is the same data that is shown in the Compliance
Summary dashboard regions for Standards and Frameworks.
Fields:
Field Description
ELEMENT_NAME Display name of the Compliance Standard or
Compliance Framework
View: mgmt$compliance_trend
Description: This view returns the last 31 days compliance trend information for
compliance frameworks and standards. This is the same data that is shown in the
Compliance Summary dashboard trend regions for Standards and Frameworks.
Fields:
Field Description
ELEMENT_ID Internal ID representation of the standard or framework
ELEMENT_NAME Display name of the Compliance Standard or Compliance Framework
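As a simple illustration (the standard name is a placeholder), the trend rows for one compliance standard could be retrieved as follows:
SELECT element_id, element_name
  FROM mgmt$compliance_trend
 WHERE element_name = 'My Compliance Standard';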
Note:
For more information about modifying data retention values, see the chapter
"Maintaining and Troubleshooting the Management Repository" in the book
Oracle Enterprise Manager Administration.
EM_CCC_HISTORY_OBS_STATUS 366 Days This table stores the state change history for
audit status (unaudited, unauthorized,
authorized) for each observation.
EM_CCC_FILEOBS_DIFF 366 Days This table stores past file comparison for OS
File based observations.
X86 X86 X86 64 X86 32 X86 64 X86 X86 X86 X86 X86 X86
32 bit 32 bit bit bit bit 32 bit 64 Bit 32 bit 64 bit 32 bit 64 bit
Telnet Login X X X X X NS NS NS NS NS NS
(successful)
Telnet Logout X X X X X NS NS NS NS NS NS
(successful)
Telnet Login X X X X X NS NS NS NS NS NS
(failed)
SSH Login X X X X X NS NS NS NS NS NS
(successful)
SSH Logout X X X X X NS NS NS NS NS NS
(Successful)
Console Login X X X X X X X X X X X
(successful)
Console Logout X X X X X X X X X X X
(successful)
Console Login X X X X X X X X X X X
(failed)
FTP Login NS NS NS X X NS NS NS NS NS NS
(successful)
FTP Logout NS NS NS X X NS NS NS NS NS NS
(successful)
SU Login X X X X X NS NS NS NS NS NS
(successful)
SU Logout X X X X X NS NS NS NS NS NS
(successful)
SU Login (failed) X X X X X NS NS NS NS NS NS
SUDO (successful) X X X X X NS NS NS NS NS NS
SUDO (failed) X X X X X NS NS NS NS NS NS
RDP Login NS NS NS NS NS X X X X X X
(Successful)
RDP Logout NS NS NS NS NS X X X X X X
(Successful)
X86 X86 X86 64 X86 32 X86 64 X86 X86 X86 X86 X86 X86
32 bit 32 bit bit bit bit 32 bit 64 Bit 32 bit 64 bit 32 bit 64 bit
RDP Login (failed) NS NS NS NS NS X X X X X X
X86 X86 32 X86 X86 Sparc X86 Sparc X86 Spar POWER POWER
32 bit bit 64 bit 64 bit 64 bit 64 c
Telnet Login X X X X X X X X X X X
(successful)
Telnet Logout X X X X X X X X X X X
(successful)
Telnet Login X X X X X X X X X X X
(failed)
SSH Login X X X X X X X X X X X
(successful)
SSH Logout X X X X X X X X X X X
(Successful)
SSH Login X X X X X X X X X X X
(failed)
Console Login NS X X X X X X X X NS NS
(successful)
Console Logout NS X X X X X X X X NS NS
(successful)
Console Login NS X X X X X X X X NS NS
(failed)
FTP Login X NS NS X X X X X X X X
(successful)
FTP Logout NS NS NS X X X X X X X X
(successful)
FTP Login X X X X X X X X X X X
(failed)
SU Login X X X X X X X X X X X
(successful)
X86 X86 32 X86 X86 Sparc X86 Sparc X86 Spar POWER POWER
32 bit bit 64 bit 64 bit 64 bit 64 c
SU Logout NS X X NS NS NS NS X X NS NS
(successful)
SU Login (failed) X X X X X X X X X X X
SUDO X X X NS NS NS NS NS NS NS NS
(successful)
SUDO (failed) X X X NS NS NS NS NS NS NS NS
RDP Login NS NS NS NS NS NS NS NS NS NS NS
(Successful)
RDP Logout NS NS NS NS NS NS NS NS NS NS NS
(Successful)
RDP Login NS NS NS NS NS NS NS NS NS NS NS
(failed)
X86 X86 X8 X8 X86 X86 X86 X86 X86 X86 X86 X86 Spa X8 Spar X8 Sp
32 32 6 6 64 32 64 32 64 32 64 64 rc 6 c 6-6 arc
bit bit 64 32 bit bit Bit bit bit bit bit bit 64 4
bit bit bit bit
Process Start X X X X X X X X X X X X X X X X X
(successful)
Process Stop X X X X X X X X X X X X X X X X X
(successful)
Note:
When restoring a file from the Recycle Bin on the Microsoft Windows operating
system, the user that made the change cannot be captured, because the operating
system does not make that information available.
When using the audited monitoring method on Linux operating systems (rather than the
Oracle kernel audit module method), directory creations are reported as file creations.
Additionally, file create activity is reported as a file modification instead of a
creation. These are limitations of the audited method of monitoring. If you use the
Oracle kernel audit module approach for OS file monitoring on Linux, these
limitations do not exist.
An X indicates support for the listed action and NS indicates "Not Supported".
X86 X86 X86 X8 X86 X86 X86 X86 X86 X86 X86 X86 Sp X8 Spa X8 Spa
32 32 64 6 64 32 64 32 64 32 64 64 arc 6 rc 6 rc
bit bit bit 32 bit bit Bit bit bit bit bit bit 64 64
bit bit bit
File Read X X X X X X X X X X X X X X X X X
(successful) (K (K (K (K
O) O) O) O)
File Delete X X X X X X X X X X X X X X X X X
(Successful) (K (K
O) O)
File Rename X X X X X X X X X X X X X X X X X
(successful)
File Create X X X X X X X X X X X X X X X X X
(successful)
File Content X X X X X X X X X X X X X X X X X
Modified
(successful)
File X X X X X X X X X X X X X X X X X
Modified
without
content
change
File NS X NS X X NS NS NS NS NS NS NS N N NS NS NS
Modified (N (N S S
(failed) on- on
KO -
) K
O)
File NS X X X X NS NS NS NS NS NS X X X X X X
Permission (no (no (K
Change n- n- O)
(successful) KO K
) O)
File NS X X X X NS NS NS NS NS NS X X X X X X
Ownership (no (no (K
Change n- n- O)
(successful) KO K
) O)
X86 X86 X86 X8 X86 X86 X86 X86 X86 X86 X86 X86 Sp X8 Spa X8 Spa
32 32 64 6 64 32 64 32 64 32 64 64 arc 6 rc 6 rc
bit bit bit 32 bit bit Bit bit bit bit bit bit 64 64
bit bit bit
File content NS X X X X X X X X X X X X X X X X
modified (no (no
(successful) n- n-
Archive File KO K
) O)
File Read NS NS NS NS NS NS NS NS NS NS NS X X X X X X
(failed)
File Delete NS X X NS NS NS NS NS NS NS NS X X X X X X
(failed) (N (N
on- on-
KO K
) O)
File Rename NS X X X X NS NS NS NS NS NS X X X X X X
(failed) (N (N (n (no
on- on- on n-
KO K - KO
) O) K )
O)
File Create NS X X X X NS NS NS NS NS NS X X X X X X
(failed) (no (no (n (no
n- n- on n-
KO K - KO
) O) K )
O)
File NS X X NS X NS NS NS NS NS NS X X X X X X
Permission (N (N (no
Change on- on- n-
(Failed KO K KO
) O) )
File NS X X NS X NS NS NS NS NS NS X X X X X X
Ownership (N (N (no
Change on- on- n-
(failed) KO K KO
) O) )
X86 32 bit X86 64 bit X86 32 bit X86 64 bit X86 32 bit X86 64 bit
Create Key (successful) X NS X X X X
X86 32 bit X86 64 bit X86 32 bit X86 64 bit X86 32 bit X86 64 bit
Delete Key (successful) X NS X X X X
Select (successful) X X X X
Update (successful) X X X X
Create (successful) X X X X
Drop (successful) X X X X
Truncate (successful) X X X X
Alter (successful) X X X X
Comment (successful) X X X X
Rename (successful) X X X X
Lock (successful) X X X X
Grant (successful) X X X X
Revoke (successful) X X X X
Audit (successful) X X X X
NOAUDIT usage X X X X
Flashback (successful) X X X
Select (successful) X X X X
Update (successful) X X X X
Delete (successful) X X X X
Create (successful) X X X X
Drop (successful) X X X X
Comment (successful) X X X X
Rename (successful) X X X X
Grant (successful) X X X X
Revoke (successful) X X X X
Audit (successful) X X X X
NOAUDIT usage X X X X
Flashback (successful) X X X
Select (successful) X X X X
Update (successful) X X X X
Delete (successful) X X X X
Create (successful) X X X X
Drop (successful) X X X X
Alter (successful) X X X X
Comment (successful) X X X X
Lock (successful) X X X X
Grant (successful) X X X X
Revoke (successful) X X X X
Audit (successful) X X X X
NOAUDIT usage X X X X
Drop (successful) X X X X
Alter (successful) X X X X
Analyze (successful) NS X X X
Drop (successful) X X X X
Alter (successful) X X X X
Select (successful) X X X X
Grant (successful) X X X X
Revoke (successful) X X X X
Audit (successful) X X X X
NOAUDIT usage X X X X
Drop (successful) X X X X
Execute (successful) X X X X
Grant (successful) X X X X
Revoke (successful) X X X X
Audit (successful) X X X X
NOAUDIT usage X X X X
Drop (successful) X X X X
Execute (successful) X X X X
Grant (successful) X X X X
Revoke (successful) X X X X
Audit (successful) X X X X
NOAUDIT usage X X X X
Execute (successful) X X X X
Grant (successful) X X X X
Revoke (successful) X X X X
Audit (successful) X X X X
NOAUDIT usage X X X X
Drop (successful) X X X X
Execute (successful) X X X X
Grant (successful) X X X X
Revoke (successful) X X X X
Drop (successful) X X X X
Drop (successful) X X X X
Alter (successful) X X X X
Drop (successful) X X X X
Alter (successful) X X X X
Truncate (successful) X X X X
Drop (successful) X X X X
Drop (successful) X X X X
Alter (successful) X X X X
Drop (successful) X X X X
Alter (successful) X X X X
Drop (successful) X X X X
Drop (successful) X X X X
Drop (successful) X X X X
Drop (successful) X X X X
Alter (successful) X X X X
Grant (successful) X X X X
Revoke (successful) X X X X
Audit (successful) X X X X
NOAUDIT usage X X X X
Alter (successful) X X X X
Set (successful) X X X X
Logon (successful) X X X X
Drop (successful) X X X X
Alter (successful) X X X X
Logoff X X X X
Change Activity Planner (CAP) enables you to plan, manage, and monitor operations
within your data center. These operations involve dependencies and coordination
across teams and business owners, as well as multiple processes. Operations can
include rollout of security patches every quarter, building new servers to meet a
business demand, migration or consolidation of data centers, and rolling out
compliance standards across an environment.
Using CAP, you can:
• Plan change activity, including setting start and end dates; and creating, assigning
and tracking task status.
• Manage large numbers of tasks and targets, using automated task assignment and
completion support.
• Use a dashboard where you can monitor your plans for potential delays and
quickly evaluate overall plan status.
• Have task owners manage their tasks using a task-based dashboard showing task
priorities and schedules.
This chapter covers the following:
• Viewing My Tasks
• EM_CAP_ADMINISTRATOR
• EM_CAP_USER
• Plan
• Task Definition
• Task Group
• Task
50.1.2.1 Plan
A plan introduces new changes into an organization and specifies the start and end dates
for the required changes. It identifies the set of changes that are required, as well as
the targets where the changes are needed.
To create plans you must have been granted the EM_CAP_ADMINISTRATOR role.
(See 'Privileges and Roles' in the Oracle Enterprise Manager Cloud Control
Administrator's Guide for additional information.) Once a plan is created, it is possible
to monitor the progress of the plan. A plan is considered complete once all tasks that
make up that plan are closed.
The following terms are used when working with plans.
Activated: Plan definition is complete and the plan has been activated. Activation means that tasks have been created and you can start managing the plan's progress.
Overdue Plan: An Active Plan which still has open tasks but has passed the Plan's scheduled end date.
Due Within One Week: Active Plan which is due within seven days.
Deactivated: Plan was activated but then canceled. All tasks associated with the plan are canceled. The plan can still be viewed in the Change Activity Plans page, and operations like Create Like, Manage, and Delete are available. You can view the Manage Plan page for Deactivated Plans.
Definition in Progress: Plan has been partially defined and saved for later use. Edit the plan to continue or complete its definition, and to activate it.
Failed in Activation: Error occurred when the plan was being activated. The Comments and Audit Trail or the log files will have more details on the error.
⁎ Manual
Task owners will have to update the task status.
⁎ Use evaluation results from the compliance standard rule to update the
status. (Only available for task definitions with a target type specified.)
A compliance standard rule can be associated with the task definition. When
the tasks are generated, and as work is done on the target, this rule is
evaluated and the results are used to determine if the task is still open, or if
the task can be automatically closed.
– Patch Plan
This choice integrates with the My Oracle Support Patches & Updates support.
A patch template is associated with the task definition, and when you assign
your tasks, you can use the template to create and deploy a patch plan.
Both Patch Template and Compliance Rule require a target type selection on
which to base the evaluation.
– Job
Enables the plan creator to create tasks based on jobs. In turn the task owner
submits a job for the task or associates the task with an existing job execution.
Use either the My Tasks page or the Manage Plan Tasks tab for submitting jobs
and associating tasks.
A library job must be associated with the task. Only those library jobs whose
target type matches the task's target type will be available for selection.
There are a number of ways to specify when a task is completed.
⁎ Task is marked as complete when the job completes successfully. This is the
default. Once a day CAP updates the job-based tasks based on the status of
their associated job executions.
To immediately update the status of a job-based task, select the task in the
My Tasks page or the Manage Plan Tasks tab.
– Deployment Procedure
Enables the plan creator to associate a configured deployment procedure with a
task definition. A popup displays the available configured deployment
procedures.
Note that only configured deployment procedures will be available for
selection. To create a configured deployment procedure, select a procedure from
the Procedure Library page. Click the Launch button, fill in the desired
attributes, and then save the configuration.
The task owner can associate a CAP task with a procedure run which has been
launched from the Procedure Library.
There are a number of ways to specify when a task is completed.
Note: When selecting a deployment procedure for use with a CAP task, ensure
that task owners have the appropriate access to the procedure definition and any
procedure runs for the selected procedure. You can edit the access for procedure
definitions from the Procedure Library page. You can edit the access for procedure
runs from the Procedure Activity page. After launching a procedure, Edit
Permissions for the procedure execution associated with the CAP task, and grant
access to the appropriate administrators. Failure to grant the proper access will
prevent task owners from viewing the procedure associated with their task.
• Task Status: The task definitions and task groups all have an associated status,
indicated by an icon. The status is determined by the status of the tasks that make
up the task definition or task group.
Example: If a task definition has four associated Tasks: Task 1 is Unacknowledged,
Task 2 is Complete, Task 3 is In Progress and Task 4 is Canceled, then the task
definition status will be In Progress.
• Dependent Task (or Dependency): Task that cannot be started until another task is
complete.
When creating a task definition or task group, you can define a single dependency
on another task definition or task group. When the plan is activated, tasks are
generated. Task dependencies are determined based on the dependencies defined
during plan creation. If a task is dependent on another task, it will be flagged as
waiting until that task is complete. If a task is dependent on a task group, it will be
flagged as waiting until all the tasks in the task group are complete.
50.1.2.4 Task
When a plan is activated, tasks are created based on task definitions. If the task
definition has targets, a task will be created for each licensed target.
The following terms are used when working with tasks:
Waiting: Indicates the task is dependent on another task, and the task it depends on has not been closed. If a task is waiting, it is indicated in the Waiting column of the Tasks table. To see details, click on the task. The Tracking section will show information on the task's dependency.
The task status shows where the task is in its lifecycle. The task statuses include:
Acknowledged: Indicates a user has seen the task but has not yet started work on it.
Completed: Indicates work on the task is complete. Tasks with automatic verification will enter this status when the system verifies the expected changes were made. For manual tasks, users will have to set the task status to Complete once they finish their work. Once a task is completed, the details become read only. Note: Completing a task cannot be undone.
Canceled: Indicates the work specified in the task is no longer needed, and will not be done. When a task is canceled, the details become read only. Note: Canceling a task cannot be undone.
• Specify a task hierarchy that contains nested groups of tasks with related
dependencies. Using task groups, you can organize your tasks and structure the
flow of how your plan should be processed.
• Specify custom instructions for completing the task or select a patch template that
should be applied to the targets.
• Specify whether the system will automatically detect that the task is complete or
whether the owner of the task will manually close the task.
To create a Change Activity Plan:
1. From the Enterprise menu select Configuration, then select Change Activity
Plans.
2. On the Change Activity Plans page, click Create. This activates the Create Plan
Wizard.
• Name
• Target type - A plan's target type determines the types of targets that can be
associated with the plan's tasks.
• Priority - When managing a plan, priority may be used to indicate the order of
importance of ensuring task ownership is assigned and completion dates are
met.
• External Links - You can also add Links to documentation pertinent to the plan.
For example, a link could be one document that explains a specific patch set, a
specific internal procedure or policy, the set of applications impacted by the
task, and so on. Provide a name for the link and type the URL.
Click Next.
4. In the Create menu on the Create Plan: Task Definitions page, select either Task
Group or Task Definition to create a hierarchy of task actions to perform in this
plan.
See Creating a Task Definition and Creating a Task Group for detailed information.
Click Next.
6. On the Create Plan: Review page, ensure all the information you have entered is as
you intended. If updates are necessary, click the Back button and make the
necessary changes.
To save your in-progress plan definition, go to the end of the flow (the Review
step), and click the Save and Exit button.
Note: The information on this page is not saved until you click Save and Exit or
Activate Plan on the Create Plan: Review page.
1. Type the name for the task. Task group and task definition names should be
unique within a plan.
Note: If you select an existing task, you can create a task before, after, in parallel, or
inside the selected task. If you select Before or After, it specifies the order in which
the tasks should be performed, and sets the task dependencies accordingly. If you
decide that you do not like the order, you can change the order by using Move or
Set Dependency.
Use the NONE option when a task definition is not based on a target.
– Select Task owner will update the status to indicate that the task owner will
manually mark the task as completed when appropriate. Tasks can be
marked as completed from the Manage Plan page or from the My Tasks
page.
– Select Use evaluation results from the compliance standard rule to update
the status to indicate that the task should be automatically marked as
completed when a target is compliant with the selected compliance rule.
Click the magnifying glass to access the Search and Select Compliance
Standard Rule dialog box to select the compliance rule from the list of rules
appropriate for the task target type.
• Patch Plan
When you select this option, use the Patch Template dialog box to select a patch
template which specifies the list of patches that should be applied to the targets
for the task.
The tasks for each target in this task definition will be closed automatically
when the system detects that the patches have been deployed to the target.
You can add more instructions in the Action field, or add a URL in the Links
section that leads to more information for the task owner.
• Job
This option enables the plan creator to create tasks based on jobs. Click the
magnifying glass to access the Search and Select Job dialog box to select the job
to be executed.
The task owner submits a job for the task or associates the task with an existing
job execution. Use either the My Tasks page or the Manage Plan Tasks tab for
submitting jobs and associating tasks.
A library job must be associated with the task. Only those library jobs whose
target type matches the task's target type will be available for selection.
There are a number of ways to specify when a task is completed.
– Task is marked as complete when the job completes successfully. This is the
default.
Once a day CAP updates the job-based tasks based on the status of their
associated job executions. To immediately update the status of a job-based
task, select it in the My Tasks page or the Manage Plan Tasks tab.
• Deployment Procedure
Enables the plan creator to associate a deployment procedure with a task
definition. Click the magnifying glass to access the Search and Select
Deployment Procedure dialog box and select the appropriate deployment
procedure.
Note: Only configured deployment procedures will be available for selection.
To create a configured deployment procedure, select a procedure from the
Procedure Library page. Click the Launch button, fill in desired attributes, and
then save the configuration.
The task owner can associate a CAP task with a procedure run which has been
launched from the Procedure Library. Ensure that the task owners have
permission to view the necessary procedure runs.
There are a number of ways to specify when a task is completed.
Note: When selecting a deployment procedure for use with a CAP task, ensure
that task owners have the appropriate access to the procedure definition and
any procedure runs for the selected procedure. You can edit the access for
procedure definitions from the Procedure Library page. You can edit the access
for procedure executions from the Procedure Activity page. After launching a
procedure, Edit Permissions for the procedure execution associated with the
CAP task, and grant permission to the appropriate administrators. Failure to
grant the proper access will prevent task owners from viewing the procedure
associated with their task.
4. Provide the action for this task definition, for example, schedule a blackout, back up a database, and so on.
5. Select the verification method. The methods displayed are dependent on the type
of task. Methods include:
• Use evaluation results from the compliance standard rule to update the status.
Select the compliance standard rule.
Patch plan deployments are automatically verified by the system.
6. Click Add to add links to documentation pertinent to the task definition. Provide a
name for the link and type the URL.
7. Click Next.
Note: The information on this page is not saved until you click Save and Exit or
Activate Plan on the Create Plan: Review page.
Setting Dependencies
When you add task definitions to the plan, using the Add After or Add Before options automatically sets a task dependency that specifies the order in which the tasks should be completed. If you want to change the order, use the Move option to move a single task before or after another task.
You can manually specify a dependency between task definitions to indicate whether
a task definition should be completed before another one starts. Use Set Dependency
to move a task and all the tasks after it to be performed after another task. To set a
dependency, select a task definition and click Set Dependency... located in the toolbar.
Specify the task definition that should be completed before the selected one starts. This
does not prevent the task owner from completing a task definition out of order, but
the task owner will be able to see the dependency and will be warned that the task
definition should wait until the dependency is completed.
Dependencies can only be created on task definitions that are defined at the same level
of the task definition tree, and that appear above the selected task definition.
Moving Task Definitions
Task definitions can be moved to a different location in the tree by selecting a task
definition and clicking the Move button. Note that this does not affect the order of task
definition execution, unless by moving the task definition a dependency must be
removed. Moving a task definition in or out of a task group can affect the targets
added to the task definition if the target is inherited from the task group.
Note: The information on this page is not saved until you click Save and Exit on the
Create Plan: Review page.
Adding Targets
Click Add Targets to select the targets associated with the task. To activate the plan,
there must be at least one target associated with each task except for tasks with None
as the target type.
If the target type for the plan is a system, tasks will only be created for targets in the
system appropriate to the task definition.
Note the following:
• You can only add targets that correspond to the target type for the plan. If the
target type for the plan is a system, tasks will only be created for targets
appropriate to the task definition.
For example, if the plan target type is a Database System and the task definition is
for a Listener, tasks will be created for any Listener targets associated with the
Database Systems you select.
• If a task specifies a target type, at least one target must be associated with it before
the plan can be activated.
• To see which targets are added to a task or task group, select the task definition
and click the Edit Targets button, or the target count in the table.
The Edit Targets dialog box will show the targets added directly to the selected
task, as well as the targets inherited from enclosing task groups.
1. Type the name for the task group. Task group and task definition names should be
unique within a plan.
Note: If you select an existing task group, you can create a task group (or task
definition) before, after, in parallel, or inside the selected task group. If you select
Before or After, it specifies the order in which the task groups (or task definitions)
should be performed, and sets the task group (task definition) dependencies
accordingly. If you decide that you do not like the order, you can change the order
by using Move or Set Dependency.
2. Provide a description of the task group, for example, what is the purpose of the
group. A task group is a folder used to group related task definitions.
3. Click OK.
Note: The information on this page is not saved until you click Save and Exit on the
Create Plan: Review page.
• Editing a Plan
• Deleting a Plan
• Deactivating a Plan
• Exporting Plans
• Printing Plans
1. On the Change Activity Plans page, highlight the plan that you want to copy. Click Create Like. In the dialog box, type the name of the new plan. Click OK.
This activates the Create Plan Wizard. Follow the steps outlined in Creating a
Change Activity Plan. The following steps alert you to changes that have
consequences when creating a plan like another plan.
2. On the Create Plan: Properties page, you can change the name of the plan. You can
also change the target type but if you do, all the task definitions will be removed!
Click Next.
3. On the Create Plan: Tasks page, you can add, edit, and remove task definitions and
task groups. You can also move task definitions and task groups within the plan.
When moving or copying task definitions and task groups, dependencies, if set, will be retained. It is strongly recommended that all task names be unique; this reduces confusion when managing tasks after the plan has been activated. During a copy operation, a new name with an updated index is provided to preserve unique naming. The name can be changed using Edit.
Click Next.
4. On the Create Plan: Targets page, you can add and edit targets associated with a
task definition. On the resulting dialog box, select the targets associated with the
task definition. There must be at least one target associated with each task
definition. Click OK after you have made the changes. Click Next.
5. On the Create Plan: Activate page, you can activate the schedule.
Note: You can leave this Activate page blank until you are ready to activate the
plan.
Click Next.
6. On the Review page, ensure all the information you have entered is as you
intended. If updates are necessary, click the Back button and make the necessary
changes.
You also have the option to Save and Exit if you are not ready to activate the plan.
Click Activate if you are ready to activate the plan. Click Refresh on the Change
Activity Plans page to see that the plan is activated. You can only manage a plan
after it has been activated.
• Pending Activation - Indicates the plan is scheduled to start some time in the future. This plan can be modified up until the start date, at which time it becomes Active.
The Edit button is enabled for activity plans that have not yet been activated.
Clicking the Edit button shows a wizard that is essentially the same as the Create
Plan wizard. All edits are enabled. Follow the steps outlined in Creating a Change
Activity Plan.
• Active - Once an activity plan has been activated, editing is available only from the
Manage Plan page. From this page, it is possible to update the activity plan's
scheduled end date and depending on a task's status, assign a task, update a task's
start or scheduled end date, and cancel a task.
2. Click Delete.
If you no longer need to view the plan, consider deleting the plan instead of
deactivating it.
Note: Once a plan is deactivated, it cannot be re-activated.
1. On the Change Activity Plans page, highlight a plan in the Change Activity Plans
table.
1. From the Actions menu on the Change Activity Plans page, select Export.
The export function exports a list of all the plans, not an individual plan.
Note: The resulting spreadsheet is in Read Only format.
1. From the Actions menu on the Change Activity Plans page, select Print.
2. Print creates a printable page in another tab. Select the tab and then print the content. The content is the list of all plans, not an individual plan.
1. In the Change Activity Plans table located on the Change Activity Plans page,
highlight the plan whose owner you want to change.
3. In the resulting table, select the name of the new owner. Click Select.
Note: The owner of the plan and the Super Administrator can change the owner.
You can change the Owner of a task based on the Target Owner. If the task has a
target, and that Target has an Owner defined, then you can set the task Owner to the
Target Owner.
Assigning Tasks to the Group Owner
For project purposes, you might need to assign tasks to a manager. This enables the
manager to evaluate the work load, the availability of the team, and then assign tasks
accordingly.
To assign tasks to a group owner, you must first create a target property. The
following steps explain how to create a target property.
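1. From the EM CLI, create the target property. The following is a representative sketch; the property name CAP_assignments is only an example, and you should confirm the verb options for your release with emcli help add_target_property:

   emcli login -username=<your user name>
   emcli add_target_property -property="CAP_assignments" -value="*"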
where CAP_assignments represents any property (NOTE: the property name must begin with CAP_) and * represents all targets in the Enterprise Manager Repository.
2. From the Targets menu, select Host. From the table, click the host in which you are
interested. On the resulting Host home page, from the Host menu located in the
upper-left corner, select Target Setup, then select Properties. Type the target
property value for the target.
Using the CAP UI, you can reassign tasks using this target property. For example, on
the Tasks tab located on the Manage Plan page, click Change Owner, then select
Assign Tasks by Target Property.
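As an alternative to setting the property value through the Target Setup pages described in step 2, the value can also be set from the command line. The following is a hedged sketch using the EM CLI set_target_property_value verb; the host name, property name, and value are placeholders, and the value is assumed here to be the Enterprise Manager user to whom tasks for that target should be assigned:

   emcli set_target_property_value -property_records="myhost.example.com:host:CAP_assignments:JSMITH"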
• Drill into the plan and get more details and manage individual tasks. This helps
you to identify any issues that may delay the activity plan completion deadline.
• View the audit trail on who made changes to the plan and what changes were
made to the plan.
• View and update links. Links associated with the plan during creation can be
viewed during plan management. Plan owners and privileged users can update
existing links.
• Reassign a plan to a different plan owner; reassign a task to a different task owner.
• Manage tasks
– Maintain audit trail on who made changes to the plan and what changes were
made to the plan.
• Summary Tab
Provides an overall summary of the plan's status including the status of tasks and
plan summary.
• Tasks Tab
Enables you to manage all the plan's tasks. Tasks can be updated individually or in
bulk.
• Overview - Shows the progress that has been made in the accomplishment of the
plan. Plan details are also included: status, scheduled end date, priority, owner,
and links.
The estimated end date is the maximum scheduled end date of all the tasks that
make up the plan.
• Plan Summary - Visual depiction of the task definitions and task groups that make
up the plan. Dependencies are indicated by arrows between the objects in the plan
and show the order in which tasks should be performed.
You can view the Plan Summary in either graph or table format. The topology
graph (and alternate table view) show all the task definitions and task groups that
make up the plan and the order in which the tasks should be executed.
The task definitions and task groups all have an associated status, indicated by an
icon. The status is determined by the status of the tasks that make up the task
definition or task group.
Example: If a task definition has four associated Tasks: Task 1 is Unacknowledged,
Task 2 is Complete, Task 3 is In Progress and Task 4 is Canceled, then the task
definition status will be In Progress.
Table View
In Table view, there are counts associated with each task definition and task group. Clicking a count link displays the Tasks tab with search filters automatically applied, so you can see details for that task definition or task group based on the link you clicked. For example, when you click the Overdue link, the Tasks tab displays with the task name set and the Overdue flag checked (under Requires Attention). You will then see all outstanding overdue tasks for the task definition you were viewing in the table.
From the Table view of the Plan Summary, you can Print and Export the plan.
Graph View
When using the graph format, you can choose the graph format to be either top-
down or left-to-right. You can opt for the following annotations to be displayed on
the graph: Scheduled Start Date, Scheduled End Date, or Total Tasks.
In graph view, arrows will be drawn between task definitions that have a
dependency relationship. Arrows will point from a task definition that needs to be
completed first, toward a task definition that must wait for its completion.
Task groups are drawn as an encompassing box around other task definitions.
Hover over a task definition name and the resulting arrow to display information
about the tasks created as a result of this task definition.
Right-click on an empty area of the graph to bring up a menu that allows you to
switch the orientation of the graph, or print the contents of the graph.
The Display option allows you to toggle between the Graph and Table views.
• If a plan is not yet completed, you can change its scheduled end date by clicking on
the calendar icon next to the Scheduled End Date in the Overview section.
• You can view the task completion trend by clicking on the line chart icon next to
the Progress(%) in the Overview section.
• Status of Tasks
Pie chart breakdown of the plan by task status.
• Views
This section provides quick links to allow you to quickly find tasks that require
attention. The supported views are: Unassigned Tasks, Unacknowledged Tasks,
Overdue Tasks, Due Within One Week Tasks, and All Active.
All Active shows all open tasks. There is also a Show All link that shows all plan
tasks, including completed and canceled tasks.
• Tasks
This section displays a table of tasks and allows you to search, view and update the
plan tasks. The Actions menu and table buttons provide bulk task update support
for operations like setting dates, changing task owners, acknowledging and
canceling tasks. By default, the tasks are filtered to show only open tasks.
Select a single task to see and update task details or select multiple tasks, and use
the bulk operation buttons at the top of the table, and in the table's Action menu.
The plan task view automatically filters out closed tasks but search filters can be set
by expanding the search section, or using the View Links.
Single Task
When a single task is selected, you can view and edit task information using the
General tab, for example, target name. You can add task comments and use the
Comments and Audit Trail tab to determine who made changes to the plan and
what changes were made to the plan. The information is specific to the task you are
looking at. Therefore it tracks all changes made to that task including comments
that were manually added.
Multiple Tasks (Bulk Operations)
When multiple tasks are selected, you can perform the following:
– Acknowledge a task. Acknowledging a task means you have seen the task but
have not started working on it. The only users who can set a task to
Acknowledged are: the task owner, the plan owner, and super administrator.
When a user is assigned a task, he is allowed to see the plan that contains the
task.
– Cancel a task
– Change owners
– Submit a job. You can submit a job only if the tasks are part of the same task
definition.
– Associate with existing job execution. Choose an existing job execution to associate with all the selected tasks.
– Create a patch plan. A patch plan can be created when patch plan-based tasks
are selected. The tasks must be part of the same task definition. Use the Patches
and Updates page to deploy the patch plan.
– Associate with existing patch plan. Note that when the patches in the patch
template for the task definition have been applied to the task's target, the task
will close automatically whether or not a patch plan is associated with the task.
– Associate with existing procedure run. This option is enabled for tasks in the
same task definition. Once you create the association, use the Deployment
Library page to launch the deployment procedure.
The task table also supports Export and Print of the task list. Use the Action menu
for these operations.
Requiring Attention
When you click the Requires Attention icon in the table, the General tab and the
Comments and Audit Trail tabs appear. Study the available information to
determine what you should do next. For example, if the plan is overdue, determine
what task is causing the delay.
You can study the audit trail for a particular task, for example, dates, whether
dependencies exist, and owner or status of the task changes.
Changing Owner
One especially helpful thing you can do is change the owner. This is particularly
useful when job responsibilities change and user roles change. Change Owner
shows you a list of users; select the new user.
2. Highlight the task on which you want to perform any of the following operations:
Note: You can select one or more tasks in the table and access bulk operations, such as
Acknowledge and Change Owner.
By default the tasks are filtered to show only open tasks.
The following sections are provided on the My Task page:
• Views - Provides quick links to allow you to quickly find tasks that require
attention. The supported views are: Overdue Tasks, Due Within One Week Tasks,
Unacknowledged Tasks, and All Active.
All Active shows all open tasks. The Show All link shows all tasks, including
completed, and canceled tasks.
• Tasks - Displays a table of tasks and allows you to search, view, and update the
tasks. The Actions menu and table buttons provide bulk task update support for
operations like setting dates, changing task owners, acknowledging and canceling
tasks.
Selecting a Single Task
Selecting a single task allows you to view and edit the details of the task, and lets
you add comments to the task, as well as review the comments and audit trail for
the task.
When you select a single task, the data associated with the task is displayed in the
following tabs:
– Task Details: Provides basic task information like the plan, target and task
action. In this section you can update the task's scheduled start date and
scheduled end date.
From the task details section, you can submit the job, create a patch plan,
associate the task with a patch plan, associate a job with a job execution, and
associate a job with a deployment procedure run.
Selecting Multiple Tasks
If you multi-select tasks in the task table, you can use the table buttons (and
Actions menu) to perform actions across tasks. For example: Acknowledge, Change
Owner, Cancel, Set Scheduled Start Date, Set Scheduled End Date for multiple
tasks at one time. In addition, you can Submit Job, Associate with Existing Job
Execution, Create Patch Plan, Associate with Existing Patch Plan, and Associate
with Existing Deployment Procedure Execution.
– The bulk Change Owner feature supports the following ways of setting the
owner:
⁎ Assign Tasks to User (Select using the Enterprise Manager user selector.)
⁎ Assign Tasks to Target Owners: (Only applies to tasks that have targets. The
Target Owner must be set for that target for the assignment to take place.)
Note: This label can change to Assign Tasks by Target Property, if you add
any Change Activity Planner target properties to your environment. For
additional information, see the 'Overview of Change Activity Planner'
chapter in the Oracle Enterprise Manager Lifecycle Management Administrator's
Guide.
– For job-based tasks, you can select multiple tasks and submit one job to
complete the tasks, provided the tasks have the same task definition.
If the task definitions for the selected tasks do not all refer to the same job, the
Submit Job option will be disabled.
– For deployment procedure-based tasks, all the tasks must have the same task
definition. If the task definitions for the selected tasks do not all refer to the
same deployment procedure, the Associate with Existing Deployment Procedure
Execution option will be disabled.
– For patch template based tasks, the plan creator associates the task definition
with a patch template. The task owner can either associate the task with an
existing patch plan, or create a new patch plan. The patch template must be the
same for all selected tasks.
• Tracking: Allows you to see and edit the current owner and the task status. If this
task depends on another task, the dependency will be displayed and, in the case
where the task is waiting on this dependency, the waiting icon will be visible. If
this task requires attention, details of the issues will be provided in this section.
• Comments and Audit Trail: Displays all comments and the audit trail annotations.
Note:
For tasks involving jobs and deployment procedures, if you do not see your assigned task in the Tasks region, ensure that you have the necessary permissions to see the job and deployment procedure. If you do not have the appropriate access, contact the person who created the job or deployment procedure.
1. Create a new plan and provide the priority of the activity plan, activity plan
description, and the target type to which the activity plan applies.
2. Create new task definitions/task groups under this plan. This is an iterative
process until you finish entering all task definitions and task groups.
3. Optional step. Assign at least one target to these tasks on which tasks need to be
performed.
Task definitions that have a specified target type must be assigned at least one
target of the specified type. If multiple targets are assigned to a task definition, a
separate task will be created for each target. Note: Task definitions with target type
'None' do not require a target selection.
Note: For tasks involving jobs and deployment procedures, if you do not see the
associated job or deployment procedure for your assigned task in the task details
region, ensure you have the required permissions to see the job and deployment
procedure. If you do not have access, contact the person who created the job or
deployment procedure. You may also need to request permission to see specific job
or procedure runs if they were submitted by another user.
• Automatically close the task using a compliance standard rule. The system will
automatically close the task if the rule check passes on the chosen targets.
• When you assign targets to a task group, all tasks in the task group inherit the assignment. You can assign targets at the task group level or the individual task level. For example, if all tasks should be performed on Database1, select the task group folder and assign the target Database1; all tasks in the group will then be assigned Database1. You can still select a single task within the group and assign it an additional database target if needed.
• Close the task group automatically. The system automatically closes the task group
when all subtasks are complete.
• During the task definition phase, rearrange tasks using the delete, move, and copy functions. Task definitions and task groups can be relocated in the plan hierarchy to ensure proper dependencies and flow.
• During the task creation phase, create a dependency between two tasks; for example, task B depends on task A, meaning that task B cannot start until task A is completed. The system allows users to create dependencies only at the same level (dependencies only between siblings).
This chapter provides an overview of Deployment Procedures and describes the key
aspects you need to know about them. In particular, this chapter covers the following:
• Components of a Procedure
• Creating a Procedure
• Procedure Library Tab: Use this tab to view a list of all available procedures, run them, create new procedures, or create procedures based on the Oracle-supplied ones. A deployment procedure is a sequence of provisioning steps and phases, where each phase can contain a sequence of steps.
Oracle provides best practice deployment procedures that are marked as Oracle
under the Created By field. You cannot edit or delete these procedures.
For information about the tasks that can be performed from the Procedure Library
page, refer to Managing Deployment Procedures.
• Procedure Activity Tab: Use this tab to view a list of all procedure runs that have been submitted for execution, as well as all executing procedures. You can also view the status of a procedure run. In addition, you can perform a number of actions on a submitted procedure, such as Stop, Suspend, Resume, Retry, Delete, and Reschedule. To see which actions you can perform on a procedure, select the procedure. For example, you can Stop or Retry a failed procedure. (A command-line sketch of these actions follows this list.)
Starting with Enterprise Manager 12.1.0.3, a new option called Reschedule has been introduced, which is enabled for a job that is in progress and has a repeating schedule.
When a procedure is executed, an instance of that procedure is created. This
instance keeps track of which step is currently being executed and stores any data
collected from the user or any data automatically gathered by executing the action
steps.
For an overview of the Procedure Activity tab, see Overview of the Procedure
Instance Execution Page
• Recycle Bin Tab: You can delete procedures and runs. When procedures or runs
are deleted, they will be internally marked as deleted and will be displayed in the
Recycle Bin tab.
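The run-level actions described above for the Procedure Activity tab can also be scripted through EM CLI. The following is a minimal sketch; substitute a real instance GUID reported by get_instances, and confirm each verb's options with emcli help <verb>:

   emcli get_instances                                  # list submitted procedure runs and their instance GUIDs
   emcli get_instance_status -instance=<instance_guid>  # view the status of one run
   emcli stop_instance -instance=<instance_guid>        # stop a scheduled or running procedure
   emcli suspend_instance -instance=<instance_guid>     # suspend a running procedure
   emcli resume_instance -instance=<instance_guid>      # resume a suspended procedure
   emcli retry_instance -instance=<instance_guid>       # retry a failed procedure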
Figure 51-1 shows you how you can access the Provisioning screen from within Cloud
Control.
• Super Administrator role allows you to perform all the Administrative operations,
and provides full privileges on all the targets.
Roles Description
EM_PATCH_DESIGNER - Role has privileges for creating and viewing any patch plan
– Create Enterprise Rule Sets, which are basically collections of rules that apply to Enterprise Manager elements, for example, targets and jobs.
– Create new Named Credentials that are required to perform Enterprise Manager Administrative Operations.
– Create Any Software Library Entity, Import Any Software Library Entity,
Export Any Software Library Entity, and so on.
Note:
• EM_ALL_OPERATOR (Operator): This role has restricted access, and allows you to
perform only the run-time activities. For example, Launching a Deployment
Procedure.
The following table lists all the roles predefined for Operators, and their
corresponding descriptions:
Roles Description
EM_ALL_VIEWER Role Desc
– Create new Named Credentials that are required to perform Enterprise Manager Administrative Operations.
• Target List
• Procedure Variables
• For copying a jar file to multiple hosts, you will need just one target list. You may
choose to use the default target list for this purpose.
• For cloning an Oracle Home, and provisioning it on multiple targets, you will need
a minimum of two target lists: one target list for the source which contains only a
single target, and a second target list which contains all the destination targets.
• For provisioning or patching a WebLogic Server, you might require three separate target lists: one for the Administration Server, one for the Managed Servers, and one for the Database.
To declare a Procedure Variable, you must enter a unique name and a description for it. Optionally, you can select the password check box to make the variable secure.
You can create two types of Procedure Variables that can be later used while
launching the deployment procedure. They are as follows:
– ⁎ Text, which allows you to enter one value for the variable, for example, a staging location, host name, or profile name.
⁎ List of Values, which allows you to enter many values for a variable. To provide multiple values for a variable, click Add, then enter the details such as Value, Display Name, and Description. For example, a variable called country could be declared with the following values:
US - America
IE - Ireland
• Software Library Entity: This variable gives you the flexibility of binding the variable to a Software Library Directive or Component at the time of launching the procedure. Previously, these values had to be specified at design time while creating the procedure; with the introduction of the Software Library Entity variable, you can specify the values dynamically at the time of launching the procedure.
Note:
• Types of Phases
• Rolling Phase
Rolling phase is a phase where steps are run serially across targets.
• Parallel Phase
Parallel phase is a phase where steps are run in parallel across targets.
• Manual Step
Manual Step is that task that requires user interaction and cannot be automated.
Typically, Deployment Manager would display the instructions that need to be
performed by the user. After the operation is performed, the user proceeds to the
next step.
Examples of a Manual Step:
– Reboot a system.
• Computational Step
Computational Step is a task whose operations are performed within the Deployment Engine and that does not require any user intervention. This step gathers additional information for executing a procedure. This step cannot be inserted by a user; only Oracle can insert this step.
Examples of Computational Step:
– Retrieving target properties from the repository and updating the runtime
information.
• Action Step
Action Step is a task that performs operations on one or more targets. Action Steps must be enclosed within a phase. The Deployment Procedure maps each Action Step and target pair to a job in the Enterprise Manager Job System. The Deployment Procedure can schedule, submit, and run a job per Action Step per target. For example, running a script, applying a patch, upgrading an Oracle home, and so on.
Also note that an Action Step is considered to have completed successfully only when all its associated jobs have completed successfully. If a job fails, then the Action Step also fails. You may then manually restart the job in question, or ignore the error and instruct the Deployment Procedure to proceed.
The different types of Action Steps include:
– Job
Job Step is a special type of Action Step that executes a predefined job type on a
target. This is used if you want to execute a job type as a part of a Deployment
Procedure. You need to pass job parameters for a step.
⁎ Staging a patch.
⁎ Starting a database.
– Library: Directive
Directive Step is a special type of Action Step to deploy a directive alone. This is
useful when users want to store their custom scripts in the Software Library and
reuse them in a Deployment Procedure.
For more information about Directives, see Oracle Enterprise Manager Cloud
Control Administrator's Guide.
Examples of Directive Step:
– Library: Component
A Component Step is a special type of Action Step that deploys a Software Library Component and its associated Directive. The Deployment Procedure Manager executes the directive with respect to the component. Components used for a Generic Component Step generally have one directive associated with them. This association is made by selecting both the component and the directive while creating the step. Any directives that you associated with the component while uploading it to the Software Library are ignored while executing the step.
For more information about Components, see Oracle Enterprise Manager Cloud
Control Administrator's Guide.
Examples of Component Step:
⁎ Applying a patch.
– Host Command
Host Command Step is a special type of Action Step that encapsulates simple
host commands. This step allows the user to enter a command line or a script
(multiple commands) to be executed on the target host.
Examples of Host Command Step:
⁎ Restarting OID
1. To disable a phase or step, select the phase or step you want to disable, and
click Disable.
2. To enable a phase or step, select the phase or step you want to enable, and click
Enable.
• Delete: Select the step or phase you want to delete, and click Delete.
Note:
Oracle recommends that you disable the steps or phases instead of deleting
them because steps or phases once deleted cannot be retrieved, but steps or
phases disabled can always be enabled later.
• Insert: To add a new Step or Phase, click Insert. In the Create wizard, do one of the
following:
- Add a Phase. See Adding Rolling or Parallel Phase
- Add a Step. See Adding Steps
• Edit Step: To edit a Step or Phase, click Edit Step. Depending upon your selection, either the Edit Phase or Edit Step wizard is displayed. Accordingly, follow the steps available in:
- Edit a Phase. See
- Edit a Step. See
1. From the Enterprise menu, select Provisioning and Patching, then select
Procedure Library.
2. In the Procedure Library, from the list of actions, select Create New and click Go.
3. On the Create New Procedure page, in the General Information tab, provide a
Name and description for the procedure, Procedure Utilities Staging Path, and
Environmental Variables.
4. On the Create New Procedure page, click the Target List tab. You can create your own custom target lists on which designated phases can run. See Target List.
The advantage of this approach is that you can have multiple custom target lists and assign them to the different phases in your procedure.
5. On the Create New Procedure page, click the Procedure Variables tab. You can create your own procedure variables. See Procedure Variables.
6. On the Create New Procedure page, click the Procedure Steps tab. This tab allows you to add phases and steps to your procedure. For more information about adding a phase or a step, see Adding Rolling or Parallel Phase and Adding Steps.
1. From the Enterprise menu, select Provisioning and Patching and then select
Procedure Library.
2. On the Provisioning page, in the Procedure Library tab, from the menu, select
Create New, then click Go.
Note:
When creating a phase inside another phase, for the insert location, select
After "Default Phase" or Before "Default Phase". Inside "Default Phase",
you will not be able to select any target in the next page.
a. On the Create page, specify general information about the phase as described
in the following table:
Insert Location - If you want to insert the custom phase after the phase or step you selected, then select After <phase or step name>. To insert it inside the selected phase or step, select Inside <phase or step name>. Otherwise, select Before <phase or step name>.
Type - If you are adding a rolling phase, then select Rolling. If you are adding a parallel phase, then select Parallel.
Error Handling - Select the error handling mode you want to set for the custom phase. Every step in a Deployment Procedure is preconfigured with an error handling mode that indicates how the Deployment Procedure will behave when the phase or step encounters an error. The error handling modes offered by Cloud Control are:
- Inherit - Inherits the error handling mode that was set for the enclosing phase. (When set for a step that is outside a phase, it inherits the error handling mode from the Deployment Procedure.)
- Stop On Error - Stops when an error is encountered. The Deployment Procedure does not proceed to the next step until you correct the errors or override them.
- Continue On Error - Continues even when an error is encountered.
- Skip Target - Ignores the failed target on the list and continues with other targets.
b. On the Select Target List page, select a target list to indicate the type of targets
on which the new phase should run.
All the target lists declared while creating the procedure are listed in the drop-down menu; select the target list to use for this phase. The actual targets can be chosen when the procedure is launched.
c. On the Review page, review the information you have provided for creating a
new phase, and click Finish.
1. On the Create page, specify general information about the step as described in
Table 51-4.
2. On the Select Type page, select a job type that best describes the task that you want the step to perform. For example, if you want the job to transfer files across the network, then select File Transfer.
3. On the Map Properties page, specify values for the parameters that are required
by the selected job type. Additionally, you can set the target List to be applied for
this step.
4. On the Review page, review the information you have provided for creating a
new step, and click Finish.
1. On the Create page, specify general information about the step as described in
Table 51-4.
2. On the Select Directive page, if you have selected a Software Library variable in the Procedure Variables tab, then you can select one of the following options. If not, you can directly select a directive from the table, and click Next.
- Select New Directive: This option lists all the directives available in the Software Library. Select the directive from the list that you want to run on the targets. Provide the necessary values in the Select Directive section to narrow down the search results.
- Select New Software Library Entity Variable: Select the Software Library variable that you declared while creating the procedure from the Software Library Entity Variable drop-down menu. This variable behaves as a placeholder and gives you the flexibility of binding it to a directive dynamically while launching the procedure. Essentially, you can use this variable to select entities that do not need any parameters.
For example, a user-defined script, such as a Perl script that prints the current directory location, which does not need any parameters to be passed. (A minimal shell equivalent is shown after these steps.)
- Select New Software Library Entity Variable with Directive Properties: This
option allows you to bind a Software Library entity variable with directives that
are available in Software Library. Ensure that you choose a directive whose
properties (signature) matches with the entity declared.
Select Always Use Latest Revision so that only the latest revision of the selected
directive will be used at all times.
• By default, the Run Directive and Perform Cleanup options are enabled to run the script and remove the files after the step has run.
• In the Directive Properties section, specify values for the properties associated with the selected directive. You have the option of providing or not providing the property values at this stage. If you do not provide the property values now, then you are prompted for them at the time of launching the procedure.
• In the Credentials section, set the Target List to be applied for this step.
• In the Time Limit Properties section, you can set the maximum time, in seconds, allowed for an operation to complete. For example, assume that you have a large procedure with numerous steps and you do not want to block the whole execution if one step fails (for example, because the Agent is down). In such a scenario, setting a time limit on a step is very effective. If you set a time limit of 75 seconds on a step and the job exceeds this time, the step is skipped.
4. On the Review page, review the information you have provided for creating a
new step, and click Finish.
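As a concrete stand-in for the parameterless user-defined script mentioned in the directive example above, a directive stored in the Software Library could be as simple as the following shell script (shown in shell rather than Perl purely for illustration); it takes no properties and only reports where it is running:

   #!/bin/sh
   # Parameterless example directive: report the host and current working directory
   echo "Running on host: $(hostname)"
   echo "Current working directory: $(pwd)"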
1. On the Create page, specify general information about the step as described in
Table 51-4.
2. On the Select Component page, select a component name from the table, and click Next. However, if you have set a Software Library variable in the Procedure Variables tab, then you must select one of the following options:
- Select New Component: This option lists all the components available in the Software Library. Select the component from the list that you want to run on the targets. Provide the necessary values in the Select Component section to narrow down the search results.
- Select New Software Library Entity Variable: Select the Software Library variable that you declared while creating the procedure from the Software Library Entity Variable drop-down menu. This variable behaves as a placeholder and gives you the flexibility of binding it to a component dynamically while launching the procedure. Essentially, you can use this variable to select entities that do not need any parameters.
For example, a user-defined script, such as a Perl script that prints the current directory location, which does not need any parameters to be passed.
- Select New Software Library Entity Variable with Component Properties: This
option allows you to bind the Software Library variable with the components that
are available in Software Library. Ensure that you choose a component whose
properties (signature) matches with the entity declared.
If you check Always Use Latest Revision, then only the latest revision of the
selected component will be used at all times.
3. On the Select Directive page, you can select a directive from the table, and click Next. However, if you have set a Software Library variable in the Procedure Variables tab, then you must select one of the following options:
- Select New Directive: This option lists all the directives available in the Software Library. Select the directive from the list that you want to run on the targets. Provide the necessary values in the Select Directive section to narrow down the search results.
- Select New Software Library Entity Variable: Select the Software Library variable that you declared while creating the procedure from the Software Library Entity Variable drop-down menu. This variable behaves as a placeholder and gives you the flexibility of binding it to a directive dynamically while launching the procedure. Essentially, you can use this variable to select entities that do not need any parameters.
For example, a user-defined script, such as a Perl script that prints the current directory location, which does not need any parameters to be passed.
- Select New Software Library Entity Variable with Directive Properties: This
option allows you to bind a Software Library entity variable with directives that
are available in Software Library. Ensure that you choose a directive whose
properties (signature) matches with the entity declared.
Select Always Use Latest Revision so that only the latest revision of the selected
directive will be used at all times.
• By default, the Run Directive and Perform Cleanup options are enabled to run the script and remove the files after the step has run.
• In the Directive Properties section, specify values for the properties associated with the selected directive. You have the option of providing or not providing the property values at this stage. If you do not provide the property values now, then you are prompted for them at the time of launching the procedure.
• In the Credentials section, set the Target List to be applied for this step.
• In the Time Limit Properties section, you can set the maximum time, in seconds, allowed for an operation to complete. For example, assume that you have a large procedure with numerous steps and you do not want to block the whole execution if one step fails (for example, because the Agent is down). In such a scenario, setting a time limit on a step is very effective. If you set a time limit of 75 seconds on a step and the job exceeds this time, the step is skipped.
5. On the Review page, review the information you have provided for creating a
new step, and click Finish.
1. On the Create page, specify general information about the step as described in
Table 51-4.
2. On the Map Properties page, select the Source Target from which you want to
transfer files, the source target path, the Target Destination for file transfer and the
destination path. Specify the Source and Destination Credential Usage, whether
Host or Privileged Host credentials. Click Next.
If you select the Transfer all the files in this path option, then all the files in the source path are transferred. If you uncheck this option, then the Source File Name field becomes mandatory.
3. On the Review page, review the information you have provided for creating a
new step, and click Finish.
1. On the Create page, specify general information about the step as described in
Table 51-4.
2. On the Enter Command page, specify the command or script that you want to run on the target, and the privilege to run it.
To run the host command as a script, select Script from the Command Type menu. Specify the shell that can interpret the script. The script is passed as standard input to the specified interpreter. (An example script is shown after these steps.)
To run the host command as a command line, select Single Operation from the Command Type menu. Specify the text to be used as the command line. No assumptions are made about the shell to interpret this command line. The first entry in the command line is assumed to be the process to spawn, and the rest of the command line is passed as arguments to that process.
3. In the Time Limit Properties section, you can set the maximum time, in seconds, that the script is allowed to run. For example, if you set this value to 75 seconds and the script exceeds that time when it runs, then this step is skipped.
4. On the Review page, review the information you have provided for creating a
new step, and click Finish.
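For illustration, the following is the kind of short script you might paste in when the Command Type is Script, with /bin/sh specified as the interpreter. The mount point /u01 and the 5 GB threshold are assumptions made for this example; a nonzero exit status causes the step to report an error:

   #!/bin/sh
   # Example pre-check: fail the step if less than 5 GB is free under /u01
   FREE_KB=$(df -Pk /u01 | awk 'NR==2 {print $4}')
   if [ "$FREE_KB" -lt 5242880 ]; then
     echo "Insufficient free space under /u01: ${FREE_KB} KB available"
     exit 1
   fi
   echo "Free space check passed: ${FREE_KB} KB available"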
1. On the Create page, specify general information about the step as described in
Table 51-4.
2. On the Enter Instructions page, provide a message to inform the operator about the manual step. For example, if you want to instruct the operator to log in to a system and update the kernel parameters, then specify the following:
You have been logged out of the system. Log in and update the Kernel parameters.
3. On the Review page, review the information you have provided for creating a
new step, and click Finish.
Insert Location - If you want to insert the custom step after the step you selected, then select After <step name>. Otherwise, select Before <step name>.
Error Handling - Select the error handling mode you want to set for the custom step. Every step in a Deployment Procedure is preconfigured with an error handling mode that indicates how the Deployment Procedure will behave when the phase or step encounters an error. The error handling modes offered by Cloud Control are:
- Inherit - Inherits the error handling mode that was set for the enclosing phase. (When set for a step that is outside a phase, it inherits the error handling mode from the Deployment Procedure.)
- Stop On Error - Stops when an error is encountered. The Deployment Procedure does not proceed to the next step until you correct the errors or override them.
- Continue On Error - Continues even when an error is encountered.
- Skip Target - Ignores the failed target on the list and continues with other targets.
• Rescheduling a Procedure
• Reverting a Procedure
1. In Cloud Control, from the Enterprise menu, select Provisioning and Patching,
then select Procedure Library.
• To view the procedure, select the deployment procedure, and from the actions menu, click View Procedure Definition.
• To edit the procedure, select a user-defined procedure, and from the actions menu, select Edit Procedure Definition and click Go. If you want to customize an Oracle-provided procedure, from the actions menu, select Create Like and click Go. Save the procedure, and then customize it.
• To delete the procedure, select the deployment procedure, and from the actions menu, click Delete.
1. In Cloud Control, from the Enterprise menu select Provisioning and Patching,
then select Procedure Library.
2. On the Provisioning page, from the actions menu select Edit Permissions, and then
click Go.
3. On the Edit Permissions: <target name> page, click Add. From the Search and
Select Administrator or Role dialog box, select the administrators or roles to which
you want to grant the permissions, and click Select.
4. On the Edit Permissions: <target name> page, select the role and the privileges that you want to grant to each of these roles. The Full privilege lets an Operator edit the Deployment Procedure, and the Launch privilege only allows an Operator to view and run the Deployment Procedure. Click OK to save these grants.
1. In Cloud Control, from the Enterprise menu, select Provisioning and Patching,
then select Procedure Library.
3. On the Procedure Activity page, click the procedure to view the status of all
deployment procedures in various stages of their lifecycle.
Status - Description
Action Required - Implies that the Deployment Procedure has stopped running because user interaction is required.
Failed - Implies that the Deployment Procedure has failed and cannot execute the remaining steps. However, you always have the option of retrying the failed steps. Alternatively, you can ignore the error and proceed further.
Saved - Implies that the Deployment Procedure has not been submitted for execution, and has been saved.
Completed with Errors - Indicates that the Deployment Procedure completed, but completed with errors, possibly because some of the steps within it failed and those steps have the Skip Target/Continue On Error flag enabled.
a. Search for a particular Deployment Procedure using the Search section. Click
Search to refine your search criteria.
b. View the status of all Deployment Procedures. You can also manually refresh
the page and view the updated status by clicking Refresh.
Note:
For more information on tracking the jobs, see
1. From the Enterprise menu, select Provisioning and Patching, then select
Procedure Activity tab.
2. On the Procedure activity page, select the scheduled procedure from the table, and
click Reschedule.
3. On the Reschedule procedure page, select the date and time when you want to run
the procedure.
4. From the Repeat menu, select an option from the menu to run the job at the
selected frequency.
1. From the Enterprise menu, select Provisioning and Patching, then select
Procedure Library.
2. On the Procedure Library page, from the menu, select Revert. Note that the Revert option is enabled only if the procedure was edited by a user.
3. Click Go.
For example, if you have version 1.5 as the latest and you revert to version 1.3, a new
version 1.6 is created, which will be the same as version 1.3.
For example, suppose a grace period of 12 minutes is set for a step. If you submit the procedure using the Cloud Control UI and the Agent machine is not reachable for a period longer than 12 minutes, then the step is marked as failed.
Note:
For a video tutorial on creating and using the User Defined Deployment
Procedure, see:
Oracle Enterprise Manager 12c: Implement User-Defined Deployment Procedures
• Step 2: Saving and Launching User Defined Deployment Procedure with Default
Inputs
• Step 3: Launching and Running the Saved User Defined Deployment Procedure
Note:
1. In Cloud Control, from the Enterprise menu, select Provisioning and Patching,
select Procedure Library.
2. On the Provisioning page, from the list of actions, select Create New, and click
Go.
3. On the Create New Procedure page, in the General Information tab, provide a
unique name and description for your procedure.
4. In the Target Lists tab, you can use the default host_target_list variable or
add any number of new custom target lists. Adding new custom target lists
enables you to group the targets which in turn allows phases to use separate
target lists (targets) that they can iterate on.
5. In the Procedure Variables tab, click Add Row to define procedure variables. In
addition to String type, you can add Software Library Entity variable. For more
information about this, refer to Procedure Variables.
Specify the Variable Name, Display Name, Description, and Type from the drop
down menu. Also define whether the variable is a password and a mandatory
field.
6. In the Procedure Steps tab, select the default phase, and do the following:
a. Select Default Phase, and click Insert. For information on inserting a phase,
see Adding Rolling or Parallel Phase.
Note:
Without declaring a Target List, you cannot proceed with the creation of a phase.
b. Select the phase you created, and then click Insert to insert steps. For
information on inserting steps, see Adding Steps.
7. Repeat step 6 to insert steps and phases according to the procedure you want to create.
8. Click Save and Close to save the procedure. You can run the configuration for
future deployments.
51.6.2 Step 2: Saving and Launching User Defined Deployment Procedure with Default
Inputs
Log in to Enterprise Manager Cloud Control with Operator privileges to launch the saved UDDP with default values. To do so, follow these steps:
1. In Cloud Control, from the Enterprise menu, select Provisioning and Patching,
select Procedure Library.
2. On the Provisioning page, select the saved UDDP, and click Launch.
3. On the Select Targets page, select the target list from the drop down menu, and
click Add to populate the target list. Click Next.
4. If you declared variables that you did not define during procedure creation, then you must provide the details on the Set Variable page. All the unbound variables are displayed; enter appropriate values for them.
If you have declared a Software Library Entity variable, then you can search for and select the desired entity from the Software Library. Once the value is populated, you can also choose to lock this value so that any other Operator who does not have privileges on your procedure will not be able to update it. For more information on the different types of variables, see Procedure Variables. Click Next.
5. On the Set Credentials page, you need to set the credentials for the target host
machines that you have included as a part of the host_target_list variable.
Click Next.
6. On the Set Schedule and Notification page, you can schedule the job to run
immediately or at a later preferred time.
7. Click Save, and provide a configuration name to save the job template with default
values.
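The same launch can also be driven from EM CLI instead of the console, which is convenient when the saved procedure must be run repeatedly. The following is a hedged sketch: the procedure GUID and the properties file name are placeholders, and you should confirm the verb options with emcli help submit_procedure before relying on it:

   emcli get_procedures                                        # find the GUID of the saved procedure
   emcli describe_procedure_input -procedure=<procedure_guid> > uddp_input.properties
   # edit uddp_input.properties to supply the target and variable values, then:
   emcli submit_procedure -procedure=<procedure_guid> -input_file=data:uddp_input.properties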
Some procedures allow you not only to save the procedure with default values, but also to lock those values. For example, the following Database Provisioning procedure describes how to save and launch a procedure with lock downs.
51.6.2.1 Saving and Launching the Deployment Procedure with Lock Down
Lock Down is a new feature introduced in Oracle Enterprise Manager Cloud Control
12c that enables administrators with Designer privileges to standardize the
Deployment Procedures across the enterprise. If Designers with Super Administrator
privileges create Deployment Procedure templates with lock downs, and save them,
then these templates can be used by Operators who can launch the saved Deployment
Procedures, make changes to the editable fields, and then run them.
To create a Deployment Procedure with lock downs, an administrator logs in with
designer privileges, and launches a Deployment Procedure. In the interview wizard of
the Deployment Procedure, the designer enters the values for certain fields and locks
them so that they cannot be edited when accessed by other users. For example, in the
following graphic, fields like Database Version, Database type are locked by the
designer, and when an operator launches the same deployment procedure, these fields
will be grayed out, and cannot be edited:
In the following use case, a user logs in with Designer privileges to provision a Single Instance Database on a Linux host. The Designer updates most of the values prompted for in the wizard and locks them, because he or she does not want other users, such as Operators, to have edit privileges on them. Some of the fields, such as adding targets and some additional configuration details, are not locked. The Deployment Procedure is then saved with a unique procedure name, but not submitted. A user with Operator privileges logs in and runs the saved procedure after updating all the editable fields, such as adding targets and additional configuration details.
Broadly, it is a two-step process as follows:
• Step 1: Saving a Single Instance Database Deployment Procedure with Lock Downs
1. From the Enterprise menu, select Provisioning and Patching, then select
Database Provisioning.
2. On the Database Provisioning page, select Provision Oracle Database, and click
Launch.
3. In the Select Hosts page, in the Select hosts section, click Add to select the
destination host where you want to deploy and configure the software.
If you want to use a provisioning profile for the deployment, choose Select a
Provisioning Profile and then select the profile with previously saved
configuration parameters.
In the Select Tasks to Perform section, do the following and lock the values:
• Select Create a New Database to create a new database and configure it after
installing the standalone Oracle Database
• On the Configure page, click Setup Hosts. On the Operating System Users
page, specify the operating system user for the Oracle Home for the database.
For Oracle Home User for the database, select the Normal User and Privileged
User to be added to the OS group, and lock the values. Click Next to proceed.
On the Specify Operating System Groups page, specify the OS Groups to use
for operating system authentication and lock the values, as shown in the
following graphic.
• On the Configure page, click Create Databases; the following screen appears.
Update only the mandatory fields, and click Next to proceed. Do not lock any
of the values in this wizard:
6. On the Review page, review the details you have provided for the deployment
procedure. Click Save to save the deployment procedure with a unique name, for
example prov_db_template, and then click Cancel. The Procedure Library page
appears with the saved procedure.
• Step 2: Launching the Saved Single Instance Database Deployment Procedure with
Lock Downs
1. In Cloud Control, from the Enterprise menu, select Provisioning and Patching,
then select Procedure Library.
2. On the Procedure Library page, select the saved procedure prov_db_template, and
click Launch.
3. On the Select Hosts page, in the Select hosts section, click Add to select the
destination host where you want to deploy and configure the software, and then
click Next.
• On the Configure page, click Setup Hosts. Since the values here are locked by
the designer, you will not be able to edit them. Click Next to come back to the
Configure page.
• On the Configure page, click Deploy Software. Since the values here are locked
by the designer, you will not be able to edit them. Click Next to come back to
the Configure page.
• On the Configure page, click Create Databases. The following screen appears.
Update all the fields, and click Next to proceed.
For information about updating the Creating Database wizard, see Provisioning
and Creating Oracle Databases.
6. On the Review page, review the details you have provided for the deployment
procedure and if you are satisfied with the details, then click Finish to run the
deployment procedure according to the schedule set.
51.6.3 Step 3: Launching and Running the Saved User Defined Deployment Procedure
Log in to Enterprise Manager Cloud Control with Operator privileges to run the saved
UDDP. To do so, follow these steps:
1. In Cloud Control, from the Enterprise menu, select Provisioning and Patching,
then select Procedure Library.
2. On the Provisioning page, select the saved UDDP configuration template that you
saved as a part of the previous step, and click Launch.
Note:
While creating the UDDP template, if you have locked any of the values, then
they will appear greyed out since they are read-only values that cannot be
edited now.
3. On the Select Targets page, select the target list from the drop down menu, and
click Add to populate the target list. Click Next.
4. If you declared variables that you did not define during the procedure creation,
then you must provide the details on the Set Variable page.
If you have declared a Software Library Entity variable, then you can search for and
select the desired entity from Software Library. Once the value is populated, you
can also choose to lock this value so that any other Operator who does not have
privileges on your procedure cannot update it. For more
information on the different types of variables, see Procedure Variables. Click Next.
5. On the Set Credentials page, you need to set the credentials for the target host
machines that you have included as a part of the host_target_list variable.
Click Next.
6. On the Set Schedule and Notification page, you can schedule the job to run
immediately or at a later preferred time.
7. Click Submit, and provide a unique Submission Name for your job.
To track the status of the procedure that you submitted, follow these steps:
1. In Cloud Control, from the Enterprise menu, select Provisioning and Patching,
then select Procedure Activity.
2. On the Procedure Activity page, click the job that you submitted.
3. The new Instance Execution page for the job is displayed, which gives you
information about the success or failure of your job on all the targets.
For more information about the new Instance Execution page, see Overview of the
Provisioning Page.
• Comparison Between the Existing Design and the New Design for Procedure
Instance Execution Page
• Creating an Incident
51.7.1 Comparison Between the Existing Design and the New Design for Procedure
Instance Execution Page
The current Procedure Activity page provides the status of all the steps executed in a
deployment procedure. This page also gives you information of the failed step and the
necessary action to be taken to rectify it.
Before you understand the new Procedure Activity page, take a moment to review the
challenges you might be facing while using the current Procedure Activity Page:
Table 51-6 Comparison Between the Existing Procedure Activity Page and the New Procedure
Activity Page.
Multiple Selects. Existing page: multiple selects are not supported. New page: multiple
selects are possible from a single page; for example, if you want to select only the
failed steps, you can do so using the new design.
Target-Centric Design. Existing page: the step-centric approach restricts your access to
all the targets from a single screen, which means that you could drill down to only one
failed step at a time and had to repeat the whole procedure for the other failed steps.
New page: the target-centric design, with the introduction of filters, makes it easy to
analyze all failed steps from the same page and perform the required action on each step.
Step Output. Existing page: the step-centric design requires traversing through a number
of pages to drill down to the actual step. New page: the target-centric design allows you
to view all the step details from the same page.
Detailed Output. Existing page: detailed output for a step was not available; you had to
download the entire log. New page: Detailed Output is a new option available at the step
level, which captures only the log information pertaining to the selected step, making it
easy to view and debug the step in case of a failure.
Incident Creation. Existing page: incident creation was not available. New page: Incident
Creation is a new feature, introduced at the procedure level, which enables you to create
an incident for the execution that can later be used to debug the procedure in case of a
failure.
Cloud Control addresses the challenges of the existing Procedure Activity page with
its much-improved target-centric procedure management solution that allows access
to all the targets and steps from one single page with maximum ease and minimum
time. The new Procedure Activity page offers the following benefits
Note:
Starting with Enterprise Manager 12.1.0.3 (Patch Set 2), the following
enhancements have been made to the Procedure Execution page:
• More filters have been introduced, and filtering of steps is now possible at
the table level instead of through a submenu in the View menu. For details, refer to
point 6 in Figure 51-2.
• By default, the step details are now available in a tab layout. However, an
option is still available to switch to the stack view. The benefits of using the tab
layout are:
a. All the log details are displayed on the screen itself.
b. There is a provision to download the log files.
c. You can click the Job Summary link to get more information about the
underlying job.
Note:
For information about the tasks that can be performed from the Procedure
Instance Execution Page, refer to Procedure Instance Execution Page.
1. Breadcrumb Trail
2. View Data
Action    Description
Debug    Debugs the errors in the procedure. This is a one-time action, which means that
the menu option is disabled after it is used the first time.
Retry    Executes all the failed steps and phases in the deployment procedure instance
once again.
• You can view the log file details on the same page.
• You can access the Job details page to get more information about the underlying
job.
Action    Description
Ignore    Ignores the failure of a step, and continues with the other steps in the
deployment procedure.
Update and Retry    Enables you to edit the step, and then executes the step when
submitted.
To investigate a failed step for a single target or a set of targets, follow these steps:
1. In Cloud Control, from the Enterprise menu, select Provisioning and Patching,
then select Procedure Activity.
2. On the Procedure Activity page, click the procedure name to select the procedure.
3. In the Procedure Steps section, from the Show menu, select Failed Steps.
All the steps that have failed are displayed in the Procedure Steps section. You can
now select the steps that you want to retry, ignore, or update, and the
corresponding details are displayed in the Step Details section.
To retry a failed step, follow these steps:
1. In Cloud Control, from the Enterprise menu, select Provisioning and Patching,
then select Procedure Activity.
2. On the Procedure Activity page, click the procedure name to select the procedure.
3. Select the failed step from the Procedure Steps section. For information on selecting
failed steps, see Investigating a Failed Step for a Single or a Set of Targets.
The details of the step are displayed in the Step Details section.
4. In the Step Details section, from the Actions menu, click Retry. To make changes to
the step, select the Update and Retry option.
5. In the Retry confirmation dialog box, click OK to run the step again.
To create an incident for the procedure execution, follow these steps:
1. In Cloud Control, from the Enterprise menu, select Provisioning and Patching,
then select Procedure Activity.
2. On the Procedure Activity page, click the procedure name to select the procedure.
3. In the Procedure Instance Execution page, from the Procedure Actions menu, select
Incident.
4. In the incident confirmation dialog box, click OK to create an incident for your
execution.
A confirmation dialog box appears once the incident is created. For more
information about creating, packaging, and uploading an incident to an SR, see
Oracle Database Administrator's Guide.
To download the output of a step, follow these steps:
1. In Cloud Control, from the Enterprise menu, select Provisioning and Patching,
then select Procedure Activity.
2. On the Procedure Activity page, click the procedure name to select the procedure.
3. In the Procedure Steps section, select a step. Click Download in the step details
section, as shown in the following graphic to download the step output:
To view the summary of the job underlying a step, follow these steps:
1. In Cloud Control, from the Enterprise menu, select Provisioning and Patching,
then select Procedure Activity.
2. On the Procedure Activity page, click the procedure name to select the procedure.
3. In the Procedure Steps section, select a step. Click Job Summary link available in
the step details section to get more information about the underlying job.
Type 1: Editing Custom Deployment Procedures
You can edit an existing custom Deployment Procedure that is offered by Cloud Control
to add new phases and steps. However, for patching, the steps that can be added are
restricted to either a Directive step or a Host command step.
You can perform the following tasks:
• Add your own phases and steps to the pre-defined blocks of the procedure structure.
• Enable and disable phases and steps
• Change privilege levels
• Change error handling modes
• Enable e-mail notifications
Note: You cannot edit an Oracle-owned deployment procedure. To do so, you must clone
the Oracle-owned procedure using the Create Like functionality, and then edit the copy
to include your changes.
Type 2: Creating a User Defined Deployment Procedure
You can create your own Deployment Procedure with new steps, phases, privilege levels,
and so on.
You can perform the following tasks:
• Add your own phases and steps to the pre-defined Default phase of the procedure
structure.
• Enable and disable phases and steps
• Change privilege levels
• Change error handling modes
• Enable e-mail notifications
• Delete phases and steps
Note: For steps to create a User Defined Deployment Procedure, see Creating, Saving,
and Launching User Defined Deployment Procedure (UDDP).
The following graphic shows how you can use the Customizing Deployment
Procedure page to create a copy of the default Deployment Procedure that is offered
by Cloud Control. You can then add new steps and phases or edit the existing steps
and phases in the copy to customize your procedure.
For more information on adding steps and phases, see Customizing a Deployment
Procedure.
Note:
If a step is added outside a phase, then the type of step that can be added is
restricted to a Job Step or a Manual Step. You cannot add other steps outside
a phase. However, within a phase, all the steps discussed in this section can
be added.
This section explains how you can edit different types of phases or steps to a
Deployment Procedure. In particular, it covers the following:
1. From the Enterprise menu, select Provisioning and Patching and then select
Procedure Library.
2. On the Provisioning page, in the Procedure Library tab, from the menu, select
Create New, then click Go.
a. On the Create page, the general information of the phase is displayed. You
can change it if you want, and click Next.
b. On the Select Target List page, select a target list to indicate the type of targets
on which the new phase should run.
All the target lists declared while creating the procedure are listed in the drop-down
menu; select the target list to use for this phase. The actual targets can
be chosen when the procedure is being launched.
c. On the Review page, review the information you have provided for creating a
new phase, and click Finish. The changes to the phase are saved.
2. On the Select Type page, you can view the job type details.
3. On the Map Parameters page, you can update the variable values or set a new one.
Additionally, you can also update the credentials to be used for the target list.
2. On the Select Directive page, you can select one of the following options:
- Select New Directive: This option lists all the directives available in Software
Library; select a directive from the list that you want to run on the targets. Provide
the necessary values in the Select Directive section to narrow down the search results.
- Retain Directive Selection: This option allows you to use the same directive that
you selected while creating this step.
• By default, the Run Directive and Perform Cleanup options are enabled to run
the script and remove the files after the step has run. You can disable these options
if you want.
• In the Directive Properties section, specify values for the properties associated
with the selected directive. You have the option of providing or not providing
the property values at this stage. If you do not provide the property values now,
then they are prompted at the time of launching the procedure.
• In the Credentials section, set the target list to be applied for this step.
• In the Time limit properties section, you can update the maximum time, in seconds,
allowed for an operation to complete.
1. On the Edit page, review the general information about the step. If you want to
update the values, see the description of the general information about the step in
Table 52-1.
2. On the Select Component page, you can select one of the following options:
- Retain Selection: This option allows you to use the same component that you
selected while creating this step.
- Select New Component: This option lists all the components available in
Software Library; select a component from the list that you want to run on the
targets. Provide the necessary values in the Select Component section to narrow down
the search results.
If you have set a Software Library variable in the Procedure Variables tab, you may
additionally notice these two options:
- Select New Software Library Entity Variable: Select the Software Library
variable that you declared while creating the procedure from the Software Library
Entity Variable drop-down menu. This variable behaves as a placeholder and
gives you the flexibility of binding it with the components dynamically while
launching the procedure. Essentially, you can use this variable to select certain
entities which do not need any parameters.
For example, a user-defined script, such as a Perl script that prints the current
directory location, does not need any parameters to be passed.
- Select New Software Library Entity Variable with Component Properties: This
option allows you to bind the Software Library variable with the components that
are available in Software Library. Ensure that you choose a component whose
properties (signature) matches with the entity declared.
3. On the Select Directive page, you can select one of the following options:
- Retain Software Library Entity Variable Selection: This option allows you to use
the same directive that you selected while creating this step.
- Select New Directive: This option lists all the directives available in Software
Library; select a directive from the list that you want to run on the targets. Provide
the necessary values in the Select Directive section to narrow down the search results.
If you have set a Software Library variable in the Procedure Variables tab, you may
additionally notice these two options:
- Select New Software Library Entity Variable: Select the Software Library
variable that you declared while creating the procedure from the Software Library
Entity Variable drop-down menu. This variable behaves as a placeholder and
gives you the flexibility of binding it with the directives dynamically while
launching the procedure. Essentially, you can use this variable to select certain
entities which do not need any parameters.
For example, a user-defined script, such as a Perl script that prints the current
directory location, does not need any parameters to be passed.
- Select New Software Library Entity Variable with Directive Properties: This
option allows you to bind a Software Library entity variable with directives that are
available in Software Library. Ensure that you choose a directive whose properties
(signature) matches with the entity declared.
• By default, the Run Directive and Perform Cleanup options are enabled to run
the script and remove the files after the step has run. You can disable these options
if you want.
• In the Directive Properties section, specify values for the properties associated
with the selected directive. You have the option of providing or not providing
the property values at this stage. If you do not provide the property values
now, then they are prompted at the time of launching the procedure.
• In the Credentials section, set the target list to be applied for this step.
• In the Time limit properties section, you can update the maximum time, in seconds,
allowed for an operation to complete. If there are two options, a radio button is
displayed at interview time. If there are more than two options, then a menu is
displayed.
1. On the Edit page, review the general information about the step. If you want to
update the values, see the description of the general information about the step in
Table 52-1.
2. On the Map Properties page, select the Source Target from which you want to
transfer files, the source target path, the Target Destination for file transfer and the
destination path. Specify the Source and Destination Credential Usage, whether
Host or Privileged Host credentials. Click Next.
If you select the Transfer all the files in this path option, then all the files in the source
path are transferred. If you uncheck this option, then the Source File Name field
becomes mandatory.
1. On the Edit page, review the general information about the step. If you want to
update the values, see the description of the general information about the step in
Table 52-1.
2. On the Enter Command page, specify the command or script, which you want to
run on the target, and the privilege to run it.
To run the host command as a script, select Script from the Command Type list.
Specify the shell that can interpret the script. The script is passed as standard input
to the specified interpreter.
To run the host command as a command line, select Single Operation from the
Command Type list. Specify the text to be used as a command line.
No assumptions are made about the shell to interpret this command line. The first
entry in the command line is assumed to be the process to spawn, and the rest of the
command line is passed as arguments to this process. Therefore, a command line
of ls -a /tmp spawns a process of "ls" (from the current path; also depends on
the Oracle Management Agent) and passes "-a" as the first argument and then "/
tmp" as the second argument to this process.
Note: The command line mode assumes that the first part of the command line is
the process to be spawned. Therefore, shell internals and the commands that rely
on the PATH environment variable for resolution are not recognized. If any such
commands need to be used, then you need to prepend the shell that interprets the
command line.
For example, the command cd /tmp && rm -rf x expands to "cd" as a process
and then "/tmp, &&, rm, -rf, x" as arguments. To fix this, change the command line
to /bin/csh -c "cd /tmp && rm -rf x".
Another example, the command export PATH=/opt:${PATH}; myopt -
install expands to "export" as a process and then "PATH=/opt:${PATH};,
myopt, -install" as arguments. To fix this, use /bin/sh -c "export PATH=/
opt:${PATH}; myopt -install".
3. In the Time limit properties section, you can set the maximum time, in seconds, that
a script is allowed to run. For example, if you set this value to 75 seconds and the
script exceeds that time when it runs, then this step is skipped.
1. On the Edit page, review the general information about the step. If you want to
update the values, see the description of the general information about the step in
Table 52-1.
2. On the Enter Instructions page, provide a message to inform the operator about a
manual step. For example, if you want to instruct the operator to log in to a system
and update the kernel parameters, then specify the following:
You have been logged out of the system. Log in and update the Kernel parameters.
Insert Location If you want to insert the custom step after the step you selected, then select
After <step name>. Otherwise, select Before <step>.
Error Handling Select the error handling mode you want to set for the custom phase. Every
step in a Deployment Procedure is preconfigured with an error handling mode
that indicates how the Deployment Procedure will behave when the phase or
step encounters an error. The error handling modes offered by Cloud Control
are:
- Inherit - Inherits the error handling mode that was set for the enclosing
phase. (When set for a step that is outside a phase, it inherits the error
handling mode from the Deployment Procedure).
- Stop On Error - Stops when an error is encountered. Deployment Procedure
does not proceed to the next step until you correct the errors or override them.
- Continue On Error - Continues even when an error is encountered.
- Skip Target - Ignores the failed target on the list and continues with other
targets.
Example:
In the following example, isPingSuccessful is the DP variable, and true is the value.
print '$$$--*$$';
print '<commandBlock>';
print '<executeProc name="MGMT_PAF_UTL.UPDATE_RUNTIME_DATA"> ';
print '<scalar>%job_execution_id%</scalar> ';
print '<scalar>${data.isPingSuccessful}</scalar> ';
print '<scalar>true</scalar> ';
print '</executeProc>';
print '</commandBlock>';
print '$$$*--$$';
Any language (shell, Perl, and so on) can be used to write the script that will run on
the host. As long as the above command block gets printed on the output stream,
procedure variables will be updated.
This feature enables you to update the values of procedure declared variables at
runtime. The command block feature allows some automatic post-processing of the
output generated from a remote machine (agent), which is then uploaded to the OMS,
and processed by Job Systems. In PAF, you can use this feature to define steps in a
procedure with command block formatted output. Within the command block, SQL
procedure can be invoked to assign values to procedure variables.
Note:
Only a directive step, or a component step, and a job step can use the
command block to update the values of a procedure declared variables at
runtime.
The following example demonstrates how a command that is run on the target can
update the variable in a procedure at runtime. In this example, a ping command is run
on a host using the directive step in a procedure. If the ping command succeeds, then
the directive sets the value of the procedure variable to true. If not, the directive sets
the value of the procedure variable to false. (A minimal sketch of such a script appears
after the following table.)
Step Details
Step 1 Create a Perl File called TestPingAndSetDPVariable.pl on your local
host. For details, see Step 1: Creating a Perl Script to Assign Values to
Deployment Procedure Variables at Runtime.
Step 2 Upload the Perl script to the Software Library. For a detailed list of steps, see
Step 2: Uploading TestPingAndDPvariable.pl to Software Library.
Step 4 Run the Procedure. For details, see Step 4: Launching the Deployment
Procedure, and Providing the Variable Values at Runtime
Step 5 Verify the variable details. For details, see Step 5: Verifying the Deployment
Procedure Variable Values
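The TestPingAndSetDPVariable.pl script itself is not listed in this guide. The following
is a minimal sketch of what such a directive script might look like, assuming that the
host to ping is passed as the first command line argument and that the ping utility is
available on the target; the command block it prints is the one shown in the earlier
example.
#!/usr/bin/perl
# Hypothetical sketch: ping a host and assign the result to the
# isPingSuccessful procedure variable through a command block.
my $host = $ARGV[0];   # assumed: host name passed as the first argument
my $status = system("ping -c 3 $host > /dev/null 2>&1");
my $value = ($status == 0) ? "true" : "false";
# Print the command block so that the job system assigns $value to the variable.
print '$$$--*$$';
print '<commandBlock>';
print '<executeProc name="MGMT_PAF_UTL.UPDATE_RUNTIME_DATA"> ';
print '<scalar>%job_execution_id%</scalar> ';
print '<scalar>${data.isPingSuccessful}</scalar> ';
print "<scalar>$value</scalar> ";
print '</executeProc>';
print '</commandBlock>';
print '$$$*--$$';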
1. In Cloud Control, from the Enterprise menu, select Provisioning and Patching, then
click Software Library.
2. On the Software Library home page, right-click any User-owned directive folder.
From the context menu, select Create Entity, then click Directive. The Create
Directive page is displayed.
3. On the Describe page, enter a unique name for the directive. For example,
TestPingAndSetDPVariable. Click Next.
4. On the Configure page, in the Configure Properties section, from the Shell Type
list, select Perl. Click Next.
5. On the Select Files page, in the Specify Destination section, select any Software
Library Upload Location for uploading the specified files.
In the Specify Source section, select File Source as Local Machine. Click Add, and
select the Perl script TestPingAndSetDPVariable.pl from your local host. Click
Next.
1. In Cloud Control, from the Enterprise menu, select Provisioning and Patching, then
click Procedure Library.
2. On the Procedure Library page, from the menu, select Create New, then click Go.
3. On the General Information page, enter a unique name for your procedure.
4. On the Create New Procedure page, click Procedure Variables. Click Add Row,
and enter the following details:
5. Click Procedure Steps tab. Select the Default phase, then click Insert to add the
following steps to the default phase:
a. Add a Host Command step to print the value of the variable before running
the procedure. Here are the steps: On the Create page, select Host Command
from the menu, enter a unique name for the step, then click Next. On the
Enter Command page, enter the value: echo Before String is: "$
{data.isPingSuccessful}".
Note: The expression to access the DP variable isPingSuccessful in DP
code is ${data.isPingSuccessful}.
Click Next. Review the details, click Finish.
b. Add a Directive step called ping to call the directive that was uploaded to
Software Library. To do so, on the Create page, enter a unique name for the
step, then select Directive from the Type menu. On the Select Directive page,
search with the string %Ping%. Select the Directive that you uploaded, and click
Next. You can leave all the default values as is on Map Properties page, and
Review page, then click Finish.
c. Add a Host Command step to print the value of the variable after running the
procedure. Here are the steps: On the Create page, select Host Command
from the menu, enter a unique name for the step, then click Next. On the
Enter Command page, enter the value: echo After String is: "$
{data.isPingSuccessful}". Click Next. Review the details, click Finish.
52.3.4 Step 4: Launching the Deployment Procedure, and Providing the Variable Values
at Runtime
Run the procedure as follows:
1. In Cloud Control, from the Enterprise menu, select Provisioning and Patching, then
click Procedure Library.
2. On the Procedure Library page, select the procedure you created in the previous
step, then click Launch. The Procedure Interview page is displayed.
3. On the Select Targets page, select a target on which you want to run this procedure.
Click Next.
4. On the Set variables Page, for the isPingSuccessful variable, enter the
following value:
Click Next.
52.3.5 Step 5: Verifying the Deployment Procedure Variable Values
To verify the values of the procedure variable, follow these steps:
1. On the Procedure Activity page, select the procedure instance that you submitted.
2. On the Procedure Instance Execution page, select the Step Print Before. This step
prints the value of the variable entered during submission.
3. On the Procedure Instance Execution page, select the Ping step to view the output
of the directive that ran on the target.
Note:
The Ping step ran, and the ping command was successful. However, the print
statements printing the command block string in the Perl script are not
displayed. This is because, these special command block strings were parsed
by job system and used to process the procedure variable assignment logic.
4. On the Procedure Instance Execution page, select the Step Print After. This step prints
the value of the variable as true; this value was set by the directive that ran on the host.
This way you can use a directive step to manipulate procedure variables at runtime
depending on the output returned by some other command that ran on the target.
• Inherit - Inherits the error handling mode that was set for the enclosing phase.
(When set for a step that is outside a phase, it inherits the error handling mode
from the Deployment Procedure).
• Skip Target - Ignores the failed target on the list and continues with other targets.
For more information about steps, see Phases and Steps.
To change the error handling modes, follow these steps:
1. In Cloud Control, from the Enterprise menu, select Provisioning and Patching,
then select Procedure Library.
2. On the Procedure Library page, select the procedure whose error handling mode you
want to change, and click Create Like.
3. On the Create Like Procedure page, click the Procedure Steps tab, and then select a
phase or step, and change the Error Handling Mode.
Once the mode is selected from the list, Cloud Control automatically refreshes the
page with the newly selected mode for that phase or step.
The following is an example that illustrates how you customize the Oracle Database
Provisioning Deployment Procedure to change the error handling mode of the
Destination User Permission Checks phase:
Note:
As a prerequisite, you are expected to have configured the Mail Server and set
up the e-mail address in Cloud Control.
To configure the Mail Server, follow these steps:
1. In Cloud Control, from the Setup menu, select Notification, then select
Notifications Method.
2. On the Setup page, in the Mail Server section, enter one or more outgoing mail
servers and optional port number (if left blank, port 25 is used by default).
3. Enter the mail server authentication credentials like UserName, Password. The
UserName and Password fields allow you to specify a single set of authentication
credentials to be used for all mail servers. If no mail server authentication is
required, leave the User Name, Password (and Confirm Password) fields blank.
4. In the Identify Sender As field, enter the name that you want to appear as the
sender of the notification messages.
5. Enter the e-mail address you want to use to send your e-mail notifications in the
Sender's E-mail Address. When using incident rules, any e-mail delivery problems
will be automatically sent to the Sender's E-mail Address.
Note:
The e-mail address you specify on this page is not the e-mail address to which
the notification is sent. You will have to specify the e-mail address (where
notifications will be sent) from the Preferences General page. For information
on specifying e-mail addresses for e-mail notification, see Adding E-mail
Addresses for Enterprise Manager Notifications
6. The Use Secure Connection option allows you to choose the SMTP encryption
method to be used. Three options are provided:
For example,
In the following example, two mail servers are specified--smtp01.example.com on
port 587 and smtp02.example.com on port 25 (default port). A single administrator
account (myadmin) is used for both servers.
Outgoing Mail (SMTP) Server: smtp01.example.com:587, smtp02.example.com
User Name: myadmin
Password: ********
Confirm Password: ********
1. In Cloud Control, from the Setup menu, select Notification, then select My
Notifications Schedule.
3. If no previous e-mail addresses have been defined for the administrator, a message
is displayed prompting you to define e-mail addresses for the administrator. Click
Click here to set e-mail addresses. The General page appears.
4. Click Add Another Row to create a new e-mail entry field in the E-mail Addresses
table.
5. Specify the e-mail address associated with your Enterprise Manager account. All e-
mail notifications you receive from Enterprise Manager will be sent to the e-mail
addresses you specify. For example, user1@example.com,
user2@example.com, and so on.
6. If you need to add additional e-mail addresses, click Add Another Row, enter the
e-mail address, and select the format.
7. You can test if the e-mail address is properly configured to receive e-mails from
Enterprise Manager by selecting it and clicking Test.
Once you have defined your e-mail notification addresses, they will be shown when
you define a notification schedule. For example, user1@example.com,
user2@example.com, user3@example.com. You can choose to use one or more of
these e-mail addresses to which e-mail notifications for the Incident Rule will be sent.
52.6.1 Prerequisites for Copying Customized Provisioning Entities from One Enterprise
Manager Site to Another
Ensure that:
• The destination site has a setup similar to the source site; that is, both have the same
version of Enterprise Manager installed.
52.6.2 Copying Customized Provisioning Entities from One Enterprise Manager Site to
Another
Follow these steps:
2. For importing the provisioning entities, you can use emctl partool as follows:
emctl partool <deploy|view> -parFile <file> -force(optional)
emctl partool <deploy|view> -parFile <file> -force(optional) -ssPasswd
<password>
emctl partool <deploy|view> -parDir <dir> -force(optional)
3. Alternatively, you can import the PAR file from Cloud Control as explained in the
following steps.
a. From the Enterprise menu, select Provisioning and Patching and then select
Procedure Library.
b. On the Procedure Library page, from the list, select Import and click Go.
• Upload From Local Machine, if the PAR files are stored on your local machine.
Click Browse and select the PAR File to Upload. Click Import.
• Upload From Management Agent Machine, if you have stored the PAR files on
the Management Agent machine. Click Target and select the Host. Click Select
File and select the PAR file. Click Import.
1. In Cloud Control, from the Enterprise menu, select Provisioning and Patching,
then select Software Library.
2. On the Software Library page, from the table, expand the Software Library and
the other levels under this category to reach the directive you want to copy.
For example, if you want to copy the Apply Patch directive of a patching
operation, then expand Software Library and then expand Patching. From this
level, expand Common, and then All, and finally Generic. Under Generic, you
should see the directive Apply Patch.
3. Select the directive you want to copy and click Create Like, and store the copy of
the directive in a custom folder called Directive.
b. On the Configure page, click Add to specify the command line arguments to
be passed to the Directive. Set the Shell Type to Perl because you are adding
a Perl script. If the script is neither Perl nor Bash, then set it to Defined in
Script.
Each entry represents a single command line argument. Each argument may
include a variable to be set later, a prefix and suffix. The prefix and suffix text
are appended before and after the property value to produce the command
line argument.
Repeat this step to add all the command line arguments.
c. On the Select Files page, select Upload Files. In the Specify Source section,
select Local Machine, and click Add to select the modified Perl file.
1. In Cloud Control, from the Enterprise menu, select Provisioning and Patching,
then select Procedure Library.
2. On the Provisioning page, select the Deployment Procedure for which you want
to use this new directive, and click Create Like.
3. On the Create Like Procedure page, select Procedure Steps tab, and do the
following:
a. From the table that lists all the steps within that Deployment Procedure, select
the directive step with which you want to associate the new directive, and
click Edit Step.
ii. On the Select Directive page, select Select New Directive. Then search
and select the new directive you created, and click Next.
iii. On the Map Properties page, specify the values for the directive
properties, and click Next.
c. Click Save.
1. In Cloud Control, from the Enterprise menu, select Provisioning and Patching,
then select Procedure Library.
2. On the Provisioning page, select the customized Deployment Procedure and click
Schedule Deployment.
• Troubleshooting Issues
A
Using Enterprise Manager Command Line
Interface
This chapter explains how to use Enterprise Manager Command Line Interface (EM
CLI) to deploy patches using Patch Plans, provision procedures, and perform some of
the Software Library tasks.
Note:
For information about Enterprise Manager 13c verb usage, syntax, and
examples, see Oracle Enterprise Manager Command Line Interface
• Overview
• Prerequisites
A.1 Overview
Enterprise Manager Command Line Interface (EM CLI) is a command line utility
available for power users in Oracle Enterprise Manager Cloud Control (Cloud
Control) that enables you to perform most of the console-based operations. It enables
you to access Cloud Control functionality from text-based consoles (shells and
command windows) for a variety of operating systems.
Using EM CLI you can:
• Use the functions available with EM CLI, called verbs, to build your custom scripts
in various programming environments such as operating system shells, Perl, Python,
and so on (see the sketch after this list). This in turn allows you to closely integrate
Oracle Enterprise Manager functionality with your own enterprise business processes.
• Carry out operations with the same security and confidentiality as the Cloud
Control console.
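For example, a minimal sketch of driving EM CLI from an operating system shell
script; the user name and the verbs chosen here are illustrative assumptions:
#!/bin/sh
# Log in to EM CLI (prompts for the password), synchronize the client
# with the OMS, list Database Provisioning procedures, and log out.
emcli login -username=sysman
emcli sync
emcli get_procedures -type=DBPROV
emcli logout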
A.2 Prerequisites
Before using EM CLI, ensure that you meet the following requirements:
• EM CLI client must be set up. To do so, see Oracle Enterprise Manager Command Line
Interface.
• If you are patching in the offline mode, with no internet connectivity, then ensure
that the patches are available in the Software Library before running the EM CLI
commands.
Note:
You can use either the Enterprise Manager UI or the command line utility (EM
CLI) to retrieve the folder id and the entity revision id. To do so, and for a
comprehensive example on how to effectively use the EM CLI verbs to
perform a number of Software Library tasks listed in the following table, see
the workflow example Creating a New Generic Component by Associating a
Zip File.
Following are some of the important EM CLI verbs used to perform some Software
Library actions:
Table A-3 Software Library EM CLI Verbs and Their Usage
Note:
• The -swlib argument works for cloning only Oracle Database 9i Release
2. Do NOT use this argument for later releases.
1. To retrieve the GUID or the Name of the procedure, run the following command:
emcli get_procedures
[-type={procedure type}]
Example:
./emcli get_procedures -type=DBPROV
Output:
B3FCE84B1ED96791E040578CD7810EC5, DBPROV,
Prov_112_db_using_SH_locked_acc_without_env_shift_ssubbura11,
Prov_112_db_using_SH_locked_acc_without_env_shift_ssubbura11, 1.0, SSUBBURA1,
SIHA_SIDB_PROC
B35E10B1F430B4EEE040578CD78179DC, DBPROV, DBREPLAYCLIENTDP_NG, Provision Oracle
Database Client, 6.1, ORACLE
B35E10B1F427B4EEE040578CD78179DC, DBPROV, SIHA_SIDB_PROC, Provision Oracle
Database, 1.0, ORACLE
2. Use the GUID or the name in the following command to generate a template
properties file. Use the following command when you are running the Deployment
Procedure for the first time, or when you do not have too many variables in your
procedure to update:
emcli describe_procedure_input
[-procedure={procedure GUID}]
[-name={procedure name or procedure configuration}]
[-owner={owner of the procedure or procedure configuration}]
[-parent_proc={procedure of the procedure configuration; this only applies to
describing a procedure configuration with the same name}]
The following examples describe how to use the procedure GUID to generate the
properties file template:
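For example, a sketch that uses the GUID returned above for the Provision Oracle
Database procedure and redirects the output to a hypothetical file named
prov_db.properties:
./emcli describe_procedure_input -procedure=B35E10B1F427B4EEE040578CD78179DC > prov_db.properties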
This EM CLI verb describes the input data of a deployment procedure or a procedure
configuration in a name-value pair format, which is also called the properties file
format. The advantage of this name-value file format for a procedure is that it is flexible
enough to accept multiple destination targets.
Note:
For example properties file, see sections Provisioning Oracle Database
Software or Provisioning Oracle WebLogic Server.
Step 3: Submitting the Procedure With The Updated Properties File as Input
Once the properties file is ready with the correct name-value pair required to run the
Deployment procedure, you must use the EM CLI verb submit_procedure, which
accepts the edited properties file as the input.
emcli submit_procedure
[-name={name of the procedure}]
[-owner={owner of the procedure}]
Starting with Cloud Control 12c, you can submit the procedure either using the
procedure GUID or using the procedure name/owner pair, as described in the
following example:
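For example, a sketch that submits the procedure by GUID, assuming the edited
properties file is named prov_db.properties:
./emcli submit_procedure -procedure=B35E10B1F427B4EEE040578CD78179DC -input_file=data:prov_db.properties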
Output:
Verifying parameters ...
B35E10B1F427B4EEE040578CD78179DC
Deployment procedure submitted successfully
Note: The instanceId is B35E10B1F427B4EEE040578CD78179F1
This verb functions in a non-waiting mode, which means it submits the procedure for
execution and returns without waiting for it to complete. The output of this verb
indicates if the submission of the procedure was successful or if any errors were
encountered. A successful submission displays the Instance GUID as the output.
Example:
Output:
B35E10B1F427B4EEE040578CD78179F1, WEBLOGIC_WSM, DANS_SCALEUP_WSM12, FAILED
1. To retrieve the GUID or the Name of the procedure, run the following command:
emcli get_procedures
[-type={procedure type}]
[-parent_proc={procedure associated with the procedure configuration}]
Example:
./emcli get_procedures -parent_proc=SIHA_SIDB_PROC
Output:
B3FCE84B1ED96791E040578CD7810EC5, DBPROV,
Prov_112_db_using_SH_locked_acc_without_env_shift_ssubbura11,
Prov_112_db_using_SH_locked_acc_without_env_shift_ssubbura11, 1.0, SSUBBURA1,
SIHA_SIDB_PROC
2. To retrieve the Instance GUID of the submitted procedure, run the following command:
emcli get_instances
[-type={procedure type}]
Example:
./emcli get_instances -type=DBPROV
Output:
B3FE0C8302EA4A4CE040578CD781133C, B3FE0C8302F64A4CE040578CD781133C, DBPROV,
Prov_112_db_using_SH_locked_acc_without_env_shift_ssubbur, Failed
B3FE34D472C00AD9E040578CD781107B, B3FE34D472CC0AD9E040578CD781107B, DBPROV,
Prov_112_db_using_SH_locked_acc_without_env_shift_ssubbura1, Failed
3. Use the Instance ID in the following command to retrieve the input properties file
of the instance:
emcli get_instance_data
[-instance={instance guid}]
[-exec=execution guid]
The following example describes how to use the Instance GUID to retrieve the
input properties file of the instance:
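A sketch, assuming the first GUID in the get_instances output above is the instance
GUID and redirecting the output to a file:
./emcli get_instance_data -instance=B3FE0C8302EA4A4CE040578CD781133C > instanceData.properties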
The main goal of this step is to update the values in the properties file (if required).
To do so, use any editor to open the properties file and enter the updated values
against the names. After updating the required fields, save and close the properties
file.
Example:
vi instanceData.properties
To run the procedures from the command line, you must use the EM CLI verb
submit_procedure, as described in Creating the Properties File to Submit a
Deployment Procedure.
To verify the status of the procedure, see Creating the Properties File to Submit a
Deployment Procedure.
3. Save the Procedure Configuration with the updated Properties file, and the other
attributes like job grants, schedules, and notifications. To do so, see Saving a
Procedure Configuration of a Procedure.
Example:
emcli save_procedure_input -name=procConfiguration -procedure=ComputeStepTest -
input_file=data:/tmp/instanceData.properties -grants="user1:VIEW_JOB;
user2:FULL_JOB" -notification="scheduled, action required, running" -
schedule="start_time:2012/12/25 00:00;tz:American/New York;grace_period:60"
Example:
emcli update_procedure_input -name=procConfiguration -input_file=data:/tmp/
instanceData.properties -grants="user1:VIEW_JOB;user2:FULL_JOB"
-notification="scheduled, action required, running" -schedule="start_time:
2012/12/25 00:00;tz:American/New York;grace_period:60"
For information on the prerequisites for creating a new PDB, view Prerequisites for
Creating a New Pluggable Database. For information on the prerequisites for
creating a PDB by plugging in an unplugged PDB, view Prerequisites for Plugging
In an Unplugged Pluggable Database. For information on the prerequisites for
creating a PDB by cloning an existing PDB, view Prerequisites for Cloning a
Pluggable Database.
$<OMS_HOME>/bin/emcli create_pluggable_database
-cdbTargetName=<Specify the CDB target name for creating new PDB>
-cdbTargetType=<Specify the CDB target type - oracle_database, rac_database>
-cdbHostCreds=<Specify the host credentials on which the CDB target is located>
[-cdbTargetCreds=<Specify the credentials of container database on which the new
PDB will be created.>]
-pdbName=<Specify a name for the new PDB>
[-numOfPdbs=<Specify the number of PDBs to be created>]
For example, you can run the following commands to create new PDBs:
$<OMS_HOME>/bin/emcli create_pluggable_database -cdbTargetName=database -
cdbTargetType=oracle_database -pdbName=pdb -sourceType=UNPLUGGED_PDB -
unpluggedPDBType=ARCHIVE -sourcePDBArchiveLocation=/u01/app/oracle/product/
12.1.0/dbhome_2/assistants/dbca/templates/a.tar.gz
Ensure that you meet the following prerequisites before provisioning a PDB using a
snapshot profile:
• The 12.1.0.6 Enterprise Manager for Oracle Database plug-in, or a higher version,
must be downloaded and deployed in your system.
• The prerequisites for cloning a PDB using the Snap Clone method, (described in
Prerequisites for Cloning a Pluggable Database) must be met.
$<OMS_HOME>/bin/emcli create_pluggable_database
-cdbTargetName=<Specify the CDB target name for creating new PDB>
-cdbTargetType=<Specify the CDB target type - oracle_database, rac_database>
-cdbHostCreds=<Specify the host credentials on which the CDB target is located>
-pdbName=<Specify a name for the new PDB>
-sourceType=CLONE
-sourcePDBName=<Specify the name of the existing PDB that you want to clone>
-sourceCDBCreds=<Specify the credentials of the CDB within which the source PDB
is present>
-useSnapClone
-sourceCDBHostCreds=<Specify the host credentials for the source CDB.>
-mountPointPrefix=<Specify the mount point prefix for the cloned volumes.>
-writableSpace=<Specify the writable space, in GB, for the cloned volumes.>
-privHostCreds=<Specify the privileged host credentials required to mount the
This command clones the source PDB using the Snap Clone feature, and creates a
snapshot profile of the source PDB, which is stored at the specified location in
Software Library.
$<OMS_HOME>/bin/emcli list_swlib_entities
-name="<The name of the snapshot profile>"
-show_entity_rev_id
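For example, assuming a hypothetical snapshot profile named pdb_snapshot_profile:
$<OMS_HOME>/bin/emcli list_swlib_entities -name="pdb_snapshot_profile" -show_entity_rev_id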
$<OMS_HOME>/bin/emcli create_pluggable_database
-cdbTargetName=<Specify the CDB target name for creating new PDB>
-cdbTargetType=<Specify the CDB target type - oracle_database, rac_database>
-cdbHostCreds=<Specify the host credentials on which the CDB target is located>
-pdbName=<Specify a name for the new PDB>
-sourceType=PROFILE
-profileURN=<URN of the snapshot profile that you want to use to provision PDBs>
[-cdbTargetCreds=<Specify the credentials of container database on which the new
PDB will be created.>]
[-numOfPdbs=<Specify the number of PDBs to be created>]
[-pdbAdminCreds=<Name of pdb credentials with admin role>]
[-useOMF=<Specifies that the datafiles can be stored in OMF location>]
[-sameAsSource=<Specifies that the datafiles of new PDB can be stored in the same
location as that of source CDB>]
[-newPDBFileLocation=<Specify the storage location for datafiles of the created
PDB.>]
[-noUserTablespace=<Specifies that the new DEFAULT PDB will not be created with
USERS tablespace.>]
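For example, a sketch that provisions a PDB from a snapshot profile; the CDB target
name and PDB name mirror the earlier example, and the profile URN is the one
returned by list_swlib_entities above:
$<OMS_HOME>/bin/emcli create_pluggable_database -cdbTargetName=database -cdbTargetType=oracle_database -pdbName=pdb -sourceType=PROFILE -profileURN=<URN of the snapshot profile>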
$<OMS_HOME>/bin/emcli migrate_noncdb_to_pdb
-cdbTargetName=<EM CDB target into which the database will be added as PDB>
-cdbTargetType=<EM CDB target type (oracle_database|rac_database)>
-cdbDBCreds=<Named DB credentials of CDB user having sysdba privileges>
-cdbHostCreds=<Named host credentials for Oracle Home owner of CDB>
-migrationMethod=<Migration method to be used (DATAPUMP|PLUG_AS_PDB)>
-noncdbTargetName=<EM non-CDB target to be migrated>
-noncdbTargetType=<EM non-CDB target type (oracle_database|rac_database)>
-noncdbDBCreds=<Named DB credentials for non-CDB user having sysdba privileges>
-noncdbHostCreds=<Named host credentials for Oracle Home owner of non-CDB>
-pdbName=<Name of the PDB to be created on the CDB>
-pdbAdminName=<Username of the PDB administrator to be created>
-pdbAdminPassword=<Password for the PDB administrator>
[-exportDir=<Temporary file system location on the non-CDB host where the
exported files will be stored>]
[-importDir=<Temporary file system location on the CDB host used to stage the
migration metadata and/or datafiles>]
[-useOMF=<Use OMF for datafile location if CDB is OMF enabled (Y|N)>]
[-dataFilesLoc=<Location on the CDB host where datafiles for the newly created DB
will be stored. Disk Group name in case of ASM>]
[-encryptionPwd=<Password to decrypt/encrypt datapump dump file. Mandatory if non-
CDB contains encrypted tablespaces>]
[-cdbWalletPwd=<Wallet password of the CDB. Mandatory if non-CDB contains
encrypted tablespaces>]
[-objectExistsAction=<Action to be taken when the exported object with same name
is found on the newly created PDB (SKIP|REPLACE). Defaulted to SKIP>]
[-precheck=<Perform pre-requisite checks (YES|NO|ONLY). Defaulted to YES>]
[-ignoreWarnings=<Ignore the warnings from precheck (Y|N)>]
For example, you can run the following command to migrate a non-CDB as a PDB:
$<OMS_HOME>/bin/emcli migrate_noncdb_to_pdb -migrationMethod=datapump -
noncdbTargetName=NON_CDB_NAME -noncdbTargetType=oracle_database -
noncdbHostCreds=NON_CDB_HOST_CREDS -noncdbDBCreds=NON_CDB_DB_CREDS -
cdbTargetName=CDB_NAME -cdbTargetType=oracle_database -
cdbHostCreds=CDB_HOST_CREDS -cdbDBCreds=CDB_DB_CREDS -pdbName=NEW_PDB -
pdbAdminName=pdbAdmin -pdbAdminPassword=welcome -precheck=ONLY -ignoreWarnings
For information on the prerequisites for unplugging and dropping a PDB, view
Prerequisites for Unplugging and Dropping a Pluggable Database.
$<OMS_HOME>/bin/emcli unplug_pluggable_database
-cdbTargetName=<Specify the CDB target name from which PDB needs to be unplugged>
-cdbTargetType=<Specify the CDB target type - oracle_database, rac_database>
-cdbHostCreds=<Specify the host credentials on which the CDB target is located>
-cdbTargetCreds=<Specify the credentials of container database on which the PDB
will be unplugged.>
-pdbName=<Specify name of the PDB that needs to be unplugged>
[-unplugPDBToSWLIB=<Specifies that unplugged PDB should be uploaded to Software
Library (SWLIB)>]
[-pdbTemplateNameInSWLIB=<If -unplugPDBToSWLIB, specify the name to be used for
PDB Template component in SWLIB.>]
[-tempStagingLocation=<If -unplugPDBToSWLIB, specify a temporary working
directory for copying staging SWLIB files.>]
-unplugPDBTemplateType=<Specify the PDB template type - ARCHIVE, RMAN, XML.>
[-pdbArchiveLocation=<If -unplugPDBTemplateType=ARCHIVE, this is fully qualified
archive location with file name>]
[-pdbMetadataFile=<If -unplugPDBTemplateType=RMAN or XML, this is fully qualified
path for the PDB metadata file>]
[-pdbDatabackup=<If -unplugPDBTemplateType=RMAN, this is fully qualified path for
the PDB datafile backup>]
For example, you can run the following command to unplug and drop a PDB:
$<OMS_HOME>/bin/emcli unplug_pluggable_database -cdbTargetName=db -
cdbTargetType=oracle_database -cdbHostCreds=HOST_CREDS -cdbTargetCreds=CDB_CREDS -
pdbName=db_pdb -unplugPDBTemplateType=ARCHIVE -pdbArchiveLocation=/u01/app/
unplugged/db_pdb.tar.gz
1. Keep target information, such as target name, target type, target version, release
number, platform, and product, ready.
2. Keep patch information, such as Patch Name (Patch Number), Release ID, Platform ID,
and Language ID, ready.
3. Set at least one of the following credentials on the Oracle home of the target host
machines before beginning the patching process:
• Online Mode: This mode is helpful when you have internet connectivity.
However, to search and download the patches from My Oracle Support, you
need to set the preferred credentials for My Oracle Support.
• Offline Mode: This mode can be used for patching provided you have already
downloaded the patches to the Software Library. You can then search for them
on Software Library.
Case 1: Creating a new properties file for patching targets.
To patch targets using a fresh properties file, follow these steps:
1. Select the targets, and search for the patches that you want to add to the plan.
Case 2: Updating the properties file of an existing patch plan to patch the targets.
To update a properties file retrieved from an existing patch plan, follow these steps:
1. Get the user-editable data for a given patch plan, and save the output as a
properties file.
1. Select the targets that need to be patched. To do so, run the following EM CLI
command:
emcli get_targets
[-targets="[name1:]type1;[name2:]type2;..."]
For example:
emcli get_targets -targets=oracle_emd
Output:
Displays all the Management Agent targets.
1 Up oracle_emd slc01nha.us.example.com:11852
1 Up oracle_emd slc00bng.us.example.com:1833
1 Up oracle_emd adc2101349.us.example.com:1832
2. Search for the patches that you want to apply. To find the relevant patches for
your plan, you either need to use the Patch ID (Basic Search), or use a combination
of Release ID, Platform ID, and Product ID (Advanced Search) and drill down to
the patches required. To do so, run the following EM CLI command:
emcli search_patches
[-swlib]
[-patch_name="patch_name"]
[-product="product id" [-include_all_products_in_family]]
[-release="release id"]
[-platform="platform id" | -language="language id"]
[-type="patch | patchset"]
[-noheader]
[-script | -xml | -format=
[name:<pretty|script|csv>];
[column_separator:"column_sep_string"];
[row_separator:"row_sep_string"];
]
Note:
a. (Basic Search) To search for the patches using the Patch ID, do the following:
emcli search_patches
[-swlib]
[-patch_name="patch_name"]
[-product="product id" [-include_all_products_in_family]]
[-release="release id"]
[-platform="platform id" | -language="language id"]
[-type="patch | patchset"]
[-noheader]
[-script | -xml | -format=
[name:<pretty|script|csv>];
[column_separator:"column_sep_string"];
[row_separator:"row_sep_string"];
]
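For example, a sketch that searches by patch number for the patch shown in the
following output:
emcli search_patches -patch_name=11993573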
Output:
11993573 Agent Plugin PATCH Cloud Control (Agent)
12.1.0.1.0 Linux x86-64 American English General
Enterprise Manager Base Platform - Plugin
b. (Advanced Search) Use the Product ID, Release ID, and Platform ID (or
Language ID) to get the patch details that you want to add to the patch plan.
Example:
To search for patches using a combination of Product ID, Release ID, and
Platform ID (obtained from the earlier steps):
emcli search_patches -product=12383 -release=9800371121010 -platform=226
Output:
13491785 ENTERPRISE MANAGER BASE PLATFORM - AGENT 12.1.0.1.0 BP1 (PORT) Cloud Control (Agent) 12.1.0.1.0 Linux x86-64 American English Recommended Enterprise Manager Base Platform
13481721 WRONG ERROR MESSAGE RETURNED FROM NMO Cloud Control (Agent) 12.1.0.1.0 Linux x86-64 American English General Enterprise Manager Base Platform
3. Create a patch-target map (stored in the properties file) using any editor, and
supply information like Patch ID, Release ID, Platform ID, and Language ID. Here
is a sample properties file:
vi demo.props
patch.0.patch_id=13426630
patch.0.release_id=9800371121010
patch.0.platform_id=2000
patch.0.language_id=0
patch.0.target_name=abc1.example.com:1836
patch.0.target_type=oracle_emd
patch.1.patch_id=13426630
patch.1.release_id=9800371121010
patch.1.platform_id=2000
patch.1.language_id=0
patch.1.target_name=abc2.us.example.com:1839
patch.1.target_type=oracle_emd
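If you are patching many Management Agents, you may prefer to generate this file with a
small shell loop instead of typing each block by hand. The following is a minimal sketch,
not part of the documented procedure; it reuses the sample patch, release, and platform
IDs shown above, and the target names are placeholders:
i=0
for target in abc1.example.com:1836 abc2.us.example.com:1839; do
cat >> demo.props <<EOF
patch.$i.patch_id=13426630
patch.$i.release_id=9800371121010
patch.$i.platform_id=2000
patch.$i.language_id=0
patch.$i.target_name=$target
patch.$i.target_type=oracle_emd
EOF
i=$((i+1))
done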
4. Run the create_patch_plan command to create the plan, and supply the newly
created properties file (demo.props) as input:
emcli create_patch_plan
-name="name"
-input_file=data:"file_path"
[-impact_other_targets="add_all | add_original_only | cancel"]
Example:
emcli create_patch_plan -name=demo_agent -input_file=data:demo.props -
impact_other_targets=add_all
Note:
If the selected target impacts other targets, then you need to set
impact_other_targets to the value "add_all". For example, if
one of the Management Agents running on an NFS home is selected for patching, the
other Agents based on the same NFS home are also impacted by the patching operation,
so they are all required to be present in the patch plan.
5. After you have created the patch plan with all the relevant data, you can submit
your patch plan in the Analyze mode to verify whether the plan is deployable. To
do so, run the following command:
emcli submit_patch_plan -name=demo_agent -action=analyze
Output:
The action "analyze" is successfully submitted on the Patch Plan "demo_agent",
now "analyze" is in progress.
The Analyze mode runs all the validations required to ensure that the plan is
deployable. Deploy the plan only after the analysis completes successfully.
6. To verify the status of the patch plan, run the following EM CLI command:
emcli show_patch_plan -name=demo_agent -info | grep plan_status
Output:
<plan_status>CONFLICTS</plan_status>
If you see any conflicts, then you must resolve them before deploying the plan.
You can use the User Interface to resolve the issues, and then rerun the plan until
the status is CLEAN.
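Because the analyze and deploy actions run asynchronously, you may want to poll the plan
status from a script rather than re-running the command manually. The following is a
minimal sketch that relies only on the show_patch_plan verb and the status values shown
in this appendix (INPROGRESS, CONFLICTS, CLEAN, DEPLOY_SUCCESS); adjust the plan name and
polling interval as needed:
# Poll until the plan is no longer in progress, then print the final status.
while emcli show_patch_plan -name=demo_agent -info | grep plan_status | grep -q INPROGRESS; do
sleep 60
done
emcli show_patch_plan -name=demo_agent -info | grep plan_status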
7. After a successful analysis, you can deploy the patch plan. To do so, run the
following command with action deploy:
emcli submit_patch_plan -name=demo_agent -action=deploy
Output:
The action "deploy" is successfully submitted on the Patch Plan "demo_agent",
now "deploy" is in progress
8. To verify the status of the plan, run the EM CLI command show_patch_plan as
mentioned in step 6. The plan has been successfully deployed, and the targets
mentioned in the patch plan have been patched, only when the output of the command
is DEPLOY_SUCCESS.
emcli show_patch_plan -name=demo_agent -info
Output:
<plan>
<planDetails>
<plan_id>79CAF6A6DAFCFEE6654C425632F19411</plan_id>
<name>demo</name>
<type>PATCH</type>
<description/>
<conflict_check_date>Tue Feb 21 18:04:04 PST 2012</conflict_check_date>
<conflict_check_date_ms>1329876244000</conflict_check_date_ms>
<is_deployable>1</is_deployable>
<plan_status>CONFLICTS</plan_status>
<review_status>CONFLICT_FREE</review_status>
<created_date>Tue Feb 21 17:40:47 PST 2012</created_date>
<created_date_ms>1329874847000</created_date_ms>
<created_by>SYSMAN</created_by>
<last_updated_date>Tue Feb 21 17:58:29 PST 2012</last_updated_date>
<last_updated_date_ms>1329875909000</last_updated_date_ms>
<last_updated_by>SYSMAN</last_updated_by>
<grant_priv>yes</grant_priv>
<user_plan_privilege>FULL</user_plan_privilege>
<see_all_targets>N</see_all_targets>
<homogeneousGroupLabel>Database Instance 10.2.0.1.0 (Linux x86-64)</
homogeneousGroupLabel>
<executeGuid/>
<executeUrl/>
<planDetails/>
9. To get the details of the patching procedure/job that you submitted, use the GUID
of the execution in the get_job_execution_detail command as follows:
emcli get_job_execution_detail
-execution={execution_id}
[-xml [-showOutput [-tailLength={length}]]]
For Example:
emcli get_job_execution_detail -execution=79CAF6A6DAFCFEE6654C425632F19411 -xml
A.5.2.2 Using the Properties File of an Existing Patch Plan to Patch the Targets
To edit an existing patch plan after it has been created, in order to update the
patch-target pairs, the generic information, or the deployment options, follow the
steps listed here:
1. Get the user-editable data of the existing patch plan, and save the output as a
properties file (demo_agent.props).
Output:
name=demo_agent
description=
deployment_date=
planPrivilegeList=VIEWER:ADMIN:VIEW;OPER:ADMIN:VIEW;DESIGNER:ADMIN:VIEW
patch.0.patch_id=13426630
patch.0.release_id=9800371121010
patch.0.platform_id=2000
patch.0.language_id=0
patch.0.target_name=abc1.example.com:1836
patch.0.target_type=oracle_emd
patch.1.patch_id=13426630
patch.1.release_id=9800371121010
patch.1.platform_id=2000
patch.1.language_id=0
patch.1.target_name=abc2.example.com:4473
patch.1.target_type=oracle_emd
deploymentOptions.StageLocation=%emd_emstagedir%
deploymentOptions.AdvancedOPatchOptions=null
deploymentOptions.StagePatches=true
2. Edit the properties file (demo_agent.props) using any editor. You can change
the storage location as follows:
name=demo_agent
description=
deployment_date=
planPrivilegeList=VIEWER:ADMIN:VIEW;OPER:ADMIN:VIEW;DESIGNER:ADMIN:VIEW
patch.0.patch_id=13426630
patch.0.release_id=9800371121010
patch.0.platform_id=2000
patch.0.language_id=0
patch.0.target_name=abc1.example.com:1836
patch.0.target_type=oracle_emd
patch.1.patch_id=13426630
patch.1.release_id=9800371121010
patch.1.platform_id=2000
patch.1.language_id=0
patch.1.target_name=abc2.example.com:4473
patch.1.target_type=oracle_emd
deploymentOptions.StageLocation=%emd_emstagedir%/demo
deploymentOptions.AdvancedOPatchOptions=null
deploymentOptions.StagePatches=true
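If you prefer to script the change made in step 2 instead of opening an editor, a single
sed substitution (shown here only as a convenience; it is not part of the documented
procedure) produces the same result:
# Append /demo to the stage location in the saved properties file.
sed -i 's|deploymentOptions.StageLocation=%emd_emstagedir%|deploymentOptions.StageLocation=%emd_emstagedir%/demo|' demo_agent.props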
3. To save the patch plan with the newly edited data, run the set_patch_plan_data
command as follows:
Output:
Successfully updated deployment options for the patch plan.
Note:
If the selected target impacts other targets, then you need to set
impact_other_targets to the value "add_all". For example, if
one of the Management Agents running on an NFS home is selected for patching, the
other Agents based on the same NFS home are also impacted by the patching operation,
so they are all required to be present in the patch plan.
4. Follow steps 5, 6, 7, 8, and 9 mentioned in Creating a New Properties File for
Patching Targets to complete the patching process.
Note:
1. To retrieve the GUID of the Deployment Procedure, run the following command:
For Example:
./emcli get_procedures | grep DB
B3F.CE84B1ED96791E040578CD7810EC5, DBPROV,
Prov_112_db_using_SH_1ocked_acc_without_env_shift_username1,
Prov_112_db_using_SH_1ocked_acc_without_env_shift_ssubbura11, 1.0, USERNAME1,
SIHA_SIDB_PROC
B35E10B1F42AB4EEE040578CD78179DC, DB_PROV_UPGRADE, DbProvUpgradeDP, Upgrade
Oracle Database, 1.0, ORACLE
B35E10B1F427B4EEE040578CD78179DC, DBPROV, SIHA_SIDB_PROC, Provision Oracle
Database, 1.0, ORACLE
For example, here is a sample properties file with the values updated:
# The Procedure Configuration with name emcli_11202 has input and arguments as
follows:
# Input properties are:
DB_COMPONENT=11<ADMIN_NAME>/Oracle Database Installation Media
DB_HOST_NORMAL_CREDNAMES=AIME_USER1:<USERNAME>
DB_HOST_ROOT_CREDNAMES=AIME_ROOT:<USERNAME>
DB_ORACLE_BASE_LOC=/scratch/db11202
DB_ORACLE_HOME_LOC=/scratch/db11202/app/product/11.2.0/db
DB_PRODUCT_VERSION=11.2.0.2.0
DEPLOY_MODE=DEPLOY_DB
OINSTALL_GROUP=svrtech
OSDBA_GROUP=dba
OSOPER_GROUP=oper
PAUSE_AFTER_PREREQ=false
RAC_HOME_SHARED=false
SOURCE_TYPE=SOFTWARE_LIBRARY
TARGET_HOST_LIST=host.us.example.com
WORK_DIR_LOC=/tmp
• Create the WebLogic Domain Provisioning Profile. This ensures that the selected
domain and its Middleware home are archived and stored in the Software Library
for future cloning operations. You can use this profile while cloning a WebLogic
domain.
1. To retrieve the GUID of the Deployment Procedure, run the following command:
For example:
./emcli get_procedures | grep FMWPROV_
B35E10B1F154B4EEE040578CD78179DC, FMW Provisioning, FMWPROV_DP, Provision
Middleware, 2.0, ORACLE
2. Use the GUID retrieved in the previous step to prepare the Properties File template
using the following command:
For example:
Here is a sample properties file with the values updated. For
information about these parameters, see Table A-5.
FMW_PROFILE_LOCATION=Fusion Middleware Provisioning/Profiles/WLS 11g IM Profile
DEST_MIDDLEWARE_BASE=/scratch/oracle_wls/mwBase
DEST_ADMIN_SERVER_PASSWORD=<password>
DEST_ADMIN_SERVER_USERNAME=weblogic
DEST_JDK_HOME=/usr/local/packages/jdk7
ANALYZE_MODE=false
DEST_HOST_LIST.0=example.com
DEST_HOST_CREDENTIAL_LIST.0=user1:<password>
PROVISIONING_MODE=BASIC
SUBMITTED_FROM_UI=true
Table A-5 Description of the Parameters Used in a Properties File That Is Used
for Provisioning Oracle WebLogic Server with a Provisioning Profile
Parameter: FMW_PROFILE_LOCATION
Description: Absolute path to the Fusion Middleware profile. For example, Fusion
Middleware Provisioning/Profiles/WLS 11g IM Profile
Parameter: DEST_ADMIN_SERVER_USERNAME
Description: User name of the Administration Server on the destination host. For
example, weblogic
1. To retrieve the GUID of the Deployment Procedure, run the following command:
For example:
./emcli get_procedures | grep SCALEUP
B35E10B1F154B4EEE040578CD78179DC, FMW Provisioning, SCALEUP DP, Scale up/Scale
out Middleware, 2.0, ORACLE
2. Use the Instance GUID retrieved in the previous step to get input properties of an
instance of the procedure:
For example:
Note:
This step is valid only if an instance of the procedure is available, which
means that the procedure must have been submitted at least once in the
past. If you have never submitted the procedure, then you may see an error
message such as the following:
Instance with GUID=<guid> is not found in repository.
Please make sure the value is correct and try again.
Here is a sample properties file with the values updated. For
information about these parameters, see Table A-6.
COHERENCE_ENABLED=false
DOMAIN_TARGET_NAME=/Farm03_base_domain/base_domain
ONLINE_MODE=true
DOMAIN_TYPE=General
TEMPLATE=template.jar
TEMPLATE_NAME=mytemplate
APPS_ARCHIVE=apps.zip
ADMIN_HOST_NAME=example.com
ADMIN_LISTEN_ADDRESS=10.240.34.37
ADMIN_LISTEN_PORT=7002
ADMIN_PROTOCOL=t3
ADMIN_WLS_USERNAME=weblogic
ADMIN_WLS_PASSWORD=<password>
ADMIN_WORK_DIR_LOC=/tmp/scaleUpSrc
FARM_PREFIX=Farm03
OMS_WORK_DIR_LOC=/tmp
ARCHIVE_FILE_NAME=archive.jar
CLONING_JAR_NAME=cloningclient.jar
SESSION_TS_LOC=/20151013122409043
REF_PREREQ_ZIP=prereq.zip
REF_SIZE_FILE=sizeprereq
IS_CLONE=false
IS_PLAIN=true
NOT_WINDOWS=true
USE_OWNER_CREDENTIALS=true
APPS_LIST_FILE_NAME=files.list
PREREQ_ONLY_DP=false
COMPUTE_SKIP_CLONE=true
USE_EXISTING_HOME=false
IS_OHS_EXIST=false
IS_CUSTOM_LOCAL_DOMAIN=false
IS_SERVER_MIGRATION=false
WLS_VERSION=10.3.6.0
IS_EXEC_PREREQ=false
CONFIGURE_JOC=false
ADMIN_HOST.0.name=example.com
ADMIN_HOST.0.type=host
ADMIN_HOST.0.normalHostCreds=NAME:<password>:user1
ADMIN_HOST.0.SERVER_ADDRESS=example.com
ADMIN_HOST.0.IS_SERVER_MIGRATABLE=false
ADMIN_HOST.0.ORACLE_HOME=n/a
ADMIN_HOST.0.SERVER_PORT=19030
ADMIN_HOST.0.DOMAIN_HOME_ADMIN_HOST=/scratch/oracle_wls/mwBase/domains/base_domain
ADMIN_HOST.0.MIDDLEWARE_HOME=/scratch/oracle_wls/mwBase/middleware
ADMIN_HOST.0.TLOG_DIR=/scratch/oracle_wls/mwBase/domains/base_domain/servers/
Managed_Server1/tlogs
ADMIN_HOST.0.CLUSTER_NAME_ADMIN_HOST=Cluster_1
ADMIN_HOST.0.JDK_LOC=/ade_autofs/gd29_3rdparty/nfsdo_generic/JDK7/MAIN/
LINUX.X64/150608.1.7.0.85.0B015/jdk7/jre
ADMIN_HOST.0.SSL_PORT=0
ADMIN_HOST.0.EX_PATTERN=*.mem
ADMIN_HOST.0.MS_OVERRIDE_PORT=0
ADMIN_HOST.0.CLONE_MODE=true
ADMIN_HOST.0.WORK_DIR_LOC_ADMIN_HOST=/tmp/scaleUpSrc
ADMIN_HOST.0.MACHINE_NAME_ADMIN_HOST=Machine_1
ADMIN_HOST.0.WLS_HOME_ADMIN_HOST=/scratch/oracle_wls/mwBase/middleware/
wlserver_10.3
ADMIN_HOST.0.SOURCE_SERVER_NAME=Managed_Server_1
ADMIN_HOST.0.SERVER_NAME_ADMIN_HOST=Managed_Server1
DEST_MANAGED_SERVERS.0.name=example.com
DEST_MANAGED_SERVERS.0.type=host
DEST_MANAGED_SERVERS.0.normalHostCreds=NAME:<password>:user1
DEST_MANAGED_SERVERS.0.START_SERVER_REQUIRED=true
DEST_MANAGED_SERVERS.0.JOC_PORT=9988
DEST_MANAGED_SERVERS.0.WLS_HOME_DEST_MANAGED_SERVERS=/scratch/oracle_wls/mwBase/
middleware/wlserver_10.3
DEST_MANAGED_SERVERS.0.MACHINE_NAME=Machine_1
DEST_MANAGED_SERVERS.0.SERVER_NAME_DEST_MANAGED_SERVERS=Managed_Server1
DEST_MANAGED_SERVERS.0.JRE_LOC=/usr/local/packages/jdk7
DEST_MANAGED_SERVERS.0.START_NM=false
DEST_MANAGED_SERVERS.0.START_MS_USE_NM=true
DEST_MANAGED_SERVERS.0.NM_LISTEN_ADDRESS=example.com
DEST_MANAGED_SERVERS.0.CLUSTER_NAME_DEST_MANAGED_SERVERS=Cluster_1
DEST_MANAGED_SERVERS.0.DOMAIN_HOME_DEST_MANAGED_SERVERS=/scratch/oracle_wls/
mwBase/domains/base_domain
DEST_MANAGED_SERVERS.0.NM_LISTEN_PORT=5558
DEST_MANAGED_SERVERS.0.FMW_HOME_DEST_MANAGED_SERVERS=/scratch/oracle_wls/mwBase/
middleware
DEST_MANAGED_SERVERS.0.WORK_DIR_LOC_DEST_MANAGED_SERVERS=/tmp/scaleUpDest
DEST_MANAGED_SERVERS.0.MS_PORT_DETAILS=19030:Listen Port
DEST_MANAGED_SERVERS.0.DEST_MANAGED_SERVER_LISTEN_ADDRESS=example.com
Table A-6 Description of the Parameters Used in a Properties File for Scaling Up or Scaling Out a
WebLogic Server
Parameter: COHERENCE_ENABLED
Description: If set to True, then Coherence is configured through the deployment procedure. If set
to False, then Coherence is not configured through the deployment procedure.
Parameter: ONLINE_MODE
Description: If set to True, then the scale up operation is done while the other servers and
applications are up and running. If set to False, then the scale up operation is done while the
other servers and applications are not up and running.
Parameter: DOMAIN_TYPE
Description: Domain type. For wls, the value is General. For SOA, the value is soa.
Parameter: FARM_PREFIX
Description: Farm prefix of the domain where you want to add a server.
Parameter: APPS_LIST_FILE_NAME
Description: Set the value to files.list, which contains the list of all the applications running
on the server.
Parameter: USE_EXISTING_HOME
Description: Set to True if the server is created on a host where the Middleware home is already
present. Otherwise, set to False so that a new Middleware home is created.
Parameter: ADMIN_HOST.0.CLUSTER_NAME_ADMIN_HOST
Description: Cluster name present on the Administration Server host.
Parameter: ADMIN_HOST.0.SOURCE_SERVER_NAME
Description: Set to True, and then specify the source server name.
Parameter: DEST_MANAGED_SERVERS.0.JOC_PORT
Description: Set the Java Object Cache port number to 9988 if it needs to be configured for the
destination managed server.
Parameter: DEST_MANAGED_SERVERS.0.START_NM
Description: If the destination managed server is associated with an existing machine that is in
the running state, then set START_NM to False. If the destination managed server is associated
with a new machine, then set it to True and start the machine.
4. Submit the procedure with the generated properties file as the input.
In this example, the input properties are retrieved using the instance GUID of the
procedure submitted earlier, minor modifications are made to the properties file, and
the file is then submitted through EM CLI.
• Create a Software Library component containing hotspot JRE6 for Linux in the
following directory: /software_library/provisioning/hotspot_jre6_linux32.
A.6.3.2 Adding Steps and Phases to User Defined Deployment Procedure Using GUI
To add phases and steps to User Defined Deployment Procedure (UDDP), log in to
Cloud Control as a Designer, and follow these steps:
1. In Cloud Control, from the Enterprise menu, select Provisioning and Patching,
then select Procedure Library.
2. On the Provisioning page, from the Actions menu select Create New, and click
Go.
3. Provide a unique name for your procedure, for example UDDPTest, and click the
Procedure Steps tab.
5. Select the Default Phase, and click Insert to add a new step to the phase. On the
Create wizard select Type as Library:Component. The page refreshes, and a five-
step wizard appears.
a. On the Create page, enter a unique name Transfer JRE, and then click Next.
d. On the Map Properties page, map the directive properties with the variables
defined. For example, set the destination_path directive property to
Choose Variable, and then choose the procedure variable that you defined,
destination_path.
6. Select the step Transfer JRE, and click Insert. On the Create wizard, select Type
as Host Command. The page refreshes, and a three-step wizard appears.
a. On the Create page, enter a unique name Check JRE Version, and then click
Next.
7. Go back to the Procedure Library page, select the UDDPTest procedure that
you just created, and click Launch. To complete the wizard, enter the following
details: the target where you want to provision your procedure, the variable
(destination path: /tmp), the credential information, and the notification information.
8. Once you have provided all the details, click Submit. Enter a unique
submission name, for example FirstUDDP.
9. After the procedure has run, verify the output of the Check JRE Version step.
Ideally, the version should be JRE6.
1. Run the following command to retrieve a list of all the procedures that you have
submitted, and note down the instance ID:
emcli get_instances
For example: emcli get_instances -type=DemoNG
2. Run the following command to get a list of inputs submitted for your procedure:
3. Edit the file (mydp.properties), and change the values of the property
destination path to /scratch.
4. Submit the procedure with the modified properties file as the input:
1. Run the following command to search for the release ID of the Oracle WebLogic
Release 10.3.5:
2. Run the following command to search for the product ID of Oracle WebLogic:
3. Run the following command to search for the platform ID of a Generic Platform:
4. Search for the Patch ID using the product, release, and platform details that you
have from the previous steps as follows:
Output:
9561331 Generic PLATFORM - 10.3.5 Oracle WebLogic Server 10.3.5
Generic American English Recommended Generic Platform
5. Create a patch-target map (properties) file using the vi editor, and supply
information such as the Patch ID, Release ID, Platform ID, Language ID, and so on.
Here is a sample properties file:
vi create.props
patch.0.patch_id=9561331
patch.0.release_id=8191035000
patch.0.platform_id=2000
patch.0.language_id=0
patch.0.target_name=/Farm01_soa_domain/soa_domain
patch.0.target_type=weblogic_domain
6. Run the following command to create the plan, and supply the newly created
properties file (create.props) as input:
Output:
The Patch Plan "demo1" is successfuly created.
7. To view the user-editable fields of an existing plan and save the output to a
properties file (set.props), run the following command:
vi set.props
Output:
name=demo1
description=
deployment_date=
planPrivilegeList=VIEWER:ADMIN:VIEW;OPER:ADMIN:VIEW;DESIGNER:ADMIN:VIEW
patch.0.patch_id=9561331
patch.0.release_id=8191035000
patch.0.platform_id=2000
patch.0.language_id=0
patch.0.target_name=/Farm01_soa_domain/soa_domain
patch.0.target_type=weblogic_domain
deploymentOptions.StageLocation=%emd_emstagedir%
deploymentOptions.AdvancedOPatchOptions=AllNodes
deploymentOptions.StagePatches=true
deploymentOptions.rollbackMode=false
8. Edit the properties file (set.props) using any editor to change the rollback mode
to true:
name=demo1
description=
deployment_date=
planPrivilegeList=VIEWER:ADMIN:VIEW;OPER:ADMIN:VIEW;DESIGNER:ADMIN:VIEW
patch.0.patch_id=9561331
patch.0.release_id=8191035000
patch.0.platform_id=2000
patch.0.language_id=0
patch.0.target_name=/Farm01_soa_domain/soa_domain
patch.0.target_type=weblogic_domain
deploymentOptions.StageLocation=%emd_emstagedir%
deploymentOptions.AdvancedOPatchOptions=AllNodes
deploymentOptions.StagePatches=true
deploymentOptions.rollbackMode=true
9. To save the patch plan with the new edited data, run the following command:
Output:
Successfully updated deployment options for the patch plan.
10. To verify the status of the patch plan, run the following EM CLI command:
Output:
<plan>
<planDetails>
<plan_id>EDD74FFF006DD0EE6D28394B8AAE</plan_id>
<name>demo1</name>
<type>PATCH</type>
<description/>
<conflict_check_date>Tue Feb 21 18:04:04 PST 2012</conflict_check_date>
<conflict_check_date_ms>1329876244000</conflict_check_date_ms>
<is_deployable>1</is_deployable>
<plan_status>CONFLICTS</plan_status>
<review_status>CONFLICT_FREE</review_status>
<created_date>Tue Feb 21 17:40:47 PST 2012</created_date>
<created_date_ms>1329874847000</created_date_ms>
<created_by>SYSMAN</created_by>
<last_updated_date>Tue Feb 21 17:58:29 PST 2012</last_updated_date>
<last_updated_date_ms>1329875909000</last_updated_date_ms>
<last_updated_by><USERNAME></last_updated_by>
<grant_priv>yes</grant_priv>
<user_plan_privilege>FULL</user_plan_privilege>
<see_all_targets>N</see_all_targets>
<homogeneousGroupLabel>Oracle WebLogic Domain 10.3.5.0 (Linux x86-64)</
homogeneousGroupLabel>
<executeGuid/>
<executeUrl/>
<planDetails/>
11. After you have created and updated the patch plan with all the relevant data, you
can submit your patch plan, first in Analyze mode and then in Deploy mode. The EM CLI
command used to submit the patch plan in Analyze mode is:
Output:
The action "analyze" is successfully submitted on the Patch Plan "demo1", now
"analyze" is in progress.
12. To verify the status of the patch plan submitted, run the following EM CLI
command:
Output:
<plan>
<planDetails>
<plan_id>EDD74FFF006DD0EE6D28394B8AAE</plan_id>
<name>demo1</name>
<type>PATCH</type>
<description/>
<conflict_check_date>Tue Feb 21 18:04:04 PST 2012</conflict_check_date>
<conflict_check_date_ms>1329876244000</conflict_check_date_ms>
<is_deployable>1</is_deployable>
<plan_status>CONFLICTS</plan_status>
<review_status>CONFLICT_FREE</review_status>
<created_date>Tue Feb 21 17:40:47 PST 2012</created_date>
<created_date_ms>1329874847000</created_date_ms>
<created_by>USERNAME</created_by>
<last_updated_date>Tue Feb 21 17:58:29 PST 2012</last_updated_date>
<last_updated_date_ms>1329875909000</last_updated_date_ms>
<last_updated_by>USERNAME</last_updated_by>
<grant_priv>yes</grant_priv>
<user_plan_privilege>FULL</user_plan_privilege>
<see_all_targets>N</see_all_targets>
<homogeneousGroupLabel>Oracle WebLogic Domain 10.3.5.0 (Linux x86-64)</
homogeneousGroupLabel>
<executeGuid/>
<executeUrl/>
<planDetails/>
13. To check if there are any conflicts, run the following command:
Output:
<plan_status>CONFLICTS</plan_status>
You can verify the plan you have created by logging in to Enterprise Manager
Cloud Control: from the Enterprise menu, select Provisioning and Patching, then
select Patches & Updates. On the home page, you will see the patch plan demo1
that you created using the command line.
You can resolve the conflicts using the UI, and then submit the patch plan.
14. Run the command show_patch_plan after resolving the conflicts to verify the
status of the plan as follows:
Output:
<plan>
<planDetails>
<plan_id>EDD74FFF006DD0EE6D28394B8AAE</plan_id>
<name>demo</name>
<type>PATCH</type>
<description/>
<conflict_check_date>Tue Feb 21 18:04:04 PST 2012</conflict_check_date>
<conflict_check_date_ms>1329876244000</conflict_check_date_ms>
<is_deployable>1</is_deployable>
<plan_status>INPROGRESS</plan_status>
<review_status>CONFLICT_FREE</review_status>
<created_date>Tue Feb 21 17:40:47 PST 2012</created_date>
<created_date_ms>1329874847000</created_date_ms>
<created_by>USERNAME</created_by>
<last_updated_date>Tue Feb 21 17:58:29 PST 2012</last_updated_date>
<last_updated_date_ms>1329875909000</last_updated_date_ms>
<last_updated_by>USERNAME</last_updated_by>
<grant_priv>yes</grant_priv>
<user_plan_privilege>FULL</user_plan_privilege>
<see_all_targets>N</see_all_targets>
<homogeneousGroupLabel>Oracle WebLogic Domain 10.3.5.0 (Linux x86-64)</
homogeneousGroupLabel>
<executeGuid>BA8E3904DDB36CFFE040F00A5E644D13</executeGuid>
<executeUrl>/em/console/paf/procedureStatus?
executionGUID=BA8E3904DDB36CFFE040F00A5E644D13</executeUrl>
<planDetails/>
15. Run the following command to determine the status of the patch plan execution:
Output:
BA8E3904DDB36CFFE040F00A5E644D13, PatchOracleSoftware, demo1_Analysis_Tue Mar 06
02:08:02 PST 2012, EXECUTING
16. After a successful analysis, you can deploy/prepare the patch plan. To do so, run
the following command with action deploy:
Output:
The action "deploy" is successfully submitted on the Patch Plan "demo1", now
"deploy" is in progress
17. Use the Cloud Control UI to see if the submitted plan has been successfully
deployed. Alternatively, you can verify the same using the EM CLI command:
Output:
Folder myFolder is created in Software Library folder, identifier is
oracle:defaultService:em:provisioning:
1:cat:C771B5A38A484CE3E40E50AD38A69D2.
You can use the identifier of the newly created folder that is part of the output
message when creating or modifying entities, or for creating other sub-folders.
1. From the Enterprise menu, select Provisioning and Patching, then select Software
Library.
2. On the Software Library home page, from the View menu, select Columns, and then
select Internal ID. By default, the Internal ID column is hidden.
Output:
Java EE Provisioning,Java EE Application Provisioning
Entities,oracle:defaultService:em:provisioning:
1:cat:C771B5AAF4A4EED9E040E50AD38A6E98
MultiOMS,List of Oracle shipped
Directives,oracle:defaultService:em:provisioning:
1:cat:C771B5AAF1ACEED9E040E50AD38A6E98
myFolder,myFolder
description,oracle:defaultService:em:provisioning:
1:cat:C771B5A38A484CE3E040E50AD38A69D2
OSBProvisioning,OSBProvisioning
Entities,oracle:defaultService:em:provisioning:
1:cat:C771B5AAF3F1EED9E040E50AD38A6E98
..........
If the folder you want to access is a sub-folder of myFolder, then list the
sub-folders by specifying the identifier of myFolder using the following verb:
emcli list_swlib_folders
-parent_id='oracle:defaultService:em:provisioning:
1:cat:C771B5A38A484CE3E040E50AD38A69D2'
-show_folder_id
Output:
mySubFolder,mySubFolder
description,oracle:defaultService:em:provisioning:
1:cat:C771B5A38A494CE3E040E50AD38A69D2
Output:
Component, COMP_Component
Directives, COMP_Directives
Bare Metal Provisioning, BMPType
Virtualization, Virtualization
Output:
Generic Component, SUB_Generic
Oracle Database Software Clone, SUB_OracleDB
Configuration Template, SUB_ConfigTmpl
SUB_OracleAS
Self Update, SUB_SelfUpdate
Oracle Clusterware Clone, SUB_OracleCRS
Service Bus Resource, SUB_OSBResource
Oracle Software Update, SUB_OraSoftUpdate
Java EE Application, SUB_JavaEEApplication
Installation Media, SUB_InstallationMedia
Database Template, SUB_DbCreateTemplate
Database Provisioning Profile, SUB_DbProfile
WebLogic Domain Provisioning Profile, SUB_FMWBundle
WebLogic Domain Clone, SUB_WLSTemlpate
Oracle Middleware Home Gold Image, SUB_FMWImage
Note:
The type and subtype options are optional when creating a Generic
Component, but have been used explicitly in this illustration.
Output:
Entity 'myEntity' is created in 'mySubFolder' folder, identifier is
'oracle:defaultService:em:provisioning:
1:cmp:COMP_Component:SUB_Generic:C77200CA9DC1E7AAE040E50AD38A1599:0.1'
Note:
You can use the identifier of the newly created entity that is part of the output
message when uploading files or modifying the entity.
To verify the newly created entity, use the following verb:
emcli list_swlib_entities
-name=myEntity
-folder_id='oracle:defaultService:em:provisioning:
1:cat:C771B5A38A494CE3E040E50AD38A69D2'
Output:
myEntity,0.1,myEntity description,Ready,Component,Generic
Component,Untested,SYSMAN
Note:
A new revision of the entity myEntity will be created after the upload is
complete.
Output:
Upload of file(s) initiated, this may take some time to complete...
Upload of file(s) completed successfully.
Entity 'myEntity (0.2)' in 'mySubFolder' folder has been created, identifier is
'oracle:defaultService:em:provisioning:
1:cmp:COMP_Component:SUB_Generic:C77200CA9DC1E7AAE040E50AD38A1599:0.2'.
Output:
Entity 'myEntity (0.2)' in 'mySubFolder' folder has been created, identifier is
'oracle:defaultService:em:provisioning:
1:cmp:COMP_Component:SUB_Generic:C77200CA9DC1E7AAE040E50AD38A1599:0.2'.
Output:
myEntity,0.1,myEntity description,Ready,Component,Generic Component,Untested,USERNAME
Sample Output:
secondLoc, /u01/swlib/, Active
Verifying the Status of Source OMS Agent File System Storage Location
Immediately after initiating the migrate and remove operation for storage location
'firstLoc', the location will be marked 'Inactive' to stop any new file uploads to this
location. To verify the status of the location, use the following command.
emcli list_swlib_storage_locations
-type=OmsAgent
Sample Output:
firstLoc, /u01/swlib/, Inactive
Once the migrate job is complete, the 'firstLoc' location is not listed, as it is removed.
Note:
Ensure that you have uploaded an ATS Service Test type to the Test
Repository before proceeding with this procedure. For more information on
this, see the Configuring and Using Services chapter of the Oracle Enterprise
Manager Cloud Control Administrator's Guide.
To create an ATS Test instance using the service test available in the repository, and to
customize the test by applying a custom databank, follow these steps:
1. Create an ATS Service Test instance called my_service for the existing service:
Where:
-input_file=template:'my_template.xml' contains all the ATS test-related
information.
-swlibPath holds the path used to retrieve the ATS ZIP file from the Software Library.
</mgmt_bcn_transaction>
</transactions>
</transaction-template>
2. Upload new databank file called ATSTest1 to ATS test type for the service
my_service:
1. To retrieve the GUID of the Deployment Procedure, run the following command:
./emcli get_procedures
For example:
2. Use the GUID retrieved in the previous step to prepare the Properties File template
using the following command:
For example:
emcli describe_procedure_input -procedure=F7A60FD5AF1B3E8FE043D97BF00AC094 > inst.properties
A properties file with the name inst.properties is created.
destinationDir
domains.0.javaeeApps.0.copyingComponentsList.0.fileNameWithoutPath=ExampleApp.war
domains.0.javaeeApps.0.copyingComponentsList.0.name=host.us.example.com
domains.0.javaeeApps.0.copyingComponentsList.0.type=host
domains.0.javaeeApps.0.defaultHostCred=PREF:HostCredsNormal
domains.0.javaeeApps.0.deleteTarget=
domains.0.javaeeApps.0.deplMode=Deploy
domains.0.javaeeApps.0.domainName=
domains.0.javaeeApps.0.isSharedLib=false
domains.0.javaeeApps.0.name=host.us.example.com
domains.0.javaeeApps.0.planPath=n/a
domains.0.javaeeApps.0.postDeployScript=n/a
domains.0.javaeeApps.0.preDeployScript=n/a
domains.0.javaeeApps.0.retirementPolicy=true
domains.0.javaeeApps.0.retirementTimeout=0
domains.0.javaeeApps.0.runExecutionScript=false
domains.0.javaeeApps.0.stageMode=DEFAULT
domains.0.javaeeApps.0.startMode=full
domains.0.javaeeApps.0.targets="ManagedServer_1"
domains.0.javaeeApps.0.type=host
domains.0.javaeeApps.0.wlsAdminURL=t3s://host.us.example.com:7022
# domains.0.javaeeApps.0.wlsDomainPassword=***
domains.0.javaeeApps.0.wlsDomainUserName=weblogic
domains.0.javaeeApps.0.wlsHome=/tmp/work/middleware/wlserver_10.3
domains.0.name=host.us.example.com
domains.0.stopOnDeployError=false
domains.0.type=host
undeployMode=false
4. Submit the procedure with the generated inst.properties properties file as the
input:
For example:
• You cannot add or edit steps and phases using EM CLI commands. To do so, you
must log in to Cloud Control, and follow the steps described in Section 33.2.1.
• You cannot define new variables to be used in the deployment procedures through
EM CLI; this can be done only through the Cloud Control UI. For more information
about procedure variables, see Section 32.3.2.
• You cannot track the detailed execution information (such as failures) of an instance
through EM CLI; this is possible only through the Cloud Control UI.
• To set the My Oracle Support preferred credentials, you must log in to Enterprise
Manager Cloud Control. There is no command-line option to do so.
This appendix describes the settings you must make on the hosts before you can use
them for provisioning and patching tasks. In particular, this appendix covers the
following:
• Shell Limits
• Environment Settings
• Storage Requirements
– groupadd oinstall
– groupadd dba
– groupadd oper
– groupadd asmadmin
• To add a host user to these groups, run the following command, and enter the
password when prompted.
useradd -u 500 -g oinstall -G dba,oper,asmdba oracle
Where,
-u option specifies the user ID.
-g option specifies the primary group, which must be the Oracle Inventory group,
for example oinstall.
-G option specifies the secondary groups, which must include the OSDBA group,
and, if required, the OSOPER and ASMDBA groups, for example, dba, asmdba, or
oper.
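After running the command, you can confirm the memberships with the standard id utility,
for example:
# Lists the UID, primary group, and secondary groups of the oracle user.
id oracle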
• Add the following values to the limits.conf file located in the
/etc/security/ directory:
• Add the following line to the /etc/pam.d/login file, or edit the
/etc/pam.d/login file to include it if it does not already exist:
session required pam_limits.so
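To check the limits currently in effect for a shell session (for example, before and
after editing limits.conf), you can use the standard ulimit builtin:
ulimit -a    # show all current shell limits
ulimit -n    # show the maximum number of open file descriptors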
• Kernel Requirements
• Package Requirements
Note:
For details about all the recommended parameters, refer to the following link:
http://www.oracle.com/technetwork/topics/linux/validated-configurations-085828.html
Note:
To change the current kernel parameters, run the following command with
root user privileges:
/sbin/sysctl -p
Parameter: semmsl, semmns, semopm, and semmni
Command: # /sbin/sysctl -a | grep sem
This command displays the values of the semaphore parameters in the order listed.
Note:
For more information about the Kernel requirements, see the Oracle Database
Installation Guide available at the following location:
http://www.oracle.com/pls/db112/portal.portal_db?selected=11&frame=#linux_installation_guides
2. The following table describes the relationship between the installed RAM and the
configured swap space recommendation:
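Before comparing your system against these recommendations, you can check the installed
RAM and the configured swap space with standard Linux commands, for example:
grep MemTotal /proc/meminfo
grep SwapTotal /proc/meminfo
/sbin/swapon -s    # lists the active swap devices and their sizes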
3. To determine the amount of disk space available in the /tmp directory, run the
following command:
df -kh /tmp
• Each network adapter must support TCP/IP.
• The interconnect must support the User Datagram Protocol (UDP) using high-speed
network adapters and switches that support TCP/IP (Gigabit Ethernet or better is
required).
Note: For the private network, the endpoints of all designated interconnect
interfaces must be completely reachable on the network. Every node must be
connected to every private network interface. You can test whether an
interconnect interface is reachable using a ping command.
Before starting the installation, you must have the following IP addresses available for
each node:
1. An IP address with an associated host name (or network name) registered in the
DNS for the public interface. If you do not have an available DNS, then record the
host name and IP address in the system hosts file, /etc/hosts.
2. One virtual IP (VIP) address with an associated host name registered in a DNS. If
you do not have an available DNS, then record the host name and VIP address in
the system hosts file, /etc/hosts.
To enable VIP failover, the configuration shown in the preceding table defines the
public and VIP addresses of both nodes on the same subnet, 143.46.43.
• Oracle Automatic Storage Management (Oracle ASM): You can install Oracle
Clusterware files (Oracle Cluster Registry and voting disk files) in Oracle ASM disk
groups.
• A supported shared file system: Supported file systems include NFS and OCFS.
The following table describes the various storage options for Oracle Clusterware and
Oracle RAC:
The following table displays the File System Volume Size requirements:
Note:
Ensure that the user performing the installation has write access on the mount
points. To grant the required permissions to the user, run the following
command:
chown -R oracle:oinstall <mount point>
For example:
If the permission is denied while mounting:
[root@node2-pub ~]# mkdir -p /u01/app/test
[root@node2-pub ~]# permission denied
To resolve the permission issue, run the following command:
[root@node2-pub root]# chown -R oracle:oinstall /u01
This appendix introduces you to the emctl partool utility and explains how you can
use it to perform critical tasks such as exporting Deployment Procedures as PAR files,
importing PAR files, and so on. In particular, this appendix covers the following:
Note:
Provisioning Archive Files that were exported from any Enterprise Manager
version prior to 12c can not be imported into Enterprise Manager 12c.
Note:
For importing PAR files that contain Software Library entities, ensure that
your Software Library is configured. For information on Configuring Software
Library, see Oracle Enterprise Manager Cloud Control Administrator's Guide.
Using PAR files, you can export Deployment Procedures, along with the components and
directives they reference, from one instance of Cloud Control, and deploy them to
another instance of Cloud Control.
The emctl partool utility is a tool offered by Cloud Control that helps you perform
these functions using the command-line interface. Essentially, the emctl partool
utility helps you:
• Export Deployment Procedures and their associated components and directives as
PAR files
• Import PAR files to the same instance or any other instance of Cloud Control
The emctl partool utility is located in the $ORACLE_HOME/bin directory.
The following is the usage information displayed when you run
$ORACLE_HOME/bin/emctl partool:
emctl partool <deploy|view> -parFile <file> -force(optional)
emctl partool <deploy|view> -parFile <file> -force(optional) -ssPasswd <password>
emctl partool <deploy|view> -parDir <dir> -force(optional)
emctl partool export -guid <procedure guid> -file <file> -displayName <name> -description <desc> -metadataOnly(optional)
emctl partool check
emctl partool help
Table C-1 describes the additional options that can be used with the emctl partool
utility.
Option: -repPasswd <repPassword>
Description: Indicates the repository password. The user will be prompted for the
repository password if -repPasswd is not specified on the command line.
Note: Providing a password on the command line is insecure and should be avoided
in a production environment.
Option: -description <description>
Description: PAR file description.
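For illustration, here is a hedged pair of invocations built only from the usage shown
above; the GUID placeholder, file name, display name, and description are hypothetical
values to be replaced with your own:
# Export a Deployment Procedure (and, unless -metadataOnly is used, its components and directives) as a PAR file.
$ORACLE_HOME/bin/emctl partool export -guid <procedure guid> -file /tmp/myProcedure.par -displayName "My Procedure" -description "Exported copy of My Procedure"
# Deploy the PAR file on another Cloud Control instance, supplying the secret store password if secret properties are included.
$ORACLE_HOME/bin/emctl partool deploy -parFile /tmp/myProcedure.par -ssPasswd <password>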
1. In Cloud Control, from the Enterprise menu select Provisioning and Patching, and
then click Procedure Library.
2. On the Provisioning page, right-click the deployment procedure name, and from the
menu select Copy Link Location.
3. Paste the copied link into a text editor, and then search for guid.
Alternatively, you can use the following EM CLI command to retrieve the GUID of the
procedure:
emcli get_procedures [-type={procedure type}] [-parent_proc={procedure associated
with procedure configuration}]
Example:
emcli get_procedures -type=DemoNG -parent_proc=ComputeStepTest
Output Column:
GUID, Procedure type, name, display name, version, Parent procedure name
Note:
When a procedure is exported using emctl partool, any directives or
components referred to by the procedure are also exported. However, only the
latest revision of these directives or components will be exported. If you do
not want to export components or directives, you can specify the
-metadataOnly flag when running emctl partool.
Note:
Importing an existing PAR file (from the previous releases) into Enterprise
Manager Cloud Control 12c is not supported. For example, you cannot import
an Enterprise Manager 11g Grid Control Release 1 (11.1.0.0) PAR file into
Enterprise Manager Cloud Control 12c.
Note:
If you have multiple OMSes in your environment, then you need to run the
emctl partool utility only once to deploy any PAR files or to perform other
related operations.
While importing PAR files, if the user procedure already exists in the setup, then the
procedure is always imported with a revised (bumped up) version.
1. In Cloud Control, from the Enterprise menu select Provisioning and Patching, and
then click Procedure Library.
2. On the Deployment Procedure Manager page, select the procedure, and from the
drop-down menu, select Import, and then click Go.
• Upload from Local Machine to upload the PAR file from the local machine.
Click Browse to select the PAR file. Click Import to import the file.
See Also:
For a usecase on using the Upload Procedure File feature, see Copying
Customized Provisioning Entities from One Enterprise Manager Site to
Another.
Note:
When importing or exporting components and/or directives that contain
properties with secret values, you must use the -ssPasswd command and
provide the secret store password to create Oracle Wallet. This ensures that
the properties with secret values are securely stored using an Oracle Wallet,
and can be accessed while importing with only the Oracle Wallet password.
For more information about the -ssPasswd command, see Table C-1.
This appendix explains PXE booting and kickstart technology in the following section:
1. Target Machine (either bare metal or with boot sector removed) is booted.
2. The Network Interface Card (NIC) of the machine triggers a DHCP request.
3. DHCP server intercepts the request and responds with standard information (IP,
subnet mask, gateway, DNS etc.). In addition, it provides information about the
location of a TFTP server and boot image (pxelinux.0).
4. When the client receives this information, it contacts the TFTP server for obtaining
the boot image.
5. TFTP server sends the boot image (pxelinux.0), and the client executes it.
6. By default, the boot image searches the pxelinux.cfg directory on the TFTP server
for boot configuration files using the following approach (a small sketch reproducing
this search order appears after this procedure):
First, it searches for the boot configuration file that is named according to the
MAC address represented in lower case hexadecimal digits with dash separators.
For example, for the MAC Address "88:99:AA:BB:CC:DD", it searches for the file
01-88-99-aa-bb-cc-dd.
Then, it searches for the configuration file using the IP address (of the machine
that is being booted) in upper case hexadecimal digits. For example, for the IP
Address "192.0.2.91", it searches for the file "C000025B".
If that file is not found, it removes one hexadecimal digit from the end and tries
again. However, if the search is still not successful, it finally looks for a file named
"default" (in lower case).
For example, if the boot file name is /tftpboot/pxelinux.0, the Ethernet MAC
address is 88:99:AA:BB:CC:DD, and the IP address is 192.0.2.91, the boot image
looks for file names in the following order:
/tftpboot/pxelinux.cfg/01-88-99-aa-bb-cc-dd
/tftpboot/pxelinux.cfg/C000025B
/tftpboot/pxelinux.cfg/C000025
/tftpboot/pxelinux.cfg/C00002
/tftpboot/pxelinux.cfg/C0000
/tftpboot/pxelinux.cfg/C000
/tftpboot/pxelinux.cfg/C00
/tftpboot/pxelinux.cfg/C0
/tftpboot/pxelinux.cfg/C
7. The client downloads all the files it needs (kernel and root file system), and then
loads them.
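The following is a small shell sketch, provided only as an illustration of the naming
scheme described in step 6, that prints the same search order for the sample MAC address
and IP address used above:
mac="88:99:AA:BB:CC:DD"
ip="192.0.2.91"
# MAC-based name: "01-" prefix plus the MAC in lower-case hex with dash separators.
echo "/tftpboot/pxelinux.cfg/01-$(echo "$mac" | tr 'A-F:' 'a-f-')"
# IP-based names: upper-case hex, shortened by one digit at a time.
hex=$(printf '%02X%02X%02X%02X' $(echo "$ip" | tr '.' ' '))
while [ -n "$hex" ]; do
echo "/tftpboot/pxelinux.cfg/$hex"
hex=${hex%?}
done
echo "/tftpboot/pxelinux.cfg/default"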
E.2.4 Monitor the Progress and Report the Status of the Change Activities
Monitor the progress of the various change activities, track the status of the patch
rollout operation, identify any drifts, and report the overall status to your higher
management.
Task: Create a Patch Plan, Test the Patches, and Certify the Patches
Roles/Privileges: EM_PATCH_DESIGNER, EM_CAP_USER, BASIC_CAP_ACCESS
For instructions to create administrators with these roles, see the instructions outlined
in the following URL:
http://docs.oracle.com/cd/E24628_01/em.121/e27046/
infrastructure_setup.htm#BABGJAAC
E.3.3 Analyze the Environment and Identify Whether Your Targets Can Be Patched
Role: EM_PATCH_DESIGNER
Before creating a patch plan to patch your targets, Oracle recommends that you view
the patchability reports to analyze the environment and identify whether the targets
you want to patch are suitable for a patching operation. These reports provide a
summary of your patchable and non-patchable targets, and help you create deployable
patch plans. They identify the problems with the targets that cannot be patched in
your setup and provide recommendations for them.
Patchability reports are available for Oracle Database, Oracle WebLogic Server, and
Oracle SOA Infrastructure targets.
To view the patchability reports, see the instructions outlined in the following URL:
http://docs.oracle.com/cd/E24628_01/em.121/e27046/
pat_mosem_new.htm#BGGDJDEC
E.3.5 Create a Patch Plan, Test the Patches, and Certify the Patches
Role: EM_PATCH_DESIGNER
Create a patch plan with the recommended patches, test the patches using the patch
plan, diagnose and resolve all patch conflicts beforehand. Once the patch plan is
deployable, certify the patch plan by converting it to a template.
To create a patch plan, see the instructions outlined in the following URL:
http://docs.oracle.com/cd/E24628_01/em.121/e27046/
pat_mosem_new.htm#CHDHBDCE
To access the newly created patch plan, see the instructions outlined in the following
URL:
http://docs.oracle.com/cd/E24628_01/em.121/e27046/
pat_mosem_new.htm#CHDIGHCF
To add patches to the patch plan, to analyze and test the patches, and to save the patch
plan as a patch template, follow Step (1) to Step (5) as outlined in the following URL,
and then for Step (6), on the Review & Deploy page, click Save as Template. In the
Create New Plan Template dialog, enter a unique name for the patch template, and
click Create Template.
http://docs.oracle.com/cd/E24628_01/em.121/e27046/
pat_mosem_new.htm#CHDHBEBD
This appendix provides solutions to common issues you might encounter when using
provisioning and patching Deployment Procedures. In particular, this appendix
covers the following:
• Refreshing Configurations
F.1.1.1 Issue
Grid Infrastructure root script fails.
F.1.1.2 Description
After Grid Infrastructure bits are laid down, the next essential step is Grid
Infrastructure root script execution. This is the most process-intensive phase of your
deployment procedure. During this process, the GI stack configures itself and ensures
all subsystems are alive and active. The root script may fail to run.
F.1.1.3 Solution
1. Visit each node that reported an error, and run the following command on n-1 nodes:
2. If the root script did not run successfully on any of the nodes, pass the -lastNode
switch on the nth node (conditionally) to the final invocation as shown below.
Now, retry the failed step from the Procedure Activity page.
F.1.2.1 Issue
A SUDO error occurs while performing a deployment.
F.1.2.2 Description
While performing a deployment, all root-related operations are performed over
sudo. To improve security, production environments tend to fortify sudo. Therefore,
you may encounter errors related to sudo.
F.1.2.3 Solution
Make the following changes in your sudoers file:
2. If the sudoers file contains the entry Default env_reset, add the following entries
after this parameter:
F.1.3.1 Issue
Prerequisite checks fail when submitting a deployment procedure.
F.1.3.2 Cause
Perform a meticulous analysis of the output from the prerequisite checks. While most
prerequisite failures are automatically fixed, it is likely that the deployment procedure
failed because it could not automatically fix the environment requirements. Some likely
cases are:
• Group membership for users that are not local to the system. Since users are
registered with a directory service, even root access does not enable the
deployment procedure to alter their attributes.
• Zone separation in Solaris. If the execution zone of deployment procedure does not
have privilege to modify system attributes, auto-fix operations of the deployment
procedure will fail.
F.1.3.3 Solution
Ensure that the deployment procedure has appropriate privileges.
F.1.4 Oracle Automatic Storage Management (Oracle ASM) Disk Creation Failure
See the details below.
F.1.4.1 Issue
Oracle ASM disk creation fails
F.1.4.2 Cause
ASM disks tend to be used and purged over time. If an ASM instance is purged and
physical ASM disks are left in their existing spurious state, they contain diskgroup
information that can interfere with future ASM creation. This happens if the newly
created ASM uses the same diskgroup name as exists in the header of such a raw disk.
If such a spurious disk exists in the disk discovery path of the newly created ASM
instance, it will be consumed and raise an unexpected error.
F.1.4.3 Solution
Ensure that the disk discovery path is as restrictive as possible. Also, ASM disks
should be zeroed out as soon as the ASM instance is purged. Deployment procedures that
support post-11.2 RDBMS releases have elaborate checks to detect this use case and warn
the user beforehand.
F.1.5.1 Issue
Encountered an Oracle ASM Disk permissions error
F.1.5.2 Description
Unlike NFS-mounted storage, where permissions set on any one node are visible
throughout, ASM disk groups require permissions to be set on each raw disk for all
participating nodes.
F.1.5.3 Solution
For all participating nodes of the cluster, set 660 permissions on each raw disk being
consumed, as shown in the example below.
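For example, assuming hypothetical raw device names (substitute the devices actually
used by your disk groups), you could run the following on every participating node:
# Grant read/write access to the owner and group on each raw disk.
chmod 660 /dev/raw/raw1 /dev/raw/raw2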
1. Log in as a designer, and from the Enterprise menu, select Provisioning and
Patching, then select Database Provisioning.
3. In the Create Like Procedure page, in the General Information tab, provide a name
for the deployment procedure.
In the Procedure Utilities Staging Path, specify the directory you want to use
instead of /tmp, for example, /u01/working.
4. Click Save.
F.1.7.1 Issue
During deployment procedure execution, the steps to create database and Oracle ASM
storage fails.
F.1.7.2 Solution
When a step in a deployment procedure executes successfully, it returns a positive exit
code. If the step fails, and the exit code is not positive, it raises an incident which is
stored in the OMS. All the associated log files and diagnosability information such as
memory usage, disk space, running process info and so on are packaged and stored.
You can access the Incident Console and package this information as a Service Request
and upload it to My Oracle Support. Click Help on the Incident Manager page for
more information on creating a new Service Request.
F.1.9.1 Issue
Deployment procedure execution fails
F.1.9.2 Solution
If your deployment procedure execution has failed, check the job run details. You can
retry a failed job execution or choose to ignore failed steps. For example, if your
deployment procedure execution failed due to lack of disk space, you can delete files
and retry the procedure execution. For certain issues which may not affect the overall
deployment procedure execution, such as a cluvfy check failure, you may want to ignore
the failed step and run the deployment procedure.
Retrying a job execution creates a new job execution with the status Running. The
status of the original execution is unchanged.
To ignore a failed step and retry the procedure, follow these steps:
2. In the Job Status page, click on the status of the failed step.
3. In the Step Status page, click Ignore. In the Confirmation page, click Yes.
• Collection Issues
F.2.1.1.1 Issue
While analyzing the patch plan, the patch plan fails with an unexpected error,
although the credentials are correct and you have write permission on the
EM stage location.
For example, the following error is seen on job output page:
"Unexpected error occurred while checking the Normal Oracle Home Credentials"
F.2.1.1.2 Cause
You might have set up the Software Library using the OMS Agent file system, and you
might not have access to the named credentials that were used for setting up the
Software Library.
F.2.1.1.3 Solution
To resolve this issue, explicitly grant yourself access to the named credentials that
were used to set up the Software Library.
F.2.1.2.1 Issue
When you upload a patch set using the Upload Patches to Software Library page, you
might see an error stating that the operation failed to read the patch attributes.
For example,
ERROR:Failed to read patch attributes from the uploaded patch file <filename>.zip
F.2.1.2.2 Cause
Although you upload a metadata file along with a patch set ZIP file, sometimes the
patch attributes are not read from the metadata file. As a result, the operation fails
stating that it could not read the attributes.
F.2.1.2.3 Solution
To resolve this issue, manually enter all the patch attributes in the Patch Information
section of the Upload Patches to Software Library page. For example, patch number,
created on, description, platform, and language.
F.2.1.3 OPatch Update Job Fails When Duplicate Directories Are Found in the
Software Library
See the details below.
F.2.1.3.1 Issue
When you run an OPatch Update job, sometimes it might fail with the following error:
2011-11-28 10:31:19,127 RemoteJobWorker 20236 ERROR em.jobs startDownload.772-
OpatchUpdateLatest: java.lang.NullPointerException: Category, 'Oracle Software
Updates', has no child named, 'OPatch' at
oracle.sysman.emInternalSDK.core.patch.util. ComponentUtil.getComponentCategory
(ComponentUtil.java:854)
Even after applying the Cloud Control patch released in January 2012, you might see
the following error:
Category, 'Oracle Software Updates' already exists.
F.2.1.3.2 Cause
The error occurs when two Patch Components directories are found in the Software
Library. Particularly when you run two patch upload or download jobs, for example,
an OPatch patch download job and a regular patch download job, a race condition is
created, which in turn creates two directories with the name Patch Components. The
Software Library does not display any error while creating these duplicate directories,
but when you run the OPatch Update job, the job fails with a NullPointerException.
F.2.1.3.3 Solution
To resolve this issue, do one of the following:
If you see two Patch Components directories in the Software Library, then delete the one
that has fewer entries, and retry the failed patch upload or download job. To access the
Software Library, from the Enterprise menu, select Provisioning and Patching, and
click Software Library.
If you see only one Patch Components directory, but yet see the error that states that the
Oracle Software Updates already exists, then retry the failed patch upload or
download.
• Error Occurs While Testing the Proxy Server That Supports Only Digest
Authentication
F.2.2.1 Error Occurs While Testing the Proxy Server That Supports Only Digest
Authentication
See the details below.
F.2.2.1.1 Issue
On the Proxy Settings page, in the My Oracle Support and Proxy Connection tab,
when you provide the manual proxy configuration details, you might see an exception
error as shown in Figure F-1.
F.2.2.1.2 Cause
You might have provided the configuration details of a proxy server that supports
only the Digest authentication schema. By default, the proxy server is mapped only to
the Basic authentication schema, and currently there is no support for Digest
authentication schema.
F.2.2.1.3 Solution
To resolve this issue, reconfigure your proxy server to make it use the Basic
authentication schema.
Note:
• Cannot Create Log Files When You Set Privileged Credentials as Normal Oracle
Home Credentials
F.2.3.1 Cannot Create Log Files When You Set Privileged Credentials as Normal
Oracle Home Credentials
See the details below.
F.2.3.1.1 Issue
While creating a patch plan, if you choose to override the Oracle home preferred
credentials, and set privileged credentials as normal Oracle home credentials
inadvertently as shown in Figure F-2, then you will see an error stating that log files
cannot be created in the EMStagedPatches directory.
For example,
"Unable to create the file <RAC_HOME>/EMStagedPatches/PA_APPLY_PATCH_09_02_2011_14_27_13.log"
F.2.3.1.2 Cause
When a patch plan is deployed, the patch plan internally uses a deployment
procedure to orchestrate the deployment of the patches. While some of the steps in the
deployment procedure are run with normal Oracle home credentials, some of the steps
are run with privileged Oracle home credentials. However, when you set privileged
Oracle home credentials as normal Oracle home credentials, the deployment
procedure runs those steps as the root user instead of the Oracle home owner, and as a
result, it encounters an error.
F.2.3.1.3 Solution
To resolve this issue, return to the Create Plan Wizard, in the Deployment Options
page, in the Credentials tab, set normal credentials as normal Oracle home credentials.
F.2.4.1.1 Issue
When you view a patch plan, sometimes the Create Plan Wizard does not display the
expected information. Some pages or sections in the wizard might be blank.
F.2.4.1.2 Cause
The issue might be with the details in the Management Repository or with the Create
Plan Wizard.
F.2.4.1.3 Solution
Identify whether the issue is with the Management Repository or with the Create Plan
Wizard. To do so, try retrieving some details from the Management Repository using
the commands and URLs mentioned in this section.
• If the URLs return the correct information but the console does not display it, then
there might be some technical issue with the Create Plan Wizard. To resolve this
issue, contact Oracle Support.
• If the URLs return incorrect information, then there might be some issue with the
Management Repository. To resolve this issue, re-create the patch plan.
To retrieve some details from the Management Repository, do the following:
• Retrieve the GUID of the patch plan. To do so, run the following command:
select plan_guid from em_pc_plans where name='<name of the plan>';
For example,
select plan_guid from em_pc_plans where name='t8';
The result of the command looks like the following. Note down the GUID of the
plan.
PLAN_GUID
--------------------
96901DF943F9E3A4FF60B75FB0FAD62A
• Retrieve general information about a patch plan such as its name, type, status, and
plan privileges. To do so, use the following URL. This type of information is useful
for debugging the Plan Information step and the Review and Deploy step.
https://<hostname>:<port>/em/console/CSP/main/patch/plan?planId=<plan_guid>&client=emmos&cmd=get&subset=planInfo
Note:
Before retrieving any information about a patch plan using the preceding
URL, log in to the Cloud Control console, and from the Enterprise menu,
select Provisioning and Patching, and then click Patches & Updates.
• Retrieve information about the patches and the associated targets that are part of
the patch plan. To do so, use the following URL. This type of information is useful
for debugging the Patches step and the Review step.
https://<hostname>:<port>/em/console/CSP/main/patch/plan?planId=<plan_guid>&client=emmos&cmd=get&subset=patches
• Retrieve information about the deployment options selected in the patch plan. To
do so, use the following URL. This type of information is useful for debugging the
Deployment Options step and the Credentials step.
https://<hostname>:<port>/em/console/CSP/main/patch/plan?planId=<plan_guid>&client=emmos&cmd=get&subset=deploymentOptions
• Retrieve information about the preferred credentials set in the patch plan. To do so,
use the following URL. This type of information is useful for debugging the
Credentials step.
https://<hostname>:<port>/em/console/CSP/main/patch/plan?planId=<plan_guid>&client=emmos&cmd=get&subset=preferredCredentials
• Retrieve information about the target credentials set in the patch plan. To do so,
use the following URL. This type of information is useful for debugging the
Credentials step.
https://<hostname>:<port>/em/console/CSP/main/patch/plan?planId=<plan_guid>&client=emmos&cmd=get&subset=targetCredentials
• Retrieve information about the conflict-free patches in the patch plan. To do so, use
the following URL. This type of information is useful for debugging the Validation
step and the Review & Deploy step.
https://<hostname>:<port>/em/console/CSP/main/patch/plan?planId=<plan_guid>&client=emmos&cmd=get&subset=conflictFree
• Retrieve information about the suppressed patches in the patch plan. To do so, use
the following URL. This type of information is useful for debugging the Patches
step.
https://<hostname>:<port>/em/console/CSP/main/patch/plan?planId=<plan_guid>&client=emmos&cmd=get&subset=removedPatchList
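For example, assuming a hypothetical OMS host and port of myoms.example.com and 7802, and using the plan GUID retrieved in the earlier example, the general plan information can be fetched with a URL such as the following:
https://myoms.example.com:7802/em/console/CSP/main/patch/plan?planId=96901DF943F9E3A4FF60B75FB0FAD62A&client=emmos&cmd=get&subset=planInfo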
F.2.4.2.1 Issue
While creating a new patch plan or editing an existing patch plan, when you add a
new target, you might see the following error:
Wrong Platform. Expected: Oracle Solaris on SPARC (64-bit), found: null
F.2.4.2.2 Cause
The Management Repository might not have the platform information for that target.
By default, for every target, the inventory details are regularly collected from the
oraclehomeproperties.xml file that resides in the Oracle home of the target.
Sometimes, the inventory collection might not have completed or might have failed,
resulting in missing data in the Management Repository. Due to these reasons, you
might not be able to add those targets.
F.2.4.2.3 Solution
To resolve this issue, forcefully recollect the inventory details from the Oracle home of
the target.
To retrieve the Oracle home details, follow these steps:
1. In Cloud Control, from the Targets menu, select All Targets.
2. On the All Targets page, from the Refine Search pane on the left, click Target Type
to expand the menu.
3. From the menu, click Others, and then click Oracle Home.
4. All the targets of type Oracle Home are listed. You can search for the host name to
drill down to the Oracle home details you are looking for.
To retrieve the inventory details from the Oracle Home on the target host, run the
following command from the $EMDROOT/bin directory:
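For example, an invocation of the following form forces the Oracle Home configuration collection for the target. The collection name oracle_home_config shown here is an assumption; verify the exact collection name that applies to your Management Agent version.
emctl control agent runCollection <target_name>:oracle_home oracle_home_config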
F.2.5.1.1 Issue
After installing the Management Agent on Oracle Exadata targets, the patch
recommendations do not appear.
F.2.5.1.2 Cause
The patch recommendations do not appear because the Exadata plug-ins are not
deployed.
F.2.5.1.3 Solution
To resolve this issue, explicitly deploy the Exadata plug-ins on the Exadata targets. To do
so, follow these steps:
1. From the Setup menu, select Extensibility, and then select Plug-ins.
2. On the Plug-ins page, in the table, select the Oracle Exadata plug-in version you
want to deploy.
3. Click Deploy On, select Management Agent, and follow the steps in the wizard to
deploy the plug-in on the Exadata targets.
• Instances Not to Be Migrated Are Also Shown as Impacted Targets for Migration
• Cluster ASM and Its Instances Do Not Appear as Impacted Targets While Patching
a Clusterware Target
• Error #1009 Appears in the Create Plan Wizard While Creating or Editing a Patch
Plan
F.2.6.1.1 Issue
The patch plan fails stating it is a nondeployable plan.
F.2.6.1.2 Cause
You can add a patch to a target in a patch plan only if the patch has the same release
and platform as the target to which it is being added. You will receive a warning if the
product for the patch being added is different from the product associated with the
target to which the patch is being added. The warning does not prevent you from
adding the patch to the patch plan; however, when you try to deploy the plan, it
might fail.
F.2.6.1.3 Solution
To make a nondeployable patch plan deployable, divide the patch plan into smaller
deployable plans that contain only homogeneous patches and targets.
F.2.6.2 Instances Not to Be Migrated Are Also Shown as Impacted Targets for
Migration
See the details below.
F.2.6.2.1 Issue
When you deploy a patch plan in out-of-place patching mode, sometimes even the
instances that are not selected for migration are identified as impacted targets as
shown in Figure F-3.
F.2.6.2.2 Cause
By default, the patch plan calculates the impacted targets based on only one mode,
which is the in-place patching mode. Therefore, although you have selected the
out-of-place patching mode, the patch plan ignores it, considers only the in-place
patching mode as the selected option, and displays all the targets as impacted targets
for migration.
F.2.6.2.3 Solution
To resolve this issue, ignore the targets you have not selected for migration. They will
not be shut down or migrated in any case.
F.2.6.3 Cluster ASM and Its Instances Do Not Appear as Impacted Targets While
Patching a Clusterware Target
See the details below.
F.2.6.3.1 Issue
While creating a patch plan for patching a clusterware target, on the Deployment
Options page, the What to Patch section does not display the cluster ASM and its
instances as affected targets. They do not appear in the Impacted Targets section,
either. Also, after the patch plan is deployed in out-of-place mode, the cluster ASM and
its instances show a metric collection error.
F.2.6.3.2 Cause
This issue might occur if the clusterware target name in Cloud Control and the
cluster name in the mgmt$target_properties table do not match.
F.2.6.3.3 Solution
To resolve this issue, run the following query to verify the target property
ClusterName of the clusterware target:
select property_value from mgmt$target_properties where target_name='<CRS target name>' and property_name='ClusterName';
If the returned value is different from the clusterware target name in Cloud Control,
then delete the clusterware target and other associated targets, and rediscover them.
While rediscovering them, ensure that the clusterware target name matches the name
returned by the preceding query.
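For example, for a hypothetical clusterware target named crs_cluster01, the verification query is:
select property_value from mgmt$target_properties where target_name='crs_cluster01' and property_name='ClusterName';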
F.2.6.4.1 Issue
When you create a patch plan to patch multiple Oracle homes in out-of-place patching
mode, and when you click Prepare in the Create Plan Wizard to prepare the patch
plan before actually deploying it, sometimes the preparation operation fails with the
message Preparation Failed.
F.2.6.4.2 Cause
The patch plan might have successfully cloned and patched some of the selected
Oracle homes, but might have failed on a few Oracle homes. The overall status of the
patch plan is based on the patching operation being successful on all the Oracle
homes. Even if the patching operation succeeds on most of the Oracle homes and fails
only on a few Oracle homes, the overall status is shown as if the patch plan has failed
in one of the steps.
F.2.6.4.3 Solution
To resolve this issue, fix the errors on failed Oracle homes. Then, go to the procedure
instance page and retry the failed steps.
F.2.6.5 Error #1009 Appears in the Create Plan Wizard While Creating or Editing a
Patch Plan
See the details below.
F.2.6.5.1 Issue
While creating a new patch plan or editing an existing patch plan, you might see the
following error in the Create Plan Wizard:
Error #1009
F.2.6.5.2 Cause
This error occurs while accessing the Management Repository to extract details
about the patch plan, the targets, or the operation being committed. Usually, a
SQLException, a NullPointerException, or an unhandled exception causes this error.
F.2.6.5.3 Solution
To resolve this issue, review the following file, make a note of the exact error or
exception logged, and communicate it to Oracle Support.
$MIDDLEWARE_HOME/gc_inst/em/EMGC_OMS1/sysman/log/emoms.log
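For example, the following command (a minimal sketch, assuming the default OMS instance name EMGC_OMS1 used in the path above) extracts the logged exceptions along with some surrounding context so that you can include them in your communication with Oracle Support:
grep -E -B 2 -A 10 'SQLException|NullPointerException' $MIDDLEWARE_HOME/gc_inst/em/EMGC_OMS1/sysman/log/emoms.log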
F.2.6.6.1 Issue
After you successfully analyze the patch plan, when you navigate to the Review page of
the Create Plan Wizard, you might find the Deploy button disabled. Also, the table on
the Review page appears empty (it does not list any patches). As a result, you might
not be able to deploy the patch plan.
F.2.6.6.2 Cause
This error occurs if the patches in the patch plan have already been applied to the
target Oracle home. In such a case, the Validation page confirms that the patches have
already been applied and have therefore been skipped, and on the Review page, the
Deploy button is disabled.
F.2.6.6.3 Solution
The patches have already been applied, so you do not have to apply them again. If
required, you can manually roll back the patch from the target Oracle home and try
applying the patch again.
F.2.6.7 Patch Plan Fails When Patch Plan Name Exceeds 64 Bytes
See the details below.
F.2.6.7.1 Issue
On non-English locales, patch plans with long plan names fail while analyzing,
preparing, or deploying, or while switching back. No error is displayed; instead the
patch plan immediately reflects the Failed state, and logs an exception in the
<INSTANCE_HOME>/sysman/log/emoms.log file.
F.2.6.7.2 Cause
The error occurs if the patch plan name is too long, that is, if it exceeds 64 bytes. The
provisioning archive framework has a limit of 64 bytes for instance names, and
therefore, it can accept only plan names that are shorter than 64 bytes. Typically, the
instance name is formed from the patch plan name, the plan operation, and the time
stamp (PlanName_PlanOperation_TimeStamp). If the entire instance name exceeds 64
bytes, then you are likely to see this error.
F.2.6.7.3 Solution
To resolve this issue, do one of the following:
• If the patch plan failed to analyze, prepare, or deploy, then edit the plan name to
reduce its length, and retry the patching operation.
• If the patch plan was deployed successfully, then the patch plan is locked, and if
switchback fails with this error, you cannot edit the plan name in the wizard.
Instead, run the following SQL commands to update the plan name in the
Management Repository directly:
update em_pc_plans set name = 'New shorter name' where name =
'Older longer name';
commit;
F.2.6.8.1 Issue
The out-of-place patching fails to unlock the cloned Oracle home in the Prepare phase
of the patch plan, thus causing the patch plan to fail on the cloned Oracle home. The
step Run clone.pl on Clone Oracle Home fails.
F.2.6.8.2 Cause
This issue occurs if the new Oracle home is different from the Oracle home mentioned
in the files <gi_home>/crs/utl/crsconfig_dirs and crsconfig_fileperms
that are present in the Grid Infrastructure home. For 11.2.0.3 Exadata Clusterware, the
unlock framework works by operating on these files.
F.2.6.8.3 Solution
To resolve this issue, you can do one of the following:
• Create a new patch plan for the Exadata Cluster, select the required patch, select
In-Place in the How to Patch section, and deploy the patch plan.
• Manually apply the patch on the Clusterware Oracle homes of all the nodes of the
cluster. Then, clean up the partially cloned Oracle homes on all the nodes, and retry
the Prepare operation from the patch plan.
• Patch Plan Remains in Analysis State Even After the Deployment Procedure Ends
• Patch Plan Analysis Fails When the Host's Node Name Property Is Missing
• Raising Service Requests When You Are Unable to Resolve Analysis Failure Issues
F.2.7.1 Patch Plan Remains in Analysis State Even After the Deployment Procedure
Ends
See the details below.
F.2.7.1.1 Issue
When you analyze a patch plan, sometimes the patch plan shows that analysis is in
progress even after the underlying deployment procedure or the job ended
successfully.
F.2.7.1.2 Cause
This issue can be caused due to one of the following reasons:
• Delayed or no notification from the job system about the completion of the
deployment procedure. Typically, after the deployment procedure ends, the job
system notifies the deployment procedure. Sometimes, there might be a delay in
such notification or there might be no notification at all from the job system, and
that can cause the status of the patch plan to show that it is always in the analysis
state.
• Delay in changing the status of the patch plan. Typically, after the job system
notifies the deployment procedure about its completion, the job system submits a
new job to change the status of the patch plan. Sometimes, depending on the load,
the new job might remain in the execution queue for a long time, and that can cause
the status of the patch plan to show that it is always in the analysis state.
• Failure of the job that changes the status of the patch plan. Sometimes, after the
new job is submitted for changing the status of the patch plan, the new job might
fail if there are any Management Repository update issues or system-related issues.
• Time zone issues with the Management Repository. If the Management Repository
is configured on an Oracle RAC database, and if each instance of the Oracle RAC is
running in a different time zone, then a query run to obtain the current system time
can return incorrect time details depending on which instance serviced the request.
Due to the incorrect time details, the job that changes the status of the patch plan
might not run at all. This can cause the status of the patch plan to show that it is
always in the analysis state.
F.2.7.1.3 Solution
For the time zone-related issue, first correct the time zone settings on the Oracle RAC
nodes, and then restart them. For all other issues, collect the logs and contact Oracle
Support.
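For example, a quick way to compare the clock and time zone settings across the Oracle RAC nodes is to run a command such as the following from a host that has SSH access to them (node1 and node2 are hypothetical host names):
for node in node1 node2; do ssh $node 'hostname; date'; done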
F.2.7.2 Patch Plan Analysis Fails When the Host's Node Name Property Is Missing
See the details below.
F.2.7.2.1 Issue
When you validate a patch plan created for patching Oracle Clusterware, the
validation fails as shown in Figure F-4, stating that the node name is missing for the
target <target_name> of the type host. Also, the solution mentioned on the Validation
page is incorrect.
Figure F-4 Analysis Fails Stating Node Name Property Is Missing for a Target
F.2.7.2.2 Cause
The error occurs because the Create Plan Wizard does not sync up with the actual
query or job you are running. Also, the property NodeName is a dynamic property of
the HAS target that is not marked as a critical property, and therefore, this property
can sometimes be missing from the Management Repository. Ideally, the message
should state that the node name property is missing for the HAS target.
F.2.7.2.3 Solution
To resolve this issue, run the following command to reload the dynamic properties for
the HAS target from each node of the Oracle Clusterware.
emctl reload dynamicproperties -upload_timout 600
<target_name>:has
For example,
emctl reload dynamicproperties -upload_timout 600 myhastarget1:has
F.2.7.3.1 Issue
After you analyze a patch plan, the text Analysis In Progress on the Validation page
appears smaller than normal, and the here link for progress details does not work as
shown in Figure F-5.
F.2.7.3.2 Cause
You see this error because of a technical issue in rendering this link.
F.2.7.3.3 Solution
To resolve this issue, exit the Create Plan Wizard. On the Patches & Updates page, in
the Plans region, click the Analysis in Progress status for the patch plan where you
encountered this issue.
F.2.7.4 Raising Service Requests When You Are Unable to Resolve Analysis Failure
Issues
As described in the preceding subsections, there can be several causes for analysis
failures, including My Oracle Support connectivity issues, ARU issues, or issues while
accessing the Management Repository to extract any details related to the patch plan
or targets or the operation being committed. If you encounter any of these issues,
follow the solution proposed in the preceding sections, and if you are still unable to
resolve the issue, follow these steps, and raise a service request or a bug with the
information you collect from the steps.
• (Online Mode Only) Verify if the My Oracle Support Web site being used is
currently available.
• (Online Mode Only) If the plan analysis fails before the target analysis is
submitted, then verify that the patch analysis is working as expected by accessing
the following URL. Replace <em_url> with the correct EM URL, and <plan_name>
with the actual patch plan name.
<em_url>/em/console/CSP/main/patch/plan?cmd=getAnalysisXML&type=att&planName=<plan_name>
Verify that the returned XML includes conflict check request and response XMLs for
each Oracle home included in the patch plan.
• Open the following file and check the exact error or exception being logged and
communicate it to Oracle Support.
$MIDDLEWARE_HOME/gc_inst/em/EMGC_OMS1/sysman/log/emoms.log
• Out-of-Place Patching Errors Out If Patch Designers and Patch Operators Do Not
Have the Required Privileges
F.2.8.1 Out-of-Place Patching Errors Out If Patch Designers and Patch Operators Do
Not Have the Required Privileges
See the details below.
F.2.8.1.1 Issue
When you try out-of-place patching, the patch plan fails with the following error while
refreshing the Oracle home configuration:
12:58:38 [ERROR] Command failed with error: Can't deploy oracle.sysman.oh on https://<hostname>:<port>/emd/main/
F.2.8.1.2 Cause
The error occurs because you might not have the following roles as a Patch Designer or
a Patch Operator:
F.2.8.1.3 Solution
Grant these roles explicitly while creating the user accounts. Alternatively, grant the
provisioning roles EM_PROVISIONING_OPERATOR and
EM_PROVISIONING_DESIGNER, which already include these roles. After granting
the privileges, retry the failed deployment procedure step to complete the out-of-place
patching preparation.
After visiting some other page, when I come back to the "Setup Groups" page, I do not
see the links to the jobs submitted. How can I get them back?
Click Show in the Details column.
Package Information Job fails with "ERROR: No Package repository was found"
or "Unknown Host" error. How do I fix it?
The package repository you have selected is not valid. Check whether the metadata
files have been created by running the yum-arch and createrepo commands.
Connectivity of the RPM repository from the OMS might also be a cause.
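For example, assuming the RPM repository is staged under the hypothetical directory /var/www/html/yum/EnterpriseLinux, you can generate the required metadata as follows:
yum-arch /var/www/html/yum/EnterpriseLinux
createrepo /var/www/html/yum/EnterpriseLinux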
Even after the deployment procedure finished its execution successfully, the
Compliance report still shows my group as non-compliant. Why?
Compliance Collection is a job that runs once every 24 hours. Wait for the next cycle of
the job for the Compliance report to update itself. Alternatively, you can go to the Jobs
tab and edit the job to change its schedule.
I see a UI error message saying "Package list is too long". How do I fix it?
Deselect some of the selected packages. The error message tells you which packages to
deselect.
The bare metal machine does not come up because it cannot locate the boot file.
Verify the DHCP settings (/etc/dhcpd.conf) and the TFTP settings for the target
machine. Check whether the required services (dhcpd, xinetd, portmap) are running.
Make the necessary setting changes if required, or start the required services if they
are down.
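For example, on a typical Linux boot server you can check the required services as root with commands such as the following, and then start any service that is reported as stopped (for example, service dhcpd start):
service dhcpd status
service xinetd status
service portmap status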
Even though the environment is correctly set up, the bare metal box does not boot
over the network,
OR
the DHCP server does not get a DHCPDISCOVER message for the MAC address of
the bare metal machine.
Edit the DHCP configuration to include the IP address of the subnet where the bare
metal machine is being booted up.
Agent installation fails after the operating system has been provisioned on the bare
metal box,
OR
no host name is assigned to the bare metal box after provisioning the operating
system.
This might happen if the get-lease-hostnames entry in the dhcpd.conf file is set
to true. Edit the dhcpd.conf file to set the get-lease-hostnames entry to false.
Also, ensure that the length of the host name is compatible with the length of the
operating system host name.
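For example, after the change, the relevant entry in the dhcpd.conf file should look similar to the following:
get-lease-hostnames false;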
Bare metal machine hangs after initial boot up (tftp error/kernel error).
This may happen if the tftp service is not running. Enable the tftp service. Go to
the /etc/xinetd.d/tftp file and change the disable flag to no (disable=no).
Also verify the dhcp settings.
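For example, after the change, the relevant line in the /etc/xinetd.d/tftp file should read:
disable = no
Then restart xinetd for the change to take effect:
service xinetd restart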
Kernel panic occurs when the bare metal machine boots up.
Verify the DHCP settings and TFTP settings for the target machine and make the
necessary changes as required. In rare cases, the initrd and vmlinuz images that were
copied may be corrupted. Copying them again from the RPM repository fixes the
problem.
The bare metal machine hangs after loading the initial kernel image.
This may happen if the network is half duplex. A half duplex network is not directly
supported, but the following step fixes the problem:
The bare metal machine cannot locate the kickstart file (the Red Hat screen appears for
manually entering values such as 'language', 'keyboard', and so on).
This happens if STAGE_TOP_LEVEL_DIRECTORY is not mountable or not accessible.
Make sure the stage top level directory is network accessible to the target machine.
Though very rare, this might also happen because of a problem resolving the stage
server host name. Enter the IP address of the stage or NAS server instead of the host
name on which they are located, and try the provisioning operation again.
The bare metal machine does not proceed with the silent installation (the Red Hat
screen appears for manually entering the network details).
Verify that DNS is configured for the stage server host name, and that DHCP is
configured to deliver correct DNS information to the target machine. If the problem
persists, specify the IP address of the stage or NAS server instead of hostname, and try
the provisioning operation again.
Bare metal box fails to boot with "reverse name lookup failed" error.
Verify that the DNS has the entry for the IP address and the host name.
Fetching properties from the reference machine throws the error: "Credentials
specified does not have root access".
Verify that the credentials specified for the reference machine have sudo access.
Can my boot server reside on a subnet other than the one on which the bare
metal boxes will be added?
Yes, but it is a recommended best practice to have the boot server in the same subnet
on which the bare metal boxes will be added. If the network is subdivided into multiple
virtual networks, and there is a separate DHCP/PXE boot server in each network, the
Assignment must specify the boot server on the same network as the designated
hardware server.
If you want to use a boot server in a remote subnet, then do one of the following:
• Configure the router to forward DHCP traffic to a DHCP server on the remote
subnet. This traffic is broadcast traffic, and routers do not normally forward
broadcast traffic unless configured to do so. A network router can be hardware-based,
such as a Cisco router, or software-based, such as Microsoft's Routing and Remote
Access Services (RRAS). In either case, you need to configure the router to relay
DHCP traffic to the designated DHCP servers (see the example following this list).
• If routers cannot be used for DHCP/BOOTP relay, set up a DHCP/BOOTP relay
agent on one machine in each subnet. The DHCP/BOOTP relay agent relays DHCP
and BOOTP message traffic between the DHCP-enabled clients on the local network
and a remote DHCP server located on another physical network by using the IP
address of the remote DHCP server.
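For example, on a Cisco IOS based router, DHCP relay is typically enabled by pointing the interface that faces the bare metal subnet at the remote DHCP server; the interface name and server address below are hypothetical:
interface FastEthernet0/1
 ip helper-address 192.0.2.10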
Can I use the Agent rpm for installing the Agent on the Stage and Boot Server?
Yes, but only if the operating system of the Stage or Boot Server machine is Red Hat
Linux 4.0, 3.1, or 3.0, or Oracle Linux 4.0 or later. Refer to the section "Using agent rpm
for Oracle Management Agent Installation" on the following page for more information:
http://www.oracle.com/technology/software/products/oem/htdocs/provisioning_agent.html
Can the yum repository be accessed by any protocol other than HTTP?
Though the RPM repository can also be exposed through file:// or ftp://, the
recommended method is to expose it through http://, which is faster and more secure.
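For example, a minimal way to expose an existing repository directory over HTTP on the stage server is to link it under the web server document root and start the web server; the paths below are hypothetical, and the web server must be configured to follow symbolic links:
ln -s /u01/rpm_repository /var/www/html/yum
service httpd start
The repository is then reachable at a URL such as http://<stage_server>/yum.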
What is the significance of the Status of a directive? How can one change it?
See the following table for the possible Status values and what they signify.
Status Description
Incomplete This status signifies that some step was not completed during the directive
creation, for example, uploading the actual script for the directive, or a user saved the
directive while creating it and some steps still need to be performed to complete the
directive creation.
Ready This status signifies that the directive creation was successful and the directive
is now ready to be used along with any component/image.
For creating the Linux OS component, does the Reference Machine need to have
a Management Agent running on it?
Yes. The Reference Machine has to be one of the managed targets of Enterprise
Manager.
What is the significance of the Status of a component? How can one change it?
The Status of a component is similar to that of a directive. Refer to "What is the
significance of the Status of a directive? How can one change it?".
1. In Cloud Control, from the Enterprise menu, select Configuration, and then, click
Refresh Host Configuration.
2. On the Refresh Host Configuration page, from the Available Hosts pane, select the
hosts that the Deployment Procedure will use, and move them to the Selected
Hosts pane.
1. In Cloud Control, from the Targets menu, select All Targets.
2. On the All Targets page, from the Refine Search section, click Target Type to
expand the menu, and from the menu click Others, and then click Oracle Home.
The right hand side of the page is refreshed, and only the Oracle Home targets
appear.
3. Click the Oracle Home target whose configuration you want to refresh.
4. On the <target_name> home page, from the Oracle Home menu, select
Configuration, and then click Last Collected.
The following example describes the steps to refresh the Oracle home configuration
for the target 11107CRSHome_1_slc00eii:
• Advanced Options
EMSTATE/sysman/log/gcagent.log
EMSTATE/sysman/log/gcagent.trc
Note:
Oracle recommends that you archive the old logs and, after resetting the log level,
have a fresh run to capture fresh logs.
1. Open the following file available in the Oracle home of the OMS:
$ORACLE_HOME/sysman/config/emomslogging.properties
EMSTATE/sysman/config/emd.properties
Index-1
change activity plans (continued) comparison template (continued)
printing, 50-14 Member Settings tab, 45-18, 45-19
roles and privileges, 50-1 Rules for Matching tab, 45-20
summary, 50-16 Template Settings tab, 45-18
task definitions, 50-3 viewing, 45-20
task groups, 50-5 comparison templates
tasks, 50-5, 50-17 creating templates, 45-18
terminology, 50-2 editing templates, 45-18
using for patching, 50-22 property settings, 45-19
viewing tasks, 50-18 rules for matching, 45-20
change management, 48-1 template settings, 45-19
change plan comparison wizard, 45-29
creating change plan, 48-16 comparisons, 45-28
submitting change plan, 48-20 compliance
change plans accessing, 46-4
overview, 48-15 compliance frameworks, 46-28
changing owner of change activity plans, 50-14 compliance standard rule folders, 46-49
clearing violations, 46-99 compliance standard rules, 46-50
clone compliance standards, 46-36
cloning a running Oracle database replay client, configuring, 46-27
12-2 dashboard, 46-4, 46-9
cloning a running Oracle RAC instance, 9-3 evaluating, 46-7
cloning database evaluation
existing backup, 14-21 operations on real-time observations, 46-25
RMAN backup, 14-17 results, 46-10, 46-11
staging areas, 14-19 importance, 46-19
cloning database methods, 14-17 investigating evaluation errors, 46-18
cloning pluggable databases, 17-17 library, 46-4
cluster verification utility, 4-20 managing, 46-27
Coherence Node Provisiong overview, 46-1
Deploying Coherence Nodes and Clusters privileges needed to use, 46-4
Troubleshooting, 32-16 real-time
Coherence Node Provisioning monitoring facets, 46-79
Deploying Coherence Nodes and Clusters
observations, 46-4
Creating a Coherence Component, 32-3
reports, 46-18
Deployment Procedure, 32-4
results, 46-4
Prerequisites, 32-2
roles needed to use, 46-4
Getting Started, 32-1
score, 46-19
Supported Releases, 32-2
statistics, accessing, 46-8
collected configurations for targets, 45-1
summary information, 46-10
columnar
terms used in, 46-2
parser parameters, 45-65
violations
parsers, 45-64
accessing, 46-11
commands
examples of viewing, 46-14
hosts, 40-11
managing, 46-11
comparing data, 48-23
comparison manual rules, 46-11
rules, 1-4 of a target, 46-14
comparison results suppressing, 46-11
standard target, 45-38 unsuppressing, 46-11
synchronizing configuration files, 45-38 Compliance Framework privilege, 46-6
comparison template compliance frameworks
creating or editing, 45-18 about, 46-28
deleting, 45-20 accessing, 46-28
exporting, 45-21 adding compliance standard to, 46-30
importing, 45-21 advantages of using, 46-28
managing, 45-20, 45-46 compliance of database targets, 46-36
Index-2
compliance frameworks (continued) compliance standards (continued)
compliance score, 46-21 about, 46-36
definition, 46-2 accessing, 46-36
editing importance, 46-30 adding to another compliance standard, 46-39
errors, 46-35 adding to compliance framework, 46-30
evaluation results, 46-34, 46-35 advantages of using, 46-36
lifecycle status, 46-30 associating with targets, 46-45
operations on definition, 46-2
browsing, 46-34 errors, 46-45
creating, 46-30 evaluation results, 46-44
creating like, 46-31 investigating violations, 46-13
deleting, 46-33 operations on
editing, 46-32 browsing, 46-44
exporting, 46-33 creating, 46-39
importing, 46-33 creating like, 46-42
searching, 46-34 customizing, 46-42
provided by Oracle, 46-28 deleting, 46-42
reasons for using, 46-28 editing, 46-42
user-defined, 46-28 exporting, 46-43
violations details, 46-14 importing, 46-43
compliance roles searching, 46-44
EM_COMPLIANCE_DESIGNER, 46-5 security metrics, enabling, 46-48
EM_COMPLIANCE_OFFICER, 46-5 setting up for auditing use, 46-36
compliance score violations details, 46-14
compliance framework, 46-21 compliance violations, investigating, 46-13
compliance standard, 46-21 configuration browser
compliance standard rule, 46-20 inventory and usage details, 45-11
definition, 46-3 saved configurations, 45-10
parent node, 46-22 viewing configuration data, 45-8
real-time monitoring rules, 46-20 Configuration Browser, viewing configurations, 45-7
compliance standard rule folders configuration changes, 45-12
about, 46-49 configuration collections, 45-41
creating, 46-49 configuration comparison
definition, 46-3 ignore rule, 45-23
managing in compliance standard, 46-49 ignore rule example, 45-27
compliance standard rules matching rule, 45-22
about, 46-50 matching rule example, 45-25
agent-side rules, 46-50 rule examples, 45-25
definition, 46-2 rule language, 45-23
operations on rules, 45-22
browsing, 46-77 value constraint rule, 45-22
creating agent-side rules, 46-66, 46-91 wizard, 45-29
creating like, 46-74 configuration data collections, 45-51
creating manual rules, 46-68, 46-91 configuration data, extending collections, 45-51
creating real-time monitoring rules, 46-60 configuration extension
creating repository rules, 46-53 blueprints, 45-54
creating WEbLogic Server Signature rules, creating, 45-42
46-55 credentials, 45-45
deleting, 46-75 database roles, 45-45
editing, 46-75 deleting, 45-48
exporting, 46-76 deploying, 45-50
importing, 46-76 editing, 45-42
searching, 46-77 editing deployment, 45-50
repository rules, 46-51 enabling facet synchronization, 45-47
types, 46-50 encoding, 45-44
compliance standards exporting, 45-47
Index-3
configuration extension (continued) container databases (CDBs) (continued)
Files ((amp)) Commands tab, 45-43 procedure, 16-3, 16-8, 16-12
importing, 45-48 controlling appearance of information on a graph,
post-parsing rule, 45-75, 45-77, 45-79 45-88
privileges, 45-48 CPU statistics
roles, 45-48 hosts monitoring, 39-1
rules, 45-46 Create Compliance Entity privilege, 46-5
sample non-XML parsed file, 45-76 creating
sample parsed SQL query, 45-78 agent-side compliance standard rules, 46-66,
sample XML parsed file, 45-75 46-91
undeploying, 45-50 change activity plans
versioning, 45-48 like another plan, 50-12
viewing collection data, 45-51 task definitions, 50-8
viewing specification details, 45-47 task groups, 50-11
XML parsed example (default), 45-58 comparison template, 45-18
XML parsed example (generic), 45-59 compliance frameworks, 46-30
XML parsed example (modified), 45-59 compliance manual rules, 46-91
XPath, 45-46 compliance standard example, 46-96
configuration extension, creating, 46-91 compliance standard rule folders, 46-49
configuration extensions compliance standards, 46-39
commands tab, 45-43 compliance standards, considerations, 46-48
creating, 45-42 configuration extension, 46-91
deployment, 45-49 custom target type, 45-42
enabling facet synchronization, 45-47 manual compliance standard rules, 46-68
privileges, 45-48 new relationships, 45-80
setting up credentials, 45-45 real-time monitoring compliance standard rules,
setting up rules, 45-46 46-60
SQL tab, 45-45 real-time monitoring facet folders, 46-84
versioning, 45-48 real-time monitoring facets, 46-83
Configuration Extensions privilege, 46-6 repository compliance standard rules, 46-53
configuration history WebLogic Server Signature compliance standard
accessing, 45-13 rules, 46-55
annotating, 45-15 creating change plans using external clients, 48-19
creating notification list, 45-15 creating database provisioning entities, 4-16
scheduling, 45-15 creating database templates, 4-13
creating databases
searching history, 45-14
prerequisites, 16-2
configuration history, job activity, 45-17
procedure, 16-3
configuration management, 45-1
creating Disk Layout component, 35-18
configuration searches
managing configuration searches, 45-3 creating installation media, 4-12
creating like
setting up, 45-5
compliance frameworks, 46-31
Configuration Topology Viewer, 45-81
compliance standard rules, 46-74
configuration topology, viewing, 45-82
compliance standards, 46-42
configurations
hardware and software, 45-1 real-time monitoring facets, 46-85
creating Oracle Database
history, 1-4
prerequisites, 16-2
searching, 1-4, 45-3
creating Oracle RAC One Node Database
viewing, 45-7
prerequisites, 16-11
configure Grid Infrastructure, 7-8, 7-21
procedure, 16-12
configuring audit status
creating Oracle Real Application Clusters Database
real-time monitoring rules, 46-60
prerequisites, 16-7
configuring hosts, 40-1
procedure, 16-8
Connect((colon))Direct parser, 45-59
creating provisioning profiles, 4-8
consumption summary, storage, 39-4
creating relationships to a target, 45-87
container databases (CDBs)
creating repository rule based on custom configuration
prerequisites for creating, 16-2, 16-7, 16-11 collections
Index-4
creating repository rule based on custom configuration collections (continued)
database provisioning (continued)
examples, 46-88 Oracle Databases with Oracle ASM, 5-9
creating users Oracle Grid Infrastructure and Oracle Database
designers, 2-8 software, 6-8
operators, 2-9 Oracle Grid Infrastructure and Oracle Databases
credentials with Oracle ASM, 6-2
host, 37-2 Oracle Grid Infrastructure for Oracle Real
setting up monitoring, 37-3 Application Clusters Databases, 7-1
credentials, for configuration extensions, 45-45 Oracle RAC database with file system on existing
Cron Access parser, 45-64 cluster, 7-13
Cron Directory parser, 45-64 Oracle RAC database with file system on new
CSV parser, 45-64 cluster, 7-19
Custom CFG parser, 45-67 Oracle Real Application Clusters One Node
custom configuration databases, 8-1
custom target type, 45-42 prerequisites for designers, 4-7
customization prerequisites for operators, 4-8
changing error handling modes, 52-14 provisioning and creating Oracle Databases, 5-4
customization types, 52-1 provisioning Oracle RAC, 9-1
directive workflow, 52-19 setup, 4-5
overview, 52-1 supported targets, 4-2
setting up e-mail notifications, 52-15 usecases, 4-3
customizing database provisioning overview, 4-1
hosts environment, 38-1 Database Query
customizing topology views, 45-86 parser, 45-59
CVU, 4-20 parser parameters, 45-61
Database Query Paired Colulmn
parser, 45-60
D parser parameters, 45-61
data comparison database templates
overview, 48-21 uploading to software library, 4-15
requirements, 48-21 databases
data discovery, 47-1 storage, 39-4
data discovery job, 47-5 Db2 parser, 45-60
deactivating
data discovery results, 47-6
change activity plans, 50-13
Data Guard Rolling Upgrade, 18-24
default system run level
Data Redaction, 47-2
in hosts, 40-6
database credentials, 45-45
delete Oracle RAC
database host readiness
prerequisites, 11-2
configuring SSH, B-2
delete Oracle RAC nodes, 11-5
environment settings
deleting
kernel requirements, B-3
change activity plans, 50-13
memory requirements, B-4
comparison template, 45-20
network and IP requirements, B-5
compliance frameworks, 46-33
node time requirements, B-4
compliance standard rules, 46-75
package requirements, B-4
compliance standards, 46-42
installation directories and Oracle inventory, B-7
configuration extension, 45-48
PDP setup, B-2
custom topology views, 45-86
setting user accounts, B-1
real-time monitoring facets, 46-85
shell limits, B-2
relationships from a target, 45-88
storage requirements, B-6
deleting Oracle RAC
database provisioning
deleting core components, 11-2
administrator privileges, 4-6
deleting entire Oracle RAC, 11-2
deployment procedures, 4-2
deleting pluggable databases, 17-30
getting started, 5-1
Dell PowerEdge Linux hosts monitoring, 40-20
host requirements, 4-6
dependency analysis, 45-85
Oracle Database software, 5-15
deployable patch plan, 41-5
Index-5
Deploying Coherence Nodes and Clusters editing (continued)
Deployment Procedure change activity plans, 50-13
Adding a Coherence, 32-7 comparison template, 45-18
Environment Variables, 32-9 compliance frameworks, 46-32
Sample Scripts, 32-10 compliance standard rules, 46-75
deploying configuration extension, 45-50 compliance standards, 46-42
deploying SOA composites, 33-9 configuration extension deployment, 45-50
Deploying, Undeploying or Redeploying Java EE host configuration, 40-21
Applications local host group, 40-11
Creating a Java EE Application Component, 31-3 local user, 40-9
Deploying a Java EE Application, 31-5 real-time monitoring facet folders, 46-84
Getting Started, 31-1 real-time monitoring facets, 46-83
Java EE Applications Deployment Procedure, EM_ALL_OPERATOR
31-5 EM_ALL_VIEWER, 51-4
Prerequisites, 31-3 EM_PATCH_OPERATOR, 51-4
Redeploying a Java EE Application, 31-9 EM_CAP_ADMINISTRATOR role, 50-1
Undeploying a Java EE Application, 31-11 EM_CAP_USER role, 50-1
Deploying, Undeploying, or Redeploying Java EE em_catalog.zip file, 41-23
Applications, 31-2 EM_COMPLIANCE_DESIGNER role, 45-49, 46-5
deployment procedures
EM_COMPLIANCE_OFFICER role, 46-5
editing the permissions, 51-18
EM_LINUX_PATCHING_ADMIN role, 42-8
phases and steps, 51-7
EM_PLUGIN_AGENT_ADMIN role, 45-49
target list, 51-6
EM_PLUGIN_OMS_ADMIN role, 45-48
tracking the status, 51-18
EMCLI
User, roles and privileges, 51-3 adding ATS service test, A-54
variables, 51-6 advantages, A-1
viewing, editing, and deleting, 51-17 creating a new generic component by associating
deployments a zip file, A-49
Management Repository, 45-1 creating properties file, A-13
Designer and Operator Roles, 4-1 deploying/undeploying Java EE applications,
determining configuration health compliance score, A-56
45-84 launching a procedure with an existing saved
dhcp server procedure, A-19
setting up, 35-8 limitations, A-57
diagnosing migrate and remove a software library storage
compliance violations, 46-13 location, A-53
Directory
overview, A-1
parser, 45-60
patching
parser parameters, 45-62 creating a new properties file, A-27
discovering hosts
using an existing properties file, A-31
automatically, 3-1
patching verbs, A-5
manually, 3-1
patching WebLogic Server Target, A-44
disks
prerequisites, A-2
statistics, hosts monitoring, 39-2
provisioning Oacle WebLogic Server
storage, 39-5
using provisioning profile, A-34
provisioning Oracle Database software, A-33
E provisioning Oracle WebLogic Server
scaling up or scaling out, A-36
E-Business Suite
provisioning pluggable databases
parser, 45-60
creating new pluggable databases, A-20
parser parameters, 45-62 migrating databases as pluggable databases,
e-mail notifications
A-23
configuring outgoing mail server, 52-15
using snapshot profiles, A-22
entering administrator e-mail and password,
provisioning user defined deployment procedure
52-17 adding steps/phases, A-43
overview, 2-11
prerequisites, A-43
editing
running the procedure, A-44
Index-6
EMCLI (continued) Execute Host Command (continued)
provisioning verbs, A-12 single host, 40-18
software library verbs, A-9 execution history
unplugging pluggable databases, A-24 host commands, 40-19
using an existing properties file, A-17 execution results
verbs, A-2 host command, 40-19
EMCLI verbs exporting
create_pluggable_database, A-20, A-22 change activity plans, 50-14
migrate_noncdb_to_pdb, A-24 comparison template, 45-21
unplug_pluggable_database, A-25 compliance frameworks, 46-33
upload_patches, 41-26 compliance standard rules, 46-76
emctl partool utility compliance standards, 46-43
emctl partool options, C-1 configuration extension, 45-47
exporting deployment procedure real-time monitoring facets, 46-86
creating PAR file, C-4 extend Oracle RAC
retrieving GUID, C-3 prerequisites, 10-2
exporting deployment procedures, C-3 extending Oracle RAC
importing PAR files prerequisites, 10-2
using cloud control, C-6 procedure, 10-2
using command line, C-5
overview, C-1 F
overview of PAR, C-1
software library, C-3 file synchronization, 45-38
enabling facet synchronization file systems
configuration extensions, 45-47 storage, 39-5
Enterprise Data Governance dashboard, 47-3 finding See accessing, 46-11
enterprise manager users format-specific parsers, 45-59
designers, 2-7 Full any Compliance Entity privilege, 46-5
operators, 2-7 FULL_LINUX_PATCHING_SETUP privilege, 42-6,
super administrators, 2-7 42-8
error handling modes
continue on error, 52-14
G
inherit, 52-14
skip target, 52-14 Galaxy CFG
stop on error, 52-14 parser, 45-60
errors parser parameters, 45-62
in compliance standards, 46-45 generic system, and new relationships, 45-80
evaluating compliance, 46-7 GNS settings, 7-22
evaluation errors gold image
compliance, 46-18 provisioning Oracle database replay client using
evaluation results gold image, 12-5
compliance, 46-10, 46-11 provisioning Oracle RAC using gold image, 9-10
compliance standards, 46-44 GPG keys, 42-6, 42-8
examples GPG signatures, 42-8
associating targets, 46-88 group administration, 40-9, 40-11
creating compliance standards, 46-88 groups
creating custom configuration, 46-88 in hosts, 38-2
creating custom-based repository rule based on
custom configuration collection, 46-88
using change activity plans, 50-20
H
viewing compliance results, 46-88 hardware configuration, collecting information, 45-1
viewing compliance violations, 46-14 history
excluding relationships from custom topology views, statistics, 36-2
45-87 storage, 39-7
Execute Host Command history job activity, 45-17
group, 40-17 history, of configuration changes, 45-12
multiple hosts, 40-15 history, of configurations, 1-4
Index-7
Host command importing (continued)
executing using sudo or PowerBroker, 40-12 compliance frameworks, 46-33
running, 40-14 compliance standard rules, 46-76
hosts compliance standards, 46-43
adding host targets, 40-14 configuration extension, 45-48
administering, 40-1 real-time monitoring facets, 46-86
administration, target setup, 37-4 incidents
commands viewing details, 46-25
execution history, 40-19 including relationships in custom topology views,
execution results, 40-19 45-87
configuration information map, 1-5
adding and editing, 40-21 infrastructure requirements, 2-1
configuring, 40-1 Introscope parser, 45-60
customizing environment, 38-1 inventory and usage details, 45-11
default system run level, 40-6
Dell PowerEdge Linux monitoring, 40-20
diagnosing problems on, 36-2
J
group administration, 40-9 Java Policy parser, 45-67
groups, 38-2 Java Properties parser, 45-67
installing YAST on, 37-1 job activity
log file alerts, 39-2 history, 45-17
lookup table Job System privilege, 46-6
host administration, 40-8
metric collection errors, 39-2
K
monitoring, 39-1
monitoring setting up, 37-3 Kernel Modules parser, 45-65
NFS clients, 40-8
overview, 36-1
L
preferred credentials, 37-2
running Host command, 40-14 layers, storage, 39-8
services, 40-5 LDAP parser, 45-67
setting up credentials, 37-2 library
statistics compliance, 46-4
CPU, 39-1 lifecycle management
disk, 39-2 overview, 1-1
memory, 39-2 solution areas
program resource utilization, 39-2 change management, 1-3
storage, 39-2 compliance management, 1-4
tools configuration management, 1-4
PowerBroker, 40-12 discovery, 1-3
Remote File Editor, 40-13 patching, 1-3
sudo command, 40-12 provisioning, 1-3
user administration, 40-9 solution descriptions, 1-2
viewing targets on, 36-2 lifecycle status
Hosts Access parser, 45-65 compliance frameworks, 46-30
Linux Directory List parser, 45-65
Linux hosts, installing YAST, 37-1
I linux patching
ignore rule example, in comparisons, 45-27 package compliance, 42-9
ignore rule, in comparisons, 45-23 prerequisites, 42-3
impact analysis, 45-85 registering with ULN, 42-6
importance setting up group, 42-7
definition, 46-3 setting up infrastructure, 42-3
editing in compliance standard, 46-30 setting up linux patching groups for compliance
importing reporting, 42-7
comparison template, 45-21 setting up RPM repository, 42-3, 42-6
linux patching groups
Index-8
linux patching groups (continued) middleware provisioning (continued)
jobs, 42-8 SOA artifacts, 22-9
local file systems, storage, 39-5 WebLogic Domain and Oracle Home
lock down, 51-23 Provisioning, 22-6
locking down feature, 4-1 Middleware Provisioning
log file alerts, hosts monitoring, 39-2 Middleware Provisioning and Scale Up / Scale
logical operators Out Best Practices, 29-7
AND/OR, 45-27 middleware provisioning console, 22-2
middleware provisioning solutions, 22-1
migrating databases as pluggable databases, 17-25
M Mime Types parser, 45-67
Manage and Target Metric privilege, 46-5 monitoring
Manage any Target Compliance privilege, 46-5 credentials, setting up, 37-3
Management Repository, 45-1 hosts, 39-1
managing NFS mounts, 39-6
change activity plans, 50-15 moving
compliance standard rule folders, 46-49 task definitions in change activity plans, 50-8
mandatory infrastructure requirements MQ-Series
creating user accounts, 2-7 parser, 45-60
setting up credentials, 2-4 parser parameters, 45-63
mandatory infratructure requirements
setting up software library, 2-2 N
manual rules
definition, 46-3 named credentials, host, 37-2
violations, 46-11 network cards
matching rule configuring, 40-7
examples, 45-25 in hosts, 40-7
matching rule example, for comparisons, 45-25 network file systems See NFS, 39-5
matching rule, in comparisons, 45-22 NFS (network file systems)
memory statistics clients
hosts monitoring, 39-2 adding and editing, 40-8
metadata discovery, 47-1 host administration, 40-8
metadata discovery job, 47-3 monitoring mounts, 39-6
metadata discovery results, 47-5 storage, 39-5
metadata XML files notifications, 2-11
downloading files, 41-23
uploading files, 41-23 O
metric collection errors
hosts moniroting, 39-2 observations
middleware notifying user, 46-27
enabling as a service (MWaaS), 23-2, 24-1, 25-1, Odin parser, 45-60
26-1 OPlan, 41-43, 41-47
Middleware as a Service (MWaaS) optional infratructure requirements
enabling, 23-2, 24-1, 25-1, 26-1 host configurations, F-26
middleware provisioning self update for provisioning, 2-10
coherence nodes and clusters, 22-9 setting up e-mail notifications, 2-11
deploying/redeploying/undeploying Java EE Oracle clusterware clone, 4-18
Applications, 22-9 Oracle Clusterware Clone, 4-19
deployment procedures, 22-1 Oracle Database Clone, 4-17
introduction, 22-1 Oracle Database topology, 5-2
key concepts, 22-4 Oracle Label Security, 47-2
Oracle Application server, 22-10 Oracle ORA parser, 45-60
overview, 22-1 Oracle RAC database topology, 7-3
profiles, 22-1 Oracle Real Application Clusters Database topology,
scaling SOA, Service Bus, and WebLogic Servers, 7-3
22-7 Oracle Service Bus, 34-1
service bus resources, 22-10 OS script, load, 40-18
Index-9
P parsers (continued)
Unix Login, 45-67
PAM Configuration parser, 45-65 Unix Passwd, 45-65
parsers Unix PROFTPD, 45-68
AIX Installed Packages, 45-67 Unix Protocols, 45-65
Apache HTTPD, 45-67 Unix Recursive Directory List, 45-61
Autosys, 45-67 Unix Resolve, 45-68
Blue Martini DNA, 45-59 Unix Services, 45-65
columnar, 45-64 Unix Shadow, 45-65
Connect((colon))Direct, 45-59 Unix SSH Config, 45-68
Cron Access, 45-64 Unix System, 45-68
Cron Directory, 45-64 Unix System Crontab, 45-65
CSV, 45-64 Unix VSFTPD, 45-68
Custom CFG, 45-67 Unix XINETD, 45-68
Database Query, 45-59 WebAgent, 45-68
Database Query Paired Column, 45-60 WebLogic (attribute-keyed), 45-57
Db2, 45-60 WebSphere (attribute-keyed), 45-57
Directory, 45-60 WebSphere (generic), 45-58
E-Business Suite, 45-60 Windows Checksum, 45-68
format-specific, 45-59 XML (generic), 45-57
Galaxy CFG, 45-60 XML default (attribute-keyed), 45-56
Hosts Access, 45-65 patch management solution
Introscope, 45-60 accessing the screen, 41-3
Java Policy, 45-67 conflict checks, 41-45
Java Properties, 45-67 create plan wizard, 41-6
Kernel Modules, 45-65 customizing deployment procedures, 41-43,
LDAP, 45-67 41-78
Linux Directory List, 45-65 diagnosing and resolving patching issues, 41-67
Mime Types, 45-67 downloading catalog file, 41-23
MQ-Series, 45-60 introduction, 41-2
Odin, 45-60 knowledge articles, 41-32
Oracle ORA, 45-60 overview, 41-1
PAM Configuration, 45-65 patch recommendations, 41-29
Process Local, 45-65 patch templates, 41-49, 41-73
properties, 45-67 patchability reports, 41-27
Radia, 45-67 patching modes
Sectioned Properties, 45-67 in-place mode, 41-14, 41-40
Secure TTY, 45-65 offline mode, 41-13, 41-22
Siebel, 45-60 online mode, 41-13, 41-21
SiteMinder Agent, 45-67 out-of-place mode, 41-14, 41-40
SiteMinder Registry, 45-67 parallel mode, 41-16, 41-40
SiteMinder Report, 45-67 rolling mode, 41-16, 41-40
SmWalker, 45-67 patching Oracle Data Guard targets, 41-54
Solaris Installed Packages, 45-65 patching Oracle Exadata, 41-51
Sun ONE Magnus, 45-67 patching Oracle Grid Infrastructure targets, 41-51
Sun ONE Obj, 45-67 patching Oracle Identity Management targets,
Tuxedo, 45-67 41-60
UbbConfig, 45-60 patching Oracle Siebel targets, 41-60
Unix Config, 45-67 patching workflow, 41-17
Unix Crontab, 45-65 performing switchback, 41-48
Unix Directory List, 45-65 preparing database targets, 41-38
Unix Groups, 45-65 registering proxy details, 41-21
Unix GShadow, 45-65 resolving patch conflicts, 41-77
Unix Hosts, 45-65 rolling back patches, 41-44, 41-72, 41-79
Unix INETD, 45-65 scheduling patch plans, 41-47
Unix Installed Patches, 45-60 searching for patches, 41-32
Index-10
patch management solution (continued) patching (continued)
setting up infrastructure, 41-18 using change activity plans, 50-22
supported targets, 41-8 patching linux hosts
uploading catalog file, 41-23 concepts, 42-1
uploading patches to Software Library, 41-23 deployment procedures, 42-2
validating patch plans, 41-45 meeting prerequisites, 42-3
patch plans overview, 42-1
accessing, 41-37 package compliance, 42-9
adding patches, 41-39 registering with ULN, 42-6
adding targets, 41-39, 41-75 setting up infrastructure, 42-3
analyzing, 41-45 setting up linux patching groups for compliance
creating, 41-35 reporting, 42-7
customizing deployment procedures, 41-43, setting up RPM repository, 42-3, 42-6
41-78 supported linux releases, 42-2
deleting, 41-74 phases
deleting plans, 41-74 parallel, 51-7
enabling notifications, 41-44 rolling, 51-7
enabling or disabling conflict checks, 41-45 pluggable database administration
overview, 41-4 altering pluggable database state, 17-43
patch conflicts, 41-7 opening/closing pluggable databases, 17-43
patch plan types, 41-5 switching between pluggable databases, 17-43
preparing patch plans, 41-38 pluggable database jobs
roles and privileges, 41-19 create, 17-10, 17-17, 17-39
saving as patch templates, 41-49 creates, 17-25
scheduling, 41-47 delete, 17-38, 17-41
specifying credentials, 41-42 unplug, 17-35, 17-40
specifying deployment options, 41-40 pluggable database management requirements, 17-3
pluggable database provisioning
specifying plan information, 41-38
cloning pluggable databases
staging patches, 41-41
using Full Clone, 17-17
supported patch types, 41-4
using Snap Clone, 17-18
switchback, 41-48
creating new pluggable databases, 17-4
using create plan wizard, 41-6 migrating databases as pluggable databases,
validating, 41-45 17-25
validation and conflict resolution, 41-6 plugging in pluggable databases, 17-10
patch recommendations, 41-29 pluggable database removal
patch templates deleting pluggable databases, 17-35
deleting, 41-75 unplugging pluggable databases, 17-30
deleting templates, 41-75 pluggable databases
downloading patches, 41-74 administering, 17-42
modifying templates, 41-73 getting started, 17-1
using edit template wizard, 41-8 jobs, 17-38
viewing templates, 41-73 overview, 17-2
patching provisioning, 17-3
analyzing the environment, 41-27 removing, 17-30
diagnosing issues, 41-69 post-parsing rules, 45-74
identifying applicable patches PowerBroker tool
searching in software library, 41-33 executing host command, 40-12
searching on MOS, 41-32 preferred credential, hosts, 37-2
using knowledge articles, 41-32 printing
using patch recommendations, 41-29 change activity plans, 50-14
introduction, 41-1 privilege delegation setting, 37-2
linux patching, 42-1 privileges
patch management solution, 41-2 Compliance Framework, 46-6
patchability reports, 41-27 Configuration Extensions, 46-6
patching linux hosts, 42-1 Create Compliance Entity, 46-5
resolving issues, 41-71 Full any Compliance Entity, 46-5
Index-11
    Job System, 46-6
    Manage any Target Compliance, 46-5
    Manage any Target Metric, 46-5
    used in change activity plans, 50-1
    View any Compliance Framework, 46-5
    View any Target, 46-5
privileges, for configuration extensions, 45-48
problems
    diagnosing on hosts, 36-2
Process Local parser, 45-65
program resource utilization statistics, hosts
    monitoring, 39-2
properties parser constructs
    delimited section, 45-74
    delimited structure, 45-73
    element cell, 45-74
    explicit property, 45-71
    implicit property, 45-72
    INI section, 45-73
    keyword name property, 45-71
    keyword property, 45-71
    reserved directive, 45-72
    reserved function, 45-72
    simple property, 45-71
    structure, 45-73
    XML structure, 45-72
properties parsers
    advanced constructs, 45-70
    advanced parameters, 45-69
    basic parameters, 45-68
Protection Policy
    Data Redaction, 47-2
    Oracle Label Security, 47-2
    Transparent Data Encryption, 47-2
    Virtual Private Database, 47-2
provision database
    Grid Infrastructure and Oracle RAC Database, 7-5
    Oracle Grid Infrastructure and Oracle Real Application Clusters, 7-1
provision Linux
    getting started, 35-1
provision Oracle RAC database
    with file system on a new cluster, 7-19
    with file system on an existing cluster, 7-13
provision Oracle RAC databases, 7-19
provisioning
    deleting or scaling down Oracle RAC (Real Application Cluster), 11-1
    extending Oracle RAC (Real Application Cluster), 10-1
    provisioning linux operating system, 35-1
    provisioning Oracle Application Server, 33-1
    provisioning Oracle database replay client, 12-1
    provisioning Oracle Service Bus resources, 34-1
    storage, 39-3
provisioning bare metal servers, 35-21
provisioning database client
    getting started, 12-1
provisioning Oracle Database client, 12-9
provisioning Oracle RAC
    archived software binaries, 9-16
    gold image, 9-10
    no root credentials, 9-25
provisioning Oracle Real Application Clusters One database, 8-2
provisioning Oracle standby database
    creating logical standby database, 13-4
    creating physical standby database, 13-1
    creating primary database backup, 13-9
    managing existing standby database, 13-8
provisioning Oracle standby databases, 13-1
provisioning pluggable databases, 17-3
provisioning profiles, 4-1
provisioning SOA artifacts
    gold image, 33-7

R

Radia parser, 45-67
real-time monitoring facet folders, editing, 46-84
real-time monitoring facets
    about, 46-79
    changing base attributes, 46-87
    creating, 46-83, 46-84
    creating like, 46-85
    definition, 46-3
    deleting, 46-85
    editing, 46-83
    entity types, 46-80
    exporting, 46-86
    importing, 46-86
    operations on, 46-81
    patterns, 46-81
    viewing library, 46-82
real-time monitoring rules
    compliance score, 46-20
    definition, 46-3
    target property filters, 46-62
    using facets in, 46-60
real-time monitoring, warnings, 46-47
real-time observation audit status, definition, 46-3
real-time observation bundle lifetimes, controlling, 46-60
real-time observation bundles, definition, 46-4
real-time observations
    definition, 46-3, 46-4
    investigating, 46-22
    manually setting, 46-26
    notifying user, 46-27
    operations on, 46-25
    rules, types of actions, 46-60
    viewing, 46-23
reference host, 35-4
refresh, storage, 39-8
relationships, 45-80
Relocating Pluggable Databases, 17-29
Remote File Editor tool, 40-13
removing pluggable databases, 17-30
reports, compliance, 46-18
repository rules
    definition, 46-2
    in compliance standards, 46-51
results
    compliance evaluation, 46-10, 46-11
reverse transform, 45-40
roles
    EM_CAP_ADMINISTRATOR, 50-1
    EM_CAP_USER, 50-1
    for change activity plans, 50-1
roles, for configuration extensions, 45-48
Rolling Upgrade, 18-24
Rolling Upgrade procedure, 18-26
rollup options, 45-12
root cause analysis. See dependency analysis, 45-85
routing configuration, network cards, 40-7
RPM packages, 42-2, 42-6
RPM repository
    overview, 35-4
    setting up, 35-10, 42-3
rule examples, for comparisons, 45-25
rule expressions, in comparisons, 45-23
rules
    include or exclude rule, 45-23
    matching rule, 45-22
    value constraint rule, 45-22
rules expression, 45-23
rules syntax, 45-23
rules, in comparisons, 1-4, 45-22
rules, in configuration extensions, 45-46

S

save as draft, configuration extension, 45-48
saved configurations, 45-10
scale down Oracle RAC, 11-5
Scaling Up / Scaling Out WebLogic Domains
    Prerequisites, 29-2
    Running the Scale Up / Scale Out Middleware Deployment Procedure, 29-3
Scaling Up/Scaling Out SOA, Service Bus, and WebLogic Server Domains, 29-1
schema baseline
    multiple versions, 48-4
schema baseline version, 48-4
schema baselines
    export, 48-5
    import, 48-5
    overview, 48-2
schema change plans, 48-2
schema comparison, 48-2
schema comparison versions, 48-8
schema comparisons
    comparison options, 48-7
    overview, 48-6
    schema map, 48-7
    scope specification, 48-7
schema synchronization
    overview, 48-9
    versions, 48-12
schema synchronization version
    schema synchronization cycle, 48-12
schema synchronizations
    schema map, 48-10
    scope specification, 48-9
    synchronization mode, 48-11
    synchronization options, 48-10
scope specification, 48-3
search configurations
    predefined, 1-4, 45-3
    user-defined, 1-4, 45-3
searching
    compliance frameworks, 46-34
    compliance standard rules, 46-77
    compliance standards, 46-44
Sectioned Properties parser, 45-67
Secure TTY parser, 45-65
security metrics
    enabling, 46-48
    in compliance standards, 46-48
sensitive column type, 47-4
sensitive data discovery, 47-3
Service Bus provisioning, 23-1, 24-1, 25-1, 26-1
services
    in hosts, 40-5
setting dependencies in task definitions, 50-8
setting up
    environment to monitor hosts, 37-1
    host credentials, 37-2
    host monitoring, 37-3
    monitoring credentials, 37-3
    target for host administration, 37-4
Setting Up MOS, 2-10
Siebel
    parser, 45-60
    parser parameters, 45-63
SiteMinder Agent parser, 45-67
SiteMinder Registry parser, 45-67
SiteMinder Report parser, 45-67
SmWalker parser, 45-67
SOA provisioning, 23-1, 24-1, 25-1, 26-1
software library
    uploading patches, 41-23
Software Library Administration, 2-2, 2-3
Software Library console, 2-2
Solaris Installed Packages parser, 45-65
specify OS users, 7-19
specifying rules, 45-21
stage server
    overview, 35-4
statistics
    hosts, 36-1
    storage, 36-2
statistics, accessing compliance, 46-8
steps
    action, 51-7
    computational, 51-8
    file transfer, 51-7
    host command, 51-9
    job, 51-7
    library component, 51-7
    library directive, 51-9
    manual, 51-7
storage
    file systems, 39-5
    history, 39-7
    hosts monitoring, 39-2
    layers, 39-8
    network file systems, 39-5
    refresh, 39-8
    statistics, 36-2
    utilization, 39-3
    vendor distribution, 39-7
    volumes, 39-6
sudo command
    executing host command, 40-12
summary
    change activity plans, 50-16
Sun ONE Magnus parser, 45-67
Sun ONE Obj parser, 45-67
suppressing violations, 46-11, 46-98
synchronizing files, 45-38
system component structure, 45-83

T

target setup
    host administration, 37-4
target type, custom, 45-42
targets
    associating compliance standard with, 46-45, 46-97
    host, 36-2
task groups in change activity plans, 50-5
tasks
    in change activity plans, 50-17
tools
    hosts, 40-11
topology
    Oracle RAC database topology, 7-3
topology viewer
    controlling appearance of information on a graph, 45-88
    creating
        relationships to a target, 45-87
    customizing views, 45-86
    deleting custom views, 45-86
    deleting relationships from a target, 45-88
    dependency analysis, 45-85
    excluding relationships from custom views, 45-87
    impact analysis, 45-85
    including relationships in custom views, 45-87
tracking configuration changes, 45-12
Transparent Data Encryption, 47-2
troubleshooting missing property errors, 41-67
troubleshooting unsupported configuration errors, 41-68
Tuxedo parser, 45-67

U

UbbConfig parser, 45-60
ULN channels, 42-1, 42-6
ULN configuration channel, 42-2
ULN custom channel, 42-2
Unbreakable Linux Network (ULN), 42-2
undeploying configuration extension, 45-50
Unix Config parser, 45-67
Unix Crontab parser, 45-65
Unix Directory List parser, 45-65
Unix Groups parser, 45-65
Unix GShadow parser, 45-65
Unix Hosts parser, 45-65
Unix INETD parser, 45-65
Unix Installed Patches
    parser, 45-60
    parser parameters, 45-63
Unix Login parser, 45-67
Unix Passwd parser, 45-65
Unix PROFTPD parser, 45-68
Unix Protocols parser, 45-65
Unix Recursive Directory List
    parser, 45-61
    parser parameters, 45-64
Unix Resolve parser, 45-68
Unix Services parser, 45-65
Unix Shadow parser, 45-65
Unix SSH Config parser, 45-68
Unix System Crontab parser, 45-65
Unix System parser, 45-68
Unix VSFTPD parser, 45-68
Unix XINETD parser, 45-68
unplugging pluggable databases, 17-30
unsuppressing violations, 46-11
up2date patching tool, 42-3, 42-4, 42-8
upgrading a database instance, 18-19
upgrading database
    getting started, 18-1
upgrading databases
    database upgrade wizard, 18-19
    deployment procedure, 18-3
    prerequisites, 18-4
upgrading Oracle cluster database, 18-5
upgrading Oracle clusterware, 18-11
upgrading Oracle database instance, 18-15
user accounts
    overview, 2-7
user administration, 40-9, 40-11
User Defined Deployment Procedure, 51-21
UTF-8, encoding in configuration extensions, 45-44

V

value constraint rule, in comparisons, 45-22
vendor distribution, storage, 39-7
Verifying Rolling Upgrade, 18-29
View any Compliance Framework privilege, 46-5
View any Target privilege, 46-5
viewing
    change activity plan tasks, 50-18
    comparison template, 45-20
    configuration data, 45-8
    configuration extension specification details, 45-47
    configuration health problem details, 45-84
    incident details, 46-25
    real-time monitoring facet library, 46-82
    real-time observations, 46-23
    violation details
        compliance standard, 46-14
violations
    clearing, 46-99
    compliance
        managing, 46-11
        viewing examples, 46-14
    details
        compliance framework, 46-14
        compliance standard, 46-14
        of a target, 46-14
    suppressing, 46-11, 46-98
    unsuppressing, 46-11
Virtual Private Database, 47-2
volumes, storage, 39-6

W

web-based enterprise management (WBEM) fetchlet metrics, 40-19
WebAgent parser, 45-68
WebLogic parser (attribute-keyed), 45-57
WebLogic Server provisioning, 23-1, 24-1, 25-1, 26-1
WebSphere parser (attribute-keyed), 45-57
WebSphere parser (generic), 45-58
Windows Checksum parser, 45-68

X

XML default parser (attribute-keyed), 45-56
XML parser (generic), 45-57
XPath
    conditions and expressions, 45-74
    configuration extension, 45-46

Y

YAST, installing on Linux hosts, 37-1
yum patching tool, 42-3, 42-8