
ALMA Database Replication

Doc #: ALMA-xx.xx.xx.xx-xxx-X-XXX
Version: 1.0
Status: Draft
Date: 2011-07-25

Prepared By: Alessio Checcucci    Organization: ESO    Date: 2011-07-25
Approved by:                      Organization:        Date:
Released by IPT Lead(s):          Organization:        Date:


Table of Contents

1 Overview of ALMA Database Replication
  1.1 Oracle Streams generalities
    1.1.1 Terminology
    1.1.2 Streams data flow mechanism
  1.2 ALMA Streams design
    1.2.1 Tiered Structure
    1.2.2 Type and main characteristics of the ALMA database replication
2 Oracle Streams structure
  2.1 Logical Configuration of Streams processes for ALMA
    2.1.1 Information Capture
      2.1.1.1 Logical Change Records
      2.1.1.2 Objects instantiation
      2.1.1.3 Capture rules
      2.1.1.4 Capture Data Types
      2.1.1.5 Supplemental logging
      2.1.1.6 System Change Numbers related to a Capture process
      2.1.1.7 Capture process components
      2.1.1.8 Capture process checkpoints
      2.1.1.9 The Streams data dictionary
    2.1.2 Information Propagation
      2.1.2.1 Propagation rules
      2.1.2.2 Queue-to-queue propagation
      2.1.2.3 Message Delivery
      2.1.2.4 Propagation jobs
      2.1.2.5 The Streams data dictionary
      2.1.2.6 Streams Tags
      2.1.2.7 Combined Capture and Apply
    2.1.3 Information Apply
      2.1.3.1 Apply rules
      2.1.3.2 Message Processing for an Apply Process
      2.1.3.3 Apply Process Components
      2.1.3.4 The Streams data dictionary
3 ALMA Oracle Streams Setup
    3.1.1 Streams Installation pre-requisites
      3.1.1.1 Oracle Patches
    3.1.2 Streams Cleanup
    3.1.3 Oracle SQL*Net Setup
      3.1.3.1 OSF Configurations
      3.1.3.2 SCO Configurations
    3.1.4 Setup of Instance Parameters
    3.1.5 Log Mode (OSF and SCO only)
    3.1.6 The Streams Administrator User
    3.1.7 LogMiner Configuration (OSF and SCO only)
  3.2 OSF/SCO bi-directional configuration
  3.3 OSF configuration
    3.3.1 Configuration steps
      3.3.1.1 Schema supplemental logging @ OSF
      3.3.1.2 Apply/Destination Queue creation @ OSF
      3.3.1.3 Apply Process Configuration @ OSF
      3.3.1.4 Capture/Source Queue creation @ OSF
      3.3.1.5 OSF → SCO Propagation configuration @ OSF
      3.3.1.6 Capture Process Configuration @ OSF
      3.3.1.7 Dump of the schema to be replicated @ OSF
  3.4 SCO configuration
    3.4.1 Import of the Schema at SCO
    3.4.2 Schema fixing @ SCO
    3.4.3 Object exclusion for the Capture process @ OSF
    3.4.4 Apply/Destination Queue creation @ SCO
    3.4.5 Apply Process Configuration @ SCO
    3.4.6 Capture/Source Queue creation @ SCO
    3.4.7 SCO → OSF Propagation configuration @ SCO
    3.4.8 Capture Process Configuration @ SCO
    3.4.9 Setup of instantiation SCN @ OSF
    3.4.10 Object exclusion for the Capture process @ SCO
    3.4.11 Test startup of the data flow
      3.4.11.1 Streams processes startup
      3.4.11.2 Streams processes shutdown
    3.4.12 Configuration of Streams processes parameters
    3.4.13 Temporary configurations
  3.5 Tier-2/SCO spoke setup
    3.5.1 Apply/Destination Queue creation @ Tier-2
    3.5.2 Apply Process Configuration @ Tier-2
    3.5.3 SCO configuration extension pre-requisites
    3.5.4 SCO → Tier-2 Propagation configuration @ SCO
    3.5.5 Schema preparation for instantiation
    3.5.6 Dump of the schema to be replicated @ SCO
    3.5.7 Import of the Schema at Tier-2
    3.5.8 Schema fixing @ Tier-2
    3.5.9 Configuration of Streams processes parameters
    3.5.10 Startup of Streams processes
      3.5.10.1 Startup of Tier-2 Streams processes
      3.5.10.2 Startup of the SCO → Tier-2 Propagation
4 Control of ALMA Streams processes
  4.1 Control of the Capture process
    4.1.1 Starting the Capture process
    4.1.2 Stopping the Capture process
  4.2 Control of Propagation schedules
    4.2.1 Starting the Propagation Schedule
    4.2.2 Stopping the Propagation Schedule
  4.3 Control of the Apply process
    4.3.1 Starting the Apply process
    4.3.2 Stopping the Apply process
5 Monitoring the ALMA Streams environment
  5.1 Enterprise Manager
  5.2 Global Monitoring script
  5.3 Capture process monitoring (display of queue, rule sets, and status of each capture process)
  5.4 Propagation monitoring (display of ANYDATA queues in a database)
  5.5 Apply process monitoring (determining the queue, rule sets, and status for each apply process)
  5.6 Rules Monitoring (summary of rules associated to each ruleset for a specific Streams process)
6 ALMA Streams Troubleshooting
  6.1 Streams Alerts
  6.2 Management of Capture problems
  6.3 Management of Propagation problems
  6.4 Management of Apply errors
    6.4.1 Errors related to the Streams data dictionary
  6.5 Management of object exclusions
  6.6 Management of ALMA software release migration
7 Appendix A: Streams Split and Merge


1 Overview of ALMA Database Replication

The ALMA Project comprises multiple sites around the world. For the project to work properly, data must be shared among these entities: both meta-data and bulk data are synchronized between sites, so that information can be accessed seamlessly from the closest possible place. NGAS mirroring uses an internal mechanism to diffuse data from the OSF through the SCO to the ARCs. Database replication takes advantage of an Oracle Enterprise Edition feature called Streams to replicate data in a hub-and-spokes structure, with uni-directional and bi-directional branches. The overall ALMA data flow is sketched in this picture:

Illustration 1: Global ALMA data flow

1.1 Oracle Streams generalities

Oracle Streams enables information sharing: each data unit is called a message, and these messages are shared in a stream. A stream can propagate information from one database to another, routing specified information to specified destinations.


With Oracle Streams there is full control over what information is put into a stream, how the stream flows, what happens to messages in the stream as they flow into each database, and how the stream terminates. Streams can capture, stage, and manage messages in the database automatically, including data manipulation language (DML) changes and data definition language (DDL) changes, and can propagate these pieces of information to other databases automatically. When messages reach a destination, Oracle Streams can consume them based on specifications.

This document does not describe every possible configuration option of Oracle Streams; it is limited to how database replication is used in the ALMA environment. Oracle Streams is an extremely flexible tool that can fit many different environments and configurations. ALMA uses it for pure data replication: Streams captures DML and DDL changes made to database objects and replicates those changes to one or more other databases. The destination databases can allow DML and DDL changes to the same database objects, and these changes might or might not be propagated to the other databases in the environment. An Oracle Streams environment can be configured with a single database that propagates changes, or with changes propagated between databases bi-directionally.

1.1.1 Terminology

Source database: the database where the changes captured by a capture process are generated in a redo log.
Destination database: a database where messages are consumed. Messages can be consumed when they are dequeued implicitly from a queue by a propagation or apply process.
Queue: the abstract storage unit used by a messaging system to store messages.
Message: a unit of shared information in an Oracle Streams environment.
Logical change record (LCR): a message with a specific format that describes a database change.
Instantiation SCN: the system change number (SCN) for a table which specifies that only changes that were committed after this SCN at the source database are applied by an apply process.
Rule: a database object that enables a client to perform an action when an event occurs and a condition is satisfied.
Ruleset: a group of rules.

1.1.2 Streams data flow mechanism

Oracle Streams implicitly captures DML and/or DDL changes to tables, schemas, or an entire database; rules determine which changes are captured. Database changes are recorded in the redo log for the database. A capture process captures changes from the redo log and formats each captured change into a message called a logical change record (LCR). The messages captured by a capture process are called captured LCRs, and the rules used by the capture process determine which changes it captures.

Messages are stored (or staged) in a queue; in ALMA these messages are logical change records (LCRs). Capture processes enqueue messages into an ANYDATA queue, which can stage messages of different types. Oracle Streams propagations can propagate messages from one queue to another, and these queues are generally in different databases. Rules determine which messages are propagated by a propagation.

A message is consumed when it is dequeued from a queue. An apply process can dequeue messages implicitly, and rules determine which messages are dequeued and processed by an apply process.

1.2 ALMA Streams design

The ALMA Oracle environment comprises six main RDBMS deployments. The following table summarizes names, locations and roles inside Streams:

Database Name   Geo-Location                                          Role                         Notes
ALMA.OSF.CL     OSF (II Region de Atacama, Chile)                     Data Producer/Data Consumer  Source of scientific data (ASDMs)
ALMA.SCO.CL     SCO (Santiago, Chile)                                 Data Producer/Data Consumer  Proposal submission and management (APDMs). Referred to as: private DB
ALMA.SCOPUB.CL  SCO (Santiago, Chile)                                 Data Consumer                SCO read-only database (ASA portal). Referred to as: public DB. Inserted data won't get replicated to the other sites
ALMA.ARC.EA     East Asian Regional Center (Tokyo, Japan)             Data Consumer                Inserted data won't get replicated to the other sites
ALMA.ARC.EU     European Regional Center (Garching b. Muenchen, DE)   Data Consumer                Inserted data won't get replicated to the other sites
ALMA.ARC.NA     North American Regional Center (Charlottesville, US)  Data Consumer                Inserted data won't get replicated to the other sites

The following picture sketches the various entities, paths and data flows in the ALMA Streams environment:

Illustration 2: ALMA Streams replication: physical/data flow structure


1.2.1 Tiered Structure

Database replication in ALMA is structured as three tiers. The main data producer is located at the OSF (tier 0), where observations are taken and the status of the scheduling blocks is updated. These data are replicated to the SCO (tier 1) where people are able to access them; at the same time, proposal submission takes place using the Santiago private database, hence the SCO acts as a data producer, too. As depicted in the image above, the SCO private database is the pivot (called hub in Streams terminology) of the whole infrastructure. Information stored on that database, whether generated at the SCO or replicated from the OSF, is captured and propagated to the various branches (called spokes in Streams terminology) of the Streams environment.

Tier 2 consists of the RDBMS installations that behave as pure data consumers. They exclusively receive data from the SCO, getting an almost complete copy of the ALMA schema. Information produced at the ARC database level is never replicated backwards to the higher level tiers. The members of Tier 2 are the ARCs (East Asian, European and North American) and the SCO public database; the latter should be considered a read-only copy of the private one.

Summarizing, the ALMA replication is fully bi-directional between the OSF and SCO and uni-directional from the SCO to the tier 2 databases.


Illustration 3: ALMA Streams replication: logical tiers

1.2.2 Type and main characteristics of the ALMA database replication

In the ALMA database the main container of information is the ALMA user schema. It collects scientific meta-data (ASDM), proposals, projects and scheduling blocks (APDM), the State Archive and some other additional entities. Given that all the information to be replicated is enclosed in one container, the ALMA Oracle Streams replication environment is schema based. In practice, all objects pertaining to this schema are eligible for replication under a default-allow policy; exclusion of schema entities has to be explicitly defined with proper rules.

The other pieces of information are: monitoring and configuration data, XML logs, scheduling, and NGAS meta-data. These data classes don't need replication at the Oracle level, since they either remain confined to the OSF or can be dynamically generated from the ALMA schema tables.


As far as NGAS meta-data is concerned, it is replicated by means of an internal, self-consistent NGAS mechanism and needs no support from Oracle other than the definition of database links.

2 Oracle Streams structure

Streams is an Enterprise Edition option of the Oracle RDBMS. To get data flowing properly from one edge of the replication environment to the other, a group of Oracle processes must be spawned and properly configured. Depending on the database role (data source, consumer or both), the number of processes and the complexity of the configuration vary considerably. Oracle Streams configuration is performed by means of a group of packages supplied as part of database creation. To segregate Streams-related entities from the SYS or SYSTEM ones, it is advisable to create and use a separate Oracle user, called the Streams Administrator (strmadmin), for Streams-related operations. Some instance parameters are important for correct Streams behavior and must be enforced before any process is started. The ALMA RDBMS sites stay connected by means of database links owned by the Streams Administrator user. The following paragraphs depict how Streams operations are configured for ALMA at the logical level; the next section examines the whole configuration process.
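For illustration, here is a minimal sketch of such a database link, created by the Streams Administrator at the OSF; the password is a placeholder, and with global_names enforced (see 3.1.4) the link name must match the remote database's global name:

-- Run as strmadmin at the OSF; "streams_pwd" is a hypothetical password.
CREATE DATABASE LINK "ALMA.SCO.CL"
  CONNECT TO strmadmin IDENTIFIED BY "streams_pwd"
  USING 'ALMA.SCO.CL';

-- Quick sanity check over the new link:
SELECT * FROM dual@"ALMA.SCO.CL";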

2.1 Logical Configuration of Streams processes for ALMA

2.1.1 Information Capture

Capturing information with Oracle Streams means creating a message that contains the information and enqueuing the message into a queue; the captured information describes a database change. In ALMA, information is grabbed by implicit capture: data definition language (DDL) and data manipulation language (DML) changes are captured automatically by a capture process. A specific type of message called a logical change record (LCR) describes these database changes. A capture process is able to filter database changes using user-defined rules, so that only changes to specified objects are captured.

A capture process retrieves data changes from the redo log, either by mining the online redo log or, when needed, by mining archived log files. After that, the capture process produces LCRs and enqueues them as part of a message. A message containing an LCR that was originally captured and enqueued by a capture process is called a captured LCR. A capture process always enqueues messages into a buffered queue, that is, the portion of a queue that uses the Oracle Streams pool to store messages in memory and a queue table to store messages that have spilled from memory.


2.1.1.1 Logical Change Records

Captured LCRs can be classified in two groups:

- A row LCR describes a change to the data in a single row, or a change to a single LONG column, LONG RAW column, LOB column, or XMLType stored as CLOB column in a row. The change results from a data manipulation language (DML) statement or a piecewise update to a LOB. A single DML statement can produce multiple row LCRs; that is, a capture process creates an LCR for each row that is changed by the DML statement.
- A DDL LCR describes a data definition language (DDL) change. A DDL statement changes the structure of the database.

2.1.1.2 Objects instantiation

In the ALMA Streams environment, where objects are shared among databases and implicit capture is used to capture changes to the database objects, the source database is the database where the change originated. After changes are captured, they can be applied locally or propagated to other databases and applied at destination databases. In an Oracle Streams environment the shared source database objects must be instantiated before changes to them can be dequeued and processed by an apply process, and the database where changes to the source database objects will be applied must have a copy of these database objects. In Oracle Streams, the following steps instantiate a database object:

- Prepare the object for instantiation at the source database
- If a copy of the object does not exist at the destination database, create the object physically at the destination database based on the object at the source database
- Set the instantiation SCN for the database object at the destination database

Using the ALMA procedures, most of these steps are performed automatically.
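As a sketch of what these steps look like in terms of the underlying Oracle packages (the ALMA scripts wrap equivalent calls; the database link usage here is illustrative):

-- At the source (OSF): prepare the ALMA schema for instantiation.
BEGIN
  DBMS_CAPTURE_ADM.PREPARE_SCHEMA_INSTANTIATION(schema_name => 'ALMA');
END;
/

-- At the destination (SCO): record the SCN from which the apply process
-- may start applying OSF changes (here read directly from the source).
DECLARE
  iscn NUMBER;
BEGIN
  iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER@"ALMA.OSF.CL";
  DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN(
    source_schema_name   => 'ALMA',
    source_database_name => 'ALMA.OSF.CL',
    instantiation_scn    => iscn);
END;
/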


2.1.1.3 Capture rules

A capture process either captures or discards changes based on rules that are configured during the setup. Each rule specifies the database objects and types of changes for which the rule evaluates to TRUE. These rules can be placed in a positive rule set or a negative rule set for the capture process (a sketch of such rule definitions follows the data type list below). If a rule evaluates to TRUE for a change, and the rule is in the positive rule set for a capture process, then the capture process captures the change. If a rule evaluates to TRUE for a change, and the rule is in the negative rule set for a capture process, then the capture process discards the change. If a capture process has both a positive and a negative rule set, then the negative rule set is always evaluated first. Streams rules can be defined at various levels:

- Global
- Schema
- Table

2.1.1.4 Capture Data Types

A capture process can capture changes made to columns of the following data types: VARCHAR2, NVARCHAR2, FLOAT, NUMBER, LONG, DATE, BINARY_FLOAT, BINARY_DOUBLE, TIMESTAMP, TIMESTAMP WITH TIME ZONE, TIMESTAMP WITH LOCAL TIME ZONE, INTERVAL YEAR TO MONTH, INTERVAL DAY TO SECOND, RAW, LONG RAW, CHAR, NCHAR, UROWID, CLOB with BASICFILE storage, NCLOB with BASICFILE storage, BLOB with BASICFILE storage, and XMLType stored as CLOB.
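The following is a hedged sketch of how such rules are defined with the standard DBMS_STREAMS_ADM package; the process and queue names are illustrative, not the actual values used by the ALMA scripts:

-- Positive schema-level rules: capture all DML and DDL for the ALMA schema.
BEGIN
  DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name    => 'ALMA',
    streams_type   => 'capture',
    streams_name   => 'capture_osf',               -- hypothetical name
    queue_name     => 'strmadmin.capture_queue',   -- hypothetical name
    include_dml    => TRUE,
    include_ddl    => TRUE,
    inclusion_rule => TRUE);   -- TRUE = positive rule set
END;
/

-- Negative table-level rule: discard changes to an excluded table.
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name     => 'ALMA.XML_LOGENTRIES',
    streams_type   => 'capture',
    streams_name   => 'capture_osf',
    queue_name     => 'strmadmin.capture_queue',
    include_dml    => TRUE,
    include_ddl    => TRUE,
    inclusion_rule => FALSE);  -- FALSE = negative rule set
END;
/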


2.1.1.5 Supplemental logging

Supplemental logging is required in Oracle Streams replication environments. Supplemental logging places additional column data into the redo log whenever an operation is performed. A capture process captures this additional information and places it in LCRs; an apply process needs the additional information in the LCRs to properly apply the DML and DDL changes that are replicated from a source database to a destination database.

2.1.1.6 System Change Numbers related to a Capture process

Several SCNs are important for a capture process (a monitoring sketch follows the component list below):

- The captured SCN is the SCN that corresponds to the most recent change scanned in the redo log by a capture process
- The applied SCN for a capture process is the SCN of the most recent message dequeued by the relevant apply processes (it corresponds to the low-watermark SCN of the apply process)
- The first SCN is the lowest SCN in the redo log from which a capture process can capture changes
- The start SCN is the SCN from which a capture process begins to capture changes

2.1.1.7 Capture process components

A capture process is composed of various entities, each of which creates server processes at startup time:

- One reader server that reads the redo log and divides it into regions
- One or more preparer servers that scan the regions defined by the reader server in parallel and perform prefiltering of the changes found in the redo log
- One builder server that merges redo records from the preparer servers

The capture process (CPnn) performs the following actions for each change when it receives merged redo records from the builder server:

- Formats the change into an LCR
- If the partial evaluation performed by a preparer server was inconclusive for the change in the LCR, sends the LCR to the rules engine for full evaluation
- Receives the results of the full evaluation of the LCR, if it was performed
- Discards the LCR if it satisfies the rules in the negative rule set for the capture process, or if it does not satisfy the rules in the positive rule set
- Enqueues the LCR into the queue associated with the capture process if the LCR satisfies the rules in the positive rule set
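These SCNs can be observed with a simple query against the standard dictionary view, for example:

SELECT capture_name,
       first_scn,      -- lowest SCN from which the capture can restart
       start_scn,      -- SCN from which the capture begins capturing
       captured_scn,   -- most recent change scanned in the redo log
       applied_scn     -- most recent message dequeued by the apply side
  FROM dba_capture;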


2.1.1.8 Capture process checkpoints

A checkpoint is information about the current state of a capture process that is stored persistently in the data dictionary of the database running the capture process. A capture process tries to record a checkpoint at regular intervals called checkpoint intervals.

- Required Checkpoint SCN: the system change number (SCN) that corresponds to the lowest checkpoint for which a capture process requires redo data
- Maximum Checkpoint SCN: the SCN that corresponds to the last checkpoint recorded by a capture process
- Checkpoint retention time: the amount of time, in number of days, that a capture process retains checkpoints before purging them automatically

A capture process requires a data dictionary that is separate from the primary data dictionary of the source database. This separate data dictionary is called a LogMiner data dictionary. If the LogMiner data dictionary needed by a capture process does not exist, the capture process populates it using information in the redo log when it is started for the first time. If the first SCN value for an existing capture process is manually set, or if the first SCN is reset automatically when checkpoints are purged, then Oracle automatically purges LogMiner data dictionary information prior to the new first SCN setting. If the start SCN for a capture process corresponds to redo information that has been purged, then Oracle Database automatically resets the start SCN to the same value as the first SCN; however, if the start SCN is higher than the new first SCN setting, the start SCN remains unchanged.

2.1.1.9 The Streams data dictionary

Propagations and apply processes use an Oracle Streams data dictionary to keep track of the database objects from a particular source database. An Oracle Streams data dictionary is populated whenever one or more database objects are prepared for instantiation at a source database; this Oracle Streams data dictionary is at the source database. When you prepare a database object for instantiation, you are informing Oracle Streams that information about the database object is needed by the propagations that propagate changes to the database object and by the apply processes that apply changes to it. Any database that propagates or applies these changes requires an Oracle Streams data dictionary for the source database where the changes originated.


After an object has been prepared for instantiation, the local Oracle Streams data dictionary is updated when a DDL statement on the object is processed by a capture process. An Oracle Streams data dictionary is multi-versioned: if a database has multiple propagations and apply processes, then all of them use the same Oracle Streams data dictionary for a particular source database.

2.1.2 Information Propagation

Oracle Streams uses queues to stage messages. Staged messages are consumed and/or propagated; in ALMA, staged messages can be consumed by an apply process, a messaging client, or a user application. A running apply process implicitly dequeues messages. A capture process can only enqueue LCRs into a buffered queue, and LCRs enqueued into a buffered queue by a capture process can be dequeued only by an apply process.

Oracle Streams is configured to propagate messages between two queues, which can reside in different databases. Oracle Streams uses Oracle Scheduler jobs to propagate messages. A propagation is always between a source queue and a destination queue; although propagation is always between two queues, a single queue can participate in many propagations. A propagation can propagate all of the messages in a source queue to a destination queue, or only a subset of the messages.

2.1.2.1 Propagation rules

A propagation either propagates or discards messages based on rules that you define. Each rule specifies the database objects and types of changes for which the rule evaluates to TRUE. You can place these rules in a positive rule set or a negative rule set used by the propagation. If a rule evaluates to TRUE for a message, and the rule is in the positive rule set for a propagation, then the propagation propagates the change. If a rule evaluates to TRUE for a message, and the rule is in the negative rule set for a propagation, then the propagation discards the change. If a propagation has both a positive and a negative rule set, then the negative rule set is always evaluated first. Streams rules can be defined at various levels:

- Global
- Schema
- Table


2.1.2.2 Queue-to-queue propagation

The ALMA replication environment is always configured to use queue-to-queue propagations. A queue-to-queue propagation always has its own exclusive propagation job to propagate messages from the source queue to the destination queue. Because each propagation job has its own propagation schedule, the propagation schedule of each queue-to-queue propagation can be managed separately.

2.1.2.3 Message Delivery

A captured LCR is propagated successfully to a destination queue when both of the following actions are completed:

- The message is processed by all relevant apply processes associated with the destination queue
- The message is propagated successfully from the destination queue to all of its relevant destination queues

When a message is successfully propagated between two queues, the destination queue acknowledges successful propagation of the message. If the source queue is configured to propagate a message to multiple destination queues, then the message remains in the source queue until each destination queue has sent confirmation of message propagation to the source queue. When each destination queue has acknowledged successful propagation of the message, and all local consumers in the source queue database have consumed the message, the source queue can drop the message.

2.1.2.4 Propagation jobs

An Oracle Streams propagation is configured internally using Oracle Scheduler; a propagation job is therefore a job that propagates messages from a source queue to a destination queue. Propagation jobs have an owner and use slave processes (Jnnn) as needed to execute jobs. A propagation schedule specifies how often a propagation job propagates messages from a source queue to a destination queue. Each queue-to-queue propagation has its own propagation job and propagation schedule (a tuning sketch follows at the end of this section). Propagation restarts as soon as it finishes the current duration.

2.1.2.5 The Streams data dictionary

The Oracle Streams data dictionary is a multi-versioned copy of some of the information in the primary data dictionary at a source database. It maps object numbers, object version information, and internal column numbers from the source database into table names, column names, and column data types. To make this mapping information available to a propagation, Oracle automatically populates a multi-versioned Oracle Streams data dictionary at each database that has an Oracle Streams propagation. Oracle automatically sends internal messages that contain relevant information from the Oracle Streams data dictionary at the source database to all other databases that receive captured LCRs from the source database.

2.1.2.6 Streams Tags

Every redo entry in the redo log has a tag associated with it. In an Oracle Streams environment that includes more than one database sharing data bi-directionally, tags should be used to avoid change cycling. Change cycling means sending a change back to the database where it originated; it should be avoided because it can result in each change going through endless loops back to its database of origin.

2.1.2.7 Combined Capture and Apply

As of Oracle 11g Release 1, for the sake of efficiency, a capture process can act as a propagation sender and transmit logical change records directly to a propagation receiver under specific conditions:

- The capture process queue must have a single publisher
- A propagation must be directly configured between the capture process queue and the apply process queue
- The capture process queue must have a single consumer
- The apply process queue must have a single publisher
- The apply process queue must have a single consumer

For some branches of the ALMA replication environment this kind of optimization is possible and greatly reduces latency.
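As an illustration of the per-propagation scheduling described in 2.1.2.4, the schedule of a single queue-to-queue propagation can be adjusted as sketched below; the queue names are hypothetical, not the actual ALMA values:

-- Reduce the allowed propagation latency for one queue-to-queue schedule.
BEGIN
  DBMS_AQADM.ALTER_PROPAGATION_SCHEDULE(
    queue_name        => 'strmadmin.capture_queue',  -- hypothetical name
    destination       => 'ALMA.SCO.CL',              -- database link
    destination_queue => 'strmadmin.apply_queue',    -- hypothetical name
    latency           => 5);                         -- seconds
END;
/

-- The resulting schedule can be inspected in DBA_QUEUE_SCHEDULES.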

2.1.3 Information Apply

Consuming information with Oracle Streams means dequeuing a message that contains the information from a queue and either processing or discarding the message; in ALMA, the consumed information describes a database change. With implicit consumption, an apply process automatically dequeues captured LCRs. A captured LCR is a logical change record (LCR) that was captured implicitly by a capture process and enqueued into the buffered queue portion of an ANYDATA queue.


An apply process in the ALMA environment is an optional Oracle background process that dequeues messages from a specific queue and either applies each message directly or discards it.

2.1.3.1 Apply rules

An apply process applies messages based on rules that you define. Each rule specifies the database objects and types of changes for which the rule evaluates to TRUE. Rules can be placed in the positive rule set or the negative rule set for the apply process. If a rule evaluates to TRUE for a message, and the rule is in the positive rule set for an apply process, then the apply process dequeues and processes the message. If a rule evaluates to TRUE for a message, and the rule is in the negative rule set for an apply process, then the apply process discards the message. If an apply process has both a positive and a negative rule set, then the negative rule set is always evaluated first. Streams rules can be defined at various levels:

- Global
- Schema
- Table

2.1.3.2 Message Processing for an Apply Process

In the ALMA environment an apply process applies LCRs without running a user procedure (called an apply handler). The apply process either successfully applies the change in the LCR or, if a conflict or an apply error is encountered, places the transaction, and all LCRs associated with the transaction, into the error queue (a handling sketch follows at the end of this section).

2.1.3.3 Apply Process Components

An apply process consists of the following components:

- A reader server that dequeues messages. The reader server is a process that computes dependencies between logical change records (LCRs) and assembles messages into transactions
- A coordinator process that gets transactions from the reader server and passes them to apply servers. The coordinator process name is APnn
- One or more apply servers that apply LCRs to database objects as DML or DDL statements. Each apply server is a process

If an apply server encounters an error and cannot resolve it, it rolls back the transaction and places the entire transaction, including all of its messages, in the error queue. When an apply server commits a completed transaction, this transaction has been applied; when an apply server places a transaction in the error queue and commits, this transaction also has been applied.


The reader server and the apply server process names are ASnn. If a transaction being handled by an apply server has a dependency on another transaction that is not known to have been applied, the apply server contacts the coordinator process and waits for instructions. The coordinator process monitors all of the apply servers to ensure that transactions are applied and committed in the correct order.

2.1.3.4 The Streams data dictionary

The Oracle Streams data dictionary is a multi-versioned copy of some of the information in the primary data dictionary at a source database. It maps object numbers, object version information, and internal column numbers from the source database into table names, column names, and column data types. The mapping information in the Oracle Streams data dictionary at the source database is needed to interpret the contents of an LCR at any database that applies the captured LCR. To make this mapping information available to an apply process, Oracle automatically populates a multi-versioned Oracle Streams data dictionary at each destination database that has an Oracle Streams apply process, and automatically propagates relevant information from the Oracle Streams data dictionary at the source database to all other databases that apply captured LCRs from the source database.
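A sketch of how the error queue mentioned in 2.1.3.2 can be inspected and, once the root cause is fixed, drained:

-- Inspect transactions parked in the error queue.
SELECT apply_name, source_database, local_transaction_id, error_message
  FROM dba_apply_error;

-- Re-execute all error transactions after fixing the cause.
BEGIN
  DBMS_APPLY_ADM.EXECUTE_ALL_ERRORS;
END;
/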

3 ALMA Oracle Streams Setup [1]

The ALMA Streams replication configuration is quite complex because of the simultaneous presence of uni-directional and bi-directional branches around the main hub constituted by the SCO private database. The following table summarizes the processes that need configuration at each site inside the tiers and adds some details to make the data flow clearer:

Database deployment  Streams Processes running    Tier level  Additional information
ALMA.OSF.CL          Capture                      0           Change cycling prevention enabled
                     Propagation (OSF → SCO)                  Negative ruleset for capture defined
                     Apply                                    Combined capture and apply optimization
ALMA.SCO.CL          Capture                      1           Change cycling prevention enabled
                     Propagation (SCO → OSF)                  Negative ruleset for capture defined
                     Propagation (SCO → SCOPUB)
                     Propagation (SCO → EA-ARC)
                     Propagation (SCO → EU-ARC)
                     Propagation (SCO → NA-ARC)
                     Apply
ALMA.SCOPUB.CL       Apply                        2           Configured apply process not to stop on errors
ALMA.ARC.EA          Apply                        2           Configured apply process not to stop on errors
ALMA.ARC.EU          Apply                        2           Configured apply process not to stop on errors
ALMA.ARC.NA          Apply                        2           Configured apply process not to stop on errors

[1] ALMA is currently using Oracle Database 11g Release 1.

Depending on the site/database deployment, a different procedure is needed to properly configure ALMA Streams replication. The configuration scripts have been gathered and organized in a tree structure that contains all of the setup, monitor and control SQL scripts. The ALMA Oracle installation deploys the scripts by default under:

~oracle/Archive/oInstall/scripts/replication

3.1.1 Streams Installation pre-requisites

To set up ALMA Streams, the DBA has to comply with a group of pre-requisites before starting the configuration; missing these steps could produce unexpected effects.

3.1.1.1 Oracle Patches

During the development of ALMA Streams replication, a number of problems were detected that either prevented the data flow from taking place properly or raised other side effects. Before applying any patch it is highly recommended to upgrade the Oracle OPatch utility to its latest version; it can be downloaded as patch 6880880 and must be uncompressed in every ORACLE_HOME. Most of these issues had already been detected and fixed by Oracle support with a series of patches. They can be grouped into 4 main super-units:

- Clusterware Patch Set Update (only for RAC deployments)
- Database Patch Set Update (PSU)
- Streams specific Patches
- Other Patches (on top of Database PSU)


Depending on the type of installation chosen, a different number of patches is needed. The ALMA standard installation (from the official ALMA DVD set) will install the Clusterware (if RAC is deployed), ASM and RDBMS, patching them to the current ALMA version. If another Oracle software installation path is chosen, then the PSU(s) have to be applied manually. ALMA is currently using PSU 11.1.0.7.5 for both CRS (99522240) and Database (9952228). Once all the database components (CRS, ASM and RDBMS) have been properly updated to this software version, a group of patches has to be applied before starting any Streams configuration:

Patch Number  Patch Type        Problems fixed
6915767       Streams Specific  ORA-600 WHEN REMOVING STREAMS
8268939                         STREAMS PERFORMANCE ADVISOR MODIFICATIONS
8268944                         DBMS_COMPARISON PATCH
7006588       Merged Patch      EXCESSIVE *.AUD FILES GENERATED
7242222                         CURSOR LEAK REPORTED IN STREAM ENVIRONMENT
6676049       Merged Patch      IMP TABLE WHICH HAS XMLTYPE COLUMN FAILS

3.1.2 Streams Cleanup

If the system where ALMA Streams will be configured already had Streams components configured in the past, a preliminary cleanup is needed to avoid conflicts or unexpected behaviors. If Streams components existed on the database, the Streams Administrator user exists as well and should be used exclusively during the cleanup procedure:

- Stop any Streams related process (capture, propagation, apply)
- Execute the following Oracle procedure:

SQL> exec dbms_streams_adm.remove_streams_configuration();

- As the SYS user (with the SYSDBA role): re-create the Streams Administrator user (see below)

3.1.3 Oracle SQL*Net Setup


Oracle Streams relies heavily on network connections, through database links, to provide its data sharing capabilities. For this reason the network layers have to be configured first at the operating system level; at a higher level, the Oracle-specific network must then be configured.

SQL*Net configuration depends on the role of the database. If it is a pure consumer, no setup is needed for Streams to work properly, which means that the ARCs do not need any configuration. At the OSF and SCO the SQL*Net configuration is quite extensive. Although it is not mandatory, it is strongly recommended to configure a descriptor for each database in the Streams structure that can be directly reached: the SCO private database, being the hub, has an entry for every database of the Streams environment, while the OSF only needs to reach the SCO.

Oracle SQL*Net for the ALMA Streams configuration is performed by means of two files located in $ORACLE_HOME/network/admin:

- tnsnames.ora: contains the descriptors to connect to databases
- sqlnet.ora: contains server-wide network parameters

3.1.3.1 OSF Configurations

sqlnet.ora:

DEFAULT_SDU=32767       # Maximum session data unit for Oracle 11.1
SEND_BUF_SIZE=500000    # Buffer sizes allow enlarging the TCP windows
RECV_BUF_SIZE=500000

tnsnames.ora (SCO database descriptor):

ALMA.SCO.CL =
  (DESCRIPTION =
    (SDU=32767)
    (SEND_BUF_SIZE=500000)
    (RECV_BUF_SIZE=500000)
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = oraclsco.sco.alma.cl)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = ALMA.SCO.CL)
    )
  )

3.1.3.2 SCO Configurations

sqlnet.ora:

DEFAULT_SDU=32767       # Maximum session data unit for Oracle 11.1
SEND_BUF_SIZE=500000    # Buffer sizes allow enlarging the TCP windows
RECV_BUF_SIZE=500000

tnsnames.ora (OSF, SCO public and ARC descriptors) [2]:

ALMA.OSF.CL =
  (DESCRIPTION =
    (SDU=32767)
    (SEND_BUF_SIZE=500000)
    (RECV_BUF_SIZE=500000)
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = oracl1)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = oracl2)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = oracl3)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = oracl4)(PORT = 1521))
      (LOAD_BALANCE = yes)
      (FAILOVER = on)
    )
    (CONNECT_DATA =
      (SERVICE_NAME = ALMA.OSF.CL)
      (failover_mode=(type=select)(method=basic))
    )
  )

ALMA.SCOPUB.CL =
  (DESCRIPTION =
    (SDU=32767)
    (SEND_BUF_SIZE=500000)
    (RECV_BUF_SIZE=500000)
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = oraclscopub)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = ALMA.SCOPUB.CL)
      (failover_mode=(type=select)(method=basic))
    )
  )

[2] A very large TCP window/buffer size has been used for the ARCs to take into account a 600-700 ms RTT delay.


ALMA.ARC.EA =
  (DESCRIPTION =
    (SDU=32767)
    (SEND_BUF_SIZE=1125000)
    (RECV_BUF_SIZE=1125000)
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = almaora01.mtk.nao.ac.jp)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = ALMA.ARC.EA)
      (failover_mode=(type=select)(method=basic))
    )
  )

ALMA.ARC.EU =
  (DESCRIPTION =
    (SDU=32767)
    (SEND_BUF_SIZE=1125000)
    (RECV_BUF_SIZE=1125000)
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = arcdb1.hq.eso.org)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = ALMA.ARC.EU)
      (failover_mode=(type=select)(method=basic))
    )
  )


ALMA.ARC.NA =
  (DESCRIPTION =
    (SDU=32767)
    (SEND_BUF_SIZE=1125000)
    (RECV_BUF_SIZE=1125000)
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = naarcserver)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = ALMA.ARC.NA)
      (failover_mode=(type=select)(method=basic))
    )
  )

Each database must be tested for connectivity before proceeding.
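A minimal connectivity check for each descriptor could look like the following (run from the server being configured):

$ tnsping ALMA.SCO.CL
$ sqlplus strmadmin@ALMA.SCO.CL
SQL> select * from global_name;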

3.1.4 Setup of Instance Parameters

A group of instance parameters must be set up for the Streams components to work smoothly. Some are mandatory; for others, the suggested value has proven optimal for the ALMA environment:


Parameter          Value
streams_pool_size  512M
global_names       TRUE
open_links         10 @ OSF, 20 @ SCO
undo_retention     7200
open_cursors       >=1000
processes          >=500 (suggested 750)

Some other parameters will probably need additional tuning to deal with the operating environment; any change should always be thoroughly evaluated taking into account the side effects on Streams.
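A sketch of how these values can be enforced; note that the static parameters (open_links, processes) only take effect after an instance restart:

ALTER SYSTEM SET streams_pool_size = 512M SCOPE=BOTH;
ALTER SYSTEM SET global_names      = TRUE SCOPE=BOTH;
ALTER SYSTEM SET undo_retention    = 7200 SCOPE=BOTH;
ALTER SYSTEM SET open_cursors      = 1000 SCOPE=BOTH;
ALTER SYSTEM SET open_links        = 10   SCOPE=SPFILE;  -- 20 at the SCO
ALTER SYSTEM SET processes         = 750  SCOPE=SPFILE;  -- restart required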

3.1.5 Log Mode (OSF and SCO only)

On the data producer databases the capture process generates and enqueues messages using the services of LogMiner, which needs access to information from the redo logs to operate. The information stored in the redo logs during normal operations is not enough for Streams: additional data, called supplemental logging, has to be appended to the redo records. Should this data be missing, the database would no longer be able to replicate. For this reason the OSF and SCO databases have to be configured in such a way that redo data is always captured and cannot be skipped in any way: force logging. Every other database in the Streams environment must be in archivelog mode.
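In plain SQL terms, archivelog mode and force logging are enabled as follows (archivelog mode requires a restart into MOUNT state):

SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE OPEN;
SQL> ALTER DATABASE FORCE LOGGING;

SQL> -- Verify: expect ARCHIVELOG / YES
SQL> SELECT log_mode, force_logging FROM v$database;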

3.1.6 The Streams Administrator User

Operations on the Oracle Streams environment should not be performed by the SYS or SYSTEM users. For safety and flexibility a special user, called the Streams Administrator, must be created; a special group of privileges is granted to this user to give it full control over Streams. In ALMA Streams the strmadmin user must be created by the SYS user using the script:

OSF and SCO:  osf/01-setup/strmadmin.sql
ARC:          arc/01-setup/strmadmin.sql

The script takes care of the following (a sketch of the equivalent statements follows the list):

- Creating the STRMADMIN tablespace


- Creating the STRMADMIN user
- Granting the needed permissions to the user (including the DBA role)
- Generating the Oracle Streams performance analysis package (OSF and SCO only)
- Creating the database link to the destination database (the DB_DOMAIN is asked for)
- Loading a group of stored procedures for Streams troubleshooting
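A minimal sketch of the equivalent statements (the password and storage clauses are hypothetical; the actual script also loads the ALMA-specific helper procedures):

CREATE TABLESPACE strmadmin DATAFILE SIZE 1G AUTOEXTEND ON;  -- assumes OMF

CREATE USER strmadmin IDENTIFIED BY "streams_pwd"
  DEFAULT TABLESPACE strmadmin QUOTA UNLIMITED ON strmadmin;

GRANT DBA TO strmadmin;

BEGIN
  DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE(grantee => 'strmadmin');
END;
/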

3.1.7 LogMiner Configuration (OSF and SCO only)

On the source databases the LogMiner tool must be configured to keep clear of the SYSAUX tablespace, which could otherwise fill up and bring the RDBMS to a hang. To perform this operation the following script must be used:

osf/01-setup/logminer.sql

The script will create a new tablespace and assign it to LogMiner.
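The script boils down to something like the following sketch (tablespace name and size are illustrative):

CREATE TABLESPACE logmnrts DATAFILE SIZE 512M AUTOEXTEND ON;  -- assumes OMF

BEGIN
  DBMS_LOGMNR_D.SET_TABLESPACE('LOGMNRTS');
END;
/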

3.2 OSF/SCO bi-directional configuration

The OSF and SCO share data at the ALMA schema level; the configuration for this environment is quite complex and can be performed using a group of scripts. It is advisable to stop any backup process before starting the configuration and to re-enable the schedule only after Streams replication has been started and tested. For ALMA this means commenting out the corresponding line in the root user crontab.

3.3 OSF configuration

3.3.1 Configuration steps

The SQL scripts needed for the configuration are fully parametric: every time one of them is run, some arguments must be provided. In general they are:

- The schema that is the object of the configuration
- The local/source database name (TNS)
- The remote/destination database name (TNS)

Each script has one or more testing counterparts that should be used to check the success of the configuration; they can also be used whenever the status of the environment must be verified.


3.3.1.1 Schema supplemental logging @ OSF

The first step of any ALMA Streams configuration is the definition of supplemental logging for all the tables of the schema to be replicated:
osf/02-scripts/01-local/01-supplemental_logging.sql
The related check script is:
osf/02-scripts/monitor/supplemental_log_groups.sql

3.3.1.2 Apply/Destination Queue creation @ OSF

The creation of the buffered queue permanently associated with the Apply process is performed by the following script:
osf/02-scripts/01-local/02-destination_queue.sql
The related check script is:
osf/02-scripts/monitor/propagation/01-queues.sql

3.3.1.3 Apply Process Configuration @ OSF

The Apply process, rulesets and rules are created by means of the following script:
osf/02-scripts/01-local/03-apply.sql
The related check script is:
osf/02-scripts/monitor/apply/01-queue_and_rules.sql
Since OSF/SCO is a bi-directional replication, loop prevention must be configured through the use of tags. The value of the tag can be checked with:
osf/02-scripts/monitor/apply/12-show_apply_tag.sql (expected value: 4F5346 = OSF)
Rulesets and rules can be investigated using the following two scripts:
osf/02-scripts/monitor/rules/01-ruleslist.sql
osf/02-scripts/monitor/rules/02-rules.sql
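For orientation, the kind of calls scripts 02 and 03 are expected to issue is sketched below; queue, process and global names follow the ones used later in this document, but the exact contents of the ALMA scripts may differ:

BEGIN
  -- Buffered queue permanently associated with the Apply process.
  DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'STRMADMIN.DEST_QUEUE_TABLE',   -- queue table name assumed
    queue_name  => 'STRMADMIN.DEST_QUEUE');
  -- Apply process plus schema-level positive rules for the ALMA schema.
  DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name     => 'ALMA',
    streams_type    => 'apply',
    streams_name    => 'APPLY',
    queue_name      => 'STRMADMIN.DEST_QUEUE',
    include_dml     => TRUE,
    include_ddl     => TRUE,
    source_database => 'ALMA.SCO.CL');             -- source global name as used elsewhere
END;
/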


3.3.1.4 Capture/Source Queue creation @ OSF

The creation of the buffered queue permanently associated with the Capture process is performed by the following script:
osf/02-scripts/01-local/04-capture_queue.sql
The related check script is:
osf/02-scripts/monitor/propagation/01-queues.sql

3.3.1.5 OSF → SCO Propagation configuration @ OSF

The Propagation process (schedule), rulesets and rules are created by means of the following script:
osf/02-scripts/01-local/05-propagation.sql
Note: this step will report an error for the propagation process if the destination (apply) queue does not yet exist at the remote site. A quick check is the following:
select PROPAGATION_NAME, ERROR_MESSAGE, ERROR_DATE from dba_propagation;
The error is perfectly safe and can be neglected.
The related check scripts are:
osf/02-scripts/monitor/propagation/04-propagation_schedule.sql
osf/02-scripts/monitor/propagation/08-propagation_details.sql
osf/02-scripts/monitor/propagation/09-propagation_schedule.sql
Loop prevention in the ALMA Streams environment is enforced by propagation rules that must be thoroughly checked:
osf/02-scripts/monitor/rules/01-ruleslist.sql
osf/02-scripts/monitor/rules/02-rules.sql
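A hedged sketch of the call that such a propagation script typically wraps (the propagation name and rule options are assumptions):

BEGIN
  DBMS_STREAMS_ADM.ADD_SCHEMA_PROPAGATION_RULES(
    schema_name            => 'ALMA',
    streams_name           => 'PROPAGATION',                      -- name assumed
    source_queue_name      => 'STRMADMIN.CAPTURE_QUEUE',
    destination_queue_name => 'STRMADMIN.DEST_QUEUE@ALMA.SCO.CL', -- over the db link
    include_dml            => TRUE,
    include_ddl            => TRUE,
    source_database        => 'ALMA.OSF.CL',
    queue_to_queue         => TRUE);  -- queue-to-queue propagation, as used by ALMA
END;
/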
3.3.1.6 Capture Process Configuration @ OSF

The Capture process, rulesets and rules are created by means of the following script:
osf/02-scripts/01-local/06-capture.sql


The related check scripts are:
osf/02-scripts/monitor/capture/01-processes.sql
osf/02-scripts/monitor/capture/07-capture_parameters.sql
Rulesets and rules can be drilled down using the following two scripts:
osf/02-scripts/monitor/rules/01-ruleslist.sql
osf/02-scripts/monitor/rules/02-rules.sql
During the capture process creation the schema tables are prepared for instantiation. In case of long running transactions, the definition of the process could take a long time, because this last step has to wait for lock release. A script to check for proper preparation is the following:
osf/02-scripts/monitor/tables_prepared_for_instantiation.sql

3.3.1.7 Dump of the schema to be replicated @ OSF

Multiple methods can be chosen to perform the initial data copy; for ALMA, Data Pump export/import is the preferred way. The ALMA schema must be consistently dumped and copied to the destination database (SCO). Consistency in this case means that each object has to be snapshot at a certain point in time (an SCN in this case). For this reason Flashback will be used during the data export:
Register the current SCN at the source database:
select dbms_flashback.get_system_change_number from dual;
Dump the ALMA schema at the SCN above:
expdp system directory=ALMA_DATA_PUMP_DIR dumpfile=ALMA_osf_cl.dmp logfile=ALMA_osf_cl.log schemas=ALMA flashback_scn=<REGISTERED_SCN> exclude=TABLE:\"IN \(\'XML_LOGENTRIES\'\)\" exclude=statistics
Transfer the dump to the path mapped to the ALMA_DATA_PUMP_DIR directory object at SCO


3.4 SCO configuration

All the scripts in this section must be executed at the OSF; each script will take care of connecting to the right database, based on the TNS name provided by the DBA. Where explicitly indicated, some steps must instead be performed on the SCO database directly.

3.4.1 Import of the Schema at SCO

The ALMA objects have to be imported at the SCO database into an empty schema. The following steps must be followed:
Check if the ALMA schema is empty:
select count(*) from dba_objects where owner='ALMA';
If the schema is populated, every object must be deleted before the Data Pump import phase
Import the dump @ SCO:
impdp system dumpfile=ALMA_osf_cl.dmp logfile=ALMA_osf_cl_IMPORT.log directory=ALMA_DATA_PUMP_DIR transform=oid:n
Check the Data Pump import logfile for severe errors. Some of them can be neglected, others easily corrected without the need for a new import
Gather statistics on the freshly imported schema @ SCO with the force option (a sketch follows this list)
Check that the supplemental log groups have been created @ SCO:
osf/02-scripts/monitor/supplemental_log_groups.sql
Check that the instantiation SCN has been set for all the tables during import @ SCO:
osf/02-scripts/monitor/tables_with_instantiation_scn.sql
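The statistics gathering step amounts to a call like this (the force option bypasses any locked statistics; the exact options used at ALMA are an assumption):

EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'ALMA', cascade => TRUE, force => TRUE);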

3.4.2 Schema fixing @ SCO

The imported ALMA schema must be corrected in some of its objects to make it ready for bi-directional replication. In particular:
Change of Archive UID:


update alma.xml_metainfo set value='A001' where name='archiveID';
commit;
Drop/Create the UID_SEQ sequence:
drop sequence uid_seq;
create sequence uid_seq start with 20 nocache order nocycle;
Fixing of the Materialized View refresh group:
Check for existing refresh jobs:
select * from user_jobs;
Execute, for each job number extracted in the previous step (a scripted variant follows this list):
exec dbms_job.remove(N);
Destroy the refresh group, if it exists:
exec dbms_refresh.destroy('OPERLOG_MV_GROUP');
Create the MV refresh group:
exec dbms_refresh.make('OPERLOG_MV_GROUP', '', SYSDATE + 1/96, 'SYSDATE + 1/96', FALSE, TRUE);
Add the Materialized Views to be refreshed to the group:
exec dbms_refresh.add('OPERLOG_MV_GROUP','MVIEW_NAME');
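Where several refresh jobs exist, the removal step can be scripted; a small sketch, assuming every job listed in user_jobs belongs to the refresh machinery being rebuilt:

BEGIN
  FOR j IN (SELECT job FROM user_jobs) LOOP
    DBMS_JOB.REMOVE(j.job);  -- drop each legacy refresh job
  END LOOP;
  COMMIT;
END;
/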

3.4.3 Object exclusion for the Capture process @ OSF

Some objects must be excluded from the replication for various reasons: some tables must not share content, others can make the Streams replication fail, and some object types don't need to be generated at the remote database. The exclusion rules are best defined at the Capture process level, minimizing the amount of data to be transmitted. A group of scripts will set up the exclusions properly (a sketch of the underlying mechanism follows the list):


Exclusion of MV tables:
osf/02-scripts/exclude/exclude_mv_tables.sql
Exclusion of sequence DDL (sequence values are never replicated). The script will modify one of the capture rules, and the rule number must be embedded in the script before running it:
osf/02-scripts/exclude/exclude_sequences.sql
Group of tables excluded from replication:
osf/02-scripts/exclude/exclude_tables.sql
Exclusion of XML index related objects:
osf/02-scripts/exclude/exclude_xml_indexes.sql
Exclusion of the Phase1 Manager related tables:
osf/02-scripts/exclude/exclude_ph1m.sql
Rulesets and rules can be checked using the following two scripts:
osf/02-scripts/monitor/rules/01-ruleslist.sql
osf/02-scripts/monitor/rules/02-rules.sql
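The mechanism behind these scripts is the addition of rules to the negative rule set of the Capture process; a hedged sketch for a single table (XML_LOGENTRIES is taken from the export exclusion above, and whether it appears in exclude_tables.sql is an assumption):

BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name     => 'ALMA.XML_LOGENTRIES',
    streams_type   => 'capture',
    streams_name   => 'CAPTURE',
    queue_name     => 'STRMADMIN.CAPTURE_QUEUE',
    include_dml    => TRUE,
    include_ddl    => TRUE,
    inclusion_rule => FALSE);  -- FALSE adds the rules to the NEGATIVE rule set
END;
/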

3.4.4 Apply/Destination Queue creation @ SCO

The creation of the buffered queue permanently associated with the Apply process is performed by the following script:
osf/02-scripts/02-remote/02-destination_queue.sql
The related check script is:
osf/02-scripts/monitor/propagation/01-queues.sql

3.4.5 Apply Process Configuration @ SCO

The Apply process, rulesets and rules are created by means of the following script:
osf/02-scripts/02-remote/03-apply.sql


The related check script is:
osf/02-scripts/monitor/apply/01-queue_and_rules.sql
Since SCO/OSF is a bi-directional replication, loop prevention must be configured through the use of tags. The value of the tag can be checked with:
osf/02-scripts/monitor/apply/12-show_apply_tag.sql (expected value: 53434F = SCO)
Rulesets and rules can be investigated using the following two scripts:
osf/02-scripts/monitor/rules/01-ruleslist.sql
osf/02-scripts/monitor/rules/02-rules.sql

3.4.6 Capture/Source Queue creation @ SCO

The creation of the buffered queue permanently associated with the Capture process is performed by the following script:
osf/02-scripts/02-remote/04-capture_queue.sql
The related check script is:
osf/02-scripts/monitor/propagation/01-queues.sql

3.4.7 SCO → OSF Propagation configuration @ SCO

The Propagation process (schedule), rulesets and rules are created by means of the following script:
osf/02-scripts/02-remote/05-propagation.sql
The related check scripts are:
osf/02-scripts/monitor/propagation/04-propagation_schedule.sql
osf/02-scripts/monitor/propagation/08-propagation_details.sql
osf/02-scripts/monitor/propagation/09-propagation_schedule.sql


Loop prevention in the ALMA Streams environment is enforced by propagation rules that must be thoroughly checked:
osf/02-scripts/monitor/rules/01-ruleslist.sql
osf/02-scripts/monitor/rules/02-rules.sql

3.4.8 Capture Process Configuration @ SCO

The Capture process, rulesets and rules are created by means of the following script:
osf/02-scripts/02-remote/06-capture.sql
The related check scripts are:
osf/02-scripts/monitor/capture/01-processes.sql
osf/02-scripts/monitor/capture/07-capture_parameters.sql
Rulesets and rules can be drilled down using the following two scripts:
osf/02-scripts/monitor/rules/01-ruleslist.sql
osf/02-scripts/monitor/rules/02-rules.sql
During the capture process creation the schema tables are prepared for instantiation. In case of long running transactions, the definition of the process could take a long time, because this last step has to wait for lock release. A script to check for proper preparation is the following:
osf/02-scripts/monitor/tables_prepared_for_instantiation.sql

3.4.9 Setup of instantiation SCN @ OSF

To operate properly, Streams needs some SCN information associated with the tables. In particular, each table at the OSF must have an instantiation SCN based on the value at the SCO before any Streams process is started.
osf/02-scripts/02-remote/07-set_instantiation_scn.sql
The related check script is the following:
osf/02-scripts/monitor/tables_prepared_for_instantiation.sql
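A sketch of what the set_instantiation_scn script plausibly performs when run at the OSF (link and global names follow the ones used elsewhere in this document):

DECLARE
  iscn NUMBER;
BEGIN
  -- Read the current SCN at the SCO through the database link...
  iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER@ALMA.SCO.CL;
  -- ...and record it as the instantiation SCN for the whole ALMA schema at the OSF.
  DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN(
    source_schema_name   => 'ALMA',
    source_database_name => 'ALMA.SCO.CL',
    instantiation_scn    => iscn,
    recursive            => TRUE);  -- also set the SCN for every table in the schema
END;
/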


3.4.10 Object exclusion for the Capture process @ SCO

Some objects must be excluded from the replication for various reasons: some tables must not share content, others can make the Streams replication fail, and some object types don't need to be generated at the remote database. The exclusion rules are best defined at the Capture process level, minimizing the amount of data to be transmitted. Note that currently all the Phase1 Manager objects are excluded from the bi-directional replication, with the exception of the INSTITUTION table, which is replicated from the SCO to the OSF but not in the opposite direction. A group of scripts will set up the exclusions properly:
Exclusion of MV tables:
osf/02-scripts/exclude-remote/exclude_mv_tables.sql
Exclusion of sequence DDL (sequence values are never replicated). The script will modify one of the capture rules, and the rule number must be embedded in the script before running it:
osf/02-scripts/exclude-remote/exclude_sequences.sql
Group of tables excluded from replication:
osf/02-scripts/exclude-remote/exclude_tables.sql
Exclusion of XML index related objects:
osf/02-scripts/exclude-remote/exclude_xml_indexes.sql
Exclusion of the Phase1 Manager related tables:
osf/02-scripts/exclude-remote/exclude_ph1m.sql
Rulesets and rules can be checked using the following two scripts:
osf/02-scripts/monitor/rules/01-ruleslist.sql
osf/02-scripts/monitor/rules/02-rules.sql

3.4.11 Test startup of the data flow


Before proceeding to Streams process fine tuning, it is advisable to test the global startup of the two data paths of ALMA Streams. This should be performed by starting the processes, testing some DMLs and DDLs on both sides and, at the end, stopping the processes. The golden rule is to always start the Apply side before the Capture/Propagation one. Database alert log files should be kept under careful control.

3.4.11.1 Streams processes startup

Site  SQL command
OSF   exec dbms_apply_adm.start_apply(apply_name => '"APPLY"');
SCO   exec dbms_apply_adm.start_apply(apply_name => '"APPLY"');
OSF   exec dbms_capture_adm.start_capture(capture_name => '"CAPTURE"');
SCO   exec dbms_capture_adm.start_capture(capture_name => '"CAPTURE"');
OSF   exec dbms_aqadm.enable_propagation_schedule(queue_name => '"STRMADMIN"."CAPTURE_QUEUE"', destination => 'ALMA.SCO.CL', destination_queue => '"STRMADMIN"."DEST_QUEUE"');
SCO   exec dbms_aqadm.enable_propagation_schedule(queue_name => '"STRMADMIN"."CAPTURE_QUEUE"', destination => 'ALMA.OSF.CL', destination_queue => '"STRMADMIN"."DEST_QUEUE"');

3.4.11.2 Streams processes shutdown

Site  SQL command
OSF   exec dbms_capture_adm.stop_capture(capture_name => '"CAPTURE"');
SCO   exec dbms_capture_adm.stop_capture(capture_name => '"CAPTURE"');
OSF   exec dbms_apply_adm.stop_apply(apply_name => '"APPLY"');
SCO   exec dbms_apply_adm.stop_apply(apply_name => '"APPLY"');
OSF   exec dbms_aqadm.disable_propagation_schedule(queue_name => '"STRMADMIN"."CAPTURE_QUEUE"', destination => 'ALMA.SCO.CL', destination_queue => '"STRMADMIN"."DEST_QUEUE"');
SCO   exec dbms_aqadm.disable_propagation_schedule(queue_name => '"STRMADMIN"."CAPTURE_QUEUE"', destination => 'ALMA.OSF.CL', destination_queue => '"STRMADMIN"."DEST_QUEUE"');

3.4.12 Configuration of Streams processes parameters

For the sake of performance and stability, the Streams processes must be configured to tune their behavior. This is possible by means of some Streams-specific parameters, which are configured using some special PL/SQL packages.
Apply process configuration to avoid stopping in the event of errors (see the sketch after this list):
osf/02-scripts/configure_streams_processes/change_apply.sql
Checkpoint retention policy for the Capture process (7 days):
osf/02-scripts/configure_streams_processes/checkpoint_retention.sql
Shortening of the propagation schedule (run every script at the OSF only):


osf/02-scripts/configure_streams_processes/alter_propagation_schedule.sql
osf/02-scripts/configure_streams_processes/alter_propagation_schedule_remote.sql
Prevention of Capture waiting for flow control:
osf/02-scripts/configure_streams_processes/capture_flow_control.sql
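A plausible one-liner behind change_apply.sql (disable_on_error is the standard Apply parameter for this purpose; its use in the ALMA script is an assumption):

exec dbms_apply_adm.set_parameter('APPLY', 'disable_on_error', 'N');  -- keep applying after an error is queued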

3.4.13 Temporary configurations

During the Cycle0 Proposal Submission phase, the Archive has been asked to prevent any possible deletion of the submitted proposals/projects. To accomplish this requirement, three triggers have been created on the database. They intercept any delete request and return an ORA- error in that case. The triggers are owned by the SYSTEM user, so that they can't be disabled by a user connecting to the ALMA schema. The objects are named:

Trigger Name                  Protected Table
OBSPROPOSAL_BLOCK_DELETE      XML_OBSPROPOSAL_ENTITIES
OBSPROJECT_BLOCK_DELETE       XML_OBSPROJECT_ENTITIES
OBSATTACHMENT_BLOCK_DELETE    XML_OBSATTACHMENT_ENTITIES

These triggers should be removed after Phase2 is complete.
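A minimal sketch of what one of these triggers plausibly looks like (the error number and message text are assumptions):

CREATE OR REPLACE TRIGGER system.obsproposal_block_delete
  BEFORE DELETE ON alma.xml_obsproposal_entities
BEGIN
  -- Reject every delete attempt on submitted proposals.
  RAISE_APPLICATION_ERROR(-20001, 'Deletion of submitted proposals is not allowed');
END;
/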

3.5 Tier-2/SCO spoke setup

Once the OSF/SCO bi-directional replication has been configured and tested, the ALMA Streams environment can be extended to the pure data consumers. The type of replication configured in this case is uni-directional. The configuration is quite simple on the Tier-2 side, while the SCO retains most of the complexity. A group of custom scripts has been produced to extend Streams to the ARCs. The scripts can be found in the main Oracle installation tree under the arc branch; they


are divided into two main sections: one applies to the Tier-2 database (spoke) and the other to the hub (SCO private). This chapter will only summarize the operations to be performed after all the prerequisites have been fulfilled. In particular:
All the Oracle patches must already be applied at the Tier-2 database
The SQL*Net network to connect SCO to the Tier-2 database must be configured and tested
Instance parameters must be applied and enforced at the Tier-2 database
The Streams Administrator user must already exist at the Tier-2 database
Any backup schedule should be stopped, and restarted after the spoke data flow is activated

3.5.1 Apply/Destination Queue creation @ Tier-2

The creation of the buffered queue permanently associated with the Apply process is performed by the following script:
arc/02-scripts/spoke/02-destination_queue.sql
The related check script is:
osf/02-scripts/monitor/propagation/01-queues.sql

3.5.2 Apply Process Configuration @ Tier-2

The Apply process, rulesets and rules are created by means of the following script:
arc/02-scripts/spoke/03-apply.sql
The related check script is:
osf/02-scripts/monitor/apply/01-queue_and_rules.sql
Rulesets and rules can be investigated using the following two scripts:
osf/02-scripts/monitor/rules/01-ruleslist.sql
osf/02-scripts/monitor/rules/02-rules.sql

3.5.3 SCO configuration extension pre-requisites


Before starting the SCO configuration, it is advisable (although not mandatory) to:
Stop the capture process at the OSF to reduce the data flow during the extension phase (also stop the Apply at SCO, since it is not needed while no capture is publishing for it)
Stop the Capture and any Propagation at SCO. This prevents any error from affecting the existing data flows. The processes can easily be restarted after all the rules have been thoroughly examined
Create a database link in the Streams Administrator schema to connect to the database to be added to the environment
Destroy any Materialized View refresh group, to avoid overloading the system during the initial alignment of the hub and spoke databases. It can be re-created after the completion of the setup process

3.5.4 SCO → Tier-2 Propagation configuration @ SCO

The Propagation process (schedule), rulesets and rules are created by means of the following script:
arc/02-scripts/hub/05-propagation.sql
The related check scripts are:
osf/02-scripts/monitor/propagation/04-propagation_schedule.sql
osf/02-scripts/monitor/propagation/08-propagation_details.sql
osf/02-scripts/monitor/propagation/09-propagation_schedule.sql
Propagation rules must be thoroughly checked:
osf/02-scripts/monitor/rules/01-ruleslist.sql
osf/02-scripts/monitor/rules/02-rules.sql

3.5.5 Schema preparation for instantiation

Before any other operation, the ALMA schema must be prepared for instantiation at the hub; this is needed to extract a copy of the Streams data dictionary to the redo logs for the remote apply process.
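Under the hood this boils down to a call like the following (the ALMA script is assumed to wrap it):

exec dbms_capture_adm.prepare_schema_instantiation(schema_name => 'ALMA');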


A script takes care of the operation:
arc/02-scripts/hub/instantiate_schema.sql
The related check script is:
osf/02-scripts/monitor/tables_prepared_for_instantiation.sql

3.5.6 Dump of the schema to be replicated @ SCO

The ALMA schema must be consistently dumped and copied to the destination database (Tier-2). Consistency in this case means that each object has to be snapshot at a certain point in time (an SCN in this case). For this reason Flashback will be used during the data export:
Register the current SCN at the source database:
select dbms_flashback.get_system_change_number from dual;
Dump the ALMA schema at the SCN above:
expdp system directory=ALMA_DATA_PUMP_DIR dumpfile=ALMA_sco_cl.dmp logfile=ALMA_sco_cl.log schemas=ALMA flashback_scn=<REGISTERED_SCN> exclude=TABLE:\"IN \(\'XML_LOGENTRIES\'\)\" exclude=statistics
Transfer the dump to the path mapped to the ALMA_DATA_PUMP_DIR directory object at the Tier-2 database

3.5.7 Import of the Schema at Tier-2

The ALMA objects have to be imported at the Tier-2 database into an empty schema. The following steps must be followed:
Check if the ALMA schema is empty:
select count(*) from dba_objects where owner='ALMA';
If the schema is populated, every object must be deleted before the Data Pump import phase
Import the dump @ Tier-2:


impdp system dumpfile=ALMA_sco_cl.dmp logfile=ALMA_sco_cl_IMPORT.log directory=ALMA_DATA_PUMP_DIR transform=oid:n
Check the Data Pump import logfile for severe errors. Some of them can be neglected, others easily corrected without the need for a new import
Gather statistics on the freshly imported schema @ Tier-2 with the force option
Check that the instantiation SCN has been set for all the tables during import @ Tier-2:
osf/02-scripts/monitor/tables_with_instantiation_scn.sql

3.5.8 Schema fixing @ Tier-2

The imported ALMA schema must be corrected in some of its objects to make it ready for replication. In particular:
Change of Archive UID (ARC and SCO public Archive UID values should be gathered from the following document: http://almasw.hq.eso.org/almasw/bin/view/Archive/ArchiveIdAssignment):
update alma.xml_metainfo set value='VALUES_FROM_TABLE' where name='archiveID';
commit;
Drop/Create the UID_SEQ sequence:
drop sequence uid_seq;
create sequence uid_seq start with 20 nocache order nocycle;
Fixing of the Materialized View refresh group:
Check for existing refresh jobs:
select * from user_jobs;
Execute, for each job number extracted in the previous step:
exec dbms_job.remove(N);


Destroy the refresh group, if it exists:
exec dbms_refresh.destroy('OPERLOG_MV_GROUP');
Create the MV refresh group:
exec dbms_refresh.make('OPERLOG_MV_GROUP', '', SYSDATE + 1/96, 'SYSDATE + 1/96', FALSE, TRUE);
Add the Materialized Views to be refreshed to the group:
exec dbms_refresh.add('OPERLOG_MV_GROUP','MVIEW_NAME');

3.5.9 Configuration of Streams processes parameters

For the sake of performance and stability, the Streams processes must be configured to tune their behavior. This is possible by means of some Streams-specific parameters, which are configured using some special PL/SQL packages.
Apply process configuration to avoid stopping in the event of errors @ Tier-2:
arc/02-scripts/spoke/change_apply.sql
Shortening of the propagation schedule (run at SCO only, for the new Propagation):
arc/02-scripts/hub/alter_propagation_schedule.sql

3.5.10 Startup of Streams processes

If the OSF/SCO bi-directional replication has been stopped at the beginning of the Tier-2 extension, it can be restarted now. The new propagation process that delivers data to the Tier-2 database must remain stopped.

3.5.10.1 Startup of Tier-2 Streams processes

exec dbms_apply_adm.start_apply(apply_name => '"APPLY"');

3.5.10.2 Startup of the SCO → Tier-2 Propagation

exec dbms_aqadm.enable_propagation_schedule(queue_name => '"STRMADMIN"."CAPTURE_QUEUE"', destination => 'DESTINATION_GLOBAL_NAME', destination_queue => '"STRMADMIN"."DEST_QUEUE"');


The data flow should start at this stage; allow at least 30 seconds for the process to start and the messages to be delivered. The following script can be used to monitor the flow of transactions at the Tier-2 site:
osf/02-scripts/monitor/apply/13-apply_progress.sql
A careful check of the alert log is needed to exclude unexpected complications.

4 Control of ALMA Streams processes

Controlling Oracle Streams means being able to start and stop the processes that are responsible for the data flow. To prevent data loss Oracle suggests a golden rule: always stop the capture/propagation side before stopping the apply one. The symmetric rule is: always start the apply process before capture/propagation. Of course this is possible only when a planned shutdown of the processes can be scheduled. If a server or network outage happens, the handshaking mechanism will take care not to lose any LCR or any transaction.

4.1 Control of the Capture process

4.1.1 Starting the Capture process

exec dbms_capture_adm.start_capture(capture_name => '"CAPTURE"');

4.1.2 Stopping the Capture process

exec dbms_capture_adm.stop_capture(capture_name => '"CAPTURE"');

4.2 Control of Propagation schedules

Although a procedure exists to stop and start the propagation process globally, it is strongly suggested to work on the propagation schedule only: it is the core of any propagation, and stopping it fully disconnects any queue-to-queue data flow. For each source database, propagation schedules are identified by:
The CAPTURE process
The destination database name


In particular, the latter must be explicitly set each time an operation on a schedule must be performed.

4.2.1 Starting the Propagation Schedule

exec dbms_aqadm.enable_propagation_schedule(queue_name => '"STRMADMIN"."CAPTURE_QUEUE"', destination => 'DESTINATION_DATABASE_GLOBAL_NAME', destination_queue => '"STRMADMIN"."DEST_QUEUE"');

4.2.2 Stopping the Propagation Schedule

exec dbms_aqadm.disable_propagation_schedule(queue_name => '"STRMADMIN"."CAPTURE_QUEUE"', destination => 'DESTINATION_DATABASE_GLOBAL_NAME', destination_queue => '"STRMADMIN"."DEST_QUEUE"');

4.3 Control of Apply process

4.3.1 Starting the Apply process

exec dbms_apply_adm.start_apply(apply_name => '"APPLY"');

4.3.2 Stopping the Apply process

exec dbms_apply_adm.stop_apply(apply_name => '"APPLY"');

5 Monitoring the ALMA Streams environment

Keeping the Streams environment under strict monitoring is the most important activity to be performed by the DBA once the whole system has been configured and tested. The main purpose of monitoring is detecting problems as early as possible and avoiding data loss. Most of the problems that may arise during common Streams activity are easily recoverable without the need to interrupt the data flow. Since the SCO is the pivot of the whole structure, it is the best place for monitoring activities; in fact every database in the structure is connected to the SCO via a database link, and consistency checks are extremely easy from this point of view. Either a bottom-up or a top-down approach can be used to check the proper behavior of the ALMA Streams environment. The most common problems will be detected at the Tier-2 databases, where, for whatever reason, data is not delivered or generates errors during apply.


The monitoring scripts are grouped under the osf branch of the scripts tree:
Archive/oInstall/scripts/replication/osf/02-scripts/monitor
This does not imply that they are only able to monitor the OSF database; on the contrary, the scripts should be used extensively to keep every Tier under control.

5.1 Enterprise Manager

Both Oracle Enterprise Manager Grid Control and Database Control have some built-in Streams monitoring capabilities. Unfortunately the Streams section is only able to scratch the surface of the system, and the use of the monitoring scripts is recommended. The Enterprise Manager Streams section is located under:
Data Movement → Streams → Manage
The graphs displayed in this section are quite useful to highlight the amount of data that has been recently moved.

5.2 Global Monitoring script

Each Oracle installation has a sysV init script that allows control of the Oracle RDBMS components. In addition, the global status of some components can be displayed by passing some options. Streams can be monitored by means of:
/etc/init.d/dbora streams
The script reports the current status of the Streams processes running on the system; if some of them are not running, an empty line is displayed. In case some errors are detected, they are reported in the relevant section.

5.3 Capture process monitoring

The scripts for this section are collected under the capture sub-branch. The following table summarizes the scripts used for capture-side monitoring:

Script                             Usage
01-processes.sql                   Display of Queue, Rule Sets, and Status of each Capture Process
02-capture_status.sql              Display of Change Capture Information (State Change and Message Creation Time) About each Capture Process
03-timed_statistics.sql            Display of Elapsed Time Performing Capture Operations for each Capture Process
04-registered_archlogs.sql         Display of Registered Redo Log Files for each Capture Process
05-required_archlogs.sql           Display of Redo Log Files Required by each Capture Process
06-redologs_SCN.sql                Display of SCN Values for each Redo Log File Used by each Capture Process, and of the Last Archived Redo Entry Available to each Capture Process
07-capture_parameters.sql          List of the Parameter Settings for each Capture Process
08-applied_SCN.sql                 Determines the Applied SCN for all Capture Processes in a Database
09-redo_scan_latency.sql           Determines Redo Log Scanning Latency for each Capture Process
10-message_enqueuing_latency.sql   Determines Message Enqueuing Latency for each Capture Process
11-rule_evaluation.sql             Display of Information About Rule Evaluations for each Capture Process
12-combined_capture_apply.sql      Determines Which Capture Processes Use Combined Capture and Apply
13-extra_attributes.sql            View of the Extra Attributes Captured by Implicit Capture

5.4 Propagation monitoring

The scripts for this section are collected under the propagation sub-branch. Propagation is highly related to scheduler jobs, hence a lot of useful queries will focus on them.


Queue-related monitoring scripts are also relevant to the apply and capture processes. The following table summarizes the scripts used for propagation monitoring:

Script                                    Usage
01-queues.sql                             Display of ANYDATA Queues in a Database
02-number_of_messages_in_the_queue.sql    Determining the Number of Messages in Each Buffered Queue
03-capture_process_enqueuing.sql          View of the Capture Processes for the LCRs in Each Buffered Queue
04-propagation_schedule.sql               Display of Information About Propagations that Send Buffered Messages
05-messages_sent.sql                      Display of the Number of Messages and Bytes Sent By Propagations
06-propagation_statistics.sql             Displaying Performance Statistics for Propagations that Send Buffered Messages
07-apply_process_dequeuing.sql            View of the Propagations and Apply Processes Dequeuing Messages from Each Buffered Queue
08-propagation_details.sql                Display of the Queues and Database Link for Each Propagation; determining the Source Queue and Destination Queue for Each Propagation
09-propagation_schedule.sql               Displaying the Schedule for a Propagation Job
10-messages_propagated.sql                Determining the Total Number of Messages and Bytes Propagated
11-propagation_errors.sql                 Summarizes Propagation Errors


5.5 Apply process monitoring

The scripts for this section are collected under the apply sub-branch. This section is the only one of interest for the Tier-2 databases, especially the ARCs. The following table summarizes the scripts used for apply-side monitoring:

Script                              Usage
01-queue_and_rules.sql              Determining the Queue, Rule Sets, and Status for Each Apply Process
02-apply_parameters.sql             Listing the Parameter Settings for Each Apply Process
03-reader_server.sql                Displaying Information About the Reader Server for Each Apply Process
04-spilled_messages.sql             Monitoring Transactions and Messages Spilled by Each Apply Process
05-last_dequeued_message.sql        Determining Capture to Dequeue Latency for a Message
06-coordinator_process.sql          Displaying General Information About Each Coordinator Process
07-applied_tnxs.sql                 Displaying Information About Transactions Received and Applied
08-capture_to_apply_latency.sql     Determining the Capture to Apply Latency for a Message for Each Apply Process
09-apply_server.sql                 Displaying Information About the Apply Servers for Each Apply Process
10-combined_capture_and_apply.sql   Determining Which Apply Processes Use Combined Capture and Apply
11-apply_errors.sql                 Checking for Apply Errors
12-show_apply_tag.sql               Showing the Apply Tag
13-apply_progress.sql               Displaying Information About the Progress of the Apply Process in SCN Terms


5.6 Rules Monitoring

Each Streams process has two associated rulesets, one positive and one negative. The rulesets are nothing more than containers for the rules, which specify the fate of each LCR. The Streams components do not use the rules directly; they interface with the rule evaluation engine, an Oracle built-in component. The scripts for this section are collected under the rules sub-branch.

Script            Usage
01-ruleslist.sql  Summary of the rules associated with each ruleset for a specific Streams process
02-rules.sql      Detail of the rules associated with the Streams process

6 ALMA Streams Troubleshooting

In case of problems with Streams, a careful analysis of the alert log and trace files should highlight the source of the issue. If the problem can't be easily identified and/or solved, Oracle Support must be involved. To properly open a Streams-related service request, at least the RDA report and the Streams Health Check script output must be uploaded. The Health Check script is available in the ALMA Streams scripts tree under:
diag_tools/streams_hc_11GR1.sql

6.1 Streams Alerts

Streams components produce alert log messages and trace files that can help detect possible issues. Alerts can be:
Stateless: alerts that indicate single events not necessarily tied to the system state. They are produced if any of the following happens:
A capture process aborts
A propagation aborts after 16 consecutive errors
An apply process aborts


An apply process with an empty error queue encounters an apply error
Stateful: alerts that are associated with a specific system state. They are produced if the following happens:
Oracle Streams pool memory usage exceeds the percentage specified by the STREAMS_POOL_USED_PCT metric
The activation of an alert can be detected from Enterprise Manager (an email alert should be generated) or by directly querying the database:
Outstanding stateless alerts:

SELECT REASON, SUGGESTED_ACTION FROM DBA_ALERT_HISTORY WHERE MODULE_ID LIKE '%STREAMS%';

Outstanding stateful alerts:

SELECT REASON, SUGGESTED_ACTION FROM DBA_OUTSTANDING_ALERTS WHERE MODULE_ID LIKE '%STREAMS%';

6.2 Management of Capture problems

Generally the capture process and the underlying LogMiner service are extremely robust and seldom fail. The most common critical condition for a capture process is when one of the corresponding apply processes is unable to dequeue and acknowledge the consumption of the messages at a pace fast enough to cope with the generation rate. In this case an internal mechanism will slow down the Capture process to prevent excessive queue spilling. In general this is a temporary situation and doesn't need any action on the DBA side. The following query will show the process state:
SELECT CAPTURE_NAME, STATE FROM V$STREAMS_CAPTURE;
If the Capture process stays in the state PAUSED FOR FLOW CONTROL for a long time, it is worth proceeding with an in-depth investigation. My Oracle Support contains a document that proposes a very systematic approach to diagnosing why the process is getting slow: ID 746247.1.


The most common causes of such a problem are:
Apply not able to deal with the data rate: this is likely to happen when a data migration is performed, given that the data rate grows to hundreds of times the average. The Apply process can be made more efficient with some parameter tuning. A script has been created for that; it should be applied after the process has been stopped:
osf/02-scripts/configure_streams_processes/apply_performance_enhancement.sql

After the data have converged, it is suggested to set the parallelism level back to 1:
exec dbms_apply_adm.set_parameter('APPLY','parallelism','1');

Very large transactions running: this case is very common in ALMA, because each transaction can produce large XML documents. In this case no DBA action is needed, since the system will recover automatically. To highlight the presence of large transactions the following query can be useful (MINUTES is a threshold, in minutes, identifying transactions that have been running longer than that):
col runlength HEAD 'Txn Open|Minutes' format 9999.99
col sid HEAD 'Session' format a13
col xid HEAD 'Transaction|ID' format a18
col terminal HEAD 'Terminal' format a10
col program HEAD 'Program' format a27 wrap

select t.inst_id, sid||','||serial# sid, xidusn||'.'||xidslot||'.'||xidsqn xid,
       (sysdate - start_date) * 1440 runlength, terminal, program
from gv$transaction t, gv$session s
where t.addr = s.taddr
  and (sysdate - start_date) * 1440 > <MINUTES>;

Low flow control threshold: by default, capture flow control kicks in when more than 5000 unbrowsed messages lie in the queue. This level is too low for ALMA, in particular during data migration. To increase the thresholds, a script is run during the configuration phase.


LogMiner memory area: the portion of the SGA used by the LogMiner component is 10MB by default. If the need for space rises beyond this limit, LogMiner reports an error to the Capture process and the latter shuts down. This memory allocation can be increased using the script:
osf/02-scripts/configure_streams_processes/logminer_sga.sql
Nevertheless, a bit of fine tuning is suggested. The portion of SGA in use is reported in the alert log, as well as in the Capture process parameters.

6.3 Management of Propagation problems

Propagation jobs can get stuck from time to time, ending in messages not being dispatched from the source to the destination queue. Detection of this kind of error is not always easy; a combination of the alert log, the trace files and the following query may help to isolate the cause:
COLUMN DESTINATION_DBLINK HEADING 'Database|Link' FORMAT A15
COLUMN STATUS HEADING 'Status' FORMAT A8
COLUMN ERROR_DATE HEADING 'Error|Date'
COLUMN ERROR_MESSAGE HEADING 'Error Message' FORMAT A35

SELECT DESTINATION_DBLINK, STATUS, ERROR_DATE, ERROR_MESSAGE FROM DBA_PROPAGATION WHERE PROPAGATION_NAME = 'STREAMS_PROPAGATION';

The error message often highlights where the problem lies. Often a simple restart of the schedule is enough.

6.4 Management of Apply errors

Apply process errors are the most likely to happen, due to conflicts or other mismatches between the data coming from the capture process and the data to be applied. (ALMA does not currently have an automatic update conflict resolution policy; however, conflicts are very unlikely to happen in ALMA and are limited to the scheduling block entities.) In particular, XML indexes are prone to corruption during data migration procedures. Should this happen, the Apply process will report an error (without stopping) and start enqueuing the failing transactions to the error queue. Given the large amount of data to be applied, the error queue could grow fast. It is highly suggested to drop any XML index before a data migration and recreate them afterwards. Other common problems are related to objects that should not be replicated, but are not explicitly excluded from Capture: only a handful of object types are automatically excluded from replication, most of them need explicit exclusion.


If a transaction can't be applied at the destination site, the Apply process tries to invoke an error handler (not defined for ALMA) and afterwards puts the transaction into the error queue. Carefully monitoring the error queue permits detecting apply errors quite soon and finding a way to solve them. Once the cause of the error has been removed, the transactions in the error queue can be applied again. Checking the error queue is performed with this simple query:
select * from dba_apply_error;
Once the failing transaction identifier has been isolated, it is possible to gather details through a stored procedure:
exec print_transaction(<TNX_ID>);

At this point it should be clear where the problem resides and which actions are needed to fix it. To re-execute a specific transaction the following procedure can be used:
BEGIN
  DBMS_APPLY_ADM.EXECUTE_ERROR(
    local_transaction_id => '<TNX_ID>',
    execute_as_user      => FALSE,
    user_procedure       => NULL);
END;
/

To re-execute all the transactions in the error queue:

BEGIN
  DBMS_APPLY_ADM.EXECUTE_ALL_ERRORS(
    apply_name      => 'APPLY',
    execute_as_user => FALSE);
END;
/

Executing all the transactions is safe: if the problem that prevented a transaction from being applied has not disappeared, the transaction simply stays in the error queue. Sometimes a transaction that went into error can be neglected because it is not needed on the destination database; in this case it can be removed from the error queue:
EXEC DBMS_APPLY_ADM.DELETE_ERROR(local_transaction_id => <TNX_ID>);

In case all the transactions in the error queue can be removed, carefully use this procedure:


EXEC DBMS_APPLY_ADM.DELETE_ALL_ERRORS(apply_name => 'APPLY');

6.4.1 Errors related to the Streams data dictionary

When a capture process is created, a duplicate data dictionary called the Streams data dictionary is populated automatically. The Streams data dictionary is a multi-versioned copy of some of the information in the primary data dictionary at a source database. It maps object numbers, object version information, and internal column numbers from the source database into table names, column names, and column data types when a capture process evaluates rules and creates LCRs. This mapping keeps each captured event as small as possible, because the event can store numbers rather than names. Whenever modifications are made to the structure of a database object with DDL, this information is maintained automatically. The error "MISSING Streams multi-version data dictionary!!!" occurs when the Streams data dictionary information for the specified object is not available in the database. This message is generated at an apply site in the apply reader trace file (P000). It is an informational message: when it occurs, the LCR is not applied, but the apply process does not disable or abort. In ALMA this error can occur when the object number specified in the trace file has not been prepared at the source (capture) site. To resolve the problem, follow these steps:
Identify the object that is missing. The trace file includes the identification information needed to determine the missing object.
At the source site, repopulate the Streams data dictionary for the object, or at the granularity required, using any of the following procedures:
DBMS_CAPTURE_ADM.PREPARE_TABLE_INSTANTIATION
DBMS_CAPTURE_ADM.PREPARE_SCHEMA_INSTANTIATION

Ensure that propagation is occurring from the source site to the destination site, so that the information will be loaded into the dictionary at the destination site.

6.5 Management of object exclusions

As the ALMA schema objects evolve, new entities are created and others are updated. To avoid problems with Streams replication, it is mandatory to pass any new or modified


object through a complete test process. The aim is to check whether the object is replicable; if it is not, the best exclusion strategy must be investigated and put in place. Both DDL and DML operations on the new object should be taken into account and excluded properly. Exclusion can be enforced at the schema or table level. Rules can be linked to any Streams process ruleset; of course, excluding at the capture level is the preferred method, since it minimizes the amount of data to be moved through the network. In general, data that doesn't need to be replicated should be confined in a schema separate from ALMA and, if needed, select privileges on the ALMA objects can be granted.

6.6 Management of ALMA software release migration

When ALMA deploys a new software release, the data format will change to fit the code and to adapt the data model to new pieces of information. In particular, a set of XSLT documents will be used to transform the XML documents by means of the internal Oracle transformation engine. In general only a small portion of the tables is involved in this process; nevertheless, it is time consuming and resource hungry. The amount of redo data produced and, as a consequence, of LCRs is extremely high. For this reason the recommended upgrade method is:
Test the impact of schema upgrade and replication thoroughly before getting the production databases involved
Stop any capture, propagation and apply throughout the entire Streams tree
Apply the schema upgrade to the OSF
Check carefully for any error
Restart the OSF → SCO replication and let the changes propagate and apply. This step could last many hours
Check carefully for any error during replication and for database consistency in XML terms
Restart the replication to the ARCs. A few days will be needed to have the databases aligned with the SCO one

7 Appendix A: Streams Split and Merge


Splitting and merging an Oracle Streams destination is useful under the following conditions:
A capture process captures changes that are sent to two or more destination databases
A destination queue stops accepting propagated changes captured by the capture process. In this case, the captured changes that cannot be sent to the destination queue remain in the source queue, causing the source queue size to increase
Two procedures are the fundamental tools to operate split and merge:
The SPLIT_STREAMS procedure splits off the stream for the problem propagation destination from all of the other streams flowing from a capture process to other destinations. The procedure clones the capture process, queue, and propagation; the cloned versions of these components are used by the stream that is split off.
The MERGE_STREAMS_JOB procedure determines whether or not the streams are within a user-specified merge threshold. If they are, then the MERGE_STREAMS_JOB procedure runs the MERGE_STREAMS procedure, which merges the stream that was split off back into the other streams flowing from the original capture process. If the streams are not within the merge threshold, then the MERGE_STREAMS_JOB procedure does nothing.
Splitting and merging is extremely important for the SCO private database, the hub of the whole structure, since one of the destinations could experience a long network outage, remaining disconnected for a long time. RMAN won't remove any of the redo log files that are still needed by one of the spokes.
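A hedged sketch of splitting off an unreachable spoke at the hub (11g API; the propagation name is illustrative and the parameter list is abridged, letting the cloned component names default):

DECLARE
  sched_name VARCHAR2(30);
  mjob_name  VARCHAR2(30);
BEGIN
  DBMS_STREAMS_ADM.SPLIT_STREAMS(
    propagation_name     => 'PROPAGATION_TO_ARC',  -- the stuck propagation, name assumed
    perform_actions      => TRUE,
    auto_merge_threshold => 60,   -- merge back automatically once the lag drops below 60 seconds
    schedule_name        => sched_name,
    merge_job_name       => mjob_name);
END;
/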
