Oracle Rdbms Keypoints
Might be of use as just one of your Exam Preparation "support files" for the Oracle exam 1Z0-052. But due to the limited scope of subjects, it might even be used for 1Z0-042 (10gR1/R2) as well. It will be marked where relevant differences exist between 10g and 11g.
It might be of use for a beginner, or somebody preparing for the exam, but it's of NO USE for experts.

Version:     3.1 - Revised Document. Please disregard previous versions.
Date:        10 December 2009
Compiled by: Albert van der Sel - Antapex
Usability:   Listing of some 10g / 11g RDBMS keypoints, that might be relevant for the exams.

Contents:

Chapter  1. Birds-eye view: Main categories of the most relevant DBA_ and V$ views.
Chapter  2. Oracle (background / instance) Processes 10g / 11g.
Chapter  3. Overview Architecture Memory and Instance.
Chapter  4. SPFILE.ORA and INIT.ORA startup parameters.
Chapter  5. Startup and shutdown of an Instance.
Chapter  6. Some keypoints on ADDM, AWR and ASH.
Chapter  7. Some keypoints on Memory configuration.
Chapter  8. Some keypoints on Backup & Restore & Recovery 11g.
Chapter  9. Create Database Objects.
Chapter 10. Some keypoints about Users, Roles, Security.
Chapter 11. Implementing Auditing.
Chapter 12. ADR, ADRCI, incident packaging, logs and traces.
Chapter 13. Some keypoints in Health Monitoring.
Chapter 14. Some keypoints on Network Configurations.
Chapter 15. Some keypoints on Constraints.
Chapter 17. Some keypoints on Resource Management.
Chapter 18. Some keypoints on FLASHBACK options.
Important Note: hopefully you will like this file, but it should be stressed that it certainly does not cover all exam objectives. This is pretty obvious, of course. So, it can only be viewed as "just" one of your (many other) support files in studying Oracle 11g.
Chapter 1. Birds-eye view: Main categories of the most relevant DBA_ and V$ views of a Single Instance.
For the static views, only "DBA_" is listed, and not the projections on "USER_" or "ALL_".
Important: below is just a very small subset of all DBA_ and V$ views. RMAN views are not included.

PROCESSES, SESSIONS:
V$SESSION, V$PROCESS, V$ACTIVE_SESSION_HISTORY

LOCKS, TRANSACTIONS:
V$LOCK, V$TRANSACTION, V$LOCKED_OBJECT, DBA_LOCK, DBA_LOCK_INTERNAL, DBA_BLOCKERS,
DBA_DDL_LOCKS, DBA_DML_LOCKS, DBA_WAITERS
TABLES, STATISTICS:
DBA_TAB_HISTOGRAMS, DBA_TAB_STATISTICS, DBA_TAB_MODIFICATIONS, DBA_ENCRYPTED_COLUMNS

SGA, POOLS:
V$SGA, V$SGASTAT, V$SGAINFO, V$BUFFER_POOL, V$JAVA_POOL, V$LIBRARYCACHE, V$LIBRARY_CACHE_MEMORY,
V$DB_OBJECT_CACHE, V$PGASTAT, V$MEMORY_DYNAMIC_COMPONENTS, V$MEMORY_TARGET_ADVICE, V$SGA_TARGET_ADVICE,
V$SQL_SHARED_MEMORY, V$RESULT_CACHE_MEMORY, V$SGA_DYNAMIC_FREE_MEMORY, V$JAVA_LIBRARY_CACHE_MEMORY,
V$PROCESS_MEMORY, V$PROCESS_MEMORY_DETAIL, V$PROCESS_MEMORY_DETAIL_PROG, V$MEMORY_RESIZE_OPS,
V$MEMORY_CURRENT_RESIZE_OPS, DBA_HIST_MEMORY_RESIZE_OPS, DBA_HIST_MEMORY_TARGET_ADVICE
WAITS / HOTSPOTS:
V$FILESTAT, V$SYSTEM_EVENT, V$SEGMENT_STATISTICS, V$EVENT_NAME, V$SYSTEM_WAIT_CLASS, V$SESSION_WAIT,
V$SESSION_WAIT_HISTORY, DBA_HIST_ACTIVE_SESS_HISTORY, V$SYSSTAT, V$SESSTAT, V$STATNAME, V$MYSTAT,
V$RESOURCE_LIMIT, V$OSSTAT, V$SESSION_LONGOPS, V$OBJECT_USAGE

ADDM / AWR:
DBA_ADVISOR_ACTIONS, DBA_ADVISOR_COMMANDS, DBA_ADVISOR_DEFINITIONS, DBA_ADVISOR_DEF_PARAMETERS,
DBA_ADVISOR_DIR_DEFINITIONS, DBA_ADVISOR_DIR_INSTANCES, DBA_ADVISOR_DIR_TASK_INST, DBA_ADVISOR_EXECUTIONS,
DBA_ADVISOR_EXECUTION_TYPES, DBA_ADVISOR_EXEC_PARAMETERS, DBA_ADVISOR_FDG_BREAKDOWN, DBA_ADVISOR_FINDINGS,
DBA_ADVISOR_FINDING_NAMES, DBA_ADVISOR_JOURNAL, DBA_ADVISOR_LOG, DBA_ADVISOR_OBJECTS,
DBA_ADVISOR_OBJECT_TYPES, DBA_ADVISOR_PARAMETERS, DBA_ADVISOR_PARAMETERS_PROJ, DBA_ADVISOR_RATIONALE,
DBA_ADVISOR_RECOMMENDATIONS, DBA_ADVISOR_SQLPLANS, DBA_ADVISOR_SQLSTATS, DBA_ADVISOR_TASKS,
DBA_ADVISOR_TEMPLATES, DBA_ADVISOR_USAGE, DBA_ADDM_FDG_BREAKDOWN, DBA_ADDM_FINDINGS, DBA_ADDM_INSTANCES,
DBA_ADDM_SYSTEM_DIRECTIVES, DBA_ADDM_TASKS, DBA_ADDM_TASK_DIRECTIVES, V$STATISTICS_LEVEL,
DBA_HIST_SNAPSHOT, DBA_HIST_WR_CONTROL, DBA_HIST_SYSTEM_EVENT, DBA_HIST_ACTIVE_SESS_HISTORY
CONSTRAINTS:
DBA_CONSTRAINTS

ONLINE REDOLOGS:
V$LOG, V$LOGFILE

INSTANCE / DATABASE:
V$INSTANCE, V$DATABASE, DATABASE_PROPERTIES, GLOBAL_NAME

OS:
V$OSSTAT

TIMEZONES:
V$TIMEZONE_NAMES

CONTROLFILES:
V$CONTROLFILE
Chapter 2. Oracle (background / instance) Processes 10g / 11g.

Oracle uses many (focused) processes that are part of the Oracle instance. The following is a short list of the most important background processes. You can also query v$bgprocess to view the name and description of all (active and inactive) background processes.

Database writer (DBWn)
The database writer writes modified blocks from the database buffer cache to the datafiles. Oracle Database allows a maximum of 20 database writer processes.
Log writer (LGWR)

The log writer process writes redo log entries to disk. Redo log entries are generated in the redo log buffer of the System Global Area (SGA), and the log writer process writes the redo log entries sequentially into an online redo log file.
Checkpoint (CKPT)
At specific times, all modified database buffers in the SGA are written to the datafiles by DBWn. This event is called a checkpoint. The checkpoint process signals DBWn, and updates both the controlfile and the datafiles to indicate when the last checkpoint occurred.
Important: Thus the Checkpoint process writes checkpoint information to the controlfile and datafile headers; it does not write the data blocks themselves.

System monitor (SMON)

The system monitor performs instance recovery when a failed instance is started again.
Process monitor (PMON)

The process monitor performs a recovery when a user process fails. It cleans up the cache and frees resources that the failed process was using or holding.

Archiver (ARCn)

Archiver processes copy the redo log files to archival storage when the redo logs are full, or when a log switch occurs. The database must be in archive log mode to run archive processes.
Manageability monitors (MMON, MMNL)

The MMON process performs various management-related background tasks, for example:
- Issuing alerts whenever a given metric violates its threshold value
- Taking snapshots by spawning additional processes
- Capturing statistical values for SQL objects that have been recently modified
MMON and MMNL collect statistics for the Automatic Workload Repository (AWR).

Job queue coordinator (CJQ0)

The job queue coordinator process wakes up periodically and checks the job log, spawning slave processes for jobs that are due.
Rebalance master (RBAL) and ARBn

This is an ASM-related process that performs rebalancing of disk resources.
(ASM = Automatic Storage Management = special storage structure, managed by a separate ASM instance.)
An Automatic Storage Management instance contains two main background process types. The first one coordinates rebalance activity for disk groups: this is RBAL. The second one performs the actual rebalance data extent movements; there can be many of these at a time, and they are called ARB0, ARB1, and so forth. An Automatic Storage Management instance also has most of the same background processes as a REGULAR database instance (SMON, PMON, LGWR, and so on).
Virtual keeper of time (VKTM)

Provides reference time for other processes. VKTM acts as a time publisher for an Oracle instance. VKTM publishes two sets of time: a wall-clock time, using a seconds interval, and a higher-resolution time for interval measurements.

Flashback data archiver (FBDA)

The flashback data archiver writes old row-versions of tables with 'flashback archive' enabled, into the flashback archive.
Diagnosibility process (DIAG)

The diagnosibility process (DIAG) runs oradebug commands and triggers diagnostic dumps, as part of the ADR (automatic diagnostic repository) feature, which is a replacement (and enhancement) of the former dump destinations.
Note: RDA (Remote Diagnostics Agent) is a utility that can be downloaded from Oracle Support.
Space management coordinator (SMCO)

The space management coordinator (SMCO) and its slaves (Wnnn) perform space allocation and space reclamation tasks.
In addition to the background processes, a process viewer on the host will also show the user (server) processes. Note that the implementation differs per platform. For example, on Windows:

oracle.exe is the single Oracle process, whilst the background "modules" above are threads within it. You must use a "process viewer" tool that can display threads, to see them individually.
SQL> SELECT pid, spid, program, background FROM v$process WHERE BACKGROUND=1;
PID        SPID       PROGRAM                                  B
---------- ---------- ---------------------------------------- -
2          7184       ORACLE.EXE (PMON)                        1
(etc., one row for each background process)
Here we have used the v$session and v$process views to list the background processes.
Use, in sqlplus or SQL Developer, the "desc" command to see what fields you can select from those views, like:

desc v$process
desc v$session
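As noted above, you can also use v$bgprocess directly. A minimal sketch (the PADDR filter is a common idiom to restrict the output to the background processes actually running; drop the WHERE clause to see the inactive ones too):

SELECT name, description
FROM   v$bgprocess
WHERE  paddr <> '00'    -- only processes that are actually started
ORDER  BY name;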
- An instance is a set of memory structures that manage database files. The instance consists of a shared memory area, called the system global area (SGA), and a set of background processes. An instance can exist independently of database files.
- A database is a set of files, located on disk, that store data. These files can exist independently of a running instance. Suppose the Instance is not running: the database files (which are just files) still exist on the filesystem.
[Figure: schematic of the Shared Pool and related SGA caches - the Library Cache (containing the Shared SQL Area: SQL execution plans, parsed SQL, parsed/compiled PL/SQL units), the Server Result Cache (SQL Query Result Cache and PL/SQL Result Cache), and other structures. Clients connect to the instance via the LISTENER.]
PGA: the Program Global Area (PGA) is a memory area that contains data and control information for a server process.
Access to the PGA is exclusive to server processes.
The actual location of a private SQL area depends on the session's connection.
For a session connected through a dedicated server, private SQL areas are located in the server process's PGA.
If a session is connected through a shared server, part of the private SQL area is kept in the SGA.

Some example queries to view your SGA and Instance PGA:

set linesize 1000
set pagesize 1000
SELECT * FROM v$sga;
SELECT * FROM v$sgastat;
SELECT * FROM v$pgastat;
SELECT * FROM v$memory_target_advice ORDER BY memory_size;
SELECT SUBSTR(COMPONENT,1,20), CURRENT_SIZE, MIN_SIZE, MAX_SIZE, USER_SPECIFIED_SIZE
FROM V$MEMORY_DYNAMIC_COMPONENTS;

SELECT sum(bytes) FROM v$sgastat WHERE pool in ('shared pool', 'java pool', 'large pool');

-- buffer cache hit ratio:
SELECT (1-(pr.value/(dbg.value+cg.value)))*100
FROM   v$sysstat pr, v$sysstat dbg, v$sysstat cg
WHERE  pr.name  = 'physical reads'
AND    dbg.name = 'db block gets'
AND    cg.name  = 'consistent gets';
SELECT * FROM v$sgastat WHERE name = 'free memory';

SELECT gethits, gets, gethitratio FROM v$librarycache WHERE namespace = 'SQL AREA';
SELECT SUM(PINS) "EXECUTIONS", SUM(RELOADS) "CACHE MISSES WHILE EXECUTING" FROM V$LIBRARYCACHE;
SQL> show parameter target

which shows you all parameters containing "target" in their name.

Oracle Managed Files OMF:

DB_CREATE_FILE_DEST
DB_CREATE_ONLINE_LOG_DEST_n
DB_CREATE_FILE_DEST specifies the default location for Oracle-managed datafiles. This location is also used for Oracle-managed control files and online redo logs, if none of the DB_CREATE_ONLINE_LOG_DEST_n parameters are specified.

DB_CREATE_FILE_DEST          Default database file location ['Path_to_directory']
DB_CREATE_ONLINE_LOG_DEST_n  Online log/controlfile destination (where n=1-5) ['Path']

DB_CREATE_ONLINE_LOG_DEST_n (where n = 1, 2, 3, ... 5) specifies the default location for Oracle-managed control files and online redo logs. If more than one DB_CREATE_ONLINE_LOG_DEST_n parameter is specified, the control file or online redo log is multiplexed across the locations of the other DB_CREATE_ONLINE_LOG_DEST_n parameters. One member of each online redo log is created in each location, and one control file is created in each location. (11gR2)
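A minimal sketch of putting OMF into effect dynamically (the paths are just examples):

ALTER SYSTEM SET DB_CREATE_FILE_DEST='/u02/oradata' SCOPE=BOTH;
ALTER SYSTEM SET DB_CREATE_ONLINE_LOG_DEST_1='/u03/oradata' SCOPE=BOTH;
ALTER SYSTEM SET DB_CREATE_ONLINE_LOG_DEST_2='/u04/oradata' SCOPE=BOTH;
-- from now on, a CREATE TABLESPACE without a DATAFILE clause creates
-- an Oracle-managed datafile under /u02/oradata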
In 11gR2, the "FLASH RECOVERY AREA" is renamed to "FAST RECOVERY AREA". DB_RECOVERY_FILE_DEST
DB_RECOVERY_FILE_DEST_SIZE
DB_RECOVERY_FILE_DEST = directory / filesystem, or ASM disk group
DB_RECOVERY_FILE_DEST specifies the default location for the flash recovery area, which holds (among others) archived redo logs, flashback logs, and RMAN backups.
Specifying this parameter without also specifying the DB_RECOVERY_FILE_DEST_SIZE initialization parameter is not allowed.
DB_RECOVERY_FILE_DEST_SIZE specifies (in bytes) the hard limit on the total space to be used by target database recovery files created in the flash recovery area.
A flash recovery area is a location in which Oracle Database can store all files related to backup and recovery. It is distinct from the database area.
You specify a flash recovery area with the following initialization parameters:
DB_RECOVERY_FILE_DEST
DB_RECOVERY_FILE_DEST_SIZE
You cannot enable these parameters if you have set values for the LOG_ARCHIVE_DEST and LOG_ARCHIVE_DUPLEX_DEST parameters. You must disable those parameters before setting up the flash recovery area. You can instead set the LOG_ARCHIVE_DEST_n parameters. If you do not set values for local LOG_ARCHIVE_DEST_n, then setting up the flash recovery area will implicitly set LOG_ARCHIVE_DEST_10 to the flash recovery area.
Oracle recommends using a flash recovery area, because it can simplify backup and recovery operations for your database.

DB_FLASHBACK_RETENTION_TARGET
specifies in minutes how far back you can "flashback" the database.
How far back one can actually flashback the database depends on how much flashback data Oracle has kept in the recovery area.
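For example, to target roughly a two-day flashback window (2880 minutes; the value is just an illustration):

ALTER SYSTEM SET DB_FLASHBACK_RETENTION_TARGET=2880 SCOPE=BOTH;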
Example: Here is an example of how to create a datafile using a default disk group specified by an initialization parameter.
Suppose the Database initialization parameter file is set as follows:

DB_CREATE_FILE_DEST = +dskgrp01

If you now create a tablespace:

SQL> CREATE TABLESPACE SALESDATA;

it will be stored in +dskgrp01.

Automatic Diagnostic Repository ADR:
Starting in Oracle 11g we no longer have many of the original OFA file system structures, and we see that the dump destination init.ora parameters (core_dump_dest, background_dump_dest, user_dump_dest) are replaced by a single parameter: DIAGNOSTIC_DEST. This is specific to 11g. 10g uses:
- core_dump_dest
- background_dump_dest
etc.
DIAGNOSTIC_DEST = { pathname | directory }
As of Oracle Database 11g Release 1, the diagnostics for each database instance are located in a dedicated directory, which can be specified through the DIAGNOSTIC_DEST initialization parameter. The structure of the directory specified by DIAGNOSTIC_DEST is as follows:
<diagnostic_dest>/diag/rdbms/<dbname>/<instname>
This location is known as the Automatic Diagnostic Repository (ADR) Home. For example, if the database name is proddb and the instance name is proddb1, the ADR home directory would be <diagnostic_dest>/diag/rdbms/proddb/proddb1.
So, if the DIAGNOSTIC_DEST was set to "C:\ORACLE", you would find the XML-format alert log in:
"C:\oracle\diag\rdbms\test11g\test11g\alert\log.xml"
The old plain-text alert.log is still available in:
"C:\oracle\diag\rdbms\test11g\test11g\trace\alert_test11g.log"
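In 11g you can also simply ask the instance where its ADR locations are:

SELECT name, value
FROM   v$diag_info;    -- shows ADR Base, ADR Home, Diag Trace, Diag Alert, etc.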
DB_DOMAIN
The DB_DOMAIN parameter, which is optional, indicates the domain (logical location) within a network structure. The combination of the settings for the DB_NAME and DB_DOMAIN parameters must form a database name that is unique within a network.
For example, to create a database with a global database name of test.us.acme.com, edit the parameters of the new parameter file as follows:

DB_NAME = test
DB_DOMAIN = us.acme.com
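You can verify the resulting global database name afterwards:

SELECT * FROM global_name;   -- should return TEST.US.ACME.COM in this example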
The DB_BLOCK_SIZE initialization parameter specifies the standard block size of the database. This block size is used for the SYSTEM tablespace, and by default in other tablespaces. Oracle Database can support up to four additional nonstandard block sizes.
Typical values: 2048, 4096, 8192 (the common default), 16K, 32K.
Tablespaces of nonstandard block sizes can be created using the CREATE TABLESPACE statement and specifying the BLOCKSIZE clause. These nonstandard block sizes can have any of the following power-of-two values: 2K, 4K, 8K, 16K or 32K.
To use nonstandard block sizes, you must configure subcaches within the buffer cache area of the SGA memory for all of the nonstandard block sizes that you intend to use, as shown in the sketch below.
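A minimal sketch, using the DB_nK_CACHE_SIZE subcache parameters (the tablespace name, path and sizes are just illustrations):

ALTER SYSTEM SET DB_16K_CACHE_SIZE=64M SCOPE=BOTH;   -- subcache for the 16K blocksize

CREATE TABLESPACE ts_16k
  DATAFILE '/u02/oradata/ts_16k_01.dbf' SIZE 100M
  BLOCKSIZE 16K;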
PROCESSES

The PROCESSES initialization parameter determines the maximum number of operating system processes that can be connected to Oracle Database concurrently. The value of this parameter must be a minimum of one for each background process, plus one for each user process. The number of background processes will vary depending on the database features that you are using. For example, if you are using Advanced Queuing or the file mapping feature, you will have additional background processes. If you are using Automatic Storage Management, then add three additional processes for the database instance.
SESSIONS
SESSIONS specifies the maximum number of sessions that can be created in the system. Because every login requires a session, this parameter effectively determines the maximum number of concurrent users in the system. You should always set this parameter explicitly to your estimate of the maximum number of concurrent users, plus the number of background processes, plus approximately 10% for recursive sessions.
Default: Derived: (1.1 * PROCESSES) + 5
SEC_CASE_SENSITIVE_LOGON

true | false
Beginning with Oracle Database 11g Release 1, database passwords are case-sensitive. (You can disable case sensitivity, and return to pre-Release 11g behavior, by setting the SEC_CASE_SENSITIVE_LOGON init.ora parameter to false.)
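For example, to fall back to the 10g behavior:

ALTER SYSTEM SET SEC_CASE_SENSITIVE_LOGON=FALSE;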
REMOTE_LOGIN_PASSWORDFILE
exclusive = password file authentication | none = OS authentication | shared
If the database has a password file, and you have been granted the SYSDBA or SYSOPER system privilege, then you can connect and be authenticated by a password file.
LDAP_DIRECTORY_SYSAUTH
yes | no
You can now use the Secure Sockets Layer (SSL) and Kerberos strong authentication methods to authenticate users who have the SYSDBA and SYSOPER privileges.
REMOTE_OS_AUTHENT
true | false
Default=FALSE. Deprecated in 11g; it's still there for compatibility reasons.
If remote_os_authent = FALSE, remote users will be unable to connect without a password, and accounts created with IDENTIFIED EXTERNALLY will only be in effect from the local host.
11g: Complete Automatic Memory Management: AMM

MEMORY_TARGET specifies the Oracle system-wide usable memory. The database tunes memory to the MEMORY_TARGET value, reducing or enlarging the SGA and PGA as needed. Thus it makes using SGA_TARGET and PGA_AGGREGATE_TARGET unnecessary in 11g.
MEMORY_MAX_TARGET sets an upper limit to what MEMORY_TARGET can get.
If using MEMORY_TARGET and MEMORY_MAX_TARGET, you should set SGA_TARGET=0 and PGA_AGGREGATE_TARGET=0, or not specify them at all in the init/spfile. If you do assign values to SGA_TARGET and PGA_AGGREGATE_TARGET, they will function as minimum levels.
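A minimal sketch of enabling full AMM (the sizes are just illustrations; MEMORY_MAX_TARGET is a static parameter, so a restart is needed):

ALTER SYSTEM SET MEMORY_MAX_TARGET=1G     SCOPE=SPFILE;
ALTER SYSTEM SET MEMORY_TARGET=800M       SCOPE=SPFILE;
ALTER SYSTEM SET SGA_TARGET=0             SCOPE=SPFILE;
ALTER SYSTEM SET PGA_AGGREGATE_TARGET=0   SCOPE=SPFILE;
-- then: SHUTDOWN IMMEDIATE; STARTUP;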
11g: Partial Automatic Memory Management - ASMM - Automatic Shared Memory Management:

SGA_TARGET specifies the total size of all SGA components. If SGA_TARGET is set to a nonzero value, then the following memory pools are automatically sized:
- Buffer cache (DB_CACHE_SIZE)
- Shared pool (SHARED_POOL_SIZE)
- Large pool (LARGE_POOL_SIZE)
- Java pool (JAVA_POOL_SIZE)
- Streams pool (STREAMS_POOL_SIZE)
0: SGA autotuning is disabled. Nonzero: Automatic Shared Memory Management (ASMM) is then enabled.
If these automatically tuned memory pools are set to nonzero values, then those values are used as minimum levels by Automatic Shared Memory Management. You would set minimum values if an application component needs a minimum amount of memory to function properly.
The following pools are manually sized components, and are not affected by Automatic Shared Memory Management:
- Log buffer
- Other buffer caches, such as KEEP, RECYCLE, and other block sizes
- Fixed SGA and other internal allocations

SGA_MAX_SIZE
The SGA_MAX_SIZE initialization parameter specifies the maximum size of the System Global Area for the lifetime of the instance. You can dynamically alter the initialization parameters affecting the size of the buffer caches, shared pool, large pool, Java pool, and streams pool, but only to the extent that the sum of these sizes and the sizes of the other components of the SGA (fixed SGA, variable SGA, and redo log buffers) does not exceed the value specified by SGA_MAX_SIZE.

DB_CACHE_SIZE      Size of the Buffer Cache (for the primary blocksize). Specify in bytes; the default
                   is either 48 MB or 4 MB * number of CPUs, whichever is greater.
SHARED_POOL_SIZE   Size of the Shared Pool.
LARGE_POOL_SIZE    Size of the Large Pool.
JAVA_POOL_SIZE     Size of the Java Pool.
STREAMS_POOL_SIZE  Size of the Streams Pool.
PGA_AGGREGATE_TARGET specifies the target aggregate PGA memory available to all server processes attached to the instance. So, it's the target for the total memory assigned to the server processes.
The default is: 10 MB or 20% of the size of the SGA, whichever is greater.
Setting PGA_AGGREGATE_TARGET to 0 automatically sets the WORKAREA_SIZE_POLICY parameter to MANUAL. This means that SQL workareas are then sized using the old style *_AREA_SIZE parameters.
Oracle does not recommend using the SORT_AREA_SIZE parameter unless the instance is configured with the shared server option. Oracle recommends that you enable automatic sizing of SQL workareas by setting PGA_AGGREGATE_TARGET instead. SORT_AREA_SIZE is retained for backward compatibility.

MEMORY_TARGET

Again, for 11g: this is one level higher than "SGA_TARGET", because MEMORY_TARGET redistributes memory between the SGA and the instance PGA as needed. So, setting this parameter means full Automatic Memory Management.

UNDO:

UNDO_TABLESPACE

UNDO_TABLESPACE = undotbs_01
If the database contains multiple undo tablespaces, you can optionally specify at startup which undo tablespace the instance should use.
UNDO_RETENTION

Default: 900 seconds
After a transaction is committed, undo data is no longer needed for rollback. However, for consistent read purposes, long-running queries may require old undo information for producing older images of data blocks. Furthermore, the success of several Flashback features can also depend upon the availability of older undo information.
Old undo information with an age that is less than the current undo retention period is said to be unexpired, and is retained for consistent read and Oracle Flashback operations.
Normally, DML operations have priority over retaining committed undo data. If the undo retention threshold must be guaranteed, even at the expense of DML operations, the RETENTION GUARANTEE clause can be set against the undo tablespace, during or after its creation.
Examples:

ALTER SYSTEM SET UNDO_RETENTION = 2400;
ALTER TABLESPACE undotbs1 RETENTION GUARANTEE;
ALTER TABLESPACE undotbs1 RETENTION NOGUARANTEE;

ADDM and AWR:

STATISTICS_LEVEL

STATISTICS_LEVEL = { ALL | TYPICAL | BASIC }
Default: TYPICAL
STATISTICS_LEVEL specifies the level of collection for database and operating system statistics. The Oracle Database collects these statistics for a variety of purposes, including making self-management decisions.
The default setting of TYPICAL ensures collection of all major statistics required for database self-management functionality, and provides best overall performance.
When the STATISTICS_LEVEL parameter is set to ALL, additional statistics are collected on top of those collected with the TYPICAL setting. The additional statistics are timed OS statistics and plan execution statistics.
Setting the STATISTICS_LEVEL parameter to BASIC disables the collection of many of the important statistics required by Oracle Database features and functionality.

CONTROL_MANAGEMENT_PACK_ACCESS
- The DIAGNOSTIC pack includes AWR, ADDM, and so on.
- The TUNING pack includes SQL Tuning Advisor, SQLAccess Advisor, and so on.
A license for DIAGNOSTIC is required for enabling the TUNING pack.
CONTROL_MANAGEMENT_PACK_ACCESS = { NONE | DIAGNOSTIC | DIAGNOSTIC+TUNING }
Default: DIAGNOSTIC+TUNING
If set to NONE, those features are switched off.
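For example, to check and (dynamically) change the setting, using the values documented above:

SQL> show parameter control_management_pack_access
SQL> ALTER SYSTEM SET CONTROL_MANAGEMENT_PACK_ACCESS='DIAGNOSTIC+TUNING' SCOPE=BOTH;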
4.3 Example INIT.ORA
Example 1: ########################################### # Cache and I/O ########################################### db_block_size=8192 db_file_multiblock_read_count=16 ########################################### # Cursors and Library Cache ###########################################
open_cursors=300 ########################################### # Database Identification ########################################### db_domain=antapex.org db_name=test10g ########################################### # Diagnostics and Statistics ########################################### # 10g parameters: background_dump_dest=C:\oracle/admin/test10g/bdump core_dump_dest=C:\oracle/admin/test10g/cdump user_dump_dest=C:\oracle/admin/test10g/udump # 11g parameters: DIAGNOSTIC_DEST=C:\oracle\
########################################### # File Configuration ########################################### control_files=("C:\oracle\oradata\test10g\control01.ctl", "C:\oracle\oradata\test10g\control02.ctl", "C:\or db_recovery_file_dest=C:\oracle/flash_recovery_area db_recovery_file_dest_size=2147483648 ########################################### # Job Queues ########################################### job_queue_processes=10 ########################################### # Miscellaneous ########################################### # 10g example # compatible=10.2.0.1.0 # 11g example compatible=11.1.0.0.0 ########################################### # Processes and Sessions ########################################### processes=150 ########################################### # SGA Memory ########################################### sga_target=287309824 ########################################### # Security and Auditing ########################################### audit_file_dest=C:\oracle/admin/test10g/adump remote_login_passwordfile=EXCLUSIVE ########################################### # Shared Server ########################################### dispatchers="(PROTOCOL=TCP) (SERVICE=test10gXDB)" ########################################### # Sort, Hash Joins, Bitmap Indexes ########################################### pga_aggregate_target=95420416
########################################### # System Managed Undo and Rollback Segments ########################################### undo_management=AUTO undo_tablespace=UNDOTBS1 ########################################### # Archive Mode: ########################################### LOG_ARCHIVE_DEST_1=c:\oracle\oradata\archlog LOG_ARCHIVE_FORMAT='arch_%t_%s_%r.dbf' Example 2:
test11g.__db_cache_size=281018368 test11g.__java_pool_size=12582912 test11g.__large_pool_size=4194304 test11g.__oracle_base='c:\oracle'#ORACLE_BASE set from environment test11g.__pga_aggregate_target=322961408 test11g.__sga_target=536870912 test11g.__shared_io_pool_size=0 test11g.__shared_pool_size=230686720 test11g.__streams_pool_size=0 *.audit_file_dest='c:\oracle\admin\test11g\adump' *.audit_trail='db' *.compatible='11.1.0.0.0' *.control_files='c:\oradata\test11g\control01.ctl','c:\oradata\test11g\control02.ctl','c:\oradata\test11g\c *.db_block_size=8192 *.db_domain='antapex.nl' *.db_name='test11g' *.db_recovery_file_dest='c:\oracle\flash_recovery_area' *.db_recovery_file_dest_size=2147483648 *.diagnostic_dest='c:\oracle' *.dispatchers='(PROTOCOL=TCP) (SERVICE=test11gXDB)' *.memory_target=857735168 *.open_cursors=300 *.processes=150 *.remote_login_passwordfile='EXCLUSIVE' *.undo_tablespace='UNDOTBS1' Example 3: ############################################################################## # Example INIT.ORA file # # This file is provided by Oracle Corporation to help you start by providing # a starting point to customize your RDBMS installation for your site. # # NOTE: The values that are used in this file are only intended to be used # as a starting point. You may want to adjust/tune those values to your # specific hardware and needs. You may also consider using Database # Configuration Assistant tool (DBCA) to create INIT file and to size your # initial set of tablespaces based on the user input. ############################################################################### # Change '<ORACLE_BASE>' to point to the oracle base db_name='ORCL' memory_target=1G processes = 150 audit_file_dest='<ORACLE_BASE>/admin/orcl/adump' audit_trail ='db' db_block_size=8192 db_domain='' db_recovery_file_dest='<ORACLE_BASE>/flash_recovery_area'
db_recovery_file_dest_size=2G diagnostic_dest='<ORACLE_BASE>' dispatchers='(PROTOCOL=TCP) (SERVICE=ORCLXDB)' open_cursors=300 remote_login_passwordfile='EXCLUSIVE' undo_tablespace='UNDOTBS1' # You may want to ensure that control files are created on separate physical # devices control_files = (ora_control1, ora_control2) compatible ='11.1.0'
4.4
For the instance with the Oracle system identifier (sid) prod1, the OPEN_CURSORS parameter keeps its instance-specific setting, overriding the database-wide setting of 500. The instance-specific parameter setting in the parameter file for an instance overrides database-wide alterations of the setting. This gives you control over the parameter settings for instance prod1. These settings can appear in any order in the parameter file.
If another DBA runs the following statement, then Oracle updates the setting on all instances except the instance with sid prod1:

ALTER SYSTEM SET OPEN_CURSORS=1500 sid='*' SCOPE=MEMORY;
In the example, the instance with sid prod1 begins accepting ALTER SYSTEM values set by other instances if you change the parameter setting by running the following statement:

ALTER SYSTEM RESET OPEN_CURSORS SCOPE=MEMORY sid='prod1';
Then, if you execute the following statement on another instance, the instance with sid prod1 also assumes the new setting:

ALTER SYSTEM SET OPEN_CURSORS=2000 sid='*' SCOPE=MEMORY;
At instance startup, the instance will search for the SPFILE server parameter file at the default location.
For UNIX or Linux, the platform-specific default location (directory) for the SPFILE and text initialization parameter file is: $ORACLE_HOME/dbs
For Windows the location is: %ORACLE_HOME%\database
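You can convert between the two parameter file formats with a couple of standard commands (the path is just an example):

SQL> CREATE SPFILE FROM PFILE='/var/tmp/initSALES.ora';
SQL> CREATE PFILE='/var/tmp/initSALES.ora' FROM SPFILE;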
There are several ways to orderly stop, and start an Instance, like using: - SQL*plus - via RMAN when working with backup/recovery tasks - The graphical web interface "Enterprise Manager" (EM)
sqlplus / as sysdba               -- use OS authentication to log on as a SYSDBA
sqlplus sys/password as sysdba    -- presuming password authentication, and the username is listed in
                                  -- the password file, so that this user can log on as a SYSDBA.
A normal startup of the Instance, and mounting and opening of the Database: SQL> startup
A normal startup of the Instance, and mounting and opening of the Database, using a certain init.ora parameter file:

SQL> startup mount pfile=/var/tmp/initSALES.ora
SQL> alter database open;             -- open the database in the normal way.
SQL> alter database open READ ONLY;   -- open the database as READ ONLY.
SQL> alter database open READ WRITE;  -- the default.

Only starting the Instance:

SQL> startup nomount

Only starting the Instance, and mounting the database (but not opening the database):

SQL> startup mount

Starting the Instance and opening the database in RESTRICTED mode:

SQL> startup RESTRICT
Later on, after doing your maintenance, you can allow access again to all users with:

ALTER SYSTEM DISABLE RESTRICTED SESSION;

Special startup options:

SQL> startup FORCE
-- There exists also the "startup force" command, which basically performs a "shutdown abort", and then starts the instance up in a normal way.
-- If you know that media recovery is required, you can start an instance, mount a database to the instance, and have the recovery process automatically started, by using the STARTUP command with the RECOVER clause:

SQL> startup open recover
NORMAL

The NORMAL clause is optional, because this is the default shutdown method if no clause is provided.
Normal database shutdown proceeds with the following conditions:
- No new connections are allowed after the statement is issued.
- Before the database is shut down, the database waits for all currently connected users to disconnect from the database.
The next startup of the database will not require any instance recovery procedures.
IMMEDIATE
- No new connections are allowed, nor are new transactions allowed to be started, after the statement is issued.
- Any uncommitted transactions are rolled back. (If long uncommitted transactions exist, this method of shutdown might not complete quickly, despite its name.)
- Oracle Database does not wait for users currently connected to the database to disconnect. The database implicitly rolls back active transactions and disconnects all connected users.
But, the next startup of the database will not require any instance recovery procedures.
In practice, this is a good and common way to close the Database.

TRANSACTIONAL
When you want to perform a planned shutdown of an instance, while allowing active transactions to complete first, use the SHUTDOWN command with the TRANSACTIONAL clause.
Transactional database shutdown proceeds with the following conditions:
- No new connections are allowed, nor are new transactions allowed to be started, after the statement is issued.
- After all transactions have completed, any client still connected to the instance is disconnected.
- At this point, the instance shuts down just as it would when a SHUTDOWN IMMEDIATE statement is submitted.
The next startup of the database will not require any instance recovery procedures.

ABORT

If you need to shut down the database instantaneously, use the ABORT clause. The next startup of the database will require an instance recovery.
ALTER SYSTEM ENABLE RESTRICTED SESSION;   -- Only users with the RESTRICTED SESSION privilege are
                                          -- able to connect.
ALTER SYSTEM DISABLE RESTRICTED SESSION;  -- Return the Database to normal operation.

ALTER SYSTEM QUIESCE RESTRICTED;          -- Allows only DBA transactions, queries, fetches, or PL/SQL
                                          -- statements. This is called a "quiesced state":
                                          -- non-DBA sessions are prevented from becoming active.
ALTER SYSTEM UNQUIESCE;                   -- Return the Database to normal operation.
Look at the ACTIVE_STATE column of the V$INSTANCE view, to see the current state of an Instance.

SQL> select ACTIVE_STATE FROM v$instance;

ACTIVE_ST
---------
NORMAL

-- Can be NORMAL, QUIESCED (in the quiesced state), or QUIESCING (becoming quiesced)
ALTER SYSTEM SUSPEND stops all I/O operations to datafiles and file headers.
When the database is suspended, all preexisting I/O operations are allowed to complete, and any new database accesses are placed in a queued state.
ALTER SYSTEM RESUME returns the Database to normal operation.
Look at the DATABASE_STATUS column of the V$INSTANCE view, to see the current state of an Instance.

SQL> ALTER SYSTEM SUSPEND;
System altered.

SQL> SELECT DATABASE_STATUS FROM v$instance;

DATABASE_STATUS
-----------------
SUSPENDED

SQL> ALTER SYSTEM RESUME;
So, ADDM examines and analyzes data captured in the Automatic Workload Repository (AWR) to determine possible performance problems in Oracle Database. ADDM then locates the root causes of the performance problems, provides recommendations for correcting them, and quantifies the expected benefits.
A key component of AWR is Active Session History (ASH). ASH samples the current state of all active sessions every second and stores it in memory. The data collected in memory can be accessed through system views (such as V$ACTIVE_SESSION_HISTORY). This sampled data is also pushed into AWR every hour for the purposes of performance diagnostics.
Gathering database statistics using AWR is enabled by default, and is controlled by the STATISTICS_LEVEL initialization parameter. The STATISTICS_LEVEL parameter should be set to TYPICAL or ALL to enable statistics gathering by AWR. The default setting is TYPICAL. Setting the STATISTICS_LEVEL parameter to BASIC disables many Oracle Database features, including AWR, and is not recommended.

Overview Architecture:
- By default, a snapshot is taken every 60 minutes. Every snapshot has its own "snap_id".
- By default, snapshots are purged after 7 days.
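You can check the current interval and retention settings with:

SELECT snap_interval, retention
FROM   dba_hist_wr_control;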
WRH$_DB_CACHE_ADVICE_BL WRH$_LIBRARYCACHE WRH$_MUTEX_SLEEP WRH$_EVENT_HISTOGRAM_BL WRH$_LATCH_MISSES_SUMMARY_BL WRH$_LATCH_PARENT_BL WRH$_LATCH_CHILDREN_BL WRH$_LATCH_BL WRH$_ENQUEUE_STAT WRH$_WAITSTAT_BL WRH$_SQL_PLAN WRH$_SQL_SUMMARY WRH$_SQLTEXT
The CONTROL_MANAGEMENT_PACK_ACCESS parameter specifies which of the Server Manageability Packs should be active.
- The DIAGNOSTIC pack includes AWR, ADDM, and so on.
- The TUNING pack includes SQL Tuning Advisor, SQLAccess Advisor, and so on.
A license for DIAGNOSTIC is required for enabling the TUNING pack.

CONTROL_MANAGEMENT_PACK_ACCESS = { NONE | DIAGNOSTIC | DIAGNOSTIC+TUNING }
Default: DIAGNOSTIC+TUNING
If set to NONE, those features are switched off.
STATISTICS_LEVEL
STATISTICS_LEVEL specifies the level of collection for database and operating system statistics. STATISTICS_LEVEL = { ALL | TYPICAL | BASIC } Default: TYPICAL
1. Using Enterprise Manager
2. Using PL/SQL API DBMS packages
3. Ready-supplied scripts (that will ask for parameters, for example start and end snapshot id's),
   like the script "@?/rdbms/admin/addmrpt"
4. Viewing the DBA_ADDM* and DBA_ADVISOR* views
Automatic Database Diagnostic Monitor (ADDM) automatically detects and reports performance problems with the database. The results are displayed as ADDM findings on the Database Home page in Oracle Enterprise Manager (EM). Reviewing the ADDM findings enables you to quickly identify the performance problems that require your attention.
In EM: from the Database Home page, choose Performance. Under Additional Monitoring Links you will find the ADDM-related pages.
Package DBMS_ADDM

Sub Functions/Procedures              What for?
ANALYZE_DB Procedure                  Creates an ADDM task for analyzing the database globally
ANALYZE_INST Procedure                Creates an ADDM task for analyzing a local instance
ANALYZE_PARTIAL Procedure             Creates an ADDM task for analyzing a subset of instances
DELETE Procedure                      Deletes an already created ADDM task
DELETE_FINDING_DIRECTIVE Procedure    Deletes a finding directive
DELETE_PARAMETER_DIRECTIVE Procedure  Deletes a parameter directive
DELETE_SEGMENT_DIRECTIVE Procedure    Deletes a segment directive
DELETE_SQL_DIRECTIVE Procedure        Deletes a SQL directive
GET_ASH_QUERY Function                Returns a string containing the SQL text of an ASH query
GET_REPORT Function                   Retrieves the default text report of an executed ADDM task
INSERT_FINDING_DIRECTIVE Procedure    Creates a directive to limit reporting of a given finding type
INSERT_PARAMETER_DIRECTIVE Procedure  Creates a directive to prevent ADDM from reporting actions on a system parameter
INSERT_SEGMENT_DIRECTIVE Procedure    Creates a directive to prevent ADDM from reporting actions on certain segments
INSERT_SQL_DIRECTIVE Procedure        Creates a directive to limit reporting of actions on certain SQL
Example: Most interesting subprocedures: DBMS_ADDM.ANALYZE_DB() and DBMS_ADDM.GET_REPORT()

DBMS_ADDM.ANALYZE_DB (
  task_name       IN OUT VARCHAR2,
  begin_snapshot  IN     NUMBER,
  end_snapshot    IN     NUMBER,
  db_id           IN     NUMBER := NULL);
var tname VARCHAR2(60);
BEGIN
  :tname := 'my_database_analysis_mode_task';
  DBMS_ADDM.ANALYZE_DB(:tname, 1, 2);
END;
/

To see a report:

SET LONG 100000
SET PAGESIZE 50000
SELECT DBMS_ADDM.GET_REPORT(:tname) FROM DUAL;
Package DBMS_WORKLOAD_REPOSITORY:
The DBMS_WORKLOAD_REPOSITORY package lets you manage the Workload Repository, performing operations such as managing snapshots and baselines, and retrieving reports.

Sub Functions/Procedures               What for?
ADD_COLORED_SQL Procedure              Adds a colored SQL ID
ASH_GLOBAL_REPORT_HTML Function        Displays a global or Oracle RAC ASH report in HTML
ASH_GLOBAL_REPORT_TEXT Function        Displays a global or Oracle RAC ASH report in text
ASH_REPORT_HTML Function               Displays the ASH report in HTML
ASH_REPORT_TEXT Function               Displays the ASH report in text
AWR_DIFF_REPORT_HTML Function          Displays the AWR Diff-Diff report in HTML
AWR_DIFF_REPORT_TEXT Function          Displays the AWR Diff-Diff report in text
AWR_GLOBAL_DIFF_REPORT_HTML Functions  Displays the Global AWR Compare Periods report in HTML
AWR_GLOBAL_DIFF_REPORT_TEXT Functions  Displays the Global AWR Compare Periods report in text
AWR_GLOBAL_REPORT_HTML Functions       Displays the Global AWR report in HTML
AWR_GLOBAL_REPORT_TEXT Functions       Displays the Global AWR report in text
AWR_REPORT_HTML Function               Displays the AWR report in HTML
AWR_REPORT_TEXT Function               Displays the AWR report in text
AWR_SQL_REPORT_HTML Function           Displays the AWR SQL Report in HTML
AWR_SQL_REPORT_TEXT Function           Displays the AWR SQL Report in text
CREATE_BASELINE Functions & Procedures Creates a single baseline
CREATE_BASELINE_TEMPLATE Procedures    Creates a baseline template
CREATE_SNAPSHOT Function and Procedure Creates a manual snapshot immediately
DROP_BASELINE Procedure                Drops a baseline
DROP_BASELINE_TEMPLATE Procedure       Removes a baseline template that is no longer needed
DROP_SNAPSHOT_RANGE Procedure          Drops a range of snapshots
MODIFY_SNAPSHOT_SETTINGS Procedures    Modifies the snapshot settings
MODIFY_BASELINE_WINDOW_SIZE Procedure  Modifies the window size for the Default Moving Window Baseline
RENAME_BASELINE Procedure              Renames a baseline
SELECT_BASELINE_METRICS Function       Shows the values of the metrics corresponding to a baseline
Example:
This example shows how to generate an AWR text report with the DBMS_WORKLOAD_REPOSITORY package, for database id 1557521192, instance id 1, snapshot ids 5390 and 5392, and with default options.

-- make sure to set linesize appropriately
-- set linesize 152
SELECT * FROM TABLE(
  DBMS_WORKLOAD_REPOSITORY.AWR_REPORT_TEXT(1557521192, 1, 5390, 5392));

Example: This example changes the interval setting to one hour, and the retention setting to two weeks, for the local database:
EXECUTE DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS( interval => 60, retention => 20160);

Example:

SELECT dbid FROM v$database;
SELECT inst_id FROM gv$instance;
SELECT sample_time FROM gv$active_session_history ORDER BY 1;

set pagesize 0
set linesize 121
spool c:\temp\ash_rpt.html
-- (the report call itself was lost here; presumably something like the following,
--  with dbid/inst_id taken from the queries above, and begin/end times chosen
--  from the sample_time values - this completion is an assumption:)
SELECT output FROM TABLE(
  DBMS_WORKLOAD_REPOSITORY.ASH_REPORT_HTML(1557521192, 1, SYSDATE-1/24, SYSDATE));
spool off
Package DBMS_ADVISOR:
DBMS_ADVISOR is part of the Server Manageability suite of Advisors, a set of expert systems that identifies and helps resolve performance problems relating to the various database server components.

Package DBMS_ADVISOR

Sub Functions/Procedures               What for?
ADD_SQLWKLD_REF Procedure              Adds a workload reference to an Advisor task
ADD_SQLWKLD_STATEMENT Procedure        Adds a single statement to a workload
ADD_STS_REF Procedure                  Establishes a link between the current SQL Access Advisor task and a SQL Tuning Set
CANCEL_TASK Procedure                  Cancels a currently executing task
COPY_SQLWKLD_TO_STS Procedure          Copies the contents of a SQL workload object to a SQL Tuning Set
CREATE_FILE Procedure                  Creates an external file from a PL/SQL CLOB
CREATE_OBJECT Procedure                Creates a new task object
CREATE_SQLWKLD Procedure               Creates a new workload object
CREATE_TASK Procedures                 Creates a new Advisor task in the repository
DELETE_SQLWKLD Procedure               Deletes an entire workload object
DELETE_SQLWKLD_REF Procedure           Deletes a workload reference from a task
DELETE_SQLWKLD_STATEMENT Procedures    Deletes one or more statements from a workload
DELETE_STS_REF Procedure               Removes a link between the current task and a SQL Tuning Set
DELETE_TASK Procedure                  Deletes the specified task from the repository
EXECUTE_TASK Procedure                 Executes the specified task
GET_REC_ATTRIBUTES Procedure           Retrieves specific recommendation attributes
GET_TASK_REPORT Function               Creates and returns a report for the specified task
GET_TASK_SCRIPT Function               Creates and returns an executable SQL script of the recommendations
IMPLEMENT_TASK Procedure               Implements the recommendations for a task
IMPORT_SQLWKLD_SCHEMA Procedure        Imports data into a workload from schema evidence
IMPORT_SQLWKLD_SQLCACHE Procedure      Imports data into a workload from the SQL cache
IMPORT_SQLWKLD_STS Procedure           Imports data from a SQL Tuning Set into a SQL workload object
IMPORT_SQLWKLD_SUMADV Procedure        Imports data into a workload from a Summary Advisor workload
IMPORT_SQLWKLD_USER Procedure          Imports data into a workload from a user-supplied table
INTERRUPT_TASK Procedure               Stops a currently executing task, with intermediate results preserved
MARK_RECOMMENDATION Procedure          Sets the annotation_status for a particular recommendation
QUICK_TUNE Procedure                   Performs an analysis on a single SQL statement
RESET_SQLWKLD Procedure                Resets a workload to its initial state
RESET_TASK Procedure                   Resets a task to its initial state
SET_DEFAULT_SQLWKLD_PARAMETER Proc.    Modifies a default workload parameter
SET_DEFAULT_TASK_PARAMETER Procedures  Modifies a default task parameter
SET_SQLWKLD_PARAMETER Procedures       Sets the value of a workload parameter
SET_TASK_PARAMETER Procedure           Sets the specified task parameter
TUNE_MVIEW Procedure                   Shows how to restate a materialized view for rewrite and fast refresh
UPDATE_OBJECT Procedure                Updates a task object
UPDATE_REC_ATTRIBUTES Procedure        Updates an existing recommendation
UPDATE_SQLWKLD_ATTRIBUTES Procedure    Updates a workload object
UPDATE_SQLWKLD_STATEMENT Procedure     Updates one or more SQL statements in a workload
UPDATE_TASK_ATTRIBUTES Procedure       Updates a task's attributes

Example:

DECLARE
  workload_name VARCHAR2(30);
BEGIN
  workload_name := 'My Workload';
DBMS_ADVISOR.CREATE_SQLWKLD(workload_name, 'My Workload'); DBMS_ADVISOR.ADD_SQLWKLD_STATEMENT(workload_name, 'MONTHLY', 'ROLLUP', 100,400,5041,103,640445,680000,2, 1,SYSDATE,1,'SH','SELECT AVG(amount_sold) FROM sh.sales'); END; / Example: SELECT MIN(snap_id), MAX(snap_id) FROM dba_hist_snapshot; MIN(SNAP_ID) MAX(SNAP_ID) ------------ -----------884 1052 1 row selected. DECLARE l_task_name VARCHAR2(30) := '884_1052_AWR_SNAPSHOT_UNDO'; l_object_id NUMBER; BEGIN -- Create an ADDM task. DBMS_ADVISOR.create_task ( advisor_name => 'Undo Advisor', task_name => l_task_name, task_desc => 'Undo Advisor Task'); DBMS_ADVISOR.create_object ( task_name => l_task_name, object_type => 'UNDO_TBS', attr1 => NULL, attr2 => NULL, attr3 => NULL, attr4 => 'null', attr5 => NULL, object_id => l_object_id); -- Set the target object. DBMS_ADVISOR.set_task_parameter ( task_name => l_task_name, parameter => 'TARGET_OBJECTS', value => l_object_id); -- Set the start and end snapshots. DBMS_ADVISOR.set_task_parameter ( task_name => l_task_name, parameter => 'START_SNAPSHOT', value => 884);
DBMS_ADVISOR.set_task_parameter ( task_name => l_task_name, parameter => 'END_SNAPSHOT', value => 1052); -- Execute the task. DBMS_ADVISOR.execute_task(task_name => l_task_name); END; /
The Automatic Database Diagnostic Monitor (hereafter called ADDM) is an integral part of the Oracle RDBMS, capable of gathering performance statistics and advising on changes to solve any existing performance issues.
For this it uses the Automatic Workload Repository (hereafter called AWR), a repository defined in the database, used to store database-wide usage statistics at fixed time intervals (default 60 minutes).
To make use of ADDM, a PL/SQL interface called DBMS_ADVISOR has been implemented. This PL/SQL interface may be called through the supplied $ORACLE_HOME/rdbms/admin/addmrpt.sql script, called directly, or used in combination with the Oracle Enterprise Manager application. Besides this PL/SQL package, a number of views (with names starting with the DBA_ADVISOR_ prefix) allow retrieval of the results of any actions performed with the DBMS_ADVISOR API. The preferred way of accessing ADDM is through the Enterprise Manager interface, as it shows a complete performance overview, including recommendations on how to solve bottlenecks, on a single screen. When accessing ADDM manually, you should consider using the ADDMRPT.SQL script provided with your Oracle release, as it hides the complexities involved in accessing the DBMS_ADVISOR package.
To use ADDM for advising on how to tune the instance and SQL, you need to make sure that the AWR has been populated with at least 2 sets of performance data.
When the STATISTICS_LEVEL is set to TYPICAL or ALL, the database will automatically schedule the AWR to be populated at 60 minute intervals. When you wish to create performance snapshots outside of the fixed intervals, then you can use the DBMS_WORKLOAD_REPOSITORY package for this, like in:

BEGIN
  DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT('TYPICAL');
END;
/

The snapshots need to be created before and after the action you wish to examine. E.g. when examining a badly performing query, you need to have performance data snapshots from before the query was started and from after the query finished.
You may also change the frequency of the snapshots, and the duration for which they are saved in the AWR. Use the DBMS_WORKLOAD_REPOSITORY package as in the following example:

execute DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(interval=>60,retention=>43200);

Example:
--------
You can use ADDM through the PL/SQL API, and query the various advisory views in SQL*Plus, to examine how to solve performance issues. The example is based on the SCOTT account executing the various tasks.
To allow SCOTT to both generate AWR snapshots and submit ADDM recommendation jobs, he needs to be granted proper access:

CONNECT / AS SYSDBA
GRANT ADVISOR TO scott;
GRANT SELECT_CATALOG_ROLE TO scott;
GRANT EXECUTE ON dbms_workload_repository TO scott;

Furthermore, the buffer cache size (DB_CACHE_SIZE) has been reduced to 24M.
The example presented makes use of a table called BIGEMP, residing in the SCOTT schema. The table (containing about 3.7 million rows: 14 rows doubled 18 times) has been created with:

CONNECT scott/tiger
CREATE TABLE bigemp AS SELECT * FROM emp;
ALTER TABLE bigemp MODIFY (empno NUMBER);

DECLARE
  n NUMBER;
BEGIN
  FOR n IN 1..18 LOOP
    INSERT INTO bigemp SELECT * FROM bigemp;
  END LOOP;
  COMMIT;
END;
/

UPDATE bigemp SET empno = ROWNUM;
COMMIT;

The next step is to generate a performance data snapshot:

EXECUTE dbms_workload_repository.create_snapshot('TYPICAL');

Execute a query on the BIGEMP table to generate some load:

SELECT * FROM bigemp WHERE deptno = 10;

After this, generate a second performance snapshot:

EXECUTE dbms_workload_repository.create_snapshot('TYPICAL');

The easiest way to get the ADDM report is by executing:

@?/rdbms/admin/addmrpt

Running this script will show which snapshots have been generated, ask for the snapshot IDs to be used for generating the report, and generate the report containing the ADDM findings. So, maybe this is just all you want.
When you do not want to use the script, you need to submit and execute the ADDM task manually.
First, query DBA_HIST_SNAPSHOT to see which snapshots have been created. These snapshots will be used by ADDM to generate recommendations:

SELECT * FROM dba_hist_snapshot ORDER BY snap_id;

   SNAP_ID       DBID INSTANCE_NUMBER BEGIN_INTERVAL_TIME        END_INTERVAL_TIME
---------- ---------- --------------- -------------------------- --------------------------
         1  494687018               1 17-NOV-03 09.39.17.000 AM  17-NOV-03 09.50.21.389 AM
         2  494687018               1 17-NOV-03 09.50.21.389 AM  17-NOV-03 10.29.35.704 AM
         3  494687018               1 17-NOV-03 10.29.35.704 AM  17-NOV-03 10.35.46.878 AM

(The STARTUP_TIME, FLUSH_ELAPSED, SNAP_LEVEL and ERROR_COUNT columns are omitted here for readability; all three snapshots have SNAP_LEVEL 1 and ERROR_COUNT 0.)
Mark the 2 snapshot IDs (such as the lowest and highest ones) for use in generating recommendations. Next, you need to submit and execute the ADDM task manually, using a script similar to: DECLARE task_name VARCHAR2(30) := 'SCOTT_ADDM'; task_desc VARCHAR2(30) := 'ADDM Feature Test'; task_id NUMBER; BEGIN (1) dbms_advisor.create_task('ADDM', task_id, task_name, task_desc, null); (2) dbms_advisor.set_task_parameter('SCOTT_ADDM', 'START_SNAPSHOT', 1); dbms_advisor.set_task_parameter('SCOTT_ADDM', 'END_SNAPSHOT', 3); dbms_advisor.set_task_parameter('SCOTT_ADDM', 'INSTANCE', 1); dbms_advisor.set_task_parameter('SCOTT_ADDM', 'DB_ID', 494687018); (3) dbms_advisor.execute_task('SCOTT_ADDM'); END; / Here is the explanation of the steps you need to take to successfully execute an ADDM job: 1) The first step is to create the task. For this, you need to specify the name under which the task will be known in the ADDM task system. Along with the name you can provide a more readable description on what the job should do. The task type must be 'ADDM' in order to have it executed in the ADDM environment. 2) After having defined the ADDM task, you must define the boundaries within which the task needs to be executed. For this you need to set the starting and ending snapshot IDs, instance ID (especially necessary when running in a RAC environment), and database ID for the newly created job. 3) Finally, the task must be executed. When querying DBA_ADVISOR_TASKS you see the just created job: SELECT * FROM dba_advisor_tasks; OWNER TASK_ID TASK_NAME ------------------------------ ---------- -----------------------------DESCRIPTION -----------------------------------------------------------------------ADVISOR_NAME CREATED LAST_MODI PARENT_TASK_ID ------------------------------ --------- --------- -------------PARENT_REC_ID READ_ ------------- ----SCOTT 5 SCOTT_ADDM ADDM Feature Test ADDM 17-NOV-03 17-NOV-03 0 0 FALSE When the job has successfully completed, examine the recommendations made by ADDM by calling the DBMS_ADVISOR.GET_TASK_REPORT() routine, like in:
SET LONG 1000000 PAGESIZE 0 LONGCHUNKSIZE 1000 COLUMN get_clob FORMAT a80 SELECT dbms_advisor.get_task_report('SCOTT_ADDM', 'TEXT', 'TYPICAL') FROM sys.dual; The recommendations supplied should be sufficient to investigate the performance issue, as in: DETAILED ADDM REPORT FOR TASK 'SCOTT_ADDM' WITH ID 5 ---------------------------------------------------Analysis Period: Database ID/Instance: Snapshot Range: Database Time: Average Database Load: 17-NOV-2003 from 09:50:21 to 10:35:47 494687018/1 from 1 to 3 4215 seconds 1.5 active sessions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Next you will find a number of "Findings". We will not show them here; hope that the general idea is clear.
(Note: in 10g you used the SGA_TARGET and PGA_AGGREGATE_TARGET parameters; you can still use them in 11g, if you want Automatic Shared Memory Management instead of "full" Automatic Memory Management.)
The simplest way to manage instance memory is to allow the Oracle Database instance to automatically manage and tune it for you. To do so (on most platforms), you set only a target memory size initialization parameter (MEMORY_TARGET), and optionally a maximum memory size initialization parameter (MEMORY_MAX_TARGET). The instance then tunes to the target memory size, redistributing memory as needed between the system global area (SGA) and the instance program global area (instance PGA).
Difference between MEMORY_TARGET and SGA_TARGET:
MEMORY_TARGET is one level higher than "SGA_TARGET", because MEMORY_TARGET will redistribute memory between the total SGA and total Instance PGA as needed. So, setting this parameter means full Automatic Memory Management.

Parameters:

MEMORY_TARGET / MEMORY_MAX_TARGET    controls all SGA components and all PGA components
SGA_TARGET                           controls all SGA components
PGA_AGGREGATE_TARGET                 controls all PGA components
Note: In Oracle 10g, you used only SGA_TARGET and PGA_AGGREGATE_TARGET.
You can still use those in 11g, if you do not want full AMM, but you still want Automatic Shared Memory Management (ASMM). In this case, set SGA_TARGET to a non-zero value. Example:

ALTER SYSTEM SET SGA_TARGET = 800M;
ALTER SYSTEM SET SHARED_POOL_SIZE = 0;
ALTER SYSTEM SET LARGE_POOL_SIZE = 0;
ALTER SYSTEM SET JAVA_POOL_SIZE = 0;
ALTER SYSTEM SET DB_CACHE_SIZE = 0;
Granules:
All SGA components allocate and deallocate space in units of granules. Oracle Database tracks SGA memory usage in internal numbers of granules for each SGA component.
The memory for dynamic components in the SGA is allocated in the unit of granules. Granule size is determined by the total SGA size. Generally speaking, on most platforms, if the total SGA size is equal to or less than 1 GB, then the granule size is 4 MB. For SGAs larger than 1 GB, the granule size is 16 MB. Some platform dependencies may arise.

SGA and real / virtual memory: the SGA should fit in real memory; do not let parts of it page out to paging space / swap space.
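You can see the granule size your instance actually uses, for example via V$SGAINFO (also visible in the larger output further below):

SELECT name, bytes
FROM   v$sgainfo
WHERE  name = 'Granule Size';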
1. Determine the current sizes of SGA_TARGET and PGA_AGGREGATE_TARGET by entering the following SQL*Plus command:
SQL> SHOW PARAMETER TARGET

SQL*Plus displays the values of all initialization parameters with the string TARGET in the parameter name, for example:

NAME                           TYPE        VALUE
------------------------------ ----------- ----------------
archive_lag_target             integer     0
db_flashback_retention_target  integer     1440
fast_start_io_target           integer     0
fast_start_mttr_target         integer     0
memory_max_target              big integer 0
memory_target                  big integer 0
pga_aggregate_target           big integer 90M
sga_target                     big integer 272M
2. Run the following query to determine the maximum instance PGA allocated since the database was started: SQL> select value from v$pgastat where name='maximum PGA allocated'; Suppose this returns 120MB. 3. Determine the minimum value of MEMORY_TARGET: memory_target = sga_target + max(pga_aggregate_target, maximum PGA allocated) Thus in this example: MEMORY_TARGET = 272 + 120 = at least 392MB 4. Determine the MEMORY_MAX_TARGET.
For the MEMORY_MAX_TARGET initialization parameter, decide on a maximum amount of memory that you would want to allocate to the database for the foreseeable future. That is, determine the maximum value for the sum of the SGA and instance PGA sizes. This number can be larger than, or the same as, the MEMORY_TARGET value that you chose in the previous step.
If using MEMORY_TARGET and MEMORY_MAX_TARGET, you should set SGA_TARGET=0 and PGA_AGGREGATE_TARGET=0, or not specify them at all in the init/spfile. If you do assign values to SGA_TARGET and PGA_AGGREGATE_TARGET, they will function as minimum levels.
Examples:

SQL> SELECT * FROM v$memory_target_advice ORDER BY memory_size;
MEMORY_SIZE MEMORY_SIZE_FACTOR ESTD_DB_TIME ESTD_DB_TIME_FACTOR    VERSION
----------- ------------------ ------------ ------------------- ----------
        180                 .5          458               1.344          0
        270                .75          367              1.0761          0
        360                  1          341                   1          0
        450               1.25          335               .9817          0
        540                1.5          335               .9817          0
        630               1.75          335               .9817          0
        720                  2          335               .9817          0

This "maps" the ESTD_DB_TIME (estimated DB time, i.e. wait time due to actions in the Database) to the Memory Size.

Or, if you use "partial" Automatic Memory Management:

SQL> SELECT * FROM v$sga_target_advice ORDER BY sga_size;

This is similar to the first query.
SQL> SELECT * FROM v$sga; NAME VALUE -------------------- ---------Fixed Size 1334380 Variable Size 171967380 Database Buffers 356515840 Redo Buffers 5844992 SQL> SELECT * FROM v$sgastat; Returns a long list of structures and memory occupation of the pools. SQL> SELECT * FROM v$sgastat 2 WHERE name = 'free memory'; POOL -----------shared pool large pool java pool NAME BYTES -------------------------- ---------free memory 22286088 free memory 4129856 free memory 3490752
SQL> SELECT SUBSTR(COMPONENT,1,25), CURRENT_SIZE, MIN_SIZE, MAX_SIZE, USER_SPECIFIED_SIZE FROM V$MEMORY_DYNAMIC_COMPONENTS;

SUBSTR(COMPONENT,1,25)    CURRENT_SIZE   MIN_SIZE   MAX_SIZE USER_SPECIFIED_SIZE
------------------------- ------------ ---------- ---------- -------------------
shared pool                  155189248  155189248  159383552                   0
large pool                     4194304    4194304    4194304                   0
java pool                     12582912   12582912   12582912                   0
streams pool                         0          0          0                   0
SGA Target                   536870912  536870912  536870912                   0
DEFAULT buffer cache         356515840  352321536  356515840                   0
KEEP buffer cache                    0          0          0                   0
RECYCLE buffer cache                 0          0          0                   0
DEFAULT 2K buffer cache              0          0          0                   0
DEFAULT 4K buffer cache              0          0          0                   0
DEFAULT 8K buffer cache              0          0          0                   0
DEFAULT 16K buffer cache             0          0          0                   0
DEFAULT 32K buffer cache             0          0          0                   0
Shared IO Pool                       0          0          0                   0
PGA Target                   322961408  322961408  322961408                   0
ASM Buffer Cache                     0          0          0                   0
This shows the current, min, and max sizes that AMM is using. SQL> SELECT * FROM v$sgainfo; NAME BYTES RES -------------------------------- ---------- --Fixed SGA Size 1334380 No Redo Buffers 5844992 No Buffer Cache Size 356515840 Yes Shared Pool Size 155189248 Yes Large Pool Size 4194304 Yes Java Pool Size 12582912 Yes Streams Pool Size 0 Yes Shared IO Pool Size 0 Yes Granule Size 4194304 No Maximum SGA Size 535662592 No Startup overhead in Shared Pool 50331648 No Free SGA Memory Available 0 Note also the "Granule Size" in the output above.
(For example - the tablespace name and paths are just illustrations:)

SQL> alter tablespace users begin backup;             -- put the tablespace in BACKUP MODE
SQL> ! tar -rvf /dev/rmt/0hc /u05/oradata/users*.dbf  -- with "!" you can issue OS commands from sqlplus;
                                                      -- just using tar as an example
SQL> alter tablespace users end backup;               -- could also do the same for all other tablespaces
..
SQL> alter system archive log current;            -- make an archived log of your current online redolog,
                                                  -- in order to capture all recent transactions
SQL> ! tar -rvf /dev/rmt/0hc /u07/archives/*.arc  -- backup the (cold) archived redologs (including the
                                                  -- most recent one); tapedevice and backup locations
                                                  -- are just examples
exit
EOF
Instead of placing all tablespaces individually in backup mode, you may also use:

ALTER DATABASE BEGIN BACKUP;
(backup commands using tar, cpio, or whatever is suitable in your environment)
ALTER DATABASE END BACKUP;

This way of making "user defined" backups is hardly used anymore. Of course, everyone uses RMAN, or the Enterprise Manager (which uses RMAN in an integral way).
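You can check which datafiles are currently in backup mode with the V$BACKUP view:

SELECT file#, status
FROM   v$backup;    -- STATUS shows ACTIVE while a file is in backup mode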
You will then enter "rman", which will show you the RMAN> prompt. RMAN> From here, you can issue a whole range of commands, like BACKUP, RESTORE, RECOVER, and reporting commands like LIST, REPORT etc..
But before you can do anything with the "target" database (the database you want to backup), you need to connect to it.
RMAN maintains a record of database files and backups for each database on which it performs operations. This metadata is called the RMAN repository.
-- The controlfile(s) of the target database always contains the RMAN backup METADATA (repository).
-- Optionally, you can have a separate dedicated (rman) database, which is called "the catalog".
So, before you can "work" with RMAN, planning to do "something" with the target, you need to connect to the target, and optionally also to the "catalog" (if you use one).
So, to connect to the target database (and optionally, the catalog), here are a few examples:

RMAN> connect target /                                    -- using OS authentication
RMAN> connect target system/password@SID                  -- using the system account
RMAN> connect target system/password@SID catalog rman/rman@RCAT

Or starting rman, and connecting at the same time:

% rman target sys/password@prod1 catalog rman/rman@rcat

Once connected, you could use commands (or scripts) like:
RMAN> run { 2> allocate channel t1 type disk; 3> backup full database ; 4> sql 'alter system archive log current'; 5> backup archivelog all delete input ; 6> release channel t1; 7> }
The above example just serves to give an idea of how such a script could look.
Under the hood, the Enterprise Manager uses RMAN-related DBMS* packages. The EM provides a GUI for
BACKUP, RESTORE, RECOVERY and configuration.
Otherwise, you will deal with an Open database, running in ARCHIVELOG mode, and let RMAN create (inconsistent) backups.
There is "nothing wrong" with inconsistent backups: after a restore, a recovery is needed, using the archived redologs
and/or the online redologs (which may contain transactions that are more recent than the backup itself).
Suppose you have three online redo files. All transactions that are done in the database are first written to the
current redofile (at commit), before they are also flushed to the database file(s) (at checkpoint).
When the current redolog file is full (say redo02.dbf), oracle will use the next one (redo03.dbf).
When that one gets full too, oracle will reuse redo01.dbf. And so on.
Such a redolog gets overwritten each time the database switches to that redolog.
If you want to save the history of transactions (that is recorded in such a file), then place the database
in ARCHIVELOG mode. Then, before a redologfile is reused, oracle will save a copy of that file
(with all transactions) in the form of an archived redolog (like for example "arch_redo_344.log").
redo01.log -> redo02.log -> redo03.log
(when redo03.log gets full, oracle returns to redo01.log)
This way, you get a number of archived logs, which keeps growing as time progresses.
How many archived redologs will be produced per day depends on the number of transactions in your database,
and also on the size of the ONLINE redologs. The larger a redologfile is, "the longer" it takes before it's full.
We can leave the "administration" of the archived redologs (how much to keep etc..) to tools like RMAN.
Only when a database is running in ARCHIVELOG mode are you able to create ONLINE backups, that is,
making backups while the database is open.
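As a side note, this is a minimal sketch of how you would place a database in ARCHIVELOG mode, and verify it
(the database must be mounted, but not open, when you switch the log mode):

SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE OPEN;
SQL> ARCHIVE LOG LIST;                   -- shows the log mode and the archive destination
SQL> SELECT log_mode FROM v$database;    -- should now show ARCHIVELOG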
The former 10g "FLASH RECOVERY AREA" is in 11gR2 renamed to "FAST RECOVERY AREA", but otherwise it is exactly
the same thing.
To simplify the management of backup and recovery files, you can create a fast recovery area for your database.
The fast recovery area is an Oracle-managed directory, file system, or Oracle Automatic Storage Management (ASM)
disk group that provides a centralized disk location for backup and recovery files. Oracle creates archived logs
and flashback logs in the fast recovery area. RMAN can also store its backup sets and image copies in the fast
recovery area, and it uses them when restoring files during media recovery. The fast recovery area also acts
as a disk cache for tape.

You create a FRA using the following spfile/init.ora parameters:

DB_RECOVERY_FILE_DEST
DB_RECOVERY_FILE_DEST_SIZE
DB_RECOVERY_FILE_DEST = directory | disk group

DB_RECOVERY_FILE_DEST specifies the default location for the flash recovery area, holding files such as
archived redo logs, flashback logs, and RMAN backups.
Specifying this parameter without also specifying the DB_RECOVERY_FILE_DEST_SIZE parameter is not allowed.
DB_RECOVERY_FILE_DEST_SIZE specifies (in bytes) the hard limit on the total space used
by target database recovery files created in the flash recovery area.
A flash recovery area is a location in which Oracle Database can store and manage files related to backup
and recovery. It is distinct from the database area.
You specify a flash recovery area with the following initialization parameters:

DB_RECOVERY_FILE_DEST
DB_RECOVERY_FILE_DEST_SIZE

You cannot enable these parameters if you have set values for the LOG_ARCHIVE_DEST and LOG_ARCHIVE_DUPLEX_DEST
parameters. You must disable those parameters before setting up the flash recovery area. You can instead set
the LOG_ARCHIVE_DEST_n parameters. If you do not set values for local LOG_ARCHIVE_DEST_n, then setting up the
flash recovery area will implicitly set LOG_ARCHIVE_DEST_10 to the flash recovery area.
Oracle recommends using a flash recovery area, because it can simplify backup and recovery operations
for your database.
For large backups, the FRA may become impractical because of size limits. Of course, the larger the FRA is,
the more backups you can store.

Examples:

DB_RECOVERY_FILE_DEST=+RECOV                          -- An ASM Diskgroup
DB_RECOVERY_FILE_DEST=/dumps/fra
DB_RECOVERY_FILE_DEST=C:\ORACLE\Flash_Recovery_Area

If you use a fast recovery area, then storage management for most backup-related files will be automated.
You can also specify it as an archived redo log file destination.
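Both parameters are also dynamic, so, as a minimal sketch, you could set them at runtime with ALTER SYSTEM
(set the size first, since the destination cannot be set without a size):

SQL> ALTER SYSTEM SET db_recovery_file_dest_size=10G SCOPE=BOTH;
SQL> ALTER SYSTEM SET db_recovery_file_dest='/dumps/fra' SCOPE=BOTH;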
-- Image copies are exact byte-for-byte copies of files. An image copy does not save space the way a backup set does.
Backup sets and backup pieces are recorded in the RMAN repository.

Types of backups:

-- Full: The rman BACKUP DATABASE command will create a full database backup.
-- Incremental:
If you specify BACKUP INCREMENTAL, then RMAN creates an incremental backup of a database.
Incremental backups capture block-level changes to a database, made after a previous incremental backup
(level 0 or level 1).
Incremental backups are generally smaller and faster to make than full database backups. Recovery with
incremental backups is faster than using redo logs alone.
An incremental backup at level 0 is identical in content to a full backup, but unlike a full backup,
the level 0 backup is considered a part of the incremental backup strategy.
Usually, an incremental backup will be "differential", so it contains only the blocks that are changed since
the most recent incremental backup (level 0 or level 1). But you can specify that you want a "cumulative"
incremental backup, in which case all blocks changed since the most recent level 0 backup are included.
The starting point for an incremental backup strategy is a level 0 incremental backup, which backs up all
used blocks in the database.
A full database backup corresponds to an INCREMENTAL LEVEL 0 backup.
An incremental backup corresponds to an INCREMENTAL LEVEL 1 backup.
An incremental LEVEL 1 backup will only backup the database blocks that have been changed since the
former LEVEL 0 or LEVEL 1 backup.
So, in fact with an incremental level 1, you only capture all changes compared to the former backup (level 0 or 1).

Suppose you have the following daily backup schedule:

time   type of backup                         contains changes compared to
-----  -------------------------------------  ------------------------------------
 2:00  full database backup - incremental 0   this is a full backup (for this day)
 4:00  incremental 1                          all changes after the full backup
 6:00  incremental 2                          all changes after incremental 1
 8:00  incremental 3                          all changes after incremental 2
10:00  incremental 4                          all changes after incremental 3
12:00  incremental 5                          all changes after incremental 4
14:00  incremental 6                          all changes after incremental 5
16:00  incremental 7                          all changes after incremental 6
18:00  incremental 8                          all changes after incremental 7
20:00  incremental 9                          all changes after incremental 8
22:00  incremental 10                         all changes after incremental 9
 0:00  incremental 11                         all changes after incremental 10
Of course, the archived redologs are still important, but the main items in your backup/recovery policy are
the full backup (your baseline for this day) and all incrementals created thereafter.
Of course, you can always backup your archived redologs as well, which would be very wise to do.
RMAN will always choose incremental backups over archived logs, as applying changes at a block level
is faster than reapplying individual changes.
So, in the above example, if your database crashes at 17:00h, and you need to restore and recover, RMAN will
restore all needed incrementals to get the database back to the situation of 16:00h.
Of course, if you have archived redolog(s) that were created after 16:00h, you can recover up to the last
transaction that was recorded in the most recent archived redolog.
The archived redo log files are used only for changes from the period not covered by level 1 backups.
As an example on how to implement a level 0 and level 1 backup:

Level 0:

RMAN> run {
2> allocate channel t1 type disk;
3> backup incremental level 0 database;
4> release channel t1;
5> }
Or just use one command, this time with compression:

RMAN> backup incremental level 0 as compressed backupset database;

Level 1:

RMAN> run {
2> allocate channel t1 type disk;
3> backup incremental level 1 database;
4> release channel t1;
5> }

Or just use one command, this time with compression:

RMAN> backup incremental level 1 as compressed backupset database;

In the "one command" examples, you just use the default configured channels (serverprocesses).
8.6 Channels.
The actual backup will be performed by "server processes", which are of the same sort as normal server processes.
These server processes are called "channels". You can define one or more channels in your backup script,
so that execution may be performed in parallel.
Such a serverprocess will read the database blocks, and also writes to the destination (disk, or tape).
In case of tape, it might have a "sbt" driver specification in its declaration, so that it can correctly
handle the device. For a disk destination, that is not necessary. Examples:
RMAN> run {
2> allocate channel dev1 type disk;    -- if you do not specify a disk destination, and a FAST RECOVERY AREA
3> allocate channel dev2 type disk;    -- is defined, then the backup will be stored in the FAST RECOVERY AREA
4> sql "ALTER TABLESPACE tbs_1 OFFLINE IMMEDIATE";
5> restore tablespace tbs_1;
6> recover tablespace tbs_1;
7> sql "ALTER TABLESPACE tbs_1 ONLINE";
8> }

Note that here we have declared 2 disk channels.
RMAN> run {
2> allocate channel t1 type 'sbt_tape' parms 'ENV=(tdpo_optfile=/usr/tivoli/tsm/client/oracle/bin64/td...)';
3> allocate channel t2 type 'sbt_tape' parms 'ENV=(tdpo_optfile=/usr/tivoli/tsm/client/oracle/bin64/td...)';
4> sql "ALTER TABLESPACE tbs_1 OFFLINE IMMEDIATE";
5> restore tablespace tbs_1;
6> recover tablespace tbs_1;
7> sql "ALTER TABLESPACE tbs_1 ONLINE";
8> }
In this case, we backup to tape, and the channels "know" which drivers and configfiles they need to access, in order to handle the tape device.
C:\>rman

Recovery Manager: Release 11.1.0.6.0 - Production on Tue Nov 24 20:38:34 2009
Copyright (c) 1982, 2007, Oracle. All rights reserved.

RMAN> SHOW ALL;

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of show command at 11/24/2009 20:38:44
RMAN-06171: not connected to target database

RMAN> connect target /

connected to target database: TEST11G (DBID=855257989)

RMAN> SHOW ALL;

using target database control file instead of recovery catalog
The configuration settings will be retrieved from the RMAN repository, that is, the controlfile:
RMAN configuration parameters for database with db_unique_name TEST11G are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP OFF; # default
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE COMPRESSION ALGORITHM 'BZIP2'; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO 'C:\ORACLE\PRODUCT\11.1.0\DB_1\DATABASE\SNCFTEST11G.ORA'; # default

RMAN>

For example, the setting "CONFIGURE CONTROLFILE AUTOBACKUP OFF" is not really good. We want a backup of the
controlfile to be part of all sorts of rman backups, automatically. If we want to change that, we would do
the following:

RMAN> configure controlfile autobackup on;

REASON: This will ensure that you always have an up to date controlfile available, taken at the end of the
current backup, so that you always have a recent copy of the controlfile.

Here are some more examples of persistent configuration settings, that will be stored in the repository:

RMAN> configure retention policy to recovery window of 5 days;
RMAN> configure default device type to disk;
RMAN> configure controlfile autobackup on;
RMAN> configure channel device type disk format '/dumps/orabackups/backup%d_DB_%u_%s_%p';
>>> CONFIGURE RETENTION POLICY TO REDUNDANCY n | RECOVERY WINDOW OF m DAYS;
This determines how RMAN views backups as candidates for removal.

-- For example, if you do this:
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;
It will ensure that RMAN retains all backups needed to recover the database to any point in time
in the last 7 days.

-- For example, if you do this:
CONFIGURE RETENTION POLICY TO REDUNDANCY 3;
This means that 3 backups need to be maintained by rman, and other backups are considered "obsolete".
But those backups beyond the retention are not expired or otherwise unusable.
If they are still present, you can use them in a recovery.
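To act on the retention policy, you would typically use the REPORT and DELETE commands. A minimal sketch:

RMAN> REPORT OBSOLETE;      -- list the backups no longer needed under the retention policy
RMAN> DELETE OBSOLETE;      -- remove them (asks for confirmation)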
8.7.2 View and change RMAN configuration settings from the EM:
In the EM Database Homepage, click "AVAILABILITY". Here you will find the options to change RMAN Backup and Recovery Settings. Click the "Backup Settings" link.
An Example screen. See the figure below. Note that we here have changed the "configure controlfile autobackup"
setting, but this time in a graphical way. It's just the same as we did from the RMAN prompt in section 8.7.1.
8.8 A few very simple backup and recovery examples using rman scripts.
Example 1: backup of database and archived redologs.

RMAN> run {
2> backup database plus archivelog;
3> delete noprompt obsolete;
4> }

or, this works too:

RMAN> run {
2> allocate channel t1 type disk;
3> backup database tag full_251109;
4> sql 'alter system archive log current';
5> backup archivelog all delete input;
6> release channel t1;
7> }

Backup of only archived redologs:

RMAN> run {                                    -- if you backup to disk, and a default persistent config
2> allocate channel t1 type disk;              -- setting is a channel to disk, you may leave the channel
3> sql 'alter system archive log current';     -- declaration out
4> backup archivelog all delete input;
5> release channel t1;
6> }
Example 2: Complete and Full database Restore and Recovery.

Suppose you have a database crash. If the controlfiles and online redo logs are still present, a whole
database recovery can be achieved by running the following script:

run {
shutdown immediate;    # use abort if this fails
startup mount;
restore database;
recover database;
alter database open;
}
This will result in all datafiles being restored, then recovered. RMAN will apply full, differential, and
cumulative backups as necessary, until the recovery is complete. At that point the database is opened.
You do NOT need to use "ALTER DATABASE OPEN RESETLOGS", because the online redologs were available
in this example.

Example 3: Restore & Recover A Subset Of The Database

Suppose you have a problem with the USERS tablespace. If you need to restore it:

run {
sql 'ALTER TABLESPACE users OFFLINE IMMEDIATE';
restore tablespace users;
recover tablespace users;
sql 'ALTER TABLESPACE users ONLINE';
}

In such a restore, it is not mandatory to use a block of code. You can also just do this:

RMAN> RESTORE TABLESPACE USERS;
RMAN> RECOVER TABLESPACE USERS;

After this, you can put the tablespace ONLINE (e.g. in sqlplus, or rman).

Example 4: Restore and Recovering a Datafile Restored to a New Location
The following example allocates one disk channel and one media management channel, to use datafile copies on
disk and backups on tape, and restores one of the datafiles in tablespace TBS_1 to a different location:

run {
allocate channel dev1 type disk;
allocate channel dev2 type 'sbt_tape';
sql "ALTER TABLESPACE tbs_1 OFFLINE IMMEDIATE";
set newname for datafile 'disk7/oracle/tbs11.f' to 'disk9/oracle/tbs11.f';
restore tablespace tbs_1;
switch datafile all;
recover tablespace tbs_1;
sql "ALTER TABLESPACE tbs_1 ONLINE";
}

Example 5: Incomplete Recovery, to a certain SCN, or point in time:

As you would expect, RMAN allows incomplete recovery to a specified time, SCN or log sequence number:

run {
shutdown immediate;
startup mount;
set until time 'Nov 15 2009 08:00:00';
# set until scn 1000;          # alternatively, you can specify an SCN
# set until sequence 9923;     # alternatively, you can specify a log sequence number
restore database;
recover database;
alter database open resetlogs;
}
An incomplete recovery requires the database to be opened using the RESETLOGS option.
This is necessary because we have restored the database to an earlier point in time, whilst the online
redologs then still contain "future" events, which are NOT APPLICABLE.

Example 6: Single backup statements:

An RMAN script is of the form "run { statements }". But you can also just use single RMAN backup commands like:

BACKUP DATAFILE 1;
BACKUP CURRENT CONTROLFILE;
BACKUP ARCHIVELOG ALL;
- In the "Host Crendentials" fields, fill in suitable credentials for your Host OS. - As an example, here we will choose "Customized Backup - Whole Database". - Next, we have to pass several pages where we have to specify serveral options:
- Choose between Full or Incremental Backup. Here we will choose "Full". - Make the Backup while the database is online. - We also want to backup all archived redo logs, as part of this job. Click Next.
Here you can choose between tape and disk backup locations. Because we have configured a "fast recovery area",
it is shown as the "default" backup destination.
Note that you can change the destination on disk, by choosing "Override Current Settings". Also, you can only
backup to tape if the SBT drivers and libraries are installed on your Server.
Click Next. The "Schedule Customized Backup: Schedule" page appears.
Obviously, here you can create a schedule for your job, or run it now, or run it at a later moment.
In the Job Name field, you can enter a user-specified tag for this backup. The job name is used as a prefix
for the backup tag for backups created by this job.
Click Next, and the "Review" page will appear.
Note the "Edit RMAN Script" button, which enable you to view the assoiciated RMAN script. Also note that this page enables you to Submit the job for execution. In this example, the RMAN script is this: backup device type disk tag '%TAG' database include current controlfile; backup device type disk tag '%TAG' archivelog all not backed up; Click "Submit Job" to start the backup. After submitting, you are able to view the status of the job.
RMAN always stores its RMAN repository of metadata in the control file of each target database on which it
performs operations.
For example, suppose that you use RMAN to back up the prod1 and prod2 databases. RMAN stores the metadata
for backups of prod1 in the control file of prod1, and the metadata for backups of prod2 in the control file
of prod2.
You may also have an RMAN repository (the catalog) in a separate, central database.
In a large environment, with many targets, that will make (for example) reporting and listing of stored
backups much easier. Here we mean (of course) obtaining all kinds of status information from the RMAN repository.
But, again, any target will always carry metadata (with respect to backups for itself) in the controlfile.
Also, backups are physically stored on some media. You can validate the status of the physical backup piece against the information in the metadata.
Some REPORT examples:

The REPORT command is more focused on analyzing your backups, instead of just providing lists.

RMAN> REPORT NEED BACKUP;
RMAN> REPORT UNRECOVERABLE;
RMAN> REPORT OBSOLETE;

RMAN> report obsolete;
RMAN-03022: compiling command: report
RMAN-06147: no obsolete backups found

A CROSSCHECK and REPORT example:
Use the CROSSCHECK command to update the status of backups in the repository, compared to their status on disk.
With respect to tape media, this is somewhat more difficult, since tapes "are less online" compared to disks.
But if the tape media manager (sbt) exposes all its catalog information, you can use CROSSCHECK on tape as well.

RMAN> CROSSCHECK BACKUP DEVICE TYPE DISK;
RMAN> CROSSCHECK BACKUP DEVICE TYPE sbt;
RMAN> CROSSCHECK BACKUP;     -- crosschecks all backups on all types of media
RMAN> CROSSCHECK BACKUP DEVICE TYPE sbt COMPLETED BETWEEN '01-AUG-09' AND '30-AUG-09';
RESTORE .. PREVIEW and RESTORE VALIDATE commands: You can apply "RESTORE ... PREVIEW" to any RESTORE operation to create a detailed list of every backup to be used in the requested RESTORE operation. This command accesses the RMAN repository to query the backup metadata, but does not actually read the backup files to ensure that they can be restored.
Somewhat more elaborate is the "RESTORE VALIDATE HEADER" command. In addition to listing the files needed for restore and recovery, the RESTORE ... VALIDATE HEADER command validates the backup file headers to determine whether the files on disk or in the media management catalog correspond to the metadata in the RMAN repository. Example: RMAN> RESTORE DATABASE PREVIEW; RMAN> RESTORE ARCHIVELOG FROM TIME 'SYSDATE-7' PREVIEW;
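As a minimal sketch of the VALIDATE HEADER variant mentioned above:

RMAN> RESTORE DATABASE VALIDATE HEADER;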
Note that you can:

Crosscheck (check status) backups -- checks the backups in the repository against what exists on media
Delete backups                    -- if you want to delete a certain backup
Delete Obsolete backups           -- deletes backups that are not needed to satisfy the retention policy
Delete Expired backups            -- deletes repository entries for any backups not found when a Crosscheck was done
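In practice these are often combined after a (disk or tape) cleanup, for example:

RMAN> CROSSCHECK BACKUP;           -- marks backups that are no longer found on media as EXPIRED
RMAN> DELETE EXPIRED BACKUP;       -- removes the repository entries for those backups
RMAN> DELETE NOPROMPT OBSOLETE;    -- removes backups no longer needed under the retention policy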
SQL> select file_id, SUBSTR(file_name,1,40) NAME, SUBSTR(tablespace_name,1,20) TABLESPACE
     from dba_data_files;

   FILE_ID NAME                                     TABLESPACE
---------- ---------------------------------------- --------------------
         4 C:\ORADATA\TEST11G\USERS01.DBF           USERS
         3 C:\ORADATA\TEST11G\UNDOTBS01.DBF         UNDOTBS1
         2 C:\ORADATA\TEST11G\SYSAUX01.DBF          SYSAUX
         1 C:\ORADATA\TEST11G\SYSTEM01.DBF          SYSTEM
         5 C:\ORADATA\TEST11G\STAGING.DBF           STAGING
Next, we shutdown the database:

SQL> shutdown immediate;

After the database is closed, we delete the "STAGING.DBF" file. Next, we try to start the database:

SQL> alter database open;
alter database open
*
ERROR at line 1:
ORA-01157: cannot identify/lock data file 5 - see DBWR trace file
ORA-01110: data file 5: 'C:\ORADATA\TEST11G\STAGING.DBF'

This is quite a serious error, which normally can be resolved by restoring and recovering that tablespace.
That would be as follows:

RMAN> connect target

connected to target database: TEST11G (DBID=855257989, not open)

RMAN> RESTORE TABLESPACE STAGING;
RMAN> RECOVER TABLESPACE STAGING;

After that, you can open the database in a normal way.
But this was easy, because you know the problem, and you know how to solve it!
But suppose you do not know how to handle it. Then you might consider the Data Recovery Advisor.
The recommended workflow is to run LIST FAILURE to display failures, ADVISE FAILURE to display repair options,
and REPAIR FAILURE to fix the failures. So, in general, this is how to use the Data Recovery Advisor:

1. LIST FAILURE
2. ADVISE FAILURE
3. REPAIR FAILURE

The "power" of the Advisor is that you can "just" do those actions, possibly without actually knowing all
background details. Well, anyway, that's the theory. You might have some doubts here.

RMAN> LIST FAILURE;

List of Database Failures
=========================
Failure ID Priority Status    Time Detected Summary
---------- -------- --------- ------------- -------
142        HIGH     OPEN      28-NOV-09     One or more non-system datafiles are missing

RMAN> ADVISE FAILURE;

List of Database Failures
=========================
Failure ID Priority Status    Time Detected Summary
---------- -------- --------- ------------- -------
142        HIGH     OPEN      28-NOV-09     One or more non-system datafiles are missing

analyzing automatic repair options; this may take some time
using channel ORA_DISK_1
analyzing automatic repair options complete

Mandatory Manual Actions
========================
no manual actions available

Optional Manual Actions
=======================
1. If file c:\oradata\test11g\staging.dbf was unintentionally renamed or moved, restore it

Automated Repair Options
========================
Option Repair Description
------ ------------------
1      Restore and recover datafile 5

Strategy: The repair includes complete media recovery with no data loss
Repair script: c:\oracle\diag\test11g\test11g\hm\reco_660500184.hm
The ADVISE FAILURE output shows both manual and automated repair options. First try to fix the problem manually.
If you cannot fix the problem manually, then review the automated repair options.
Now, you can go to that script, evaluate it, and do manual actions. Or, you can let rman perform an
automated repair:

RMAN> REPAIR FAILURE;

Or do a "preview" first:

RMAN> REPAIR FAILURE PREVIEW;

Strategy: The repair includes complete media recovery with no data loss
Repair script: c:\oracle\diag\test11g\test11g\hm\reco_660500184.hm

contents of repair script:
restore datafile 5;
recover datafile 5;

It is that simple!
1. CREATE TABLESPACE:
CREATE TABLESPACE STAGING
DATAFILE 'C:\ORADATA\TEST11G\STAGING.DBF' SIZE 5000M
EXTENT MANAGEMENT LOCAL AUTOALLOCATE
SEGMENT SPACE MANAGEMENT AUTO;

CREATE TABLESPACE CISTS_01
DATAFILE '/u07/oradata/spldevp/cists_01.dbf' SIZE 1200M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 128K;
- Locally managed tablespaces track all extent information in the tablespace itself by using bitmaps.
- Space allocation is simplified, because when the AUTOALLOCATE clause is specified, the database
  automatically selects the appropriate extent size.
- You can have the database manage extents for you automatically with the AUTOALLOCATE clause (the default),
  or you can specify that the tablespace is managed with uniform extents of a specific size (UNIFORM).
- If you expect the tablespace to contain objects of varying sizes, requiring many extents with different
  extent sizes, then AUTOALLOCATE is the best choice.
- If you want exact control over unused space, and you can predict exactly the space to be allocated for an
  object or objects, and the number and size of extents, then UNIFORM is a good choice. This setting ensures
  that you will never have unusable space in your tablespace.
2. CREATE TABLE:

CREATE TABLE employees
( employee_id    NUMBER(6),
  first_name     VARCHAR2(20),
  last_name      VARCHAR2(25) CONSTRAINT emp_last_name_nn NOT NULL,
  email          VARCHAR2(25) CONSTRAINT emp_email_nn NOT NULL,
  phone_number   VARCHAR2(20),
  hire_date      DATE CONSTRAINT emp_hire_date_nn NOT NULL,
  job_id         VARCHAR2(10) CONSTRAINT emp_job_nn NOT NULL,
  salary         NUMBER(8,2),
  commission_pct NUMBER(2,2),
  manager_id     NUMBER(6),
  department_id  NUMBER(4),
  CONSTRAINT emp_salary_min CHECK (salary > 0),
  CONSTRAINT emp_email_uk UNIQUE (email)
) TABLESPACE USERS;

ALTER TABLE employees ADD
( CONSTRAINT emp_emp_id_pk  PRIMARY KEY (employee_id),
  CONSTRAINT emp_dept_fk    FOREIGN KEY (department_id) REFERENCES departments (department_id),
  CONSTRAINT emp_job_fk     FOREIGN KEY (job_id) REFERENCES jobs (job_id),
  CONSTRAINT emp_manager_fk FOREIGN KEY (manager_id) REFERENCES employees (manager_id)
);
CREATE TABLE hr.admin_emp
( empno     NUMBER(5) PRIMARY KEY,
  ename     VARCHAR2(15) NOT NULL,
  ssn       NUMBER(9) ENCRYPT,
  job       VARCHAR2(10),
  mgr       NUMBER(5),
  hiredate  DATE DEFAULT (sysdate),
  photo     BLOB,
  sal       NUMBER(7,2),
  hrly_rate NUMBER(7,2) GENERATED ALWAYS AS (sal/2080),   -- virtual column
  comm      NUMBER(7,2),
  deptno    NUMBER(3) NOT NULL
            CONSTRAINT admin_dept_fkey REFERENCES hr.departments (department_id)
) TABLESPACE admin_tbs STORAGE (INITIAL 50K);
3. OBJECT TABLE:
CREATE TYPE department_typ AS OBJECT
( d_name    VARCHAR2(100),
  d_address VARCHAR2(200) );
/

CREATE TABLE departments_obj_t OF department_typ;

INSERT INTO departments_obj_t VALUES ('hr', '10 Main St, Sometown, CA');
4. GLOBAL TEMPORARY TABLE:

The data in a global temporary table is private, such that data inserted by a session can only be accessed
by that session. The session-specific rows in a global temporary table can be preserved for the whole session,
or just for the current transaction. The ON COMMIT DELETE ROWS clause indicates that the data should be deleted
at the end of the transaction.
Like permanent tables, temporary tables are defined in the data dictionary. However, temporary tables and their
indexes do not automatically allocate a segment when created. Instead, temporary segments are allocated when
data is first inserted. A minimal sketch follows below.
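As a minimal sketch (the table name and columns are just examples):

CREATE GLOBAL TEMPORARY TABLE admin_work_area
( startdate DATE,
  enddate   DATE,
  class     CHAR(20) )
ON COMMIT DELETE ROWS;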
5. EXTERNAL TABLE:
CREATE OR REPLACE DIRECTORY ext AS 'c:\external';
GRANT READ ON DIRECTORY ext TO public;

CREATE TABLE ext_tab
( empno  CHAR(4),
  ename  CHAR(20),
  job    CHAR(20),
  deptno CHAR(2) )
ORGANIZATION EXTERNAL
( TYPE oracle_loader
  DEFAULT DIRECTORY ext
  ACCESS PARAMETERS
  ( RECORDS DELIMITED BY NEWLINE
    BADFILE 'bad_%a_%p.bad'
    LOGFILE 'log_%a_%p.log'
    FIELDS TERMINATED BY ','
    MISSING FIELD VALUES ARE NULL
    REJECT ROWS WITH ALL NULL FIELDS
    (empno, ename, job, deptno) )
  LOCATION ('demo1.dat')
);

An access driver is an API that interprets the external data for the database. The access driver runs inside
the database, which uses the driver to read the data in the external table. The access driver and the external
table layer are responsible for performing the transformations required on the data in the data file so that
it matches the external table definition. Oracle provides the ORACLE_LOADER (default) and ORACLE_DATAPUMP
access drivers for external tables. For both drivers, the external files are not Oracle data files.
ORACLE_LOADER enables read-only access to external files using SQL*Loader. You cannot create, update, or
append to an external file using the ORACLE_LOADER driver.
The ORACLE_DATAPUMP driver enables you to unload external data. This operation involves reading data from the
database and inserting the data into an external table, represented by one or more external files. After
external files are created, the database cannot update or append data to them. The driver also enables you to
load external data, which involves reading an external table and loading its data into a database.
6. CREATE CLUSTER:
Index Cluster: CREATE CLUSTER employees_departments_cluster (department_id NUMBER(4)) SIZE 512;
Hash Cluster: CREATE CLUSTER employees_departments_cluster (department_id NUMBER(4)) SIZE 8192 HASHKEYS 100;
CREATE INDEX idx_emp_dept_cluster ON CLUSTER employees_departments_cluster; Now, "add" tables to the cluster like for example: CREATE TABLE employees ( ... ) CLUSTER employees_departments_cluster (department_id);
CREATE TABLE departments ( ... ) CLUSTER employees_departments_cluster (department_id); A table cluster is a group of tables that share common columns and store related data in the same blocks. When tables are clustered, a single data block can contain rows from multiple tables. For example, a block can store rows from both the employees and departments tables rather than from only a single table. The cluster key is the column or columns that the clustered tables have in common. For example, the employees and departments tables share the department_id column. You specify the cluster key when creating the table cluster and when creating every table added to the cluster. The cluster key value is the value of the cluster key columns for a particular set of rows. All data that contains the same cluster key value, such as department_id=20, is physically stored together. Each cluster key value is stored only once in the cluster and the cluster index, no matter how many rows of different tables contain the value. You can consider clustering tables when they are primarily queried (but not much modified) and records from the tables are frequently queried together or joined. An indexed cluster is a table cluster that uses an index to locate data. The cluster index is a B-tree index on the cluster key. A cluster index must be created before any rows can be inserted into clustered tables. Assume that you create the cluster employees_departments_cluster with the cluster key department_id, as shown above. Because the HASHKEYS clause is not specified, this cluster is an indexed cluster. Afterward, you create an index named idx_emp_dept_cluster on this cluster key. Index Clause: Specify INDEX to create an indexed cluster. In an indexed cluster, Oracle Database stores together rows having the same cluster key value. Each distinct cluster key value is stored only once in each data block, regardless of the number of tables and rows in which it occurs. If you specify neither INDEX nor HASHKEYS, then Oracle Database creates an indexed cluster by default. After you create an indexed cluster, you must create an index on the cluster key before you can issue any data manipulation language (DML) statements against a table in the cluster. This index is called the cluster index.
Hashkeys Clause:
Specify the HASHKEYS clause to create a hash cluster and specify the number of hash values for the hash cluster.
In a hash cluster, Oracle Database stores together rows that have the same hash key value. The hash value
for a row is the value returned by the hash function of the cluster.
7. CREATE INDEX:
CREATE INDEX indx_cust_id ON customers (cust_id) NOLOGGING;

ALTER INDEX emp_pk REBUILD NOLOGGING
TABLESPACE INDEX_BIG
PCTFREE 10
STORAGE (INITIAL 5M NEXT 5M PCTINCREASE 0);

CREATE INDEX employees_ix ON employees (last_name, job_id, salary);
-- B-tree
A B-tree index has two types of blocks: branch blocks for searching and leaf blocks that store values. The upper-level branch blocks of a B-tree index contain index data that points to lower-level index blocks.
Index-organized tables
An index-organized table differs from a heap-organized table, because the data is itself the index.

Reverse key indexes
In this type of index, the bytes of the index key are reversed, for example, 103 is stored as 301.
The reversal of bytes spreads out inserts into the index over many blocks.
You also may see these indexes, or want to use them from time to time.
Consider a column which includes names like "restaurant A", "restaurant B", "restaurant C".
Perhaps a not very glamorous example, but the point is a column with many unique values but not much
variation at the front. Using a reverse-key index would be ideal here, because Oracle will simply REVERSE
the string before throwing it into the B-tree.

CREATE INDEX indx_r_name ON restaurants (r_name) REVERSE;

Descending indexes
This type of index stores data on a particular column or columns in descending order.

B-tree cluster indexes
This type of index is used to index a table cluster key. Instead of pointing to a row, the key points to
the block that contains rows related to the cluster key.

-- Bitmap Index
In a bitmap index, an index entry uses a bitmap to point to multiple rows. In contrast, a B-tree index entry
points to a single row. A bitmap join index is a bitmap index for the join of two or more tables.

CREATE BITMAP INDEX indx_gender ON employee (gender) TABLESPACE EMPDATA;

-- Function based Index
It precomputes values based on functions, and stores them in the index.

CREATE INDEX lastname_idx ON employees (LOWER(l_name));
CREATE INDEX emp_total_sal_idx ON employees (12 * salary * commission_pct, salary, commission_pct);
8. INDEX-ORGANIZED TABLE:
Index Organized Tables are tables that, unlike heap tables, are organized like B*Tree indexes. CREATE TABLE labor_hour ( WORK_DATE DATE, EMPLOYEE_NO VARCHAR2(8), CONSTRAINT pk_labor_hour PRIMARY KEY (work_date, employee_no)) ORGANIZATION INDEX; An index-organized table is a table stored in a variation of a B-tree index structure. In a heap-organized table, rows are inserted where they fit. In an index-organized table, rows are stored in an index defined on the PRIMARY KEY for the table. Each index entry in the B-tree also stores the non-key column values. Thus, the index is the data, and the data is the index. A secondary index is an index on an index-organized table. In a sense, it is an index on an index. The secondary index is an independent schema object and is stored separately from the index-organized table.
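As a minimal sketch, such a secondary index on the IOT above could look like this (the index name is just
an example):

CREATE INDEX idx_labor_empno ON labor_hour (employee_no);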
9. DATABASE LINK:
To run queries against remote tables in another database, you can create a "database link":

CREATE PUBLIC DATABASE LINK MYLINK
CONNECT TO scott IDENTIFIED BY tiger      -- account in remote database sales
USING 'sales';                            -- tnsnames alias sales

SELECT count(*) FROM table@MYLINK;        -- remote table in database (alias) sales
10. SEQUENCE:
CREATE SEQUENCE <sequence name>
INCREMENT BY    <increment number>
START WITH      <start number>
MAXVALUE        <maximum value>
CYCLE;
CREATE SEQUENCE seq_source
INCREMENT BY 1
START WITH 1
MAXVALUE 9999999
NOCYCLE;

CREATE TABLE source
( id         NUMBER(10) NOT NULL,
  longrecord VARCHAR2(128) );

CREATE OR REPLACE TRIGGER tr_source
BEFORE INSERT ON source
FOR EACH ROW
BEGIN
  SELECT seq_source.NEXTVAL INTO :NEW.id FROM dual;
END;
/

INSERT INTO source (longrecord) VALUES ('ddddd eee ff gggg');
INSERT INTO source (longrecord) VALUES ('ggggg hh ii jjjjj');
INSERT INTO source (longrecord) VALUES ('a b c d e');
11. PARTITIONED TABLES:

RANGE PARTITIONED:

The following example creates a range-partitioned table, where each partition is stored in its own
tablespace (tsa, tsb, tsc, tsd):

CREATE TABLE sales
( invoice_no NUMBER,
  sale_year  INT NOT NULL,
  sale_month INT NOT NULL,
  sale_day   INT NOT NULL )
PARTITION BY RANGE (sale_year, sale_month, sale_day)
( PARTITION sales_q1 VALUES LESS THAN (1999, 04, 01) TABLESPACE tsa,
  PARTITION sales_q2 VALUES LESS THAN (1999, 07, 01) TABLESPACE tsb,
  PARTITION sales_q3 VALUES LESS THAN (1999, 10, 01) TABLESPACE tsc,
  PARTITION sales_q4 VALUES LESS THAN (2000, 01, 01) TABLESPACE tsd );
A row with SALE_YEAR=1999, SALE_MONTH=8, and SALE_DAY=1 has a partitioning key of (1999, 8, 1), and would be
stored in partition sales_q3.
Each partition of a range-partitioned table is stored in a separate segment.

HASH PARTITIONED:

The following example creates a hash-partitioned table. The partitioning column is id, four partitions are
created and assigned system-generated names, and they are placed in four named tablespaces (gear1, gear2, ...).

CREATE TABLE scubagear
(id NUMBER, name VARCHAR2(60))
PARTITION BY HASH (id)
PARTITIONS 4
STORE IN (gear1, gear2, gear3, gear4);

LIST PARTITIONED:

The following example creates a list-partitioned table. It creates table q1_sales_by_region, which is
partitioned by regions consisting of groups of states.

CREATE TABLE q1_sales_by_region
( deptno          NUMBER,
  deptname        VARCHAR2(20),
  quarterly_sales NUMBER(10,2),
  state           VARCHAR2(2) )
PARTITION BY LIST (state)
( PARTITION q1_northwest    VALUES ('OR', 'WA'),
  PARTITION q1_southwest    VALUES ('AZ', 'UT', 'NM'),
  PARTITION q1_northeast    VALUES ('NY', 'VM', 'NJ'),
  PARTITION q1_southeast    VALUES ('FL', 'GA'),
  PARTITION q1_northcentral VALUES ('SD', 'WI'),
  PARTITION q1_southcentral VALUES ('OK', 'TX') );

Insert some sample rows:
(20, 'R&D', 150, 'OR')    maps to partition q1_northwest
(30, 'sales', 100, 'FL')  maps to partition q1_southeast

Composite Range-Hash Partitioning:
The following statement creates a range-hash partitioned table. In this example, three range partitions are
created, each containing eight subpartitions. Because the subpartitions are not named, system-generated names
are assigned, but the STORE IN clause distributes them across the 4 specified tablespaces (ts1, ..., ts4).

CREATE TABLE scubagear
(equipno NUMBER, equipname VARCHAR(32), price NUMBER)
PARTITION BY RANGE (equipno)
SUBPARTITION BY HASH (equipname)
SUBPARTITIONS 8 STORE IN (ts1, ts2, ts3, ts4)
( PARTITION p1 VALUES LESS THAN (1000),
  PARTITION p2 VALUES LESS THAN (2000),
  PARTITION p3 VALUES LESS THAN (MAXVALUE) );
Just create a user with a password, and suitable options (like a default tablespace, a quota etc..).

-- User Albert:
CREATE USER albert IDENTIFIED BY secret
DEFAULT TABLESPACE users
TEMPORARY TABLESPACE temp
QUOTA 20M ON users;

-- GRANTS to Albert:
GRANT create session TO albert;
GRANT create table TO albert;
GRANT create sequence TO albert;
GRANT create procedure TO albert;
-- User Arnold:
CREATE USER arnold IDENTIFIED BY secret
DEFAULT TABLESPACE staging
TEMPORARY TABLESPACE temp
QUOTA 20M ON staging;

-- GRANTS to Arnold:
GRANT create session TO arnold;
GRANT RESOURCE TO arnold;

2. Externally Authenticated User:

When an externally identified user connects to the database, the database relies on the fact that the OS
has authenticated the user, and that the username is a valid database account. There is no password stored
for this type of account.
This type of account must be created with a username 'prefix', which can be controlled with the spfile/init.ora
parameter "OS_AUTHENT_PREFIX".

SQL> show parameter PREFIX

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
os_authent_prefix                    string      OPS$

Example:

CREATE USER ops$harry IDENTIFIED EXTERNALLY
DEFAULT TABLESPACE staging
TEMPORARY TABLESPACE temp
QUOTA 20M ON staging;

-- grant some "system privileges" to the user
The keywords IDENTIFIED EXTERNALLY tell the database that the account is an externally authenticated account.

3. Globally Authenticated User:

When a globally identified user connects to the database, the database verifies that the username is valid,
and passes the authentication to an external Service like Kerberos.
Just like the OS users in 2), these types of accounts do not store passwords in the database.
The keywords IDENTIFIED GLOBALLY tell the database that the advanced security option must be engaged.

Example:

CREATE USER jbrown IDENTIFIED GLOBALLY AS 'CN=jbrown, OU=SALES, O=ANTAPEX'
DEFAULT TABLESPACE staging
TEMPORARY TABLESPACE temp
QUOTA 20M ON staging;
View the useraccounts in the Database:

SELECT username, substr(default_tablespace,1,20), substr(temporary_tablespace,1,20),
       created, password, account_status
FROM dba_users;

USERNAME               DEFAULT_TABLESPACE  TEMPORARY_TABLESPACE  CREATED
---------------------- ------------------- --------------------- ---------
MGMT_VIEW              SYSTEM              TEMP                  15-OCT-07
SYS                    SYSTEM              TEMP                  15-OCT-07
SYSTEM                 SYSTEM              TEMP                  15-OCT-07
DBSNMP                 SYSAUX              TEMP                  15-OCT-07
SYSMAN                 SYSAUX              TEMP                  15-OCT-07
JOHN                   USERS               TEMP                  21-NOV-09
ALBERT                 USERS               TEMP                  26-NOV-09
HANK                   USERS               TEMP                  27-NOV-09
OPS$HARRY              STAGING             TEMP                  27-NOV-09
ARNOLD                 STAGING             TEMP                  26-NOV-09
OUTLN                  SYSTEM              TEMP                  15-OCT-07
FLOWS_FILES            SYSAUX              TEMP                  15-OCT-07
MDSYS                  SYSAUX              TEMP                  15-OCT-07
ORDSYS                 SYSAUX              TEMP                  15-OCT-07
EXFSYS                 SYSAUX              TEMP                  15-OCT-07
WMSYS                  SYSAUX              TEMP                  15-OCT-07
WKSYS                  SYSAUX              TEMP                  15-OCT-07
WK_TEST                SYSAUX              TEMP                  15-OCT-07
CTXSYS                 SYSAUX              TEMP                  15-OCT-07
ANONYMOUS              SYSAUX              TEMP                  15-OCT-07
XDB                    SYSAUX              TEMP                  15-OCT-07
WKPROXY                SYSAUX              TEMP                  15-OCT-07
ORDPLUGINS             SYSAUX              TEMP                  15-OCT-07
FLOWS_030000           SYSAUX              TEMP                  15-OCT-07
OWBSYS                 SYSAUX              TEMP                  15-OCT-07
SI_INFORMTN_SCHEMA     SYSAUX              TEMP                  15-OCT-07
OLAPSYS                SYSAUX              TEMP                  15-OCT-07
SCOTT                  USERS               TEMP                  15-OCT-07
ORACLE_OCM             USERS               TEMP                  15-OCT-07
TSMSYS                 USERS               TEMP                  15-OCT-07
XS$NULL                USERS               TEMP                  15-OCT-07
MDDATA                 USERS               TEMP                  15-OCT-07
DIP                    USERS               TEMP                  15-OCT-07
APEX_PUBLIC_USER       USERS               TEMP                  15-OCT-07
SPATIAL_CSW_ADMIN_USR  USERS               TEMP                  15-OCT-07
SPATIAL_WFS_ADMIN_USR  USERS               TEMP                  15-OCT-07

(The PASSWORD and ACCOUNT_STATUS columns are omitted here for readability.)
In the output above, note the external account ops$harry, whose password is defined as external.
Since 11g, we do not see the encrypted password anymore in DBA_USERS.
ROLE:
Managing privileges is made easier by using roles, which are named groups of related privileges.
You create roles, grant system and object privileges to the roles, and then grant roles to users.
You can also grant roles to other roles. Unlike schema objects, roles are not contained in any schema.
A role resembles what you would call a "group" in an OS. If you put an OS user in a group, that user inherits
the permissions assigned at the group level. In Oracle, you "GRANT" a role to a user.
"Assign" object privilges, or system privileges, or ROLES, (to a user or ROLE) with the GRANT statement. "Remove" object privilges, or system privileges, or ROLES, (from a user or ROLE) with the REVOKE statement. SQL> select * from dba_roles; ROLE -----------------------------CONNECT RESOURCE DBA SELECT_CATALOG_ROLE EXECUTE_CATALOG_ROLE DELETE_CATALOG_ROLE EXP_FULL_DATABASE IMP_FULL_DATABASE LOGSTDBY_ADMINISTRATOR AQ_ADMINISTRATOR_ROLE AQ_USER_ROLE DATAPUMP_EXP_FULL_DATABASE DATAPUMP_IMP_FULL_DATABASE GATHER_SYSTEM_STATISTICS RECOVERY_CATALOG_OWNER etc.. (many rows ommitted) View System privs: PASSWORD -------NO NO NO NO NO NO NO NO NO NO NO NO NO NO NO
SQL> SELECT * FROM system_privilege_map;

 PRIVILEGE NAME                                      PROPERTY
---------- ----------------------------------------- --------
        -3 ALTER SYSTEM                                     0
        -4 AUDIT SYSTEM                                     0
        -5 CREATE SESSION                                   0
        -6 ALTER SESSION                                    0
etc.. (many rows omitted)

204 rows in 11gR1.
IMPORTANT: Suppose Mary granted an object privilege to Arnold WITH GRANT OPTION, and Arnold then granted that
privilege to Harry. If Mary now REVOKES the privilege from Arnold, the revoke CASCADES, and Harry also loses
the privilege.
With OBJECT privileges, the database registers both the GRANTEE and the GRANTOR.
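A minimal sketch of such a chain (the table mary.accounts is just a hypothetical example):

GRANT SELECT ON mary.accounts TO arnold WITH GRANT OPTION;   -- executed by Mary
GRANT SELECT ON mary.accounts TO harry;                      -- executed by Arnold
REVOKE SELECT ON mary.accounts FROM arnold;                  -- executed by Mary: Harry loses SELECT too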
For example, if we take a look at the structure of DBA_TAB_PRIVS:

SQL> desc dba_tab_privs

Name                          Null?
----------------------------- --------
GRANTEE                       NOT NULL
OWNER                         NOT NULL
TABLE_NAME                    NOT NULL
GRANTOR                       NOT NULL
PRIVILEGE                     NOT NULL
GRANTABLE
HIERARCHY
We can see that the database "knows" how to cascade the REVOKE, because GRANTEE and GRANTOR are registered.
SYSTEM VIEWS:
Main System views:

DBA_ROLES
DBA_USERS
DBA_SYS_PRIVS
DBA_TAB_PRIVS
V$OBJECT_PRIVILEGE
ROLE_ROLE_PRIVS
ROLE_SYS_PRIVS
ROLE_TAB_PRIVS
SYSTEM_PRIVILEGE_MAP
Example: Suppose user Albert created the table PERSON. Now we do this:

GRANT SELECT ON albert.person TO arnold;

CREATE ROLE testrole;
GRANT RESOURCE, JAVA_ADMIN TO testrole;
GRANT create procedure TO testrole;
GRANT SELECT ON albert.person TO testrole;

SELECT grantee, table_name, grantor, privilege, grantable
FROM dba_tab_privs WHERE table_name='PERSON';

GRANTEE    TABLE_NAME  GRANTOR  PRIVILEGE  GRANTABLE
---------- ----------- -------- ---------- ---------
TESTROLE   PERSON      ALBERT   SELECT     NO
ARNOLD     PERSON      ALBERT   SELECT     NO

SELECT * FROM role_role_privs WHERE role='TESTROLE';

ROLE       GRANTED_ROLE  ADMIN_OPTION
---------- ------------- ------------
TESTROLE   JAVA_ADMIN    NO
TESTROLE   RESOURCE      NO
SELECT * FROM role_sys_privs WHERE role='TESTROLE';

ROLE       PRIVILEGE         ADMIN_OPTION
---------- ----------------- ------------
TESTROLE   CREATE PROCEDURE  NO

SELECT * FROM role_tab_privs WHERE role='TESTROLE';

ROLE       OWNER   TABLE_NAME  COLUMN_NAME  PRIVILEGE
---------- ------- ----------- ------------ ---------
TESTROLE   ALBERT  PERSON                   SELECT
Suppose you want to create a new ROLE, called APPDEV. And you want to grant the following system privileges
to APPDEV: CREATE TABLE, CREATE VIEW, CREATE PROCEDURE.
In the above screen, click "Roles". In the screen that follows, which shows you all present ROLES, click "Create".
In the "Create Role" page, type in the new ROLE name, and click "System Privileges" to go to that subpage. In that subpage, click "Edit List" (in order to add privileges).
Just add from the "Available System Privileges" the privileges you want to grant to APPDEV.
If you are ready, click "OK". You can now grant the role to a database user, like "arnold":

GRANT APPDEV TO arnold;

Here we have used a SQL statement from sqlplus, but you can use the EM as well.

Important: If you drop a role: dropping (deleting) a role automatically removes the privileges associated
with that role from all users that had been granted the role.
Any user you create will have the "DEFAULT" profile assigned to the account.
A profile is a set of rules, to restrict and/or limit access to resources, such as the number of sessions
that the user may have open. Other resource limits are, for example, "CPU time per call", and "number of
logical reads per call".

You first need to enable resource profiles in your database, by using:

SQL> ALTER SYSTEM SET resource_limit=TRUE SCOPE=BOTH;

Or, if you use an init.ora file, edit it and place the record "resource_limit=true" in that file.

Next you will see an example of a very simple profile:

CREATE PROFILE sess_limit LIMIT
SESSIONS_PER_USER 2;

Now let's create a new user, to which we will assign the profile SESS_LIMIT:

CREATE USER hank IDENTIFIED BY secret
DEFAULT TABLESPACE users
TEMPORARY TABLESPACE temp
QUOTA 20M ON users
PROFILE sess_limit;

Profiles are registered in DBA_PROFILES:

SELECT DISTINCT profile FROM dba_profiles;

PROFILE
---------------------
WKSYS_PROF
MONITORING_PROFILE
SESS_LIMIT
DEFAULT

SELECT * FROM dba_profiles WHERE profile='DEFAULT';

PROFILE  RESOURCE_NAME              RESOURCE_TYPE  LIMIT
-------- -------------------------- -------------- ---------
DEFAULT  COMPOSITE_LIMIT            KERNEL         UNLIMITED
DEFAULT  SESSIONS_PER_USER          KERNEL         UNLIMITED
DEFAULT  CPU_PER_SESSION            KERNEL         UNLIMITED
DEFAULT  CPU_PER_CALL               KERNEL         UNLIMITED
DEFAULT  LOGICAL_READS_PER_SESSION  KERNEL         UNLIMITED
DEFAULT  LOGICAL_READS_PER_CALL     KERNEL         UNLIMITED
DEFAULT  IDLE_TIME                  KERNEL         UNLIMITED
DEFAULT  CONNECT_TIME               KERNEL         UNLIMITED
DEFAULT  PRIVATE_SGA                KERNEL         UNLIMITED
DEFAULT  FAILED_LOGIN_ATTEMPTS      PASSWORD       10
DEFAULT  PASSWORD_LIFE_TIME         PASSWORD       180
DEFAULT  PASSWORD_REUSE_TIME        PASSWORD       UNLIMITED
DEFAULT  PASSWORD_REUSE_MAX         PASSWORD       UNLIMITED
DEFAULT  PASSWORD_VERIFY_FUNCTION   PASSWORD       NULL
DEFAULT  PASSWORD_LOCK_TIME         PASSWORD       1
DEFAULT  PASSWORD_GRACE_TIME        PASSWORD       7
We can alter an existing profile as well, for example:

ALTER PROFILE sess_limit LIMIT
SESSIONS_PER_USER 20
IDLE_TIME 20;

We can also change the profile assigned to a user, as in this example:

ALTER USER arnold PROFILE sess_limit;
PASSWORD_VERIFY_FUNCTION in a Profile:
We can make use of a PASSWORD VERIFY FUNCTION in a profile. Let's make a function (as SYS) like this:

CREATE OR REPLACE FUNCTION sess_limit_passw
( username     VARCHAR2,
  password     VARCHAR2,
  old_password VARCHAR2 )
RETURN BOOLEAN
AS
BEGIN
  -- whatever code to check the validity of a password;
  -- the function must return TRUE if the password is acceptable
  RETURN TRUE;
END;
/

Now, let us make the profile SESS_LIMIT use that function:

ALTER PROFILE sess_limit LIMIT
PASSWORD_VERIFY_FUNCTION sess_limit_passw;
AUDIT_TRAIL = { none | os | db | db,extended | xml | xml,extended }

none          Disables database auditing.
os            Enables database auditing and directs all audit records to an operating system file,
              the location of which is specified by "AUDIT_FILE_DEST".
db            Enables database auditing and directs all audit records to the database audit trail
              (the SYS.AUD$ table).
db,extended   Enables database auditing and directs all audit records to the database audit trail.
              In addition, populates the SQLBIND and SQLTEXT CLOB columns of the SYS.AUD$ table.
xml           Enables database auditing and writes all audit records to XML format OS files.
xml,extended  Enables database auditing and prints all columns of the audit trail, including
              SqlText and SqlBind values.
You can use the SQL AUDIT statement to set auditing options regardless of the setting of this parameter.

AUDIT_SYS_OPERATIONS

AUDIT_SYS_OPERATIONS = { true | false }
AUDIT_SYS_OPERATIONS enables or disables the auditing of operations issued by user SYS, and users connecting
with SYSDBA or SYSOPER privileges. The audit records are written to the operating system's audit trail.
The audit records will be written in XML format if the AUDIT_TRAIL initialization parameter is set to an
XML option.
On UNIX platforms, if the AUDIT_SYSLOG_LEVEL parameter has also been set, then it overrides this behavior,
and SYS audit records are written to the system audit log using the SYSLOG utility.

AUDIT_FILE_DEST

AUDIT_FILE_DEST = /path_to_operating_system_audit_trail
Default: ORACLE_BASE/admin/ORACLE_SID/adump, or else ORACLE_HOME/rdbms/audit

AUDIT_FILE_DEST specifies the operating system directory into which the audit trail is written, when the
AUDIT_TRAIL initialization parameter is set to an OS or XML option. Oracle Database writes the audit records
in XML format if the AUDIT_TRAIL initialization parameter is set to an XML option.
Oracle Database also writes mandatory auditing information to this location, and, if the AUDIT_SYS_OPERATIONS
initialization parameter is set, SYS audit records as well.

AUDIT_SYSLOG_LEVEL

AUDIT_SYSLOG_LEVEL = 'facility_clause.priority_clause'
facility_clause: { USER | LOCAL[0|1|2|3|4|5|6|7] | SYSLOG | DAEMON | KERN | MAIL | AUTH }
priority_clause: { NOTICE | INFO | DEBUG | WARNING | ERR | CRIT | ALERT | EMERG }
If AUDIT_SYSLOG_LEVEL is set and SYS auditing is enabled (AUDIT_SYS_OPERATIONS = TRUE), then SYS audit records
are written to the system audit log. If AUDIT_SYSLOG_LEVEL is set and standard auditing is enabled
(AUDIT_TRAIL = OS), then standard audit records are written to the system audit log.

To alter a parameter, for example AUDIT_TRAIL:

SQL> ALTER SYSTEM SET audit_trail=db SCOPE=SPFILE;
SQL> shutdown immediate;
SQL> startup
DBA_AUDIT_TRAIL         displays all standard audit trail entries.
DBA_FGA_AUDIT_TRAIL     displays all audit records for fine-grained auditing.
DBA_AUDIT_SESSION       displays all audit trail records concerning CONNECT and DISCONNECT.
DBA_AUDIT_STATEMENT     displays audit trail records for all GRANT, REVOKE, AUDIT, NOAUDIT,
                        and ALTER SYSTEM statements.
DBA_AUDIT_OBJECT        displays audit trail records for all objects in the database.
DBA_COMMON_AUDIT_TRAIL  displays all standard and fine-grained audit trail entries, mandatory and
                        SYS audit records.
DBA_OBJ_AUDIT_OPTS      displays which object privileges (access to objects like tables) are enabled for audit.
DBA_PRIV_AUDIT_OPTS     displays which system privileges are enabled for audit.
DBA_STMT_AUDIT_OPTS     displays which statements are enabled for audit.
V$XML_AUDIT_TRAIL       when the audit trail is directed to an XML format OS file, it can be read via this
                        view, which contains similar information to the DBA_AUDIT_TRAIL view.
Examples: So, if you want to know which objects are enabled for audit, query the "DBA_OBJ_AUDIT_OPTS" view.
If you want to see your audit records, query the "DBA_AUDIT_TRAIL" view.
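For example (a minimal sketch), listing the audit records in chronological order:

SQL> SELECT username, obj_name, action_name, timestamp
     FROM dba_audit_trail
     ORDER BY timestamp;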
Privilege auditing:  Audits SQL statements that are authorized by the specified system privilege.
                     For example, AUDIT CREATE ANY TRIGGER audits statements issued using the
                     CREATE ANY TRIGGER system privilege.
Statement auditing:  Causes auditing of specific SQL statements or groups of statements that affect a
                     particular type of database structure.
                     For example, AUDIT TABLE audits the CREATE TABLE, TRUNCATE TABLE etc.. statements.
Object auditing:     Audits specific statements on specific objects, such as on the EMPLOYEES table.
If you want to audit access on data, like a table or column, you can also specify how the audit records
should be grouped:

AUDIT ... BY ACCESS;
BY SESSION causes Oracle to write a single record for all SQL statements of the same type, issued in the
same session. BY ACCESS causes Oracle to write one record for each access.
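As a minimal sketch (hr.employees is just an example object):

SQL> AUDIT SELECT, INSERT, UPDATE, DELETE ON hr.employees BY ACCESS;
SQL> NOAUDIT SELECT ON hr.employees;     -- switch the SELECT auditing off again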
You will use the package "DBMS_FGA", which contains a number of subprocedures with which you can add a policy,
enable a policy etc.. Take a look at the code below:

begin
  dbms_fga.add_policy (
    object_schema   => 'HR',
    object_name     => 'EMPLOYEE',
    policy_name     => 'LARGE_SALARY',
    audit_condition => 'SALARY > 10000',
    audit_column    => 'SALARY',
    statement_types => 'INSERT' );
end;
/
What you might see from this example is that we create a policy called "LARGE_SALARY", with a condition,
such that if someone inserts a SALARY > 10000 into the HR.EMPLOYEE table, an audit record must be created.
You can use the "DBA_FGA_AUDIT_TRAIL" view to see the FGA audit records.
Please see: http://www.orafaq.com/wiki/DBMS_FGA for a good example on using FGA.
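DBMS_FGA also contains procedures to enable, disable, or drop such a policy. A minimal sketch, reusing the
policy from the example above:

begin
  dbms_fga.disable_policy(object_schema => 'HR', object_name => 'EMPLOYEE', policy_name => 'LARGE_SALARY');
  dbms_fga.enable_policy (object_schema => 'HR', object_name => 'EMPLOYEE', policy_name => 'LARGE_SALARY');
  dbms_fga.drop_policy   (object_schema => 'HR', object_name => 'EMPLOYEE', policy_name => 'LARGE_SALARY');
end;
/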
Suppose you have the 9i or 10g database "sales". As an example, the alert.log and user dumps would then be
stored as follows.
For the upper example, the spfile.ora/init.ora file would contain the following parameters: BACKGROUND_DUMP_DEST= /opt/app/oracle/admin/sales/bdump USER_DUMP_DEST= /opt/app/oracle/admin/sales/udump CORE_DUMP_DEST= /opt/app/oracle/admin/sales/core
alert.log file in BACKGROUND_DUMP_DEST:
Most notable is the "alert.log" file, which is a plain ascii file, and which logs significant database events.
It contains messages about startups, shutdowns, serious database/instance errors, as well as the creation of
database structures (like tablespaces).

trace files in USER_DUMP_DEST:
If a user server process encountered an error condition, a trace file might have been generated.
This file would contain certain diagnostic information, and possibly the SQL statement that was involved.
One of the main objectives of ADR is to simplify the exchange of diagnostic information to Oracle Support,
in case of a serious error, or bug. ADR is the new unified directory structure that will hold all diagnostic
data, from all Oracle products and instances.

- Per default, the (unix/linux) environment variable $ADR_BASE points to the directory set by DIAGNOSTIC_DEST,
  which is the highest level directory, and which contains all ADR diagnostic subdirectories of all
  databases/instances.
- The variable $ADR_HOME points to an instance specific directory.

The physical location of ADR_BASE can be changed with the "DIAGNOSTIC_DEST" parameter.
The $ADR_HOME variable then points to the toplevel directory which contains all diagnostic information for a
particular "/database/instance". Many subdirectories can be found here, all related to messages, traces, and
incidents. But if you would have multiple databases and instances, all information would still be contained
within the $ADR_BASE (or DIAGNOSTIC_DEST) location. So, everything is available from one "root" level.
See below a graphical representation of the ADR structure.

You can view your current database settings by using the "SHOW PARAMETER" command, and by viewing the
"v$diag_info" view, which is more interesting:

SQL> SHOW PARAMETER DIAG
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
diagnostic_dest                      string      C:\ORACLE

SQL> select SUBSTR(NAME,1,20) as NAME, SUBSTR(VALUE,1,70) as VALUE FROM v$diag_info;

(Windows Example:)

NAME                  VALUE
--------------------  -----------------------------------------------------------------
Diag Enabled          TRUE
ADR Base              c:\oracle
ADR Home              c:\oracle\diag\rdbms\test11g\test11g
Diag Trace            c:\oracle\diag\rdbms\test11g\test11g\trace
Diag Alert            c:\oracle\diag\rdbms\test11g\test11g\alert
Diag Incident         c:\oracle\diag\rdbms\test11g\test11g\incident
Diag Cdump            c:\oracle\diag\rdbms\test11g\test11g\cdump
Health Monitor        c:\oracle\diag\rdbms\test11g\test11g\hm
Default Trace File    c:\oracle\diag\rdbms\test11g\test11g\trace\test11g_ora_1704.trc
Active Problem Count  2
Active Incident Coun  3
In the Unix/Linux example, we then would have: ADR Base: /opt/app/oracle ADR Home: /opt/app/oracle/diag/rdbms/db11/db11
Please note that the "trace" and "alert" directories, for a particular instance, are located within
"$ADR_BASE/diag/rdbms/database_name/instance_name"
Or, written in terms of the DIAGNOSTIC_DEST parameter:
"<DIAGNOSTIC_DEST>/diag/rdbms/database_name/instance_name"
In my example, the "/database/instance/" part is just simply "/test11g/test11g/", because the database and
instance have the same name, and I only have one instance right now.
To depict the ADR structure in a graphical way, it looks like this:
ADR_BASE (= DIAGNOSTIC_DEST)
|
+-- diag/rdbms/database_A/instance_of_A     <-- ADR_HOME for instance_of_A
|     +-- alert
|     +-- trace
|     +-- incident
|     +-- hm
|
+-- diag/rdbms/database_B/instance_of_B     <-- ADR_HOME for instance_of_B
      +-- alert
      +-- trace
In this figure, you see that there is only one ADR_BASE, while there are two ADR_HOME's: one for
instance_of_A, and one for instance_of_B.
So what is the ADR? The ADR is a file-based repository for database diagnostic data, such as traces, dumps,
the alert log, health monitor reports, and more. It has a unified directory structure across multiple
instances and multiple products.
Beginning with Release 11g, the database, Automatic Storage Management (ASM), and other Oracle products or
components store all diagnostic data in the ADR. Each instance of each product stores diagnostic data
underneath its own ADR home directory (see "ADR Home"). For example, in an Oracle RAC environment with shared
storage and ASM, each database instance and each ASM instance has a home directory within the ADR.
The ADR's unified directory structure enables customers and Oracle Support to correlate and analyze
diagnostic data across multiple instances and multiple products.
12.2.3. Viewing the alert log (log.xml) with the "adrci" commandline tool:
ADRCI is a command line utility that serves as the interface between you and the ADR. You can do such things as view diagnostic data, view reports, view alert logs, and even package the diagnostic information for sending to Oracle support. So, "adrci" is quite a versatile utility. You can invoke the "ADR command interface" by entering "adrci" from your OS prompt: C:\oracle\diag\rdbms\test11g\test11g\alert>adrci ADRCI: Release 11.1.0.6.0 - Beta on Sat Nov 28 18:24:42 2009 Copyright (c) 1982, 2007, Oracle. All rights reserved. ADR base = "c:\oracle" adrci>
If you want to know which subcommands you can enter in adrci, just enter "help", "help <topic>", or "help extended". One of the most obvious commands is the SHOW ALERT command, which you can use to browse through the alert log. Here are a few examples:

adrci> SHOW ALERT -TAIL;      -- displays the last 10 lines of the alert log.
adrci> SHOW ALERT -TAIL 50;   -- displays the last 50 lines of the alert log.
adrci> SHOW ALERT -TAIL -F;   -- displays the last 10 lines of the alert log, then waits for additional
                                 messages. Resembles the "tail -f" command in Unix.
If you want to "focus" the adrci tool on a certain ADR_HOME, you can use the "SET HOMEPATH" command, like i adrci> SET HOMEPATH diag\rdbms\test11g\test11g
All ADRCI commands operate on diagnostic data in the "current" ADR homes. More than one ADR home can be current at any one time. So, you can set one specific ADR_HOME, but you can also set the path one level higher (in the directory structure), which has the effect that all ADR_HOME's under that level become "current".
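To see which ADR home(s) are currently "in focus", adrci also has the standard SHOW HOMES command. A minimal sketch (the homepath shown is just our example instance):

adrci> SHOW HOMES
ADR Homes:
diag\rdbms\test11g\test11g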
Within ADR, a problem is a critical error that has occurred within an Oracle product or component such as the database. These are the typical ORA- errors you would normally search through the alert log for. An important aspect is that each occurrence of a problem is individually tracked and assigned a unique incident ID. Since a problem could happen multiple times, an incident is a single occurrence of the problem. So, individual incidents are tracked within ADR and are given a unique numeric incident ID within ADR.
Every problem has a problem key, which is a text string that includes an error code (such as ORA 600) and, in some cases, one or more error parameters. Two incidents are considered to have the same root cause if their problem keys match.
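Next to "show incident" (shown further below), adrci can also list the problems themselves, with their problem key, using the standard SHOW PROBLEM command:

adrci> show problem     -- shows a list of all problems, with their problem key and last incident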
If the need arises, and you want to upload all relevant diagnostic data to Oracle Support, you can "Package" the incident. ADR will put together all the diagnostic data about the particular incident and store this packaged information in an ADR subdirectory created uniquely for that incident. In the example below, we saw an important "alert" that had something to do with an "ORA-600" error. If we go further into the details of that error (by clicking that alert), we are able to "Quick Package" the diagnostic information.
Click the alert of interest (in the above example, it's the first alert). Then choose "View Problem Details". You will then enter a page similar to what is shown below. Here, you can choose to "Quick Package" the diagnostic data.
An incident package (package) is a collection of data about incidents for one or more problems. Before sending incident data to Oracle Support, it must be collected into a package using the Incident Packaging Service (IPS). After a package is created, you can add external files to the package, or remove selected files from the package.
A package is a logical construct only, until you create a physical file from the package contents. That is, an incident package starts out as a collection of metadata in the ADR. As you add and remove package contents, only the metadata is modified. When you are ready to upload the data to Oracle Support, you create a physical package using ADRCI, which saves the data into a zip file
Not only the EM lets you view incidents (coupled to some alert on a problem), but you can use "adrci" as well:

adrci> SET HOMEPATH diag\rdbms\test11g\test11g
adrci> show incident        -- shows a simple list of all incidents

ADR Home = c:\oracle\diag\rdbms\test11g\test11g:
*************************************************************************
INCIDENT_ID  PROBLEM_KEY                                                  CREATE_TIME
-----------  -----------------------------------------------------------  --------------------------
151377       ORA 600 [kturbleurec1]                                       2009-11-20 13:23:07.820000
151369       ORA 600 [kturbleurec1]                                       2009-11-20 13:22:57.702000
             ORA 600 [ORA-00600: internal error code, arguments: [kturbl  2009-11-20 13:23:23.661000

adrci> show incident -mode detail -p "incident_id=151377"   -- obtain detailed info about a particular incident
Step 1: create a logical incident package:

First, you create a logical package, based on (for example) an incident number, a problem number, a problem key, or a time interval, like in:

adrci> IPS CREATE PACKAGE INCIDENT incident_number

No matter what you chose, ADRCI will respond with output similar to:

Created package <package_number> based on <incident_id|problem_key|time>

Step 2: Add diagnostic information:

You can add an additional incident, or diagnostic file, to an existing logical package:

adrci> IPS ADD INCIDENT incident_number PACKAGE package_number
adrci> IPS ADD FILE filespec PACKAGE package_number

So, you just add what is necessary to the logical package.

Step 3: generate a physical incident package:

Now, we generate a physical structure, based on what you did in Step 1 and Step 2:

adrci> IPS GENERATE PACKAGE package_number IN path
Where the package number is already known to you, and the path is just a suitable path on your filesystem. This generates a complete physical package (zip file) in the designated path. For example, the following command creates a physical package in the directory "/home/oracle/packages" from logical package number 5:

adrci> IPS GENERATE PACKAGE 5 IN /home/oracle/packages
A server-generated alert is a notification from the Oracle Database server of an impending problem. The notification may contain suggestions for correcting the problem. Notifications are also provided when the problem condition has been cleared. Alerts are automatically generated when a problem occurs or when data does not match expected values for metrics, such as the following: Physical Reads Per Second User Commits Per Second SQL Service Response Time
Server-generated alerts can be based on threshold levels or can issue simply because an event has occurred. Threshold-based alerts can be triggered at both threshold warning and critical levels. The value of these levels can be customer-defined or internal values, and some alerts have default threshold levels which you can change if appropriate. For example, by default a server-generated alert is generated for tablespace space usage when the percentage of space usage exceeds either the 85% warning or 97% critical threshold level. Examples of alerts not based on threshold levels are:

- Snapshot Too Old
- Resumable Session Suspended
- Recovery Area Space Usage

An alert message is sent to the predefined persistent queue ALERT_QUE owned by the user SYS. Oracle Enterprise Manager reads this queue and provides notifications about outstanding server alerts, and sometimes suggests actions for correcting the problem. The alerts are displayed on the Enterprise Manager Database Home page and can be configured to send email or pager notifications to selected administrators. If an alert cannot be written to the alert queue, a message about the alert is written to the alert.log file.

Set Alert Thresholds:
1. Using EM
2. Using PLSQL

Using EM: From the Database Home Page, choose "Metric and Policy Settings":
Using PLSQL: You can view and change threshold settings for the server alert metrics using the SET_THRESHOLD and GET_THRESHOLD procedures of the DBMS_SERVER_ALERT PL/SQL package. Example:

BEGIN
  DBMS_SERVER_ALERT.SET_THRESHOLD(
    metrics_id              => DBMS_SERVER_ALERT.TABLESPACE_PCT_FULL,
    warning_operator        => DBMS_SERVER_ALERT.OPERATOR_GE,
    warning_value           => '70',
    critical_operator       => DBMS_SERVER_ALERT.OPERATOR_GE,
    critical_value          => '80',
    observation_period      => 1,
    consecutive_occurrences => 3,
    instance_name           => 'TEST11G',
    object_type             => DBMS_SERVER_ALERT.OBJECT_TYPE_TABLESPACE,
    object_name             => 'CUST');
END;
/
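The GET_THRESHOLD counterpart reads the current settings back into OUT variables. A minimal sketch (the variable names are just ours; the parameter names follow the documented DBMS_SERVER_ALERT.GET_THRESHOLD signature):

SET SERVEROUTPUT ON
DECLARE
  w_op  NUMBER;         -- warning operator
  w_val VARCHAR2(100);  -- warning value
  c_op  NUMBER;         -- critical operator
  c_val VARCHAR2(100);  -- critical value
  obs   NUMBER;         -- observation period
  occ   NUMBER;         -- consecutive occurrences
BEGIN
  DBMS_SERVER_ALERT.GET_THRESHOLD(
    metrics_id              => DBMS_SERVER_ALERT.TABLESPACE_PCT_FULL,
    warning_operator        => w_op,
    warning_value           => w_val,
    critical_operator       => c_op,
    critical_value          => c_val,
    observation_period      => obs,
    consecutive_occurrences => occ,
    instance_name           => 'TEST11G',
    object_type             => DBMS_SERVER_ALERT.OBJECT_TYPE_TABLESPACE,
    object_name             => 'CUST');
  DBMS_OUTPUT.PUT_LINE('warning at: '||w_val||'%, critical at: '||c_val||'%');
END;
/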
Example using the Dictionary with respect to Alerts:

select object_type, object_name, reason, suggested_action, time_suggested,
       resolution, advisor_name, metric_value, message_type, message_group, message_level
from dba_alert_history
where creation_time <= sysdate-1 and resolution = 'cleared';

select object_type, object_name, metrics_name, warning_operator, warning_value,
       critical_operator, critical_value, observation_period, consecutive_occurrences
from sys.dba_thresholds
where object_type = 'TABLE';
Oracle 11g introduces a new framework called the Health Monitor, which runs diagnostic checks in the database. The database runs the health monitor checks automatically in response to critical errors. But you can also run a health check manually, either by using the Oracle EM or the DBMS_HM package.
Health Monitor checks (also known as checkers, health checks, or checks) examine various layers and components of the database. Health checks detect file corruptions, physical and logical block corruptions, undo and redo corruptions, data dictionary corruptions, and more. The health checks generate reports of their findings and, in many cases, recommendations for resolving problems. Health checks can be run in two ways:

- Reactive: The fault diagnosability infrastructure can run health checks automatically in response to a critical error.
- Manual: As a DBA, you can manually run health checks using either the DBMS_HM PL/SQL package or the Enterprise Manager interface. You can run checkers on a regular basis if desired, or Oracle Support may ask you to run a checker while working with you on a service request.

Health Monitor stores its findings, recommendations, and other information in the ADR. HM can run in 2 modes, with respect to the status of the database:
- DB-online mode means the check can be run while the database is open (that is, in OPEN mode or MOUNT mode).
- DB-offline mode means the check can be run when the instance is available but the database itself is closed (that is, in NOMOUNT or MOUNT mode).
Check the "OFFLINE_CAPABLE" field in v$hm_check.

You can "interface" to HM, or view its results, with:

1. Viewing data via:
v$HM_CHECK
v$HM_CHECK_PARAM
v$HM_FINDING
v$HM_INFO
v$HM_RECOMMENDATION
v$HM_RUN
and using the tools: EM, DBMS_HM, ADRCI

2. Run HM jobs with:
EM -> Advisor Central -> Checkers Page
DBMS_HM
For example, a listing from v$hm_check (the first rows are omitted here):

SQL> SELECT id, name, clsid, internal_check, offline_capable FROM v$hm_check;

 ID  NAME                              CLSID  INTERNAL_CHECK  OFFLINE_CAPABLE
---  -------------------------------  ------  --------------  ---------------
  5  Logical Block Check                   1  Y               N
 10  Transaction Integrity Check          0  N               N
 11  Undo Segment Integrity Check         0  N               N
 12  All Control Files Check             19  Y               Y
 13  CF Member Check                     19  Y               Y
 14  All Datafiles Check                 19  Y               Y
 15  Single Datafile Check               19  Y               Y
 16  Log Group Check                     19  Y               Y
 17  Log Group Member Check              19  Y               Y
 18  Archived Log Check                  19  Y               Y
 19  Redo Revalidation Check             19  Y               Y
 20  IO Revalidation Check               19  Y               Y
 21  Block IO Revalidation Check         19  Y               Y
 22  Txn Revalidation Check              17  Y               N
 23  Failure Simulation Check            35  Y               Y
 24  Dictionary Integrity Check           0  N               N

21 rows selected.
SELECT name FROM v$hm_check WHERE internal_check='N'; NAME ---------------------------------------------------------------DB Structure Integrity Check Data Block Integrity Check Redo Integrity Check Transaction Integrity Check Undo Segment Integrity Check Dictionary Integrity Check
13.3 Running Health Checks and viewing Reports: Method 1: Using DBMS_HM subprocedures:
DBMS_HM.RUN_CHECK (
   check_name    IN VARCHAR2,
   run_name      IN VARCHAR2 := NULL,
   timeout       IN NUMBER   := NULL,
   input_params  IN VARCHAR2 := NULL);
DBMS_HM.GET_RUN_REPORT (
   run_name  IN VARCHAR2,
   type      IN VARCHAR2 := 'TEXT',
   level     IN VARCHAR2 := 'BASIC')
 RETURN CLOB;

Examples:

BEGIN
  DBMS_HM.RUN_CHECK('Dictionary Integrity Check', 'my_run_30112009');
END;
/

BEGIN
  DBMS_HM.RUN_CHECK (
    check_name   => 'Transaction Integrity Check',
    run_name     => 'my_run',
    input_params => 'TXN_ID=7.33.2');
END;
/

Now, let's view a report from the first check:

SET LONG 100000
SET LONGCHUNKSIZE 1000
SET PAGESIZE 1000
SET LINESIZE 512
SELECT DBMS_HM.GET_RUN_REPORT('my_run_30112009') FROM DUAL;

DBMS_HM.GET_RUN_REPORT('MY_RUN_30112009')
---------------------------------------------------------------------------
Basic Run Information
 Run Name                     : my_run_30112009
 Run Id                       : 1106
 Check Name                   : Dictionary Integrity Check
 Mode                         : MANUAL
 Status                       : COMPLETED
 Start Time                   : 2009-11-30 13:00:29.428000 +01:00
 End Time                     : 2009-11-30 13:00:44.218000 +01:00
 Error Encountered            : 0
 Source Incident Id           : 0
 Number of Incidents Created  : 0

Input Parameters for the Run
 TABLE_NAME=ALL_CORE_TABLES
 CHECK_MASK=ALL

Run Findings And Recommendations
 Finding
 Finding Name  : Dictionary Inconsistency
 Finding ID    : 1107
 Type          : FAILURE
 Status        : OPEN
 Priority      : CRITICAL
 Message       : SQL dictionary health check: dependency$.dobj# fk 126 on object DEPENDENCY$ failed
 Message       : Damaged rowid is AAAABnAABAAAO2GAB3 - description: No further damage description available
 Finding
 Finding Name  : Dictionary Inconsistency
 Finding ID    : 1110
 Type          : FAILURE
 Status        : OPEN
 Priority      : CRITICAL
 Message       : SQL dictionary health check: dependency$.dobj# fk 126 on object DEPENDENCY$ failed
 Message       : Damaged rowid is AAAABnAABAAAQtpABQ - description: No further damage description available
ADR base = "c:\oracle" adrci> SET HOMEPATH diag\rdbms\test11g\test11g adrci> show report hm_run my_run_30112009; <?xml version="1.0" encoding="US-ASCII"?> <HM-REPORT REPORT_ID="my_run_30112009"> <TITLE>HM Report: my_run_30112009</TITLE> <RUN_INFO>
<CHECK_NAME>Dictionary Integrity Check</CHECK_NAME>
<RUN_ID>1106</RUN_ID>
<RUN_NAME>my_run_30112009</RUN_NAME>
<RUN_MODE>MANUAL</RUN_MODE>
<RUN_STATUS>COMPLETED</RUN_STATUS>
<RUN_ERROR_NUM>0</RUN_ERROR_NUM>
<SOURCE_INCIDENT_ID>0</SOURCE_INCIDENT_ID>
<NUM_INCIDENTS_CREATED>0</NUM_INCIDENTS_CREATED>
<RUN_START_TIME>2009-11-30 13:00:29.428000 +01:00</RUN_START_TIME>
<RUN_END_TIME>2009-11-30 13:00:44.218000 +01:00</RUN_END_TIME>
</RUN_INFO>
<RUN_PARAMETERS>
<RUN_PARAMETER>TABLE_NAME=ALL_CORE_TABLES</RUN_PARAMETER>
<RUN_PARAMETER>CHECK_MASK=ALL</RUN_PARAMETER>
</RUN_PARAMETERS>
<RUN-FINDINGS>
<FINDING>
<FINDING_NAME>Dictionary Inconsistency</FINDING_NAME>
<FINDING_ID>1107</FINDING_ID>
<FINDING_TYPE>FAILURE</FINDING_TYPE>
<FINDING_STATUS>OPEN</FINDING_STATUS>
<FINDING_PRIORITY>CRITICAL</FINDING_PRIORITY>
<FINDING_CHILD_COUNT>0</FINDING_CHILD_COUNT>
<FINDING_CREATION_TIME>2009-11-30 13:00:42.851000 +01:00</FINDING_CREATION_TIME>
<FINDING_MESSAGE>SQL dictionary health check: dependency$.dobj# fk 126 on object DEPENDENCY$ failed</FINDING_MESSAGE>
<FINDING_MESSAGE>Damaged rowid is AAAABnAABAAAO2GAB3 - description: No further damage description available</FINDING_MESSAGE>
</FINDING>
<FINDING>
<FINDING_NAME>Dictionary Inconsistency</FINDING_NAME>
<FINDING_ID>1110</FINDING_ID>
<FINDING_TYPE>FAILURE</FINDING_TYPE>
<FINDING_STATUS>OPEN</FINDING_STATUS>
<FINDING_PRIORITY>CRITICAL</FINDING_PRIORITY>
<FINDING_CHILD_COUNT>0</FINDING_CHILD_COUNT>
<FINDING_CREATION_TIME>2009-11-30 13:00:42.928000 +01:00</FINDING_CREATION_TIME>
<FINDING_MESSAGE>SQL dictionary health check: dependency$.dobj# fk 126 on object DEPENDENCY$ failed</FINDING_MESSAGE>
<FINDING_MESSAGE>Damaged rowid is AAAABnAABAAAQtpABQ - description: No further damage description available</FINDING_MESSAGE>
</FINDING>
</RUN-FINDINGS>
</HM-REPORT>

adrci>
SELECT * FROM v$hm_findings;

FINDING_ID  RUN_ID  NAME                       PARENT_ID  CHILD
----------  ------  -------------------------  ---------  -----
        28      21  Missing Control File               0      0
       342     341  Dictionary Inconsistency           0      0
       345     341  Dictionary Inconsistency           0      0
       382     381  Dictionary Inconsistency           0      0
       385     381  Dictionary Inconsistency           0      0
      1027    1026  Missing Data Files                 0      1
      1030    1026  Missing datafile                1027      0
      1050    1046  Missing datafile                1027      0
      1070    1066  Missing datafile                1027      0
      1090    1086  Missing datafile                1027      0
      1107    1106  Dictionary Inconsistency           0      0
      1110    1106  Dictionary Inconsistency           0      0
Do you notice "my_run_30112009" in this screen? I ran that job using: BEGIN DBMS_HM.RUN_CHECK('Dictionary Integrity Check', 'my_run_30112009'); END; If you click "Details" while that "Checker" is selected, you can view it's findings.
Usually, on the database host itself, the Oracle Net listener (the listener) is running. It is a process that listens for client connection requests. It receives incoming client connection requests and manages the traffic of these requests to the database server.

Note: it is possible that a listener process is running on some other Host, instead of on the Database machine itself.
The default listener configuration file is called listener.ora, and it is located in the "network/admin" subdirectory of the Oracle home directory. Examples:

On Windows:  C:\oracle\product\11.1.0\db_1\NETWORK\ADMIN\listener.ora
On unix:     /opt/app/oracle/product/11.2.0/dbhome_1/network/admin/listener.ora
In unix/linux, the environment variable "$TNS_ADMIN" points to that location. All recent Oracle Instance versions will perform a "dynamic Service Registration" at the local listener. Service registration is performed by the process monitor (PMON) of each database instance. Dynamic service registration does not require modification of the listener.ora file. Which means: we do not have to place an entry in the listener.ora for that service.
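Note that PMON performs this registration at instance startup, and periodically thereafter. If you do not want to wait for the periodic registration, you can force an immediate registration with the standard command:

SQL> ALTER SYSTEM REGISTER;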
For older listener configurations, it was necessary to create an entry for each "service" (like a Database service) that a client could connect to, and to which the Listener had "to listen for". This is called 'static service configuration'. So, for example, the service "sales" could be placed in the listener.ora as shown in the example below, in order for the listener to know about it.

LISTENER=
  (DESCRIPTION=
    (ADDRESS_LIST=
      (ADDRESS=(PROTOCOL=tcp)(HOST=starboss)(PORT=1521))
      (ADDRESS=(PROTOCOL=ipc)(KEY=extproc))))

SID_LIST_LISTENER=
  (SID_LIST=
    (SID_DESC=
      (GLOBAL_DBNAME=dw.antapex.com)
      (ORACLE_HOME=/opt/app/oracle/product/11.1/db_1)
      (SID_NAME=dw))
    (SID_DESC=
      (GLOBAL_DBNAME=sales.antapex.com)
      (ORACLE_HOME=/opt/app/oracle/product/11.1/db_1)
      (SID_NAME=sales))
    (SID_DESC=
      (SID_NAME=plsextproc)
      (ORACLE_HOME=/oracle10g)
      (PROGRAM=extproc)))
With respect to the modern way of Instance registrations, the listener.ora does not need to contain more than protocol information, such as the port it is listening on, as shown below:
LISTENER_NAME=
  (DESCRIPTION=
    (ADDRESS=(PROTOCOL=tcp)(HOST=dbhost.example.com)(PORT=1521)))

If your listener is handling requests to multiple hosts, the configuration could be as in this example:

LISTENER_NAME=
  (DESCRIPTION=
    (ADDRESS_LIST=
      (ADDRESS=(PROTOCOL=tcp)(HOST=server1.example.com)(PORT=1521))
      (ADDRESS=(PROTOCOL=tcp)(HOST=server2.example.com)(PORT=1521))))

Some more on Service Registration:
Service registration enables the listener to determine whether a database service and its service handlers are available. A service handler is a dedicated server process or dispatcher that acts as a connection point to a database. During registration, the PMON process provides the listener with the instance name, database service names, and the type and addresses of service handlers. This information enables the listener to start a service handler when a client request arrives.

Configuring the listener:

1. By editing the listener.ora file.
2. Using the Network Configuration Assistant ("netca").
3. Using the EM.

The netca utility has a graphical user interface. You can start netca from the Unix or Windows prompt:

$ netca       # You need X on unix.
C:\> netca

Stopping and Starting the Listener, using the "lsnrctl" utility:

With the "lsnrctl" utility, which you can start from the prompt, you can manage your listener(s). Examples:

$ lsnrctl      -- call the utility from the unix shell prompt
C:\> lsnrctl   -- call the utility from the Windows cmd prompt
LSNRCTL>       -- the prompt of the listener control utility will show up
Since you might have configured multiple Listeners on your machine, most commands will need the specific "listener name", like for example LISTENER_HR, or LISTENER2 etc.. The first listener configured on your machine will usually be named just "listener".

LSNRCTL> start listener    -- starts the listener, if it was not running already.
LSNRCTL> stop listener     -- stops the listener process
LSNRCTL> status listener   -- shows the status information (like uptime etc..), and to what services it listens
LSNRCTL> reload listener   -- forces a read of the configuration file in order for new settings to become effective
arnold@"10.10.10.50/sales.antapex.com"
Of course, that is not very friendly, so (just like with tcpip) a "naming (resolution) method" is usually implemented. The three main implementations are:

- A client uses a "local naming method", that is, using a local "tnsnames.ora" file with all data needed to establish a database connection. That data is locally stored in the tnsnames.ora file.
- Clients are configured to use a Directory service.
- Clients use "Oracle Connection Manager", which can be viewed as a middle tier (layer). This method should actually be viewed as a sort of gateway and concentrator, a method to scale up the number of client connections.

(Note: in former versions, a central naming facility called "Oracle Names Server" could also be used.)
This section specifically addresses the local naming method, that is, a client uses a local configuration file called "tnsnames.ora", in order to locate remote services.
Oracle Connection Manager enables large numbers of users to connect to a single server by acting as a connection concentrator: it can funnel multiple client database sessions across a single network connection. This is done through multiplexing, a networking feature included with Oracle Net. Oracle Connection Manager reduces operating system resource usage by minimizing the number of network connections made to a server. This type of implementation has additional features like 'access control' and much more.
But you still need to configure clients to use the Connection Manager. To route clients to the database server through Oracle Connection Manager, configure either the tnsnames.ora file or the directory server with a connect descriptor that specifies protocol addresses of Oracle Connection Manager and the listener.
At a client, two files are central to this way of resolving remote services: "sqlnet.ora" and "tnsnames.ora". Both files can be found in the "ORACLE_HOME/network/admin" directory.

sqlnet.ora:
This file determines for the Oracle Network software, a number of basic configuration details, like the order and type of naming methods, the trace level on the client, and if encryption should be used. Example sqlnet.ora: NAMES.DIRECTORY_PATH= (LDAP, TNSNAMES) NAMES.DEFAULT_DOMAIN = WORLD TRACE_LEVEL_CLIENT = OFF SQLNET.EXPIRE_TIME = 30
In this example, sqlnet.ora specifies that for naming resolution (locating remote services), first a Directory service must be used, and if that does not work, the local "tnsnames.ora" file should be read. The sqlnet.ora file (among other things) enables you to:

- Specify the client domain to append to unqualified names
- Prioritize naming methods
- Enable logging and tracing features

tnsnames.ora:
This file is used to resolve remote services. It sort of "links" an "alias" to a full "connection descriptor". Example tnsnames.ora:

sales =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = server1)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = sales)
    )
  )

nicecar =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = starboss)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = DB1)
    )
  )
So, the identifier "nicecar" is just an (silly) example of an "alias". It is coupled to the full connection descriptor to reach the remote database DB1 on server starboss. The alias makes it possible to use client tools with a connection string as "username/password@alias", like in this sqlplus example: $ sqplus scott/tiger@nicecar
(Figure: the dedicated server architecture. Remote clients connect over the network to the listener on the database host. Each client is then served by its own dedicated server process, with its own PGA, attached to the instance (SGA).)
- Dedicated Server: A one-to-one ratio exists between the client processes and server processes. Even when the user is not actively making a database request, the dedicated server process remains, although it is inactive. It's a fact that in many situations such a dedicated server process is idle most of the time, thereby wasting resources.

- Shared Server: In a shared server architecture, a dispatcher directs multiple incoming network session requests to a pool of shared server processes, eliminating the need for a dedicated server process for each connection. An idle shared server process from the pool picks up a request from a common queue. For many situations, this is a more efficient configuration. The dispatcher processes enable client processes to share a limited number of server processes. You can create multiple dispatcher processes for a single database instance.

All figures illustrating dedicated server - or shared server - architectures are "somewhat" difficult in placing the role of the listener in the process. In both cases, the listener receives the connection request.

- In a dedicated server architecture, the listener will create a dedicated server process. The server process and client will then communicate directly.
- In a shared server architecture, PMON regularly "informs" the listener about the number of connections per Dispatcher. A client initially connects to the listener, which will then hand off the request to the least loaded Dispatcher. The communication is from then on between client and Dispatcher.

A dispatcher can support multiple client connections concurrently. Each client connection is bound to a virtual circuit. A virtual circuit is a piece of shared memory used by the dispatcher for client database connection requests and replies. The dispatcher places a virtual circuit on a common queue when a request arrives. An idle shared server picks up the virtual circuit from the common queue, services the request, and relinquishes the virtual circuit before attempting to retrieve another virtual circuit from the common queue.
SHARED_SERVERS: Specifies the initial number of shared servers to start, and the minimum number of shared servers to keep. This is the only required parameter for using shared servers.
MAX_SHARED_SERVERS: Specifies the maximum number of shared servers that can run simultaneously.

SHARED_SERVER_SESSIONS: Specifies the total number of shared server user sessions that can run simultaneously. Setting this parameter enables you to reserve user sessions for dedicated servers.

DISPATCHERS: Configures dispatcher processes in the shared server architecture.

MAX_DISPATCHERS: Specifies the maximum number of dispatcher processes that can run simultaneously. This parameter can be ignored for now. It will only become useful in a future release when the number of dispatchers is auto-tuned according to the number of concurrent connections.

CIRCUITS: Specifies the total number of virtual circuits that are available for inbound and outbound network sessions.
Shared server is enabled by setting the SHARED_SERVERS initialization parameter to a value greater than 0. The other shared server initialization parameters need not be set. Because shared server requires at least one dispatcher in order to work, a dispatcher is brought up even if no dispatcher has been configured. Shared server can be started dynamically by setting the SHARED_SERVERS parameter to a nonzero value with the ALTER SYSTEM statement, or SHARED_SERVERS can be included at database startup in the initialization parameter file. If SHARED_SERVERS is not included in the initialization parameter file, or is included but is set to 0, then shared server is not enabled at database startup.
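A minimal sketch of enabling shared server dynamically (the values used here, 5 shared servers and 2 TCP dispatchers, are just example choices):

SQL> ALTER SYSTEM SET SHARED_SERVERS = 5;
SQL> ALTER SYSTEM SET DISPATCHERS = '(PROTOCOL=TCP)(DISPATCHERS=2)';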
The SHARED_SERVERS initialization parameter specifies the minimum number of shared servers that you want created when the instance is started. After instance startup, Oracle Database can dynamically adjust the number of shared servers based on how busy existing shared servers are and the length of the request queue. In typical systems, the number of shared servers stabilizes at a ratio of one shared server for every ten connections. For OLTP applications, when the rate of requests is low, or when the ratio of server usage to request is low, the connections-to-servers ratio could be higher. In contrast, in applications where the rate of requests is high or the server usage-to-request ratio is high, the connections-to-server ratio could be lower. The PMON (process monitor) background process cannot terminate shared servers below the value specified by SHARED_SERVERS. Therefore, you can use this parameter to stabilize the load and minimize strain on the system by preventing PMON from terminating and then restarting shared servers because of coincidental fluctuations in load. Examples:
The DISPATCHERS parameter defines the number of dispatchers that should start when the instance is started. For example, if you want to configure 3 TCP/IP dispatchers and two IPC dispatchers, you set the parameter as follows:

DISPATCHERS="(PRO=TCP)(DIS=3)(PRO=IPC)(DIS=2)"

For example, if you have 500 concurrent TCP/IP connections, and you want each dispatcher to manage 50 concurrent connections, you need 10 dispatchers. You set your DISPATCHERS parameter as follows:

DISPATCHERS="(PRO=TCP)(DIS=10)"
Constraints enforce business rules in the database. Some types of constraints correspond to "physical" objects: like for example, a Unique Constraint on a column of a table (or multiple columns), that will correspond to a Unique (separate) Index object. A very important "system view" when dealing with constraints is "DBA_CONSTRAINTS". Let's see what the structure is of that view:

SQL> desc DBA_CONSTRAINTS

Name                             Null?     Type
-------------------------------  --------  ----------------
OWNER                                      VARCHAR2(30)
CONSTRAINT_NAME                            VARCHAR2(30)
CONSTRAINT_TYPE                            VARCHAR2(1)
TABLE_NAME                       NOT NULL  VARCHAR2(30)
SEARCH_CONDITION                           LONG
R_OWNER                                    VARCHAR2(30)
R_CONSTRAINT_NAME                          VARCHAR2(30)
DELETE_RULE                                VARCHAR2(9)
STATUS                                     VARCHAR2(8)
DEFERRABLE                                 VARCHAR2(14)
DEFERRED                                   VARCHAR2(9)
VALIDATED                                  VARCHAR2(13)
GENERATED                                  VARCHAR2(14)
BAD                                        VARCHAR2(3)
RELY                                       VARCHAR2(4)
LAST_CHANGE                                DATE
INDEX_OWNER                                VARCHAR2(30)
INDEX_NAME                                 VARCHAR2(30)
INVALID                                    VARCHAR2(7)
VIEW_RELATED                               VARCHAR2(14)
So, for example, we need to know what is understood by "DEFERRABLE", and "VALIDATED". We will explore those attributes by some examples. The most important constraints are:

NOT NULL     - a column must have a value, and cannot be null
UNIQUE       - all values in the columns must be unique
PRIMARY KEY  - all values in the columns must be unique, but it's also the PRIMARY KEY of the table
REFERENTIAL  - this FOREIGN KEY references (points to) a PRIMARY KEY in another table
CHECK        - business rule bound on a column
16.2 An Example:
Suppose Arnold logs on, and creates the following tables: create table LOCATIONS -- table of locations ( LOCID int NOT NULL, CITY varchar2(16), constraint pk_loc PRIMARY KEY (locid) USING INDEX TABLESPACE INDX ) TABLESPACE STAGING;
create table DEPARTMENTS -- table of departments ( DEPID int NOT NULL, DEPTNAME varchar2(16), LOCID int, constraint pk_dept PRIMARY KEY (depid) USING INDEX TABLESPACE INDX , constraint uk_dept UNIQUE (deptname) USING INDEX TABLESPACE INDX, constraint fk_dept_loc FOREIGN KEY (locid) references LOCATIONS(locid) ) TABLESPACE STAGING;
create table EMPLOYEES   -- table of employees
( EMPID int NOT NULL,
  EMPNAME varchar2(16),
  SALARY DECIMAL(7,2) CHECK (SALARY<5000),
  DEPID int NOT NULL,
  constraint pk_emp PRIMARY KEY (empid) USING INDEX TABLESPACE INDX,
  constraint fk_emp_dept FOREIGN KEY (depid) references DEPARTMENTS(depid)
) TABLESPACE STAGING;

In these examples, we see the following types of constraints:

NOT NULL     like COLUMN LOCID, which may not be null
PRIMARY KEY  like pk_loc on COLUMN locid
FOREIGN KEY  like fk_dept_loc, where DEPARTMENTS.LOCID references LOCATIONS.LOCID
CHECK        like on COLUMN salary, which may only contain values < 5000
>>> Let's try a few system queries:

SELECT c.constraint_type as TYPE,
       SUBSTR(c.table_name, 1, 20) as TABLE_NAME,
       SUBSTR(c.constraint_name, 1, 20) as CONSTRAINT_NAME,
       SUBSTR(c.r_constraint_name, 1, 20) as REF_KEY,
       SUBSTR(b.column_name, 1, 20) as COLUMN_NAME
FROM DBA_CONSTRAINTS c, DBA_CONS_COLUMNS b
WHERE c.constraint_name=b.constraint_name AND c.OWNER = 'ARNOLD';

TYPE  TABLE_NAME            CONSTRAINT_NAME       REF_KEY               COLUMN_NAME
----  --------------------  --------------------  --------------------  --------------------
P     LOCATIONS             PK_LOC                                      LOCID
C     LOCATIONS             SYS_C009615                                 LOCID
C     DEPARTMENTS           SYS_C009617                                 DEPID
P     DEPARTMENTS           PK_DEPT                                     DEPID
U     DEPARTMENTS           UK_DEPT                                     DEPTNAME
R     DEPARTMENTS           FK_DEPT_LOC           PK_LOC                LOCID
C     EMPLOYEES             SYS_C009621                                 EMPID
P     EMPLOYEES             PK_EMP                                      EMPID
C     EMPLOYEES             SYS_C009623                                 SALARY
C     EMPLOYEES             SYS_C009622                                 DEPID
R     EMPLOYEES             FK_EMP_DEPT           PK_DEPT               DEPID
P     DEPARTMENTS           PK_DEPT                                     DEPTNO
P     EMPLOYEES             PK_EMP                                      EMPNO
The type "C" are the NOT NULL constraints. Because we did not supplied a name, the system created a "system The type "P" are the Primary key's like "pk_dept". The type "R" (from reference) are the Foreign Key's like "fk_emp_dept". SELECT CONSTRAINT_TYPE, CONSTRAINT_NAME, OWNER, DEFERRABLE, DEFERRED, VALIDATED, STATUS FROM DBA_CONSTRAINTS WHERE OWNER='ARNOLD' CONSTRAINT_TYPE --------------R R C C C P C P U C CONSTRAINT_NAME -----------------------------FK_DEPT_LOC FK_EMP_DEPT SYS_C009621 SYS_C009622 SYS_C009623 PK_EMP SYS_C009617 PK_DEPT UK_DEPT SYS_C009615 OWNER -----------------------------ARNOLD ARNOLD ARNOLD ARNOLD ARNOLD ARNOLD ARNOLD ARNOLD ARNOLD ARNOLD DEFERRABLE -------------NOT DEFERRABLE NOT DEFERRABLE NOT DEFERRABLE NOT DEFERRABLE NOT DEFERRABLE NOT DEFERRABLE NOT DEFERRABLE NOT DEFERRABLE NOT DEFERRABLE NOT DEFERRABLE DEFERRED --------IMMEDIATE IMMEDIATE IMMEDIATE IMMEDIATE IMMEDIATE IMMEDIATE IMMEDIATE IMMEDIATE IMMEDIATE IMMEDIATE
VALI ---VALI VALI VALI VALI VALI VALI VALI VALI VALI VALI
PK_LOC
ARNOLD
Now let's find out which constraints correspond to INDEXES. It should be the Primary keys, and the Unique constraints.

SELECT con.owner as OWNER,
       con.constraint_type as TYPE,
       substr(ind.index_name, 1, 20) as INDEX_NAME,
       ind.INDEX_TYPE as INDEX_TYPE,
       substr(con.constraint_name, 1, 20) as CONSTRAINT_NAME,
       substr(ind.tablespace_name, 1, 20) as TABLESPACE_NAME
FROM DBA_CONSTRAINTS con, DBA_INDEXES ind
WHERE con.constraint_name=ind.index_name AND con.owner='ARNOLD';

OWNER   TYPE  INDEX_NAME  INDEX_TYPE  CONSTRAINT_NAME  TABLESPACE_NAME
------  ----  ----------  ----------  ---------------  ---------------
ARNOLD  P     PK_LOC      NORMAL      PK_LOC           INDX
ARNOLD  P     PK_DEPT     NORMAL      PK_DEPT          INDX
ARNOLD  U     UK_DEPT     NORMAL      UK_DEPT          INDX
ARNOLD  P     PK_EMP      NORMAL      PK_EMP           INDX
Here we see the three Primary keys as indexes, just as the one Unique constraint, which corresponds to an index too. The indexes have the same name as the constraints.
To disable and enable constraints, Arnold could issue statements like:

alter table DEPARTMENTS disable constraint FK_DEPT_LOC;
alter table EMPLOYEES disable constraint FK_EMP_DEPT;
alter table DEPARTMENTS disable constraint PK_DEPT;
alter table EMPLOYEES disable constraint PK_EMP;
alter table LOCATIONS disable constraint PK_LOC;

alter table DEPARTMENTS enable constraint PK_DEPT;
alter table EMPLOYEES enable constraint PK_EMP;
alter table LOCATIONS enable constraint PK_LOC;
alter table DEPARTMENTS enable constraint FK_DEPT_LOC;
alter table EMPLOYEES enable constraint FK_EMP_DEPT;

Or, alternatively, using the MODIFY CONSTRAINT clause:

alter table DEPARTMENTS modify constraint FK_DEPT_LOC disable;
alter table EMPLOYEES modify constraint FK_EMP_DEPT disable;
alter table DEPARTMENTS modify constraint PK_DEPT disable;
alter table EMPLOYEES modify constraint PK_EMP disable;
alter table LOCATIONS modify constraint PK_LOC disable;
alter table DEPARTMENTS modify constraint PK_DEPT enable;
alter table EMPLOYEES modify constraint PK_EMP enable;
alter table LOCATIONS modify constraint PK_LOC enable;
alter table DEPARTMENTS modify constraint FK_DEPT_LOC enable;
alter table EMPLOYEES modify constraint FK_EMP_DEPT enable;
- ENABLE/DISABLE .. VALIDATE/NOVALIDATE
The following additional clauses are possible:

- ENABLE VALIDATE    is the same as ENABLE. The constraint is checked and is guaranteed to hold for all rows.
                     This is true for existing and new rows.
- ENABLE NOVALIDATE  means the constraint is checked for new or modified rows, but existing data may violate
                     the constraint. Existing rows are not checked. New rows are checked.
- DISABLE NOVALIDATE is the same as DISABLE. The constraint is not checked so data may violate the constraint.
- DISABLE VALIDATE   means the constraint is not checked, but disallows any modification of the constrained columns.
Let's do an EXPERIMENT:
Now suppose Arnold does the following:

(1):

SQL> insert into locations
  2  values
  3  (4,'Amsterdam');
insert into locations
*
ERROR at line 1:
ORA-00001: unique constraint (ARNOLD.PK_LOC) violated
This is a correct response from Oracle, because the PK would be violated. There is already a record with LOCID=4.

(2):

SQL> alter table LOCATIONS disable constraint PK_LOC;
alter table LOCATIONS disable constraint PK_LOC
*
ERROR at line 1:
ORA-02297: cannot disable constraint (ARNOLD.PK_LOC) - dependencies exist

This is correct too. There is a table DEPARTMENTS with a FK that is currently pointing to the PK in LOCATIONS.

(3): So, Arnold does this:

SQL> alter table DEPARTMENTS disable constraint FK_DEPT_LOC;

Table altered.

SQL> alter table LOCATIONS disable constraint PK_LOC;

Table altered.

SQL> insert into locations
  2  values
  3  (4,'Amsterdam');

1 row created.
Of course. That works now, because the PK is switched off. What do you think happened to the INDEX "pk_loc"? Note that we first disabled the FK, and after that, the PK.

SQL> select * from LOCATIONS;

LOCID       CITY
----------  ----------------
1           New York
2           Amsterdam
3           Washington
4           Paris
4           Amsterdam
So we have a duplicate row with respect to LOCID.

(4): Suppose Arnold tries this:

alter table LOCATIONS enable constraint PK_LOC;

That should fail.

SQL> alter table LOCATIONS enable constraint PK_LOC;
alter table LOCATIONS enable constraint PK_LOC
*
ERROR at line 1:
ORA-02437: cannot validate (ARNOLD.PK_LOC) - primary key violated

We expected that. But, this time Arnold does this:

SQL> alter table LOCATIONS modify constraint PK_LOC enable novalidate;
alter table LOCATIONS modify constraint PK_LOC enable novalidate
*
ERROR at line 1:
ORA-02437: cannot validate (ARNOLD.PK_LOC) - primary key violated

Huh? The ENABLE NOVALIDATE does not work???? Yes, this may seem surprising, because the theory appears to say that existing data is not checked, while new rows are. But that we cannot create the constraint (or primary key) is true. Per default, Oracle will always try to create a unique index on the table, but in this case, there are duplicate values, so it does not work. If you did this example for yourself, you will see that it's really true.

Here we can conclude:
1. By default, Oracle will attempt to create a Unique Index to police a PK or UK constraint.
2. A NOVALIDATE constraint requires a Non-Unique Index for the constraint to really be Novalidated.

(5) Now Arnold does the following:

SQL> alter table departments drop constraint FK_DEPT_LOC;

Table altered.

SQL> alter table LOCATIONS drop constraint PK_LOC;

Table altered.

SQL> alter table LOCATIONS add constraint PK_LOC primary key (locid) deferrable enable novalidate;

Table altered.
You see that! While there is still a duplicate value (locid=4), we managed to create the PK_LOC constraint.
But the trick here is the following: the difference between a deferrable and a non-deferrable primary key constraint is that the non-deferrable one uses a Unique index, while the deferrable one uses a NON Unique index. That's why the statement succeeded, notwithstanding the fact that there was a duplicate LOCID.
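You can verify this from the dictionary: UNIQUENESS is a standard column of DBA_INDEXES. The output below is what you would expect to see in this scenario (not captured from a live system):

SQL> SELECT index_name, uniqueness FROM dba_indexes WHERE index_name='PK_LOC';

INDEX_NAME                     UNIQUENESS
------------------------------ ---------
PK_LOC                         NONUNIQUE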
(Figure: an example resource plan with three plan directives, allocating CPU as follows: Directive 1 = 75% CPU, Directive 2 = 15% CPU, Directive 3 = 10% CPU.)
Some DBA_ views and PLSQL Packages involved in the Resource Manager:

DBA_RSRC_PLAN_DIRECTIVES
DBA_RSRC_CONSUMER_GROUPS
DBA_RSRC_PLANS

The DBA_RSRC% series of views are used to monitor resource groups, and the DBMS_RESOURCE_MANAGER and DBMS_RESOURCE_MANAGER_PRIVS packages are used to maintain resource consumer groups and plans.

SELECT consumer_group, substr(category,1,20), substr(comments,1,60)
FROM dba_rsrc_consumer_groups;

CONSUMER_GROUP                  SUBSTR(CATEGORY,1,20  SUBSTR(COMMENTS,1,60)
------------------------------  --------------------  --------------------------------------------------------
ORA$AUTOTASK_URGENT_GROUP       MAINTENANCE           Consumer group for urgent maintenance tasks
BATCH_GROUP                     BATCH                 Consumer group for batch operations
ORA$DIAGNOSTICS                 MAINTENANCE           Consumer group for diagnostics
ORA$AUTOTASK_HEALTH_GROUP       MAINTENANCE           Consumer group for health checks
ORA$AUTOTASK_SQL_GROUP          MAINTENANCE           Consumer group for SQL tuning
ORA$AUTOTASK_SPACE_GROUP        MAINTENANCE           Consumer group for space management advisors
ORA$AUTOTASK_STATS_GROUP        MAINTENANCE           Consumer group for gathering optimizer statistics
ORA$AUTOTASK_MEDIUM_GROUP       MAINTENANCE           Consumer group for medium-priority maintenance tasks
INTERACTIVE_GROUP               INTERACTIVE           Consumer group for interactive, OLTP operations
OTHER_GROUPS                    OTHER                 Consumer group for users not included in any consumer g
DEFAULT_CONSUMER_GROUP          OTHER                 Consumer group for users not assigned to any consumer g
SYS_GROUP                       ADMINISTRATIVE        Consumer group for system administrators
LOW_GROUP                       OTHER                 Consumer group for low-priority sessions
AUTO_TASK_CONSUMER_GROUP        MAINTENANCE           System maintenance task consumer group
select plan_id, plan, active_sess_pool_mth, cpu_method from dba_rsrc_plans;

PLAN_ID  PLAN                            ACTIVE_SESS_POOL_MTH            CPU_METHOD
-------  ------------------------------  ------------------------------  -----------
  11184  MIXED_WORKLOAD_PLAN             ACTIVE_SESS_POOL_ABSOLUTE       EMPHASIS
  11185  ORA$AUTOTASK_SUB_PLAN           ACTIVE_SESS_POOL_ABSOLUTE       EMPHASIS
  11186  ORA$AUTOTASK_HIGH_SUB_PLAN      ACTIVE_SESS_POOL_ABSOLUTE       EMPHASIS
  11190  INTERNAL_PLAN                   ACTIVE_SESS_POOL_ABSOLUTE       EMPHASIS
  11188  DEFAULT_PLAN                    ACTIVE_SESS_POOL_ABSOLUTE       EMPHASIS
  11189  INTERNAL_QUIESCE                ACTIVE_SESS_POOL_ABSOLUTE       EMPHASIS
  11187  DEFAULT_MAINTENANCE_PLAN        ACTIVE_SESS_POOL_ABSOLUTE       EMPHASIS
For example, you can also view these "CONSUMER GROUPS" in the EM, on its Resource Manager pages.
The 11g subprocedures of the DBMS_RESOURCE_MANAGER package might have a few more parameters compared to the 10g and 9i versions. Besides that, the number of subprocedures in 11g has increased (offering more ways to manage resources). But, of course, "over 9i, 10g, 11g", the "general idea" is the same. Just to get an idea, here is a listing of the most important subprocedures of that package.

dbms_resource_manager.create_consumer_group(
   consumer_group  IN VARCHAR2,
   comment         IN VARCHAR2,
   cpu_mth         IN VARCHAR2 DEFAULT 'ROUND-ROBIN',
   category        IN VARCHAR2 DEFAULT 'OTHER');

dbms_resource_manager.create_plan(
   plan                       IN VARCHAR2,
   comment                    IN VARCHAR2,
   cpu_mth                    IN VARCHAR2 DEFAULT NULL,
   active_sess_pool_mth       IN VARCHAR2 DEFAULT 'ACTIVE_SESS_POOL_ABSOLUTE',
   parallel_degree_limit_mth  IN VARCHAR2 DEFAULT 'PARALLEL_DEGREE_LIMIT_ABSOLUTE',
   queueing_mth               IN VARCHAR2 DEFAULT 'FIFO_TIMEOUT',
   mgmt_mth                   IN VARCHAR2 DEFAULT 'EMPHASIS',
   sub_plan                   IN BOOLEAN  DEFAULT FALSE,
   max_iops                   IN NUMBER   DEFAULT NULL,
   max_mbps                   IN NUMBER   DEFAULT NULL);

dbms_resource_manager.clear_pending_area;   -- first clear the existing pending area, then create a new one
dbms_resource_manager.create_pending_area;  -- a "pending" area is a working area for defining plans and groups

dbms_resource_manager.create_plan_directive(
   plan                      IN VARCHAR2,
   group_or_subplan          IN VARCHAR2,
   comment                   IN VARCHAR2,
   cpu_p1                    IN NUMBER   DEFAULT NULL,  -- the multiple cpu 'levels' (cpu_p1..cpu_p8) provide a way
   cpu_p2                    IN NUMBER   DEFAULT NULL,  -- of explicitly specifying how primary
   cpu_p3                    IN NUMBER   DEFAULT NULL,  -- and leftover resources are to be used.
   cpu_p4                    IN NUMBER   DEFAULT NULL,
   cpu_p5                    IN NUMBER   DEFAULT NULL,
   cpu_p6                    IN NUMBER   DEFAULT NULL,
   cpu_p7                    IN NUMBER   DEFAULT NULL,
   cpu_p8                    IN NUMBER   DEFAULT NULL,
   active_sess_pool_p1       IN NUMBER   DEFAULT NULL,
   queueing_p1               IN NUMBER   DEFAULT NULL,
   parallel_degree_limit_p1  IN NUMBER   DEFAULT NULL,
   switch_group              IN VARCHAR2 DEFAULT NULL,
   switch_time               IN NUMBER   DEFAULT NULL,
   switch_estimate           IN BOOLEAN  DEFAULT FALSE,
   max_est_exec_time         IN NUMBER   DEFAULT NULL,
   undo_pool                 IN NUMBER   DEFAULT NULL,
   max_idle_time             IN NUMBER   DEFAULT NULL,
   max_idle_blocker_time     IN NUMBER   DEFAULT NULL,
   switch_time_in_call       IN NUMBER   DEFAULT NULL,
   mgmt_p1                   IN NUMBER   DEFAULT NULL,
   mgmt_p2                   IN NUMBER   DEFAULT NULL,
   mgmt_p3                   IN NUMBER   DEFAULT NULL,
   mgmt_p4                   IN NUMBER   DEFAULT NULL,
   mgmt_p5                   IN NUMBER   DEFAULT NULL,
   mgmt_p6                   IN NUMBER   DEFAULT NULL,
   mgmt_p7                   IN NUMBER   DEFAULT NULL,
   mgmt_p8                   IN NUMBER   DEFAULT NULL,
   switch_io_megabytes       IN NUMBER   DEFAULT NULL,
   switch_io_reqs            IN NUMBER   DEFAULT NULL,
   switch_for_call           IN BOOLEAN  DEFAULT NULL);
17.2 An Example:
As shown in the beginning of section 17.1, we will create a plan called "DAYPLAN", and the CONSUMER groups "OLTP_CG" and "REPORTING_CG" (next to the mandatory OTHER_GROUPS). The plan enables us to specify how CPU resources are to be allocated among the consumer groups and subplans. So, here OLTP_CG gets 75% and REPORTING_CG 15%.
You need to do all statements in one session, that is, not one part "today" and the next "tomorrow", because after a restart of the instance, your "Pending Area" is "gone". As user SYS we will do the following:

-- Create two example users, whose sessions will be in one of the RESOURCE GROUPS.

CREATE USER oltp_user IDENTIFIED BY secret
DEFAULT TABLESPACE users TEMPORARY TABLESPACE temp;

GRANT CONNECT TO oltp_user;

CREATE USER report_user IDENTIFIED BY secret
DEFAULT TABLESPACE users TEMPORARY TABLESPACE temp;

GRANT CONNECT TO report_user;

-- Initialize a new working area:

exec Dbms_Resource_Manager.Clear_Pending_Area();
exec Dbms_Resource_Manager.Create_Pending_Area();

-- Now create our DAYPLAN plan:

BEGIN
  Dbms_Resource_Manager.Create_Plan(
    plan    => 'DAYPLAN',
    comment => 'Plan for combining oltp and reporting workers.');
END;
/

-- Next, create the OLTP_CG and REPORTING_CG resource CONSUMER groups (OTHER_GROUPS already exists):

BEGIN
  Dbms_Resource_Manager.Create_Consumer_Group(
    consumer_group => 'OLTP_CG',
    comment        => 'OLTP processing - high priority');
END;
/

BEGIN
  Dbms_Resource_Manager.Create_Consumer_Group(
    consumer_group => 'REPORTING_CG',
    comment        => 'Reporting users - low priority');
END;
/

-- Next, we need the Plan Directives:

BEGIN
  Dbms_Resource_Manager.Create_Plan_Directive (
    plan => 'DAYPLAN',
group_or_subplan => 'OLTP_CG', comment => 'High Priority', cpu_p1 => 75, cpu_p2 => 10, parallel_degree_limit_p1 => 4); END; / BEGIN Dbms_Resource_Manager.Create_Plan_Directive ( plan => 'DAYPLAN', group_or_subplan => 'REPORTING_CG', comment => 'Low Priority', cpu_p1 => 15, cpu_p2 => 50, parallel_degree_limit_p1 => 4); END; / BEGIN Dbms_Resource_Manager.Create_Plan_Directive ( plan => 'DAYPLAN', group_or_subplan => 'OTHER_GROUPS', comment => 'Low Priority', cpu_p1 => 10, cpu_p2 => 50, parallel_degree_limit_p1 => 4); END; / -- Next, validate and submit the working area: exec Dbms_Resource_Manager.Validate_Pending_Area; exec Dbms_Resource_Manager.Submit_Pending_Area();
SQL> exec Dbms_Resource_Manager.Validate_Pending_Area; BEGIN Dbms_Resource_Manager.Validate_Pending_Area; END; * ERROR at line 1: ORA-29382: validation of pending area failed ORA-29375: sum of values 110 for level 2, plan DAYPLAN exceeds 100 ORA-06512: at "SYS.DBMS_RMIN", line 434 ORA-06512: at "SYS.DBMS_RESOURCE_MANAGER", line 696 ORA-06512: at line 1
SQL> exec Dbms_Resource_Manager.Submit_Pending_Area(); BEGIN Dbms_Resource_Manager.Submit_Pending_Area(); END; * ERROR at line 1: ORA-29382: validation of pending area failed ORA-29375: sum of values 110 for level 2, plan DAYPLAN exceeds 100 ORA-06512: at "SYS.DBMS_RMIN", line 443 ORA-06512: at "SYS.DBMS_RESOURCE_MANAGER", line 703 ORA-06512: at line 1
Ok, there are errors here. Look at this error: "ORA-29375: sum of values 110 for level 2, plan DAYPLAN exceeds 100". Of course, we cannot go over a total of 100%, so that's why the error comes up. Now correct the error yourself, and submit the statements again (one possible correction is sketched below). If you are done, validate and submit the working area again:

exec Dbms_Resource_Manager.Validate_Pending_Area;
exec Dbms_Resource_Manager.Submit_Pending_Area();
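One possible correction, to be done before the final validate and submit (just one of several valid choices: any distribution where the cpu_p2 values sum to at most 100 will validate), is to lower the cpu_p2 value of REPORTING_CG from 50 to 40, using the standard UPDATE_PLAN_DIRECTIVE subprocedure:

BEGIN
  Dbms_Resource_Manager.Update_Plan_Directive(
    plan             => 'DAYPLAN',
    group_or_subplan => 'REPORTING_CG',
    new_cpu_p2       => 40);
END;
/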
We are done. We have a PLAN, we have two custom created CONSUMER GROUPS, and DIRECTIVES. As the last step, you can assign users to the consumer groups, like this: BEGIN Dbms_Resource_Manager_Privs.Grant_Switch_Consumer_Group( grantee_name => 'oltp_user', consumer_group => 'OLTP_CG', grant_option => FALSE); Dbms_Resource_Manager_Privs.Grant_Switch_Consumer_Group( grantee_name => 'report_user', consumer_group => 'REPORTING_CG', grant_option => FALSE); Dbms_Resource_Manager.Set_Initial_Consumer_Group('oltp_user', 'OLTP_CG'); Dbms_Resource_Manager.Set_Initial_Consumer_Group('report_user', 'REPORTING_CG'); END; /
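Note that creating and submitting the plan does not activate it. To actually make DAYPLAN the active plan, you set the RESOURCE_MANAGER_PLAN parameter (standard syntax):

SQL> ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = 'DAYPLAN';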
For flashback operations, the UNDO tablespace should be large enough, and the UNDO_RETENTION should not be too small. After a transaction is committed, undo data is no longer needed for rollback or transaction recovery purposes. However, for consistent read purposes, long-running queries may require this old undo information for producing older images of data blocks. Furthermore, the success of several Oracle Flashback features can also depend upon the availability of older undo information. Old undo information with an age that is less than the current undo retention period is said to be unexpired and is retained for consistent read and Oracle Flashback operations.

Important SPFILE/INIT.ORA parameters (see also Chapter 4.):

UNDO:
UNDO_MANAGEMENT  = AUTO, thus using automatic undo mode, instead of the older manual Rollback segments
UNDO_TABLESPACE  = <tablespace_name>, should be of a large "enough" size
UNDO_RETENTION   = in seconds, should be large "enough"

FLASH RECOVERY AREA:
DB_RECOVERY_FILE_DEST          = directory / filesystem, or ASM Diskgroup
DB_RECOVERY_FILE_DEST_SIZE     = specifies the size, and should be large "enough"
DB_FLASHBACK_RETENTION_TARGET  = specifies in minutes how far back you can "flashback" the database.
                                 How far back one can actually flashback the database, depends on how much
                                 flashback data Oracle has kept in the recovery area.
Use the FLASHBACK TABLE statement to restore an earlier state of a table in the event of a human or application error. The time in the past to which the table can be flashed back is dependent on the amount of undo data in the system. Also, Oracle Database cannot restore a table to an earlier state across any DDL operations that change the structure of the table. Also, ENABLE ROW MOVEMENT should have been set on the table. And, the UNDO tablespace must have sufficient "historical" information to make a flashback possible.
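So, if row movement was not enabled yet on the table, you would first issue (standard syntax):

SQL> ALTER TABLE employees ENABLE ROW MOVEMENT;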
Example 1: Restore the table employees to its state 1 minute prior to the current system time:

FLASHBACK TABLE employees TO TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' minute);

Example 2: Restore the table employees to a former SCN:

FLASHBACK TABLE employees TO SCN 715340;

Example 3: Restore a dropped table:

FLASHBACK TABLE employees TO BEFORE DROP;
FLASHBACK TABLE employees TO BEFORE DROP RENAME TO employees_old;
Flashback Query allows the contents of a table to be queried with reference to a certain earlier point in time, or an earlier SCN, using the "AS OF" clause.

Example 1: AS OF TIMESTAMP:
SELECT EMP_ID, EMP_NAME FROM employees
AS OF TIMESTAMP TO_TIMESTAMP('2009-11-08 12:34:12', 'YYYY-MM-DD HH24:MI:SS');

Example 2: AS OF SCN:

SELECT EMP_ID, EMP_NAME FROM employees AS OF SCN 1186349;
FLASHBACK DATABASE TO TIMESTAMP SYSDATE-1; FLASHBACK DATABASE TO SCN <scn_number>; ALTER DATABASE OPEN RESETLOGS;
The timespan you can use for flashback database is determined by the DB_FLASHBACK_RETENTION_TARGET parameter. The maximum flashback can be determined by querying the V$FLASHBACK_DATABASE_LOG view.
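For example (OLDEST_FLASHBACK_SCN and OLDEST_FLASHBACK_TIME are standard columns of that view):

SQL> SELECT oldest_flashback_scn, oldest_flashback_time FROM v$flashback_database_log;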
The most dramatic difference between 10g and 11g, with respect to FLASHBACK, is 11g's FLASHBACK DATA ARCHIVE. Here, if you want, and have the resources, you can track historic data of tables (that are marked) for a period you specify, using (possibly) large space for storing that historic data. Here is the general approach:

- Create a locally managed tablespace of a size that you think will suffice. Since you can specify a retention period for the FLASHBACK DATA ARCHIVE, the period chosen is of course paramount in the sizing. There is a huge difference between a week, month, or a year. Also, choose a disksubsystem with sufficient redundancy, if the FDA is going to play an important role.

- Create the FLASHBACK DATA ARCHIVE, similar to these examples:

CREATE FLASHBACK ARCHIVE fda1 TABLESPACE tbs1 RETENTION 1 YEAR;
CREATE FLASHBACK ARCHIVE fda1 TABLESPACE tbs1 RETENTION 1 MONTH;

- Mark the tables for which you want to preserve all history, like so:

ALTER TABLE <tablename> FLASHBACK ARCHIVE fda1;

In the same way, enable all the tables for which you want to track the historic records. From then on, you can use the "flashback query" feature, like shown in example 2 of section 18.1.

Some interesting system views are:

DBA_FLASHBACK_ARCHIVE
DBA_FLASHBACK_ARCHIVE_TABLES
DBA_FLASHBACK_ARCHIVE_TS
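For example, to see the defined archives and their retention (RETENTION_IN_DAYS is a standard column of DBA_FLASHBACK_ARCHIVE):

SQL> SELECT flashback_archive_name, retention_in_days FROM dba_flashback_archive;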
Ok, that's it. Hope you had some use of this file. Good luck on the exam!
instance. The following is a short list of those processes. of all (active and inactive) background processes.
dified blocks from the database buffer cache to the files on a disk. mum of 20 database writer processes.
redo log entries to a disk. Redo log entries are generated in the redo log buffer A) and the log writer process writes the redo log entries sequentially into an online redo log file.
ed database buffers in the SGA are written to the datafiles by a database writer process (DBWn). oint. The checkpoint process signals DBWn Processes. The Log writer (LGWR) or Checkpoint process (CHKPT) and the datafiles to indicate when the lastcheckpoint occurred (SCN) t process writes checkpoint information to control files and data file headers. nstance recovery when a failed instance is restarted.
a recovery when a user process fails. It cleans up the cache and frees resources (among others "locks")
edo log files to archival storage when the log files are full or a log switch occurs. ve log mode to run archive processes.
s management-related background tasks, for example: ven metric violates its threshold value additional processes for SQL objects that have been recently modified he Automatic Workload Repository (AWR). wakes up periodically and checks the job log. If a job is due, it spawns Jnnn processes to handle jobs.
s that performs rebalancing of disk resources controlled by ASM. ment=special storage stucture; a separate ASM Instance is involved. It's optional) nt instance contains two main background processes. One coordinates rebalance activity for disk groups. performs the actual rebalance data extent movements. There can be many of these at a time, 1, and so forth. An Automatic Storage Management instance also has most of the same background processes e (SMON, PMON, LGWR, and so on).
for an Oracle instance. VKTM publishes two sets of time: a wall clock time (just like a real clock) a higher resolution time. rites old row-versions of tables with 'flashback archive' enabled into flashback data archives on commit.
IAG) runs oradebug commands and triggers diagnostic dumps as part of the new ADR ory) feature, which is a replacement (and major enhancement) for the much-reviled RDA. Agent) is a utility that can be downloaded from Metalink to collect diagnostics info.
tor (SMC) and slaves (Wnnn) perform space allocation and reclamation.
process takes care of setting up resource manager related tasks and resource plans.
are threads. You must use a "process viewer" to see the threads.
WHERE BACKGROUND=1;
B ------------------------------------ 1
1 1 1
PROGRAM ------------------------------ -----------------------------------------------oracle@oesv2029.antapex.org (RBAL) oracle@oesv2029.antapex.org (ASMB) oracle@oesv2029.antapex.org (LCK0) oracle@oesv2029.antapex.org (MMNL) oracle@oesv2029.antapex.org (MMON) oracle@oesv2029.antapex.org (CJQ0) oracle@oesv2029.antapex.org (RECO)
kground processes. field you can select from those views, like:
files. The instance consists of a shared memory area, cesses. An instance can exist independently of database files.
. These files can exist independently of a database instance. re just files), still exists on the filesystems.
Large Pool
- free space for SGA
- data area, for example, used for rman backup/restore operations
- receive/response queues in shared server architecture
- UGA's in shared server architecture
Reserved Pool
Java Pool
The Java pool holds all session-specific Java code and data within the JVM.
(Diagram: instance architecture. Clients connect either to dedicated server processes, or via dispatchers (Dnnn) in a shared server setup; the background processes ARCn, LGWR, CKPT and DBWn service the instance, where the Database Writers (DBWn) read and write the database blocks.)
With a dedicated server connection, private SQL areas are located in the server process's PGA. With a shared server connection, the session-specific part of the private SQL area is kept in the SGA.
Chapter 4. SPFILE.ORA and INIT.ORA startup parameters.
DB_CREATE_FILE_DEST ['path_to_directory'] specifies the default location for Oracle-managed datafiles. This location is also used as the default location for Oracle-managed control files and online redo logs if none of the DB_CREATE_ONLINE_LOG_DEST_n initialization parameters are specified. DB_CREATE_ONLINE_LOG_DEST_n ['Path'] (where n = 1, 2, 3, ... 5) specifies the default location for Oracle-managed control files and online redo logs. If more than one DB_CREATE_ONLINE_LOG_DEST_n parameter is specified, then the control file or online redo log is multiplexed across the locations of the other DB_CREATE_ONLINE_LOG_DEST_n parameters. One online redo log is created in each location, and one control file is created in each location.
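A minimal sketch of setting these OMF destinations (the paths are illustrative):

ALTER SYSTEM SET DB_CREATE_FILE_DEST='/u01/app/oracle/oradata' SCOPE=BOTH;
ALTER SYSTEM SET DB_CREATE_ONLINE_LOG_DEST_1='/u02/app/oracle/oradata' SCOPE=BOTH;
ALTER SYSTEM SET DB_CREATE_ONLINE_LOG_DEST_2='/u03/app/oracle/oradata' SCOPE=BOTH;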
DB_RECOVERY_FILE_DEST ['directory, filesystem, or ASM disk group'] specifies the default location for the flash recovery area. The flash recovery area contains multiplexed copies of control files, online redo logs, archived redo logs, flashback logs, and RMAN backups. Specifying DB_RECOVERY_FILE_DEST without also specifying the DB_RECOVERY_FILE_DEST_SIZE initialization parameter is not allowed. DB_RECOVERY_FILE_DEST_SIZE specifies (in bytes) the hard limit on the total space to be used by the files created in the flash recovery area.
The flash recovery area is a location in which Oracle Database can store and manage files related to backup and recovery. It is distinct from the database area.
You cannot use the old LOG_ARCHIVE_DEST and LOG_ARCHIVE_DUPLEX_DEST parameters if you have set values for the flash recovery area; you must disable those parameters before enabling the flash recovery area. You can instead set values for the LOG_ARCHIVE_DEST_n parameters. If you do not set values for local LOG_ARCHIVE_DEST_n parameters, then enabling the flash recovery area will implicitly set LOG_ARCHIVE_DEST_10 to the flash recovery area.
With flashback logging enabled, you can "flashback" the database. How far back you can flashback the database depends on how much flashback data Oracle has kept in the recovery area.
In 11g the diagnostic file system structures have changed, and we see that the old dump destination parameters (background_dump_dest, core_dump_dest, user_dump_dest) are replaced by a single "diagnostic_dest" parameter.
As of Oracle Database 11g Release 1, the diagnostics for each database instance are located in a dedicated directory, which can be specified through the DIAGNOSTIC_DEST initialization parameter. The structure of the directory is:
<diagnostic_dest>/diag/rdbms/<dbname>/<instname> -- this is the Automatic Diagnostic Repository (ADR) Home. For example, if the database name is proddb and the instance name is proddb1, the ADR home directory would be <diagnostic_dest>/diag/rdbms/proddb/proddb1.
If DIAGNOSTIC_DEST is placed to "C:\ORACLE", you would find the new style XML alert.log "log.xml" in, for example, "C:\ORACLE\diag\rdbms\test11g\test11g\alert\log.xml".
DB_DOMAIN, which is optional, indicates the domain (logical location) within a network structure. The combination of the settings for these two parameters (DB_NAME and DB_DOMAIN) must be unique within a network. For a database with a given global database name, you set both parameters in the new parameter file accordingly.
The DB_BLOCK_SIZE initialization parameter specifies the standard block size for the database. This block size is used for the SYSTEM tablespace and by default in other tablespaces. Oracle Database can support up to four additional nonstandard block sizes.
Tablespaces of nonstandard block sizes can be created using the CREATE TABLESPACE statement and specifying the BLOCKSIZE clause. These nonstandard block sizes can have any of the following power-of-two values: 2K, 4K, 8K, 16K or 32K. To use nonstandard block sizes, you must configure subcaches within the buffer cache for all of the nonstandard block sizes that you intend to use.
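A minimal sketch of using a nonstandard block size (the cache size, tablespace name and path are illustrative):

ALTER SYSTEM SET DB_16K_CACHE_SIZE=64M;
CREATE TABLESPACE ts_16k
  DATAFILE '/u01/oradata/ts_16k_01.dbf' SIZE 100M
  BLOCKSIZE 16K;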
The PROCESSES initialization parameter determines the maximum number of operating system user processes that can be connected to Oracle Database concurrently. The value of this parameter must be a minimum of one for each background process plus one for each user process. The number of background processes will vary according to the database features that you are using. For example, if you are using the Advanced Queuing feature, you will have additional background processes. If you use Automatic Storage Management, then add three additional processes.
SESSIONS specifies the maximum number of sessions that can be created in the system. Because every login requires a session, this parameter effectively determines the maximum number of concurrent users in the system. You should always set this parameter explicitly to a value equivalent to your estimate of the maximum number of concurrent users, plus the number of background processes, plus some extra for recursive sessions.
As of Oracle Database 11g Release 1, database passwords are case sensitive. (You can disable case sensitivity and return to the pre-11g behavior by setting the SEC_CASE_SENSITIVE_LOGON parameter to FALSE.)
REMOTE_LOGIN_PASSWORDFILE (values: none | exclusive | shared): none means password file authentication is disabled, so only OS authentication remains; shared is used for RAC, where multiple databases can use the password file. If the database uses a password file and you have been granted the SYSDBA or SYSOPER privilege, then you can connect and be authenticated by a password file.
Secure Sockets Layer (SSL) and Kerberos strong authentication methods can also be used to authenticate users who have the SYSDBA and SYSOPER privileges.
The parameter is still there for compatibility reasons. With it disabled, remote users will be unable to connect without a password, and OS authentication will only be in effect from the local host.
MEMORY_TARGET specifies the Oracle system-wide usable memory. The database tunes memory to the MEMORY_TARGET value, redistributing between SGA and PGA as needed. This makes setting SGA_TARGET and PGA_AGGREGATE_TARGET unnecessary in 11g. But those parameters are not obsolete. MEMORY_MAX_TARGET sets an upper limit to what MEMORY_TARGET can get. If you use MEMORY_TARGET / MEMORY_MAX_TARGET, you should set SGA_TARGET=0 and PGA_AGGREGATE_TARGET=0, or do not set them at all.
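A minimal sketch of switching to Automatic Memory Management (the values are illustrative; MEMORY_MAX_TARGET is static, so a restart is needed):

ALTER SYSTEM SET MEMORY_MAX_TARGET=2G SCOPE=SPFILE;
ALTER SYSTEM SET MEMORY_TARGET=1500M SCOPE=SPFILE;
ALTER SYSTEM SET SGA_TARGET=0 SCOPE=SPFILE;
ALTER SYSTEM SET PGA_AGGREGATE_TARGET=0 SCOPE=SPFILE;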
Automatic Shared Memory Management - ASMM (the 10g way of Memory Management): SGA_TARGET specifies the total size of all SGA components. If SGA_TARGET is specified, the following memory pools are automatically sized: the buffer cache, the shared pool, the large pool, the java pool, and the streams pool.
If these automatically tuned memory pools are set to nonzero values, then those values are used as minimum levels by Automatic Shared Memory Management. You would set minimum values if an application component needs a minimum amount of memory to function properly. The following pools are manually sized components and are not affected by Automatic Shared Memory Management: the log buffer, the KEEP and RECYCLE buffer caches, the nonstandard block size buffer caches, and the fixed SGA.
The SGA_MAX_SIZE initialization parameter specifies the maximum size of the System Global Area for the lifetime of the instance. You can dynamically alter the initialization parameters affecting the size of the buffer caches, shared pool, large pool, Java pool, and streams pool, but only to the extent that the sum of these sizes and the sizes of the other components of the SGA (fixed SGA, variable SGA, and redo log buffers) does not exceed SGA_MAX_SIZE.
DB_CACHE_SIZE specifies the size of the default buffer pool (for buffers with the primary blocksize). Specify in bytes, K, M, G. If SGA_TARGET is not set, the default is 48 MB or 4 MB * number of CPUs, whichever is greater.
PGA_AGGREGATE_TARGET specifies the target aggregate PGA memory available to all server processes attached to the instance. That is, the total memory assigned to the server processes (working on behalf of the clients).
Setting PGA_AGGREGATE_TARGET to 0 automatically sets the WORKAREA_SIZE_POLICY parameter to MANUAL. This means that SQL working areas are sized using the old style *_AREA_SIZE parameters, like the SORT_AREA_SIZE parameter.
Oracle does not recommend using the SORT_AREA_SIZE parameter unless the instance is configured with the shared server option. Oracle recommends that you enable automatic sizing of SQL working areas by setting PGA_AGGREGATE_TARGET instead. SORT_AREA_SIZE is retained for backward compatibility.
MEMORY_TARGET should be set higher than "SGA_TARGET", because MEMORY_TARGET will assign memory to the SGA and the total Instance PGA. Setting this parameter means full Automatic Memory Management.
UNDO_TABLESPACE specifies the automatic undo tablespace to be used by the instance.
UNDO_MANAGEMENT: AUTO sets automatic undo management. If MANUAL, sets manual undo management mode. The default is AUTO.
UNDO_RETENTION: after a transaction is committed, undo data is no longer needed for rollback or transaction recovery purposes. However, for consistent read purposes, long-running queries may require this old undo information for producing older images of data blocks. Furthermore, the success of several Oracle Flashback features also depends on the availability of older undo information. Therefore, undo that is younger than the current undo retention period is retained for consistent read and Oracle Flashback operations.
By default, ongoing DML has priority over retaining committed undo data, which thus can be overwritten. If the retention threshold must be guaranteed, even at the expense of DML operations, the RETENTION GUARANTEE clause can be set against the undo tablespace during or after creation:
ALTER TABLESPACE undotbs1 RETENTION GUARANTEE;    -- (tablespace name is just an example)
ALTER TABLESPACE undotbs1 RETENTION NOGUARANTEE;
The STATISTICS_LEVEL parameter specifies the level of collection for database and operating system statistics. Oracle Database collects these statistics for a variety of purposes, including making self-management decisions.
The default setting of TYPICAL ensures collection of all major statistics required for database self-management and provides best overall performance.
When the STATISTICS_LEVEL parameter is set to ALL, additional statistics are added to the set of statistics collected with the TYPICAL setting. The additional statistics are timed OS statistics and plan execution statistics.
Setting the STATISTICS_LEVEL parameter to BASIC disables the collection of many of the important statistics required by Oracle Database features and functionality.
The CONTROL_MANAGEMENT_PACK_ACCESS parameter specifies which of the Server Manageability Packs should be active. The DIAGNOSTIC pack includes AWR, ADDM, and so on. The TUNING pack includes the SQL Tuning Advisor, SQLAccess Advisor, and so on.
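For example (a sketch; the listed values are the documented ones for 11g):

ALTER SYSTEM SET CONTROL_MANAGEMENT_PACK_ACCESS='DIAGNOSTIC+TUNING';
-- possible values: 'NONE', 'DIAGNOSTIC', 'DIAGNOSTIC+TUNING'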
CONTROL_FILES = ("C:\oracle\oradata\test10g\control01.ctl", "C:\oracle\oradata\test10g\control02.ctl", "C:\oracle\oradata\test10g\control03.ctl")
ALTER SYSTEM SET control_files='c:\oradata\test11g\control01.ctl','c:\oradata\test11g\control02.ctl','c:\oradata\test11g\control03.ctl' SCOPE=SPFILE;
For example, suppose you start an instance with an SPFILE or parameter file containing both a database-wide and an instance-specific (SID-prefixed) entry for a parameter, like the following entries for OPEN_CURSORS. For the instance with sid prod1, the OPEN_CURSORS parameter remains set to 1000 even though it has a database-wide setting as well: an individual setting in the parameter file for an instance prevents overriding by database-wide parameter settings for instance prod1. These two types of settings can be combined.
An ALTER SYSTEM SET ... SID='*' statement changes the setting on all instances except the instance with sid prod1 (which keeps its own entry).
After you reset the instance-specific entry, the instance with sid prod1 also assumes the new setting of 2000.
Chapter 5. Startup and shutdown of an Instance.
There are several places from where the SYSDBA can start and stop an instance: sqlplus, or graphically via Enterprise Manager; here we will not further discuss the EM.
You can use OS authentication to log on as a SYSDBA. Or, with password authentication, if the username is listed in the PASSWORD FILE, the user can log on as a SYSDBA.
Pay attention to the "startup force" command, which basically does a "shutdown abort" of the database, and then starts it again.
If you know that media recovery is required, you can start an instance, mount a database to the instance, and have the recovery process automatically start by using the STARTUP command with the RECOVER clause (STARTUP OPEN RECOVER).
SHUTDOWN NORMAL proceeds with the following conditions: no new connections are allowed after the statement is issued, and the database waits for all currently connected users to disconnect before shutting down.
SHUTDOWN IMMEDIATE: uncommitted transactions are rolled back. (If long uncommitted transactions exist, the shutdown might not complete quickly, despite its name.) Oracle does not wait for users currently connected to the database to disconnect: it rolls back active transactions and disconnects the users.
To perform a shutdown of an instance while allowing active transactions to complete first, use the SHUTDOWN command with the TRANSACTIONAL clause. It proceeds with the following conditions: no new connections are allowed, nor are new transactions allowed to be started; after all active transactions have completed, the database shuts down.
After ALTER SYSTEM ENABLE RESTRICTED SESSION, only users with the RESTRICTED SESSION privilege are able to log on. ALTER SYSTEM DISABLE RESTRICTED SESSION returns the database to normal operation.
In a quiesced state, only DBA transactions, queries, fetches, or PL/SQL statements are allowed; this is called a "quiesced state", and non-DBA sessions are prevented to get active. ALTER SYSTEM UNQUIESCE returns the database to normal operation.
ALTER SYSTEM SUSPEND halts all IO operations to datafiles and control files and headers. While the database is suspended, all preexisting IO operations are allowed to complete and any new database accesses are placed in a queued state. ALTER SYSTEM RESUME returns the database to normal operation.
Oracle Database includes a built-in repository, the Automatic Workload Repository (AWR), which holds performance statistics about the database and other relevant information.
ADDM analyzes data captured in the Automatic Workload Repository (AWR) to determine possible performance problems. It locates the root causes of the performance problems, provides recommendations for correcting them, and quantifies the expected benefits.
AWR statistics gathering is controlled by the STATISTICS_LEVEL initialization parameter: set it to TYPICAL or ALL to enable statistics gathering by AWR. Setting the parameter to BASIC disables many Oracle Database features, including AWR.
The ADDM and advisor results can be queried from these views in the dictionary: DBA_ADDM_FDG_BREAKDOWN DBA_ADDM_FINDINGS DBA_ADDM_INSTANCES DBA_ADDM_SYSTEM_DIRECTIVES DBA_ADDM_TASKS DBA_ADDM_TASK_DIRECTIVES DBA_ADVISOR_ACTIONS DBA_ADVISOR_COMMANDS DBA_ADVISOR_DEFINITIONS DBA_ADVISOR_DEF_PARAMETERS DBA_ADVISOR_DIR_DEFINITIONS DBA_ADVISOR_DIR_INSTANCES DBA_ADVISOR_DIR_TASK_INST DBA_ADVISOR_EXECUTIONS
-> Enterprise Manager
   - ADDM findings
   - create and view SNAPSHOTS
   - Advisors
-> PLSQL packages
   - DBMS_ADDM
   - DBMS_WORKLOAD_REPOSITORY
   - DBMS_ADVISOR
DBA_ADVISOR_DIR_TASK_INST DBA_ADVISOR_EXECUTIONS DBA_ADVISOR_EXECUTION_TYPES DBA_ADVISOR_EXEC_PARAMETERS DBA_ADVISOR_FDG_BREAKDOWN DBA_ADVISOR_FINDINGS DBA_ADVISOR_FINDING_NAMES DBA_ADVISOR_JOURNAL DBA_ADVISOR_LOG DBA_ADVISOR_OBJECTS DBA_ADVISOR_OBJECT_TYPES DBA_ADVISOR_RECOMMENDATIONS DBA_ADVISOR_SQLA_COLVOL DBA_ADVISOR_SQLA_REC_SUM DBA_ADVISOR_SQLA_TABLES DBA_ADVISOR_USAGE V$STATISTICS_LEVEL (not all are shown) DBA_HIST_SYSTEM_EVENT DBA_HIST_ACTIVE_SESS_HISTORY DBA_HIST_SESSMETRIC_HISTORY DBA_HIST_SESS_TIME_STATS DBA_HIST_SYSSTAT
ADDM detects and reports performance problems with the database. The results appear on the Database Home page in Oracle Enterprise Manager (EM), which shows the performance problems that require your attention.
DBMS_ADDM subprogram        What for?
ANALYZE_DB                  Creates an ADDM task for analyzing in database analysis mode and executes it
ANALYZE_INST                Creates an ADDM task for analyzing in instance analysis mode and executes it
ANALYZE_PARTIAL             Creates an ADDM task for analyzing a subset of instances in partial analysis mode and executes it
DELETE                      Deletes an already created ADDM task (of any kind)
DELETE_FINDING_DIRECTIVE    Deletes a finding directive
DELETE_PARAMETER_DIRECTIVE  Deletes a parameter directive
DELETE_SEGMENT_DIRECTIVE    Deletes a segment directive
DELETE_SQL_DIRECTIVE        Deletes a SQL directive
GET_ASH_QUERY               Returns a string containing the SQL text of an ASH query identifying the rows in ASH with impact for the finding
GET_REPORT                  Retrieves the default text report of an executed ADDM task
INSERT_FINDING_DIRECTIVE    Creates a directive to limit reporting of a specific finding type
INSERT_PARAMETER_DIRECTIVE  Creates a directive to prevent ADDM from creating actions to alter the value of a specific system parameter
INSERT_SEGMENT_DIRECTIVE    Creates a directive to prevent ADDM from creating actions to "run Segment Advisor" for specific segments
INSERT_SQL_DIRECTIVE        Creates a directive to limit reporting of actions on specific SQL
For example, you can retrieve the text report of an executed task with DBMS_ADDM.GET_REPORT():
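A minimal sketch in sqlplus, assuming AWR snapshots 100 and 101 exist (the task name is illustrative):

VARIABLE tname VARCHAR2(60)
BEGIN
  :tname := 'my_addm_task';
  DBMS_ADDM.ANALYZE_INST(:tname, 100, 101);   -- instance analysis mode
END;
/
SELECT DBMS_ADDM.GET_REPORT(:tname) FROM dual;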
DBMS_WORKLOAD_REPOSITORY subprograms - What for?
- Adds a colored SQL ID
- Displays a global or Oracle Real Application Clusters (RAC) ASH Spot report in HTML format
- Displays a global or Oracle Real Application Clusters (RAC) ASH Spot report in Text format
- Displays the ASH report in HTML
- Displays the ASH report in text
- Displays the AWR Diff-Diff report in HTML
- Displays the AWR Diff-Diff report in text
- Displays the Global AWR Compare Periods Report in HTML
- Displays the Global AWR Compare Periods Report in text
- Displays the Global AWR report in HTML
- Displays the Global AWR report in text
- Displays the AWR report in HTML
- Displays the AWR report in text
- Displays the AWR SQL Report in HTML format
- Displays the AWR SQL Report in text format
- Creates a single baseline
- Creates a baseline template
- Creates a manual snapshot immediately
- Drops a range of snapshots
- Removes a baseline template that is no longer needed
- Activates service
- Modifies the snapshot settings
- Modifies the window size for the Default Moving Window Baseline
- Renames a baseline
- Shows the values of the metrics corresponding to a baseline
-- 1440 = 24 x 60 minutes
DBMS_ADVISOR subprograms - What for? (Each of these is used either by the SQL Access Advisor only, or by all advisors.)
- Adds a workload reference to an Advisor task (Caution: Deprecated Subprogram)
- Adds a single statement to a workload
- Establishes a link between the current SQL Access Advisor task and a SQL Tuning Set
- Cancels a currently executing task operation
- Copies the contents of a SQL workload object to a SQL Tuning Set
- Creates an external file from a PL/SQL CLOB variable, which is useful for creating scripts and reports
- Creates a new task object
- Creates a new workload object (Caution: Deprecated Subprogram)
- Creates a new Advisor task in the repository
- Deletes an entire workload object (Caution: Deprecated Subprogram)
- Deletes an entire workload object (Caution: Deprecated Subprogram)
- Deletes one or more statements from a workload (Caution: Deprecated Subprogram)
- Removes a link between the current SQL Access Advisor task and a SQL Tuning Set object
- Deletes the specified task from the repository
- Executes the specified task
- Retrieves specific recommendation attributes from a task
- Creates and returns a report for the specified task
- Creates and returns an executable SQL script of the Advisor task's recommendations in a buffer
- Implements the recommendations for a task
- Imports data into a workload from the current SQL cache (Caution: Deprecated Subprogram)
- Imports data into a workload from the current SQL cache (Caution: Deprecated Subprogram)
- Imports data from a SQL Tuning Set into a SQL workload data object (Caution: Deprecated Subprogram)
- Imports data into a workload from the current SQL cache (Caution: Deprecated Subprogram)
- Imports data into a workload from the current SQL cache (Caution: Deprecated Subprogram)
- Stops a currently executing task, ending its operations as it would at a normal exit
- Sets the annotation_status for a particular recommendation
- Performs an analysis on a single SQL statement
- Resets a workload to its initial starting point (Caution: Deprecated Subprogram)
- Resets a task to its initial state
- Imports data into a workload from schema evidence
- Modifies a default task parameter
- Sets the value of a workload parameter
- Sets the specified task parameter value
- Shows how to restate the materialized view
- Updates a task object
- Updates an existing recommendation for the specified task
- Updates a workload object
- Updates one or more SQL statements in a workload
- Updates a task's attributes
ADDM (the Automatic Database Diagnostic Monitor) is an integral part of the Oracle RDBMS. It delivers findings and recommendation "packages" to solve any existing performance issues measured.
You can retrieve the ADDM results after the analysis has been implemented. You can run the $ORACLE_HOME/rdbms/admin/addmrpt.sql script, or use the Enterprise Manager application. Besides this, a number of views (with the DBA_ADVISOR_ prefix) allow retrieval of the findings through SQL. The preferred way of accessing ADDM is through the Enterprise Manager interface, which gives an overview including recommendations on how to solve the problems found.
Before using ADDM, you need to make sure that the AWR has been populated with snapshots. If STATISTICS_LEVEL is set to TYPICAL or ALL, snapshots are taken by default at 60 minute intervals.
Choose the snapshots around the period you wish to examine. E.g. when examining a slow query, use the snapshots from the timestamps before the query was started and after it finished.
EXEC DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(interval=>60, retention=>43200);
You can also define baselines: pairs of snapshots that mark the boundaries within the AWR data of a representative period, against which the recommendations made by ADDM can be judged as part of the tuning process.
You can still use them in 11g, if you want Automatic Shared Memory Management instead of "full" Automatic Memory Management.
Automatic Memory Management enables the Database instance to automatically manage and tune memory. You set a target memory size initialization parameter (MEMORY_TARGET) and optionally a maximum memory size initialization parameter (MEMORY_MAX_TARGET). The instance then tunes to the target memory size, redistributing memory as needed between the SGA and the aggregate PGA (instance PGA).
MEMORY_TARGET will redistribute memory between the SGA and the instance PGA; setting it means full Automatic Memory Management.
Set SGA_TARGET=0 and PGA_AGGREGATE_TARGET=0 for AMM; keep a nonzero SGA_TARGET if you still want Automatic Shared Memory Management.
MEMORY_MAX_TARGET defines the maximum amount of memory that you would want the instance to ever use; it determines the maximum value of MEMORY_TARGET, and must be larger than or the same as MEMORY_TARGET.
- put the tablespace in BACKUP MODE
- with "!" you can issue OS commands from sqlplus
- tar is just used as an example; it could also have been another suitable command.
- make an archived log of your current online log, in order to capture all recent transactions in an archived logfile.
- back up the (cold) archived redologs (including the latest) to tape. The tape device and backup locations are just examples (see the sketch below).
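A minimal sketch of such a user-managed online backup (the tablespace, paths, and the tar usage are illustrative):

SQL> ALTER TABLESPACE users BEGIN BACKUP;
SQL> !tar -cvf /backups/users_ts.tar /u01/oradata/users01.dbf
SQL> ALTER TABLESPACE users END BACKUP;
SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;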
In RMAN, the "target" is the database you want to backup; you need to connect to the target database.
To connect to the target (and optionally to a recovery catalog), here are a few examples (the passwords are placeholders):

rman target /                               -- using OS authentication
rman target system/password@SID             -- using the system account
rman target system/password@SID catalog rman/rman@RCAT
                                            -- where SID is supposed to be the name/sid of the target,
                                            -- and the catalog is supposed to be stored in the database RCAT
With the database cleanly shut down and then mounted (not open), RMAN makes a consistent backup.
RMAN> BACKUP DATABASE;          -- backup as a backupset to the default "device"

The following variation of the command creates image copy backups of all datafiles in the database:

RMAN> BACKUP AS COPY DATABASE;
A database runs either in ARCHIVELOG mode, or in NON-ARCHIVE (NOARCHIVELOG) mode.
Usually, a production database runs in ARCHIVELOG mode, and you let RMAN create (inconsistent) open backups. After a restore, a recovery is needed, using the archived redologs (which contain transactions that are more recent than just what the backup has).
Changes that are done in the database are recorded in the current online redolog before they are written to the database file(s) (at checkpoint). When the current online redolog is full, Oracle will use the next one (redo03.dbf).
Because all transactions are recorded in such files, place the database in Archive mode: before an online redolog is overwritten, the ARCn process saves a copy of that file (with all transactions) in an archive destination,
for example:
/appl/oracle/archives/redo_344.log
for example:
/appl/oracle/archives/redo_345.log
for example:
/appl/oracle/archives/redo_346.log
You can leave the administration of archived logs (how much to keep etc..) to tools like RMAN. RMAN can create ONLINE backups, that is, backups while the database is open.
(The "FAST RECOVERY AREA")
You can create a fast recovery area for your database on a filesystem, or on an Oracle Automatic Storage Management (ASM) disk group.
The fast recovery area holds recovery files. Oracle creates archived logs and flashback logs there. RMAN can store its backup sets and image copies in the fast recovery area, and it uses it when restoring files during media recovery. The fast recovery area also acts as a disk cache for tape.
DB_RECOVERY_FILE_DEST specifies the default location for the flash recovery area. The flash recovery area contains multiplexed copies of control files, online redo logs, archived redo logs, flashback logs, and RMAN backups. Specifying DB_RECOVERY_FILE_DEST without also specifying the DB_RECOVERY_FILE_DEST_SIZE initialization parameter is not allowed. DB_RECOVERY_FILE_DEST_SIZE specifies (in bytes) the hard limit on the total space to be used by the files created in the flash recovery area.
The flash recovery area is a location in which Oracle Database can store and manage files related to backup and recovery. It is distinct from the database area.
You cannot use the old LOG_ARCHIVE_DEST and LOG_ARCHIVE_DUPLEX_DEST parameters if you have set values for the flash recovery area; you must disable those parameters before enabling the flash recovery area. You can instead set values for the LOG_ARCHIVE_DEST_n parameters. If you do not set values for local LOG_ARCHIVE_DEST_n parameters, then enabling the flash recovery area will implicitly set LOG_ARCHIVE_DEST_10 to the flash recovery area.
A level 0 incremental backup is in effect a full backup of a database. A level 1 incremental backup contains only blocks changed after a previous incremental backup (level 0 or 1). Thus you do not always need to run full database backups.
A differential level 1 backup contains only the blocks that are changed since the former level 1 (or level 0) backup. A cumulative level 1 backup contains all blocks changed since the last level 0 incremental backup.
changes compared to the full backup (for this day):
  changes after the full backup
  changes after incremental 1
  changes after incremental 2
  changes after incremental 3
  changes after incremental 4
  changes after incremental 5
  changes after incremental 6
  changes after incremental 7
  changes after incremental 8
  changes after incremental 9
  changes after incremental 10
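The corresponding RMAN commands are, in sketch form:

RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;            # the full (level 0) backup
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;            # a differential level 1 backup
RMAN> BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE; # a cumulative level 1 backup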
The retention of the level 0 backups, and of the incrementals created thereafter, is one of the main items in your backup/recovery policy; it would be very wise to define it explicitly.
Note: incremental backups contain changed blocks, whereas the (archived) redologs record the individual changes.
If a failure occurs and you need to restore and recover, you can get back at the situation of 16:00h. With the archived redologs created after 16:00h, you can recover up to the last committed transaction.
RMAN performs its work through allocated channels (serverprocesses).
A channel reads and writes to the destination (disk, or tape). The device type is part of a channel's declaration, so that it can correctly address the medium.
If you do not specify a disk destination, and a FAST RECOVERY AREA is defined, the backup will be stored in the FAST RECOVERY AREA.
ALLOCATE CHANNEL t1 TYPE 'SBT_TAPE' PARMS 'ENV=(TDPO_OPTFILE=/usr/tivoli/tsm/client/oracle/bin64/tdpo.opt)';
ALLOCATE CHANNEL t2 TYPE 'SBT_TAPE' PARMS 'ENV=(TDPO_OPTFILE=/usr/tivoli/tsm/client/oracle/bin64/tdpo.opt)';
# (channel names t1, t2 are illustrative)
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '...\DB_1\DATABASE\SNCFTEST11G.ORA'; # default
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '.../backups/backup%d_DB_%u_%s_%p';
We changed the "configure controlfile autobackup on" setting from the RMAN prompt in section 8.7.1.
If you back up to disk, and a default persistent config setting exists for disk, you may leave the channel declaration out.
During recovery, RMAN will apply the full and differential backups, and the archive logs, before the database is opened. Even the online redologs were available in this example, so a complete recovery was possible.
You can also recover until a certain SCN, point in time, or log sequence number.
After such an incomplete (point-in-time) recovery, you must open the database with the RESETLOGS option: you recovered to an earlier point in time, whilst the online redologs contain more recent transactions.
In Enterprise Manager, the backup and recovery tasks can be found on the Availability subpage.
By default, RMAN stores its repository (metadata) in the control file of each target database on which it performs operations. Suppose you back up the prod1 and prod2 databases. RMAN stores the metadata for backups of prod1 in the control file of prod1, and the metadata for backups of prod2 in the control file of prod2.
Optionally, you can store the RMAN repository in a recovery catalog: a schema in a separate, central database. This makes (for example) reporting and listing of stored backups much easier. Even then, each target database still keeps metadata (relating to backups for itself) in the controlfile.
You can query the repository with the LIST and REPORT commands.
These provide lists of backups and other objects relating to backup and recovery.
The CROSSCHECK command checks the status of backups in the repository compared to their status on disk. Tapes "are less online" compared to diskstorage, but with the right media management information, you can use CROSSCHECK on tape as well.
Similarly, the RESTORE ... VALIDATE HEADER command checks whether the needed backups exist on disk or in the media management catalog.
The DELETE command checks what is in the repository against what exists on media. You can delete a certain backup, delete OBSOLETE backups that are not needed to satisfy the retention policy, and delete EXPIRED entries for any backups not found when a Crosscheck was performed.
Use LIST FAILURE to show failures, ADVISE FAILURE to display repair options, and REPAIR FAILURE to fix them. This is how to use the Data Recovery Advisor:
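A minimal sketch of such a session at the RMAN prompt:

RMAN> LIST FAILURE;
RMAN> ADVISE FAILURE;
RMAN> REPAIR FAILURE PREVIEW;
RMAN> REPAIR FAILURE;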
The advisor proposes repair options, and possibly repairs failures automatically, without you actually performing the restore and recovery yourself. You might have some doubts here.
When creating a table, you can specify the tablespace itself by using the TABLESPACE clause.
-- virtual column
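A minimal sketch of a virtual column (the table and column names are illustrative):

CREATE TABLE emp_comp
( salary     NUMBER(8,2),
  bonus_pct  NUMBER(4,2),
  total_comp NUMBER GENERATED ALWAYS AS (salary * (1 + bonus_pct)) VIRTUAL
);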
The data in a global temporary table can be preserved for the whole session (the ON COMMIT PRESERVE ROWS clause), while the ON COMMIT DELETE ROWS clause indicates that the data should be deleted at commit.
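A minimal sketch of a global temporary table (names illustrative):

CREATE GLOBAL TEMPORARY TABLE gtt_demo
( id  NUMBER,
  txt VARCHAR2(30)
) ON COMMIT PRESERVE ROWS;   -- or: ON COMMIT DELETE ROWS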
The definition of a global temporary table is persistent, in the dictionary. However, the data is visible only to the session that inserted it.
External tables are tables whose data does not reside in the database. The access driver reads the data in the external files; it is responsible for performing the transformations so that the data matches the external table definition.
By default, an external table uses the ORACLE_LOADER driver.
With the ORACLE_DATAPUMP driver you can also write (unload) into an external table; once the dump files are created, the database can use them to load external data into a database.
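A minimal sketch of an external table, assuming a directory object EXT_DIR and a comma-separated file emp.csv exist (all names are illustrative):

CREATE TABLE emp_ext
( empno NUMBER,
  ename VARCHAR2(30)
)
ORGANIZATION EXTERNAL
( TYPE ORACLE_LOADER
  DEFAULT DIRECTORY ext_dir
  ACCESS PARAMETERS
  ( RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
  )
  LOCATION ('emp.csv')
);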
A cluster stores together tables that share one or more columns which the tables have in common.
For a particular cluster key value, e.g. department_id=20, the key value is stored only once in the cluster, no matter how many rows of the clustered tables contain the value.
In a hash cluster, Oracle Database stores together rows that hash to the same value; the cluster key value is stored only once in each data block. If you specify neither INDEX nor HASHKEYS, an indexed cluster is created by default.
HASHKEYS specifies the number of hash values for the hash cluster. Rows that hash to the same value share the same hash key value. You specify this at the creation of the cluster.
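A minimal sketch of a hash cluster (the names and sizes are illustrative):

CREATE CLUSTER dept_cluster (deptno NUMBER(4))
  SIZE 8192 HASHKEYS 100;

CREATE TABLE departments_c
( deptno NUMBER(4),
  dname  VARCHAR2(30)
) CLUSTER dept_cluster (deptno);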
In a reverse key index, the bytes of the key are reversed; for example, 103 is stored as 301. This spreads out inserts into the index over many blocks.
Consider rows being inserted with sequential values like "restaurant A", "restaurant B", "restaurant C" and so on, where the column is a column with many unique values. A reverse-key index would be ideal here, because Oracle spreads the inserts out over the b-tree.
An index can be created on a column or columns in a table, for example:

CREATE INDEX ... ON employees(salary, commission_pct);
An index-organized table has a storage organization that is a variant of a primary b-tree index structure. In an index-organized table, the data is stored in the index structure itself, sorted on the primary key of the table. Each index entry also holds the nonkey column values: the index is the data, and the data is the index.
In a sense, it is an index on the table itself.
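A minimal sketch of an index-organized table (names illustrative):

CREATE TABLE iot_demo
( id   NUMBER PRIMARY KEY,
  name VARCHAR2(30)
) ORGANIZATION INDEX;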
In the following example the partitioning column is id; four partitions are created and placed in the named tablespaces (gear1, gear2, ...).
Another example creates the table q1_sales_by_region, partitioned by list.
In this example, three range partitions are created. Since the partitions are not named, system generated names are assigned, and they are placed in the tablespaces (ts1, ..., ts4).
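A minimal sketch of a range-partitioned table (the names, dates, and tablespaces are illustrative):

CREATE TABLE sales_range
( id        NUMBER,
  sale_date DATE
)
PARTITION BY RANGE (sale_date)
( PARTITION p1 VALUES LESS THAN (TO_DATE('01-04-2009','DD-MM-YYYY')) TABLESPACE ts1,
  PARTITION p2 VALUES LESS THAN (TO_DATE('01-07-2009','DD-MM-YYYY')) TABLESPACE ts2,
  PARTITION p3 VALUES LESS THAN (MAXVALUE) TABLESPACE ts3
);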
With database authentication, the database verifies that the supplied password matches the one stored (hashed) in the database. With external authentication, the user is authenticated by the operating system or by an external Service like Kerberos; Oracle then does not store passwords in the database.
SQL> SELECT username, account_status, created, password,
            substr(temporary_tablespace,1,20)
     FROM dba_users;

(Output abbreviated: ACCOUNT_STATUS is OPEN for the accounts in use, and EXPIRED & LOCKED for the locked-down default accounts; CREATED shows the creation dates; the PASSWORD column shows EXTERNAL for an externally authenticated user; TEMPORARY_TABLESPACE is TEMP for all users.)
Managing privileges is made easier by using roles, which are named groups of related privileges. You grant system and object privileges to the roles, and then grant roles to users. Unlike schema objects, roles are not contained in any schema.
Privileges and roles are granted (to a user or ROLE) with the GRANT statement, and revoked (from a user or ROLE) with the REVOKE statement.
GRANTABLE
---------
NO
NO
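A minimal sketch of using a role (all names are illustrative):

CREATE ROLE app_read;
GRANT SELECT ON hr.employees TO app_read;
GRANT app_read TO albert;
REVOKE app_read FROM albert;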
Auditing is configured with the following init.ora / spfile.ora parameters.
AUDIT_TRAIL=OS enables database auditing and directs all audit records to an operating system's audit trail file, in the directory set by "AUDIT_FILE_DEST".
AUDIT_TRAIL=DB enables database auditing and directs all audit records to the SYS.AUD$ table.
AUDIT_TRAIL=DB,EXTENDED enables database auditing and directs all audit records to the SYS.AUD$ table, and in addition populates the SQLBIND and SQLTEXT CLOB columns of the SYS.AUD$ table.
AUDIT_TRAIL=XML enables database auditing and writes all audit records to XML format OS files.
AUDIT_TRAIL=XML,EXTENDED enables database auditing and prints all columns of the audit trail, including SqlText and SqlBind values.
AUDIT_SYS_OPERATIONS enables or disables the auditing of operations issued by user SYS, and by users connecting with SYSDBA or SYSOPER privileges. The audit records are written to the operating system's audit trail, in XML format if the AUDIT_TRAIL initialization parameter is set to XML.
On Unix: if the AUDIT_SYSLOG_LEVEL parameter has also been set, then it overrides the AUDIT_TRAIL parameter, and SYS audit records are written to the system audit log using the SYSLOG utility.
AUDIT_FILE_DEST specifies the operating system directory into which the audit trail is written when the AUDIT_TRAIL initialization parameter is set to OS, XML, or XML,EXTENDED. Oracle Database writes the audit records in XML format if the AUDIT_TRAIL initialization parameter is set to an XML option. Oracle Database also writes mandatory auditing information to this location, and if the AUDIT_SYS_OPERATIONS initialization parameter is set, writes audit records for user SYS.
AUDIT_SYSLOG_LEVEL = facility.priority, with facility = { USER | LOCAL[0 - 7] | SYSLOG | DAEMON | KERN | MAIL | AUTH | LPR | NEWS | UUCP | CRON } and priority = { NOTICE | INFO | DEBUG | WARNING | ERR | CRIT | ALERT | EMERG }
If SYS auditing is enabled (AUDIT_SYS_OPERATIONS = TRUE), then SYS audit records are written to the system audit log. If AUDIT_SYSLOG_LEVEL is set and standard audit records are being sent to the operating system, then the standard audit records are written to the system audit log as well.
DBA_AUDIT_POLICIES: shows the policies for fine-grained auditing.
DBA_AUDIT_SESSION: shows audit trail records concerning CONNECT and DISCONNECT.
DBA_AUDIT_STATEMENT: shows audit trail records for all GRANT, REVOKE, AUDIT, NOAUDIT, and ALTER SYSTEM statements in the database.
DBA_AUDIT_OBJECT: shows audit trail records for all objects in the database.
DBA_COMMON_AUDIT_TRAIL: shows standard and fine-grained audit trail entries, plus mandatory and SYS audit records written in XML format.
DBA_OBJ_AUDIT_OPTS: shows which object privileges (access to objects like tables) are enabled for audit.
DBA_PRIV_AUDIT_OPTS: shows which system privileges are enabled for audit.
DBA_STMT_AUDIT_OPTS: shows which statements are enabled for audit.
V$XML_AUDIT_TRAIL: when the audit trail is directed to an XML format OS file, it can be read using a text editor or via the V$XML_AUDIT_TRAIL view, which is similar in presentation to the DBA_AUDIT_TRAIL view.
Privilege auditing audits statements that are authorized by the specified system privilege. For example, AUDIT CREATE ANY TRIGGER audits all statements issued using the CREATE ANY TRIGGER system privilege. Statement auditing audits SQL statements or groups of statements that affect a particular type of database object; for example, AUDIT TABLE audits the CREATE TABLE, TRUNCATE TABLE etc.. statements. Object auditing audits statements on specific objects, such as on the EMPLOYEE table.
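A few illustrative AUDIT statements (the schema and table names are examples):

AUDIT CREATE ANY TRIGGER;                        -- privilege auditing
AUDIT TABLE;                                     -- statement auditing
AUDIT SELECT, INSERT ON hr.employees BY ACCESS;  -- object auditing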
For example, after user ARNOLD has done an INSERT into CONTACTS, you can see which objects were touched by Arnold:

SQL> SELECT username, obj_name, action_name
     FROM dba_audit_trail
     WHERE username='ARNOLD';
Every instance has an alert log file, which logs significant database events and messages. It contains database/instance errors, as well as the creation or alteration of database structures.
When an error occurs, a trace file might have been generated. It contains the error and possibly the SQL statement that was involved.
The ADR is stored outside the database. If it were in a database, the diagnostic data would be unavailable when the database is down.
In 11g, the background_dump_dest, core_dump_dest and user_dump_dest parameters are deprecated and replaced by DIAGNOSTIC_DEST.
The ADR base (the root of the structure) points to the directory set by DIAGNOSTIC_DEST, and holds the diagnostic subdirectories of all databases/instances or services.
The location is determined by the "DIAGNOSTIC_DEST" parameter. The DIAGNOSTIC_DEST parameter is leading. The ADR makes it easy to send diagnostic information to Oracle Support, in case of problems.
The ADR can hold diagnostic data from all Oracle products and components. It is a directory structure which contains all diagnostic information.
It holds, among other things, traces and incidents. Even with multiple instances or products involved, the information would still be contained in one structure, and everything is available from one "root" level.
SQL> SELECT name, value FROM v$diag_info;   (Unix/Linux Example)

NAME                      VALUE
------------------------- ------------------------------------------------------------
Diag Enabled              TRUE
ADR Base                  /opt/app/oracle
ADR Home                  /opt/app/oracle/diag/rdbms/db11/db11
Diag Trace                /opt/app/oracle/diag/rdbms/db11/db11/trace
Diag Alert                /opt/app/oracle/diag/rdbms/db11/db11/alert
Diag Incident             /opt/app/oracle/diag/rdbms/db11/db11/incident
Diag Cdump                /opt/app/oracle/diag/rdbms/db11/db11/cdump
Health Monitor            /opt/app/oracle/diag/rdbms/db11/db11/hm
Default Trace File        /opt/app/oracle/diag/rdbms/db11/db11/trace/db11_ora_3014.trc
Active Problem Count      0
Active Incident Count     0
(Diagram: the ADR directory structure. Under the ADR base, each database/instance has its own ADR_HOME, e.g. diag/rdbms/<database>/<instance>, with subdirectories such as alert, incident, hm and trace; the trace directory holds files like test11g_ora_1704.trc.)
The ADR stores diagnostic data such as traces, dumps, the alert log, and health monitor reports, in a consistent directory structure across multiple instances and multiple products. The database, Automatic Storage Management (ASM), and other Oracle products or components store diagnostic data in it; each instance of each product stores diagnostic data underneath its own ADR home.
For example, in an Oracle RAC environment with shared storage, each instance has its own home directory within the ADR. The unified structure enables customers and Oracle Support to correlate and analyze diagnostic data across instances and products.
adrci> SHOW ALERT -TAIL       -- displays the last messages of the alert log.
adrci> SHOW ALERT -TAIL 50    -- displays the last 50 messages of the alert log.
adrci> SHOW ALERT -TAIL -F    -- displays the last messages of the alert log, then waits for additional messages to be written to the log.
ADRCI can work with multiple ADR homes. More than one ADR home can be current at any one time. There is also one level higher (in the directory structure), the ADR base, which holds all the ADR homes.
A "problem" is a critical error in an Oracle product or component such as the database: the kind of error you would formerly search through the alert log for. An important aspect of ADR is that each problem is assigned a unique problem key. An "incident" is a single occurrence of the problem; each incident gets a unique numeric incident ID within ADR.
The problem key includes an error code (such as ORA 600) and in some cases one or more error parameters. Two incidents have the same root cause if their problem keys match.
To send diagnostic data to Oracle Support, you can "Package" that info. You gather the diagnostic data of a particular incident and store this packaged information in a zip file.
(In EM, go to the latest alert). Then choose "View Problem Details". From there, you can choose to "Quick Package" the diagnostic information.
A package can contain incidents for one or more problems. Diagnostic data is collected into a package using the Incident Packaging Service (IPS). You can add files to the package, or remove selected files from the package.
The package can be modified. When you are ready to upload, you generate the physical package with ADRCI, which saves the data into a zip file.
adrci> SHOW INCIDENT                    -- shows a simple list of all incidents (INCIDENT_ID, PROBLEM_KEY, CREATE_TIME)
adrci> SHOW INCIDENT -MODE DETAIL -P "INCIDENT_ID=<id>"
                                        -- obtain detailed info about a particular incident
A newly created package exists only as metadata in the ADR; it is called a logical package. The logical package is referred to by its package number in subsequent commands.
The destination is just a suitable path on your filesystem. ADRCI generates the physical package (the zip file) in the designated path. For example, to generate a physical package in the directory "/home/oracle/packages" from logical package number 5:
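A minimal adrci sketch (the problem, incident and package numbers are illustrative):

adrci> ips create package problem 1 correlate basic
adrci> ips add incident 112564 package 5
adrci> ips generate package 5 in /home/oracle/packages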
DBA_THRESHOLDS: lists the threshold settings defined for the instance.
DBA_OUTSTANDING_ALERTS: describes the outstanding alerts in the database.
DBA_ALERT_HISTORY: lists a history of alerts that have been cleared.
V$ALERT_TYPES: provides information such as group and type for each alert.
V$METRICNAME: contains the names, identifiers, and other information about the system metrics.
Many server-generated alerts are based on threshold values of system metrics.
Chapter 13. Some keypoints in Health Monitoring.
Oracle 11g includes a framework called Health Monitor, which runs diagnostic checks in the database. Checks run automatically in response to critical errors, or manually via Oracle EM or the DBMS_HM package.
Health checks (checkers) examine various layers and components of the database, e.g. file corruptions, physical and logical block corruptions, and undo, redo and dictionary corruptions. The health checks generate reports of their findings and, in many cases, recommendations for resolving problems. Health checks can be run in two ways:
- Reactive: the fault diagnosability infrastructure runs health checks automatically in response to a critical error.
- Manual: as a DBA, you run health checks using either the DBMS_HM PL/SQL package or the Enterprise Manager interface; you can do this on a regular basis if desired, or when Oracle Support is working with you on a service request.
Most health checks can run while the database is open (that is, in OPEN mode or MOUNT mode). Some checks are "offline capable": they can also run when the instance is available but the database itself is closed (that is, in NOMOUNT mode).
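A minimal sketch of running a check manually with DBMS_HM (the run name is illustrative):

BEGIN
  DBMS_HM.RUN_CHECK(check_name => 'Dictionary Integrity Check',
                    run_name   => 'my_dict_run');
END;
/
SELECT DBMS_HM.GET_RUN_REPORT('my_dict_run') FROM dual;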
SQL> SELECT name, offline_capable, substr(description,1,50) FROM v$hm_check;

(Output abbreviated to the DESCRIPTION column; the OFFLINE_CAPABLE column shows Y or N per check:)

Check for HM Functionality
Checks integrity of all database files
Checks integrity of a datafile block
Checks integrity of redo log content
Checks logical content of a block
Checks a transaction for corruptions
Checks integrity of an undo segment
Checks all control files in the database
Checks a multiplexed copy of the control file
Check for all datafiles in the database
Checks a datafile
Checks all members of a log group
Checks a particular member of a log group
Checks an archived log
Checks redo log content
Checks file accessability
Checks file accessability
Revalidate corrupted txn
Creates dummy failures
Checks dictionary integrity
(Fragment of a Health Monitor run report, stored in XML; one finding reads:)

...1:00</FINDING_CREATION_TIME>
SQL dictionary health check: dependency$.dobj# fk 126 on object DEPENDENCY$ failed
- description: No further damage description
ME -----------------------ry Integrity Check ry Integrity Check ture Integrity Check ture Integrity Check ture Integrity Check ture Integrity Check ture Integrity Check ture Integrity Check ry Integrity Check ture Integrity Check
RUN_MODE -------MANUAL MANUAL REACTIVE REACTIVE REACTIVE REACTIVE REACTIVE REACTIVE MANUAL REACTIVE
TIMEOUT ---------------------0 0 0 0 0 0 0 0 0 0
START_TIME LAST_RESUME_TIME END_TIME ------------------------- ------------------------- ---------------29-NOV-09 10.39.14.915000000 AM 29-NOV-09 29-NOV-09 10.40.10.787000000 AM 29-NOV-09 29-NOV-09 07.02.25.961000000 PM 29-NOV-09 29-NOV-09 07.06.51.502000000 PM 29-NOV-09 29-NOV-09 07.22.32.572000000 PM 29-NOV-09 29-NOV-09 07.33.21.167000000 PM 29-NOV-09 29-NOV-09 07.37.54.375000000 PM 29-NOV-09 29-NOV-09 08.38.04.961000000 PM 29-NOV-09 30-NOV-09 01.00.29.428000000 PM 30-NOV-09 11-AUG-08 12.02.38.588000000 PM 11-AUG-08
(Output of a query on V$IR_FAILURE, the Data Recovery Advisor failures, abbreviated: failures of class PERSISTENT_DATA, with PRIORITY CRITICAL or HIGH and STATUS OPEN or CLOSED, detected between 21-APR-09 and 30-NOV-09. The descriptions include: "Control file C:\ORADATA\TEST11G\CONTROL01.CTL is missing", "One or more non-system datafiles are missing", "Datafile 5: 'C:\ORADATA\TEST11G\STAGING.DBF' is missing", and "SQL dictionary health check: dependency$.dobj# fk 126 on ...".)
Chapter 14. Some keypoints on Network Configurations.
PMON performs "Service Registration" at the local listener, at startup of each database instance. With this dynamic registration, the service does not need to be listed in the listener.ora file; without it, listener.ora must contain an entry for each "service" (like a Database Service).
Thanks to dynamic registration, the listener.ora does not need to contain the services it is listening on; only the protocol addresses are needed, like shown below:
Service registration informs the listener that a database service and its service handlers are available. A service handler is a process that acts as a connection point to a database. PMON provides the listener with the instance name, database service names, and the service handler types and addresses; this enables the listener to start a service handler when a client request arrives.
With "lsnrctl status" you can view listener information (like uptime etc..), and to what services it is handling requests. With "lsnrctl reload" you can re-read the configuration file in order for new settings to take effect without stopping and starting the listener.
If the port (and some other info) is omitted, the default is assumed, like the default port 1521.
"tnsnames.ora" file with all data needed d in the tnsnames.ora file. a middle tier (layer).
Oracle Connection Manager enables many clients to connect to a single server by acting as a connection concentrator: it funnels multiple client sessions onto one connection. This is done through multiplexed network connections. Connection Manager reduces operating system resource requirements, offers "access control" and much more.
Clients connect through the Connection Manager. To route clients to the database server this way, configure the tnsnames.ora file or the directory server with addresses for both Oracle Connection Manager and the listener.
For name resolution (locating remote services), first a Directory Service may be consulted (if configured); otherwise the local "tnsnames.ora" file should be read. The order is determined by NAMES.DIRECTORY_PATH in sqlnet.ora.
-- use the IP or dns hostname of the remote server where the listener resides.
You can specify the connection string as "username/password@alias",
Shared server configuration:
(Diagram: shared server architecture. Clients connect through the network via the listener to dispatchers; the request queue and the response queues live in the SGA (large pool); shared servers pick up requests from the common request queue and place results on the calling dispatcher's response queue.)
The listener handles the incoming network session requests. Instead of each session going to a dedicated server process, a shared server process picks up a request from a common queue.
PMON informs the listener about the number of connections being handled by each dispatcher; the listener will then hand off the request to the least loaded dispatcher.
A virtual circuit is a piece of shared memory used by the dispatcher for client requests and replies. Each client connection is bound to a virtual circuit. The dispatcher places a virtual circuit on a common queue when a request arrives; a shared server picks up the virtual circuit from the common queue, services the request, and relinquishes the virtual circuit before attempting to retrieve another one.
SHARED_SERVERS specifies the initial number of shared servers to start and the minimum number of shared servers to keep. This is the only required parameter for using shared servers. MAX_SHARED_SERVERS specifies the maximum number of shared servers that can run simultaneously.
SHARED_SERVER_SESSIONS specifies the total number of shared server user sessions that can run simultaneously. Setting this parameter enables you to reserve user sessions for dedicated servers.
DISPATCHERS configures dispatcher processes in the shared server architecture. MAX_DISPATCHERS specifies the maximum number of dispatcher processes that can run simultaneously; this parameter can be ignored for now, as it will only be useful when the number of dispatchers is auto-tuned according to the load. CIRCUITS specifies the total number of virtual circuits that are available for inbound and outbound network sessions.
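A minimal sketch of shared server settings (all values are illustrative):

ALTER SYSTEM SET SHARED_SERVERS=5;
ALTER SYSTEM SET MAX_SHARED_SERVERS=20;
ALTER SYSTEM SET DISPATCHERS='(PROTOCOL=TCP)(DISPATCHERS=3)';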
As a rule of thumb, start with a ratio of one shared server per ten connections when the rate of requests is low; when the rate of requests is high, a lower connections-to-servers ratio could be needed.
SQL> desc v$dispatcher_config
 Name
 -----------------------------
 CONF_INDX
 NETWORK
 DISPATCHERS
 CONNECTIONS
 SESSIONS
 POOL
 TICKS
 INBD_TIMOUT
 OUTBD_TIMOUT
 MULTIPLEX
 LISTENER
 SERVICE
"DBA_CONSTRAINTS".
Owner of the table
Name associated with constraint definition
Type of constraint definition
Table associated with this constraint
Text of search condition for table check
Owner of table used in referential constraint
Name of unique constraint definition for referenced table
The delete rule for a referential constraint
Enforcement status of constraint - ENABLED or DISABLED
Is the constraint deferrable - DEFERRABLE or NOT DEFERRABLE
Is the constraint deferred by default - DEFERRED or IMMEDIATE
Was this constraint system validated? - VALIDATED or NOT VALIDATED
Was the constraint name system generated? - GENERATED NAME or USER NAME
Creating this constraint should give ORA-02436. Rewrite it before 2000 AD.
If set, this flag will be used in optimizer
The date when this column was last enabled or disabled
The owner of the index used by this constraint
The index used by this constraint
Is the object invalid
Constraint type P means the column is not only unique, but it's also the PRIMARY KEY of the table. Type R means a FOREIGN KEY: it refers to (the value of) a PRIMARY KEY in another table.
(Output of queries on DBA_CONSTRAINTS / DBA_CONS_COLUMNS, abbreviated: COLUMN_NAME values are LOCID, DEPID, DEPTNAME, EMPID, SALARY, DEPTNO, EMPNO; for all listed constraints, DEFERRABLE = NOT DEFERRABLE, DEFERRED = IMMEDIATE, VALIDATED = VALIDATED, and STATUS = ENABLED.)
Or, alternatively:

alter table DEPARTMENTS modify constraint FK_DEPT_LOC disable;
alter table EMPLOYEES modify constraint FK_EMP_DEPT disable;
alter table DEPARTMENTS modify constraint PK_DEPT disable;
alter table EMPLOYEES modify constraint PK_EMP disable;
alter table LOCATIONS modify constraint PK_LOC disable;

alter table DEPARTMENTS modify constraint PK_DEPT enable;
alter table EMPLOYEES modify constraint PK_EMP enable;
alter table LOCATIONS modify constraint PK_LOC enable;
alter table DEPARTMENTS modify constraint FK_DEPT_LOC enable;
alter table EMPLOYEES modify constraint FK_EMP_DEPT enable;
SELECT 'alter table '||table_name||' disable constraint '||constraint_name||';'
FROM user_constraints;

SELECT 'alter table '||table_name||' enable constraint '||constraint_name||';'
FROM user_constraints;

(one query generating the DISABLE statements, and one generating the ENABLE statements)
ENABLE VALIDATE means the constraint is checked and is guaranteed to hold for all rows.
ENABLE NOVALIDATE means the constraint is checked for new or modified rows, but existing data may violate the constraint.
DISABLE NOVALIDATE means the constraint is not checked, so data may violate the constraint.
DISABLE VALIDATE means the constraint is not checked, but it disallows any modification of the constrained columns.
alter table DEPARTMENTS modify constraint FK_DEPT_LOC disable;
alter table LOCATIONS modify constraint PK_LOC disable;
Per default, Oracle will always try to create or use a unique index when enabling a primary key or unique constraint. If there are duplicate values, this does not work.
A point with a deferrable primary key constraint is that Oracle then uses a NON Unique index. The error in the example showed that there was a duplicate LOCID.
A plan directive associates a consumer group with a plan and specifies how resources are to be allocated to the group.
The built-in consumer groups include:

ORA$AUTOTASK_URGENT_GROUP   - for urgent maintenance tasks
BATCH_GROUP                 - for batch operations
ORA$DIAGNOSTICS             - for diagnostics
ORA$AUTOTASK_HEALTH_GROUP   - for health checks
ORA$AUTOTASK_SPACE_GROUP    - for space management advisors
ORA$AUTOTASK_STATS_GROUP    - for gathering optimizer statistics
ORA$AUTOTASK_MEDIUM_GROUP   - for medium-priority maintenance tasks
INTERACTIVE_GROUP           - for interactive, OLTP operations
OTHER_GROUPS                - users not included in any consumer group of the active plan
DEFAULT_CONSUMER_GROUP      - users not assigned to any consumer group
SYS_GROUP                   - system administrators
LOW_GROUP                   - low-priority sessions
AUTO_TASK_CONSUMER_GROUP    - maintenance task consumer group
The 11g version of CREATE_PLAN_DIRECTIVE has a few more parameters compared to the 10g and 9i versions (offering more ways to manage resources).
- a "pending" area is a working area for defining a new plan. - first clear the existing one, then create a new working area.
- multiple 'levels' provide a way of explicitly specifying how all primary and leftover resources are to be used.
Next, create the RESOURCE GROUPS.
- you see? Here you couple a CONSUMER GROUP to a PLAN and to LIMITS.
- level 1 cpu of all 3 groups adds up to 100%: 75+15+10=100
- How about level 2?
, 'REPORTING_CG');
Chapter 18. Some keypoints on FLASHBACK Options.
For the flashback features that rely on undo, the undo tablespace should be large enough, and the UNDO RETENTION should not be too small. After a commit, undo data is no longer needed for rollback or transaction recovery purposes; but long-running queries may require this old undo information, and the success of several Oracle Flashback features depends on it.
The fast recovery area should be large "enough" as well. How far back you can "flashback" the database depends on how much flashback data Oracle has kept.
FLASHBACK TABLE restores an earlier state of a table in the event of human or application error. The time in the past to which the table can be flashed back is dependent on the amount of undo data in the system. You cannot restore a table to an earlier state across any DDL operations that change the structure of the table.
Flashback Version Query lets you see how row data evolved between commits, using the VERSIONS BETWEEN extension to the FROM clause. For example, suppose you insert rows. Then, afterwards, you can view the versions of those rows.
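A minimal sketch of Flashback Query and Flashback Version Query (the table and column names are illustrative):

SELECT * FROM employees
AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '15' MINUTE);

SELECT versions_starttime, versions_operation, id
FROM employees
VERSIONS BETWEEN TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' HOUR) AND SYSTIMESTAMP;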
FLASHBACK DATABASE rewinds the database to a past time or system change number (SCN). It is an alternative to incomplete database recovery. Flashback logging must be enabled, and the database must have been put in ARCHIVELOG mode. The database must be mounted in exclusive mode but not open.
Related to FLASHBACK is 11g's FLASHBACK DATA ARCHIVE (FDA). It keeps historic data of tables (that are marked) for a period of your choice,
When you create a FLASHBACK DATA ARCHIVE, the period chosen can be, for example, a week, a month, or a year. The FDA is going to play an important role.
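A minimal sketch of creating and using a flashback data archive (the names and tablespace are illustrative):

CREATE FLASHBACK ARCHIVE fda1 TABLESPACE fda_ts RETENTION 1 YEAR;
ALTER TABLE hr.employees FLASHBACK ARCHIVE fda1;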