Oracle Database Architecture Overview
Parameter Files
In order to start, Oracle needs some basic information, which is supplied by a parameter file. The parameter file can be either a pfile or an spfile:
pfile - a very simple plain-text file which can be manually edited via vi or notepad
spfile - a binary file which cannot be manually edited (requires Oracle 9i or higher)
The parameter file for Oracle is the commonly known file init.ora or init<oracle sid>.ora; the file contains key/value pairs of information that Oracle uses when starting the database, such as the database name, cache sizes and the location of the control files.
By default the location of the parameter file is:
windows - $ORACLE_HOME\database
unix - $ORACLE_HOME/dbs
The main difference between the spfile and the pfile is that instance parameters can be changed dynamically when using an spfile, whereas pfile parameters require an instance restart to take effect.
To convert from one format to the other, or to start up with a specific pfile, you can perform the following:
create a pfile from an spfile - create pfile from spfile;
create an spfile from a pfile - create spfile from pfile;
startup using a specific pfile - startup pfile='c:\oracle\pfile\initD10.ora';
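One way to check whether the instance came up with an spfile (a quick sketch) is:

select decode(value, null, 'pfile', 'spfile') "Started with"
from   v$parameter
where  name = 'spfile';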
Data Files
By default Oracle will create at least two data files: the system data file, which holds the data dictionary, and the sysaux data file, in which non-dictionary objects are stored. However, there will usually be many more, holding various types of data. A data file belongs to one tablespace only (see tablespaces for further details).
Data files can be stored on a number of different filesystem types
Cooked - these are normally filesystems that can be accessed using "ls"
commands in unix
Raw - these are raw disk partitions which cannot be viewed, normally used to
avoid filesystem buffering.
ASM - automatic storage management is Oracle's own database filesystem
(see asm for further details).
Clustered FS - this is a special filesystem used in Oracle RAC environments.
Data files contain the following
Segments - are database objects: a table, an index, rollback segments. Every object that consumes space is a segment. Segments themselves consist of one or more extents.
Extents - are a contiguous allocation of space in a file. Extents, in turn, consist of data blocks.
Blocks - are the smallest unit of space allocation in Oracle. Blocks are normally 2KB, 4KB, 8KB, 16KB or 32KB in size.
The relationship between segments, extents and blocks is therefore hierarchical: a segment is made up of one or more extents, and each extent is made up of contiguous blocks.
The parameter DB_BLOCK_SIZE determines the default block size of the database. Choosing the block size depends on what you are going to do with the database: if you are using small rows then use a small block size (Oracle recommends 8KB); if you are using LOBs then the block size should be larger.
2KB or 4KB - OLTP (online transaction processing) databases would benefit from a small block size
8KB - the default
16KB or 32KB - DW (data warehouse) and media databases would benefit from a larger block size
Notes
You can have different block sizes within the database, each tablespace having a different block size depending on what is stored in that tablespace. For example, the system tablespace could use the default 8KB while an OLTP tablespace uses a block size of 4KB.
There are a few parameters that cannot be changed after installing Oracle, and DB_BLOCK_SIZE is one of them, so make sure you make the correct choice when installing Oracle.
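A sketch of the multiple block size setup described in the notes (tablespace name, file path and sizes are illustrative); a matching cache must exist before the tablespace is created:

-- configure a cache for the non-default 4KB block size
alter system set db_4k_cache_size = 64m;
-- now a tablespace can be created with that block size
create tablespace oltp_data
  datafile '/u01/oradata/oltp_data01.dbf' size 100m
  blocksize 4k;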
A data block is made up of the following; the two main areas are the free space and the data area.
Header - contains information regarding the type of block (a table block, index block, etc), transaction information regarding active and past transactions on the block, and the address (location) of the block on the disk.
Table Directory - contains information about the tables that store rows in this block.
Row Directory - contains information describing the rows that are to be found on the block. This is an array of pointers to where the rows are to be found in the data portion of the block.
Block overhead - the three pieces above are known as the block overhead and are used by Oracle to manage the block itself.
Free space - space available for new rows and for existing rows to grow into.
Data - the area holding the row data itself.
Tablespaces
A tablespace is a container which holds segments. Each and every segment belongs to
exactly one tablespace. Segments never cross tablespace boundaries. A tablespace
itself has one or more files associated with it. An extent will be contained entirely
within one data file.
So in summary the Oracle storage hierarchy is: the database contains tablespaces, a tablespace has one or more data files, tablespaces contain segments, segments consist of extents, and extents consist of contiguous blocks.
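The hierarchy can be seen in the data dictionary; a small sketch (SCOTT.EMP is just an illustrative segment):

select segment_name, tablespace_name, extent_id, file_id, blocks, bytes
from   dba_extents
where  owner = 'SCOTT' and segment_name = 'EMP'
order  by extent_id;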
Undo File
When you change data you should be able either to roll back that change or to provide a read-consistent view of the original data. Oracle uses undo records to store the original data (before images); this allows a user to roll back the data to its original state if required. This undo data is stored in the undo tablespace. See undo for further information.
Control file
The control file is one of the most important files within Oracle; it contains the locations of the data files and redo logs, current log sequence numbers, RMAN backup set details and the SCN (system change number - see below for more details). This file should have multiple copies due to its importance. The control file is used in recovery because it records all checkpoint information, which allows Oracle to recover data from the redo logs. It is the first file that Oracle consults when starting up.
The view V$CONTROLFILE can be used to list the control files; you can also use V$CONTROLFILE_RECORD_SECTION to view the control file's record structure. You can also log any checkpoints while the system is running by setting LOG_CHECKPOINTS_TO_ALERT to true.
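A quick sketch of querying both views:

select name from v$controlfile;
select type, record_size, records_total, records_used
from   v$controlfile_record_section;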
See recovering critical files for more information.
Password file
This file is optional and contains the names of the database users who have been granted the special SYSDBA and SYSOPER admin privileges.
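The granted users can be listed from the V$PWFILE_USERS view (a minimal sketch):

select * from v$pwfile_users;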
Log files
The alert.log file contains important startup information, major database changes and system events; it will probably be the first file to look at when you have database issues. The file contains log switches, database errors, warnings and other messages. If this file is removed, Oracle creates another one automatically.
Trace Files
Trace files are debugging files which can trace background process information (LGWR, DBWn, etc), core dump information (ORA-600 errors, etc) and user process information (SQL).
Oracle Processes
Oracle server processes perform all the database operations, such as inserting and deleting data. The Oracle processes, working with the SGA (Oracle's memory structure), manage the database.
There are two types of Oracle process:
User process - responsible for running the application that connects to the database
Oracle server process - performs the Oracle tasks that manage the database
There are a number of server processes that could be running; Windows will only have one process, called Oracle, and this process will have one thread for each of the processes below.
Process Monitor (PMON) - cleans up after failed or killed processes, releasing the locks and other resources they held.
System Monitor (SMON) - performs instance recovery at startup, cleans up temporary segments and coalesces free space.
Distributed database recovery (RECO) - recovers transactions that are left in a prepared state because of a crash or loss of connection during a two-phase commit.
Checkpoint process (CKPT) - charged with instructing the database block buffer writers to write the database buffers to disk; it then updates the data file headers and control file to indicate when the checkpoint was performed. There is a trade-off between checkpoints and recovery time: the more checkpointing, the less recovery time is needed when a crash occurs. The CKPT process does not perform the checkpoint itself but assists with the checkpointing process by updating the headers of the data files and the control files after the checkpoint completes.
Database Writer (DBWn) - responsible for writing dirty blocks to disk when free space within the database buffer cache is low, flushing the buffer cache out to the disk. It uses the LRU (Least Recently Used) algorithm, which retains data in memory based on how long it has been since someone asked for that data. The database buffer cache is flushed to disk:
when a server process can't find a clean reusable buffer after checking a threshold number of buffers
every 3 seconds
when a user's process has searched too long for a free buffer when reading a buffer into the buffer cache
when the instance is shut down
when a segment is dropped
If you have multiple CPUs then it is advised to run multiple database writers; use the DB_WRITER_PROCESSES parameter to increase the number of database writers (the instance has to be restarted).
Log Writer (LGWR) - responsible for flushing to disk the contents of the redo log buffer located in the SGA. Both committed and uncommitted changes are written to the redo log buffer, and the redo log buffer is flushed to disk before the data blocks are written. The redo log buffer is flushed to disk:
every 3 seconds
when the redo log buffer is a third full or contains 1MB of buffered data
when a user commits a transaction
Archive process (ARCn) - used when the database is in archivelog mode; it copies an online redo log file to another location when a log switch occurs, and these archived logs would be used to perform media recovery. There can be a maximum of ten archive processes running; the LOG_ARCHIVE_MAX_PROCESSES parameter determines how many archive processes will be started (default 2).
Manageability Monitor (MMON) - collects statistics to help the database manage itself. The MMON process collects the AWR (automatic workload repository) snapshot information, which is used by the ADDM (automatic database diagnostic monitor); MMON also issues alerts when thresholds are exceeded.
Manageability Monitor Light (MMNL) - flushes ASH (active session history) information to disk when the buffer is full; it also captures session history and computes metrics.
Memory Manager (MMAN) - uses the metrics collected to determine the ideal distribution of memory within Oracle. It constantly monitors the database and adjusts the memory allocations according to workloads.
Job Queue Coordination (CJQ0) - used to schedule and run user jobs. It spawns job queue slave processes (J000-J999) which actually run the jobs.
J000-J999 - these processes are what actually run the scheduled jobs requested by CJQ0.
FMON - maps files to intermediate storage layers and physical devices; the results are normally accessed via the DBMS_STORAGE_MAP package, and the 3rd-party LVM (logical volume manager) supplier will supply a driver to map to.
Recovery Writer (RVWR) - started when you implement flashback logging; it logs the before image (taken from the flashback buffer) of an Oracle block before it is changed, and this is written to the flashback log files.
Change Tracking Writer (CTWR) - tracks any data blocks that have changed, which RMAN can then use to speed up backups by avoiding having to read an entire data file to see what has changed.
Queue Monitor Coordinator (QMNC) - the Streams Advanced Queuing process; it spawns and coordinates the queue monitor slave processes.
BSP - used in OPS and keeps each server's SGA in the cluster consistent with the others.
LMON - used in OPS and monitors all instances in a cluster to detect the failure of an instance.
LMD - used in OPS and controls the global locks and global resources for the block buffer cache in a clustered environment.
Lock process (LCKn) - used in OPS and is the same as the LMD daemon but handles requests for all global resources other than database block buffers.
Dispatcher process (Dnnn) - dispatcher processes that are used in a shared server environment.
Shared server process (Snnn) - shared server processes that are used in a shared server environment.
Process spawner (PSP0) - has the job of creating and managing other Oracle processes.
SHAD - the shadow (dedicated server) thread created for each user session on Windows.
There are a number of useful commands and views you can use to get information regarding the running processes.
Useful SQL
Display all processes
select
substr(s.username,1,18) username,
substr(s.program,1,20) program,
decode(s.command,
0,'No Command',
1,'Create Table',
2,'Insert',
3,'Select',
6,'Update',
7,'Delete',
9,'Create Index',
15,'Alter Table',
21,'Create View',
23,'Validate Index',
35,'Alter Database',
39,'Create Tablespace',
41,'Drop Tablespace',
40,'Alter Tablespace',
53,'Drop User',
62,'Analyze Table',
63,'Analyze Index',
s.command||': Other') command
from
v$session s,
v$process p,
v$transaction t,
v$rollstat r,
v$rollname n
where s.paddr = p.addr
and s.taddr = t.addr (+)
and t.xidusn = r.usn (+)
and r.usn = n.usn (+)
order by 1
;
Useful Views
V$BGPROCESS - describes the background processes
V$PROCESS - contains information about the currently active processes
PGA (Process Global Area) - this is memory that is private to a single process or thread and is not accessible by any other process or thread.
UGA (User Global Area) - this is memory that is associated with your session; it can be found in the PGA or the SGA depending on whether you are connected to the database via shared server:
Shared Server - the UGA will be in the SGA
Dedicated Server - the UGA will be in the PGA
SGA
There are five memory structures that make up the System Global Area (SGA). The
SGA will store many internal data structures that all processes need access to, cache
data from disk, cache redo data before writing to disk, hold parsed SQL plans and so
on.
Shared Pool
The shared pool consists of the following areas:
Library cache - includes the shared SQL area, private SQL areas, PL/SQL procedures and packages, and control structures such as locks and library cache handles. Oracle code is first parsed, then executed; this parsed code is stored in the library cache. Oracle first checks the library cache to see if there is an already parsed and ready-to-execute version of the statement in there; if there is, this reduces CPU time considerably and is called a soft parse. If Oracle has to parse the statement, this is called a hard parse. If there is not enough room in the cache, Oracle will remove older parsed code; obviously it is better to keep as much parsed code in the library cache as possible. Keep an eye on library cache misses and reloads, which are an indication that a lot of hard parsing is going on.
Dictionary cache - a collection of database tables and views containing information about the database, its structures, privileges and users. When statements are issued Oracle will check permissions, access rights, etc, and retrieve this information from its dictionary cache; if the information is not in the cache then it has to be read in from disk and placed into the cache. The more information held in the cache, the less often Oracle has to access the slower disks.
The parameter SHARED_POOL_SIZE is used to determine the size of the shared pool; there is no way to adjust the two caches independently, you can only adjust the shared pool size as a whole.
The shared pool uses an LRU (least recently used) list to maintain what is held in the cache; see buffer cache for further details on the LRU.
You can clear down the shared pool area by using the following command
alter system flush shared_pool;
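One way to keep that eye on hard parsing is the library cache hit ratio (a sketch):

select namespace, gets, gethitratio, pins, reloads
from   v$librarycache;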
Buffer cache
This area holds copies of data blocks read from the data files. The buffers in the cache are organised into two lists: the write list and the least recently used (LRU) list. The write list holds dirty buffers, which contain modified data not yet written to disk.
The LRU list holds free buffers, pinned buffers (currently in use) and dirty buffers that have not yet been moved to the write list; a dirty buffer contains data that has been read from disk and modified but hasn't been written back to disk.
It's the database writer's job to make sure that there are enough free buffers available to user sessions; if not, it will write out dirty buffers to disk to free up the cache.
There are 3 buffer caches:
Default buffer cache - everything not assigned to the keep or recycle buffer pools, sized by DB_CACHE_SIZE.
Keep buffer cache - keeps the data in memory (the goal is to keep warm/hot blocks in the pool for as long as possible), sized by DB_KEEP_CACHE_SIZE.
Recycle buffer cache - removes data immediately from the cache after use (the goal here is to age out blocks as soon as they are no longer needed), sized by DB_RECYCLE_CACHE_SIZE.
The standard block size cache is sized by DB_CACHE_SIZE; if tablespaces are created with a different block size then you must also create a cache (DB_nK_CACHE_SIZE) to match that block size.
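A sketch of using the pools (the pool size and table name are illustrative):

alter system set db_keep_cache_size = 32m;
-- pin a small, hot lookup table in the keep pool
alter table lookup_codes storage (buffer_pool keep);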
Redo buffer
The redo buffer is where data that needs to be written to the online redo logs is cached temporarily before being written to disk; this area is normally less than a couple of megabytes in size. These entries contain the necessary information to reconstruct/redo changes made by the INSERT, UPDATE, DELETE, CREATE, ALTER and DROP commands. The contents of this buffer are flushed when it gets one third full or contains 1MB of cached redo log data.
Use the LOG_BUFFER parameter to adjust the size, but be careful about increasing it too far: it will reduce the number of flushes, but commits will take longer.
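A sketch of checking the current size and whether sessions have had to wait for space in the buffer:

select value from v$parameter where name = 'log_buffer';
select name, value from v$sysstat where name = 'redo log space requests';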
Large Pool
This is an optional memory area that provides large areas of memory for:
Parallel execution of statements - allows for the allocation of inter-process message buffers, used to coordinate the parallel query servers.
Shared server and RMAN - session (UGA) memory for shared server connections, and I/O buffers for RMAN backup and restore operations.
Streams Pool
Streams are used for enabling data sharing between databases or application environments. Use the STREAMS_POOL_SIZE parameter to adjust its size.
Fixed SGA
The fixed SGA contains a set of variables that point to the other components of the SGA, and variables that contain the values of various parameters. The area is a kind of bootstrap section of the SGA - something that Oracle uses to find the other bits and pieces of the SGA.
For more information regarding setting up the SGA, see the SGA section.
PGA and UGA
The PGA (Process Global Area) is a specific piece of memory that is associated with a single process or thread; it is not accessible by any other process or thread. Note that each of Oracle's background processes has a PGA area. The UGA (User Global Area) is your state information; this area of memory is accessed by your current session. Depending on the connection type, the UGA can be located in the SGA, where it is accessible by any one of the shared server processes; because a dedicated connection does not use shared servers, the memory will be located in the PGA:
Shared server - the UGA will be part of the SGA
Dedicated server - the UGA will be in the PGA
Memory Area | Dedicated Server | Shared Server
Nature of session (UGA) memory | Private | Shared
Location of the persistent area | PGA | SGA
Location of part of the runtime area for SELECT statements | PGA | PGA
Location of the runtime area for DML/DDL statements | PGA | PGA
Oracle creates a PGA area for each user's session; this area holds data and control information and is used exclusively by the user's session. User cursors and sort operations are all stored in the PGA. The PGA is split into two areas:
Session information (runtime area) - an instance running without shared server requires additional memory for the user's session, such as private SQL areas and other information.
Stack space (private SQL area) - the memory allocated to hold a session's variables, arrays, and other information relating to the session.
System Parameters
workarea_size_policy - auto or manual; when auto, SQL work areas are sized automatically from the PGA target
pga_aggregate_target - the target aggregate amount of PGA memory available to all server processes of the instance
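A sketch of enabling automatic PGA memory management (the target value is illustrative):

alter system set workarea_size_policy = auto;
alter system set pga_aggregate_target = 512m;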
Oracle Transactions
A transaction is a logical piece of work consisting of one or more SQL statements. A transaction is started whenever data is read or written, and ended by a COMMIT or ROLLBACK. DDL statements always perform a commit first; this is called an implicit commit because the user did not issue the commit.
Oracle uses transaction locking and multiversion concurrency control (using undo records) to ensure serializability of transactions; this stops user conflicts while ensuring database consistency.
Transaction Properties
Database transactions should exhibit attributes described by the ACID properties:
Atomicity - a transaction either happens completely, or none of it happens.
Consistency - a transaction takes the database from one consistent state to the next.
Isolation - the effects of a transaction may not be visible to other transactions until the transaction has committed.
Durability - once the transaction is committed it is permanent; all changes are written to the redo log first, then to the data files.
Transaction Concurrency Control
Oracle uses locking to ensure data consistency, but the locking is done in the least restrictive fashion, with the goal of maintaining the maximum amount of concurrency. Concurrency problems can be any of the following:
Dirty Reads - occur when a transaction reads data that has been updated by an ongoing transaction but has not been committed permanently to the database; it is possible that the transaction may be rolled back.
Phantom Reads - are caused by the appearance of new data in between two database operations in a transaction.
Lost Updates - are caused by two transactions reading and then updating the same data, so that one transaction's update silently overwrites the other's.
Non-Repeatable Reads - when a transaction finds that data it has read previously has been modified by some other transaction, you have a non-repeatable read or fuzzy read; basically, you read data at one time and it is different when you read it again.
To overcome the above problems you could serialize all the transactions, making sure that data is consistent; however, this does not scale well. Oracle instead provides serializability via isolation levels and the management of undo data.
Isolation Levels
The main isolation levels are the following:
Serializable - the transaction will lock all the tables it is accessing to prevent other transactions updating data until it either rolls back or commits.
Repeatable Read - a transaction that reads the data twice from a table at different points in time will find the same values each time; both dirty reads and non-repeatable reads are avoided with this level of isolation.
Read Uncommitted - allows a transaction to read another transaction's intermediate (uncommitted) values before it commits.
Read Committed - guarantees that the row data won't change while you're accessing a particular row in a table.
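Oracle itself implements read committed (the default) and serializable; a sketch of switching:

set transaction isolation level serializable;
-- or for every transaction in this session:
alter session set isolation_level = serializable;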
Oracle uses locks and a multiversion concurrency control system. It uses row-level locking (it never uses lock escalation), it will automatically place the lock for you, and it stores the lock information in the data block; locks are held until the transaction is either committed or rolled back. Multiversion concurrency is a timestamp-based approach to reading the original data: Oracle writes the original data to an undo record in the undo tablespace, and queries then have a consistent view of the data, which provides read consistency - they only see data from a single point in time. For more information see Oracle locking.
Oracle Locks
There are a number of different locks in Oracle, and a number of tables and views that provide information regarding those locks.
DML Locks - Oracle uses row-level locks to protect a row while it is being changed; the lock will never block a reader of the same row. A table lock is also placed, but this only ensures that no DDL is used on the table.
DDL Locks - when changing table attributes Oracle places an exclusive lock on the table to prevent any modifications to the rows. This type of lock is also used during DML transactions to make sure the table structure is not changed while data is being changed or inserted.
Latches - latches protect the memory structures within the SGA; they control the processes that access the memory areas.
Internal Locks - are used by Oracle to protect access to structures such as data files, tablespaces and rollback segments.
Distributed Locks - are used by OPS to ensure that the different nodes of a cluster remain consistent with each other.
Blocking Locks - occur when a lock is placed on an object by a user to prevent other users accessing the same object.
Deadlocks - occur when two sessions block each other while each waits for a resource that the other session is holding. Oracle always steps in to resolve the issue by rolling back one of the statements; check the alert.log for deadlocks.
Useful Views
DBA_LOCK - lists all locks or latches held in the database, and all outstanding requests for a lock or latch
DBA_WAITERS - shows all the sessions that are waiting for a lock
DBA_BLOCKERS - displays a session if it is not waiting for a locked object but is holding a lock on an object for which another session is waiting
V$LOCK - lists the locks currently held by the Oracle Database and outstanding requests for a lock or latch
V$SESSION - lists session information for each current session
Database Triggers
Example
The pragma directive tells Oracle that this is a new autonomous transaction and that it is independent from its parent. A trigger cannot contain a COMMIT or ROLLBACK statement; however, by using autonomous transactions you can overcome this limitation. It is considered bad practice, but it is possible.
create table tab1 (col1 number);
create table log (timestamp date, operation varchar2(2000));
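The trigger itself would look something like the sketch below, using the two tables above (the trigger name and log message are illustrative):

create or replace trigger tab1_trig
after insert on tab1
declare
  pragma autonomous_transaction;   -- independent of the parent transaction
begin
  insert into log values (sysdate, 'insert on tab1');
  commit;   -- allowed here only because the trigger body is autonomous
end;
/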
Undo Data
Undo data provides read consistency. There are two ways to control undo: manual or automatic. See undo data for more details.
Oracle Transaction
Simple Oracle transaction
1. User requests a connection to Oracle.
2. A new dedicated server process is started for the user.
3. User executes a statement to insert data into a table.
4. Oracle checks the user's privileges; it first checks the dictionary cache (cache hit) for the information, and if it is not found there retrieves it from disk.
5. Oracle checks to see if the SQL statement has been parsed before (library cache); if it has then this is called a soft parse, otherwise the code has to be compiled - a hard parse.
6. Oracle creates a private SQL area in the user session's PGA.
7. Oracle checks to see if the data is in the buffer cache, otherwise it performs a read from the data file.
8. Oracle then applies row-level locks where needed to prevent others changing the row (select statements are still allowed on the row).
9. Oracle then writes the change vectors to the redo log buffer.
10. Oracle then modifies the row in the data buffer cache.
11. The user commits the transaction, making it permanent; the row-level locks are released.
12. The log writer process immediately writes out the changed data in the redo log buffer to the online redo log files; in other words, the change is now recoverable.
13. Oracle informs the user process that the transaction completed successfully.
14. It may be some time before the database buffer cache writes out the change to the data files.
Note: if the user's transaction was an update, then the before-update version of the row would have been written to undo; this would be used if the user rolls back the change, or if another user runs a select on that data before the new update is committed.
Pessimistic Locking
When a user queries some data and picks a row to change, the statement below is used:
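A sketch of such a statement (the emp table and bind variables are illustrative):

select empno, ename, sal
from   emp
where  empno = :empno
and    ename = :ename
and    sal   = :sal
for update nowait;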
what the "for update nowait" statement does is to lock the row against updates by
other sessions. This is why this approach is called pessimistic locking. We lock the
row before we attempt to update it because we doubt that the row will remain
unchanged otherwise. We'll get three outcomes from this statement:
if the underlying data has not changed, we get our row back and this row will
be locked from updates by others (but not reads).
if another user is in the process of modifying that row, we will get an ORA-00054 resource busy error. We are blocked and must wait for the other user to finish with it.
if, in the time between selecting the data and indicating our intention to update,
someone has already changed the row then we will get zero rows back. The
data will be stale. The application needs to re-query the data and lock it before
allowing the end user to modify any of the data in order to avoid a lost update
scenario.
Optimistic Locking
Optimistic locking is to keep the old and new values in the application and, upon updating the data, use an update like the one below:
update table
set column1 = :new_column1, column2 = :new_column2, ....
where column1 = :old_column1
and column2 = :old_column2
...
We are optimistically hoping that the data has not changed; if we are lucky the row is updated, if not we update zero rows, and then we have two options: get the user to re-key the data, or try to merge the data (lots of code to do this).
So the best method in Oracle would be to use pessimistic locking, as the user can have confidence that the data they are modifying on the screen is currently owned by them - in other words, the row is checked out and nobody can modify it. If you are thinking "what if the user walks away while the row is locked?", it would be better to get the application to release the lock, or to use resource profiles in the database to time out idle sessions. Remember that even when a row is locked you can still read it; reading is never blocked in Oracle.
Blocked Inserts
The only time an INSERT will block is when you have a table with a primary key or unique constraint placed on it and two sessions simultaneously attempt to insert a row with the same value. This is mostly avoided via the use of Oracle sequences in the generation of primary keys, as they are a highly concurrent method of generating unique keys in a multi-user environment.
Blocked Updates and Deletes
To avoid update and delete blocking, use either of the two locking methods, pessimistic or optimistic.
Deadlocks
Deadlocks occur when two sessions each hold a resource that the other wants. Oracle records all deadlocks in a trace file. The number one cause of deadlocks is un-indexed foreign keys:
if you update the parent table's primary key, the child table will be locked in the absence of an index
if you delete a row in the parent table, the entire child table will be locked, again in the absence of an index
Lock Escalation
In other RDBMSs, when a user locks around 100 rows (this may vary) the lock is escalated to a table lock; however, Oracle never escalates a lock. Oracle does practice lock conversion (also known as lock promotion; the terms are synonymous).
If a user selects a row using FOR UPDATE, two locks are placed: an exclusive lock on the row and a ROW SHARE lock on the table itself. This prevents other users placing an exclusive lock on the table, and thus prevents them from altering the table structure.
Type of Locks
There are a number of different types of locks, as listed below:
DML Locks - DML (data manipulation language), in general SELECT,
INSERT, UPDATE and DELETE. DML locks will be locks on a specific row
of data, or a lock at the table level, which locks every row in the table.
DDL locks - DDL (data definition language), in general CREATE, ALTER and
so on. DDL locks protect the definition of the structure of objects.
Internal locks and latches - These are locks that Oracle uses to protect its
internal data structure.
Distributed Locks - These are used by OPS to ensure that different nodes are
consistent with each other.
Deadlocks - Occurs when two sessions block each other while each waits for a
resource that the other session is holding.
PCM - PCM (Parallel Cache Management) These are locks that protect one or
more cached data blocks in the buffer cache across multiple instances, also used
in OPS.
DML Locks
There are two main types of DML locks: TX (Transaction) and TM (DML Enqueue). A
TX lock is acquired when a transaction initiates its first change and is held until the
transaction performs a COMMIT or ROLLBACK. It is used as a queuing mechanism
so that other sessions can wait for the transaction to complete. A TM lock is used to
ensure that the structure of the table is not altered while you are modifying its
contents.
The complete set of DML table lock modes is:
Row Share - permits concurrent access to the table but prohibits others from locking the table for exclusive access
Row Exclusive - the same as Row Share but also prohibits locking in share mode; taken automatically by INSERT, UPDATE and DELETE
Share - permits concurrent queries but prohibits updates to the locked table
Share Row Exclusive - prevents others from locking in share mode or updating the rows in the whole table
Exclusive - permits queries on the table but prohibits any other activity on it
Below are views that can be used to identify locks, transaction IDs, etc; the following code can be used to obtain this information.
Useful SQL
select username,
v$lock.sid,
trunc(id1/power(2,16)) rbs,
bitand(id1, to_number('ffff', 'xxxx'))+0 slot,
id2 seq,
lmode,
request
from v$lock, v$session
where v$lock.type = 'TX'
and v$lock.sid = v$session.sid
and v$session.username = USER;
Using NOWAIT
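Sketches of the usual forms (table, key value and wait time are illustrative):

lock table emp in exclusive mode nowait;
select * from emp where empno = 7369 for update wait 5;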
Note: the above commands will abort if the lock is not released in the specified time.
Useful Views
V$TRANSACTION - lists the active transactions in the system
V$SESSION - lists session information for each current session
V$LOCK - lists the locks currently held by the Oracle Database and outstanding requests for a lock or latch
V$LOCKED_OBJECT - lists all locks acquired by every transaction on the system; it shows which sessions are holding DML locks (TM-type enqueues) on what objects and in what mode
DBA_LOCK - lists all locks or latches held in the database, and all outstanding requests for a lock or latch
DBA_BLOCKERS - displays a session if it is not waiting for a locked object but is holding a lock on an object for which another session is waiting
DBA_DDL_LOCKS - lists all DDL locks held in the database and all outstanding requests for a DDL lock
DBA_DML_LOCKS - lists all DML locks held in the database and all outstanding requests for a DML lock
DDL Locks
DDL locks are automatically placed against objects during a DDL operation to protect them from changes by other sessions.
There are three types of DDL locks:
Exclusive DDL Locks - these prevent other sessions from gaining a DDL lock or TM lock themselves. You can query a table but not modify it. Exclusive locks normally lock the object until the statement has finished; however, in some cases you can use the ONLINE option, which only takes a low-level lock - this still blocks other DDL operations but allows DML to occur normally.
Share DDL Locks - these protect the structure of the referenced object against modification by other sessions, but allow modification of the data; shared DDL locks allow you to modify the contents of a table but not its structure.
Breakable Parse Locks - these allow an object, such as a query plan cached in the shared pool, to register its reliance on some other object. If you perform DDL against that object, Oracle will review the list of objects that have registered their dependence and invalidate them. Hence these locks are breakable; they do not prevent the DDL from occurring. Breakable parse locks are taken when a session parses a statement: a parse lock is taken against every object referenced by that statement. These locks are taken in order to allow the parsed, cached statement to be invalidated (flushed) from the shared pool if a referenced object is dropped or altered in some way. Use the SQL below to identify any parse locks on views, procedures, grants, etc.
Identify Locks
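A sketch of such a query against DBA_DDL_LOCKS:

select session_id, owner, name, type, mode_held
from   dba_ddl_locks
order  by session_id;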
Latches and Enqueues
Latches are lightweight, low-level serialization mechanisms that protect the in-memory data structures of the SGA, structures such as the database block buffer cache or the library cache in the shared pool. They do not support queuing and do not protect database objects such as tables or data files.
Enqueues are another, more sophisticated, serialization device, used for example when updating rows in a database table. The requestor will queue up and wait for the resource to become available; hence they are not as fast as a latch.
It is possible to lock manually by using the FOR UPDATE clause or the LOCK TABLE statement, or you can create your own locks with the DBMS_LOCK package.
Deadlocks
Occurs when two sessions block each other while each waits for a resource that the
other session is holding.
Multi-versioning
Oracle operates a multi-version read-consistent concurrency model. Oracle provides:
Read-consistent queries: Queries that produce consistent results with respect
to a point in time by using rollback segments.
Non-Blocking queries: Queries are never blocked by writers of data, as they
would be in other databases.
When Oracle reads a table, it uses the rollback segment entries for any rows whose data has changed since the read began. This allows a point-in-time read of a table, and it also allows Oracle not to lock a table while reading it, even for large tables.
Transaction and Row Locks
I am now going to describe in detail how a lock works; you don't need this detail, but it is good to understand what is going on under the covers. Row-level locks protect selected rows in a data block during a transaction; a transaction acquires an enqueue and an exclusive lock for each individual row modified by one of the following:
Insert
Delete
Update
Select with for update
These locks are stored within the data block, and each lock refers to the transaction enqueue; as they are stored in the block they have a database-wide view. The lock is held until either a commit or a rollback is executed; SMON also acquires it in exclusive mode when recovering (undoing) a transaction. Transaction locks are used as a queuing mechanism for processes awaiting the release of an object locked by a transaction.
Every data block (except for temp and rollback segments) has a number of predefined transaction slots. Undo segments have a different type of transaction slot called a transaction table. The transaction slots are otherwise known as interested transaction lists (ITLs) and are controlled by the INITRANS parameter (the default is 2 for tables and 3 for indexes). A transaction slot uses 24 bytes of free space in the block, and the maximum number of transaction slots is controlled by the MAXTRANS parameter; however, you can only use up to 50% of the block for transaction slots.
An ITL slot is required for every transaction; it contains the transaction ID (XID), which is a pointer to an entry in the transaction table of a rollback segment. You can still read the data, but other processes wanting to change the data must wait until the lock is released (commit or rollback). The ITL entry contains the XID, the undo byte address (UBA) information, flags indicating the transaction status (Flag), a lock count (Lck) showing the number of rows locked by this transaction within the block, and the SCN at which the transaction last updated the block. Basically, the XID identifies the undo information about that transaction.
You can use the view X$KTUXE to obtain information on the number of rows that are affected; the view V$TRANSACTION can also be used to get more details on a transaction.
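For example, a sketch listing the active transactions:

select xidusn, xidslot, xidsqn, status, used_ublk, used_urec
from   v$transaction;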
When the transaction completes, Oracle performs the bare minimum to commit the transaction: it updates the flag in the transaction table, and the block is not revisited. This is known as a fast commit; during this time the ITL in the data block is still pointing at the transaction table of the corresponding rollback segment. If another transaction wants to use the block (presuming the change has not been committed), it sees that the ITL points to a rollback segment, makes a copy of the block in memory, gets the UBA from the ITL, reads the data from the undo and uses it to roll back the change described by the undo. If the transaction is committed, the rows are no longer locked, but the lock byte in the row header is not cleared until the next time a DML action is performed on the block. The block cleanout is therefore delayed by some discrete time interval because of the fast commit; this is called delayed block cleanout. The cleanout operation closes the open ITLs and generates redo information, as a block cleanout may involve updating the block with a new SCN; this is why you sometimes see redo generation for some select statements.
Redo
All the changes made to the database are recorded in the redo log files; these files, along with any archived redo logs, enable a DBA to recover the database to any point in the past. Oracle will write all committed changes to the redo logs first, before applying them to the data files. The redo logs guarantee that no committed changes are ever lost. Redo log files consist of redo records, which are groups of change vectors, each referring to specific changes made to a data block in the database. The changes are first kept in the redo buffer but are quickly written to the redo log files.
There are two types of redo log files: online and archived. Oracle uses the concept of groups, and a minimum of 2 groups is required, each group having at least one file. They are used in a circular fashion: when one group fills up, Oracle will switch to the next log group.
The LGWR process writes redo information from the redo buffer to the online redo logs when:
a user commits a transaction
the redo log buffer becomes 1/3 full
every 3 seconds
A redo log group or file can be in one of the following states:
Active - the files in the log group are required for instance recovery
Inactive - the files in the log group are not required for instance recovery and can be overwritten
Unused - the log group has never been written to
Stale - the contents of the log file are incomplete
Deleted - the log file is no longer in use
Renaming a log file in an existing group:
1. shutdown the database
2. rename the file at the operating system level
3. startup the database in mount mode
4. alter database rename file 'old name' to 'new name';
5. open the database
6. backup the controlfile
Maintaining
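Typical maintenance commands (group numbers, file names and sizes are illustrative):

alter database add logfile group 4 ('/u01/oradata/redo04a.log') size 50m;
alter database add logfile member '/u02/oradata/redo04b.log' to group 4;
alter database drop logfile group 4;
alter system switch logfile;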
Useful Views
V$LOG - displays log file information from the control file
V$LOGFILE - contains information about the redo log files
Archived Logs
When a redo log file fills up, and before it is used again, the file is archived for safe keeping; this archive file together with the other redo log files can recover a database to any point in time. It is best practice to turn on ARCHIVELOG mode, which performs the archiving automatically.
The log files can be written to a number of destinations (up to 10 locations), even to a standby database; using the parameters log_archive_dest_n and log_archive_min_succeed_dest you can control how Oracle writes its archived log files.
Configuration
alter system set log_archive_dest_1 = 'location=c:\oracle\archive' scope=spfile;
alter system set log_archive_format = 'arch_%d_%t_%r_%s.log' scope=spfile;
Enabling
shutdown the database
startup the database in mount mode
alter database archivelog;
alter database open;
Archive format options
%r - resetlogs ID (required parameter)
%s - log sequence number (required parameter)
%t - thread number (required parameter)
%d - database ID (not required)
Disabling and displaying
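A sketch of the disable sequence (mirroring the enable steps above) and of displaying the current archive configuration:

-- disabling
shutdown immediate;
startup mount;
alter database noarchivelog;
alter database open;
-- displaying
archive log list;
select log_mode from v$database;
select archiver from v$instance;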
Useful Views
V$ARCHIVED_LOG - displays archived log information from the control file
V$INSTANCE - shows the state of the current instance, including the archiver status
V$DATABASE - contains database information from the control file, including the log mode
I have a more detailed section on redo in my Data Guard section called Redo
Processing.
Undo Data
Undo data provides read consistency. Oracle provides two ways to allocate and manage undo (rollback) space among transactions: if you use the manual approach you will be using traditional rollback segments, but it is easier to let Oracle control the rollback segments automatically, which is called AUM (automatic undo management). The only part on the DBA side is to size the undo tablespace; Oracle will then automatically create the undo segments within the tablespace.
Using AUM you can take advantage of flashback recovery, flashback query, flashback versions query, flashback transaction query and flashback table - see flashback for further details.
There are three parameters associated with AUM:
UNDO_MANAGEMENT (default manual) - the only mandatory parameter; can be set to either auto or manual.
UNDO_TABLESPACE - specifies the tablespace to be used, which of course needs to be an undo tablespace. If you do not supply a value Oracle will automatically pick the one available; if no undo tablespace exists then Oracle will use the system tablespace, which is not a good idea (always create one).
UNDO_RETENTION (seconds) - once a transaction commits, the undo data for that transaction stays in the undo tablespace until space is required, in which case it will be overwritten; this parameter sets how long Oracle attempts to retain it.
When a transaction commits, the undo data is not required anymore; it will however stay in the undo tablespace unless space is required, in which case newer transactions will overwrite it. During a long-running query that needs to retain older undo data for consistency purposes, there is a possibility that some data it needs has been overwritten by other, newer transactions; this produces the "snapshot too old" error message, which indicates that the before image has been overwritten. To prevent this Oracle uses the undo_retention system parameter, which tries to keep the data in the undo tablespace for as long as possible to meet the undo_retention target; however, this is not guaranteed.
Undo data can be in 3 states:
uncommitted undo information - supports a currently running transaction; it is never overwritten
committed undo information, also known as unexpired undo - required to support the undo_retention interval; it can be overwritten after the undo_retention period, or earlier under undo tablespace space pressure, unless the guarantee option is set (see below)
expired undo information - committed undo that is no longer needed to satisfy undo_retention; it can always be overwritten
There are times when you want to guarantee the undo retention at any cost, even if it means transactions fail; the retention guarantee option ensures that the data will stay in the undo tablespace until the interval has expired, even if there are space pressure problems in the undo tablespace. The default is not to set the guarantee retention period.
I have noticed on my travels that once undo has expired it is no longer available, even if the undo tablespace is not under any space pressure; the only way to keep it is to increase the undo_retention parameter. You can prove this by checking the oldest undo data available via the dba_hist_undostat view: the oldest data will match the undo_retention period you set via the undo_retention parameter.
Undo Sizing
How much undo data you want to keep determines the size of the undo tablespace; a simple formula for calculating the undo tablespace size is:
undo tablespace size = UR * UPS * DB_BLOCK_SIZE
where UR is the undo retention time in seconds and UPS is the number of undo blocks generated per second.
Oracle Enterprise Manager can take the desired time period for undo retention and analyse the impact of the desired undo retention setting.
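UPS can be estimated from V$UNDOSTAT (a sketch). For example, with UR = 3600 seconds, a peak UPS of 100 blocks/second and an 8KB block size, the tablespace would need roughly 3600 * 100 * 8192 bytes, about 2.8GB:

-- peak undo blocks generated per second (end_time - begin_time is in days)
select max(undoblks / ((end_time - begin_time) * 86400)) ups
from   v$undostat;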
Undo Commands
Management - set the undo management mode (auto or manual)
Setting retention - set the undo retention target
Creating - create an undo tablespace
Removing - drop an undo tablespace
Guarantee - guarantee the retention period
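Sketches of the corresponding commands (tablespace names, file paths and sizes are illustrative):

alter system set undo_management = auto scope=spfile;
alter system set undo_retention = 3600;
create undo tablespace undotbs2 datafile '/u01/oradata/undotbs02.dbf' size 500m;
alter system set undo_tablespace = undotbs2;
drop tablespace undotbs1 including contents and datafiles;
alter tablespace undotbs2 retention guarantee;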
Useful Views
DBA_ROLLBACK_SEGS - describes the rollback segments
DBA_TABLESPACES - describes all tablespaces in the database
DBA_UNDO_EXTENTS - describes the extents comprising the segments in all undo tablespaces in the database
DBA_HIST_UNDOSTAT - displays the history of histograms of statistical data to show how well the system is working. The available statistics include undo space consumption, transaction concurrency, and the length of queries executed in the instance. This view contains snapshots of V$UNDOSTAT.
V$UNDOSTAT - displays a histogram of statistical data to show how well the system is working. The available statistics include undo space consumption, transaction concurrency, and the length of queries executed in the instance. You can use this view to estimate the amount of undo space required for the current workload; Oracle also uses this view to tune undo usage in the system. The view returns null values if the system is in manual undo management mode.
V$ROLLNAME - lists the names of all online rollback segments. It can only be accessed when the database is open.
V$ROLLSTAT - contains rollback segment statistics
V$TRANSACTION - lists the active transactions in the system
Flashback
FlashBack Architecture
There are a number of flashback levels:
row level - flashback query, flashback versions query, flashback transaction query
table level - flashback table, flashback drop
database level - flashback database
Oracle 10g has several error-correction techniques that use undo data; however, they are only available if you use automatic undo management (AUM):
Flashback query - retrieves data from a past point in time.
Flashback versions query - shows you the different versions of data rows, plus the start and end times of the particular transaction that created each row.
Flashback transaction query - lets you retrieve historical data for a given transaction and the SQL code to undo the transaction.
Flashback table - recovers a table to its state at a past point in time, without having to perform a point-in-time recovery.
There are two other flashback technologies that do not use the undo data; they use flashback logs and the recyclebin instead:
Flashback database - restores the whole database back to a point in time.
Flashback drop - allows you to reverse the effects of a drop table statement, without resorting to a point-in-time recovery.
DBMS_FLASHBACK, flashback table, flashback transaction query, flashback versions query and SELECT ... AS OF ... statements all use the undo segments; flashback database uses the flashback logs and flashback drop uses the recycle bin.
When using flashback, if any operation violates a constraint, the flashback operation will be rolled back; you can disable constraints, but that is probably not a good idea. If a table has a foreign key it is a good idea to flashback both tables. Flashback requires a lock on the whole table; if it cannot obtain one it will fail immediately.
RMAN can only do flashback database and no other flashback technology.
Flashback Query
Using flashback query involves using a SELECT statement with an AS OF clause, so you can select data from a past point in time. If you get an ORA-08180 it means that the data is no longer available in the undo segments.
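For example (the emp table and interval are illustrative):

select * from emp
as of timestamp (systimestamp - interval '15' minute)
where empno = 7369;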
Privilege - flashback query requires the FLASHBACK object privilege on the table (or FLASHBACK ANY TABLE).
Note: using a timestamp will put you within about 3 seconds of the actual point in time; if you need to be dead accurate, use an SCN.
Reinserting - rows can be reinserted with an INSERT ... SELECT ... AS OF statement.
Supplemental logging - enable with: alter database add supplemental log data;
Flashback transaction query
Note: this will give you the SQL to reverse the change that was applied to the data.
You use flashback versions query to obtain the XID, and then use the XID in the flashback transaction query statement to obtain the SQL to undo the change:
select xid, start_scn, commit_scn, operation, logon_user, undo_sql
from flashback_transaction_query
where xid=hextoraw('0003002F00038BA9');
XID: 0003002F00038BA9
START_SCN: 195243
COMMIT_SCN: 195244
OPERATION: delete
LOGON_USER: vallep
UNDO_SQL: insert into H... ('EMPNO', 'EM... values ('222'...
Flashback Table
There are two distinct table-related flashback features in Oracle: flashback table, which relies on undo segments, and flashback drop, which relies on the recyclebin, not the undo segments.
Flashback table lets you recover a table to a previous point in time; you don't have to take the tablespace offline during the recovery. Oracle acquires exclusive DML locks on the table or tables that you are recovering, but the tables continue to be online.
When using flashback table, Oracle does not preserve ROWIDs when it restores the rows in the changed data blocks of the tables, since it uses DML operations to perform its work; you must have enabled row movement in the tables that you are going to flashback (only flashback table requires you to enable row movement).
If the data is not in the undo segments then you cannot recover the table by using flashback table; however, you can use other means to recover the table.
Restrictions on flashback table recovery:
You cannot use flashback table on SYS objects
You cannot flashback a table that has had preceding DDL operations on it, like table structure changes, dropping columns, etc
The flashback operation must entirely succeed or it fails; if flashing back multiple tables, all tables must be flashed back or none
Any constraint violations will abort the flashback operation
You cannot flashback a table that has had any shrink or storage changes (pctfree, initrans and maxtrans)
Privilege - flashback table requires the FLASHBACK object privilege on the table (or FLASHBACK ANY TABLE) plus the relevant DML privileges.
Enable triggers
Note: Oracle disables triggers by default when flashing back a table; use the ENABLE TRIGGERS clause to keep them enabled.
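A sketch of a flashback table operation (table name, interval and SCN are illustrative):

alter table emp enable row movement;
flashback table emp to timestamp (systimestamp - interval '15' minute);
flashback table emp to scn 195243 enable triggers;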
Flashback Drop
Flashback drop lets you reinstate previously dropped tables exactly as they were before the drop; below is a list of what is kept where when a table is dropped:
Recyclebin: tables and indexes
Data dictionary: unique keys, primary key, not-null constraints, triggers and
grants
Not recovered: foreign key constraints
If two tables exist in the recyclebin with the same name the newest one will be
restored unless you state which one you want to restore. If you restore a table it is
removed from the recyclebin.
Recover - use flashback table ... to before drop (see the sketch below)
truncate table - a truncated table will not be in the recyclebin
drop user - dropping a user will not store anything in the recyclebin
purge recyclebins - see the purge commands below
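Sketches of the recover and purge commands (table names are illustrative):

flashback table emp to before drop;
flashback table emp to before drop rename to emp_restored;
purge recyclebin;        -- the current user's recyclebin
purge dba_recyclebin;    -- all recyclebins
purge table emp;         -- a single dropped table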
Naming Convention
BIN$globalUID$version
Space pressure in a tablespace will cause Oracle to purge the recyclebins of the users within that tablespace, on a FIFO basis. When a tablespace has the auto-extend feature turned on, it will clear down the recyclebin first and only then auto-extend.
Limitations on flashback drop:
Recyclebin is only available to non-system, locally managed tablespaces.
There is no guaranteed timeframe for how long an object will be stored in the
recyclebin
DML and DDL cannot be used on objects in the recyclebin
Must use the recyclebin name to query the table
All dependent objects are retrieved when you perform a flashback drop.
Virtual private database (VPD) and FGA policies defined on tables are not
protected for security reasons
Partitioned index-organised tables are not protected by the recycle bin.
Referential constraints are not protected by the recycle bin; they must be re-created after the table has been rebuilt.
Flashback Database
The database can be taken back in time by reversing all work done sequentially. The database must be opened with RESETLOGS, as if an incomplete recovery had happened. This is ideal if you have a database corruption (wrong transaction, etc) and require the database to be rewound to before the corruption occurred. If you have media or a physical problem, a normal recovery is required.
Flashback database is not enabled by default. When flashback database is enabled, a process (RVWR, recovery writer) copies modified blocks to the flashback buffer; this buffer is then flushed to disk (the flashback logs). Remember that flashback logging is not a log of changes but a log of complete block images. Not every changed block is logged, as this would be too much for the database to cope with; only as many blocks are copied as can be done without impacting performance. Flashback database will construct a version of the data files that is just before the time you want. The data files will probably be in an inconsistent state, as different blocks will be at different SCNs; to complete the flashback process, Oracle then uses the redo logs to recover all the blocks to the exact time requested, thus synchronizing all the data files to the same SCN. Archivelog mode must be enabled to use flashback database. An important note to remember is that flashback can only take the database backwards in time; it can never reverse a change and then redo it.
The advantage in using flashback database is speed and convenience with which you
can take the database back in time.
You can use RMAN, SQL or Enterprise Manager to flashback a database. If the flash recovery area does not have enough room the database will continue to function, but flashback operations may fail. It is not possible to flashback a single tablespace; you must flashback the whole database. If performance is being affected by flashback data collection, turn flashback off for some tablespaces.
You cannot undo a resized data file to a smaller size. When using BACKUP RECOVERY AREA and BACKUP RECOVERY FILES, the control files, redo logs, permanent files and flashback logs will not be backed up.
Enabling, monitoring and the flashback buffer
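A sketch of the usual sequence (the retention target, in minutes, is illustrative):

alter system set db_flashback_retention_target = 1440;
shutdown immediate;
startup mount;
alter database flashback on;
alter database open;
-- monitoring
select flashback_on from v$database;
select oldest_flashback_scn, oldest_flashback_time from v$flashback_database_log;
-- the flashback buffer appears in the SGA (the pool name varies by version)
select * from v$sgastat where name like '%flashback%';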
An example of flashing the database back, checking the result, and retrying:
startup mount;
flashback database to timestamp to_timestamp('15-02-07 10:00:00', 'dd-mm-yy hh24:mi:ss');
alter database open read only;   -- check the schema is OK; if so open the database, if not continue
shutdown abort;
startup mount;
flashback database to timestamp to_timestamp('15-02-07 10:02:00', 'dd-mm-yy hh24:mi:ss');
alter database open read only;   -- check the schema again
When happy:
alter database open resetlogs;
Note: if one or more tablespaces are not generating flashback data, then before carrying out a flashback operation the files making up those tablespaces must be taken offline. Offline files are ignored by recover and flashback; remember that you must recover these files to the same point as the flashback, otherwise the database will not open.
Flashback Recovery Area
The alert log and DBA_OUTSTANDING_ALERTS hold status information regarding the flash recovery area. You can back the area up with the RMAN commands BACKUP RECOVERY AREA and BACKUP RECOVERY FILES. Controlfiles and redo logs are permanently stored in the flash recovery area.
Monitoring
select * from v$recovery_file_dest;
Note: this details space_limit, space_used, space_reclaimable and the number of files
One note is that you can only restore back to the restore point itself; you cannot restore back to an arbitrary point in time using restore points - for that you must use the backups and archived logs to do a point-in-time recovery.
Not guaranteed
create - create restore point test;
remove - drop restore point test;
Guaranteed
create - create restore point test_guarantee guarantee flashback database;
remove - drop restore point test_guarantee;
Other Operations
Resumable Space
If you were running a long batch program and the tablespace ran out of space, this would cause an error; you would then increase the amount of space in the tablespace and rerun your job, which could take quite a bit of time.
Oracle's resumable space allocation will suspend a running job that has run into problems due to lack of space and will automatically continue it when the space issue has been fixed. You can make all operations run in resumable space allocation mode by using an ALTER SESSION command. The following database operations are resumable:
Queries - they can always be resumed after the temporary tablespace has run
out of space.
DML Operations - insert, delete and update can all be resumed
DDL Operations - index operations involving creating, rebuilding and altering
are resumable as are create table as select operations
Import and export operations - SQL*Loader jobs are resumable, but you must use the resumable parameter in the SQL*Loader job.
You can resume operations that fail with the following types of errors:
out of space - typical error message is ORA-01653
maximum extents errors - typical error message is ORA-01628
user space quota errors - typical error message is ORA-01536
Privilege
grant resumable to vallep;
grant execute on dbms_resumable to vallep;
Timeout
resumable_timeout = 7200;
execute dbms_resumable.set_timeout(18000);
Note: time is in seconds
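A sketch of enabling resumable mode for a single session (the timeout and name are illustrative):

alter session enable resumable timeout 7200 name 'nightly batch load';
alter session disable resumable;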
Useful Views
DBA_SYS_PRIVS - describes the system privileges granted to users and roles
DBA_RESUMABLE - lists all resumable statements executed in the system
V$SESSION - lists session information for each current session