Background Processes in Oracle
A background process is defined as any process that is listed in V$PROCESS and has a non-null value in the PNAME column.
Not all background processes are mandatory for an instance; some are mandatory and some are optional. The mandatory background processes are DBWn, LGWR, CKPT, SMON, PMON, and RECO. All other processes are optional and are invoked only if the corresponding feature is activated.
Oracle background processes are visible as separate operating system processes in
Unix/Linux. In Windows, these run as separate threads within the same service. Any issues
related to background processes should be monitored and analyzed from the trace files
generated and the alert log.
Background processes are started automatically when the instance is started.
To find out the background processes from the database:
SQL> select SID,PROGRAM from v$session where TYPE='BACKGROUND';
To find out the background processes from the OS:
$ ps -ef|grep ora_|grep SID
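Another way to list them, assuming access to the dynamic performance views, is to query V$BGPROCESS; filtering on a non-null PADDR is a common way to show only the background processes that are actually running:
SQL> select name, description from v$bgprocess where paddr <> '00';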
1) Database Writer (DBWn)
Whenever a log switch occurs and a redo log file moves from CURRENT to ACTIVE status, Oracle calls DBWn to synchronize all the dirty blocks in the database buffer cache to the respective datafiles, scattered or randomly.
The database writer (or dirty buffer writer) process performs multi-block writes to disk asynchronously. One DBWn process is adequate for most systems. Multiple database writers can be configured with the initialization parameter DB_WRITER_PROCESSES, depending on the number of CPUs allocated to the instance. Having more than one DBWn only makes sense if each DBWn has been allocated its own list of blocks to write to disk. This is done through the initialization parameter DB_BLOCK_LRU_LATCHES. If this parameter is not set correctly, multiple DB writers can end up contending for the same block list.
In RAC, the multiple DBWn processes must be coordinated through the locking and global cache processes to ensure that processing is accomplished efficiently.
DBWn is also invoked in the following situations:
When the database is shut down with dirty blocks in the SGA, Oracle calls DBWn.
DBWn has a timeout value (3 seconds by default) and wakes up whether there are any dirty blocks or not.
When a server process cannot find a clean reusable buffer after scanning a threshold number of buffers, it signals DBWn to write.
When a large table needs to be brought into the SGA and Oracle cannot find enough free space, it decides to flush out LRU blocks, some of which happen to be dirty blocks. Before flushing out those dirty blocks, Oracle calls DBWn.
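To see how many database writers are configured and how busy they have been, something like the following can be run (the exact statistic names vary slightly by version):
SQL> show parameter db_writer_processes
SQL> select name, value from v$sysstat where name like 'DBWR%';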
2) Log Writer (LGWR)
LGWR writes redo data from the redo log buffer to the (online) redo log files, sequentially. A redo log file contains changes made to any datafile; the content of a redo record is essentially the file id, block id and the new content.
LGWR is invoked more often than DBWn because log files are really small compared to datafiles (KB vs GB). For every small update we don't want to open huge, gigabyte-sized datafiles; instead we write to the log file.
A redo log file cycles through three stages: CURRENT, ACTIVE and INACTIVE. A newly created redo log file will be in the UNUSED state.
When LGWR is writing to a particular redo log file, that file is said to be in CURRENT status. If the file fills up completely, a log switch takes place and LGWR starts writing to the next file (this is the reason every database requires a minimum of two redo log groups). The file that has just filled up moves from CURRENT to ACTIVE.
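These statuses can be observed directly, and a log switch can be forced to watch them change (assuming suitable privileges):
SQL> select group#, sequence#, status from v$log order by group#;
SQL> alter system switch logfile;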
Log writer will write synchronously to the redolog groups in a circular fashion. If any
damage is identified with a redolog file, the log writer will log an error in the LGWR trace file
and the alert log. Sometimes, when additional redolog buffer space is required, the LGWR
will even write uncommitted redolog entries to release the held buffers. LGWR can also use
group commits (multiple committed transactions' redo entries taken together) to write to
redologs when a database is undergoing heavy write operations.
In RAC, each instance has its own LGWR process that maintains that instance's thread of redo logs.
LGWR is also invoked when DBWn signals the writing of redo records to disk: all redo records associated with changes in the block buffers must be written to disk first (the write-ahead protocol). While writing dirty buffers, if the DBWn process finds that some redo information has not yet been written, it signals LGWR to write that information and waits until control is returned.
3) Checkpoint (CKPT)
Checkpoint (CKPT) is a background process that triggers the checkpoint event, to synchronize all database files with the checkpoint information. It ensures data consistency and faster database recovery in case of a crash.
When a checkpoint occurs, CKPT invokes DBWn and updates the headers of all the datafiles and the control file with the current SCN. This SCN is called the checkpoint SCN.
A checkpoint event can occur in the following conditions:
Manual checkpoint:
SQL> ALTER SYSTEM CHECKPOINT;
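The checkpoint SCN recorded in the control file and in the datafile headers can be compared with a couple of simple queries (run as a privileged user):
SQL> select checkpoint_change# from v$database;
SQL> select distinct checkpoint_change# from v$datafile_header;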
4) System Monitor (SMON)
If the database crashed (for example, due to a power failure), then on the next startup SMON observes that the database was not shut down gracefully and that some recovery is required; this is known as instance (crash) recovery. While performing crash recovery, before the database is completely open, any transaction that was committed but whose changes are not yet in the datafiles is applied from the redo log files to the datafiles.
If SMON observes an uncommitted transaction that has already updated a table in the datafile, it is treated as an in-doubt transaction and rolled back with the help of the before image available in the rollback segments.
SMON also cleans up temporary segments that are no longer in use.
In a RAC environment, the SMON process of one instance can perform instance recovery for other instances that have failed.
SMON wakes up about every 5 minutes to perform housekeeping activities.
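As a rough indicator of how much work a crash recovery would currently involve, the estimated recovery effort can be checked (this reflects checkpointing activity rather than SMON itself):
SQL> select target_mttr, estimated_mttr, recovery_estimated_ios from v$instance_recovery;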
5) Process Monitor (PMON)
If a client has an open transaction but the session is no longer active (the client session was closed), PMON comes into the picture and that orphaned transaction is rolled back.
PMON is responsible for performing recovery when a user process fails. It rolls back uncommitted transactions, and any resources locked by the old session are released by PMON.
PMON is responsible for cleaning up the database buffer cache and freeing resources that
were allocated to a process.
PMON also registers information about the instance and dispatcher processes with Oracle
(network) listener.
PMON also checks the dispatcher & server processes and restarts them if they have failed.
PMON wakes up every 3 seconds to perform housekeeping activities.
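The listener registration mentioned above normally happens automatically, but it can be forced, which prompts the instance to register with the listener immediately (in 12c and later this work is handled by the LREG process rather than PMON):
SQL> alter system register;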
In RAC,
Recoverer (RECO)
This process is intended for recovery in distributed databases. The distributed transaction
recovery process finds pending distributed transactions and resolves them. All in-doubt
transactions are recovered by this process in the distributed database setup. RECO will
connect to the remote database to resolve pending transactions.
Pending distributed transactions are two-phase commit transactions involving multiple databases. The database where the transaction started is normally the coordinator. It sends a request to the other databases involved in the two-phase commit asking whether they are ready to commit. If a negative reply is received from one of the other sites, the entire transaction is rolled back; otherwise, the distributed transaction is committed on all sites. However, there is a chance that an error (network related or otherwise) leaves the two-phase commit transaction in a pending state (i.e. neither committed nor rolled back). It is the role of the RECO process to liaise with the coordinator to resolve the pending two-phase commit transaction: RECO will either commit or roll back this transaction.
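Pending distributed transactions can be inspected in the DBA_2PC_PENDING view; if RECO cannot resolve one (for example, the remote site is gone for good), a DBA can force the outcome using the local transaction id reported there:
SQL> select local_tran_id, state, fail_time from dba_2pc_pending;
SQL> commit force '1.23.456';   -- the transaction id here is only an example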
Archiver (ARCn)
The number of archiver processes that can be invoked initially is specified by the
initialization parameter LOG_ARCHIVE_MAX_PROCESSES (by default 2, max 10). The actual
number of archiver processes in use may vary based on the workload.
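The current setting and the archiving status of the database can be checked with, for example:
SQL> show parameter log_archive_max_processes
SQL> archive log list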
ARCn processes, running on the primary database, select archived redo logs and send them to the standby database. Archived log files are used for media recovery (in case of a hard disk failure) and for maintaining an Oracle standby database via log shipping. On the standby, ARCn archives the standby redo logs applied by the managed recovery process (MRP).
In RAC, the various ARCH processes can be utilized to ensure that copies of the archived
redo logs for each instance are available to the other instances in the RAC setup should they
be needed for recovery.
Job Queue Processes (CJQ0/Jnnn)
Job queue processes carry out batch processing. All scheduled jobs are executed by these processes. The initialization parameter JOB_QUEUE_PROCESSES specifies the maximum number of job processes that can run concurrently. These processes are useful, for example, for refreshing materialized views.
CJQ0 is Oracle's dynamic job queue coordinator. It periodically selects jobs (from JOB$) that need to be run, as scheduled by the Oracle job queue. The coordinator process dynamically spawns job queue slave processes (J000-J999) to run the jobs. These jobs could be PL/SQL statements or procedures on an Oracle instance.
The job queue coordinator (CJQ0) wakes up periodically and checks the job log. If a job is due, it spawns Jnnn processes to handle the jobs.
From Oracle 11g Release 2, DBMS_JOB and DBMS_SCHEDULER work without setting JOB_QUEUE_PROCESSES. Prior to 11gR2 the default value was 0, and from 11gR2 the default value is 1000.
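As a sketch of a job that the Jnnn slaves would end up executing, a DBMS_SCHEDULER job can be created along these lines (the job name, materialized view name and interval are just placeholders):
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'REFRESH_SALES_MV',                               -- hypothetical name
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN DBMS_MVIEW.REFRESH(''SALES_MV''); END;',   -- hypothetical MV
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=HOURLY;INTERVAL=1',
    enabled         => TRUE);
END;
/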
Dedicated Server
Dedicated server processes are used when MTS is not used. Each user process gets a
dedicated connection to the database. These server processes also handle disk reads from the database datafiles into the database block buffers.
LISTENER
The LISTENER process listens for connection requests on a specified port and passes these
requests to either a distributor process if MTS is configured, or to a dedicated process if MTS
is not used. The LISTENER process is also responsible for load balancing and failover in case a RAC instance fails or is overloaded.
CALLOUT Listener
Used by internal processes to make calls to externally stored procedures.
Lock Monitor (LMON)
The lock monitor manages global locks and resources. It handles the redistribution of instance locks whenever instances are started or shut down. The lock monitor also recovers instance lock information prior to the instance recovery process, and coordinates with the Process Monitor (PMON) to recover dead processes that hold instance locks.
This process is also related to advanced queuing, and is meant for allowing a
publish/subscribe style of messaging between applications.
Dispatcher Processes (Dnnn)
Intended for multi-threaded server (MTS) setups. Dispatcher processes listen to and receive requests from connected sessions and place them in the request queue for further processing. Dispatcher processes also pick up outgoing responses from the result queue and transmit them back to the clients. Dnnn processes are mediators between the client processes and the shared server processes. The maximum number of dispatcher processes can be specified using the initialization parameter MAX_DISPATCHERS.
Shared Server Processes (Snnn)
Intended for multi-threaded server (MTS) setups. These processes pick up requests from the call request queue, process them and then return the results to a result queue. These processes also handle disk reads from database datafiles into the database block buffers. The number of shared server processes to be created at instance startup can be specified using the initialization parameter SHARED_SERVERS. The maximum number of shared server processes can be specified by MAX_SHARED_SERVERS.
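In a shared server configuration, the dispatchers and shared servers can be inspected through the relevant parameters and views, for example:
SQL> show parameter dispatchers
SQL> show parameter shared_servers
SQL> select name, status, messages from v$dispatcher;
SQL> select name, status, requests from v$shared_server;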
Parallel Query Slaves (Pnnn)
These processes are used for parallel processing, for example for parallel execution of SQL statements or parallel recovery. The maximum number of parallel processes that can be invoked is specified by the initialization parameter PARALLEL_MAX_SERVERS.
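A quick way to see parallel slaves being spawned is to run a parallel query (the table name here is just a placeholder) and then look at the slave processes:
SQL> show parameter parallel_max_servers
SQL> select /*+ parallel(t, 4) */ count(*) from sales t;   -- SALES is a hypothetical table
SQL> select server_name, status from v$px_process;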
Trace Writer (TRWR)
The trace writer writes trace files from an Oracle internal tracing facility.
I/O Slave Processes (Innn)
These processes are used to simulate asynchronous I/O on platforms that do not support it. The initialization parameter DBWR_IO_SLAVES is set for this purpose.
DRCn
These network receiver processes establish the connection from the source database NSVn
process. When the broker needs to send something (e.g. data or SQL) between databases,
it uses this NSV to DRC connection. These connections are started as needed.
FSFP
Maintains the fast-start failover state between the primary and target standby databases. FSFP is created when fast-start failover is enabled.
LNSn
These are the LGWR network server processes on the primary database that ship redo to the standby. Each LNS has a user-configurable buffer that is used to accept outbound redo data from the LGWR process. The NET_TIMEOUT attribute is used only when the LGWR process transmits redo data using an LGWR Network Server (LNS) process.
MRP (Managed Recovery Process)
In a Data Guard environment, this managed recovery process applies archived redo logs to the standby database.
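On a physical standby, managed recovery is typically started and monitored along these lines (run on the standby as SYSDBA):
SQL> alter database recover managed standby database disconnect from session;
SQL> select process, status, sequence# from v$managed_standby;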
RFS (Remote File Server)
The remote file server process, in a Data Guard environment, runs on the standby database and receives archived redo logs from the primary database.
LSP (Logical Standby Process)
The logical standby process is the coordinator process for a set of processes that
concurrently read, prepare, build, analyze, and apply completed SQL transactions from the
archived redo logs. The LSP also maintains metadata in the database. The RFS process
communicates with the logical standby process (LSP) to coordinate and record which files
arrived.
FAL Server
Services requests for archived redo logs from FAL clients running on multiple standby databases. Multiple FAL servers can be run on a primary database, one for each FAL request.
FAL Client
Pulls archived redo log files from the primary site. Initiates the transfer of archived redo logs
when it detects a gap sequence.
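FAL behaviour is driven by the FAL_SERVER and FAL_CLIENT initialization parameters on the standby; in a Data Guard setup they can be checked with:
SQL> show parameter fal_server
SQL> show parameter fal_client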
Data Pump Master (DMnn)
Creates and deletes the master table at the time of export and import. The master table contains
the job state and object information. Coordinates the Data Pump job tasks performed by
Data Pump worker processes and handles client interactions. The Data Pump master
(control) process is started during job creation and coordinates all tasks performed by the
Data Pump job. It handles all client interactions and communication, establishes all job
contexts, and coordinates all worker process activities on behalf of the job. Creates the
Worker Process.
Data Pump Worker (DWnn)
Performs tasks that are assigned by the Data Pump master process, such as the loading and unloading of metadata and data.
Shadow Process
When a client logs in to an Oracle server, the database creates an Oracle process to service the Data Pump API.
Client Process
The client process calls the Data Pump API.
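As a sketch, a schema export launches the master and worker processes; while it runs, the job can also be watched from inside the database (the credentials, schema and file names below are only placeholders):
$ expdp scott/tiger directory=DATA_PUMP_DIR dumpfile=scott.dmp logfile=scott.log schemas=SCOTT
SQL> select job_name, operation, state from dba_datapump_jobs;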
Memory Manager (MMAN)
MMAN dynamically adjusts the sizes of the SGA components such as the buffer cache, large pool, shared pool and Java pool, and serves as the SGA memory broker. It is a new process added in Oracle 10g as part of automatic shared memory management.
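When automatic shared memory management is enabled (SGA_TARGET > 0), MMAN's resizing decisions can be observed through the dynamic component view, for example:
SQL> show parameter sga_target
SQL> select component, current_size from v$sga_dynamic_components;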
Change Tracking Writer (CTWR)
CTWR is useful with RMAN: optimized incremental backups use block change tracking (for faster incremental backups) via a file named the block change tracking file. CTWR (Change Tracking Writer) is the background process responsible for tracking the changed blocks.
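Block change tracking is enabled explicitly, after which CTWR starts and its status can be checked (the file path below is just an example):
SQL> alter database enable block change tracking using file '/u01/app/oracle/bct.chg';
SQL> select status, filename from v$block_change_tracking;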
ASMB
The ASMB process is used to provide information to and from the Cluster Synchronization Services used by ASM to manage the disk resources. It is also used to update statistics and provide a heartbeat mechanism.
Re-Balance RBAL
RBAL is the ASM related process that performs rebalancing of disk resources controlled
by ASM.
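A rebalance can be triggered and monitored from the ASM instance; the diskgroup name and power value here are only examples:
SQL> show parameter asm_power_limit
SQL> alter diskgroup DATA rebalance power 5;
SQL> select group_number, operation, state, power from v$asm_operation;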