DBA Interview Questions
The term database describes the physical files used to store information (data files, control files, redo log files).
Oracle defines the term instance as the memory structures and background processes used to access data in a
database (the SGA and the background processes).
A control file contains information about the associated database that is required for access by an instance, both at
startup and during normal operation. Control file information can be modified only by Oracle; no database administrator
or user can edit a control file.
INIT.ora
When Oracle is trying to open your database, it goes through three distinct stages: nomount, mount, and open.
When you issue the startup command, the first thing the database will do is enter the nomount stage. During the
nomount stage, Oracle first opens and reads the initialization parameter file (init.ora) to see how the database is
configured. After the parameter file is accessed, the memory areas associated with the database instance are allocated.
Also, during the nomount stage, the Oracle background processes are started.
When the startup command enters the mount stage, it opens and reads the control file. The control file is a binary file
that tracks important database information, such as the location of the database datafiles. In the mount stage, Oracle
determines the location of the datafiles, but does not yet open them. Once the datafile locations have been identified,
the database is ready to be opened.
The last startup step for an Oracle database is the open stage. When Oracle opens the database, it accesses all of the
datafiles associated with the database. Once it has accessed the database datafiles, Oracle makes sure that all of the
database datafiles are consistent.
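A minimal SQL*Plus sketch of stepping through these stages one at a time (assumes a SYSDBA connection):
-- stage 1: read the parameter file, allocate the SGA, start the background processes
STARTUP NOMOUNT;
-- stage 2: open and read the control file to locate the datafiles
ALTER DATABASE MOUNT;
-- stage 3: open the datafiles and redo logs and check consistency
ALTER DATABASE OPEN;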
* SMON - System Monitor process recovers after instance failure and monitors temporary segments and extents.
SMON in a non-failed instance can also perform failed instance recovery for another failed RAC instance.
* PMON - Process Monitor process recovers failed process resources. If MTS (also called Shared Server Architecture)
is being utilized, PMON monitors and restarts any failed dispatcher or server processes. In RAC, PMON’s role as
service registration agent is particularly important.
* DBWR - Database Writer or Dirty Buffer Writer process is responsible for writing dirty buffers from the database block
cache to the database data files. Note that DBWR does not write blocks at commit time; commits are made durable by
LGWR writing the redo log. DBWR writes when a checkpoint occurs and when the cache needs free buffers for more
blocks. The possible multiple DBWR processes in RAC must be coordinated through the locking and global cache
processes to ensure efficient processing is accomplished.
* LGWR - Log Writer process is responsible for writing the log buffers out to the redo logs. In RAC, each RAC instance
has its own LGWR process that maintains that instance’s thread of redo logs.
* ARCH – The optional Archive process writes filled redo logs to the archive log location(s). In RAC, the various ARCH
processes can be utilized to ensure that copies of the archived redo logs for each instance are available to the other
instances in the RAC setup should they be needed for recovery.
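To see which background processes are actually running in an instance, the standard V$BGPROCESS view can be queried; PADDR is non-zero for started processes:
SELECT name, description
FROM v$bgprocess
WHERE paddr <> '00';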
When an Oracle instance fails, Oracle performs an instance recovery when the associated database is re-started.
Instance recovery occurs in two steps:
Cache recovery: Changes being made to a database are recorded in the database buffer cache. These changes are
also recorded in online redo log files simultaneously. When enough changed data has accumulated in the database
buffer cache, it is written to the data files. If an Oracle instance fails before the data in the buffer cache is written to data files,
Oracle uses the data recorded in the online redo log files to recover the lost data when the associated database is re-
started. This process is called cache recovery.
Transaction recovery: When a transaction modifies data in a database, the before image of the modified data is
stored in an undo segment. The data stored in the undo segment is used to restore the original values in case a
transaction is rolled back. At the time of an instance failure, the database may have uncommitted transactions. It is
possible that changes made by these uncommitted transactions have gotten saved in data files. To maintain read
consistency, Oracle rolls back all uncommitted transactions when the associated database is re-started. Oracle uses the
undo data stored in undo segments to accomplish this. This process is called transaction recovery.
7. How do you control number of Datafiles one can have in an Oracle database?
The number of data files in an Oracle database is controlled by the initialization parameter DB_FILES.
Setting this value too high can cause DBWR issues. Before 9i, the maximum number of datafiles in an Oracle
database was 1022; from 9i onward this limit applies to the number of datafiles per tablespace.
9. What is a Tablespace?
A tablespace is a logical storage unit of an Oracle database. It is a logical unit because a tablespace is not visible in the
file system of the computer on which the database is present.
SYSTEM is the default tablespace of an Oracle database that stores data dictionary tables and indexes. The tablespace
builds a bridge between the Oracle database and the file system in which the table's or the index's data is stored.
One can use multiple tablespaces to gain flexibility in performing database operations, such as separating one
application's data from another's, storing different tablespace data files on separate disk drives to avoid I/O
contention, maintaining backups for individual tablespaces, and more.
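To list the tablespaces in a database along with their status, a simple dictionary query works:
SELECT tablespace_name, status, contents, extent_management
FROM dba_tablespaces;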
11. Which default Database roles are created when you create a Database?
CONNECT, RESOURCE and DBA, along with EXP_FULL_DATABASE and IMP_FULL_DATABASE for the export/import utilities.
What happens during a Checkpoint?
1. Every dirty block in the buffer cache is written to the data files. That is, it synchronizes the data blocks in
the buffer cache with the datafiles on disk.
It's DBWR that writes all modified database blocks back to the datafiles.
2. The latest SCN is written (updated) into the datafile header.
3. The latest SCN is also written to the controlfiles.
The update of the datafile headers and the control files is done by LGWR (or by CKPT if CKPT is enabled). As of version
8.0, CKPT is enabled by default.
Reading in Oracle is handled entirely by server processes. All instructions (read or write) from client processes
first go to a server process.
• The server process first checks the buffer cache for the presence of the data.
• Only if it is not found does it copy the data from the datafile into the buffer cache.
• It then sends the data to the client.
The Oracle shared pool contains Oracle's library cache, which is responsible for collecting, parsing, interpreting, and
executing all of the SQL statements that go against the Oracle database. Hence, the shared pool is a key component,
so it's necessary for the Oracle database administrator to check for shared pool contention.
The shared pool is like a buffer for SQL statements. Oracle's parsing algorithm ensures that identical SQL statements
do not have to be parsed each time they're executed. The shared pool is used to store SQL statements, and it includes
the following components: the library cache (shared SQL areas and PL/SQL program units) and the data dictionary cache.
The Database Buffer Cache is one of the most important components of the System Global Area (SGA). The Database
Buffer Cache is the place where data blocks are copied from datafiles to perform SQL operations. The Buffer Cache is a
shared memory structure and it is concurrently accessed by all server processes.
18. How many maximum Redo Logfiles can one have in a Database?
The limit is set by the MAXLOGFILES clause of the CREATE DATABASE statement and is recorded in the control file.
What is the difference between a PFILE and an SPFILE?
When an Oracle Instance is started, the characteristics of the Instance are established by parameters specified within the
initialization parameter file. These initialization parameters are either stored in a PFILE or SPFILE. SPFILEs are available in
Oracle 9i and above. All prior releases of Oracle used PFILEs.
A PFILE is a static, client-side text file that must be updated with a standard text editor like "notepad" or "vi". This file
normally resides on the server; however, you need a local copy if you want to start Oracle from a remote machine. DBAs
commonly refer to this file as the INIT.ORA file.
An SPFILE (Server Parameter File), on the other hand, is a persistent server-side binary file that can only be modified with
the "ALTER SYSTEM SET" command. This means you no longer need a local copy of the pfile to start the database from a
remote machine. Editing an SPFILE will corrupt it, and you will not be able to start your database anymore.
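A sketch of creating an SPFILE from an existing PFILE and then changing a parameter persistently (the file path and the parameter chosen here are illustrative):
-- run as SYSDBA; the database may be down
CREATE SPFILE FROM PFILE='/u01/app/oracle/admin/orcl/pfile/init.ora';
-- change a parameter in the SPFILE only; takes effect at the next startup
ALTER SYSTEM SET processes=300 SCOPE=SPFILE;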
The Program Global Area (PGA) is a memory buffer that contains data and control information for a server process. A
PGA is created by Oracle when a server process is started. The PGA (Program or Process Global Area) is a memory
area (RAM) that stores data and control information for a single process. For example, it typically contains a sort area,
hash area, session cursor cache, etc.
Automatic PGA Memory Management may be used in place of setting the SORT_AREA_SIZE, HASH_AREA_SIZE,
SORT_AREA_RETAINED_SIZE and other related *_AREA_SIZE memory management parameters.
PGA_AGGREGATE_TARGET specifies the target aggregate PGA memory available to all server processes attached to
the instance. You must set this parameter to enable the automatic sizing of SQL working areas used by memory-
intensive SQL operators such as sort, group-by, hash-join, bitmap merge, and bitmap create.
Oracle uses this parameter as a target for PGA memory. Use this parameter to determine the optimal size of each work
area allocated in AUTO mode (in other words, when WORKAREA_SIZE_POLICY is set to AUTO).
Oracle attempts to keep the amount of private memory below the target specified by this parameter by adapting the size
of the work areas to private memory. When increasing the value of this parameter, you indirectly increase the memory
allotted to work areas. Consequently, more memory-intensive operations are able to run fully in memory and fewer
spill over to disk.
When setting this parameter, you should examine the total memory on your system that is available to the Oracle
instance and subtract the SGA. You can assign the remaining memory to PGA_AGGREGATE_TARGET.
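For example, on a host where roughly 1 GB of memory remains after subtracting the SGA, automatic PGA management could be enabled like this (the size is illustrative):
ALTER SYSTEM SET workarea_size_policy = AUTO;
ALTER SYSTEM SET pga_aggregate_target = 1G;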
Oracle Large Pool is an optional memory component of the oracle database SGA. This area is used for providing large
memory allocations in many situations that arise during the operations of an oracle database instance.
1. Session memory for the shared server and the Oracle XA Interface when distributed transactions are involved
2. I/O Server Processes
3. Parallel Query Buffers
4. Oracle Backup and Restore Operations using RMAN
The Large Pool plays an important role in Oracle Database tuning, since the memory for the above components would
otherwise be allocated from the shared pool. Also, due to the large memory requirements of I/O and RMAN operations,
the large pool is better able to satisfy those requirements than the Shared Pool.
PCTINCREASE refers to the percentage by which each next extent (beginning with the third extent) will grow. The size
of each subsequent extent is equal to the size of the previous extent plus this percentage increase. A PCTINCREASE of 0
or 100 gives you nice round extent sizes that can easily be reused.
PCTFREE is a block storage parameter used to specify how much space should be left in a database block for future
updates. For example, for PCTFREE=10, Oracle will keep on adding new rows to a block until it is 90% full. This leaves
10% for future updates
PCTUSED is a block storage parameter used to specify when Oracle should consider a database block empty enough
to be added to the freelist. Oracle will only insert new rows into blocks that are enqueued on the freelist. For
example, if PCTUSED=40, Oracle will not add new rows to the block until enough rows are deleted from the block
that its used space falls below 40%.
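A sketch of setting both parameters on a table (the table itself is hypothetical):
CREATE TABLE emp_history (
  empno   NUMBER,
  changed DATE
)
PCTFREE 10   -- keep 10% of each block free for future row updates
PCTUSED 40;  -- return the block to the freelist once usage drops below 40%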
Row migration occurs when an update to that row would cause it to not fit on the block anymore (with all of the other
data that exists there currently). A migration means that the entire row will move and we just leave behind a
"forwarding address". So the original block just holds the rowid of the new block, and the entire row is moved.
Row chaining occurs when a row is too large to fit into a single database block. For example, if you use a 4KB blocksize
for your database, and you need to insert a row of 8KB into it, Oracle will use 3 blocks and store the row in pieces.
25. What is the 01555 - Snapshot Too Old error and how do you avoid it?
ORA-01555 is raised when a query cannot build a read-consistent image of a block because the undo data it needs has
already been overwritten. Avoid it by sizing the undo tablespace adequately, setting UNDO_RETENTION appropriately,
and avoiding commits inside long-running fetch loops.
Using LMT, each tablespace manages its own free and used space within a bitmap structure stored in one of the
tablespace's data files.
Dictionary contention is reduced
Space wastage reduced
No rollback generated
Fragmentation reduced
CREATE TABLESPACE ts2 DATAFILE '/oradata/ts2_01.dbf' SIZE 50M EXTENT MANAGEMENT LOCAL AUTOALLOCATE;
NO
Cost Based Optimizer (CBO) - This method is used if internal statistics are present. The CBO checks several possible
execution plans and selects the one with the lowest cost, where cost relates to system resources.
31. How do you collect statistics for a table, schema and Database?
By default, Oracle 10g automatically gathers optimizer statistics using a scheduled job called GATHER_STATS_JOB. By
default this job runs within a maintenance window between 10 P.M. and 6 A.M. on week nights and all day on weekends.
The job calls the DBMS_STATS.GATHER_DATABASE_STATS_JOB_PROC internal procedure which gathers
statistics for tables with either empty or stale statistics, similar to the DBMS_STATS.GATHER_DATABASE_STATS
procedure using the GATHER AUTO option. The main difference is that the internal job prioritizes the work such that
tables most urgently requiring statistics updates are processed first.
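To answer the question directly: statistics at each level are gathered with DBMS_STATS, for example:
-- single table
EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'SCOTT', tabname => 'EMP');
-- whole schema
EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'SCOTT');
-- whole database
EXEC DBMS_STATS.GATHER_DATABASE_STATS;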
Dynamic sampling is used to:
Estimate single-table predicate selectivities where available statistics are missing or may lead to bad
estimations.
Estimate statistics for tables and indexes with missing statistics.
Estimate statistics for tables and indexes with out-of-date statistics.
Dynamic sampling is controlled by the OPTIMIZER_DYNAMIC_SAMPLING parameter, which accepts values from "0"
(off) to "10" (aggressive sampling) with a default value of "2". At compile time Oracle determines if dynamic sampling
would improve query performance. If so it issues recursive statements to estimate the necessary statistics. Dynamic
sampling can be beneficial when:
The sample time is small compared to the overall query execution time.
Dynamic sampling results in a better performing query.
The query may be executed multiple times.
Columns that have high cardinality, such as SSN, date of birth, ID, or sequence numbers.
37. A column has many repeated values. Which type of index should you create on this column, if you have to?
A bitmap index; bitmap indexes are designed for low-cardinality columns.
An index is a candidate for rebuilding when:
- deleted entries represent 20% or more of the current entries (data in the index becomes sparse);
- the index depth is more than 4 levels (the BLEVEL column in DBA_INDEXES is greater than 4).
47. You want users to change their passwords every 2 months. How do you enforce this?
Using a profile
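A sketch using a profile; PASSWORD_LIFE_TIME is specified in days, so two months is roughly 60 (the profile name and user are illustrative):
CREATE PROFILE pwd_two_months LIMIT
  PASSWORD_LIFE_TIME 60;
ALTER USER scott PROFILE pwd_two_months;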
48. How do you delete duplicate rows in a table?
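A common approach uses ROWID to keep one row per duplicate group (the table and the columns that define a duplicate are illustrative):
DELETE FROM emp e
WHERE e.rowid NOT IN (SELECT MIN(rowid)
                      FROM emp
                      GROUP BY empno, ename);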
After performing a DELETE operation you need to COMMIT or ROLLBACK the transaction to make the change
permanent or to undo it.
TRUNCATE removes all rows from a table. The operation cannot be rolled back and no triggers will be fired. As such,
TRUNCATE is faster and doesn't use as much undo space as a DELETE.
compress – When “Y”, export will mark the table to be loaded as one extent for the import utility. If “N”, the current
storage options defined for the table will be used. Although this option is only implemented on import, it can only be
specified on export.
consistent – [N] Specifies the set transaction read only statement for export, ensuring data consistency. This option
should be set to “Y” if activity is anticipated while the exp command is executing.
52. What is the difference between Direct Path and Conventional Path loading?
A conventional path load executes SQL INSERT statement(s) to populate table(s) in an Oracle database. A direct path
load eliminates much of the Oracle database overhead by formatting Oracle data blocks and writing the data blocks
directly to the database files. A direct load, therefore, does not compete with other users for database resources so it
can usually load data at near disk speed. Certain considerations are inherent to this method of access to database files,
such as restrictions, security, and backup implications.
Conventional path load (the default) uses the SQL INSERT statement and a bind array buffer to load data into database
tables. This method is used by all Oracle tools and applications
Instead of filling a bind array buffer and passing it to Oracle with a SQL INSERT command, a direct path load parses the
input data according to the description given in the loader control file, converts the data for each input field to its
corresponding Oracle column datatype, and builds a column array structure (an array of <length, data>
pairs). SQL*Loader then uses the column array structure to format Oracle data blocks and build index keys. The newly
formatted database blocks are then written directly to the database (multiple blocks per I/O request using asynchronous
writes if the host platform supports asynchronous I/O).
When loading a partitioned or subpartitioned table, SQL*Loader partitions the rows and maintains indexes (which can
also be partitioned). Note that a direct path load of a partitioned or subpartitioned table can be quite resource intensive
for tables with many partitions or subpartitions.
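Choosing between the two methods is a single command-line switch in SQL*Loader (the file names here are illustrative):
sqlldr userid=scott/tiger control=load_emp.ctl log=load_emp.log direct=true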
Index Organized Tables are tables that, unlike heap tables, are organized like B*Tree indexes.
CREATE TABLE admin_docindex(
token char(20),
doc_id NUMBER,
token_frequency NUMBER,
token_offsets VARCHAR2(512),
CONSTRAINT pk_admin_docindex PRIMARY KEY (token, doc_id))
ORGANIZATION INDEX
TABLESPACE admin_tbs
PCTTHRESHOLD 20
OVERFLOW TABLESPACE admin_tbs2;
A global index covers the entire partitioned table, spanning all the partitions.
A local index is a separate index for each partition; local indexes are generally preferred to global indexes for
performance and ease of maintenance.
56. What is the difference between Range Partitioning and Hash Partitioning?
Maps data to partitions based on ranges of partition key values that you establish for each partition.
Hash Partitioning maps data to partitions based on a hashing algorithm, evenly distributing data between the
partitions. This is typically used where ranges aren't appropriate, e.g. customer number or product ID.
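A minimal sketch of each (the tables and columns are hypothetical):
-- range partitioning on a date column
CREATE TABLE sales_range (sale_id NUMBER, sale_date DATE)
PARTITION BY RANGE (sale_date)
 (PARTITION p2023 VALUES LESS THAN (TO_DATE('01-01-2024','DD-MM-YYYY')),
  PARTITION pmax  VALUES LESS THAN (MAXVALUE));
-- hash partitioning on a key with no natural ranges
CREATE TABLE sales_hash (sale_id NUMBER, cust_id NUMBER)
PARTITION BY HASH (cust_id) PARTITIONS 4;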
58. Can you import objects from Oracle ver. 7.3 to 9i?
Yes; take the export with the 7.3 exp utility and import it into 9i with the 9i imp utility.
59. How do you move tables from one tablespace to another tablespace?
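The usual approach is ALTER TABLE ... MOVE; note that the table's indexes become UNUSABLE afterwards and must be rebuilt (the names are illustrative):
ALTER TABLE scott.emp MOVE TABLESPACE users;
ALTER INDEX scott.pk_emp REBUILD;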
60. How do you see how much space is used and free in a tablespace?
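One way is to compare total allocated space in DBA_DATA_FILES with free space in DBA_FREE_SPACE, for example:
SELECT t.tablespace_name,
       (SELECT SUM(bytes)/1024/1024 FROM dba_data_files d
         WHERE d.tablespace_name = t.tablespace_name) AS total_mb,
       (SELECT SUM(bytes)/1024/1024 FROM dba_free_space f
         WHERE f.tablespace_name = t.tablespace_name) AS free_mb
FROM dba_tablespaces t;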
61. How can you see the current DDL statements in the database?
Block change tracking causes the changed database blocks to be flagged in a file. As data blocks change, the
Change Tracking Writer (CTWR) background process tracks the changed blocks in a private area of memory. When a
commit is issued against the data block, the block change tracking information is copied to a shared area in Large Pool
called the CTWR buffer. During the checkpoint, the CTWR process writes the information from the CTWR RAM buffer to
the change-tracking file. To achieve this we need to enable block change tracking in our database:
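Enabling it is a single statement (the file path is illustrative), and v$block_change_tracking confirms the status:
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
  USING FILE '/u01/oradata/orcl/change_tracking.chg';
SELECT status, filename FROM v$block_change_tracking;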
db file scattered read - The process has issued an I/O request to read a series of contiguous blocks from a data file into
the buffer cache, and is waiting for the operation to complete. This typically happens during a full table scan or full index
scan.
db file sequential read - The process has issued an I/O request to read one block from a data file into the buffer cache,
and is waiting for the operation to complete. This typically happens during an index lookup or a fetch from a table by
ROWID when the required data block is not already in memory. Do not be misled by the confusing name of this wait
event!
1. Which types of backups you can take in Oracle?
2. A database is running in NOARCHIVELOG mode then which type of backups you can take?
Cold
3. Can you take partial backups if the Database is running in NOARCHIVELOG mode?
NO
4. Can you take Online Backups if the database is running in NOARCHIVELOG mode?
NO
5. How do you bring the database in ARCHIVELOG mode from NOARCHIVELOG mode?
log_archive_dest_1='location=/u02/oradata/cuddle/archive'
log_archive_start=TRUE
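With those parameters in place, the conversion itself is (a sketch; the database must be cleanly shut down and mounted):
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;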
6. You cannot shut down the database for even some minutes; in which mode should you run
the database?
ARCHIVELOG mode, since it allows hot (online) backups.
7. Where should you place Archive logfiles, in the same disk where DB is or another disk?
Another disk
Export / import
10. Should you take the backup of Logfiles if the database is running in ARCHIVELOG mode?
YES
Hot backup
Recovery Manager (or RMAN) is an Oracle provided utility for backing-up, restoring and recovering Oracle Databases.
RMAN ships with the database server and doesn't require a separate installation.
4. Ability to delete older archived redo log files automatically.
6. Ability to report the files needed for the backup.
Recovery Catalog
NO, separate.
20. Can you use Backupsets created by RMAN with any other utility?
no
21. Where does RMAN keep information about backups if you are using RMAN without a Catalog?
In the control file of the target database.
23. You want to retain only last 3 backups of datafiles. How do you go for it in RMAN?
the CONFIGURE RETENTION POLICY command. The REPORT OBSOLETE and DELETE OBSOLETE commands
can be executed periodically or regularly to view obsolete files and to delete them, respectively.
The retention policy is continuous. As the data file, control file, and archived redo log backups are produced over time,
RMAN keeps track of them and decides which to retain and which to mark as obsolete. RMAN does not automatically
delete the backups or copies.
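For this scenario the commands would look like the following (REDUNDANCY 3 keeps the last three backups of each file):
CONFIGURE RETENTION POLICY TO REDUNDANCY 3;
REPORT OBSOLETE;
DELETE OBSOLETE;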
24. Which is more efficient: Incremental Backups using RMAN or Incremental Export?
RMAN incremental backups; an incremental export re-exports every table that has changed, whereas RMAN backs up
only the changed blocks.
26. How do you recover from the loss of datafile if the DB is running in NOARCHIVELOG mode?
27. You lose one datafile and it does not contain important objects. The important objects are there in other datafiles which are
intact. How do you proceed in this situation?
28. You lost some datafiles and you don't have any full backup and the database was running in NOARCHIVELOG mode. What
can you do now?
29. How do you recover from the loss of datafile if the DB is running in ARCHIVELOG mode?
30. You lose one datafile and the DB is running in ARCHIVELOG mode. You have a full database backup of 1 week old and a partial
backup of this datafile which is just 1 day old. From which backup should you restore this file?
32. The current logfile gets damaged. What can you do now?
34. What is Cancel Based, Time based and Change Based Recovery?
Partial recovery
35. Some user has accidentally dropped one table and you realize this after two days. Can you recover this table if the DB is
running in ARCHIVELOG mode?
36. Do you have to restore Datafiles manually from backups if you are doing recovery using RMAN?
no
37. A database has been running in ARCHIVELOG mode for the last month. A datafile was added to the database last week. Many
objects are created in this datafile. After one week this datafile gets damaged before you can take any backup. Can you
recover this datafile when you don't have any backups?
38. How do you recover from the loss of a controlfile if you have a backup of the controlfile?
39. Only some blocks are damaged in a datafile. Can you just recover these blocks if you are using RMAN?
Yes; see the BLOCKRECOVER sketch after the script below. For question 38, restoring the controlfile from an RMAN
backup looks like this:
STARTUP NOMOUNT;
RUN
{
ALLOCATE CHANNEL c1 DEVICE TYPE sbt;
RESTORE CONTROLFILE;
ALTER DATABASE MOUNT;
RESTORE DATABASE;
}
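For question 39: yes, RMAN can repair individual corrupt blocks without restoring the whole datafile. A sketch (the file and block numbers are illustrative; in 9i/10g the command is BLOCKRECOVER):
BLOCKRECOVER DATAFILE 8 BLOCK 13;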
40. Some datafiles were there on a secondary disk and that disk has become damaged and it will take some days to get a new
disk. How will you recover from this situation?
41. Have you faced any emergency situation? Tell us how you resolved it?
42. At one time you lost the parameter file accidentally and you don't have any backup. How will you recreate a new parameter file
with the parameters set to their previous values?
3. You have written a script to take backups. How do you make it run automatically every week?
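With cron; a crontab entry like this (the paths are illustrative) runs the script every Sunday at 02:00:
0 2 * * 0 /home/oracle/scripts/weekly_backup.sh >> /home/oracle/logs/backup.log 2>&1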
4. What is OERR utility?
6. How do you see how much hard disk space is free in Linux?
7. What is SAR?
8. What is SHMMAX?
11. How do you see how many memory segments are acquired by Oracle Instances?
12. How do you see which segment belongs to which database instances?
14. How do you set Kernel Parameters in Red Hat Linux, AIX and Solaris?
16. What is the difference between Soft Link and Hard Link?
All the databases running on the server and their oracle homes
18. How do you see how many processes are running in Unix?
kill -9 <pid>
WAIT EVENTS
When Oracle executes an SQL statement, it is not constantly executing. Sometimes it has to wait for a specific event to happen
before it can proceed.
For example, if Oracle (or the SQL statement) wants to modify data, and the corresponding database block is not currently in
the SGA, Oracle waits for this block to be available for modification.
All possible wait events can be found in v$event_name. In Oracle 10g R1, there are some 806 different wait events.
What Oracle waits for and how long it has totally waited for these events can be monitored through the following views:
• v$session_event
• v$session_wait
• v$system_event
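For example, the top waits since instance startup can be listed with:
SELECT event, total_waits, time_waited
FROM v$system_event
ORDER BY time_waited DESC;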
Important events are:
buffer busy waits
If two processes try to read the same block (almost) simultaneously and the block is not resident in the buffer cache, one process
will allocate a buffer in the buffer cache, lock it, and then read the block into the buffer. The other process is blocked until the block
is read. This wait is referred to as a buffer busy wait.
db file scattered read
A process reads multiple blocks (mostly as part of a full table scan or an index fast full scan). It can also indicate a multiblock
read when the process reads parts of a sort segment.
db file sequential read
In most cases, this event means that a foreground process reads a single block (because it reads a block from an index or
because it reads a block by rowid).
enqueue
latch free
log buffer space
This wait event indicates that the size of the log buffer was chosen too small.
Wait classes
Wait events can be categorized by wait classes. These classes are exposed through v$session_wait_class.
The following wait classes exist:
Administrative
Application
Cluster
Concurrency
Configuration
Commit
Idle Waits
Network
Other
System I/O
Scheduler
User I/O
Parameters
The parameters P1, P2 and P3 in v$session_wait are dependent on the wait.
P1 sometimes refers to the datafile number.
If this number is greater than db_files, it refers to a temp file.
The name of the datafile for a given number can be retrieved through v$datafile.
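For example, for the db file read events P1 is the file number and P2 the block number, so the file can be resolved with (the bind value comes from v$session_wait.p1):
SELECT name
FROM v$datafile
WHERE file# = :p1;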
Oracle Latch
What is a Latch?
A latch is a low-level serialization mechanism that protects shared data structures in the SGA. A server or background
process acquires a latch for a very short time while manipulating or looking at one of these structures.
During performance analysis we will see latch wait events, so what is the latch free event and how many types of latch
events are there?
The latch free event is updated when a server process attempts to get a latch, and the latch is unavailable on the first attempt.
The most popular latch wait events are:
Possible Causes
1. Inefficient SQL that accesses incorrect indexes iteratively (large index range scans) or many full table scans.
2. DBWR not keeping up with the dirty workload; hence, foreground process spends longer holding the latch looking for a free
buffer
3. Cache may be too small
Possible Suggestion
1. Look for: Statements with very high logical I/O or physical I/O, using unselective indexes
2. Increase DB_CACHE_SIZE parameter value.
3. The cache buffers lru chain latches protect the lists of buffers in the cache. When adding, moving, or removing a buffer from a
list, a latch must be obtained.
For symmetric multiprocessor (SMP) systems, Oracle automatically sets the number of LRU latches to a value equal to one half
the number of CPUs on the system. For non-SMP systems, one LRU latch is sufficient.
Contention for the LRU latch can impede performance on SMP machines with a large number of CPUs. LRU latch contention is
detected by querying V$LATCH, V$SESSION_EVENT, and V$SYSTEM_EVENT. To avoid contention, consider tuning the
application, bypassing the buffer cache for DSS jobs, or redesigning the application.
Possible Causes
1. Repeated access to a block (or small number of blocks), known as a hot block
2. From AskTom:
Possible Suggestion
1. From AskTom:
When I see this, I try to see what SQL the waiters are trying to execute. Many times,
what I find, is they are all running the same query for the same data (hot blocks). If
you find such a query -- typically it indicates a query that might need to be tuned (to
access less blocks hence avoiding the collisions).
If it is long buffer chains, you can use multiple buffer pools to spread things out. You
can use DB_BLOCK_LRU_LATCHES to increase the number of latches. You can use both
together.
The cache buffers chains latches are used to protect a buffer list in the buffer cache. These latches are used when searching
for, adding, or removing a buffer from the buffer cache. Contention on this latch usually means that there is a block that is
greatly contended for (known as a hot block).
To identify the heavily accessed buffer chain, and hence the contended for block, look at latch statistics for the cache buffers
chains latches using the view V$LATCH_CHILDREN. If there is a specific cache buffers chains child latch that has many more
GETS, MISSES, and SLEEPS when compared with the other child latches, then this is the contended for child latch.
This latch has a memory address, identified by the ADDR column. Use the value in the ADDR column joined with the X$BH
table to identify the blocks protected by this latch. For example, given the address (V$LATCH_CHILDREN.ADDR) of a heavily
contended latch, this queries the file and block numbers:
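A sketch based on the standard X$BH columns (substitute the latch address from V$LATCH_CHILDREN.ADDR):
SELECT obj AS data_object_id,
       file#, dbablk, class, state, tch
FROM x$bh
WHERE hladdr = '&latch_addr';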
X$BH.TCH is a touch count for the buffer. A high value for X$BH.TCH indicates a hot block.
Many blocks are protected by each latch. One of these buffers will probably be the hot block. Any block with a high TCH value is
a potential hot block. Perform this query a number of times, and identify the block that consistently appears in the output. After
you have identified the hot block, query DBA_EXTENTS using the file number and block number, to identify the segment.
After you have identified the hot block, you can identify the segment it belongs to with the following query:
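A sketch using DBA_OBJECTS (substitute &obj as described on the next line):
SELECT object_name, object_type
FROM dba_objects
WHERE data_object_id = &obj;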
In the query, &obj is the value of the OBJ column in the previous query on X$BH.