18CS53: Database Management Systems: Introduction to Transaction Processing Concepts and Theory
MODULE – V
TOPICS:
Transaction Processing: Introduction to Transaction Processing, Transaction and System
concepts, Desirable properties of Transactions, Characterizing schedules based on
recoverability, Characterizing schedules based on Serializability, Transaction support in SQL.
• Transaction: An executing program (process) that includes one or more database access
operations
– Example: Bank balance transfer of $100 from a checking account to a savings account in a BANK database
– Bank transfer program parameters: savings account number, checking account number,
transfer amount
– Note: An application program may contain several transactions separated by Begin and
End transaction boundaries
– Parallel processing: processes are concurrently executed in multiple CPUs (Figure 21.1)
• Granularity (size) of a data item - a field (data item value), a record, or a whole disk block
– read_item(X): Reads a database item named X into a program variable. To simplify our
notation, we assume that the program variable is also named X.
– write_item(X): Writes the value of program variable X into the database item named X.
⚫ Basic unit of data transfer from the disk to the computer main memory is one disk block (or
page). A data item X (what is read or written) will usually be the field of some record in the
database, although it may be a larger unit such as a whole record or even a whole block.
⚫ Executing read_item(X) includes the following steps:
1. Find the address of the disk block that contains item X.
2. Copy that disk block into a buffer in main memory (if it is not already in some main memory buffer).
3. Copy item X from the buffer to the program variable named X.
⚫ Executing write_item(X) includes the following steps:
1. Find the address of the disk block that contains item X.
2. Copy that disk block into a buffer in main memory (if it is not already in some main memory buffer).
3. Copy item X from the program variable named X into its correct location in the buffer.
4. Store the updated block from the buffer back to disk (either immediately or at some later point in time).
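To make the buffering concrete, here is a minimal Python sketch of the read_item/write_item steps above, using a dict-based buffer pool and disk abstraction (all names are illustrative, not part of any real DBMS):

# Minimal sketch of read_item/write_item over a buffer pool (illustrative only).
disk = {"B1": {"X": 100, "Y": 50}}    # disk blocks, each holding data items
buffers = {}                          # main-memory buffer pool: block id -> block copy

def block_of(item):
    # Step 1: find the address of the disk block that contains the item.
    return next(b for b, blk in disk.items() if item in blk)

def read_item(item, program_vars):
    b = block_of(item)
    if b not in buffers:                   # step 2: copy block into a buffer
        buffers[b] = dict(disk[b])
    program_vars[item] = buffers[b][item]  # step 3: copy item to program variable

def write_item(item, program_vars, flush=False):
    b = block_of(item)
    if b not in buffers:                   # step 2: copy block into a buffer
        buffers[b] = dict(disk[b])
    buffers[b][item] = program_vars[item]  # step 3: update item in the buffer
    if flush:                              # step 4: store the updated block back
        disk[b] = dict(buffers[b])         # to disk (now or at some later time)

pv = {}
read_item("X", pv); pv["X"] -= 100; write_item("X", pv, flush=True)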
• The Lost Update Problem: Occurs when two transactions update the same data item, but both read the same original value before the update (Figure 21.3(a), next slide).
• The Temporary Update (Dirty Read) Problem: Occurs when one transaction T1 updates a database item X, which is then accessed (read) by another transaction T2 before the value of X is changed back (rolled back or UNDONE) when T1 fails for some reason (Figure 21.3(b)).
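As an illustration (values invented for the example), the lost update problem can be reproduced by interleaving two read-modify-write sequences on the same item:

# Illustrative lost-update interleaving: both transactions read X = 100,
# so T2's write overwrites (loses) T1's update.
X = 100
t1_local = X          # T1: read_item(X)
t2_local = X          # T2: read_item(X) -- reads the same original value
t1_local -= 50        # T1: X := X - 50
t2_local += 30        # T2: X := X + 30
X = t1_local          # T1: write_item(X) -> 50
X = t2_local          # T2: write_item(X) -> 130; T1's update is lost
print(X)              # 130, but a serial execution would give 80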
Why recovery is needed — a transaction may fail in the middle of execution for one of the following reasons:
1. A computer failure (system crash): A hardware, software, or network error occurs in the computer system during transaction execution.
2. A transaction or system error: Some operation in the transaction may cause it to fail, such
as integer overflow or division by zero. Transaction failure may also occur because of
erroneous parameter values or because of a logical programming error. In addition, the user
may interrupt the transaction during its execution.
3. Local errors or exception conditions detected by the transaction: certain conditions
necessitate cancellation of the transaction. For example, data for the transaction may not be
found. A condition, such as insufficient account balance in a banking database, may cause a
transaction, such as a fund withdrawal, to be canceled - a programmed abort causes the
transaction to fail.
4. Concurrency control enforcement: The concurrency control method may decide to abort
the transaction, to be restarted later, because it violates serializability or because several
transactions are in a state of deadlock.
5. Disk failure: Some disk blocks may lose their data because of a read or write malfunction or
because of a disk read/write head crash. This kind of failure and item 6 are more severe than
items 1 through 4.
6. Physical problems and catastrophes: This refers to an endless list of problems that includes
power or air-conditioning failure, fire, theft, sabotage, overwriting disks or tapes by mistake,
and mounting of a wrong tape by the operator.
A transaction is an atomic unit of work that is either completed in its entirety or not done at all. A
transaction passes through several states (Figure 21.4, similar to process states in operating systems).
Transaction states:
• Active state (executing read, write operations)
• Partially committed state (ended but waiting for system checks to determine success or failure)
• Committed state (transaction succeeded)
• Failed state (transaction failed, must be rolled back)
• Terminated State (transaction leaves system)
The DBMS recovery manager needs the system to keep track of the following operations (in the system log file):
• begin_transaction: Start of transaction execution.
• read or write: Read or write operations on the database items that are executed as part of a
transaction.
• end_transaction: Specifies that the read and write operations of the transaction have ended. The system
may still have to check whether the changes (writes) introduced by transaction can be
permanently applied to the database (commit transaction); or whether the transaction has to be
rolled back (abort transaction) because it violates concurrency control or for some other reason.
Recovery manager keeps track of the following operations (cont.):
• commit_transaction: Signals successful end of transaction; any changes (writes) executed by
transaction can be safely committed to the database and will not be undone.
• abort_transaction (or rollback): Signals transaction has ended unsuccessfully; any changes or
effects that the transaction may have applied to the database must be undone.
System operations used during recovery (see Chapter 23):
• undo(X): Similar to rollback except that it applies to a single write operation rather than to a
whole transaction.
• redo(X): This specifies that a write operation of a committed transaction must be redone to
ensure that it has been applied permanently to the database on disk.
Desirable Properties of Transactions (the ACID properties):
• Atomicity: A transaction is an atomic unit of processing; it is either performed in its entirety or not performed at all. Enforced by the recovery protocol.
• Consistency preservation: Each transaction, executed on its own, performs a correct action on the database, taking it from one consistent state to another. Application programmers and DBMS constraint enforcement are responsible for this.
• Isolation: A transaction should appear as though it is executing in isolation from other transactions. Responsibility of the concurrency control protocol.
• Durability or permanency: Once a transaction is committed, its changes (writes) applied to the database must never be lost because of subsequent failure. Enforced by the recovery protocol.
Schedules of Transactions
• Transaction schedule (or history): When transactions are executing concurrently in an
interleaved fashion, the order of execution of operations from the various transactions forms what
is known as a transaction schedule (or history).
• Figure 21.5 (next slide) shows 4 possible schedules (A, B, C, D) of two transactions T1 and T2:
– Order of operations from top to bottom
– Each schedule includes same operations
– Different order of operations in each schedule
A schedule (or history) S of n transactions is an ordering of all the operations of the transactions subject to the constraint that, for each transaction Ti
that participates in S, the operations of Ti in S must appear in the same order in which they occur in Ti.
Note: Operations from other transactions Tj can be interleaved with the operations of Ti in S.
• For n transactions T1, T2, ..., Tn, where each Ti has mi read and write operations, the number of possible schedules is (! is the factorial function; a short computation of this count follows this list):
(m1 + m2 + … + mn)! / (m1! * m2! * … * mn!)
• Generally very large number of possible schedules
• Some schedules are easy to recover from after a failure, while others are not
• Some schedules produce correct results, while others produce incorrect results
• Rest of chapter characterizes schedules by classifying them based on ease of recovery
(recoverability) and correctness (serializability)
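As referenced above, a short Python computation of the schedule count formula:

from math import factorial

def num_schedules(ops_per_txn):
    # (m1 + ... + mn)! / (m1! * ... * mn!) for the given operation counts
    total = factorial(sum(ops_per_txn))
    for m in ops_per_txn:
        total //= factorial(m)
    return total

print(num_schedules([2, 2]))   # two transactions with 2 operations each -> 6
print(num_schedules([4, 4]))   # grows quickly -> 70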
Example: Schedule A below is non-recoverable because T2 reads the value of X that was
written by T1, but then T2 commits before T1 commits or aborts
To make it recoverable, the commit of T2 (c2) must be delayed until T1 either commits or aborts (Schedule B)
• If T1 commits, T2 can commit
• If T1 aborts, T2 must also abort because it read a value that was written by T1; this value must be
undone (reset to its old value) when T1 is aborted
– known as cascading rollback
• Schedule A: r1(X); w1(X); r2(X); w2(X); c2; r1(Y); w1(Y); c1 (or a1)
• Schedule B: r1(X); w1(X); r2(X); w2(X); r1(Y); w1(Y); c1 (or a1); ...
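A minimal Python sketch of this recoverability test, using an illustrative tuple encoding of operations, e.g. ("r", 2, "X") for r2(X) and ("c", 1) for c1 (a simplification that ignores aborts):

# Sketch: a schedule is recoverable if every transaction commits only after
# all transactions it read from have committed.
def is_recoverable(schedule):
    last_writer = {}      # item -> transaction that last wrote it
    reads_from = {}       # Tj -> set of Ti that Tj read values written by
    committed = set()
    for op in schedule:
        kind, t = op[0], op[1]
        if kind == "w":
            last_writer[op[2]] = t
        elif kind == "r":
            w = last_writer.get(op[2])
            if w is not None and w != t:
                reads_from.setdefault(t, set()).add(w)
        elif kind == "c":
            if any(w not in committed for w in reads_from.get(t, ())):
                return False          # commits before a transaction it read from
            committed.add(t)
    return True

sched_A = [("r",1,"X"), ("w",1,"X"), ("r",2,"X"), ("w",2,"X"), ("c",2),
           ("r",1,"Y"), ("w",1,"Y"), ("c",1)]
print(is_recoverable(sched_A))   # False: c2 precedes c1, as in Schedule A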
Recoverable schedules can be further refined:
• Cascadeless schedule: A schedule in which a transaction T2 cannot read an item X until the
transaction T1 that last wrote X has committed.
• The set of cascadeless schedules is a subset of the set of recoverable schedules.
Schedules requiring cascaded rollback: A schedule in which an uncommitted transaction T2
that read an item that was written by a failed transaction T1 must be rolled back.
• Example: Schedule B below is not cascadeless because T2 reads the value of X that was written
by T1 before T1 commits
• If T1 aborts (fails), T2 must also be aborted (rolled back) resulting in cascading rollback
• To make it cascadeless, the r2(X) of T2 must be delayed until T1 commits (or aborts and rolls
back the value of X to its previous value) – see Schedule C
• Schedule B: r1(X); w1(X); r2(X); w2(X); r1(Y); w1(Y); c1 (or a1);
• Schedule C: r1(X); w1(X); r1(Y); w1(Y); c1; r2(X); w2(X); ...
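A matching sketch for the cascadeless property, under the same illustrative encoding:

# Sketch: a schedule is cascadeless if no transaction reads an item whose
# last writer is a different, still-uncommitted transaction.
def is_cascadeless(schedule):
    last_writer, committed = {}, set()
    for op in schedule:
        kind, t = op[0], op[1]
        if kind == "w":
            last_writer[op[2]] = t
        elif kind == "r":
            w = last_writer.get(op[2])
            if w is not None and w != t and w not in committed:
                return False     # dirty read -> possible cascading rollback
        elif kind == "c":
            committed.add(t)
    return True

sched_B = [("r",1,"X"), ("w",1,"X"), ("r",2,"X"), ("w",2,"X"),
           ("r",1,"Y"), ("w",1,"Y"), ("c",1)]
sched_C = [("r",1,"X"), ("w",1,"X"), ("r",1,"Y"), ("w",1,"Y"), ("c",1),
           ("r",2,"X"), ("w",2,"X")]
print(is_cascadeless(sched_B), is_cascadeless(sched_C))   # False True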
Equivalence of Schedules
• Result equivalent: Two schedules are called result equivalent if they produce the same final state
of the database.
• Difficult to determine without analyzing the internal operations of the transactions, which is not
feasible in general.
• May also get result equivalence by chance for a particular input parameter even though schedules
are not equivalent in general (see Figure 21.6, next slide)
• Conflict equivalent: Two schedules are conflict equivalent if the relative order of any two
conflicting operations is the same in both schedules.
• Commonly used definition of schedule equivalence
• Two operations are conflicting if:
– They access the same data item X
– They are from two different transactions
– At least one is a write operation
• Read-write conflict example: r1(X) and w2(X)
• Write-write conflict example: w1(Y) and w2(Y)
• Changing the order of conflicting operations generally causes a different outcome
• Example: changing r1(X); w2(X) to w2(X); r1(X) means that T1 will read a different value for X
• Example: changing w1(Y); w2(Y) to w2(Y); w1(Y) means that the final value for Y in the
database can be different
• Note that read operations are not conflicting; changing r1(Z); r2(Z) to r2(Z); r1(Z) does not
change the outcome
• Conflict equivalence of schedules is used to determine which schedules are correct in general
(serializable)
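The conflict test in the bullets above can be written as a small predicate; a sketch using the same illustrative operation encoding as earlier:

# Two operations conflict iff: same item, different transactions, at least one write.
def conflicts(op1, op2):
    k1, t1, x1 = op1
    k2, t2, x2 = op2
    return x1 == x2 and t1 != t2 and "w" in (k1, k2)

print(conflicts(("r",1,"X"), ("w",2,"X")))   # True  (read-write)
print(conflicts(("w",1,"Y"), ("w",2,"Y")))   # True  (write-write)
print(conflicts(("r",1,"Z"), ("r",2,"Z")))   # False (read-read)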
A schedule S is said to be serializable if it is conflict equivalent to some serial schedule S’.
• A serializable schedule is considered to be correct because it is equivalent to a serial schedule,
and any serial schedule is considered to be correct
– It will leave the database in a consistent state.
– The interleaving is appropriate and will result in a state as if the transactions were serially
executed yet will achieve efficiency due to concurrent execution and interleaving of
operations from different transactions.
• Serializability is generally hard to check at run-time:
– Interleaving of operations is generally handled by the operating system through the
process scheduler
– Difficult to determine beforehand how the operations in a schedule will be interleaved
– Transactions are continuously started and terminated
Practical approach:
• Come up with methods (concurrency control protocols) to ensure serializability.
• DBMS concurrency control subsystem will enforce the protocol rules and thus guarantee
serializability of schedules
• Current approach used in most DBMSs:
– Use of locks with two phase locking.
Testing for conflict serializability
Algorithm 21.1:
• Looks at only r(X) and w(X) operations in a schedule
• Constructs a precedence graph (serialization graph) – one node for each transaction, plus
directed edges
• An edge is created from Ti to Tj if one of the operations in Ti appears before a conflicting
operation in Tj
• The schedule is serializable if and only if the precedence graph has no cycles.
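A Python sketch of this test — build the precedence graph from conflicting operation pairs and check for cycles with a DFS (operation encoding as in the earlier sketches):

# Sketch of Algorithm 21.1: precedence (serialization) graph + cycle detection.
def is_conflict_serializable(schedule):
    txns = {op[1] for op in schedule}
    edges = {t: set() for t in txns}
    ops = [op for op in schedule if op[0] in ("r", "w")]
    for i, (k1, t1, x1) in enumerate(ops):
        for k2, t2, x2 in ops[i + 1:]:
            if x1 == x2 and t1 != t2 and "w" in (k1, k2):
                edges[t1].add(t2)            # edge Ti -> Tj for each conflict
    WHITE, GRAY, BLACK = 0, 1, 2             # serializable iff graph is acyclic
    color = {t: WHITE for t in txns}
    def has_cycle(t):
        color[t] = GRAY
        for u in edges[t]:
            if color[u] == GRAY or (color[u] == WHITE and has_cycle(u)):
                return True
        color[t] = BLACK
        return False
    return not any(color[t] == WHITE and has_cycle(t) for t in txns)

sched = [("r",1,"X"), ("r",2,"X"), ("w",1,"X"), ("r",1,"Y"),
         ("w",2,"X"), ("w",1,"Y")]
print(is_conflict_serializable(sched))   # False: cycle T1 -> T2 -> T1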
• With SQL, there is no explicit Begin Transaction statement. Transaction initiation is done
implicitly when particular SQL statements are encountered.
• Every transaction must have an explicit end statement, which is either a COMMIT or
ROLLBACK.
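As a small illustration using SQLite from Python (rather than embedded SQL; the table and column names are invented for the example), the transaction begins implicitly and must end with COMMIT or ROLLBACK:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (acct_no INTEGER PRIMARY KEY, balance REAL)")
conn.execute("INSERT INTO account VALUES (1, 500.0), (2, 100.0)")
conn.commit()

try:
    # A transaction begins implicitly with the first UPDATE statement
    conn.execute("UPDATE account SET balance = balance - 100 WHERE acct_no = 1")
    conn.execute("UPDATE account SET balance = balance + 100 WHERE acct_no = 2")
    conn.commit()        # explicit COMMIT ends the transaction successfully
except sqlite3.Error:
    conn.rollback()      # explicit ROLLBACK undoes all of its changes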
Characteristics specified by a SET TRANSACTION statement in SQL:
⚫ Access mode: READ ONLY or READ WRITE. The default is READ WRITE unless the isolation level READ UNCOMMITTED is specified, in which case READ ONLY is assumed.
⚫ Diagnostics size n: specifies an integer value n, indicating the number of conditions that can be held simultaneously in the diagnostic area (used to supply run-time feedback information to the calling program for SQL statements executed in the program).
⚫ Isolation level: one of READ UNCOMMITTED, READ COMMITTED, REPEATABLE READ, or SERIALIZABLE. The default is SERIALIZABLE.
Shared/exclusive lock compatibility (can a requested lock be granted, given the mode already held?):

            Read (S)   Write (X)
Read (S)    Yes        No
Write (X)   No         No

A transaction can be blocked (forced to wait) if the item is held by other transactions in a conflicting lock mode. Conflicts are write-write or read-write (read-read is not conflicting).
Two-Phase Locking Techniques: Essential components
(i) Lock Manager: Subsystem of DBMS that manages locks on data items.
(ii) Lock table: The lock manager uses it to store information about locked data items, such as: data item id, transaction id, lock mode, list of waiting transaction ids, etc. One simple way to implement a lock table is through a linked list; alternatively, a hash table with the item id as hash key can be used.
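A minimal sketch of a hash-table-based lock table with shared/exclusive modes and a wait list (a simplification: no lock upgrades or wakeups):

# Sketch of a lock manager's lock table keyed by item id (hash table variant).
class LockTable:
    def __init__(self):
        # item id -> {"mode": "S"|"X", "holders": set, "waiting": list}
        self.table = {}

    def request(self, item, txn, mode):
        # Return True if granted, False if txn must wait (is blocked).
        e = self.table.get(item)
        if e is None:
            self.table[item] = {"mode": mode, "holders": {txn}, "waiting": []}
            return True
        if e["mode"] == "S" and mode == "S":   # read-read is compatible
            e["holders"].add(txn)
            return True
        e["waiting"].append((txn, mode))       # read-write / write-write conflict
        return False

lt = LockTable()
print(lt.request("X", "T1", "S"))   # True
print(lt.request("X", "T2", "S"))   # True  (shared with T1)
print(lt.request("X", "T3", "X"))   # False (T3 is blocked, added to wait list)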
1. Deadlock detection
The system constructs a wait-for graph with one node for each currently executing transaction; whenever Ti waits for an item that is held by Tj, a directed edge is created from Ti to Tj (Ti is waiting on Tj to unlock the item). The system checks for cycles; if a cycle exists, a state of deadlock is detected (see Figure 22.5).
2. Deadlock prevention
There are several protocols. Some of them are:
1. Conservative 2PL, as we discussed earlier.
2. No-waiting protocol: A transaction never waits; if Ti requests an item that is held by Tj in conflicting
mode, Ti is aborted. Can result in needless transaction aborts because deadlock might have never
occurred
3. Cautious waiting protocol: If Ti requests an item that is held by Tj in conflicting mode, the system
checks the status of Tj; if Tj is not blocked, then Ti waits – if Tj is blocked, then Ti aborts. Reduces the
number of needlessly aborted transactions
4. Wait-die and wound-wait protocols: use transaction timestamps TS; the transaction with the smaller timestamp is the older one.
Wait-die:
If Ti requests an item X that is held by Tj in conflicting mode, then
if TS(Ti) < TS(Tj) then Ti waits (on a younger transaction Tj)
else Ti dies (if Ti is younger than Tj, it aborts)
[In wait-die, transactions only wait on younger transactions that started later, so no cycle ever occurs in
wait-for graph – if transaction requesting lock is younger than that holding the lock, requesting
transaction aborts (dies)]
2. Deadlock prevention (cont.)
4. Wound-wait and wait-die (cont.):
Wound-wait:
If Ti requests an item X that is held by Tj in conflicting mode, then
if TS(Ti) < TS(Tj) then Tj is aborted (Ti wounds younger Tj)
else Ti waits (on an older transaction Tj)
[In wound-wait, transactions only wait on older transactions that started earlier, so no cycle ever occurs
in wait-for graph – if transaction requesting lock is older than that holding the lock, transaction holding
the lock is preemptively aborted]
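Both rules can be condensed into two small decision functions; a sketch assuming smaller timestamp = older transaction:

# Sketch of the two timestamp-based prevention rules.
def wait_die(ts_requester, ts_holder):
    return "wait" if ts_requester < ts_holder else "abort requester (die)"

def wound_wait(ts_requester, ts_holder):
    return "abort holder (wound)" if ts_requester < ts_holder else "wait"

# Older T5 (TS=5) requests an item held by younger T9 (TS=9):
print(wait_die(5, 9))     # wait                  (older waits on younger)
print(wound_wait(5, 9))   # abort holder (wound)  (older wounds younger)
# Younger T9 requests an item held by older T5:
print(wait_die(9, 5))     # abort requester (die)
print(wound_wait(9, 5))   # wait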
Examples of Starvation:
1. In deadlock detection/resolution it is possible that the same transaction may consistently be selected as
victim and rolled-back.
2. In conservative 2PL, a transaction may never get started because all the items needed are never
available at the same time.
3. In Wound-Wait scheme a younger transaction may always be wounded (aborted) by a long running
older transaction which may create starvation.
Timestamp-based concurrency control: each transaction T is assigned a unique timestamp TS(T), and the system maintains for each data item X:
• Read_TS(X): The largest timestamp among all the timestamps of transactions that have successfully
read item X
• Write_TS(X): The largest timestamp among all the timestamps of transactions that have
successfully written X
• When a transaction T requests to read or write an item X, TS(T) is compared with read_TS(X) and
write_TS(X) to determine if request is out-of-order
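A sketch of the basic timestamp ordering checks (assuming the standard rules: a read is rejected if a younger transaction already wrote X, and a write is rejected if a younger transaction already read or wrote X):

# Sketch of basic timestamp ordering checks (returns False = abort T).
read_TS, write_TS = {}, {}   # item -> largest TS of successful readers/writers

def to_read(ts, x):
    if ts < write_TS.get(x, 0):          # a younger transaction already wrote X
        return False                     # out of order: abort and roll back T
    read_TS[x] = max(read_TS.get(x, 0), ts)
    return True

def to_write(ts, x):
    if ts < read_TS.get(x, 0) or ts < write_TS.get(x, 0):
        return False                     # out of order: abort and roll back T
    write_TS[x] = ts
    return True

print(to_write(10, "X"))   # True
print(to_read(5, "X"))     # False: T with TS=5 tries to read what TS=10 wrote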
Multi-version CCMs
In the single-version techniques discussed so far, an implicit assumption is that when a data item is updated, the new value replaces the old value, so only the current version of an item exists. Multi-version techniques instead assume that multiple versions of the same item can coexist and be utilized by the CCM. We discuss two variations of multi-version CCMs:
• Multi-version Timestamp Ordering (TO)
• Multi-version Two-Phase Locking (2PL)
Multi-version Timestamp Ordering: several versions X1, X2, ..., Xk of each data item X are kept, each created by a write operation of a transaction. With each version Xi, a read_TS (read timestamp) and a write_TS (write timestamp) are associated.
read_TS(Xi): The read timestamp of Xi is the largest of all the timestamps of transactions that have
successfully read version Xi.
write_TS(Xi): The write timestamp of Xi is the timestamp of the transaction that wrote version Xi.
A new version of X is created only by a write operation. To ensure serializability, the following two rules are used:
1. If transaction T issues write_item(X), and version i of X has the highest (latest) write_TS(Xi) of all versions of X that is also less than or equal to TS(T), and read_TS(Xi) > TS(T), then abort and roll back T; otherwise, create a new version Xj of X with read_TS(Xj) = write_TS(Xj) = TS(T).
2. If transaction T issues read_item(X), find the version i of X that has the highest write_TS(Xi) of all versions of X that is also less than or equal to TS(T); then return the value of Xi to T, and set read_TS(Xi) to the larger of TS(T) and the current read_TS(Xi).
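A minimal Python sketch of these two rules over a per-item version list (data layout invented for the example; ties and garbage collection of old versions are ignored):

# Sketch of multiversion TO: versions[x] is a list of dicts, one per version.
versions = {"X": [{"value": 0, "read_TS": 0, "write_TS": 0}]}

def latest_version_for(ts, x):
    # Version with the highest write_TS that is still <= TS(T).
    eligible = [v for v in versions[x] if v["write_TS"] <= ts]
    return max(eligible, key=lambda v: v["write_TS"])

def mv_read(ts, x):
    v = latest_version_for(ts, x)
    v["read_TS"] = max(v["read_TS"], ts)   # reads always succeed
    return v["value"]

def mv_write(ts, x, value):
    v = latest_version_for(ts, x)
    if v["read_TS"] > ts:                  # a younger transaction already read v
        return False                       # abort and roll back T
    versions[x].append({"value": value, "read_TS": ts, "write_TS": ts})
    return True

print(mv_read(7, "X"))      # 0; sets read_TS of the initial version to 7
print(mv_write(5, "X", 9))  # False: that version was read by TS 7 > 5
print(mv_write(8, "X", 9))  # True: creates a new version with TS 8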
Multi-version Two-Phase Locking (using certify locks) — steps:
1. X is the committed version of a data item.
2. T creates a local version X' after obtaining a write lock on X.
3. Other transactions can continue to read the committed version X.
4. When T is ready to commit, it obtains a certify lock on X' (certify locks are not compatible with read locks, so T may have to wait until current readers of X finish).
5. The committed version X is replaced by X', and T releases its certify lock.
Validation (Optimistic) Concurrency Control: a transaction executes in three phases.
Read phase: A transaction can read values of committed data items. However, writes are applied only to local copies (versions) of the data items (hence, it can be considered a multiversion CCM).
Validation phase: Serializability is checked by determining any conflicts with other concurrent
transactions. This phase for Ti checks that, for each transaction Tj that is either committed or is in its
validation phase, one of the following conditions holds:
1. Tj completes its write phase before Ti starts its read phase.
2. Ti starts its write phase after Tj completes its write phase, and the read_set(Ti) has no items in
common with the write_set(Tj)
3. Both read_set(Ti) and write_set(Ti) have no items in common with the write_set(Tj), and Tj
completes its read phase before Ti completes its write phase.
When validating Ti, condition (1) is checked first for each transaction Tj, since it is the simplest to check. If (1) is false for a particular Tj, then (2) is checked, and only if (2) is false is (3) checked. If none of these conditions holds for any Tj, the validation fails and Ti is aborted.
Write phase: On a successful validation, transaction updates are applied to the database on disk and
become the committed versions of the data items; otherwise, transactions that fail the validation phase
are restarted.
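A sketch of the validation test, approximating the phase boundaries of each transaction with start/validate/finish timestamps (field names are invented for the example; "finish" marks the end of the write phase, "validate" the end of the read phase):

# Sketch of the validation test for Ti against each committed/validating Tj.
def validate(ti, others):
    for tj in others:
        if tj["finish"] < ti["start"]:                        # condition 1
            continue
        if (tj["finish"] < ti["validate"]
                and not (ti["read_set"] & tj["write_set"])):  # condition 2
            continue
        if (tj["validate"] < ti["validate"]
                and not (ti["read_set"] & tj["write_set"])
                and not (ti["write_set"] & tj["write_set"])): # condition 3
            continue
        return False                                          # validation fails
    return True

t1 = {"read_set": {"X"}, "write_set": {"X"},
      "start": 1, "validate": 5, "finish": 6}
t2 = {"read_set": {"X", "Y"}, "write_set": {"Y"},
      "start": 2, "validate": 7, "finish": 8}
print(validate(t2, [t1]))   # False: t2 read X, which is in t1's write set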
Multiple-Granularity 2PL
The size of a data item is called its granularity. Granularity can be coarse (entire database) or it can be
fine (a tuple (record) or an attribute value of a record). Data item granularity significantly affects
concurrency control performance. Degree of concurrency is low for coarse granularity and high for fine
granularity. Examples of data item granularity (from fine to coarse):
1. A field of a database record (an attribute of a tuple).
2. A database record (a tuple).
3. A disk block.
4. An entire file (relation).
5. The entire database.
The following diagram illustrates a hierarchy of granularity of items from coarse (database) to fine
(record). The root represents an item that includes the whole database, followed by file items
(tables/relations), disk page items within each file, and record items within each disk page.
To manage such hierarchy, in addition to read or shared (S) and write or exclusive (X) locking modes,
three additional locking modes, called intention lock modes are defined:
Intention-shared (IS): indicates that a shared lock(s) will be requested on some descendent nodes(s).
Intention-exclusive (IX): indicates that an exclusive lock(s) will be requested on some descendent
nodes(s).
Shared-intention-exclusive (SIX): indicates that the current node is requested to be locked in shared
mode but an exclusive lock(s) will be requested on some descendent nodes(s).
These locks are applied using the lock compatibility table below. Locking always begins at the root node and proceeds down the tree, while unlocking proceeds in the opposite direction:

        IS    IX    S     SIX   X
IS      Yes   Yes   Yes   Yes   No
IX      Yes   Yes   No    No    No
S       Yes   No    Yes   No    No
SIX     Yes   No    No    No    No
X       No    No    No    No    No
The set of rules that must be followed to produce serializable schedules is:
1. The lock compatibility table must be adhered to.
2. The root of the tree must be locked first, in any mode.
3. A node N can be locked by a transaction T in S or IS mode only if the parent of N is already locked by T in either IS or IX mode.
4. A node N can be locked by T in X, IX, or SIX mode only if the parent of N is already locked by T in either IX or SIX mode.
5. T can lock a node only if it has not unlocked any node (to enforce the 2PL policy).
6. T can unlock a node N only if none of the children of N are currently locked by T.
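A sketch encoding the compatibility matrix above and the parent-mode rules 3 and 4 as lookup functions:

# Sketch: the intention-lock compatibility matrix and the parent-mode rules.
COMPAT = {  # held mode -> set of request modes that are compatible with it
    "IS":  {"IS", "IX", "S", "SIX"},
    "IX":  {"IS", "IX"},
    "S":   {"IS", "S"},
    "SIX": {"IS"},
    "X":   set(),
}

def compatible(held, requested):
    return requested in COMPAT[held]

def parent_mode_ok(requested, parent_mode):
    # Rules 3 and 4: which parent locks permit locking a child node.
    if requested in ("S", "IS"):
        return parent_mode in ("IS", "IX")
    if requested in ("X", "IX", "SIX"):
        return parent_mode in ("IX", "SIX")
    return False

print(compatible("IX", "IS"))     # True: another txn may intend-share below
print(compatible("S", "IX"))      # False: shared file blocks intent-to-write
print(parent_mode_ok("X", "IX"))  # True: record X lock under a file IX lock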
Database Recovery Techniques
Purpose of Database Recovery
• To bring the database into a consistent state after a failure occurs.
• To ensure the transaction properties of Atomicity (a transaction must be done in its entirety;
otherwise, it has to be rolled back) and Durability (a committed transaction cannot be canceled, and
all its updates must be applied permanently to the database).
• After a failure, the DBMS recovery manager is responsible for bringing the system into a consistent
state before transactions can resume.
Types of Failures
• Transaction failure: Transactions may fail because of errors, incorrect input, deadlock, incorrect
synchronization, etc.
• System failure: System may fail because of application error, operating system fault, RAM failure,
etc.
• Media failure: Disk head crash, power disruption, etc.
For a write_item log entry, the old value of the item before modification (BFIM – BeFore IMage) and the new value after modification (AFIM – AFter IMage) are stored; the BFIM is needed for UNDO and the AFIM for REDO. Back P and Next P pointers link each log record to the previous and next log records of the same transaction.
Database Cache: A set of main memory buffers; each buffer typically holds contents of one disk block.
Stores the disk blocks that contain the data items being read and written by the database transactions.
Data Item Address: (disk block address, offset, size in bytes).
Cache Table: Table of entries of the form (buffer addr, disk block addr, modified bit, pin/unpin bit, ...) to indicate which disk blocks are currently in the cache buffers.
Data items to be modified are first copied into database cache by the Cache Manager (CM) and after
modification they are flushed (written) back to the disk. The flushing is controlled by Modified and
Pin-Unpin bits.
Pin-Unpin: If a buffer is pinned, it cannot be written back to disk until it is unpinned.
Modified: Indicates that one or more data items in the buffer have been changed.
Data Update
• Immediate Update: A data item modified in cache can be written back to disk before the
transaction commits.
• Deferred Update: A modified data item in the cache cannot be written back to disk till after the
transaction commits (buffer is pinned).
• Shadow update: The modified version of a data item does not overwrite its disk copy but is written
at a separate disk location (new version).
• In-place update: The disk version of the data item is overwritten by the cache version.
Roll-back (UNDO) is needed for failed transactions whose writes may have already been flushed from cache to the database on disk; roll-forward (REDO, rolling forward) is needed for committed transactions whose writes may have not yet been flushed from cache to disk.
Undo: Restore all BFIMs from log to database on disk. UNDO proceeds backward in log (from most
recent to oldest UNDO).
Redo: Restore all AFIMs from log to database on disk. REDO proceeds forward in log (from oldest to
most recent REDO).
The information needed for recovery must be written to the log file on disk before changes are made to
the database on disk. Write-Ahead Logging (WAL) protocol consists of two rules:
For Undo: Before a data item’s AFIM is flushed to the database on disk (overwriting the BFIM), its BFIM must be written to the log, and the log must be saved to disk.
For Redo: Before a transaction executes its commit operation, all its AFIMs must be written to the log
and the log must be saved on a stable store.
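A toy sketch of the two WAL rules — the relevant log records are forced to stable storage before a page flush and before commit (all structures invented for the example):

# Sketch of write-ahead logging: the log record (with BFIM/AFIM) is forced
# to stable storage before the data page may overwrite the BFIM on disk.
log = []            # in-memory log buffer
stable_log = []     # log records already forced to disk

def log_write(txn, item, bfim, afim):
    log.append({"txn": txn, "op": "write", "item": item,
                "BFIM": bfim, "AFIM": afim})

def force_log():
    stable_log.extend(log)   # save the log buffer to disk
    log.clear()

def flush_page(item):
    force_log()              # WAL (undo rule): BFIM reaches the log first
    # ... now the buffer holding `item` may overwrite the disk copy ...

def commit(txn):
    log.append({"txn": txn, "op": "commit"})
    force_log()              # WAL (redo rule): all AFIMs on the stable log first

log_write("T1", "X", bfim=100, afim=0)
flush_page("X")
commit("T1")
print(len(stable_log))       # 2: the write record and the commit record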
Checkpointing
Checkpointing is used to minimize the REDO operations required during recovery. The following steps define a checkpoint operation:
1. Suspend execution of transactions temporarily.
2. Force write modified buffers from cache to disk.
3. Write a [checkpoint] record to the log, save the log to disk. This record also includes other info.,
such as the list of active transactions at the time of checkpoint.
4. Resume normal transaction execution.
During recovery, REDO is required only for transactions that committed after the last [checkpoint] record in the log. Steps 1 and 4 above are not realistic, because they suspend transaction processing.
A variation of checkpointing called fuzzy checkpointing allows transactions to continue execution
during the checkpointing process.
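A toy sketch of the four checkpoint steps over an invented cache/disk/log layout (step 1 is implicit in this single-threaded sketch):

# Sketch of the four checkpoint steps over a toy cache and log.
def checkpoint(cache, disk, log, active_txns):
    # 1. Suspend transaction execution (implicit here).
    # 2. Force-write all modified cache buffers to disk.
    for block_id, buf in cache.items():
        if buf["modified"]:
            disk[block_id] = dict(buf["data"])
            buf["modified"] = False
    # 3. Write a [checkpoint] record (with the active-transaction list), save the log.
    log.append({"op": "checkpoint", "active": list(active_txns)})
    # 4. Resume normal transaction execution.

disk, log = {"B1": {"X": 100}}, []
cache = {"B1": {"data": {"X": 42}, "modified": True}}
checkpoint(cache, disk, log, active_txns={"T2"})
print(disk["B1"]["X"], log[-1]["op"])   # 42 checkpoint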
Steal: Cache buffers updated by a transaction can be flushed to disk before the transaction commits (recovery may require UNDO). No-Steal: Buffers cannot be flushed until the transaction commits.
No-Force: Some cache flushing may be deferred till after the transaction commits (recovery may require REDO). Force: All modified buffers are flushed when the transaction commits.
These give rise to four different ways of handling recovery:
Steal/No-Force (Undo/Redo), Steal/Force (Undo/No-redo), No-Steal/No-Force (Redo/No-undo), No-Steal/Force (No-undo/No-redo).
[Figure: shadow update — the current versions X, Y and the shadow (old) versions X', Y' of the data items are kept at separate locations in the database on disk.]