Concurrency Control
Table of contents
• Objectives
• Introduction
• Context
• Concurrent access to data
– Concept of transaction
– Transaction states and additional operations
– Interleaved concurrency
∗ Interleaved vs simultaneous concurrency
∗ Genuine vs appearance of concurrency
– Read and write operations
• Need for concurrency control
– The lost update problem
– Uncommitted dependency (or dirty read / temporary update)
– Inconsistent analysis
– Other problems
• Need for recovery
– Transaction problems
– Desirable properties of transactions (ACID)
∗ Atomicity
∗ Consistency
∗ Isolation
∗ Durability or permanency
• Serialisability
– Schedules of transactions
– Serial schedules
– Non-serial schedules
– Serialisable schedule
• Locking techniques for concurrency control
– Types of locks
∗ Binary locks
∗ Shared and exclusive locks
– Use of the locking scheme
– Guaranteeing serialisability by two-phase locking (2PL)
∗ Basic 2PL
∗ Conservative 2PL
∗ Strict 2PL
• Dealing with deadlock and livelock
– Deadlock detection with wait-for graph
– Ordering data items deadlock prevention protocol
– Wait-die or wound-wait deadlock prevention protocol
– Livelock
• Discussion topics
– Discussion topic 1
– Discussion topic 2
– Discussion topic 3
• Additional content and exercises
– Additional content
∗ Concurrency control based on timestamp ordering
∗ Multiversion concurrency control techniques
∗ Multiversion techniques based on timestamp ordering
∗ Multiversion two-phase locking
∗ Granularity of data items
– Additional exercises
∗ Extension exercise 1
∗ Extension exercise 2
∗ Extension exercise 3
∗ Extension exercise 4
Objectives
Introduction
In parallel with this chapter, you should read Chapter 20 of Thomas Connolly
and Carolyn Begg, “Database Systems: A Practical Approach to Design, Imple-
mentation, and Management”, (5th edn.).
The purpose of this chapter is to introduce the fundamental technique of con-
currency control, which provides database systems with the ability to handle
many users accessing data simultaneously. In addition, this chapter helps you
understand the functionality of database management systems, with special ref-
erence to online transaction processing (OLTP). The chapter also describes the
problems that arise out of the fact that users wish to query and update stored
data at the same time, and the approaches developed to address these problems,
together with their respective strengths and weaknesses in a range of practical
situations.
There are a number of concepts that are technical and unfamiliar. You will be
expected to be able to handle these concepts but not to have any knowledge
of the detailed algorithms involved. This chapter fits closely with the one on
backup and recovery, so you may want to revisit this chapter later in the course
to review the concepts. It will become clear from the information on concurrency
control that there are a number of circumstances where recovery procedures may
need to be invoked to salvage previous or currently executing transactions. The
material covered here will be further extended in the chapter on distributed
database systems, where we shall see how effective concurrency control can be
implemented across a computer network.
Context
Many criteria can be used to classify DBMSs, one of which is the number of
users supported by the system. Single-user systems support only one user at a
time and are mostly used with personal computers. Multi-user systems, which
include the majority of DBMSs, support many users concurrently.
In this chapter, we will discuss the concurrency control problem, which occurs
when multiple transactions submitted by various users interfere with one another
in a way that produces incorrect results. We will start the chapter by introducing
some basic concepts of transaction processing. Why concurrency control and
recovery are necessary in a database system is then discussed. The concept of
an atomic transaction and additional concepts related to transaction processing
in database systems are introduced. The concepts of atomicity, consistency,
isolation and durability – the so-called ACID properties that are considered
desirable in transactions - are presented.
The concept of schedules of executing transactions and characterising the recov-
erability of schedules is introduced, with a detailed discussion of the concept of
serialisability of concurrent transaction executions, which can be used to define
correct execution sequences of concurrent transactions.
We will also discuss recovery from transaction failures. A number of concur-
rency control techniques that are used to ensure noninterference or isolation
of concurrently executing transactions are discussed. Most of these techniques
ensure serialisability of schedules, using protocols or sets of rules that guarantee
serialisability. One important set of protocols employs the technique of locking
data items, to prevent multiple transactions from accessing the items concur-
rently. Another set of concurrency control protocols use transaction timestamps.
A timestamp is a unique identifier for each transaction generated by the system.
Concurrency control protocols that use locking and timestamp ordering to en-
sure serialisability are both discussed in this chapter.
An overview of recovery techniques will be presented in a separate chapter.
Concurrent access to data
Concept of transaction
For recovery purposes, a system always keeps track of when a transaction starts,
terminates, and commits or aborts. Hence, the recovery manager keeps track of
the following transaction states and operations:
• BEGIN_TRANSACTION: This marks the beginning of transac-
tion execution.
• READ or WRITE: These specify read or write operations on the
database items that are executed as part of a transaction.
• END_TRANSACTION: This specifies that read and write opera-
tions have ended and marks the end limit of transaction execution. How-
ever, at this point it may be necessary to check whether the changes in-
troduced by the transaction can be permanently applied to the database
(committed) or whether the transaction has to be aborted because it vio-
lates concurrency control, or for some other reason (rollback).
• COMMIT_TRANSACTION: This signals a successful end of the
transaction so that any changes (updates) executed by the transaction can
be safely committed to the database and will not be undone.
• ROLLBACK (or ABORT): This signals the transaction has ended
unsuccessfully, so that any changes or effects that the transaction may
have applied to the database must be undone.
In addition to the preceding operations, some recovery techniques require addi-
tional operations that include the following:
• UNDO: Similar to rollback, except that it applies to a single operation
rather than to a whole transaction.
• REDO: This specifies that certain transaction operations must be redone
to ensure that all the operations of a committed transaction have been
applied successfully to the database.
A state transition diagram is shown below:
It shows clearly how a transaction moves through its execution states. In the
diagram, circles depict a particular state; for example, the state where a transac-
tion has become active. Lines with arrows between circles indicate transitions
or changes between states; for example, read and write, which correspond to
computer processing of the transaction.
A transaction goes into an active state immediately after it starts execution,
where it can issue read and write operations. When the transaction ends, it
moves to the partially committed state. At this point, some concurrency control
techniques require that certain checks be made to ensure that the transaction
did not interfere with other executing transactions. In addition, some recovery
protocols are needed to ensure that a system failure will not result in inability
to record the changes of the transaction permanently. Once both checks are
successful, the transaction is said to have reached its commit point and enters
the committed state. Once a transaction enters the committed state, it has
concluded its execution successfully.
However, a transaction goes to the failed state if one of the checks fails or
if it is aborted during its active state. The transaction may then have to be
rolled back to undo the effect of its write operations on the database. The
terminated state corresponds to the transaction leaving the system. Failed or
aborted transactions may be restarted later, either automatically or after being
resubmitted, as brand new transactions.
Interleaved concurrency
Interleaved concurrency is illustrated by programs C and D in the figure below.
Most of the theory concerning concurrency control in databases is developed in
terms of interleaved concurrency, although it may be adapted to simultaneous
concurrency.
We deal with transactions at the level of data items and disk blocks for the
purpose of discussing concurrency control and recovery techniques. At this
level, the database access operations that a transaction can include are:
• read_item(X): Reads a database item named X into a program variable
also named X.
• write_item(X): Writes the value of program variable X into the database
item named X.
Executing a read_item(X) command includes the following steps:
1. Find the address of the disk block that contains item X.
2. Copy the disk block into a buffer in main memory if that disk block is not
already in a main memory buffer.
3. Copy item X from the buffer to the program variable named X.
Executing a write_item(X) command includes the following steps:
1. Find the address of the disk block that contains item X.
2. Copy the disk block into a buffer in main memory if that disk block is not
already in a main memory buffer.
3. Copy item X from the program variable named X into its correct location
in the buffer.
4. Store the updated block from the buffer back to disk (either immediately
or at some later point in time).
Step 4 is the one that actually updates the database on disk. In some cases the
buffer is not immediately stored to disk, in case additional changes are to be
made to the buffer. Usually, the decision about when to store back a modified
disk block that is in a main memory buffer is handled by the recovery manager
or the operating system.
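To make these steps concrete, here is a minimal Python sketch of read_item and write_item operating on a toy disk and buffer pool. The block layout, item placement and variable names are illustrative assumptions, not the implementation of any particular DBMS.

```python
# A minimal sketch of read_item(X) and write_item(X) moving data between
# disk blocks and main-memory buffers (illustrative assumptions only).

DISK = {0: {"X": 80, "Y": 90}}      # block number -> items stored in that block
BUFFER_POOL = {}                    # block number -> in-memory copy of the block
BLOCK_OF = {"X": 0, "Y": 0}         # item name -> block that contains it
program_vars = {}                   # the transaction's program variables

def read_item(name):
    block_no = BLOCK_OF[name]                        # 1. find the block's address
    if block_no not in BUFFER_POOL:                  # 2. copy block into a buffer if needed
        BUFFER_POOL[block_no] = dict(DISK[block_no])
    program_vars[name] = BUFFER_POOL[block_no][name] # 3. copy item into program variable

def write_item(name, flush=False):
    block_no = BLOCK_OF[name]                        # 1. find the block's address
    if block_no not in BUFFER_POOL:                  # 2. copy block into a buffer if needed
        BUFFER_POOL[block_no] = dict(DISK[block_no])
    BUFFER_POOL[block_no][name] = program_vars[name] # 3. copy program variable into buffer
    if flush:                                        # 4. write the block back to disk
        DISK[block_no] = dict(BUFFER_POOL[block_no]) #    (may be deferred in practice)

read_item("X")
program_vars["X"] -= 5
write_item("X", flush=True)
print(DISK[0]["X"])   # 75
```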
A transaction will include read and write operations to access the database.
The figure below shows examples of two very simple transactions. Concurrency
control and recovery mechanisms are mainly concerned with the database access
commands in a transaction.
The above two transactions submitted by any two different users may be exe-
cuted concurrently and may access and update the same database items (e.g. X).
If this concurrent execution is uncontrolled, it may lead to problems such as an
inconsistent database. Some of the problems that may occur when concurrent
transactions execute in an uncontrolled manner are discussed in the next section.
Activity 1 - Looking up glossary entries
In the Concurrent Access to Data section of this chapter, the following phrases
have glossary entries:
• transaction
• interleaving
• COMMIT
• ROLLBACK
1. In your own words, write a short definition for each of these terms.
2. Look up and make notes of the definition of each term in the module
glossary.
3. Identify (and correct) any important conceptual differences between your
definition and the glossary entry.
Review question 1
1. Explain what is meant by a transaction. Discuss the meaning of transac-
tion states and operations.
2. In your own words, write the key feature(s) that would distinguish an
interleaved concurrency from a simultaneous concurrency.
3. Use an example to illustrate your point(s) given in 2.
4. Discuss the actions taken by the read_item and write_item operations on
a database.
Need for concurrency control
Concurrency is the ability of the DBMS to process more than one transaction
at a time. This section briefly reviews several problems that can occur when
concurrent transactions execute in an uncontrolled manner. Concrete examples
are given to illustrate the problems in detail. The related activities and learning
are given to illustrate the problems in details. The related activities and learning
tasks that follow give you a chance to evaluate the extent of your understanding
of the problems. An important learning objective for this section of the chapter
is to understand the different types of problems of concurrent executions in
OLTP, and appreciate the need for concurrency control.
We illustrate some of the problems by referring to a simple airline reservation
database in which a record is stored for each airline flight. Each record
includes the number of reserved seats on that flight as a named data item,
among other information. Recall the two transactions T1 and T2 introduced
previously:
Transaction T1 cancels N reservations from one flight, whose number of reserved
seats is stored in the database item named X, and reserves the same number of
seats on another flight, whose number of reserved seats is stored in the database
item named Y. A simpler transaction T2 just reserves M seats on the first flight
referenced in transaction T1. To simplify the example, the additional portions
of the transactions are not shown, such as checking whether a flight has enough
seats available before reserving additional seats.
When an airline reservation database program is written, it has the flight num-
bers, their dates and the number of seats available for booking as parameters;
hence, the same program can be used to execute many transactions, each with
different flights and number of seats to be booked. For concurrency control
purposes, a transaction is a particular execution of a program on a specific date,
flight and number of seats. The transactions T1 and T2 are specific executions
of the programs that refer to the specific flights whose numbers of seats are
stored in data item X and Y in the database. Now let’s discuss the types of
problems we may encounter with these two transactions.
The lost update problem
The lost update problem occurs when two transactions that access the same
database items have their operations interleaved in a way that makes the value
of some database item incorrect; that is, an update operation from one transaction
overwrites an update made by a second transaction on the same data item.
An example will explain the problem clearly. Suppose the two transactions T1
and T2 introduced previously are submitted at approximately the same time.
This can happen when two travel agency staff book flights for customers at
more or less the same time, from the same or different offices. Suppose that
their operations are interleaved by the operating system as shown in the figure
below:
The above interleaved operation leads to an incorrect value for data item X
because, at time step 3, T2 reads the original value of X before T1 has written
its updated value to the database; hence the update resulting from T1 is
lost. For example, if X = 80 originally (there were 80 reservations on the flight),
N = 5, T1 cancels 5 seats on the flight corresponding to X and reserves them
on the flight corresponding to Y, and M = 4, T2 reserves 4 seats on X.
The final result should be X = 80 – 5 + 4 = 79; but in the concurrent operations
of the figure above, it is X = 84 because the update that cancelled 5 seats in T1
was lost.
The detailed value updating in the flight reservation database in the above
example is shown below:
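The interleaving can also be replayed in a few lines of Python. This is only a sketch of the schedule described above, with local variables x1 and x2 standing in for each transaction's private copy of X; the values are those used in the text.

```python
# A minimal sketch replaying the interleaved schedule: X = 80 reservations,
# N = 5 (T1 cancels), M = 4 (T2 reserves).

X = 80          # database copy of item X
N, M = 5, 4

x1 = X          # time 1: T1 read_item(X)
x1 = x1 - N     # time 2: T1 subtracts N locally
x2 = X          # time 3: T2 read_item(X) -- reads the OLD value, before T1 writes
x2 = x2 + M     # time 4: T2 adds M locally
X = x1          # time 5: T1 write_item(X) -> X = 75
X = x2          # time 6: T2 write_item(X) -> X = 84; T1's update is overwritten (lost)

print(X)        # 84, although the correct result is 80 - 5 + 4 = 79
```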
Uncommitted dependency (dirty read / temporary update)
This problem occurs when one transaction updates a database item and then fails
for some reason, while the updated item is accessed by another transaction before
it is changed back to its original value.
The rollback of transaction T1 may be due to a system crash, and transaction
T2 may already have terminated by that time, in which case the crash would
not cause a rollback to be issued for T2. The following situation is even more
unacceptable:
Inconsistent analysis
Inconsistent analysis occurs when a transaction reads several values, but a sec-
ond transaction updates some of these values during the execution of the first.
This problem is significant, for example, if one transaction is calculating an
aggregate summary function on a number of records while other transactions
are updating some of these records. The aggregate function may calculate some
values before they are updated and others after they are updated. This causes
an inconsistency.
For example, suppose that a transaction T3 is calculating the total number of
reservations on all the flights; meanwhile, transaction T1 is executing. If the
interleaving of operations shown below occurs, the result of T3 will be off by
amount N, because T3 reads the value of X after N seats have been subtracted
from it, but reads the value of Y before those N seats have been added to it.
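The effect can be sketched in a few lines of Python. The third flight Z and the initial values below are illustrative assumptions; the point is only that T3's sum ends up off by N even though the database total is unchanged.

```python
# A minimal sketch of the inconsistent analysis problem: T3 sums the
# reservations on all flights while T1 moves N seats from flight X to flight Y.

X, Y, Z = 40, 50, 60     # reserved seats on three flights (illustrative values)
N = 5

assert X + Y + Z == 150  # correct total, before or after T1 runs in full

total = 0
total += Z               # T3 reads Z
X = X - N                # T1: read_item(X); X := X - N; write_item(X)
total += X               # T3 reads X AFTER N seats were subtracted
total += Y               # T3 reads Y BEFORE N seats are added
Y = Y + N                # T1: read_item(Y); Y := Y + N; write_item(Y)

print(total)             # 145 -- off by N, although the database total is still 150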
Other problems
Another problem that may occur is the unrepeatable read, where a transaction
T1 reads an item twice, and the item is changed by another transaction T2
between reads. Hence, T1 receives different values for its two reads of the same
item.
A phantom record can occur when a transaction inserts a record into the
database and the record becomes available to other transactions before the
inserting transaction completes. If that transaction then fails and is rolled back,
the record appears to other transactions to have vanished from the database.
Exercise 1
Typical problems in a multi-user environment when concurrent access
to data is allowed
Some problems may occur in a multi-user environment when concurrent access
to the database is allowed. These problems may cause data stored in the multi-user
DBMS to be damaged or destroyed. Four interleaved transaction schedules are
given below. Identify what type of problem each one exhibits.
Exercise 2
Inconsistent analysis problem
If other transactions are able to update the data, an interleaved calculation of
aggregates may include some values read before they were updated and some read
afterwards, producing an incorrect summary. Consider the situation below, in which a number
of account records have the following values:
Review question 2
1. What is meant by interleaved concurrent execution of database transac-
tions in a multi-user system? Discuss why concurrency control is needed,
and give informal examples.
Need for recovery
Transaction problems
The practical aspects of transactions are about keeping control when things go
wrong. There are a variety of causes of transaction failure. These may include:
1. Concurrency control enforcement: The concurrency control method may abort
the transaction, to be restarted later, because it violates serialisability
(the need for transactions to be executed in an equivalent way as would
have resulted if they had been executed sequentially), or because several
transactions are in a state of deadlock.
2. Local error detected by the transaction: During transaction executions,
certain conditions may occur that necessitate cancellation of the trans-
action (e.g. an account with insufficient funds may cause a withdrawal
transaction from that account to be cancelled). This may be done by a
programmed ABORT in the transaction itself.
3. A transaction or system error: Some operation in the transaction may
cause it to fail, such as integer overflow, division by zero, erroneous pa-
rameter values or logical programming errors.
4. User interruption of the transaction during its execution, e.g. by issuing a
control-C in a VAX/ VMS or UNIX environment.
5. System software errors that result in abnormal termination or destruction
of the database management system.
6. Crashes due to hardware malfunction, resulting in loss of internal (main
and cache) memory (otherwise known as system crashes).
7. Disk malfunctions such as read or write malfunction, or a disk read/write
head crash. This may happen during a read or write operation of the
transaction.
8. Natural physical disasters and catastrophes such as fires, earthquakes or
power surges; sabotages, intentional contamination with computer viruses,
or destruction of data or facilities by operators or users.
Failures of types 1 to 6 are more common than those of types 7 or 8. Whenever
a failure of type 1 through 6 occurs, the system must keep sufficient information
to recover from the failure. Disk failures and other catastrophic failures of types
7 or 8 do not happen frequently; when they do occur, recovery is a major task.
Desirable properties of transactions (ACID)
The acronym ACID denotes the properties of any well-formed transaction. A
transaction that violates these principles can leave the database in an inconsistent
state or interfere with concurrently executing transactions. A brief description of
each property is given first, followed by detailed discussions.
1. Atomicity: A transaction is an atomic unit of processing; it is either
performed in its entirety or not performed at all. A transaction does not
partly happen.
2. Consistency: The database state is consistent at the end of a transaction.
3. Isolation: A transaction should not make its updates visible to other
transactions until it is committed; this property, when enforced strictly,
solves the temporary update problem and makes cascading rollbacks (see
‘Atomicity’ below) of transactions unnecessary.
4. Durability: When a transaction has made a change to the database
state and the change is committed, this change is permanent and should
be available to all other transactions.
Atomicity
Atomicity is built on the idea that you cannot split an atom. If a transaction
starts, it must finish – or not happen at all. This means if it happens, it happens
completely; and if it fails to complete, there is no effect on the database state.
There are a number of implications. One is that transactions should not be
nested, or at least cannot be nested easily. A nested transaction is where one
transaction is allowed to initiate another transaction (and so on). If one of
the nested transactions fails, the impact on other transactions leads to what is
known as a cascading rollback.
Transactions need to be identified. The database management system can assign
a serial number or timestamp to each transaction. This identifier is required so
each activity can be logged. When a transaction fails for any reason, the log is
used to roll back and recover the correct state of the database on a transaction
basis.
Rolling back and committing transactions
There are two ways a transaction can terminate. If it executes to completion,
then the transaction is said to be committed and the database is brought to
a new consistent state. Committing a transaction signals a successful end-of-
transaction. It tells the transaction manager that a logical unit of work has
been successfully completed, the database is (or should be) in a consistent state
again, and all of the updates made by that unit of work (transaction) can now be
made permanent. This point is known as a synchronisation point. It represents
the boundary between two consecutive transactions and corresponds to the end
of a logical unit of work.
The other way a transaction may terminate is that the transaction is aborted
and the incomplete transaction is rolled back and restored to the consistent state
it was in before the transaction started. A rollback signals an unsuccessful end of
a transaction. It tells the transaction manager that something has gone wrong,
the database might be in an inconsistent state and all of the updates made by
the logical unit of work so far must be undone. A committed transaction cannot
be aborted and rolled back.
Atomicity is maintained by commitment and rollback. The major SQL oper-
ations that are under explicit user control that establish such synchronisation
points are COMMIT and ROLLBACK. Otherwise (default situation), an entire
program is regarded as one transaction.
Consistency
The database must start and finish in a consistent state. You should note that
in contrast, during a transaction, there will be times where the database is
inconsistent. Some part of the data will have been changed while other data
has yet to be.
A correct execution of the transaction must take the database from one con-
sistent state to another, i.e. if the database was in a consistent state at the
beginning of transaction, it must be in a consistent state at the end of that
transaction.
The consistency property is generally considered to be the responsibility of the
programmers who write the database programs or the DBMS module that en-
forces integrity constraints. It is also partly the responsibility of the database
management system to ensure that none of the specified constraints are violated.
The implications of this are the importance of specifying the constraints and
domains within the schema, and the validation of transactions as an essential
part of the transactions.
Isolation
A transaction should not make its update accessible to other transactions until
it has terminated. This property gives the transaction a measure of relative
independence and, when enforced strictly, solves the temporary update problem.
In general, various levels of isolation are permitted. A transaction is said to
have degree 0 isolation if it does not overwrite the dirty data of higher-degree
transactions. A degree 1 transaction has no lost updates; a degree 2 transaction
has no lost updates and no dirty reads. Finally, degree 3 isolation (also known
as true isolation) has, in addition to the degree 2 properties, repeatable reads.
Isolation refers to the way in which transactions are prevented from interfering
with each other. You might think that one transaction should never be interfered
with by any other transaction; however, that amounts to insisting on full serialisation
of transactions with little or no concurrency. The issue here is one of performance.
Durability or permanency
Once a transaction changes the database and the changes are committed, these
changes must never be lost because of subsequent failure.
At the end of a transaction, one of two things will happen. Either the transaction
has completed successfully or it has not. In the first case, for a transaction
containing write_item operations, the state of the database has changed, and
in any case, the system log has a record of the activities. Or, the transaction has
failed in some way, in which case the database state has not changed, though
it may have been necessary to use the system log to roll back and recover from
any changes attempted by the transaction.
Review question 3
1. Discuss different types of possible transaction failures, with some exam-
ples.
2. Transactions cannot be nested inside one another. Why? Support your
answer with an example.
Serialisability
Schedules of transactions
A schedule (or history) S of n transactions T1, T2, …, Tn is an ordering of the
operations of those transactions, subject to the constraint that, for each transaction
Ti participating in S, the operations of Ti appear in S in the same order in which
they occur in Ti.
An important aspect of concurrency control, called serialisability theory, at-
tempts to determine which schedules are ‘correct’ and which are not, and to
develop techniques that allow only correct schedules. The next section defines
serial and non-serial schedules, presents some of the serialisability theory, and
discusses how it may be used in practice.
Serial schedules
Schedules A and B are called serial schedules because the operations of each
transaction are executed consecutively, without any interleaved operations from
the other transaction. In a serial schedule, entire transactions are performed in
serial order: T1 and then T2 or T2 and then T1 in the diagram below. Schedules
C and D are called non-serial because each sequence interleaves operations from
the two transactions.
Formally, a schedule S is serial if, for every transaction T participating in the
schedule, all the operations of T are executed consecutively in the schedule;
otherwise, the schedule is called non-serial. One reasonable assumption we can
make, if we consider the transactions to be independent, is that every serial
schedule is considered correct. This is so because we assume that every trans-
action is correct if executed on its own (by the consistency property introduced
previously in this chapter) and that transactions do not depend on one another.
Hence, it does not matter which transaction is executed first. As long as every
transaction is executed from beginning to end without any interference from the
operations of other transactions, we get a correct end result on the database.
The problem with serial schedules is that they limit concurrency or interleaving
of operations. In a serial schedule, if a transaction waits for an I/O operation
to complete, we cannot switch the CPU to another transaction, thus
wasting valuable CPU processing time and making serial schedules generally
unacceptable.
To illustrate our discussion, consider the schedules in the diagram below, and
assume that the initial values of database items are X = 90, Y = 90, and that
N = 3 and M = 2. After executing transaction T1 and T2, we would expect
the database values to be X = 89 and Y = 93, according to the meaning of the
transactions. Sure enough, executing either of the serial schedules A or B gives
the correct results. This is shown below:
Non-serial schedules
Schedule C gives an erroneous result because of the lost update problem. Trans-
action T2 reads the value of X before it is changed by transaction T1, so only
the effect of T2 on X is reflected in the database. The effect of T1 on X is lost,
overwritten by T2, leading to the incorrect result for item X.
However, some non-serial schedules do give the expected result, such as schedule
D in the diagram above. We would like to determine which of the non-serial
schedules always give a correct result and which may give erroneous results. The
concept used to characterise schedules in this manner is that of serialisability of
a schedule.
Serialisable schedule
A schedule S of n transactions is serialisable if it is equivalent to some serial
schedule of the same n transactions. Serial schedules, which we already accept as
correct, thus serve as the benchmark for database correctness.
Now the question is: when are two schedules considered ‘equivalent’? There
are several ways to define equivalence of schedules. The simplest, but least
satisfactory, definition of schedule equivalence involves comparing the effects
of the schedules on the database. Intuitively, two schedules are called result
equivalent if they produce the same final state of the database. However, two
schedules may produce the same final state by coincidence. For example, in the
figure below, schedules S1 and S2 will produce the same database state if they
execute on a database with an initial value of X = 100; but for other initial
values of X, the schedules are not result equivalent. Hence, result equivalence
alone is not a safe way to define equivalence of schedules.
Here are two schedules that are equivalent for the initial value of X = 100, but
are not equivalent in general:
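Since the figure itself is not reproduced here, the sketch below uses an assumed pair of single-transaction schedules (one adds 10 to X, the other multiplies X by 1.1) that happen to be result equivalent only when X starts at 100.

```python
# Illustrative stand-in for the missing figure (the operations are assumptions):
# two schedules that leave X with the same value for X = 100, but not in general.

def s1(x):            # schedule S1: read_item(X); X := X + 10; write_item(X)
    return x + 10

def s2(x):            # schedule S2: read_item(X); X := X * 1.1; write_item(X)
    return x * 1.1

print(s1(100), s2(100))   # 110 and 110.0 -- result equivalent for X = 100
print(s1(200), s2(200))   # 210 and 220.0 -- not equivalent for other values
```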
Exercise 3
Serialisability of schedules
Using transactions T1 and T2 with the initial values X = 90, Y = 90, N = 3 and
M = 2,
1. create all possible serial schedules and examine the values of x and y;
2. create a non-serial interleaved schedule and examine the values of x and
y. Is this a serialisable schedule?
Review question 4
1. As a summary of schedules of transactions and serialisability of schedules,
fill in the blanks in the following paragraphs:
A schedule S of n transactions T1, T2, … Tn is an ordering of the operations
of the transactions. The operations in S are exactly those operations in
______ including either a ______ or ______ operation as the last
operation for each transaction in the schedule. For any pair of operations
from the same transaction Ti, their order of appearance in S is as their
order of appearance in Ti.
In serial schedules, all operations of each transaction are executed
______ , without any operations from the other transactions. Every
serial schedule is considered ______ .
A schedule S of n transactions is a serialisable schedule if it is equivalent
to some ______ of the same n transactions.
Saying that a non-serial schedule is serialisable is equivalent to saying that
it is ______ , because it is equivalent to ______ .
2. Compare binary locks with exclusive/shared locks. Why is the latter type
of locks preferable?
Locking techniques for concurrency control
Types of locks
The idea of locking is simple: when a transaction needs an assurance that some
object, typically a database record that it is accessing in some way, will not
change in some unpredictable manner while the transaction is not running on
the CPU, it acquires a lock on that object. The lock prevents other transactions
from accessing the object. Thus the first transaction can be sure that the object
in question will remain in a stable state as long as the transaction desires.
There are several types of locks that can be used in concurrency control. Binary
locks are the simplest, but are somewhat restrictive in their use.
Binary locks
A binary lock can have two states or values: locked and unlocked (or 1 and 0, for
simplicity). A distinct lock is associated with each database item X. If the value
of the lock on X is 1, item X is locked and cannot be accessed by a database
operation that requests the item. If the value of the lock on X is 0, item X is
unlocked, and it can be accessed when requested. We refer to the value of the
lock associated with item X as LOCK(X).
Two operations, lock and unlock, must be included in the transactions when
binary locking is used. A transaction requests access to an item X by issuing a
lock(X) operation. If LOCK(X) = 1, the transaction is forced to wait; otherwise,
the transaction sets LOCK(X) := 1 (locks the item) and allows access. When
the transaction is through using the item, it issues an unlock(X) operation,
which sets LOCK(X) := 0 (unlocks the item) so that X may be accessed by
other transactions. Hence, a binary lock enforces mutual exclusion on the data
item. The DBMS has a lock manager subsystem to keep track of and control
access to locks.
When the binary locking scheme is used, every transaction must obey the fol-
lowing rules:
1. A transaction T must issue the operation lock(X) before any read_item(X)
or write_item(X) operations are performed in T.
2. A transaction T must issue the operation unlock(X) after all read_item(X)
and write_item(X) operations are completed in T.
3. A transaction T will not issue a lock(X) operation if it already holds the
lock on item X.
4. A transaction T will not issue an unlock(X) operation unless it already
holds the lock on item X.
These rules can be enforced by a module of the DBMS. Between the lock(X)
and unlock(X) operations in a transaction T, T is said to hold the lock on item
X. At most, one transaction can hold the lock on a particular item. No two
transactions can access the same item concurrently.
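A minimal sketch of a lock table that enforces these four rules is given below. A real lock manager would block the requesting transaction and place it in a queue; here a refused request simply returns False, which is a simplifying assumption.

```python
# A minimal sketch of a binary lock table enforcing the rules above.

class BinaryLockManager:
    def __init__(self):
        self.lock_table = {}          # item -> transaction id holding the lock

    def lock(self, item, txn):
        if self.lock_table.get(item) == txn:
            raise RuntimeError("rule 3: T must not lock an item it already holds")
        if item in self.lock_table:
            return False              # LOCK(X) = 1: the transaction must wait
        self.lock_table[item] = txn   # LOCK(X) := 1
        return True

    def unlock(self, item, txn):
        if self.lock_table.get(item) != txn:
            raise RuntimeError("rule 4: T must hold the lock on the item it unlocks")
        del self.lock_table[item]     # LOCK(X) := 0

mgr = BinaryLockManager()
print(mgr.lock("X", "T1"))    # True  -- T1 now holds the lock on X
print(mgr.lock("X", "T2"))    # False -- T2 would have to wait
mgr.unlock("X", "T1")
print(mgr.lock("X", "T2"))    # True
```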
Shared and exclusive locks
The binary locking scheme is restrictive, because at most one transaction can
hold a lock on a given item even when several transactions only want to read it.
Shared and exclusive locks relax this: many transactions may hold a shared (S,
read) lock on the same item at the same time, but an exclusive (X, write) lock
on an item can be held by only one transaction at a time.
If transaction A holds an exclusive lock on record X, then a request from trans-
action B for a lock of either type on X will cause B to go into a wait state (and
B will wait until A’s lock is released).
This can be summarised by means of the compatibility matrix below, which shows
which types of lock request can be granted simultaneously:
For example, when transaction A holds an exclusive (X) lock on data item X,
the request from transaction B for an exclusive lock on X will not be granted.
If transaction A holds a shared (S) lock on data item X, a request from
transaction B for a shared lock will be granted (two transactions can read the
same item simultaneously), but a request for an exclusive lock will not.
Transaction requests for record locks are normally implicit (at least in most
modern systems). In addition, a user can specify explicit locks. When a trans-
action successfully retrieves a record, it automatically acquires an S lock on
that record. When a transaction successfully updates a record, it automatically
acquires an X lock on that record. If the transaction already holds an S lock on
the record, the update operation will promote the S lock to X level, provided it
is the only transaction holding an S lock on that record at the time.
Exclusive and shared locks are normally held until the next synchronisation
point (review the concept of synchronisation point under ‘Atomicity’). However,
a transaction can explicitly release locks that it holds prior to termination using
the unlock command.
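The compatibility matrix and the S-to-X promotion rule can be sketched as follows. Queueing and blocking of waiting transactions are again omitted, which is a simplifying assumption: an incompatible request simply returns False.

```python
# A minimal sketch of shared/exclusive lock compatibility and S -> X promotion.

COMPATIBLE = {               # (held, requested) -> can both be granted?
    ("S", "S"): True,
    ("S", "X"): False,
    ("X", "S"): False,
    ("X", "X"): False,
}

class SXLockManager:
    def __init__(self):
        self.locks = {}      # item -> {txn: "S" or "X"}

    def request(self, item, txn, mode):
        holders = self.locks.setdefault(item, {})
        others = {t: m for t, m in holders.items() if t != txn}
        if mode == "X" and holders.get(txn) == "S" and not others:
            holders[txn] = "X"                      # promote S -> X (sole S holder)
            return True
        if all(COMPATIBLE[(m, mode)] for m in others.values()):
            holders[txn] = mode
            return True
        return False                                 # caller must wait

    def release(self, item, txn):
        self.locks.get(item, {}).pop(txn, None)

mgr = SXLockManager()
print(mgr.request("X", "A", "S"))   # True  -- A reads X
print(mgr.request("X", "B", "S"))   # True  -- shared locks are compatible
print(mgr.request("X", "B", "X"))   # False -- A also holds an S lock, no promotion
mgr.release("X", "A")
print(mgr.request("X", "B", "X"))   # True  -- B's S lock is promoted to X
```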
Use of the locking scheme
Assume, initially, that X = 20 and Y = 30; the result of serial schedule T1
followed by T2 is X = 50 and Y = 80; and the result of serial schedule T2
followed by T1 is X = 70 and Y = 50. The figure below shows an example
where, although the multiple-mode locks are used, a non-serialisable schedule
may still result:
The reason this non-serialisable schedule occurs is that the items Y in T1 and
X in T2 were unlocked too early. To guarantee serialisability, we must follow
an additional protocol concerning the positioning of locking and unlocking op-
erations in every transaction. The best known protocol, two-phase locking, is
described below.
Basic 2PL
A transaction is said to follow the two-phase locking protocol (basic 2PL pro-
tocol) if all locking operations (read_lock, write_lock) precede the first unlock
operation in the transaction. Such a transaction can be divided into two phases:
an expanding (or growing) phase, during which new locks on items can be ac-
quired but none can be released; and a shrinking phase, during which existing
locks can be released but no new locks can be acquired.
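The 2PL rule is easy to check mechanically. The sketch below verifies that all lock operations of a transaction precede its first unlock; the operation format used is an assumption made for illustration.

```python
# A minimal sketch of a basic-2PL checker: once the first unlock has been
# issued (the shrinking phase), no further lock may be acquired.

def follows_basic_2pl(operations):
    """operations is a list like [("lock", "X"), ("unlock", "X"), ...]."""
    shrinking = False
    for op, _item in operations:
        if op == "unlock":
            shrinking = True
        elif op == "lock" and shrinking:
            return False          # a lock after an unlock violates 2PL
    return True

# T1 as described below: write_lock(X) follows unlock(Y), so not 2PL.
t1 = [("lock", "Y"), ("unlock", "Y"), ("lock", "X"), ("unlock", "X")]
# T1' rewritten: all locks precede the first unlock.
t1_prime = [("lock", "Y"), ("lock", "X"), ("unlock", "Y"), ("unlock", "X")]

print(follows_basic_2pl(t1))        # False
print(follows_basic_2pl(t1_prime))  # True
```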
Transactions T1 and T2 shown in the last section do not follow the 2PL protocol.
This is because the write_lock(X) operation follows the unlock(Y) operation in
T1, and similarly the write_lock(Y) operation follows the unlock(X) operation
in T2. If we enforce 2PL, the transactions can be rewritten as T1’ and T2’, as
shown below:
Now the schedule involving interleaved operations shown in the figure above is
not permitted. This is because T1’ will issue its write_lock(X) before it unlocks
Y; consequently, when T2’ issues its read_lock(X), it is forced to wait until T1’
issues its unlock(X) in the schedule.
It can be proved that, if every transaction in a schedule follows the basic 2PL
protocol, the schedule is guaranteed to be serialisable, so there is no longer any
need to test schedules for serialisability. The locking mechanism, by enforcing the
2PL rules, also enforces serialisability.
Another problem that may be introduced by the 2PL protocol is deadlock. The
formal definition of deadlock will be discussed below. Here, an example is used
to give you an intuitive idea about the deadlock situation. The two transactions
that follow the 2PL protocol can be interleaved as shown here:
At time step 5, it is not possible for T1’ to acquire an exclusive lock on X as
there is already a shared lock on X held by T2’. Therefore, T1’ has to wait.
Transaction T2’ at time step 6 tries to get an exclusive lock on Y, but it is
unable to as T1’ has a shared lock on Y already. T2’ is put in waiting too.
Therefore, both transactions wait fruitlessly for the other to release a lock. This
situation is known as a deadly embrace or deadlock. The above schedule would
terminate in a deadlock.
Conservative 2PL
A variation of basic 2PL is conservative 2PL, also known as static 2PL, which
is a way of avoiding deadlock. Conservative 2PL requires a transaction to
lock all the data items it needs in advance. If at least one of the required
data items cannot be obtained then none of the items are locked. Rather, the
transaction waits and then tries again to lock all the items it needs. Although
conservative 2PL is a deadlock-free protocol, this solution further limits concur-
rency.
Strict 2PL
In practice, the most popular variation of 2PL is strict 2PL, which guarantees
a strict schedule. (Strict schedules are those in which transactions can neither
read nor write an item X until the last transaction that wrote X has committed
or aborted). In strict 2PL, a transaction T does not release any of its locks until
after it commits or aborts. Hence, no other transaction can read or write an
item that is written by T unless T has committed, leading to a strict schedule
for recoverability. Notice the difference between conservative and strict 2PL; the
former must lock all items before it starts, whereas the latter does not unlock
any of its items until after it terminates (by committing or aborting). Strict
2PL is not deadlock-free unless it is combined with conservative 2PL.
In summary, all variations of the 2PL protocol guarantee serialisability (correctness)
of a schedule but limit concurrency. The use of locks can also cause two additional
problems: deadlock and livelock. Conservative 2PL is deadlock-free.
Exercise 4
Multiple-mode locking scheme and serialisability of schedules
1. For the example schedule shown again here below, complete the two pos-
sible serial schedules, and show the values of items X and Y in the two
transactions and in the database at each time step.
Dealing with deadlock and livelock
Deadlock occurs when each of two transactions is waiting for the other to re-
lease the lock on an item. A simple example was shown above, where the two
transactions T1’ and T2’ are deadlocked in a partial schedule; T1’ is waiting for
T2’ to release item X, while T2’ is waiting for T1’ to release item Y. Meanwhile,
neither can proceed to unlock the item that the other is waiting for, and other
transactions can access neither item X nor item Y. Deadlock is also possible
when more than two transactions are involved.
Deadlock detection with wait-for graph
A simple way to detect a state of deadlock is for the system to construct and
maintain a wait-for graph. One node is created in the graph for each transaction
that is currently executing. Whenever a transaction Ti is waiting to lock an item X
that is currently locked by a transaction Tj, a directed edge (Ti → Tj) is created.
When Tj releases the lock(s) on the items that Ti was waiting for, the directed
edge is dropped from the wait-for graph. We have a state of deadlock if and
only if the wait-for graph has a cycle. Recall this partial schedule introduced
previously:
The wait-for graph for the above partial schedule is shown below:
One problem with using the wait-for graph for deadlock detection is the matter
of determining when the system should check for deadlock. Criteria such as
the number of concurrently executing transactions or the period of time several
transactions have been waiting to lock items may be used to determine that the
system should check for deadlock.
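Detecting deadlock therefore reduces to cycle detection on the wait-for graph, as in the sketch below (a simple depth-first search; a production DBMS would use its own internal structures).

```python
# A minimal sketch of deadlock detection: deadlock <=> the wait-for graph has a cycle.

def has_cycle(wait_for):
    """wait_for maps a transaction to the set of transactions it waits for."""
    WHITE, GREY, BLACK = 0, 1, 2
    colour = {t: WHITE for t in wait_for}

    def visit(t):
        colour[t] = GREY
        for u in wait_for.get(t, set()):
            if colour.get(u, WHITE) == GREY:      # back edge -> cycle
                return True
            if colour.get(u, WHITE) == WHITE and visit(u):
                return True
        colour[t] = BLACK
        return False

    return any(colour[t] == WHITE and visit(t) for t in wait_for)

# The partial schedule above: T1' waits for T2' (item X) and T2' waits for T1' (item Y).
print(has_cycle({"T1'": {"T2'"}, "T2'": {"T1'"}}))   # True -> deadlock
print(has_cycle({"T1'": {"T2'"}, "T2'": set()}))     # False
```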
When we have a state of deadlock, some of the transactions causing the deadlock
must be aborted. Choosing which transaction to abort is known as victim
selection. The algorithm for victim selection should generally avoid selecting
transactions that have been running for a long time and that have performed
many updates, and should try instead to select transactions that have not made
many changes or that are involved in more than one deadlock cycle in the wait-
for graph. A problem known as cyclic restart may occur, where a transaction
is aborted and restarted only to be involved in another deadlock. The victim
selection algorithm can use higher priorities for transactions that have been
aborted multiple times, so that they are not selected as victims repeatedly.
Exercise 5
Wait-for graph
Given the graph below, identify the deadlock situations.
One way to prevent deadlock is to use a deadlock prevention protocol. One such
deadlock prevention protocol is used in conservative 2PL. It requires that every
transaction lock all the items it needs in advance; if any of the items cannot
be obtained, none of the items are locked. Rather, the transaction waits and
then tries again to lock all the items it needs. This solution obviously limits
concurrency. A second protocol, which also limits concurrency, though to a
lesser extent, involves ordering all the data items in the database and making
sure that a transaction that needs several items will lock them according to that
order (e.g. ascending order). For example, data items may be ordered as having
rank 1, 2, 3, and so on.
A transaction T requiring data items A (with a rank of i) and B (with a rank
of j, and j>i), must first request a lock for the data item with the lowest rank,
namely A. When it succeeds in getting the lock for A, only then can it request
a lock for data item B.
All transactions must follow such a protocol, even though within the body of the
transaction the data items are not required in the same order as the ranking of
the data items for lock requests. For this particular protocol to work, all locks
to be applied must be binary locks (i.e. the only locks that can be applied are
write locks).
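The ordering protocol can be sketched as a small helper that always requests locks in ascending rank order, whatever order the transaction body actually uses the items in. The ranks shown are illustrative assumptions.

```python
# A minimal sketch of the data-item ordering protocol: locks are always
# requested in ascending rank order, so a cycle of waits cannot form.

RANK = {"A": 1, "B": 2, "C": 3}     # illustrative ranking of data items

def lock_in_rank_order(items_needed, acquire):
    """acquire(item) is the underlying (blocking) lock call."""
    for item in sorted(items_needed, key=lambda i: RANK[i]):
        acquire(item)

# Even if the transaction body uses B before A, the locks are requested as A then B:
lock_in_rank_order({"B", "A"}, acquire=lambda item: print("lock", item))
# lock A
# lock B
```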
A number of deadlock prevention schemes have been proposed that make a de-
cision on whether a transaction involved in a possible deadlock situation should
be blocked and made to wait, should be aborted, or should preempt and abort
another transaction. These protocols use the concept of transaction timestamp
TS(T), which is a unique identifier assigned to each transaction. The times-
tamps are ordered based on the order in which transactions are started; hence,
if transaction T1 starts before transaction T2, then TS(T1) < TS(T2). No-
tice that the older transaction has a smaller timestamp value. This can be
easily understood and remembered in the following way: an older transaction T,
‘born’ earlier (say TS(T) = 10am), has a smaller timestamp than a younger
transaction T’ ‘born’ at 11am (TS(T’) = 11am), and so TS(T) < TS(T’).
Two schemes that use transaction timestamps to prevent deadlock are wait-die
and wound-wait. Suppose that transaction Ti tries to lock an item X, but is
not able to because X is locked by some other transaction Tj with a conflicting
lock. The rules followed by these schemes are as follows:
• wait-die: if TS(Ti) < TS(Tj) (Ti is older than Tj) then Ti is allowed
to wait, otherwise abort Ti (Ti dies) and restart it later with the same
timestamp.
• wound-wait: if TS(Ti) < TS(Tj) (Ti is older than Tj), then abort Tj (Ti
wounds Tj) and restart it later with the same timestamp; otherwise Ti is
allowed to wait.
In wait-die, an older transaction is allowed to wait on a younger transaction,
whereas a younger transaction requesting an item held by an older transaction
is aborted and restarted. The wound-wait approach does the opposite: a younger
transaction is allowed to wait on an older one, whereas an older transaction
requesting an item held by a younger transaction preempts the younger trans-
action by aborting it. Both schemes end up aborting the younger of the two
transactions that may be involved in a deadlock, and it can be shown that these
two techniques are deadlock-free. The two schemes can be summarised in the
following two tables. The one below is for the wait-die protocol:
And this one below is for the wound-wait protocol:
The problem with these two schemes is that they cause some transactions to be
aborted and restarted even though those transactions may never actually cause
a deadlock. Another problem can occur with wait-die, where the transaction Ti
may be aborted and restarted several times in a row because an older transaction
Tj continues to hold the data item that Ti needs.
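The two rules can be captured directly in code. The sketch below assumes integer timestamps (smaller means older) and simply reports the outcome of a conflicting request by Ti against Tj.

```python
# A minimal sketch of the wait-die and wound-wait decision rules.

def wait_die(ts_i, ts_j):
    if ts_i < ts_j:                  # Ti is older than Tj
        return "Ti waits"
    return "Ti dies (abort Ti, restart later with the same timestamp)"

def wound_wait(ts_i, ts_j):
    if ts_i < ts_j:                  # Ti is older than Tj
        return "Ti wounds Tj (abort Tj, restart later with the same timestamp)"
    return "Ti waits"

print(wait_die(10, 11))     # older requester waits
print(wait_die(11, 10))     # younger requester dies
print(wound_wait(10, 11))   # older requester wounds Tj
print(wound_wait(11, 10))   # younger requester waits
```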
Exercise 6
Deadlock prevention protocols
A DBMS attempts to run the following schedule. Show:
1. How conservative 2PL would prevent deadlock.
2. How ordering all data items would prevent deadlock.
3. How the wait-die scheme would prevent deadlock.
4. How the wound-wait scheme would prevent deadlock.
Livelock
Another problem that may occur when we use locking is livelock. A transaction
is in a state of livelock if it cannot proceed for an indefinite period while other
transactions in the system continue normally. This may occur if the waiting
scheme for locked items is unfair, giving priority to some transactions over others.
The standard solution for livelock is to have a fair waiting scheme. One such
scheme uses a first-come-first-served queue: transactions are allowed to lock an
item in the order in which they originally requested the lock. Another
scheme allows some transactions to have priority over others but increases the
priority of a transaction the longer it waits, until it eventually gets the highest
priority and proceeds.
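A fair first-come-first-served wait queue for a single item can be sketched as follows; this illustrates the idea rather than a real scheduler.

```python
# A minimal sketch of a fair (FIFO) wait queue for one locked item: waiting
# transactions are granted the lock strictly in request order, so none starves.

from collections import deque

class FairItemLock:
    def __init__(self):
        self.holder = None
        self.queue = deque()          # transactions waiting, oldest first

    def request(self, txn):
        if self.holder is None and not self.queue:
            self.holder = txn
            return True               # granted immediately
        self.queue.append(txn)
        return False                  # must wait its turn

    def release(self):
        self.holder = self.queue.popleft() if self.queue else None
        return self.holder            # the next transaction in FIFO order

lock = FairItemLock()
lock.request("T1"); lock.request("T2"); lock.request("T3")
print(lock.release())   # T2 -- granted before T3, regardless of priority
print(lock.release())   # T3
```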
Review question 5
1. Complete the following table to describe which type of lock requests can
be granted to the particular transaction.
5. What is a timestamp? Discuss how it is used in deadlock prevention proto-
cols.
Discussion topics
Discussion topic 1
There are many new concepts in this chapter. If you want to discuss them with
your colleagues or make comments about the concepts, use the online facilities.
Discussion topic 2
Discussion topic 3
Additional content and exercises
Additional content
Concurrency control based on timestamp ordering
The idea of this scheme is to order the transactions based on their timestamps
and to ensure that any conflicting operations are executed in timestamp order.
Whenever a transaction T issues a read_item(X) or write_item(X) operation, the
basic timestamp-ordering algorithm compares TS(T) with the read timestamp
read_TS(X) and the write timestamp write_TS(X) of item X:
1. Transaction T issues a write_item(X) operation:
• If read_TS(X) > TS(T) or write_TS(X) > TS(T), then abort and roll back T
and reject the operation. This should be done because some transaction with
a timestamp greater than TS(T) – and hence after T in the timestamp ordering
– has already read or written the value of item X before T had a chance to
write X.
• Otherwise, execute the write_item(X) operation of T and set write_TS(X)
to TS(T).
2. Transaction T issues a read_item(X) operation:
• If write_TS(X) > TS(T), then abort and roll back T and reject the oper-
ation. This should be done because some transaction with a timestamp
greater than TS(T) – and hence after T in the timestamp ordering – has
already written the value of item X before T had a chance to read X.
• If write_TS(X) <= TS(T), then execute the read_item(X) operation of T
and set read_TS(X) to the larger of TS(T) and the current read_TS(X).
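The two rules above can be sketched as a pair of checks over per-item read and write timestamps; the data structures and names below are illustrative assumptions.

```python
# A minimal sketch of the basic timestamp-ordering checks.

read_TS = {}    # item -> largest TS of a transaction that has read it
write_TS = {}   # item -> largest TS of a transaction that has written it

class AbortTransaction(Exception):
    pass

def to_read(item, ts):
    if write_TS.get(item, 0) > ts:               # a younger transaction already wrote X
        raise AbortTransaction(f"abort and roll back T (TS={ts})")
    read_TS[item] = max(ts, read_TS.get(item, 0))

def to_write(item, ts):
    if read_TS.get(item, 0) > ts or write_TS.get(item, 0) > ts:
        raise AbortTransaction(f"abort and roll back T (TS={ts})")
    write_TS[item] = ts

to_read("X", ts=5)      # ok: read_TS(X) becomes 5
to_write("X", ts=7)     # ok: write_TS(X) becomes 7
try:
    to_read("X", ts=6)  # rejected: write_TS(X) = 7 > 6
except AbortTransaction as e:
    print(e)
```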
Multiversion techniques based on timestamp ordering
In multiversion concurrency control based on timestamp ordering, several versions
X1, X2, …, Xk of each data item X are maintained. For each version Xi, the
system keeps write_TS(Xi), the timestamp of the transaction that wrote the
version, and read_TS(Xi), the largest timestamp of any transaction that has
successfully read the version. To ensure serialisability, the following two rules
are used:
1. If transaction T issues a write_item(X) operation, and version i of X has
the highest write_TS(Xi) of all versions of X which is also less than or
equal to TS(T), and TS(T) < read_TS(Xi), then abort and roll back
transaction T; otherwise, create a new version Xj of X with read_TS(Xj)
= write_TS(Xj) = TS(T).
2. If transaction T issues a read_item(X) operation, and version i of X has
the highest write_TS(Xi) of all versions of X that is also less than or equal
to TS(T), then return the value of Xi to transaction T, and set the value
of read_TS(Xi) to the larger of TS(T) and the current read_TS(Xi).
Multiversion two-phase locking
In this scheme, while a transaction holds a write lock on an item X it creates a
new (uncommitted) version of X; other transactions are only allowed to read the
version of X that was written by a committed transaction. However, deadlock
may occur.
Granularity of data items
Most concurrency control techniques have a uniform data item size. However,
some techniques have been proposed that permit variable item sizes. In these
techniques, the data item size may be changed to the granularity that best suits
the transactions that are currently executing on the system.
Additional exercises
Extension exercise 1
Interleaved concurrency
T1, T2 and T3 are defined to perform the following operations: T1: Add one
to A. T2: Double A. T3: Display A on the screen and then set A to one.
1. Suppose the above three transactions are allowed to execute concurrently.
If A has an initial value zero, how many correct results are there? Enu-
merate them.
2. Suppose the internal structure of T1, T2, and T3 is as indicated below:
If the transactions execute without any locking, how many possible interleaved
executions are there?
Extension exercise 2
Deadlock
The following list represents the sequence of events in an interleaved execution
of a set of transactions T1 to T12. A, B, …, H are data items in the database.
Assume that FETCH R acquires an S lock on R, and UPDATE R promotes that
lock to X level. Assume also all locks are held until the next synchronisation
point. Are there any deadlocks at time tn?
Extension exercise 3
Multiversion two-phase locking
What is a certify lock? Discuss multiversion two-phase locking for concurrency
control.
Extension exercise 4
Granularity of data items
How does the granularity of data items affect the performance of concurrency
control? What factors affect selection of granularity size for data items?