Process Synchronization
Shyam B. Verma
(Assistant Professor)
SYLLABUS [UNIT-2]
Concurrent Processes: Process Concept,
Principle of Concurrency, Producer / Consumer
Problem
Mutual Exclusion, Critical Section Problem,
Dekker’s solution, Peterson’s solution,
Semaphores, Test and Set operation;
Classical Problem in Concurrency- Dining
Philosopher Problem, Sleeping Barber
Problem;
Inter Process Communication models and
Schemes, Process generation.
Objectives
To present the concept of process
synchronization.
To introduce the critical-section problem,
whose solutions can be used to ensure the
consistency of shared data
To present both software and hardware
solutions of the critical-section problem
To examine several classical process-
synchronization problems
To explore several tools that are used to
solve process synchronization problems
Process Concept
A process is a program in execution. The execution of a
process progresses in a sequential fashion. A program
is a passive entity while a process is an active entity. A
process includes much more than just the program code.
A process includes the text section, stack, data
section, program counter, register contents and so
on.
The text section consists of the set of instructions to be
executed for the process.
The data section contains the values of initialized and
uninitialized global variables in the program.
The stack is used whenever there is a function call in
the program.
The program counter has the address of the next
instruction to be executed in the process.
Principle of Concurrency
Concurrency is the execution of multiple instruction
sequences at the same time.
It happens in the operating system when there are several
process threads running in parallel. The running process
threads communicate with each other through
shared memory or message passing. Concurrency
involves sharing of resources, which can result in problems
like deadlocks and resource starvation.
Technology, like multi-core processors and parallel
processing, allows multiple processes and threads to be
executed simultaneously. Multiple processes and threads
can access the same memory space, the same declared
variable in code, or even read or write to the same file.
The processes executing in the OS are of two types:
Independent Processes
Cooperating Processes
Principle of Concurrency
Independent Processes
Its state is not shared with any other process.
The result of execution depends only on the input
state.
The result of the execution will always be the same for
the same input.
The termination of the independent process will not
terminate any other.
Cooperating Processes
Its state is shared with other processes.
The result of execution depends on the relative
execution sequence and cannot be predicted in
advance (non-deterministic).
The result of the execution will not always be the same
for the same input.
The termination of a cooperating process may affect
other processes.
Problems in Concurrency
There are various problems in concurrency.
Some of them are as follows:
Sharing Global Resources
Sharing global resources is difficult. If two
processes use a global variable and both alter
the variable's value, the order in which the
changes are executed is critical.
Locking the channel
It could be inefficient for the OS to lock the
resource and prevent other processes from using
it.
Optimal Allocation of Resources
It is challenging for the OS to handle resource
allocation properly.
Issues of Concurrency
Non-atomic operations may be interrupted by other
processes. An atomic operation runs to completion independently
of other processes, whereas a non-atomic operation can be affected by them.
Deadlock may happen in concurrent computing. Software and
hardware locks are commonly used to arbitrate shared
resources and implement process synchronization in parallel
computing, distributed systems, and multiprocessing.
Blocking may happen when a process is waiting for some
event, like the availability of a resource or completing an I/O
operation.
A Race Condition may occur when the output of a
process depends on the timing or sequencing of other
uncontrollable events.
Starvation problem may arise where a process is continuously
denied the resources it needs to complete its work.
Data inconsistency may occur. Maintaining data consistency
requires mechanisms to ensure the orderly execution of
cooperating processes.
Producer Consumer Process
Producer is a process which is able to produce data.
Consumer is a process that is able to consume the
data produced by the Producer.
Both Producer and Consumer share a common
memory buffer. This buffer is a space of a certain
size in the memory of the system which is used for
storage. The producer produces the data into the buffer
and the consumer consumes the data from the buffer.
So, what are the Producer-Consumer Problems?
Producer Process should not produce any data when
the shared buffer is full.
Consumer Process should not consume any data when
the shared buffer is empty.
The access to the shared buffer should be mutually
exclusive, i.e., at a time only one process should be able
to access the shared buffer and make changes to it.
For consistent data synchronization between Producer
and Consumer, the above problem should be resolved.
Producer Consumer Process
The producer increments a shared counter after placing an
item in the buffer, and the consumer decrements it after
removing one (a fuller sketch is given below).
counter++ could be implemented as
register1 = counter
register1 = register1 + 1
counter = register1
counter-- could be implemented as
register2 = counter
register2 = register2 - 1
counter = register2
If these two instruction sequences interleave, counter can be
left with an incorrect value: this is a race condition.
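A fuller sketch of the producer and consumer sharing a bounded
buffer through the counter, following the usual textbook
formulation (BUFFER_SIZE, the item type, next_produced,
next_consumed, in and out are assumed names, not taken from the
slides above):

#define BUFFER_SIZE 10
item buffer[BUFFER_SIZE];        /* shared circular buffer */
int in = 0, out = 0;             /* next free slot / next full slot */
int counter = 0;                 /* number of items currently in the buffer */

/* Producer */
item next_produced;
while (true) {
    /* produce an item in next_produced */
    while (counter == BUFFER_SIZE);   /* buffer full: busy wait */
    buffer[in] = next_produced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;                        /* not atomic: source of the race */
}

/* Consumer */
item next_consumed;
while (true) {
    while (counter == 0);             /* buffer empty: busy wait */
    next_consumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;                        /* not atomic: source of the race */
    /* consume the item in next_consumed */
}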
Algorithm-1
The structure of process Pi:
do {
while (turn != i);
Critical Section
turn = j;
Remainder Section
} while (true);
The two processes are numbered P0 and P1.
For convenience, when presenting Pi, we use Pj
to represent the other process.
Let the processes share a common integer
variable turn initialized to 0 (or 1).
If turn = i, then process Pi is allowed to
execute in its critical section.
The above solution ensures mutual exclusion.
It requires strict alternation of processes in the
critical section, so there is no progress: if turn = 0
and P1 wants to enter its critical section, it cannot
do so until P0 has entered and left its critical section,
even if P0 is not interested in entering at all.
There is no bounded waiting either, because if a process
does not want to enter its critical section, the turn
never passes to the other process.
Algorithm-2
Replace the turn variable by a Boolean array
flag[2].
Initially both elements of flag are false, i.e.
flag[0] = flag[1] = false.
If flag[i] is true then Pi is ready to enter the
critical section.
In this algorithm, process Pi sets flag[i] to
true, signaling that it is ready to enter
its critical section.
Then, Pi waits until Pj is not ready to enter its
critical section (see the sketch after this list).
This solution satisfies the mutual exclusion
requirement.
It does not satisfy the progress requirement: if
both processes set their flags to true at the same
time, each waits for the other indefinitely.
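A minimal sketch of Algorithm-2 for process Pi, where j denotes
the other process (the exact code is not in the slides above):
do {
    flag[i] = true;          /* Pi is ready to enter */
    while (flag[j]);         /* wait while the other process is ready */
    /* critical section */
    flag[i] = false;
    /* remainder section */
} while (true);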
Algorithm-3 [Peterson’s
Solution]
Good algorithmic description of solving the
problem
Two process solution
Assume that the load and store machine-
language instructions are atomic; that is,
cannot be interrupted
The two processes share two variables:
int turn;
boolean flag[2];
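Here turn indicates whose turn it is to enter the critical
section, and flag[i] indicates that process Pi is ready to
enter. A minimal sketch of Peterson's solution for process Pi,
with j = 1 - i (the exact code is not in the slides above):
do {
    flag[i] = true;                 /* Pi is ready to enter */
    turn = j;                       /* give the other process the first chance */
    while (flag[j] && turn == j);   /* busy wait */
    /* critical section */
    flag[i] = false;
    /* remainder section */
} while (true);
This satisfies mutual exclusion, progress, and bounded waiting
for two processes.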
Bakery Algorithm
The description below is of Lamport's Bakery algorithm, which
solves the critical-section problem for n processes using
ticket numbers.
Firstly, the process sets its "choosing" variable to
TRUE, indicating its intent to enter the critical section.
It is then assigned a ticket number one greater than the
highest ticket number held by the other processes. The
"choosing" variable is then set to FALSE, indicating that it
now has its new ticket number.
The next check ensures that while a process is
modifying its ticket value, no other process is
allowed to compare against its old ticket value,
which is now obsolete. This is why, inside
the loop, before checking a ticket value we first
make sure that the other process has its
"choosing" variable set to FALSE.
After that we proceed to compare ticket values: the
process with the least (ticket number, process ID)
pair enters the critical section (see the sketch below).
The exit section simply resets the ticket value to zero.
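A minimal sketch of the Bakery algorithm for process i, assuming
n processes and shared arrays choosing[n] (all false initially)
and number[n] (all 0 initially); max(...) denotes the largest
ticket number currently held:
do {
    choosing[i] = true;
    number[i] = 1 + max(number[0], ..., number[n-1]);
    choosing[i] = false;
    for (j = 0; j < n; j++) {
        while (choosing[j]);                    /* wait while Pj picks a ticket */
        while (number[j] != 0 &&
               (number[j] < number[i] ||
                (number[j] == number[i] && j < i)));  /* wait for smaller tickets */
    }
    /* critical section */
    number[i] = 0;                              /* exit section: give up the ticket */
    /* remainder section */
} while (true);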
Mutual Exclusion: The process with the lowest ticket
number is allowed to enter its critical section. If
two processes have the same ticket number, the
process with the lower process ID among
them is selected, so at any particular time there is
only one process executing in its critical section. Thus
the requirement of mutual exclusion is met.
Progress: After selecting a ticket, a waiting process
checks whether any other waiting process has higher
priority to enter its critical section. If there is no such
process, Pi will immediately enter its critical section,
thus meeting the progress requirement.
Bounded Waiting: A waiting process enters
its critical section when no other process is in its critical
section and
its ticket number is the smallest among the other waiting
processes, or,
if ticket numbers are equal, it has the lowest process ID
among the waiting processes with that ticket number.
Synchronization Hardware
Hardware features can make the programming
task easier and improve system efficiency.
Many systems provide simple hardware instructions
for implementing the critical-section code.
On a single-processor system, the problem could be
solved by disabling interrupts while a shared variable
is being modified.
The currently running code would then execute without
preemption, so no unexpected modification could be
made to the shared variable.
All solutions below based on idea of locking.
Protecting critical regions via locks.
Modern machines provide special atomic
hardware instructions
Atomic = non-interruptible instructions
Solution to Critical-section Problem Using
Locks
do {
acquire lock
critical section
release lock
remainder section
} while (TRUE);
TestAndSet Instruction
Definition:
boolean test_and_set (boolean *target)
{
boolean rv = *target;
*target = TRUE;
return rv;
}
1. Executed atomically.
2. Returns the original value of the passed parameter.
3. Sets the new value of the passed parameter to
TRUE.
4. If two TestAndSet instructions are executed
simultaneously (each on a different CPU), they will
be executed sequentially in some arbitrary order.
If a machine supports the TestAndSet instruction,
then mutual exclusion can be achieved by the
following algorithm:
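A minimal sketch, assuming lock is a shared boolean variable
initialized to false (the exact code is not in the slides above):
do {
    while (test_and_set(&lock));   /* busy wait until the lock is free */
    /* critical section */
    lock = false;                  /* release the lock */
    /* remainder section */
} while (true);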
This solution satisfies mutual exclusion and
progress, but not bounded waiting.
Another atomic hardware instruction, swap, exchanges
the contents of two words and can be used in a similar way.
The following algorithm uses test_and_set together with
a shared waiting array to satisfy the bounded-waiting
requirement as well:
do {
waiting[i] = true;
key = true;
while (waiting[i] && key)
key = test_and_set(&lock);
waiting[i] = false;
/* critical section */
j = (i + 1) % n;
while ((j != i) && !waiting[j])
j = (j + 1) % n;
if (j == i)
lock = false;
else
waiting[j] = false;
/* remainder section */
} while (true);
Mutex Locks
Previous solutions are complicated and
generally inaccessible to application
programmers
OS designers build software tools to solve
critical section problem
Simplest is mutex lock
Protect a critical section by first acquire() a
lock then release() the lock
Boolean variable indicating if lock is available
or not
Calls to acquire() and release() must be atomic
Usually implemented via hardware atomic
instructions
But this solution requires busy waiting
This lock therefore called a spinlock
acquire() and release()
acquire() {
while (!available);
/* busy wait */
available = false;
}
release() {
available = true;
}
do {
acquire();
critical section
release();
remainder section
} while (true);
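As an illustration (not part of the original slides), the same
acquire/release pattern is available through POSIX threads
mutexes; the worker function and the shared counter below are
hypothetical:

#include <pthread.h>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
int counter = 0;                 /* hypothetical shared data */

void *worker(void *arg) {
    pthread_mutex_lock(&lock);   /* acquire the lock */
    counter++;                   /* critical section */
    pthread_mutex_unlock(&lock); /* release the lock */
    return NULL;                 /* remainder section would follow */
}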
Semaphore
Semaphore is a synchronization tool that provides more
sophisticated ways (than mutex locks) for processes to
synchronize their activities. It can solve various
synchronization problems.
Semaphore S – integer variable. It can only be accessed via two
indivisible (atomic) operations
wait() and signal()
Originally called P() and V()
Definition of the wait() operation
wait(S) {
while (S <= 0);
// busy wait
S--;
}
Definition of the signal() operation
signal(S) {
S++;
}
Semaphore mutex can be used to solve n-
process critical section problem. The n-
processes share a semaphore mutex
initialized to 1.
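A minimal sketch of the structure of each process Pi under this
scheme (the exact code is not in the slides above):
do {
    wait(mutex);
    /* critical section */
    signal(mutex);
    /* remainder section */
} while (true);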
Semaphore Implementation
Must guarantee that no two processes can
execute the wait() and signal() on the same
semaphore at the same time
Thus, the implementation becomes the critical
section problem where the wait and signal code
are placed in the critical section
Could now have busy waiting in critical section
implementation. Busy waiting wastes CPU cycles.
This type of semaphore is also called spin-lock. The
advantage of a spin-lock is that no context switch
is required when a process must wait on a lock.
Note that applications may spend lots of time in
critical sections and therefore this is not a good
solution.
Semaphore Implementation with no Busy
waiting
To overcome the problem of busy waiting, when a process
executes a wait operation and the semaphore value is not
positive, the process must block itself. The blocked
process is placed into a waiting queue, and the
scheduler picks another process to execute.
Two operations:
block – place the process on the waiting queue
wakeup – remove one of processes in the waiting queue and
place it in the ready queue.
Each semaphore has an integer value and a list of processes.
When a process waits on a semaphore, it is added to the list of
processes. A signal operation removes one process from the list
and awakens it.
typedef struct{
int value;
struct process *list;
} semaphore;
Implementation with no Busy waiting (Cont.)
* Initially value is initialized to 1.
wait(semaphore *S) {
S->value--;
if (S->value < 0) {
add this process to S->list;
block();
}
}
signal(semaphore *S) {
S->value++;
if (S->value <= 0) {
remove a process P from S->list;
wakeup(P);
}
}
* If the semaphore value is negative, its magnitude is
the number of processes waiting on that semaphore.
Deadlock and Starvation
Deadlock – When two or more processes are
waiting indefinitely for an event (signal) that can
be caused by only one of the waiting processes. If
such a state is reached, these processes are said
to be deadlocked.
Let S and Q be two semaphores initialized to 1
P0 P1
wait(S); wait(Q);
wait(Q); wait(S);
... ...
signal(S); signal(Q);
signal(Q); signal(S);
Starvation – indefinite blocking
A process may never be removed from the semaphore
queue in which it is suspended
Priority Inversion – Scheduling problem
when lower-priority process holds a lock
needed by higher-priority process
Solved via priority-inheritance protocol
Binary Semaphore
The semaphore construct described above is known as a
counting semaphore, since its value can range
over an unrestricted domain.
A binary semaphore is an integer that can take only the
values 0 and 1.
Let S be a counting semaphore. To implement
it in terms of Binary semaphore:
Binary semaphore S1, S2;
int C;
Initially, S1=1 and S2=0 and C is initialized to the
initial value of counting semaphore S.
wait(S)
{
    wait(S1);
    C--;
    if (C < 0)
    {
        signal(S1);
        wait(S2);
    }
    signal(S1);
}

signal(S)
{
    wait(S1);
    C++;
    if (C <= 0)
        signal(S2);
    else
        signal(S1);
}
Classical Problems of
Synchronization
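Readers-Writers Problem
The writer and reader code below assumes the usual shared data
for this problem; as a sketch (the declarations are not in the
slides above):
semaphore rw_mutex = 1;   /* exclusive access to the shared object */
semaphore mutex = 1;      /* protects read_count */
int read_count = 0;       /* number of readers currently reading */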
The structure of a writer process
do {
wait(rw_mutex);
...
/* writing is performed */
...
signal(rw_mutex);
} while (true);
Readers-Writers Problem (Cont.)
The structure of a reader process
do {
wait(mutex);
read_count++;
if (read_count == 1)
wait(rw_mutex);
signal(mutex);
...
/* reading is performed */
...
wait(mutex);
read_count--;
if (read_count == 0)
signal(rw_mutex);
signal(mutex);
} while (true);
There is a possibility of starvation of writer
processes.
If a writer is in the critical section and n reader
processes are waiting, then one writer is
waiting on rw_mutex and (n-1) readers are
waiting on mutex.
When a writer executes signal(rw_mutex), we
may resume the execution of either the
waiting readers or a single waiting writer.
This selection is made by the scheduler.
On some systems the problem is solved by the
kernel providing reader-writer locks.
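As an illustration (not from the slides above), POSIX provides
such reader-writer locks; the reader and writer functions and
the shared_data variable below are hypothetical:

#include <pthread.h>

pthread_rwlock_t rwlock = PTHREAD_RWLOCK_INITIALIZER;
int shared_data = 0;                 /* hypothetical shared object */

void *reader(void *arg) {
    pthread_rwlock_rdlock(&rwlock);  /* many readers may hold the lock at once */
    int value = shared_data;         /* reading is performed */
    (void)value;
    pthread_rwlock_unlock(&rwlock);
    return NULL;
}

void *writer(void *arg) {
    pthread_rwlock_wrlock(&rwlock);  /* exclusive access for the writer */
    shared_data++;                   /* writing is performed */
    pthread_rwlock_unlock(&rwlock);
    return NULL;
}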
Dining-Philosophers Problem
The philosophers share semaphore chopstick[5], each
element initialized to 1. The structure of philosopher i:
do {
wait(chopstick[i]);
wait(chopstick[(i + 1) % 5]);
// eat
signal(chopstick[i]);
signal(chopstick[(i + 1) % 5]);
// think
} while (TRUE);
What is the problem with this algorithm?
It can deadlock: if all five philosophers become hungry at
the same time and each picks up the chopstick on the left,
every chopstick is taken and each philosopher waits forever
for the chopstick on the right.
Dining-Philosophers Problem Algorithm (Cont.)
Possible remedies include allowing at most four philosophers
to sit at the table at once, allowing a philosopher to pick up
chopsticks only when both are available, or using an
asymmetric solution (odd-numbered philosophers pick up the
left chopstick first, even-numbered ones the right).
GATE Questions
[Practice questions from previous GATE papers with answer
keys; the question content is not reproduced in this text.]