Process Synchronization: Critical Section Problem
Process synchronization was introduced to handle problems that arise when multiple processes execute concurrently. Some of these problems are discussed below.
A Critical Section is a code segment that accesses shared variables and has to be
executed as an atomic action.
It means that in a group of cooperating processes, at a given point of time, only one
process must be executing its critical section. If any other process also wants to
execute its critical section, it must wait until the first one finishes.
A simple turn-based solution for two processes Pi and Pj uses a shared variable turn; each process busy-waits until it holds the turn and hands it to the other process on exit:

do {
    while (turn != i);    /* entry section: busy-wait until it is Pi's turn */
    /* critical section */
    turn = j;             /* exit section: hand the turn over to Pj */
    /* remainder section */
} while (TRUE);
Any solution to the critical section problem must satisfy the following three requirements:
1. Mutual Exclusion
Out of a group of cooperating processes, only one process can be in its critical section
at a given point of time.
2. Progress
If no process is executing in its critical section and one or more processes want to enter their critical sections, then one of those processes must be allowed to enter its critical section; the decision cannot be postponed indefinitely.
3. Bounded Waiting
After a process makes a request to enter its critical section, there is a limit on how many other processes can enter their critical sections before this process's request is granted. Once that limit is reached, the system must grant the process permission to enter its critical section.
Synchronization Hardware
Many systems provide hardware support for critical section code. The critical section problem could be solved easily in a single-processor environment if we could prevent interrupts from occurring while a shared variable or resource is being modified.
In this manner, we could be sure that the current sequence of instructions would be
allowed to execute in order without pre-emption. Unfortunately, this solution is not
feasible in a multiprocessor environment.
Disabling interrupts in a multiprocessor environment can be time consuming because the message must be passed to all the processors. This message transmission lag delays the entry of threads into their critical sections, and system efficiency decreases.
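As an illustration, most modern processors instead provide an atomic instruction such as test-and-set that avoids disabling interrupts altogether. The sketch below models that instruction with the GCC/Clang __atomic builtins; the flag and function names are illustrative assumptions, not taken from the text above.

#include <stdbool.h>

bool lock_flag = false;                 /* false: lock is free, true: lock is held */

/* Modeled atomic test-and-set: sets the flag and returns its previous value.
   Real hardware exposes an equivalent atomic instruction. */
bool test_and_set(bool *flag) {
    return __atomic_test_and_set(flag, __ATOMIC_ACQUIRE);
}

void enter_critical_section(void) {
    while (test_and_set(&lock_flag))
        ;                               /* spin until the flag was previously false */
}

void exit_critical_section(void) {
    __atomic_clear(&lock_flag, __ATOMIC_RELEASE);   /* release the lock */
}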
Mutex Locks
As the synchronization hardware solution is not easy to implement for everyone, a strict software approach called Mutex Locks was introduced. In this approach, in the entry section of the code, a LOCK is acquired over the critical resources that are modified and used inside the critical section, and in the exit section that LOCK is released.
Because the resource is locked while a process executes its critical section, no other process can access it.
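For example, with POSIX threads the entry and exit sections reduce to acquiring and releasing a pthread_mutex_t; the shared counter and the worker function below are illustrative.

#include <pthread.h>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
int shared_counter = 0;                 /* resource used inside the critical section */

void *worker(void *arg) {
    pthread_mutex_lock(&lock);          /* entry section: acquire the LOCK */
    shared_counter++;                   /* critical section */
    pthread_mutex_unlock(&lock);        /* exit section: release the LOCK */
    /* remainder section */
    return NULL;
}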
Semaphores
Semaphores are integer variables that are used to solve the critical section problem by using two atomic operations, wait and signal, for process synchronization.
The definitions of wait and signal are as follows:
1. Wait
wait(S) { S--; }
After decreasing the counter by 1, if the counter value becomes negative, then:
• Add the caller to the waiting list, and then
• Block itself.
2. Signal
signal(S) { S++; }
After increasing the counter by 1, if the new counter value is not positive, then:
• Remove a process P from the waiting list
• Resume the execution of process P, and return.
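A minimal sketch of these definitions is given below. The process list and the block(), wakeup(), current_process() primitives are illustrative placeholders for scheduler services, not a real API.

typedef struct process process_t;

typedef struct {
    int value;                          /* the semaphore counter */
    process_t *waiting_list;            /* processes blocked on this semaphore */
} semaphore_t;

/* Placeholders for scheduler primitives (assumed, for illustration only). */
extern process_t *current_process(void);
extern void add_to_list(process_t **list, process_t *p);
extern process_t *remove_from_list(process_t **list);
extern void block(void);
extern void wakeup(process_t *p);

void wait(semaphore_t *S) {
    S->value--;                         /* decrease the counter by 1 */
    if (S->value < 0) {                 /* counter became negative */
        add_to_list(&S->waiting_list, current_process());  /* add the caller */
        block();                        /* block itself */
    }
}

void signal(semaphore_t *S) {
    S->value++;                         /* increase the counter by 1 */
    if (S->value <= 0) {                /* counter is not positive: a process is waiting */
        process_t *P = remove_from_list(&S->waiting_list);
        wakeup(P);                      /* resume the execution of process P */
    }
}

In an actual kernel, each of these two operations would itself have to execute atomically.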
Types of Semaphores
There are two main types of semaphores i.e. counting semaphores and binary
semaphores. Details about these are given as follows:
1. Counting Semaphores
These are integer-valued semaphores with an unrestricted value domain. These semaphores are used to coordinate resource access, where the semaphore count is the number of available resources. If resources are added, the semaphore count is automatically incremented, and if resources are removed, the count is decremented.
2. Binary Semaphores
Binary semaphores are like counting semaphores, but their value is restricted to 0 and 1. The wait operation succeeds only when the semaphore is 1, and the signal operation succeeds only when the semaphore is 0. Binary semaphores are sometimes easier to implement than counting semaphores.
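For instance, a POSIX counting semaphore can coordinate access to a pool of identical resources; the pool size and function names below are illustrative assumptions.

#include <semaphore.h>

#define NUM_RESOURCES 3                 /* size of the resource pool (assumed) */

sem_t pool;                             /* counts the available resources */

void init_pool(void)        { sem_init(&pool, 0, NUM_RESOURCES); }
void acquire_resource(void) { sem_wait(&pool); }   /* count--, blocks when it reaches 0 */
void release_resource(void) { sem_post(&pool); }   /* count++, wakes a waiting process  */

Initializing the same semaphore with 1 instead of NUM_RESOURCES turns it into a binary semaphore that behaves like a mutex lock.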
Advantages of Semaphores
Some of the advantages of semaphores are as follows:
1. Semaphores allow only one process into the critical section. They follow the
mutual exclusion principle strictly and are much more efficient than some
other methods of synchronization.
2. There is no resource wastage because of busy waiting in semaphores as
processor time is not wasted unnecessarily to check if a condition is fulfilled to
allow a process to access the critical section.
3. Semaphores are implemented in the machine independent code of the
microkernel. So they are machine independent.
Disadvantages of Semaphores
Some of the disadvantages of semaphores are as follows:
1. Semaphores are complicated, and the wait and signal operations must be used in the correct order; otherwise the program may deadlock or fail to provide mutual exclusion.
2. Semaphores are impractical for large-scale use, as scattering wait and signal calls across the code leads to a loss of modularity and makes programs hard to reason about.
3. Semaphores may lead to priority inversion, where a low priority process enters the critical section first and a high priority process has to wait.
The Sleeping Barber Problem
Consider a barber shop with one barber, one barber chair and N chairs to wait in. When there are no customers, the barber goes to sleep in the barber chair and must be woken when a customer comes in. While the barber is cutting hair, new customers take the empty waiting seats, or leave if there is no vacancy.
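One common way to coordinate the barber and the customers is with two semaphores and a mutex. The sketch below uses POSIX semaphores; the names and the chair count are illustrative.

#include <pthread.h>
#include <semaphore.h>

#define N_CHAIRS 5                      /* waiting chairs (assumed) */

sem_t customers;                        /* waiting customers; the barber sleeps on this */
sem_t barber_ready;                     /* barber is ready to cut hair */
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
int waiting = 0;                        /* customers currently in waiting chairs */

void *barber(void *arg) {
    for (;;) {
        sem_wait(&customers);           /* sleep until a customer arrives */
        pthread_mutex_lock(&mutex);
        waiting--;                      /* take one customer from a waiting chair */
        pthread_mutex_unlock(&mutex);
        sem_post(&barber_ready);        /* invite the customer to the barber chair */
        /* cut hair ... */
    }
    return NULL;
}

void customer(void) {
    pthread_mutex_lock(&mutex);
    if (waiting < N_CHAIRS) {           /* a waiting chair is free */
        waiting++;
        sem_post(&customers);           /* wake the barber if he is asleep */
        pthread_mutex_unlock(&mutex);
        sem_wait(&barber_ready);        /* wait until the barber is ready */
        /* get haircut ... */
    } else {
        pthread_mutex_unlock(&mutex);   /* no vacancy: leave the shop */
    }
}

int main(void) {
    pthread_t b;
    sem_init(&customers, 0, 0);
    sem_init(&barber_ready, 0, 0);
    pthread_create(&b, NULL, barber, NULL);
    /* customer threads would call customer() as they arrive */
    pthread_join(b, NULL);
    return 0;
}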
The Producer-Consumer Problem
The problem describes two processes, Producer and Consumer, which share a common fixed-size buffer. The producer-consumer problem, also known as the "Bounded Buffer Problem", is a multi-process synchronization problem.
Producer: The producer's job is to generate a piece of data, put it into the buffer and start again.
Consumer: The consumer's job is to remove data from the buffer, one piece at a time.
If the buffer is empty, then the consumer should not try to access a data item from it. Similarly, the producer should not produce any data item if the buffer is full.
Counter: It counts the data items in the buffer and is used to track whether the buffer is empty or full. The counter is shared between the two processes and is updated by both.
How it works?
• The consumer checks the value of Counter before removing data; if the counter is greater than zero, the buffer contains at least one item, so the consumer removes an item and decrements the counter by one.
• Similarly, the producer checks the value of Counter before adding data.
• If the counter is less than its maximum value, it means that there is some space in the buffer, so the producer produces a data item, puts it into the buffer, and updates the counter by incrementing it by one (a semaphore-based sketch follows below).
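The shared counter must be updated atomically, since it is modified by both processes. A standard way to achieve this is to combine a mutex with two counting semaphores that track the empty and full slots. A minimal sketch, where the buffer size and item type are illustrative:

#include <pthread.h>
#include <semaphore.h>

#define BUFFER_SIZE 10                  /* capacity of the bounded buffer (assumed) */

int buffer[BUFFER_SIZE];
int in = 0, out = 0;                    /* next free slot / next full slot */

sem_t empty_slots;                      /* counts empty slots, starts at BUFFER_SIZE */
sem_t full_slots;                       /* counts filled slots, starts at 0 */
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

void produce(int item) {
    sem_wait(&empty_slots);             /* wait while the buffer is full */
    pthread_mutex_lock(&mutex);
    buffer[in] = item;                  /* add the item */
    in = (in + 1) % BUFFER_SIZE;
    pthread_mutex_unlock(&mutex);
    sem_post(&full_slots);              /* one more item is available */
}

int consume(void) {
    int item;
    sem_wait(&full_slots);              /* wait while the buffer is empty */
    pthread_mutex_lock(&mutex);
    item = buffer[out];                 /* remove the item */
    out = (out + 1) % BUFFER_SIZE;
    pthread_mutex_unlock(&mutex);
    sem_post(&empty_slots);             /* one more slot is free */
    return item;
}

int main(void) {
    sem_init(&empty_slots, 0, BUFFER_SIZE);
    sem_init(&full_slots, 0, 0);
    /* producer and consumer threads would call produce() and consume() */
    return 0;
}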