
Process Synchronization

Process Synchronization means sharing system resources among processes in such a way that concurrent access to shared data is handled, thereby minimizing the chance of inconsistent data.

Maintaining data consistency demands mechanisms to ensure the synchronized execution of cooperating processes.

Process Synchronization was introduced to handle problems that arise when multiple processes execute concurrently. Some of these problems are discussed below.

Critical Section Problem

A Critical Section is a code segment that accesses shared variables and has to be
executed as an atomic action.

It means that in a group of cooperating processes, at a given point of time, only one
process must be executing its critical section. If any other process also wants to
execute its critical section, it must wait until the first one finishes.

A critical section is a section of code, common to n cooperating processes, in which the processes may be accessing common variables.

A Critical Section Environment contains:
Entry Section: code requesting entry into the critical section.
Critical Section: code in which only one process can execute at any one time.
Exit Section: the end of the critical section, releasing the resource or allowing other processes in.
Remainder Section: the rest of the code, after the critical section.

Here’s an example of a simple piece of code containing the components required in a critical section (a turn-based solution for two processes i and j):

do {
    while (turn != i);     /* entry section: busy wait until it is process i's turn */
    /* critical section */
    turn = j;              /* exit section: hand the turn over to process j */
    /* remainder section */
} while (TRUE);

Solution to Critical Section Problem


A solution to the critical section problem must satisfy the following three conditions:

1. Mutual Exclusion
Out of a group of cooperating processes, only one process can be in its critical section
at a given point of time.

2. Progress
If no process is in its critical section, and one or more processes want to execute their critical sections, then one of those processes must be allowed to enter its critical section.

3. Bounded Waiting
After a process makes a request to enter its critical section, there is a limit on how many other processes can enter their critical sections before this process's request is granted. Once that limit is reached, the system must grant the process permission to enter its critical section.

Synchronization Hardware
Many systems provide hardware support for critical section code. The critical section problem could be solved easily in a single-processor environment if we could prevent interrupts from occurring while a shared variable or resource is being modified.
In this manner, we could be sure that the current sequence of instructions would be allowed to execute in order without pre-emption. Unfortunately, this solution is not feasible in a multiprocessor environment.
Disabling interrupts in a multiprocessor environment can be time consuming, as the message has to be passed to all the processors.
This message transmission lag delays entry of processes into their critical sections, and system efficiency decreases.
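Instead, hardware typically provides atomic instructions. As a minimal illustrative sketch (not from the original text), the commonly taught test_and_set abstraction can be used to build the entry and exit sections; the names lock and test_and_set here are assumptions for illustration:

int test_and_set(int *target) {    /* executed atomically by the hardware */
    int rv = *target;              /* remember the old value */
    *target = 1;                   /* mark the lock as taken */
    return rv;
}

int lock = 0;                      /* shared lock variable: 0 = free, 1 = held */

do {
    while (test_and_set(&lock));   /* entry section: spin until the lock is acquired */
    /* critical section */
    lock = 0;                      /* exit section: release the lock */
    /* remainder section */
} while (TRUE);

A process leaves the spin loop only when test_and_set returns 0, i.e. when it is the one that changed the lock from free to held.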
Mutex Locks
As the synchronization hardware solution is not easy to implement on every system, a strict software approach called Mutex Locks was introduced. In this approach, in the entry section of the code, a LOCK is acquired over the critical resources that are modified and used inside the critical section, and in the exit section that LOCK is released.
Because the resource is locked while a process executes its critical section, no other process can access it.
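A minimal sketch of what the lock operations might look like, following common textbook usage (the names acquire, release and available are illustrative, and the bodies of acquire and release must themselves execute atomically):

int available = 1;          /* 1 = lock is free, 0 = lock is held */

void acquire() {
    while (!available);     /* busy wait until the lock is free */
    available = 0;          /* take the lock */
}

void release() {
    available = 1;          /* give the lock back */
}

do {
    acquire();              /* entry section */
    /* critical section */
    release();              /* exit section */
    /* remainder section */
} while (TRUE);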

Semaphores in Process Synchronization

Semaphores are system-implemented data structures that are shared between processes.

Features of Semaphores
• Semaphores are used to synchronize operations when processes access a common, limited, and possibly non-shareable resource.
• Each time a process wants to obtain the resource, the associated semaphore is tested. A positive, non-zero semaphore value indicates the resource is available.
• Semaphore system calls will, by default, cause the invoking process to block if the semaphore value indicates the resource is not available.
Uses of Semaphores
Critical Section Guard (initial value: 1)
• Semaphores that control access to a single resource, taking the value 0 (resource in use) or 1 (resource available), are often called binary semaphores.
Precedence Enforcer (initial value: 0)
• Semaphores that enforce an ordering of execution among concurrent processes.
Resource Counter (initial value: N)
• Semaphores controlling access to N (multiple) resources, thus assuming a range of non-negative values, are frequently called counting semaphores.
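As a minimal sketch (the names are illustrative, not from the original text), the three uses listed above correspond to three different initial values:

semaphore mutex = 1;       /* critical section guard: 1 = available, 0 = in use  */
semaphore precede = 0;     /* precedence enforcer: starts unavailable            */
semaphore resources = N;   /* resource counter: N identical resources available  */

/* Precedence example: statement S2 in process P2 must run after S1 in P1 */
/* P1: */  S1;  signal(precede);
/* P2: */  wait(precede);  S2;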

Semaphores are integer variables that are used to solve the critical section problem by means of two atomic operations, wait and signal, used for process synchronization.
The definitions of wait and signal are as follows:

1. Wait

The wait operation decrements the value of its argument S if it is positive. If S is zero or negative, the calling process keeps waiting (busy waits) until S becomes positive, and only then decrements it.

wait(S) {
    while (S <= 0);    /* busy wait until S becomes positive */
    S--;               /* claim one unit of the resource */
}


2. Signal

The signal operation increments the value of its argument S.

signal(S) {
    S++;               /* release one unit of the resource */
}
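For example, a single semaphore initialized to 1 can guard a critical section using these operations (a minimal sketch):

semaphore mutex = 1;     /* 1 = critical section is free */

do {
    wait(mutex);         /* entry section */
    /* critical section */
    signal(mutex);       /* exit section */
    /* remainder section */
} while (TRUE);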

Semaphore Method: Wait


void wait(sem S)
{
    S.count--;                                /* one more process wants the resource */
    if (S.count < 0) {
        /* add the caller to the waiting list */
        block();                              /* suspend the calling process */
    }
}

After decreasing the counter by 1, if the counter value becomes negative, then:
add the caller to the waiting list, and then
block the caller.

Semaphore Method: Signal


void signal(sem S)
{
    S.count++;                                /* release one unit of the resource */
    if (S.count <= 0) {
        /* remove a process P from the waiting list */
        resume(P);                            /* let P continue */
    }
}

After increasing the counter by 1, if the new counter value is not positive (some processes are still waiting), then:
remove a process P from the waiting list, and
resume the execution of process P.

Types of Semaphores
There are two main types of semaphores i.e. counting semaphores and binary
semaphores. Details about these are given as follows:

1. Counting Semaphores

These are integer-valued semaphores with an unrestricted value domain. These semaphores are used to coordinate resource access, where the semaphore count is the number of available resources. If a resource is added, the semaphore count is incremented, and if a resource is removed, the count is decremented.
2. Binary Semaphores

Binary semaphores are like counting semaphores, but their value is restricted to 0 and 1. The wait operation succeeds only when the semaphore value is 1, and the signal operation succeeds when the semaphore value is 0. It is sometimes easier to implement binary semaphores than counting semaphores.

Advantages of Semaphores
Some of the advantages of semaphores are as follows:

1. Semaphores allow only one process into the critical section. They follow the
mutual exclusion principle strictly and are much more efficient than some
other methods of synchronization.
2. There is no waste of resources due to busy waiting, because processor time is not spent unnecessarily checking whether a condition is fulfilled before a process is allowed into its critical section.
3. Semaphores are implemented in the machine independent code of the
microkernel. So they are machine independent.

Disadvantages of Semaphores
Some of the disadvantages of semaphores are as follows:

1. Semaphores are complicated, so the wait and signal operations must be implemented in the correct order to prevent deadlocks.
2. Semaphores are impractical for large scale use as their use leads to a loss of modularity. This happens because the wait and signal operations prevent the creation of a structured layout for the system.
3. Semaphores may lead to a priority inversion where low priority processes may
access the critical section first and high priority processes later.

Classical problems of Synchronization with Semaphore Solution

The classical problems of synchronization serve as examples of a large class of concurrency-control problems. In our solutions to these problems, we use semaphores for synchronization, since that is the traditional way to present such solutions. However, actual implementations of these solutions could use mutex locks in place of binary semaphores.
These problems are used for testing nearly every newly proposed synchronization
scheme. The following problems of synchronization are considered as classical
problems:
1. Bounded-buffer (or Producer-Consumer) Problem,
2. Dining-Philosophers Problem,
3. Readers and Writers Problem,
4. Sleeping Barber Problem
1. Bounded-buffer (or Producer-Consumer) Problem
The Bounded Buffer problem is also called the Producer-Consumer problem. The solution to this problem is to create two counting semaphores, “full” and “empty”, to keep track of the current number of full and empty buffer slots respectively. Producers produce items and consumers consume them, but both operate on one buffer slot at a time. A sketch of this solution is given below.
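A minimal sketch of the standard semaphore structure for this problem (N is the buffer size; the buffer-manipulation steps are left as comments):

semaphore mutex = 1;    /* mutual exclusion on the buffer       */
semaphore empty = N;    /* counts empty slots, initially N      */
semaphore full  = 0;    /* counts filled slots, initially 0     */

/* Producer */
do {
    /* produce an item */
    wait(empty);        /* block if there is no empty slot      */
    wait(mutex);
    /* add the item to the buffer */
    signal(mutex);
    signal(full);       /* announce one more filled slot        */
} while (TRUE);

/* Consumer */
do {
    wait(full);         /* block if there is no item available  */
    wait(mutex);
    /* remove an item from the buffer */
    signal(mutex);
    signal(empty);      /* announce one more empty slot         */
    /* consume the item */
} while (TRUE);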

2. Dining Philosopher Problem

The Dining Philosophers Problem states that K philosophers are seated around a circular table with one chopstick between each pair of philosophers. A philosopher may eat if he can pick up the two chopsticks adjacent to him. Each chopstick may be picked up by either of its adjacent philosophers, but not by both at the same time. This problem involves the allocation of limited resources to a group of processes in a deadlock-free and starvation-free manner. A basic semaphore structure is sketched below.
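A minimal sketch in which each chopstick is a binary semaphore (note that this naive version can deadlock if every philosopher picks up the left chopstick at the same time; practical solutions break that symmetry, for example by letting one philosopher pick up the right chopstick first):

semaphore chopstick[K];              /* each initialized to 1 */

/* Philosopher i */
do {
    /* think */
    wait(chopstick[i]);              /* pick up the left chopstick   */
    wait(chopstick[(i + 1) % K]);    /* pick up the right chopstick  */
    /* eat */
    signal(chopstick[i]);            /* put down the left chopstick  */
    signal(chopstick[(i + 1) % K]);  /* put down the right chopstick */
} while (TRUE);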

3. Readers and Writers Problem

Suppose that a database is to be shared among several concurrent processes. Some of these processes may want only to read the database, whereas others may want to update (that is, to read and write) the database. We distinguish between these two types of processes by referring to the former as readers and to the latter as writers. In operating systems, this situation is called the readers-writers problem. Problem parameters:
One set of data is shared among a number of processes.
Once a writer is ready, it performs its write. Only one writer may write at a time.
If a process is writing, no other process can read the data.
If at least one reader is reading, no other process can write.
Readers only read; they may not write.
A sketch of a semaphore-based solution is given below.
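A minimal sketch of the classic first readers-writers solution, which gives readers priority (the variable and semaphore names follow the common textbook presentation):

semaphore rw_mutex = 1;    /* held by a writer, or by the readers as a group */
semaphore mutex    = 1;    /* protects read_count                            */
int read_count     = 0;    /* number of readers currently reading            */

/* Writer */
do {
    wait(rw_mutex);
    /* perform the write */
    signal(rw_mutex);
} while (TRUE);

/* Reader */
do {
    wait(mutex);
    read_count++;
    if (read_count == 1)
        wait(rw_mutex);    /* the first reader locks out writers */
    signal(mutex);
    /* perform the read */
    wait(mutex);
    read_count--;
    if (read_count == 0)
        signal(rw_mutex);  /* the last reader lets writers in    */
    signal(mutex);
} while (TRUE);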

4. Sleeping Barber Problem

A barber shop has one barber, one barber chair, and N chairs to wait in. When there are no customers, the barber goes to sleep in the barber chair and must be woken when a customer comes in. While the barber is cutting hair, new customers take the empty waiting chairs, or leave if there is no vacancy. A sketch of a semaphore-based solution is given below.
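A minimal sketch of the usual semaphore solution (the waiting counter and semaphore names follow the common textbook presentation):

semaphore customers = 0;   /* number of customers waiting for service    */
semaphore barber    = 0;   /* 0 = the barber is busy or asleep           */
semaphore mutex     = 1;   /* protects the waiting counter               */
int waiting         = 0;   /* customers sitting in the N waiting chairs  */

/* Barber */
do {
    wait(customers);       /* sleep until a customer arrives             */
    wait(mutex);
    waiting--;             /* take one customer from the waiting chairs  */
    signal(barber);        /* the barber is ready to cut hair            */
    signal(mutex);
    /* cut hair */
} while (TRUE);

/* Customer */
wait(mutex);
if (waiting < N) {
    waiting++;             /* sit down in a waiting chair                */
    signal(customers);     /* wake the barber if he is asleep            */
    signal(mutex);
    wait(barber);          /* wait until the barber is available         */
    /* get a haircut */
} else {
    signal(mutex);         /* no chair is free: leave the shop           */
}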

What is the Producer-Consumer Problem?

The problem describes two processes, Producer and Consumer, which share a common fixed-size buffer. The producer-consumer problem, also known as the "Bounded Buffer Problem", is a multi-process synchronization problem.

Producer: The producer's job is to generate a piece of data, put it into the buffer, and start again.

Consumer: The consumer is consuming the data (i.e., removing it from the buffer) one piece at a time.

If the buffer is empty, then a consumer should not try to access the data item
from it.

Similarly, a producer should not produce any data item if the buffer is full.

Counter: It counts the data items in the buffer, i.e., it tracks whether the buffer is empty or full. The counter is shared between the two processes and updated by both.

How does it work?

• The counter value is checked by the consumer before consuming.

• If the counter is 1 or greater, the consumer starts executing, consumes an item, and updates the counter.

• Similarly, the producer checks the value of the counter before adding data.

• If the counter is less than its maximum value, it means that there is some space in the buffer.

The producer then starts executing, produces a data item, and updates the counter by incrementing it by one.

Let max = the maximum size of the buffer.

If the buffer is full, then counter = max and the consumer is busy executing other instructions or has not yet been allotted its time slice.
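A minimal sketch of this counter-based scheme (note that counter++ and counter-- are not atomic; when the producer and consumer update the shared counter concurrently, their updates can interleave and leave the counter inconsistent, which is exactly the problem synchronization must solve):

int counter = 0;                /* number of items in the buffer, shared */

/* Producer */
while (TRUE) {
    /* produce an item */
    while (counter == max);     /* buffer full: wait                     */
    /* place the item in the buffer */
    counter++;                  /* not atomic: can race with counter--   */
}

/* Consumer */
while (TRUE) {
    while (counter == 0);       /* buffer empty: wait                    */
    /* take an item from the buffer */
    counter--;                  /* not atomic: can race with counter++   */
    /* consume the item */
}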
