Module 03 - OS
Chapter 6 Process Synchronization
6.1 Background
Recall that back in Chapter 3 we looked at cooperating processes ( those that can affect or be
affected by other simultaneously running processes ), and as an example, we used the producer-
consumer cooperating processes:
Producer code from chapter 3 begins:
item nextProduced;
while( true ) { ... }
Consumer code from chapter 3 begins:
item nextConsumed;
while( true ) { ... }
( The full loops are sketched below. )
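A sketch of the complete chapter-3 loops these fragments come from, assuming the usual shared circular buffer (item buffer[BUFFER_SIZE]; int in = 0, out = 0;):

/* Producer */
item nextProduced;
while (true) {
    /* produce an item in nextProduced */
    while (((in + 1) % BUFFER_SIZE) == out)
        ;   /* buffer full: do nothing */
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
}

/* Consumer */
item nextConsumed;
while (true) {
    while (in == out)
        ;   /* buffer empty: do nothing */
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    /* consume the item in nextConsumed */
}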
The only problem with the above code is that the maximum number of items which can be placed
into the buffer is BUFFER_SIZE - 1. One slot is unavailable because there always has to be a gap
between the producer and the consumer.
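To use all BUFFER_SIZE slots, a shared integer counter can be added, incremented by the producer and decremented by the consumer; the changed parts, in their standard form, look roughly like this:

int counter = 0;   /* shared: number of full slots */

/* Producer: wait while full, deposit, count up */
while (counter == BUFFER_SIZE)
    ;   /* do nothing */
buffer[in] = nextProduced;
in = (in + 1) % BUFFER_SIZE;
counter++;

/* Consumer: wait while empty, remove, count down */
while (counter == 0)
    ;   /* do nothing */
nextConsumed = buffer[out];
out = (out + 1) % BUFFER_SIZE;
counter--;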
Unfortunately we have now introduced a new problem, because both the producer and the
consumer are adjusting the value of the variable counter, which can lead to a condition known as
a race condition. In this condition a piece of code may or may not work correctly, depending on
which of two simultaneous processes executes first, and more importantly if one of the processes
gets interrupted such that the other process runs between important steps of the first process.
( Bank balance example discussed in class. )
The particular problem above comes from the producer executing "counter++" at the same time the
consumer is executing "counter--". If one process gets part way through making the update and
then the other process butts in, the value of counter can get left in an incorrect state.
But, you might say, "Each of those are single instructions - How can they get interrupted halfway
through?" The answer is that although they are single instructions in C++, they are actually three
steps each at the hardware level: (1) Fetch counter from memory into a register, (2) increment or
decrement the register, and (3) Store the new value of counter back to memory. If the instructions
from the two processes get interleaved, there could be serious problems, such as illustrated by the
following:
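A reconstruction of the standard interleaving (assuming counter starts at 5, as in the exercise below):

T0: producer executes register1 = counter          { register1 = 5 }
T1: producer executes register1 = register1 + 1    { register1 = 6 }
T2: consumer executes register2 = counter          { register2 = 5 }
T3: consumer executes register2 = register2 - 1    { register2 = 4 }
T4: producer executes counter = register1          { counter = 6 }
T5: consumer executes counter = register2          { counter = 4 }

The final value of counter is 4, even though the correct result of one produce and one consume is 5.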
Exercise: What would be the resulting value of counter if the order of statements T4 and T5 were
reversed? ( What should the value of counter be after one producer and one consumer, assuming
the original value was 5? )
Note that race conditions are notoriously difficult to identify and debug, because by their very
nature they only occur on rare occasions, and only when the timing is just exactly right. ( or
wrong! :-) ) Race conditions are also very difficult to reproduce. :-(
Obviously the solution is to allow only one process at a time to manipulate the value of counter.
This is a very common requirement among cooperating processes, so let's look at some of the ways
in which this is done, as well as some classic problems in this area.
A solution to the critical-section problem must satisfy the following three requirements:
1. Mutual exclusion. If process Pi is executing in its critical section, then no other processes can
be executing in their critical sections.
2. Progress. If no process is executing in its critical section and some processes wish to enter their
critical sections, then only those processes that are not executing in their remainder sections
can participate in the decision on which will enter its critical section next, and this selection
cannot be postponed indefinitely.
3. Bounded waiting. There exists a bound, or limit, on the number of times that other processes
are allowed to enter their critical sections after a process has made a request to enter its critical
section and before that request is granted.
We assume that each process is executing at a nonzero speed. We can make no assumption
concerning the relative speed of the n processes.
Two general approaches are used to handle critical sections in operating systems:
1. Pre-emptive kernels
2. Non-preemptive kernels
A pre-emptive kernel allows a process to be pre-empted while it is running in kernel mode. It
is difficult to design for SMP architectures, since in these environments it is possible for two
kernel-mode processes to run simultaneously on different processors. It is more suitable for
real-time programming, as it will allow a real-time process to pre-empt a process currently
running in the kernel. A pre-emptive kernel may also be more responsive, since there is less
risk that a kernel-mode process will run for an arbitrarily long period before relinquishing the
processor to waiting processes.
A Non-preemptive kernel does not allow a process running in kernel mode to be pre-empted;
a kernel-mode process will run until it exits kernel mode, blocks, or voluntarily yields control
of the CPU. It is essentially free from race conditions on kernel data structures, as only one
process is active in the kernel at a time.
If Pj is not ready to enter the critical section, then flag[j] == false, and Pi can enter its critical
section. If turn == j, then Pj will enter the critical section. However, once Pj exits its critical
section, it will reset flag[j] to false, allowing Pi to enter its critical section. If Pj resets flag[j] to
true, it must also set turn to i. Thus, Pi will enter the critical section (progress) after at most one
entry by Pj (bounded waiting).
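For reference, the argument above concerns Peterson's solution, in which Pi and Pj share int turn and boolean flag[2]; the standard structure of process Pi is:

do {
    flag[i] = TRUE;
    turn = j;
    while (flag[j] && turn == j)
        ;   // busy wait
    // critical section
    flag[i] = FALSE;
    // remainder section
} while (TRUE);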
3. Give an n-process solution to the critical-section problem that uses the TestAndSet() hardware
instruction, and prove that the algorithm satisfies all the requirements of a critical-section solution.
Any solution to the critical-section problem requires a simple tool called a lock. Race
conditions are prevented by requiring that critical regions be protected by locks.
That is, a process must acquire a lock before entering a critical section and release the lock when
it exits the critical section; the hardware-based solutions below all follow this acquire/release structure.
Hardware features can make any programming task easier and improve system efficiency.
The critical-section problem could be solved simply in a uniprocessor environment if we could
prevent interrupts from occurring while a shared variable is being modified.
This solution is not feasible in a multiprocessor environment. Disabling interrupts on a
multiprocessor can be time consuming, as the message is passed to all the processors. This
message passing delays entry into each critical section, and system efficiency decreases.
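Many machines therefore provide special atomic hardware instructions. The TestAndSet() instruction, in its standard textbook form, atomically returns the old value of its argument and sets it to TRUE:

boolean TestAndSet(boolean *target) {
    boolean rv = *target;   // remember the old value
    *target = TRUE;         // unconditionally set the lock
    return rv;
}

Using a shared boolean variable lock initialized to FALSE, mutual exclusion can then be implemented as shown next.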
do {
    while (TestAndSet(&lock))
        ;   // do nothing (busy wait)
    // critical section
    lock = FALSE;
    // remainder section
} while (TRUE);
The Swap() instruction, in contrast to the TestAndSet() instruction, operates on the contents of
two words as shown below,
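In its usual textbook form, Swap() simply exchanges the contents of its two arguments:

void Swap(boolean *a, boolean *b) {
    boolean temp = *a;
    *a = *b;
    *b = temp;
}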
It is executed atomically. If the machine supports the Swap() instruction, then mutual exclusion
can be provided as follows.
A global Boolean variable lock is declared and is initialized to false. In addition, each process
has a local Boolean variable key. The structure of process Pi is shown below,
do {
    key = TRUE;
    while (key == TRUE)
        Swap(&lock, &key);
    // critical section
    lock = FALSE;
    // remainder section
} while (TRUE);
Although these algorithms satisfy the mutual-exclusion requirement, they do not satisfy the
bounded-waiting requirement.
Another algorithm is given below using the TestAndSet() instruction that satisfies all the critical-
section requirements. The common data structures are,
boolean waiting[n];
boolean lock;
These data structures are initialized to false.
do {
    waiting[i] = TRUE;
    key = TRUE;
    while (waiting[i] && key)
        key = TestAndSet(&lock);
    waiting[i] = FALSE;
    // critical section
    j = (i + 1) % n;
    while ((j != i) && !waiting[j])
        j = (j + 1) % n;
    if (j == i)
        lock = FALSE;
    else
        waiting[j] = FALSE;
    // remainder section
} while (TRUE);
4. Servers can be designed to limit the number of open connections. For example, a server
may wish to have only N socket connections at any point in time. As soon as N connections
are made, the server will not accept another incoming connection until an existing
connection is released. Explain how semaphores can be used by a server to limit the
number of concurrent connections.
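Ans: A counting semaphore initialized to N enforces the limit: the server performs wait() before accepting a connection and signal() when a connection is released, so once N connections are open any further accept blocks until a slot is freed. A minimal sketch (accept_connection() and release_connection() are hypothetical server routines):

semaphore connections = N;   // free connection slots

accept_connection() {
    wait(connections);       // blocks once N connections are already open
    // ... accept and service the new connection ...
}

release_connection() {
    signal(connections);     // closing a connection admits a waiting accept
}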
Binary Semaphore
The value of a binary semaphore can range only between 0 and 1. On some systems, binary
semaphores are known as mutex locks, as they are locks that provide mutual exclusion.
We can use binary semaphores to deal with the critical-section problem for multiple processes.
Then processes share a semaphore, mutex, initialized to 1. Each process Pi is organized as shown
below.
do {
    wait(mutex);
    // critical section
    signal(mutex);
    // remainder section
} while (TRUE);
Counting Semaphore
The value of a counting semaphore can range over an unrestricted domain. Counting semaphores
can be used to control access to a given resource consisting of a finite number of instances; the
semaphore is initialized to the number of resources available.
One important problem that can arise when using semaphores to block processes waiting for a
limited resource is the problem of deadlocks, which occur when multiple processes are blocked,
each waiting for a resource that can only be freed by one of the other ( blocked ) processes, as
illustrated in the following example. ( Deadlocks are covered more completely in chapter 7. )
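For example (the standard illustration), let P0 and P1 share two semaphores S and Q, both initialized to 1. If P0 executes wait(S) and then P1 executes wait(Q), each process's second wait() can only be released by a signal() from the other, already-blocked, process:

P0:             P1:
wait(S);        wait(Q);
wait(Q);        wait(S);
  ...             ...
signal(S);      signal(Q);
signal(Q);      signal(S);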
Another problem to consider is that of starvation, in which one or more processes remain blocked
forever and never get a chance to take their turn in the critical section. For example, in the
semaphores above, we did not specify the algorithm for adding processes to the waiting queue
in the semaphore in the wait() call, or for selecting one to be removed from the queue in the signal()
call. If a FIFO queue is used, then every process will eventually get its turn, but
if a LIFO queue is implemented instead, then the first process to start waiting could starve.
Q. Explain the readers-writers problem and provide a solution, using semaphores, for the
readers-priority (first readers-writers) problem.
Suppose that a database is to be shared among several concurrent processes. Some of these
processes may want only to read the database, whereas others may want to update (that is, to
read and write) the database. These two types of processes are referred to as readers and writers,
respectively. If two readers access the shared data simultaneously, no adverse effects will result.
However, if a writer and some other process (either a reader or a writer) access the database
simultaneously, chaos may ensue.
To ensure that these difficulties do not arise, we require that the writers have exclusive access to the
shared database while writing to the database. This synchronization problem is referred to as the
readers-writers problem.
In the solution to the first readers-writers problem, the reader processes share the following data
structures:
semaphore mutex, wrt;
int readcount;
The semaphores mutex and wrt are initialized to 1; readcount is initialized to 0. The semaphore wrt
is common to both reader and writer processes. The mutex semaphore is used to ensure mutual
exclusion when the variable readcount is updated. The readcount variable keeps track of how
many processes are currently reading the object. The semaphore wrt functions as a mutual-
exclusion semaphore for the writers. It is also used by the first reader that enters the critical
section and the last reader that exits it. It is not used by readers that enter or exit while other
readers are in their critical sections. The code for a writer process is shown below; the
corresponding reader code follows it.
do {
    wait(wrt);
    // writing is performed
    signal(wrt);
} while (TRUE);
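The corresponding code for a reader process, in the standard first readers-writers (readers-priority) form, is:

do {
    wait(mutex);
    readcount++;
    if (readcount == 1)
        wait(wrt);       // the first reader locks out writers
    signal(mutex);
    // reading is performed
    wait(mutex);
    readcount--;
    if (readcount == 0)
        signal(wrt);     // the last reader lets a writer in
    signal(mutex);
} while (TRUE);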
If a writer is in the critical section and n readers are waiting, then one reader is queued on wrt, and
n- 1 readers are queued on mutex. Also observe that, when a writer executes signal (wrt), we
may resume the execution of either the waiting readers or a single waiting writer. The selection
is made by the scheduler.
The Dining-Philosophers Problem (Figure 6.1: five philosophers sharing five single chopsticks around a table).
One simple solution is to represent each chopstick with a semaphore. A philosopher tries to grab a
chopstick by executing a wait() operation on that semaphore; she releases her chopsticks by
executing the signal() operation on the appropriate semaphores. Thus, the shared data are
semaphore chopstick[5];
where all the elements of chopstick are initialized to 1. The structure of philosopher i is shown
below.
do {
    wait(chopstick[i]);
    wait(chopstick[(i + 1) % 5]);
    // eat
    signal(chopstick[i]);
    signal(chopstick[(i + 1) % 5]);
    // think
} while (TRUE);
*****
Q. Applying the concepts of binary semaphore and counting semaphore (also define them), explain
the implementation of the wait() and signal() semaphore operations.
A semaphore S is an integer variable that, apart from initialization, is accessed only through two
standard atomic operations: wait() and signal(). The wait() operation was originally termed P, and
signal() was called V.
The definition of wait() is as follows:
wait(S) {
    while (S <= 0)
        ;   // no-op (busy wait)
    S--;
}
The definition of signal() is as follows:
signal(S) {
    S++;
}
All modifications to the integer value of the semaphore in the wait () and signal() operations must
be executed indivisibly. That is, when one process modifies the semaphore value, no other
process can simultaneously modify that same semaphore value. In addition, in the case of wait
(S), the testing of the integer value of S (S<=0), as well as its possible modification (S--), must
be executed without interruption.
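To avoid busy waiting, wait() and signal() can instead block and wake up processes; a sketch of the standard blocking form, assuming the kernel provides block() to suspend the calling process and wakeup(P) to resume process P:

typedef struct {
    int value;
    struct process *list;   // queue of processes blocked on this semaphore
} semaphore;

wait(semaphore *S) {
    S->value--;
    if (S->value < 0) {
        // add this process to S->list
        block();
    }
}

signal(semaphore *S) {
    S->value++;
    if (S->value <= 0) {
        // remove a process P from S->list
        wakeup(P);
    }
}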
Binary Semaphore
The value of a binary semaphore can range only between 0 and 1. On some systems, binary
semaphores are known as mutex locks, as they are locks that provide mutual exclusion.
We can use binary semaphores to deal with the critical-section problem for multiple processes.
Then processes share a semaphore, mutex, initialized to 1. Each process Pi is organized as shown
below.
do {
    wait(mutex);
    // critical section
    signal(mutex);
    // remainder section
} while (TRUE);
Counting Semaphore
The value of a counting semaphore can range over an unrestricted domain. Counting semaphores
can be used to control access to a given resource consisting of a finite number of instances. The
semaphore is initialized to the number of resources available. Each process that wishes to use a
resource performs a wait() operation on the semaphore (thereby decrementing the count). When
a process releases a resource, it performs a signal() operation
(incrementing the count). When the count for the semaphore goes to 0, all resources are being used.
After that, processes that wish to use a resource will block until the count becomes greater than 0.
Q. Justify why spinlocks are not appropriate for single-processor systems, yet they are often used
in multiprocessor systems.
Spinlocks are not appropriate for single-processor systems because the condition that would
break a process out of the spinlock can be obtained only by executing a different process.
If the process is not relinquishing the processor, other processes do not get the opportunity to set
the program condition required for the first process to make progress.
In a multiprocessor system, other processes execute on other processors and thereby modify the
program state in order to release the first process from the spinlock.
Q. A barbershop consists of a waiting room with n chairs and a barber room with one barber
chair. If there are no customers to be served, the barber goes to sleep. If a customer enters
the barbershop and all chairs are occupied, then the customer leaves the shop. If the
barber is busy but chairs are available, then the customer sits in one of the free chairs. If
the barber is asleep, the customer wakes up the barber. Write a program to coordinate the
barber and the customer.
We use 3 semaphores. Semaphore customers counts waiting customers; semaphore barbers is the
number of idle barbers (0 or 1); and mutex is used for mutual exclusion. A shared data variable
customers1 also counts waiting customers; it is a copy of customers, but we need it because there
is no way to read the current value of a semaphore inside the program. The coordination code is
sketched below.
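A sketch of the coordination code, following the classic solution (cut_hair() and get_haircut() are hypothetical helpers, and n is the number of waiting-room chairs):

semaphore customers = 0;   // counts waiting customers
semaphore barbers = 0;     // number of idle barbers (0 or 1)
semaphore mutex = 1;       // protects customers1
int customers1 = 0;        // copy of the waiting-customer count

Barber() {
    while (TRUE) {
        wait(customers);     // go to sleep if there are no customers
        wait(mutex);
        customers1--;        // one waiting customer is about to be served
        signal(barbers);     // the barber is now ready to cut hair
        signal(mutex);
        cut_hair();
    }
}

Customer() {
    wait(mutex);
    if (customers1 < n) {    // a waiting-room chair is free
        customers1++;
        signal(customers);   // wake the barber if he is asleep
        signal(mutex);
        wait(barbers);       // wait until the barber is available
        get_haircut();
    } else {
        signal(mutex);       // all chairs occupied: leave the shop
    }
}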
Q. A file is to be shared among different processes, each of which has a unique number. The
file can be accessed simultaneously by several processes, subject to the following constraint:
The sum of all unique numbers associated with all the processes currently accessing the file
must be less than n. Write a semaphore-based solution to coordinate access to the file.
int sumid = 0;             /* shared: sum of the process ids currently accessing the file */
int waiting = 0;           /* number of processes waiting on the semaphore OKToAccess */
semaphore mutex = 1;       /* protects sumid and waiting */
semaphore OKToAccess = 0;  /* the synchronization semaphore */

get_access(int id)
{
    sem_wait(mutex);
    while (sumid + id > n) {
        waiting++;
        sem_signal(mutex);
        sem_wait(OKToAccess);
        sem_wait(mutex);
    }
    sumid += id;
    sem_signal(mutex);
}

release_access(int id)
{
    int i;
    sem_wait(mutex);
    sumid -= id;
    for (i = 0; i < waiting; ++i) {
        sem_signal(OKToAccess);
    }
    waiting = 0;
    sem_signal(mutex);
}

main()
{
    get_access(id);
    do_stuff();
    release_access(id);
}
TOPICS
DEADLOCKS
SYSTEM MODEL
DEADLOCK CHARACTERIZATION
METHODS FOR HANDLING DEADLOCKS
DEADLOCK PREVENTION
DEADLOCK AVOIDANCE
DEADLOCK DETECTION
RECOVERY FROM DEADLOCK
MEMORY MANAGEMENT STRATEGIES
BACKGROUND
SWAPPING
CONTIGUOUS MEMORY ALLOCATION
PAGING, STRUCTURE OF PAGE TABLE
SEGMENTATION
DEADLOCKS
When processes request resources that are not available at that time, they enter a waiting state. A
waiting process may never again be able to change state, because the resources it has requested are
held by other waiting processes. This situation is called a deadlock.
SYSTEM MODEL
A system consists of a finite number of resources distributed among a number of competing
processes. The resources are partitioned into several types, each consisting of some number of
identical instances.
A process must request a resource before using it and must release the resource after using it. It
may request as many resources as it requires to carry out its designated task, but the number of
resources requested may not exceed the total number of resources available.
Multithreaded programs are good candidates for deadlock because multiple threads can compete
for shared resources.
DEADLOCK CHARACTERIZATION
Necessary Conditions: A deadlock situation can arise only if the following four conditions hold
simultaneously in a system:
1. Mutual exclusion: at least one resource must be held in a non-sharable mode.
2. Hold and wait: a process holding at least one resource is waiting to acquire additional resources
held by other processes.
3. No preemption: a resource can be released only voluntarily by the process holding it.
4. Circular wait: a set of waiting processes {P0, P1, ..., Pn} exists such that P0 waits for a resource
held by P1, P1 waits for P2, ..., and Pn waits for a resource held by P0.
Resource-Allocation Graph:
A directed edge from process Pi to resource type Rj, denoted Pi -> Rj, indicates that Pi has
requested an instance of resource type Rj and is waiting for it. This edge is called a request edge.
A directed edge Rj -> Pi signifies that an instance of resource type Rj has been allocated to process
Pi. This is called an assignment edge.
(Figure: an example resource-allocation graph over resource types R1, R2, R3, R4.)
If the graph contains no cycle, then no process in the system is deadlocked. If the graph
contains a cycle, then a deadlock may exist.
If each resource type has exactly one instance, then a cycle implies that a deadlock has occurred. If
each resource type has several instances, then a cycle does not necessarily imply that a deadlock has
occurred.
To ensure that deadlock never occurs, the system can use either a deadlock-avoidance or a
deadlock-prevention scheme.
DEADLOCK PREVENTION
For a deadlock to occur, each of the four necessary conditions must hold. If at least one of these
conditions does not hold, then we can prevent the occurrence of deadlock.
1. Mutual Exclusion
This holds for non-sharable resources.
Example: A printer can be used by only one process at a time.
Sharable resources do not require mutually exclusive access and thus cannot be involved in a
deadlock; read-only files are a good example, since a process never needs to wait to access a
sharable resource. However, some resources are intrinsically non-sharable, so in general we cannot
prevent deadlocks by denying the mutual-exclusion condition.
4. Circular Wait
The fourth and final condition for deadlock is the circular-wait condition. One way to ensure
that this condition never holds is to impose a total ordering on all resource types and to require
that each process requests resources in increasing order of enumeration.
Let R = {R1, R2, ..., Rn} be the set of resource types. We assign each resource type a unique
integer value, which allows us to compare two resources and determine whether one precedes
the other in our ordering.
Example : Define a one-to-one function F: R —> N, where N is the set of natural numbers. For
example, if the set of resource types R includes tape drives, disk drives, and printers, then the
function F might be defined as follows:
F(disk drive)=5
F(printer)=12
F(tape drive)=1
Deadlock can be prevented by using the following protocol: each process requests resources in
increasing order of enumeration. A process can initially request any number of instances of a
resource type, say Ri; after that, it can request instances of resource type Rj only if F(Rj) > F(Ri).
Alternatively, whenever a process requests an instance of resource type Rj, it must first have
released any resources Ri such that F(Ri) >= F(Rj). If these two protocols are used, then the
circular-wait condition cannot hold. ( See the lock-ordering sketch below. )
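As a concrete illustration of resource ordering, consider two locks acquired by every thread in the same global order (a minimal POSIX-threads sketch; the resource names and F-values are hypothetical):

#include <pthread.h>
#include <stdio.h>

/* Two resources with F(first_mutex) = 1 and F(second_mutex) = 5. */
pthread_mutex_t first_mutex  = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t second_mutex = PTHREAD_MUTEX_INITIALIZER;

/* Every thread requests resources in increasing F order,
   so a circular wait can never form. */
void *worker(void *arg)
{
    pthread_mutex_lock(&first_mutex);    /* lower-ordered resource first */
    pthread_mutex_lock(&second_mutex);   /* then the higher-ordered one  */
    printf("thread %ld holds both resources\n", (long)arg);
    pthread_mutex_unlock(&second_mutex);
    pthread_mutex_unlock(&first_mutex);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_create(&t2, NULL, worker, (void *)2L);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}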
DEADLOCK AVOIDANCE
• Deadlock prevention algorithm may lead to low device utilization and reduces system
throughput.
• Avoiding deadlocks requires additional information about how resources are to be requested.
With knowledge of the complete sequence of requests and releases, we can decide for each
request whether or not the process should wait.
• For each request, the system must consider the resources currently available, the resources
currently allocated to each process, and the future requests and releases of each process, in order
to decide whether the current request can be satisfied or must wait to avoid a possible future deadlock.
Safe State:
• A state is safe if there exists at least one order in which all the processes can be run to
completion without resulting in a deadlock.
• A system is in a safe state only if there exists a safe sequence.
• A sequence of processes <P1, P2, ..., Pn> is a safe sequence for the current allocation state if,
for each Pi, the resource requests that Pi can still make can be satisfied by the currently available
resources plus the resources held by all Pj with j < i.
• If the resources that Pi needs are not immediately available, then Pi can wait until all Pj (j < i)
have finished; it can then obtain all of its needed resources, complete its designated task, and
release them.
• A safe state is not a deadlocked state.
• Whenever a process requests a resource that is currently available, the system must decide whether
the resource can be allocated immediately or whether the process must wait. The request is granted
only if the allocation leaves the system in a safe state.
• With this approach, a process may have to wait even if the resource it requests is currently
available. Thus resource utilization may be lower than it would be without a deadlock-avoidance
algorithm.
Banker’s Algorithm:
State and explain banker’s algorithm for deadlock avoidance.
The resource-allocation-graph algorithm is not applicable to a resource allocation system with
multiple instances of each resource type.
Banker’s algorithm is applicable to such a system but is less efficient than the resource-allocation
graph scheme.
When a new process enters the system, it must declare the maximum number of instances of
each resource type that it may need.
This number may not exceed the total number of resources in the system.
When a user requests a set of resources, the system must determine whether the allocation of
these resources will leave the system in a safe state.
If it will, the resources are allocated; otherwise, the process must wait until some other process
releases enough resources.
Let n be the number of processes in the system and m be the number of resource types. We need the
following data structures:
Available. A vector of length m indicates the number of available resources of each type. If
Available[ j ] equals k, then k instances of resource type Rj are available.
Max. An n X m matrix defines the maximum demand of each process. If Max[ i ][ j ] equals k,
then process Pi may request at most k instances of resource type Rj.
Allocation. An n X m matrix defines the number of resources of each type currently allocated to
each process. If Allocation[ i ][ j ] equals k, then process Pi is currently allocated k instances of
resource type Rj.
Need. An n X m matrix indicates the remaining resource need of each process. If Need[ i ][ j ]
equals k, then process Pi may need k more instances of resource type Rj to complete its task.
Note that Need[ i ][ j ] equals Max[ i ][ j ] - Allocation[ i ][ j ].
Explain safety algorithm and resource request algorithm for deadlock avoidance.
Safety Algorithm :
This algorithm helps in finding out whether or not a system is in a safe state. The steps of the
algorithm are as follows:
1. Let Work and Finish be vectors of length m and n, respectively. Initialize Work =
Available and Finish[ i ] = false for i = 0, 1, ..., n - 1.
2. Find an index i such that both
   a. Finish[ i ] == false
   b. Needi <= Work
   If no such i exists, go to step 4.
3. Work = Work + Allocationi
   Finish[ i ] = true
   Go to step 2.
4. If Finish[ i ] == true for all i, then the system is in a safe state.
This algorithm may require an order of m x n² operations to determine whether a state is safe.
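The safety check is short enough to write out; a sketch in C (the sizes N and M and the matrix layout are assumptions for illustration, not part of the original notes):

#include <stdbool.h>

#define N 5   /* number of processes (hypothetical)      */
#define M 3   /* number of resource types (hypothetical) */

/* Returns true if the state is safe and fills safe_seq with one safe order. */
bool is_safe(int available[M], int max[N][M], int allocation[N][M], int safe_seq[N])
{
    int need[N][M], work[M];
    bool finish[N] = { false };
    int count = 0;

    for (int i = 0; i < N; i++)
        for (int j = 0; j < M; j++)
            need[i][j] = max[i][j] - allocation[i][j];
    for (int j = 0; j < M; j++)
        work[j] = available[j];

    while (count < N) {
        bool found = false;
        for (int i = 0; i < N; i++) {
            if (finish[i])
                continue;
            bool can_run = true;
            for (int j = 0; j < M; j++)
                if (need[i][j] > work[j]) { can_run = false; break; }
            if (can_run) {                        /* step 2: Needi <= Work         */
                for (int j = 0; j < M; j++)
                    work[j] += allocation[i][j];  /* step 3: Pi finishes, releases */
                finish[i] = true;
                safe_seq[count++] = i;
                found = true;
            }
        }
        if (!found)
            return false;                         /* no runnable process: unsafe   */
    }
    return true;                                  /* all Finish[i] true: safe      */
}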
Problems:
Q1). Construct the wait-for graph that corresponds to the following resource allocation graph
and say whether or not there is deadlock:
Ans: The given resource allocation graph is multi instance with a cycle contained in it.
So, the system may or may not be in a deadlock state.
             Allocation            Need
             R1  R2  R3            R1  R2  R3
P1            0   1   0             1   0   0
P2            1   0   1             0   0   0
Q2). Consider the resource allocation graph in the figure- Find if the system is in a deadlock
state otherwise find a safe sequence.
Ans:
The given resource allocation graph is multi instance with a cycle contained in it.
So, the system may or may not be in a deadlock state.
Using the given resource allocation graph, we have-
Available = [ R1 R2 ] = [ 0 0 ]
Step-01:
Since process P3 does not need any resources, it executes. After execution, process P3 releases its
resources.
Then, Available = [0 0] + [0 1] = [0 1]
Step-02:
With the instances available currently, only the requirement of process P1 can be satisfied.
So, process P1 is allocated the requested resources. It completes its execution and then frees the
resources it holds.
Then, Available = [0 1] + [1 0] = [1 1]
Step-03:
With the instances available currently, the requirement of process P2 can be satisfied.
So, process P2 is allocated the requested resources. It completes its execution and then frees the
resources it holds.
Then, Available = [1 1] + [0 1] = [1 2]
Thus, there exists a safe sequence P3, P1, P2 in which all the processes can be executed.
So, the system is in a safe state.
Q3. Consider the resource allocation graph in the figure. Find if the system is in a deadlock
state; otherwise, find a safe sequence. (2 marks)
Ans:
Available = [ R1 R2 R3 ] = [ 0 0 1 ]
Step-01:
With the instances available currently, only the requirement of process P2 can be satisfied.
So, process P2 is allocated the requested resources. It completes its execution and then frees the
resources it holds.
Then, Available = [0 0 1] + [0 1 0] = [0 1 1]
Step-02:
With the instances available currently, only the requirement of process P0 can be satisfied.
So, process P0 is allocated the requested resources. It completes its execution and then frees the
resources it holds.
Then, Available = [0 1 1] + [1 0 1] = [1 1 2]
Step-03:
With the instances available currently, only the requirement of process P1 can be satisfied.
So, process P1 is allocated the requested resources. It completes its execution and then frees the
resources it holds.
Then, Available = [1 1 2] + [1 1 0] = [2 2 2]
Step-04:
With the instances available currently, the requirement of process P3 can be satisfied.
So, process P3 is allocated the requested resources. It completes its execution and then frees the
resources it holds.
Then, Available = [2 2 2] + [0 1 0] = [2 3 2]
Thus, there exists a safe sequence P2, P0, P1, P3 in which all the processes can be executed.
So, the system is in a safe state.
-------------------------------------------------------------------------------------------------------------------
DEADLOCK DETECTION
1. What is wait for graph? Explain how it is useful for detection of deadlock.
Wait-for Graph:
If all resources have only a single instance, then we can define a deadlock detection algorithm that
uses a variant of the resource-allocation graph, called a wait-for graph.
An edge from Pi to Pj in a wait-for graph implies that process Pi is waiting for process Pj to release
a resource that Pi needs.
An edge Pi -> Pj exists in a wait-for graph if and only if the corresponding resource allocation
graph contains two edges Pi -> Rq and Rq -> Pj for some resource Rq.
Above figure presents a resource-allocation graph and the corresponding wait-for graph.
A deadlock exists in the system if and only if the wait-for graph contains a cycle. To detect
deadlocks, the system needs to maintain the wait-for graph and periodically invoke an algorithm
that searches for a cycle in the graph.
An algorithm to detect a cycle in a graph requires an order of n² operations, where n is the number
of vertices in the graph.
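Deadlock detection in this case therefore reduces to cycle detection by depth-first search; a small C sketch over an adjacency matrix (the size N and the wait_for matrix are assumptions for illustration):

#include <stdbool.h>

#define N 4   /* number of processes (hypothetical) */

bool wait_for[N][N];   /* wait_for[i][j] == true means Pi waits for Pj */

/* color: 0 = unvisited, 1 = on the current DFS path, 2 = finished */
static bool dfs(int u, int color[N])
{
    color[u] = 1;
    for (int v = 0; v < N; v++) {
        if (!wait_for[u][v])
            continue;
        if (color[v] == 1)               /* back edge: cycle => deadlock */
            return true;
        if (color[v] == 0 && dfs(v, color))
            return true;
    }
    color[u] = 2;
    return false;
}

bool deadlock_exists(void)
{
    int color[N] = { 0 };
    for (int u = 0; u < N; u++)
        if (color[u] == 0 && dfs(u, color))
            return true;
    return false;
}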
iii) If a request from process P1 arrives for (1, 0, 2) can the request be granted immediately?
Apply resource request algorithm
Request = 1 0 2
Need of P1 = 1 2 2
Available = 3 2 2
Request <= Need and Request <= Available. Hence, the request can be granted immediately.
--------------------------------------------------------------------------------------------------------------------
2. For the following snapshot, find the safe sequence using Banker's algorithm.
Is the system in safe state?
If a request from process P2 arrives for (002), can the request be granted immediately?
             Allocation        Max            Available
             A  B  C           A  B  C        A  B  C
P0           0  0  2           0  0  4        1  0  2
P1           1  0  0           2  0  1
P2           1  3  5           1  3  7
P3           6  3  2           8  4  2
P4           1  4  3           1  5  7
Is the system in safe state?
First, find the Need matrix (Need = Max - Allocation):
Processes Need
A B C
P0 0 0 2
P1 1 0 1
P2 0 0 2
P3 2 1 0
P4 0 1 4
Apply the safety algorithm: work = Available = 1 0 2
P0 0 0 2 <= 1 0 2 (Yes) Work = 1 0 2 + 0 0 2 = 1 0 4
P2 0 0 2 <= 1 0 4 (Yes) Work = 1 0 4 + 1 3 5 = 2 3 9
P4 0 1 4 <= 2 3 9 (Yes) Work = 2 3 9 + 1 4 3 = 3 7 12
P1 1 0 1 <= 3 7 12 (Yes) Work = 3 7 12 + 1 0 0 = 4 7 12
P3 2 1 0 <= 4 7 12 (Yes) Work = 4 7 12 + 6 3 2 = 10 10 14
The system is in a safe state and a safe sequence is P0, P2, P4, P1, P3.
If a request from process P2 arrives for (0 0 2), can the request be granted immediately?
Apply resource request algorithm
Request = 0 0 2
Need of P2 = 0 0 2
Available = 1 0 2
Request <= Need and Request <= Available. Hence, the request can be granted immediately.
---------------------------------------------------------------------------------------------------------------
Answer:
             Allocation           Max              Available
             A  B  C  D           A  B  C  D       A  B  C  D
P1           0  0  1  2           0  0  1  2       1  5  2  0
P2           1  0  0  0           1  7  5  0
P3           1  3  5  4           2  3  5  6
P4           0  6  3  2           0  6  5  2
P5           0  0  1  4           0  6  5  6
1. What is the need matrix content?
Need Matrix = max - allocation
Processes Need
A B C D
P1 0 0 0 0
P2 0 7 5 0
P3 1 0 0 2
P4 0 0 2 0
P5 0 6 4 2
2. Is the System in safe state?
Apply Safety algorithm on the given system.
m=4 (resources), n=5 (processes)
work = available
work = 1 5 2 0
Process Check need <= work If yes, work = work + allocation
P1 0 0 0 0 < 1 5 2 0 (Yes) Work = 1 5 2 0 + 0 0 1 2 = 1 5 3 2
P2 0 7 5 0 < 1 5 3 2 (No)
P3 1 0 0 2 < 1 5 3 2 (Yes) Work = 1 5 3 2 + 1 3 5 4 = 2 8 8 6
P4 0 0 2 0 < 2 8 8 6 (yes) Work = 2 8 8 6 + 0 6 3 2 = 2 14 11 8
P5 0 6 4 2 < 2 14 11 8 (yes) Work = 2 14 11 8 + 0 0 1 4 = 2 14 12 12
P2 0 7 5 0 < 2 14 12 12 (yes) Work = 2 14 12 12 + 1 0 0 0 = 3 14 12 12
Yes, the system is in a safe state and the safe sequence is P1, P3, P4, P5, P2.
3. If a request from process P2 (0, 4, 2, 0) arrives, can it be granted?
Apply the resource-request algorithm:
Request = 0 4 2 0
Need of P2 = 0 7 5 0
Available = 1 5 2 0
Request <= Need and Request <= Available (component-wise, 0 4 2 0 <= 1 5 2 0), so we pretend to
grant the request: Available becomes 1 1 0 0 and Need of P2 becomes 0 3 3 0. Running the safety
algorithm on this new state still yields a safe sequence (for example P1, P3, P4, P5, P2), so the
request can be granted immediately.
Processes Need
A B C
P1 1 4 5
P2 2 3 0
P3 2 2 0
Apply Safety algorithm on the given system.
m=3 (resources), n=3 (processes)
work = available work = 2 3 0
Process Check need <= work If yes, work = work + allocation
P1 1 4 5 <= 2 3 0 (no)
P2 2 3 0 <= 2 3 0 (yes) Work = 2 3 0 + 2 0 3 = 4 3 3
P3 2 2 0 <= 4 3 3 (yes) Work = 4 3 3 + 1 2 4 = 5 5 7
P1 1 4 5 <= 5 5 7 (yes) Work = 5 5 7 + 2 2 3 = 7 7 10
Yes, the system is in a safe state and the safe sequence is P2, P3, P1.
--------------------------------------------------------------------------------------------------------------
5. System consists of five jobs (J1, J2, J3, J4, J5) and three resources (R1, R2, R3). Resource
type R1 has 10 instances, resource type R2 has 5 instances and R3 has 7 instances. The
following snapshot of the system has been taken :
Jobs         Allocation        Maximum          Available
             R1 R2 R3          R1 R2 R3         R1 R2 R3
J1           0  1  0           7  5  3          3  3  2
J2           2  0  0           3  2  2
J3           3  0  1           9  0  2
J4           2  1  1           2  2  2
J5           0  0  2           4  3  3
Find need matrix and calculate the safe sequence by using Banker’s algorithm. Mention the above
system is safe or not safe.
Answer:
Need = max - allocation
Processes Need
R1 R2 R3
J1 7 4 3
J2 1 2 2
J3 6 0 1
J4 0 1 1
J5 4 3 1
Apply Safety algorithm on the given system.
m=3 (resources), n=5 (processes)
work = available
work = 3 3 2
Process Check need <= work If yes, work = work + allocation
J1 7 4 3 <= 3 3 2 (No)
J2 1 2 2 <= 3 3 2 (Yes) Work = 3 3 2 + 2 0 0 = 5 3 2
J3 6 0 1 <= 5 3 2 (No)
J4 0 1 1 <= 5 3 2 (yes) Work = 5 3 2 + 2 1 1 = 7 4 3
J5 4 3 1 <= 7 4 3 (yes) Work = 7 4 3 + 0 0 2 = 7 4 5
J1 7 4 3 <= 7 4 5(yes) Work = 7 4 5 + 0 1 0 = 7 5 5
J3 6 0 1 <= 7 5 5 (yes) Work = 7 5 5 + 3 0 1 = 10 5 6
Yes, the system is in a safe state and the safe sequence is J2, J4, J5, J1, J3.