
Unit 3

Process management:

Process management can help organizations improve their operational efficiency, reduce costs, increase
customer satisfaction, and maintain compliance with regulatory requirements. It involves analyzing the
performance of existing processes, identifying bottlenecks, and making changes to optimize the process
flow. Process management includes various tools and techniques such as process mapping, process
analysis, process improvement, process automation, and process control. By applying these tools and
techniques, organizations can streamline their processes, eliminate waste, and improve productivity.
Overall, process management is a critical aspect of modern business operations and can help
organizations achieve their goals and stay competitive in today’s rapidly changing marketplace.

If the operating system supports multiple users, the services in this category are very important. In this regard, the operating system has to keep track of all the created processes, schedule them, and dispatch them one after another, while each user feels that they have full control of the CPU.
Some of the system calls in this category are as follows (a minimal POSIX sketch of the first three follows the list):
1. Create a child process identical to the parent
2. Terminate a process
3. Wait for a child process to terminate
4. Change the priority of a process
5. Block a process
6. Ready a process
7. Dispatch a process
8. Suspend a process
9. Resume a process
10. Delay a process
11. Fork a process
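
As a concrete illustration, here is a minimal POSIX sketch of the first three calls in the list (fork creates the child, exit terminates it, and waitpid waits for it):

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();               /* 1. create a child process identical to the parent */
    if (pid < 0) {
        perror("fork");
        exit(1);
    }
    if (pid == 0) {                   /* this branch runs in the child's copy */
        printf("child %d running\n", getpid());
        exit(0);                      /* 2. terminate a process */
    }
    waitpid(pid, NULL, 0);            /* 3. wait for a child process to terminate */
    printf("parent %d: child finished\n", getpid());
    return 0;
}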

Process Synchronization:

Process Synchronization is the coordination of execution of multiple processes in a multi-process system to ensure that they access shared resources in a controlled and predictable manner. It aims to resolve the problem of race conditions and other synchronization issues in a concurrent system.
The main objective of process synchronization is to ensure that multiple processes access shared
resources without interfering with each other and to prevent the possibility of inconsistent data due to
concurrent access. To achieve this, various synchronization techniques such as semaphores, monitors,
and critical sections are used.
In a multi-process system, synchronization is necessary to ensure data consistency and integrity, and to
avoid the risk of deadlocks and other synchronization problems. Process synchronization is an
important aspect of modern operating systems, and it plays a crucial role in ensuring the correct and
efficient functioning of multi-process systems.
On the basis of synchronization, processes are categorized as one of the following two types:
 Independent Process: The execution of one process does not affect the execution of other
processes.
 Cooperative Process: A process that can affect or be affected by other processes executing in the
system.

Concurrency Conditions:
Concurrency is the execution of multiple instruction sequences at the same time. It happens in the operating system when several process threads run in parallel. Running process threads communicate with each other through shared memory or message passing. Because concurrency involves sharing of resources, it can lead to problems such as deadlocks and resource starvation.
Concurrency underlies techniques such as coordinating the execution of processes, memory allocation, and execution scheduling so as to maximize throughput.

There are several motivations for allowing concurrent execution:


 Physical resource sharing: multiuser environments must share limited hardware resources.
 Logical resource sharing: several processes may need the same piece of information (e.g., a shared file).
 Computation speedup: parallel execution.
 Modularity: dividing system functions into separate processes.

Principles of Concurrency :
Both interleaved and overlapped processes can be viewed as examples of concurrent processes, and both present the same problems.
The relative speed of execution cannot be predicted. It depends on the following:
 The activities of other processes
 The way operating system handles interrupts
 The scheduling policies of the operating system

Problems in Concurrency :
 Sharing global resources –
Sharing of global resources safely is difficult. If two processes both make use of a global
variable and both perform read and write on that variable, then the order in which various read
and write are executed is critical.
 Optimal allocation of resources –
It is difficult for the operating system to manage the allocation of resources optimally.
 Locating programming errors –
It is very difficult to locate a programming error because reports are usually not reproducible.
 Locking the channel –
It may be inefficient for the operating system to simply lock the channel and prevent its use by other processes.

Advantages of Concurrency :
 Running of multiple applications –
Concurrency enables multiple applications to run at the same time.
 Better resource utilization –
Resources left unused by one application can be used by other applications.
 Better average response time –
Without concurrency, each application has to run to completion before the next one can start.
 Better performance –
Concurrency enables better performance from the operating system. When one application uses only the processor and another application uses only the disk drive, the time to run both applications concurrently to completion is shorter than the time to run each application consecutively.

Drawbacks of Concurrency :
 It is required to protect multiple applications from one another.
 It is required to coordinate multiple applications through additional mechanisms.
 Additional performance overheads and complexities in operating systems are required for
switching among applications.
 Sometimes running too many applications concurrently leads to severely degraded
performance.

Issues of Concurrency :
 Non-atomic –
Operations that are non-atomic and can be interrupted by multiple processes can cause problems.
 Race conditions –
A race condition occurs when the outcome depends on which of several processes gets to a point first.
 Blocking –
Processes can block waiting for resources. A process could be blocked for a long period of time waiting for input from a terminal; if the process is required to periodically update some data, this is very undesirable.
 Starvation –
Starvation occurs when a process never obtains the service it needs to make progress.
 Deadlock –
Deadlock occurs when two processes are blocked waiting on each other, so neither can proceed.

Race Condition:
When more than one process executes the same code, or accesses the same memory or shared variable, there is a possibility that the output or the value of the shared variable ends up wrong; all of the processes are racing to have their result be the final one, and this condition is known as a race condition. Several processes access and manipulate the same data concurrently, and the outcome depends on the particular order in which the accesses take place. A race condition is a situation that may occur inside a critical section: it happens when the result of multiple threads executing in the critical section differs according to the order in which the threads execute. Race conditions in critical sections can be avoided if the critical section is treated as an atomic instruction. Proper thread synchronization using locks or atomic variables can also prevent race conditions.

Example:

Let’s understand the race condition better with an example.

Say there are two processes P1 and P2 that share a common variable (shared = 10), both sitting in the ready queue and waiting for their turn to execute. Suppose P1 comes under execution first: the CPU copies the shared variable into P1's local variable (X = 10) and increments it by 1 (X = 11). When the CPU reaches the line sleep(1), it switches from P1 to process P2 in the ready queue, and P1 goes into a waiting state for 1 second.
The CPU then executes P2 line by line: it copies the common variable (shared = 10) into P2's local variable (Y = 10) and decrements Y by 1 (Y = 9). When the CPU reaches sleep(1), P2 goes into a waiting state and the CPU sits idle, since the ready queue is empty. After P1's second elapses and it re-enters the ready queue, the CPU resumes P1 and executes its remaining line of code (storing the local variable X = 11 back into the common variable, so shared = 11). The CPU then idles again until P2's second elapses; when P2 re-enters the ready queue, the CPU executes P2's remaining line (storing the local variable Y = 9 back into the common variable, so shared = 9).
Initially shared = 10

Process 1           Process 2
int X = shared      int Y = shared
X++                 Y--
sleep(1)            sleep(1)
shared = X          shared = Y

Note: We expect the final value of the common variable (shared) after the execution of P1 and P2 to be 10 (P1 increments shared by 1 and P2 decrements it by 1, so it should come back to 10). Instead we get an undesired value, due to the lack of proper synchronization.

Actual meaning of race condition

 If P1's final store happens first (first P1 -> then P2), we get the value of the common variable (shared) = 9.
 If P2's final store happens first (first P2 -> then P1), we get the final value of the common variable (shared) = 11.
 Here the values 9 and 11 are racing: if we execute these two processes on a computer system, sometimes we will get 9 and sometimes 11 as the final value of the common variable (shared). This phenomenon is called a race condition.
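
The same interleaving can be reproduced with two threads. Below is a minimal sketch, assuming POSIX threads; sched_yield() stands in for the sleep(1) in the example, encouraging a context switch between reading the shared variable and writing it back:

#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static int shared = 10;              /* the common variable from the example */

static void *p1(void *arg)
{
    int x = shared;                  /* X = shared */
    x++;                             /* X++        */
    sched_yield();                   /* stand-in for sleep(1): invite a switch */
    shared = x;                      /* shared = X */
    return NULL;
}

static void *p2(void *arg)
{
    int y = shared;                  /* Y = shared */
    y--;                             /* Y--        */
    sched_yield();
    shared = y;                      /* shared = Y */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, p1, NULL);
    pthread_create(&t2, NULL, p2, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared = %d\n", shared); /* may print 9, 10, or 11 depending on the schedule */
    return 0;
}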

Critical Section Problem:


A critical section is a code segment that can be accessed by only one process at a time. The critical section contains shared variables that need to be synchronized to maintain the consistency of data variables. So the critical section problem means designing a way for cooperating processes to access shared resources without creating data inconsistencies.
In the entry section, a process requests entry into the critical section.
Any solution to the critical section problem must satisfy three requirements:
 Mutual Exclusion: If a process is executing in its critical section, then no other process is allowed to execute in the critical section.
 Progress: If no process is executing in the critical section and other processes are waiting outside it, then only those processes that are not executing in their remainder section can participate in deciding which will enter the critical section next, and the selection cannot be postponed indefinitely.
 Bounded Waiting: A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.

Software and hardware solutions
Peterson’s Solution:
Peterson’s Solution is a classical software-based solution to the critical section problem. In Peterson’s solution, we have two shared variables:
 boolean flag[i]: initialized to FALSE; initially no process is interested in entering the critical section.
 int turn: indicates whose turn it is to enter the critical section.
Peterson’s Solution preserves all three conditions:
 Mutual Exclusion is assured, as only one process can access the critical section at any time.
 Progress is also assured, as a process outside the critical section does not block other processes from entering the critical section.
 Bounded Waiting is preserved, as every process gets a fair chance.
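
The disadvantages below refer to the statement while(flag[j] && turn == j); — here is a minimal sketch of Peterson's algorithm for two processes i and j = 1 - i (the function names enter_region and leave_region are illustrative). As noted below, on modern CPUs this sketch would additionally need memory barriers, since compilers and processors may reorder these accesses:

/* shared between the two processes (i is 0 or 1, j = 1 - i) */
volatile int flag[2] = {0, 0};   /* flag[i]: process i wants to enter */
volatile int turn = 0;           /* whose turn it is when both want in */

void enter_region(int i)
{
    int j = 1 - i;               /* the other process */
    flag[i] = 1;                 /* announce interest */
    turn = j;                    /* politely give the other process priority */
    while (flag[j] && turn == j)
        ;                        /* busy-wait until it is safe to enter */
}

void leave_region(int i)
{
    flag[i] = 0;                 /* no longer interested */
}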

Disadvantages of Peterson’s Solution


 It involves busy waiting. (In Peterson’s solution, the statement “while(flag[j] && turn == j);” is responsible for this. Busy waiting is not favored because it wastes CPU cycles that could be used to perform other tasks.)
 It is limited to 2 processes.
 Peterson’s solution cannot be used in modern CPU architectures.

Semaphores:
A semaphore is a signaling mechanism: a thread that is waiting on a semaphore can be signaled by another thread. This is different from a mutex, as a mutex can be signaled only by the thread that called the wait function.
A semaphore uses two atomic operations, wait and signal for process synchronization.
A Semaphore is an integer variable, which can be accessed only through two operations wait() and signal().
There are two types of semaphores: Binary Semaphores and Counting Semaphores.
 Binary Semaphores: They can only be either 0 or 1. They are also known as mutex locks, as the locks can provide mutual exclusion. All the processes can share the same mutex semaphore, which is initialized to 1. A process then waits until the semaphore's value is 1, sets it to 0, and enters its critical section. When it completes its critical section, it resets the value of the mutex semaphore to 1 so that some other process can enter its critical section.
 Counting Semaphores: They can have any value and are not restricted to a certain domain. They can be
used to control access to a resource that has a limitation on the number of simultaneous accesses. The
semaphore can be initialized to the number of instances of the resource. Whenever a process wants to
use that resource, it checks if the number of remaining instances is more than zero, i.e., the process has
an instance available. Then, the process can enter its critical section thereby decreasing the value of the
counting semaphore by 1. After the process is over with the use of the instance of the resource, it can
leave the critical section thereby adding 1 to the number of available instances of the resource.
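
As a concrete illustration, here is a minimal sketch using POSIX semaphores, where sem_wait corresponds to wait() and sem_post to signal(); the pool size of 4 and the function names are illustrative assumptions:

#include <semaphore.h>

sem_t mutex;                         /* binary semaphore: a mutex lock    */
sem_t slots;                         /* counting semaphore: resource pool */

void init(void)
{
    sem_init(&mutex, 0, 1);          /* binary: initialized to 1          */
    sem_init(&slots, 0, 4);          /* counting: 4 instances available   */
}

void use_resource(void)
{
    sem_wait(&slots);                /* wait(): claim one instance          */
    sem_wait(&mutex);                /* wait(): enter the critical section  */
    /* ... critical section: use the shared resource ... */
    sem_post(&mutex);                /* signal(): leave the critical section */
    sem_post(&slots);                /* signal(): return the instance       */
}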

Advantages of Process Synchronization


 Ensures data consistency and integrity
 Avoids race conditions
 Prevents inconsistent data due to concurrent access
 Supports efficient and effective use of shared resources

Disadvantages of Process Synchronization


 Adds overhead to the system
 This can lead to performance degradation
 Increases the complexity of the system
 Can cause deadlocks if not implemented properly.

Conditional critical regions and monitors:


In an operating system, a critical region refers to a section of code or a data structure that must be accessed exclusively by one process or thread at a time. Critical regions are used to prevent concurrent access to shared resources, such as variables, data structures, or devices, in order to maintain data integrity and avoid race conditions.
The concept of critical regions is closely tied to the need for synchronization and mutual exclusion in multi-threaded or multi-process environments. Without proper synchronization mechanisms, concurrent access to shared resources can lead to data inconsistencies, unpredictable behavior, and errors.
To implement mutual exclusion and protect critical regions, operating systems provide synchronization mechanisms such as locks, semaphores, and monitors. These mechanisms ensure that only one process or thread can access the critical region at any given time, while other processes or threads are prevented from entering until the current occupant releases the lock.
Critical Region Characteristics and Requirements
Following are the characteristics and requirements for critical regions in an operating system.
1. Mutual Exclusion
Only one process or thread can access the critical region at a time. This ensures that concurrent access does not result in data corruption or inconsistent states.
2. Atomicity
The execution of code within a critical region is treated as an indivisible unit of execution: once a process or thread enters a critical region, it completes its execution without interruption.
3. Synchronization
Processes or threads waiting to enter a critical region are synchronized to prevent simultaneous access. They typically employ synchronization primitives, such as locks or semaphores, to control access and enforce mutual exclusion.
4. Minimal Time Spent in Critical Regions
It is preferable to minimize the time spent inside critical regions, to reduce contention and improve system performance. Lengthy execution within critical regions increases the waiting time for other processes or threads.
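
Monitors package these requirements up: the lock and the data it protects live together, and a condition variable lets a thread wait inside the critical region until some condition holds. A minimal monitor-style sketch using POSIX threads (the account example is illustrative, not from the original):

#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  enough = PTHREAD_COND_INITIALIZER;
static int balance = 0;              /* the shared data the region protects */

void deposit(int amount)
{
    pthread_mutex_lock(&lock);       /* enter the critical region */
    balance += amount;
    pthread_cond_signal(&enough);    /* wake one thread waiting on the condition */
    pthread_mutex_unlock(&lock);     /* leave the critical region */
}

void withdraw(int amount)
{
    pthread_mutex_lock(&lock);
    while (balance < amount)         /* condition not met: wait, releasing the lock */
        pthread_cond_wait(&enough, &lock);
    balance -= amount;
    pthread_mutex_unlock(&lock);
}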

Classical concurrency problems:


Dining philosophers problem:
The dining philosophers problem is a classical synchronization problem: five philosophers sit around a circular table, and each alternates between thinking and eating. A bowl of noodles is placed at the center of the table, along with five chopsticks, one between each pair of philosophers. To eat, a philosopher needs both the chopstick to their left and the chopstick to their right. A philosopher can eat only if both the immediate left and right chopsticks are available; if they are not, the philosopher puts down whichever chopstick they hold (left or right) and starts thinking again.

The dining philosophers problem demonstrates a large class of concurrency-control problems, hence it is a classic synchronization problem.

Let's understand the Dining Philosophers Problem with the code below. The five philosophers are represented as P0, P1, P2, P3, and P4 and the five chopsticks as C0, C1, C2, C3, and C4.

void philosopher(int i)
{
    while (1)
    {
        take_chopstick(i);
        take_chopstick((i + 1) % 5);

        /* EATING THE NOODLE */

        put_chopstick(i);
        put_chopstick((i + 1) % 5);

        /* THINKING */
    }
}

Let's discuss the above code:

Suppose philosopher P0 wants to eat: it enters the philosopher() function and executes take_chopstick(i), thereby holding chopstick C0; it then executes take_chopstick((i+1) % 5), thereby holding chopstick C1 (since i = 0, (0 + 1) % 5 = 1).

Similarly, suppose philosopher P1 now wants to eat: it enters the philosopher() function and executes take_chopstick(i), thereby holding chopstick C1; it then executes take_chopstick((i+1) % 5), thereby holding chopstick C2 (since i = 1, (1 + 1) % 5 = 2).

But chopstick C1 is not actually available, as it has already been taken by philosopher P0; hence the above code generates problems and produces a race condition.

The solution of the Dining Philosophers Problem

We use a semaphore to represent each chopstick, and this acts as the solution of the Dining Philosophers Problem. The wait and signal operations are used in the solution: to pick up a chopstick the wait operation is executed, and to release a chopstick the signal operation is executed.

Semaphore: A semaphore is an integer variable S that, apart from initialization, is accessed only through two standard atomic operations - wait and signal, whose definitions are as follows:

wait(S)
{
    while (S <= 0)
        ;          /* busy-wait until S becomes positive */
    S--;
}

signal(S)
{
    S++;
}
From the above definition of wait, it is clear that if the value of S <= 0 the caller spins in a loop (because of the lone semicolon after the while condition), effectively trapped. The job of signal is simply to increment the value of S.

The chopsticks are represented as an array of five semaphores, declared as shown below -

semaphore C[5];

Initially, each semaphore C0, C1, C2, C3, and C4 is initialized to 1, as the chopsticks are on the table and not picked up by any philosopher.

Let's modify the above code of the Dining Philosophers Problem by using the semaphore operations wait and signal on the chopstick array; the desired code looks like:

void philosopher(int i)
{
    while (1)
    {
        wait(C[i]);
        wait(C[(i + 1) % 5]);

        /* EATING THE NOODLE */

        signal(C[i]);
        signal(C[(i + 1) % 5]);

        /* THINKING */
    }
}

In the above code, the wait operation is first performed on C[i] and C[(i+1) % 5]. This shows that philosopher i has picked up the chopsticks to its left and right. The eating function is performed after that.

On completion of eating, philosopher i performs the signal operation on C[i] and C[(i+1) % 5]. This shows that philosopher i has finished eating and put down both the left and right chopsticks. Finally, the philosopher starts thinking again.
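
A runnable rendering of this scheme, assuming POSIX threads and semaphores, might look like the sketch below. Note that if all five philosophers grab their first chopstick at the same instant, the simple scheme above can deadlock; this sketch therefore has the last philosopher pick up its chopsticks in the opposite order, a common asymmetric variant:

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
#include <unistd.h>

#define N 5
sem_t C[N];                              /* one semaphore per chopstick, each initialized to 1 */

void *philosopher(void *arg)
{
    int i = *(int *)arg;
    int first = i, second = (i + 1) % N;
    if (i == N - 1) {                    /* last philosopher picks up in the opposite order, */
        first = (i + 1) % N;             /* preventing the cycle where everyone holds        */
        second = i;                      /* exactly one chopstick                            */
    }
    while (1) {
        sem_wait(&C[first]);             /* wait(): pick up the first chopstick  */
        sem_wait(&C[second]);            /* wait(): pick up the second chopstick */
        printf("philosopher %d eating\n", i);
        sleep(1);                        /* EATING THE NOODLE */
        sem_post(&C[first]);             /* signal(): put down the chopsticks */
        sem_post(&C[second]);
        printf("philosopher %d thinking\n", i);
    }
    return NULL;
}

int main(void)
{
    pthread_t t[N];
    int id[N];
    for (int i = 0; i < N; i++)
        sem_init(&C[i], 0, 1);
    for (int i = 0; i < N; i++) {
        id[i] = i;
        pthread_create(&t[i], NULL, philosopher, &id[i]);
    }
    for (int i = 0; i < N; i++)
        pthread_join(t[i], NULL);
    return 0;
}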

READERS WRITERS PROBLEM:

The readers-writers problem is a classical process synchronization problem. It relates to a data set, such as a file, that is shared between more than one process at a time. Among these processes, some are Readers, which can only read the data set and do not perform any updates, and some are Writers, which can both read and write the data set.
The readers-writers problem is used for managing synchronization among the various reader and writer processes so that no problems, i.e. no inconsistencies, arise in the data set.

Let's understand with an example - if two or more readers want to access the file at the same point in time, there is no problem. However, in other situations, such as when two writers or one reader and one writer want to access the file at the same point in time, problems may occur. Hence the task is to design the code in such a manner that if one reader is reading, no writer is allowed to update at the same time; if one writer is writing, no reader is allowed to read the file at that time; and if one writer is updating the file, no other writer is allowed to update it at the same time. Multiple readers, however, can access the object at the same time.

Let us understand the possibility of reading and writing with the table given below:

TABLE 1

Case Process 1 Process 2 Allowed / Not Allowed

Case 1 Writing Writing Not Allowed

Case 2 Reading Writing Not Allowed

Case 3 Writing Reading Not Allowed

Case 4 Reading Reading Allowed

The solution of readers and writers can be implemented using binary semaphores.

We use two binary semaphores, "write" and "mutex". The wait and signal operations were defined above in the dining philosophers section: wait(S) spins while S <= 0 and then decrements S, while signal(S) increments S.
The code below provides the solution to the reader-writer problem; the reader and writer process codes are given as follows -

Code for Reader Process

The code of the reader process is given below -

static int readcount = 0;

wait(mutex);
readcount++;                  // on each entry of a reader, increment readcount
if (readcount == 1)
{
    wait(write);              // first reader locks out writers
}
signal(mutex);

/* READ THE FILE */

wait(mutex);
readcount--;                  // on every exit of a reader, decrement readcount
if (readcount == 0)
{
    signal(write);            // last reader lets writers back in
}
signal(mutex);

In the above reader code, mutex and write are semaphores with an initial value of 1, and the readcount variable has an initial value of 0. Both mutex and write are shared between the reader and writer process code: semaphore mutex ensures mutual exclusion over readcount, and semaphore write handles the writing mechanism.

The readcount variable denotes the number of readers accessing the file concurrently. The moment readcount becomes 1, the wait operation is performed on the write semaphore, decreasing its value by one. This means a writer is no longer allowed to access the file. On completion of the read operation, readcount is decremented by one. When readcount becomes 0, the signal operation on write permits a writer to access the file.

Code for Writer Process

The code that defines the writer process is given below:

wait(write);
/* WRITE INTO THE FILE */
signal(write);

If a writer wishes to access the file, the wait operation is performed on the write semaphore, which decrements write to 0 so that no other writer can access the file. On completion of the writing job, the writer performs the signal operation on write.
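
Putting the reader and writer code together, a minimal compilable sketch using POSIX semaphores could look like the following (write is renamed write_sem here only to avoid clashing with the POSIX write() call):

#include <pthread.h>
#include <semaphore.h>

sem_t mutex, write_sem;              /* both initialized to 1 */
int readcount = 0;

void init(void)
{
    sem_init(&mutex, 0, 1);
    sem_init(&write_sem, 0, 1);
}

void *reader(void *arg)
{
    sem_wait(&mutex);                /* protect readcount */
    if (++readcount == 1)
        sem_wait(&write_sem);        /* first reader locks out writers */
    sem_post(&mutex);

    /* ... READ THE FILE ... */

    sem_wait(&mutex);
    if (--readcount == 0)
        sem_post(&write_sem);        /* last reader lets writers back in */
    sem_post(&mutex);
    return NULL;
}

void *writer(void *arg)
{
    sem_wait(&write_sem);
    /* ... WRITE INTO THE FILE ... */
    sem_post(&write_sem);
    return NULL;
}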

Let's see the proof of each case mentioned in Table 1

CASE 1: WRITING - WRITING → NOT ALLOWED. That is, when two or more processes want to write, it is not allowed. Let us check that our code works accordingly.

Explanation:
The initial value of semaphore write = 1.
Suppose two processes P0 and P1 want to write; let P0 enter the writer code first. The moment P0 enters,
wait(write); decreases semaphore write by one, so now write = 0,
and P0 continues to WRITE INTO THE FILE.
Now suppose P1 wants to write at the same time (will it be allowed?). P1 executes wait(write); since the value of write is already 0, by the definition of wait it spins in a loop (i.e., it is trapped), so P1 can never write while P0 is writing.
Now suppose P0 has finished its task; it executes signal(write), which increases semaphore write by 1, so write = 1.
If P1 now wants to write, it can, since semaphore write > 0.
This proves that if one process is writing, no other process is allowed to write.

CASE 2: READING - WRITING → NOT ALLOWED. That is, when one or more processes are reading the file, writing by another process is not allowed. Let us check that our code works accordingly.

Explanation:
Initial value of semaphore mutex = 1 and variable readcount = 0.
Suppose two processes P0 and P1 are in the system; P0 wants to read while P1 wants to write. P0 enters the reader code first. The moment P0 enters:
wait(mutex); decreases semaphore mutex by 1, so mutex = 0.
readcount is incremented by 1, so readcount = 1. Next:
if (readcount == 1)   // evaluates to TRUE
{
    wait(write);      // decrements write by 1, i.e. write = 0, which ensures that while
                      // one or more readers are reading, no writer is allowed
}
signal(mutex);        // increases semaphore mutex by 1, so mutex = 1, i.e. other readers may enter

And the reader continues to READ THE FILE.

Suppose now a writer wants to enter its code. Since the first reader has executed wait(write), the value of write is 0; therefore the writer's wait(write) spins in a loop and no writer is allowed.
This proves that if one process is reading, no other process is allowed to write.
Now suppose P0 wants to stop reading and exit; the following sequence of instructions takes place:
wait(mutex);          // decreases mutex by 1, i.e. mutex = 0
readcount--;          // readcount = 0, i.e. no one is currently reading
if (readcount == 0)   // evaluates to TRUE
{
    signal(write);    // increases write by one, i.e. write = 1
}
signal(mutex);        // increases mutex by one, i.e. mutex = 1

Now if any writer wants to write, it can do so, since write > 0.

CASE 3: WRITING - READING → NOT ALLOWED. That is, if one process is writing into the file, reading by another process is not allowed. Let us check that our code works accordingly.

Explanation:
The initial value of semaphore write = 1.
Suppose two processes P0 and P1 are in the system; P0 wants to write while P1 wants to read. P0 enters the writer code first. The moment P0 enters:
wait(write); decreases semaphore write by 1, so write = 0,
and P0 continues to WRITE INTO THE FILE.
Now suppose P1 wants to read at the same time (will it be allowed?). P1 enters the reader code, with the initial value of semaphore mutex = 1 and variable readcount = 0:
wait(mutex); decreases semaphore mutex by 1, so mutex = 0.
readcount is incremented by 1, so readcount = 1. Next:
if (readcount == 1)   // evaluates to TRUE
{
    wait(write);      // since the value of write is already 0, P1 spins here and is not
                      // allowed to proceed, which proves that if one writer is writing,
                      // no reader is allowed
}
This proves that if one process is writing, no other process is allowed to read.
The moment the writer stops writing and wants to exit, it executes:
signal(write); which increases semaphore write by 1, so write = 1.
If P1 now wants to read, it can, since semaphore write > 0.

CASE 4: READING - READING → ALLOWED. That is, when one process is reading the file and other processes are willing to read, they are all allowed, i.e., reading - reading is not mutually exclusive. Let us check that our code works accordingly.

Explanation:
Initial value of semaphore mutex = 1 and variable readcount = 0.
Suppose three processes P0, P1, and P2 are in the system and all three want to read. Let P0 enter the reader code first. The moment P0 enters:
wait(mutex); decreases semaphore mutex by 1, so mutex = 0.
readcount is incremented by 1, so readcount = 1. Next:
if (readcount == 1)   // evaluates to TRUE
{
    wait(write);      // decrements write by 1, i.e. write = 0, so that no writer is
                      // allowed while one or more readers are reading
}
signal(mutex);        // increases semaphore mutex by 1, so mutex = 1, i.e. other readers may enter

And P0 continues to READ THE FILE.

→ Now P1 wants to enter the reader code.
Current value of semaphore mutex = 1 and variable readcount = 1. The moment P1 enters:
wait(mutex); decreases semaphore mutex by 1, so mutex = 0.
readcount is incremented by 1, so readcount = 2. Next:
if (readcount == 1)   // evaluates to FALSE; the if block is not entered
signal(mutex);        // increases semaphore mutex by 1, so mutex = 1, i.e. other readers may enter

Now P0 and P1 continue to READ THE FILE.

→ Now P2 wants to enter the reader code.
Current value of semaphore mutex = 1 and variable readcount = 2. The moment P2 enters:
wait(mutex); decreases semaphore mutex by 1, so mutex = 0.
readcount is incremented by 1, so readcount = 3. Next:
if (readcount == 1)   // evaluates to FALSE; the if block is not entered
signal(mutex);        // increases semaphore mutex by 1, so mutex = 1

Now P0, P1, and P2 continue to READ THE FILE.

Suppose now a writer wants to enter its code. Since the first reader P0 has executed wait(write), the value of write is 0; therefore the writer's wait(write) spins in a loop and no writer is allowed.

Now suppose P0 wants to leave the system (stop reading):
wait(mutex);          // decreases semaphore mutex by 1, so mutex = 0
readcount--;          // on every exit of a reader, readcount is decremented, i.e. readcount = 2
if (readcount == 0)   // evaluates to FALSE; the if block is not entered
signal(mutex);        // increases semaphore mutex by 1, so mutex = 1, i.e. other readers may exit

→ Now suppose P1 wants to leave the system (stop reading):
wait(mutex);          // mutex = 0
readcount--;          // readcount = 1
if (readcount == 0)   // evaluates to FALSE; the if block is not entered
signal(mutex);        // mutex = 1

→ Now suppose P2 (the last reader) wants to leave the system (stop reading):
wait(mutex);          // mutex = 0
readcount--;          // readcount = 0
if (readcount == 0)   // evaluates to TRUE; the if block is entered
{
    signal(write);    // increments semaphore write by one, so write = 1; since P2 was the
                      // last reading process and is now leaving, making write = 1 allows
                      // a writer to write
}
signal(mutex);        // mutex = 1

The above explanation proves that when one or more processes are willing to read simultaneously, they are all allowed.

Inter Process Communication (IPC)


Inter-process communication (IPC) is a mechanism that allows processes to communicate with each
other and synchronize their actions. The communication between these processes can be seen as a
method of co-operation between them. Processes can communicate with each other through both:

1. Shared Memory
2. Message passing
(A figure in the original shows the basic structure of communication between processes via the shared memory method and via the message passing method; it is not reproduced here.)

An operating system can implement both methods of communication. First, we will discuss the shared
memory methods of communication and then message passing. Communication between processes
using shared memory requires processes to share some variable, and it completely depends on how the
programmer will implement it. One way of communication using shared memory can be imagined like
this: Suppose process1 and process2 are executing simultaneously, and they share some resources or
use some information from another process. Process1 generates information about certain computations
or resources being used and keeps it as a record in shared memory. When process2 needs to use the
shared information, it will check in the record stored in shared memory and take note of the information
generated by process1 and act accordingly. Processes can use shared memory for extracting
information as a record from another process as well as for delivering any specific information to other
processes.
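
As a concrete illustration, here is a minimal POSIX shared-memory sketch; the object name /record and the stored string are illustrative, and for brevity one process plays both roles (a real process2 would shm_open and mmap the same object):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    /* process1: create and map a shared memory object, then write a record */
    int fd = shm_open("/record", O_CREAT | O_RDWR, 0600);
    ftruncate(fd, 4096);                       /* size the shared region */
    char *mem = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, 0);
    strcpy(mem, "result=42");                  /* the record kept in shared memory */

    /* process2 would open "/record" the same way and read the record */
    printf("shared record: %s\n", mem);

    munmap(mem, 4096);
    shm_unlink("/record");                     /* clean up the object */
    return 0;
}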

Shared Memory IPC:


Producer consumer problem:

The Producer-Consumer problem is a classical multi-process synchronization problem; that is, we are trying to achieve synchronization between more than one process.

There is one Producer in the producer-consumer problem, which produces items, and one Consumer, which consumes the items produced by the Producer. The same fixed-size memory buffer is shared by both the producer and the consumer.

The task of the Producer is to produce an item, put it into the memory buffer, and start producing again. The task of the Consumer is to consume items from the memory buffer.

Below are a few points considered as the problems that occur in Producer-Consumer:

o The producer should produce data only when the buffer is not full. If the buffer is full, the producer is not allowed to store any data into the memory buffer.
o Data can be consumed by the consumer only if the memory buffer is not empty. If the buffer is empty, the consumer is not allowed to take any data from the memory buffer.
o The producer and the consumer should not be allowed to access the memory buffer at the same time.

Let's see the code for the above problem.
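
The original presents the producer and consumer code as figures that are not reproduced here. The following is a reconstruction consistent with the explanation below, assuming a shared buffer of size n, indices in and out, a shared count, and hypothetical produce_item/consume_item helpers:

typedef int item;            /* hypothetical item type   */
#define n 8                  /* hypothetical buffer size */

item buffer[n];
int in = 0, out = 0;         /* next empty slot / first filled slot */
int count = 0;               /* number of elements in the buffer (m[count]) */

void producer(void)
{
    while (1) {
        item p = produce_item();   /* hypothetical helper */
        while (count == n)
            ;                      /* buffer full: wait */
        buffer[in] = p;
        in = (in + 1) % n;
        count = count + 1;         /* Rp = m[count]; Rp++; m[count] = Rp */
    }
}

void consumer(void)
{
    while (1) {
        while (count == 0)
            ;                      /* buffer empty: wait */
        item c = buffer[out];
        out = (out + 1) % n;
        count = count - 1;         /* Rc = m[count]; Rc--; m[count] = Rc */
        consume_item(c);           /* hypothetical helper */
    }
}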

Let's understand the above producer and consumer code.

Before starting the explanation of the code, first understand the few terms used in it:

1. "in", used in the producer code, represents the next empty buffer slot.
2. "out", used in the consumer code, represents the first filled buffer slot.
3. count keeps the count of the number of elements in the buffer.
4. The update of count is divided into three lines of code in both the producer and the consumer code.

Looking at the producer code first:

-- Rp is a register which holds the value of m[count]
-- Rp is incremented (as an element has been added to the buffer)
-- the incremented value of Rp is stored back to m[count]

Similarly, for the consumer code:

-- Rc is a register which holds the value of m[count]
-- Rc is decremented (as an element has been removed from the buffer)
-- the decremented value of Rc is stored back to m[count]

If a context switch occurs between these three steps, the increment and the decrement interleave and count ends up with an inconsistent value.

The solution of Producer-Consumer Problem using Semaphore

The above problems of the producer and consumer, which occur due to context switches and produce inconsistent results, can be solved with the help of semaphores.

To solve the race condition above, we are going to use a Binary Semaphore and a Counting Semaphore.

Binary Semaphore: In a binary semaphore, only two processes can compete to enter its CRITICAL SECTION at any point in time, and the condition of mutual exclusion is preserved.

Counting Semaphore: In a counting semaphore, more than two processes can compete to enter its CRITICAL SECTION at any point in time, and the condition of mutual exclusion is also preserved.

Semaphore: as before, a semaphore S is an integer variable that, apart from initialization, is accessed only through the two standard atomic operations wait and signal, defined earlier.
Let's see the code for the solution of the producer and consumer problem using semaphores (both binary and counting):

Producer Code- solution

void producer(void)
{
    wait(empty);               // wait for an empty slot in the buffer
    wait(S);                   // enter the critical section
    produce_item(itemP);
    buffer[in] = itemP;
    in = (in + 1) % n;
    signal(S);                 // leave the critical section
    signal(full);              // one more slot is now full
}

Consumer Code- solution

void consumer(void)
{
    wait(full);                // wait for a full slot in the buffer
    wait(S);                   // enter the critical section
    itemC = buffer[out];
    out = (out + 1) % n;
    signal(S);                 // leave the critical section
    signal(empty);             // one more slot is now empty
}

Let's understand the above solution of the producer and consumer code.

Before starting the explanation, first understand the few terms used in the above code:

1. "in", used in the producer code, represents the next empty buffer slot.
2. "out", used in the consumer code, represents the first filled buffer slot.
3. "empty" is a counting semaphore which keeps a count of the number of empty buffer slots.
4. "full" is a counting semaphore which keeps a count of the number of full buffer slots.
5. "S" is a binary semaphore protecting the BUFFER.

Semaphores used in the producer code:

1. wait(empty) decreases the value of the counting semaphore variable empty by 1; that is, when the producer produces an element, the number of empty slots in the buffer decreases by one. If the buffer is full, i.e. the value of the counting semaphore "empty" is 0, then wait(empty) traps the process (as per the definition of wait) and does not allow it to go further.

2. wait(S) decreases the binary semaphore variable S to 0 so that no other process willing to enter its critical section is allowed.

3. signal(S) increases the binary semaphore variable S to 1 so that other processes willing to enter their critical section can now be allowed.

4. signal(full) increases the counting semaphore variable full by 1, as adding an item into the buffer occupies one more slot and the variable full must be updated.

Semaphores used in the consumer code:

1. wait(full) decreases the value of the counting semaphore variable full by 1; that is, when the consumer consumes an element, the number of full slots in the buffer decreases by one. If the buffer is empty, i.e. the value of the counting semaphore full is 0, then wait(full) traps the process (as per the definition of wait) and does not allow it to go further.

2. wait(S) decreases the binary semaphore variable S to 0 so that no other process willing to enter its critical section is allowed.

3. signal(S) increases the binary semaphore variable S to 1 so that other processes willing to enter their critical section can now be allowed.

4. signal(empty) increases the counting semaphore variable empty by 1, as removing an item from the buffer vacates one slot and the variable empty must be updated accordingly.

Message Passing IPC:

In this method, processes communicate with each other without using any kind of shared memory. If
two processes p1 and p2 want to communicate with each other, they proceed as follows:

 Establish a communication link (if a link already exists, no need to establish it again.)
 Start exchanging messages using basic primitives.
We need at least two primitives:
– send(message, destination) or send(message)
– receive(message, host) or receive(message)
The message size can be fixed or variable. A fixed size is easy for the OS designer but complicated for the programmer, while a variable size is easy for the programmer but complicated for the OS designer. A standard message has two parts: a header and a body. The header stores the message type, destination id, source id, message length, and control information. The control information covers things like what to do if the process runs out of buffer space, a sequence number, and a priority. Generally, messages are sent in FIFO style.
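
A possible C layout for such a fixed-size message (all field names are illustrative):

struct message_header {
    int type;             /* message type                    */
    int dest_id;          /* destination process id          */
    int src_id;           /* source process id               */
    int length;           /* length of the body in bytes     */
    int seq_no;           /* control info: sequence number   */
    int priority;         /* control info: priority          */
};

struct message {
    struct message_header header;
    char body[256];       /* fixed-size body: easy for the OS designer */
};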

Message Passing through Communication Link.


Direct and Indirect Communication links
Now we will discuss the methods of implementing communication links. While implementing a link, some questions need to be kept in mind:

1. How are links established?
2. Can a link be associated with more than two processes?
3. How many links can there be between every pair of communicating processes?
4. What is the capacity of a link? Is the size of a message that the link can accommodate fixed or variable?
5. Is a link unidirectional or bi-directional?

A link has some capacity that determines the number of messages that can reside in it temporarily, for which every link has an associated queue of zero, bounded, or unbounded capacity. With zero capacity, the sender waits until the receiver informs the sender that it has received the message. In non-zero capacity cases, a process does not know whether a message has been received after the send operation; for this, the sender must communicate with the receiver explicitly. Implementation of the link depends on the situation: it can be either a direct communication link or an indirect communication link.
Direct communication links are implemented when the processes use a specific process identifier for the communication, but it is hard to identify the sender ahead of time (for example, a print server).
Indirect communication is done via a shared mailbox (port), which consists of a queue of messages. The sender keeps messages in the mailbox and the receiver picks them up.

Message Passing through Exchanging the Messages.


Synchronous and Asynchronous Message Passing:
A blocked process is one that is waiting for some event, such as a resource becoming available or the completion of an I/O operation. IPC is possible between processes on the same computer as well as between processes running on different computers, i.e. in a networked/distributed system. In both cases, a process may or may not be blocked while sending a message or attempting to receive one, so message passing may be blocking or non-blocking. Blocking is considered synchronous: a blocking send means the sender is blocked until the message is received by the receiver, and a blocking receive has the receiver block until a message is available. Non-blocking is considered asynchronous: a non-blocking send has the sender send the message and continue, and a non-blocking receive has the receiver receive either a valid message or null. On careful analysis, we can conclude that it is more natural for a sender to be non-blocking after message passing, as it may need to send messages to different processes; however, the sender expects an acknowledgment from the receiver in case a send fails. Similarly, it is more natural for a receiver to block after issuing the receive, as the information from the received message may be needed for further execution. At the same time, if sends keep failing, the receiver would have to wait indefinitely. That is why we also consider the other combinations of message passing. There are basically three preferred combinations:

 Blocking send and blocking receive


 Non-blocking send and Non-blocking receive
 Non-blocking send and Blocking receive (Mostly used)
In Direct message passing, the process that wants to communicate must explicitly name the recipient or sender of the communication.
e.g. send(p1, message) means send the message to p1.
Similarly, receive(p2, message) means receive the message from p2.
In this method of communication, the communication link is established automatically; it can be either unidirectional or bidirectional, but one link is used between one pair of sender and receiver, and a pair of sender and receiver should not possess more than one link. Symmetry or asymmetry between sending and receiving can also be implemented: either both processes name each other for sending and receiving messages, or only the sender names the receiver for sending the message and the receiver does not need to name the sender for receiving it. The problem with this method of communication is that if the name of a process changes, the method will not work.
In Indirect message passing, processes use mailboxes (also referred to as ports) for sending and receiving messages. Each mailbox has a unique id, and processes can communicate only if they share a mailbox. A link is established only if processes share a common mailbox, and a single link can be associated with many processes. Each pair of processes can share several communication links, and these links may be unidirectional or bi-directional. Suppose two processes want to communicate through indirect message passing; the required operations are: create a mailbox, use this mailbox for sending and receiving messages, then destroy the mailbox. The standard primitives used are send(A, message), which means send the message to mailbox A; the primitive for receiving works the same way, e.g. receive(A, message). There is a problem with this mailbox implementation: suppose more than two processes share the same mailbox and process p1 sends a message to the mailbox; which process will be the receiver? This can be solved by enforcing that only two processes can share a single mailbox, by enforcing that only one process is allowed to execute a receive at a given time, or by selecting any process randomly and notifying the sender about the receiver. A mailbox can be made private to a single sender/receiver pair or shared between multiple sender/receiver pairs. A port is an implementation of such a mailbox that can have multiple senders and a single receiver; it is used in client/server applications (in this case the server is the receiver). The port is owned by the receiving process, is created by the OS on the request of the receiver process, and can be destroyed either on request of the same receiver process or when the receiver terminates. Enforcing that only one process is allowed to execute the receive can be done using the concept of mutual exclusion: a mutex mailbox is created which is shared by n processes; the sender is non-blocking and sends the message; the first process that executes the receive enters the critical section, and all other processes block and wait.
Now, let’s discuss the Producer-Consumer problem using the message passing concept. The producer places items (inside messages) in the mailbox, and the consumer can consume an item when at least one message is present in the mailbox.
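
A minimal sketch of this mailbox-style producer-consumer using POSIX message queues; the queue name /items is illustrative, and since mq_receive blocks until a message is available, the consumer naturally waits for the producer:

#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>

#define QUEUE "/items"       /* hypothetical mailbox name */

void producer(void)
{
    mqd_t mq = mq_open(QUEUE, O_CREAT | O_WRONLY, 0600, NULL);
    const char *item = "item-1";
    mq_send(mq, item, strlen(item) + 1, 0);   /* place the item in the mailbox */
    mq_close(mq);
}

void consumer(void)
{
    char buf[8192];          /* must be at least the queue's message size */
    mqd_t mq = mq_open(QUEUE, O_RDONLY);
    mq_receive(mq, buf, sizeof(buf), NULL);   /* blocks until a message arrives */
    printf("consumed %s\n", buf);
    mq_close(mq);
}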
