Unit3OS
Process management:
Process management can help organizations improve their operational efficiency, reduce costs, increase
customer satisfaction, and maintain compliance with regulatory requirements. It involves analyzing the
performance of existing processes, identifying bottlenecks, and making changes to optimize the process
flow. Process management includes various tools and techniques such as process mapping, process
analysis, process improvement, process automation, and process control. By applying these tools and
techniques, organizations can streamline their processes, eliminate waste, and improve productivity.
Overall, process management is a critical aspect of modern business operations and can help
organizations achieve their goals and stay competitive in today’s rapidly changing marketplace.
If the operating system supports multiple users, then the services in this category are very important. The operating system has to keep track of all the created processes, schedule them, and dispatch them one after another, yet each user should feel that he has full control of the CPU.
Some of the system calls in this category are as follows.
1. Create a child process identical to the parent
2. Terminate a process
3. Wait for a child process to terminate
4. Change the priority of the process
5. Block the process
6. Ready the process
7. Dispatch a process
8. Suspend a process
9. Resume a process
10. Delay a process
11. Fork a process
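The create/wait/terminate calls in the list above can be sketched with Python's multiprocessing module, whose Process objects roughly mirror them. This is an illustrative sketch, not real system-call syntax; the names child_task and run_demo are ours, and it assumes a Unix-style fork start method.

```python
import multiprocessing

def child_task(q):
    # The child process runs this and reports back to the parent.
    q.put("child done")

def run_demo():
    q = multiprocessing.Queue()
    p = multiprocessing.Process(target=child_task, args=(q,))  # create a child process
    p.start()
    result = q.get()     # receive the child's message
    p.join()             # wait for the child process to terminate
    return result, p.exitcode

if __name__ == "__main__":
    print(run_demo())
```

The parent blocks in join() exactly as a process blocks in the "wait for a child process to terminate" call; p.terminate() would correspond to the "terminate a process" call.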
Process Synchronization:
Concurrency Conditions:
Concurrency is the execution of multiple instruction sequences at the same time. It happens in the operating system when several process threads run in parallel. The running process threads communicate with each other through shared memory or message passing. Because concurrency involves sharing of resources, it can lead to problems such as deadlocks and resource starvation.
Concurrency underlies techniques such as coordinating the execution of processes, memory allocation, and execution scheduling for maximizing throughput.
Principles of Concurrency :
Both interleaved and overlapped processes can be viewed as examples of concurrent processes, and both present the same problems.
The relative speed of execution cannot be predicted. It depends on the following:
The activities of other processes
The way operating system handles interrupts
The scheduling policies of the operating system
Problems in Concurrency :
Sharing global resources –
Sharing global resources safely is difficult. If two processes both make use of a global
variable and both perform reads and writes on that variable, then the order in which the various reads
and writes are executed is critical.
Optimal allocation of resources –
It is difficult for the operating system to manage the allocation of resources optimally.
Locating programming errors –
It is very difficult to locate a programming error because reports are usually not reproducible.
Locking the channel –
It may be inefficient for the operating system to simply lock the channel and prevent its use by
other processes.
Advantages of Concurrency :
Running of multiple applications –
It enables multiple applications to run at the same time.
Better resource utilization –
It ensures that resources left unused by one application can be used by other
applications.
Better average response time –
Without concurrency, each application has to run to completion before the next one can
start; with concurrency, short tasks need not wait behind long ones, improving the average response time.
Better performance –
It enables better overall performance. When one application uses only the
processor and another application uses only the disk drive, the time to run both
applications concurrently to completion is shorter than the time to run each application
consecutively.
Drawbacks of Concurrency :
It is required to protect multiple applications from one another.
It is required to coordinate multiple applications through additional mechanisms.
Additional performance overheads and complexities in operating systems are required for
switching among applications.
Sometimes running too many applications concurrently leads to severely degraded
performance.
Issues of Concurrency :
Non-atomic –
Operations that are non-atomic but interruptible by multiple processes can cause problems.
Race conditions –
A race condition occurs when the outcome depends on which of several processes gets to a point
first.
Blocking –
Processes can block waiting for resources. A process could be blocked for a long period of time
waiting for input from a terminal; if the process is required to periodically update some data,
this is very undesirable.
Starvation –
It occurs when a process never obtains the service it needs in order to make progress.
Deadlock –
It occurs when two processes are each blocked waiting for a resource held by the other, so neither can proceed.
Race Condition:
When more than one process executes the same code or accesses the same memory or shared variable, there is a possibility that the output or the value of the shared variable will be wrong; with all the processes racing to have "their" result win, this condition is known as a race condition. When several processes access and manipulate the same data concurrently, the outcome depends on the particular order in which the accesses take place. A race condition is a situation that may occur inside a critical section: the result of multiple threads executing in the critical section differs according to the order in which the threads execute. Race conditions in critical sections can be avoided if the critical section is treated as an atomic instruction. Proper thread synchronization using locks or atomic variables can also prevent race conditions.
Example (initially shared = 10, and both processes have already read it into their local variables, so X = 10 and Y = 10):

    Process P1        Process P2
    X++               Y--
    sleep(1)          sleep(1)
    shared = X        shared = Y

Note: If Process P1 and Process P2 ran one after the other, the final value of the common variable shared would be 10 (Process P1 increments it to 11, then Process P2 decrements it back to 10). But because both processes read shared before either wrote it back, the final value is 11 or 9, depending on whose write lands last. We get this undesired value due to a lack of proper synchronization.
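The lost update above disappears once the read-modify-write is made atomic. A minimal sketch with Python threads and a lock (the helper names increment and run are ours; in CPython the GIL already interleaves bytecodes, so it is the lock that provides the actual guarantee):

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        with lock:          # treat the read-modify-write as one atomic step
            counter += 1

def run(n_threads, n_iters):
    global counter
    counter = 0
    threads = [threading.Thread(target=increment, args=(n_iters,))
               for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter          # with the lock held, always n_threads * n_iters

if __name__ == "__main__":
    print(run(4, 50000))
```

Without the `with lock:` line, two threads could both read the old value of counter and one increment would be lost, exactly as in the shared-variable example above.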
Software and hardware solutions
Peterson’s Solution:
Peterson’s Solution is a classical software-based solution to the critical section problem. In Peterson’s
solution, we have two shared variables:
solution, we have two shared variables:
boolean flag[i]: Initialized to FALSE, initially no one is interested in entering the critical section
int turn: Indicates whose turn it is to enter the critical section.
Peterson’s Solution preserves all three conditions:
Mutual Exclusion is assured as only one process can access the critical section at any time.
Progress is also assured, as a process outside the critical section does not block other processes from
entering the critical section.
Bounded Waiting is preserved as every process gets a fair chance.
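The two shared variables above can be turned into a runnable two-thread sketch. This is our illustration in Python (worker and run are our names); its correctness here leans on CPython's GIL giving sequentially consistent loads and stores — a real implementation needs atomic operations or memory fences.

```python
import sys
import threading

sys.setswitchinterval(1e-4)   # switch threads often so the busy-wait resolves quickly

flag = [False, False]   # flag[i]: process i wants to enter
turn = 0                # whose turn it is to yield
counter = 0

def worker(i, n):
    global turn, counter
    other = 1 - i
    for _ in range(n):
        flag[i] = True
        turn = other                          # politely let the other go first
        while flag[other] and turn == other:
            pass                              # busy-wait outside the critical section
        counter += 1                          # critical section
        flag[i] = False                       # leave the critical section

def run(n):
    global counter
    counter = 0
    t0 = threading.Thread(target=worker, args=(0, n))
    t1 = threading.Thread(target=worker, args=(1, n))
    t0.start(); t1.start()
    t0.join(); t1.join()
    return counter

if __name__ == "__main__":
    print(run(500))
```

Because each thread sets turn to the *other* thread's index, at most one can pass the while loop at a time (mutual exclusion), and a thread that just exited cannot starve the other (bounded waiting).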
Semaphores:
A semaphore is a signaling mechanism: a thread that is waiting on a semaphore can be signaled by
another thread. This is different from a mutex, which can be released only by the thread that
acquired it.
A semaphore uses two atomic operations, wait and signal for process synchronization.
A Semaphore is an integer variable, which can be accessed only through two operations wait() and signal().
There are two types of semaphores: Binary Semaphores and Counting Semaphores.
Binary Semaphores: They can only be either 0 or 1. They are also known as mutex locks, as the locks
can provide mutual exclusion. All the processes can share the same mutex semaphore, which is initialized
to 1. A process has to wait until the semaphore's value is 1; it then sets the value to 0 and enters its
critical section. When it completes its critical section, it resets the value of the mutex semaphore to 1,
and some other process can enter its critical section.
Counting Semaphores: They can have any value and are not restricted to a certain domain. They can be
used to control access to a resource that has a limitation on the number of simultaneous accesses. The
semaphore can be initialized to the number of instances of the resource. Whenever a process wants to
use that resource, it checks if the number of remaining instances is more than zero, i.e., the process has
an instance available. Then, the process can enter its critical section thereby decreasing the value of the
counting semaphore by 1. After the process is over with the use of the instance of the resource, it can
leave the critical section thereby adding 1 to the number of available instances of the resource.
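The counting-semaphore scheme just described can be sketched with Python's threading.Semaphore initialized to the number of resource instances. The pool of 3 "instances", 10 competing workers, and the helper names are our illustration:

```python
import threading
import time

def run_pool(n_instances=3, n_workers=10):
    sem = threading.Semaphore(n_instances)  # one permit per resource instance
    lock = threading.Lock()                 # protects the two counters below
    active = 0
    peak = 0

    def worker():
        nonlocal active, peak
        sem.acquire()               # wait(): blocks when no instance is free
        with lock:
            active += 1
            peak = max(peak, active)
        time.sleep(0.01)            # pretend to use the resource instance
        with lock:
            active -= 1
        sem.release()               # signal(): return the instance

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return peak                     # never exceeds n_instances

if __name__ == "__main__":
    print(run_pool())
```

However many workers compete, the semaphore caps the number of simultaneous users of the resource at the value it was initialized to.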
The dining philosophers problem demonstrates a large class of concurrency-control problems; hence it is a classic
synchronization problem.
Dining Philosophers Problem- Let's understand the Dining Philosophers Problem with the below code,
we have used fig 1 as a reference to make you understand the problem exactly. The five Philosophers are
represented as P0, P1, P2, P3, and P4 and five chopsticks by C0, C1, C2, C3, and C4.
void Philosopher()
{
    while(1)
    {
        take_chopstick[i];
        take_chopstick[ (i+1) % 5 ];
        . .
        .  EATING THE NOODLE
        .
        put_chopstick[i];
        put_chopstick[ (i+1) % 5 ];
        .
        .  THINKING
    }
}
Suppose Philosopher P0 is eating and holds chopsticks C0 and C1. Now suppose Philosopher P1 wants to eat: it enters the Philosopher() function and executes take_chopstick[i], attempting to pick up chopstick C1, and then take_chopstick[(i+1) % 5], attempting to pick up chopstick C2 (since i = 1, (1 + 1) % 5 = 2).
But chopstick C1 is not actually available, as it has already been taken by philosopher P0; hence the
above code generates problems and produces a race condition.
We use a semaphore to represent a chopstick and this truly acts as a solution of the Dining Philosophers
Problem. Wait and Signal operations will be used for the solution of the Dining Philosophers Problem, for
picking a chopstick wait operation can be executed while for releasing a chopstick signal semaphore can
be executed.
Semaphore: A semaphore is an integer variable in S, that apart from initialization is accessed by only two
standard atomic operations - wait and signal, whose definitions are as follows:
wait( S )
{
    while( S <= 0 )
        ;        // busy-wait until S becomes positive
    S--;
}

signal( S )
{
    S++;
}

From the above definition of wait, it is clear that if the value of S <= 0 then the process keeps looping
(because of the semicolon after the while condition) until some other process makes S positive, whereas the job of
signal is to increment the value of S.
The structure of the chopsticks is an array of semaphores, represented as shown below -
semaphore C[5];
Initially, each element of the semaphore array, C0, C1, C2, C3, and C4, is initialized to 1, as the chopsticks are
on the table and not picked up by any of the philosophers.
Let's modify the above code of the Dining Philosopher Problem by using semaphore operations wait and
signal, the desired code looks like
void Philosopher()
{
    while(1)
    {
        wait( C[i] );            // pick up the left chopstick
        wait( C[(i+1) % 5] );    // pick up the right chopstick
        . .
        .  EATING THE NOODLE
        .
        signal( C[i] );          // put down the left chopstick
        signal( C[(i+1) % 5] );  // put down the right chopstick
        .
        .  THINKING
    }
}
In the above code, wait operations are first performed on C[i] and C[(i+1) % 5], showing that philosopher i
has picked up the chopsticks to his left and right. The eating function is performed after that, and the signal
operations put the chopsticks back on the table.
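Note that the semaphore version above can still deadlock if all five philosophers grab their left chopstick at the same instant. A common fix is to break the circular wait by having one philosopher pick up the chopsticks in the opposite order; a runnable Python sketch (the function names dine and philosopher are ours, and threading.Semaphore plays the role of the chopstick semaphores C[i]):

```python
import threading

def dine(n=5, meals=20):
    chopsticks = [threading.Semaphore(1) for _ in range(n)]  # C[0..n-1], all start at 1
    eaten = [0] * n

    def philosopher(i):
        first, second = i, (i + 1) % n
        if i == n - 1:
            first, second = second, first   # the last philosopher reverses the order:
                                            # all acquisitions now follow increasing
                                            # index, so circular wait is impossible
        for _ in range(meals):
            chopsticks[first].acquire()     # wait(C[first])
            chopsticks[second].acquire()    # wait(C[second])
            eaten[i] += 1                   # EATING THE NOODLE
            chopsticks[second].release()    # signal(C[second])
            chopsticks[first].release()     # signal(C[first])
            # THINKING

    threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(n)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return eaten

if __name__ == "__main__":
    print(dine())
```

Because every philosopher acquires chopsticks in increasing index order, no cycle of waiting philosophers can form, so every philosopher eventually eats all of their meals.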
The readers-writers problem is a classical problem of process synchronization. It relates to a data set, such
as a file, that is shared between more than one process at a time. Among these processes, some are
Readers, which can only read the data set and do not perform any updates, and some are Writers, which can both
read and write the data set.
The readers-writers problem is used for managing synchronization among the various reader and writer
processes so that no inconsistency is generated in the data set.
Let's understand with an example: if two or more readers want to access the file at the same
point in time, there is no problem. However, when two writers, or one reader and one writer, want to
access the file at the same point in time, problems may occur. Hence the task is to design the code so that
if one reader is reading, no writer is allowed to update at the same time; if one writer is writing, no
reader is allowed to read the file at that time; and if one writer is updating the file, no other writer
is allowed to update it at the same time. However, multiple readers can access the object at the same time.
Let us understand the possibilities of reading and writing with the table given below:
TABLE 1
    Process 1    Process 2    Allowed?
    Writing      Writing      Not allowed
    Reading      Writing      Not allowed
    Writing      Reading      Not allowed
    Reading      Reading      Allowed
The solution of readers and writers can be implemented using binary semaphores.
We use two binary semaphores, "write" and "mutex", with the wait and signal operations as defined earlier.
The below code provides the solution of the reader-writer problem. The reader process code is given as
follows -

void reader(void)
{
    wait(mutex);
    readcount++;
    if (readcount == 1)
        wait(write);
    signal(mutex);

    READ THE FILE

    wait(mutex);
    readcount--;
    if (readcount == 0)
        signal(write);
    signal(mutex);
}

In the above reader code, mutex and write are semaphores that have an initial value of 1, whereas the
readcount variable has an initial value of 0. Both mutex and write are common to the reader and writer
process code; semaphore mutex ensures mutual exclusion over readcount, and semaphore write handles the
writing mechanism.
The readcount variable denotes the number of readers accessing the file concurrently. The moment
readcount becomes 1, a wait operation on the write semaphore decreases its value by one. This means
that a writer is no longer allowed to access the file. On completion of the read operation, readcount is
decremented by one; when readcount becomes 0, a signal operation on write permits a writer to access the file.
The writer process code is -

wait(write);
WRITE INTO THE FILE
signal(write);
If a writer wishes to access the file, wait operation is performed on write semaphore, which
decrements write to 0 and no other writer can access the file. On completion of the writing job by the
writer who was accessing the file, the signal operation is performed on write.
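The reader and writer protocols above can be wrapped into a small runnable Python class. The class name RWLock and the demo function are ours; threading.Semaphore plays the role of the write and mutex semaphores, and the demo has writers keep two shared variables equal while readers check that they never see them torn apart:

```python
import threading

class RWLock:
    """First readers-writers lock, following the semaphore scheme above."""
    def __init__(self):
        self.mutex = threading.Semaphore(1)   # protects readcount
        self.write = threading.Semaphore(1)   # held while writing, or by the readers
        self.readcount = 0

    def start_read(self):
        self.mutex.acquire()
        self.readcount += 1
        if self.readcount == 1:
            self.write.acquire()   # the first reader locks out writers
        self.mutex.release()

    def end_read(self):
        self.mutex.acquire()
        self.readcount -= 1
        if self.readcount == 0:
            self.write.release()   # the last reader lets writers in again
        self.mutex.release()

    def start_write(self):
        self.write.acquire()

    def end_write(self):
        self.write.release()

def demo(writers=2, readers=4, iters=100):
    rw = RWLock()
    shared = [0, 0]          # writers always keep these two equal
    bad_reads = 0

    def writer():
        for _ in range(iters):
            rw.start_write()
            shared[0] += 1
            shared[1] += 1
            rw.end_write()

    def reader():
        nonlocal bad_reads
        for _ in range(iters):
            rw.start_read()
            if shared[0] != shared[1]:   # would mean a half-finished write was seen
                bad_reads += 1
            rw.end_read()

    ts = [threading.Thread(target=writer) for _ in range(writers)]
    ts += [threading.Thread(target=reader) for _ in range(readers)]
    for t in ts:
        t.start()
    for t in ts:
        t.join()
    return bad_reads, shared

if __name__ == "__main__":
    print(demo())
```

Readers may overlap freely, yet no reader ever observes a writer's half-finished update, matching CASE 2 and CASE 3 below.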
CASE 1: WRITING - WRITING → NOT ALLOWED. That is, when two or more processes are
willing to write, it is not allowed. Let us check whether our code works accordingly.
1. Explanation :
2. The initial value of semaphore write = 1
3. Suppose two processes P0 and P1 want to write; let P0 enter the writer code first. The moment P0 enters,
4. Wait( write ); will decrease semaphore write by 1, now write = 0, and P0 continues to WRITE INTO THE FILE
5. Now if P1 also wants to write, it executes wait( write ); since write is already 0, P1 keeps looping in wait and is not allowed to proceed.
6. Only when P0 finishes writing and executes signal( write ); does write become 1 again, allowing P1 in.
7. This proves that two processes are not allowed to write at the same time.
CASE 2: READING - WRITING → NOT ALLOWED. That is, when one or more processes are
reading the file, writing by another process is not allowed. Let us check whether our code works
accordingly.
1. Explanation:
2. Initial value of semaphore mutex = 1 and variable readcount = 0
3. Suppose two processes P0 and P1 are in a system, P0 wants to read while P1 wants to write, P0 enter first
into the reader code, the moment P0 enters
4. Wait( mutex ); will decrease semaphore mutex by 1, now mutex = 0
5. Increment readcount by 1, now readcount = 1, next
6. if (readcount == 1)// evaluates to TRUE
7. {
8. wait (write); // decrement write by 1, i.e. write = 0 (which
9. clearly proves that if one or more
10. readers are reading then no writer will be
11. allowed).
12. }
13.
14. signal(mutex); // will increase semaphore mutex by 1, now mutex = 1, i.e. other readers are allowed to enter.
15.
16. And the reader continues to --READ THE FILE--
17.
18. Suppose now any writer wants to enter into its code then:
19.
20. As the first reader has executed wait (write);, the value of write is 0; therefore the wait(write); of the writer code will keep looping and no writer will be allowed.
21. This proves that, if one process is reading, no other process is allowed to write.
22. Now suppose P0 wants to stop the reading and wanted to exit then
23. Following sequence of instructions will take place:
24. wait(mutex); // decrease mutex by 1, i.e. mutex = 0
25. readcount --; // readcount = 0, i.e. no one is currently reading
26. if (readcount == 0) // evaluates TRUE
27. {
28. signal (write); // increase write by one, i.e. write = 1
29. }
30. signal(mutex);// increase mutex by one, i.e. mutex = 1
31.
32. Now if again any writer wants to write, it can do it now, since write > 0
CASE 3: WRITING - READING → NOT ALLOWED. That is, when one process is writing into the
file, reading by another process is not allowed. Let us check whether our code works accordingly.
1. Explanation:
2. The initial value of semaphore write = 1
3. Suppose two processes P0 and P1 are in a system, P0 wants to write while P1 wants to read, P0 enter first
into the writer code, The moment P0 enters
4. Wait( write ); will decrease semaphore write by 1, now write = 0
5. And continue WRITE INTO THE FILE
6. Now suppose P1 wants to read the same time (will it be allowed?) let's see.
7. P1 enters reader's code
8. Initial value of semaphore mutex = 1 and variable readcount = 0
9. Wait( mutex ); will decrease semaphore mutex by 1, now mutex = 0
10. Increment readcount by 1, now readcount = 1, next
11. if (readcount == 1)// evaluates to TRUE
12. {
13. wait (write); // since the value of write is already 0, P1
14. will keep looping here and will not be
15. allowed to proceed further (which clearly
16. proves that if one writer is writing then no
17. reader will be allowed).
18. }
19.
20. This proves that, if one process is writing, no other process is allowed to read.
21.
22. The moment the writer stops writing and is willing to exit, it will execute:
23. signal( write ); will increase semaphore write by 1, now write = 1
24. If P1 now wants to read, it can, since semaphore write > 0
CASE 4: READING - READING → ALLOWED. That is, when one process is reading the file and other
processes are also willing to read, they are all allowed, i.e. reading - reading is not mutually
exclusive. Let us check whether our code works accordingly.
1. Explanation :
2. Initial value of semaphore mutex = 1 and variable readcount = 0
3. Suppose three processes P0, P1 and P2 are in a system, all the three processes P0, P1, and P2 want to read
, let P0 enter first into the reader code, the moment P0 enters
4. Wait( mutex ); will decrease semaphore mutex by 1, now mutex = 0
5. Increment readcount by 1, now readcount = 1, next
6. if (readcount == 1)// evaluates to TRUE
7. {
8. wait (write); // decrement write by 1, i.e. write = 0 (which
9. clearly proves that if one or more
10. readers are reading then no writer will be
11. allowed).
12. }
13.
14. signal(mutex); // will increase semaphore mutex by 1, now mutex = 1, i.e. other readers are allowed to enter.
15.
16. And P0 continues to --READ THE FILE--
17.
18. →Now P1 wants to enter the reader code
19. current value of semaphore mutex = 1 and variable readcount = 1
20. let P1 enter into the reader code, the moment P1 enters
21. Wait( mutex ); will decrease semaphore mutex by 1, now mutex = 0
22. Increment readcount by 1, now readcount = 2, next
23. if (readcount == 1)// eval. to False, it will not enter if block
24.
25. signal(mutex); // will increase semaphore mutex by 1, now mutex = 1, i.e. other readers are allowed to enter.
26.
27. Now P0 and P1 continue to --READ THE FILE--
28.
29. →Now P2 wants to enter the reader code
30. current value of semaphore mutex = 1 and variable readcount = 2
31. let P2 enter into the reader code, The moment P2 enters
32. Wait( mutex ); will decrease semaphore mutex by 1, now mutex = 0
33. Increment readcount by 1, now readcount = 3, next
34. if (readcount == 1)// eval. to False, it will not enter if block
35.
36. signal(mutex); // will increase semaphore mutex by 1, now mutex = 1, i.e. other readers are allowed to enter.
37.
38. Now P0, P1, and P2 continue to --READ THE FILE--
39.
40.
41. Suppose now any writer wants to enter into its code then:
42.
43. As the first reader P0 has executed wait (write);, the value of write is 0; therefore the wait(write); of the writer code will keep looping and no writer will be allowed.
44.
45. Now suppose P0 wants to come out of system( stop reading) then
46. wait(mutex); //will decrease semaphore mutex by 1, now mutex = 0
47. readcount --; // on every exit of reader decrement readcount by
48. one i.e. readcount = 2
49.
50. if (readcount == 0)// eval. to FALSE it will not enter if block
51.
52. signal(mutex); // will increase semaphore mutex by 1, now mutex = 1 i.e. other readers are allowed to exit
53.
54.
55. → Now suppose P1 wants to come out of system (stop reading) then
56. wait(mutex); //will decrease semaphore mutex by 1, now mutex = 0
57. readcount --; // on every exit of reader decrement readcount by
58. one i.e. readcount = 1
59.
60. if (readcount == 0)// eval. to FALSE it will not enter if block
61.
62. signal(mutex); // will increase semaphore mutex by 1, now mutex = 1 i.e. other readers are allowed to exit
63.
64. →Now suppose P2 (last process) wants to come out of system (stop reading) then
65. wait(mutex); //will decrease semaphore mutex by 1, now mutex = 0
66. readcount --; // on every exit of reader decrement readcount by
67. one i.e. readcount = 0
68.
69. if (readcount == 0)// eval. to TRUE it will enter into if block
70. {
71. signal (write); // will increment semaphore write by one, i.e.
72. now write = 1, since P2 was the last process
73. which was reading, since now it is going out,
74. so by making write = 1 it is allowing the writer
75. to write now.
76. }
77.
78. signal(mutex); // will increase semaphore mutex by 1, now mutex = 1
79.
80.
81. The above explanation proves that if one or more processes are willing to read simultaneously, they are all allowed, and only the last reader to leave re-enables the writers.
Processes can communicate with each other in two ways:
1. Shared Memory
2. Message passing
Figure 1 below shows a basic structure of communication between processes via the shared memory
method and via the message passing method.
An operating system can implement both methods of communication. First, we will discuss the shared
memory methods of communication and then message passing. Communication between processes
using shared memory requires processes to share some variable, and it completely depends on how the
programmer will implement it. One way of communication using shared memory can be imagined like
this: Suppose process1 and process2 are executing simultaneously, and they share some resources or
use some information from another process. Process1 generates information about certain computations
or resources being used and keeps it as a record in shared memory. When process2 needs to use the
shared information, it will check in the record stored in shared memory and take note of the information
generated by process1 and act accordingly. Processes can use shared memory for extracting
information as a record from another process as well as for delivering any specific information to other
processes.
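A minimal sketch of the shared-memory method using Python's multiprocessing.Value (the function names are ours, and it assumes a Unix-style fork start method): process1 records a result in shared memory, and the parent, standing in for process2, reads the record.

```python
import multiprocessing

def process1(val):
    with val.get_lock():      # synchronize access to the shared record
        val.value = 42        # record the computed information

def shared_memory_demo():
    val = multiprocessing.Value('i', 0)   # an int living in shared memory
    p = multiprocessing.Process(target=process1, args=(val,))
    p.start()
    p.join()
    return val.value          # the other process reads the record

if __name__ == "__main__":
    print(shared_memory_demo())
```

The key point is that val lives in memory visible to both processes, so no copy of the data is sent between them.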
The task of the Producer is to produce an item, put it into the memory buffer, and start producing
items again, whereas the task of the Consumer is to consume an item from the memory buffer.
Below are a few points that are considered as the problems that occur in Producer-Consumer:
o The producer should produce data only when the buffer is not full. If the buffer is found to be full, the
producer is not allowed to store any data in the memory buffer.
o Data can be consumed by the consumer if and only if the memory buffer is not empty. If the buffer is
found to be empty, the consumer is not allowed to take any data from the
memory buffer.
o The producer and the consumer should not be allowed to access the memory buffer at the same time.
The above problems of Producer and Consumer, which occur due to context switches producing
inconsistent results, can be solved with the help of semaphores.
To solve the race condition described above, we are going to use a Binary Semaphore and a
Counting Semaphore.
Binary Semaphore: A binary semaphore takes only the values 0 and 1, so at most one process can be inside its CRITICAL
SECTION at any point in time; the condition of mutual exclusion is thereby preserved.
Counting Semaphore: A counting semaphore can take any non-negative value; more than two processes can compete to enter
the critical region at any point in time, and the binary semaphore is used alongside it so that the condition of mutual
exclusion is also preserved.
Semaphore: A semaphore is an integer variable S that, apart from initialization, is accessed only through the two standard atomic operations wait and signal, as defined earlier.
Let's see the code as a solution of producer and consumer problem using semaphore ( Both Binary and
Counting Semaphore):
void consumer(void)
{
    wait( full );
    wait( S );
    itemC = buffer[ out ];
    out = ( out + 1 ) mod n;
    signal( S );
    signal( empty );
}
Before Starting an explanation of code, first, understand the few terms used in the above code:
6. In the producer, wait(empty) decreases the value of the counting semaphore variable empty by 1; that is, when the
producer produces an element, the number of empty spaces in the buffer decreases by one. If the buffer is full,
i.e. the value of the counting semaphore variable "empty" is 0, then
wait(empty); blocks the producer (as per the definition of wait) and does not allow it to go further.
7. wait(S) decreases the binary semaphore variable S to 0 so that no other process which is willing to
enter into its critical section is allowed.
8. signal(S) increases the binary semaphore variable S to 1 so that other processes that are willing to enter
their critical sections can now be allowed.
9. signal(full) increases the counting semaphore variable full by 1, as on adding the item into the buffer,
one space is occupied in the buffer and the variable full must be updated.
10. In the consumer, wait(full) decreases the value of the counting semaphore variable full by 1; that is, when the
consumer consumes an element, the number of filled slots in the buffer decreases by one. If
the buffer is empty, i.e. the value of the counting semaphore variable full is 0, then
wait(full); blocks the consumer (as per the definition of wait) and does not allow it to go further.
11. wait(S) decreases the binary semaphore variable S to 0 so that no other process which is willing to
enter into its critical section is allowed.
12. signal(S) increases the binary semaphore variable S to 1 so that other processes who are willing to
enter into its critical section can now be allowed.
13. signal(empty) increases the counting semaphore variable empty by 1, as on removing an item from
the buffer, one space is vacant in the buffer and the variable empty must be updated accordingly.
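Putting the producer and the consumer together with the three semaphores above (empty, full, and the binary semaphore S), here is a runnable Python sketch; the buffer size, the state dictionary holding in/out, and the function names are our choices:

```python
import threading

def produce_consume(n_items=50, buf_size=5):
    buffer = [None] * buf_size
    state = {"in": 0, "out": 0}            # producer's and consumer's buffer indices
    empty = threading.Semaphore(buf_size)  # counts the free slots
    full = threading.Semaphore(0)          # counts the filled slots
    S = threading.Semaphore(1)             # binary semaphore guarding the buffer
    consumed = []

    def producer():
        for item in range(n_items):
            empty.acquire()                              # wait(empty)
            S.acquire()                                  # wait(S)
            buffer[state["in"]] = item
            state["in"] = (state["in"] + 1) % buf_size
            S.release()                                  # signal(S)
            full.release()                               # signal(full)

    def consumer():
        for _ in range(n_items):
            full.acquire()                               # wait(full)
            S.acquire()                                  # wait(S)
            consumed.append(buffer[state["out"]])
            state["out"] = (state["out"] + 1) % buf_size
            S.release()                                  # signal(S)
            empty.release()                              # signal(empty)

    tp = threading.Thread(target=producer)
    tc = threading.Thread(target=consumer)
    tp.start(); tc.start()
    tp.join(); tc.join()
    return consumed

if __name__ == "__main__":
    print(produce_consume(10, 3))
```

The producer blocks on wait(empty) when the buffer is full and the consumer blocks on wait(full) when it is empty, so every item is consumed exactly once and in order.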
Message Passing:
In this method, processes communicate with each other without using any kind of shared memory. If
two processes p1 and p2 want to communicate with each other, they proceed as follows:
Establish a communication link (if a link already exists, no need to establish it again.)
Start exchanging messages using basic primitives.
We need at least two primitives:
– send(message, destination) or send(message)
– receive(message, host) or receive(message)
The message size can be fixed or variable. A fixed size is easy for the OS designer but complicated
for the programmer, whereas a variable size is easy for the programmer but complicated for the OS
designer. A standard message has two parts: a header and a body.
The header is used to store the message type, destination id, source id, message length, and
control information. The control information covers things such as what to do if the process runs out of buffer
space, the sequence number, and the priority. Generally, messages are sent in FIFO style.
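The send/receive primitives above can be sketched with a Python queue.Queue acting as the communication link; each message carries a small header (the source id) and a body, delivered in FIFO order. The names p1, p2, and message_demo are ours:

```python
import queue
import threading

def message_demo(bodies=("hello", "world", "bye")):
    link = queue.Queue()              # the communication link between p1 and p2
    received = []

    def p1():                         # the sender
        for body in bodies:
            link.put({"source": "p1", "body": body})    # send(message)

    def p2():                         # the receiver
        for _ in range(len(bodies)):
            msg = link.get()                            # receive(message), FIFO order
            received.append(msg["body"])

    t1 = threading.Thread(target=p1)
    t2 = threading.Thread(target=p2)
    t1.start(); t2.start()
    t1.join(); t2.join()
    return received

if __name__ == "__main__":
    print(message_demo())
```

No memory is shared between the two sides; all information travels inside the messages themselves, and the queue preserves the FIFO delivery order described above.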