Operating System Assignment
Properties of a Process:
Each process is created through its own system call; creation is handled
separately for every process.
A process is an isolated execution entity and does not share its data and
information with other processes.
Processes use IPC (inter-process communication) mechanisms to communicate,
which significantly increases the number of system calls.
Process management therefore consumes more system calls.
Each process has its own stack and heap memory, its own instructions and
data, and its own memory map.
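Process isolation can be seen in a short C sketch (a minimal example assuming a POSIX system where fork() and wait() are available): after fork(), the parent and the child each hold their own copy of the variable, so the child's update is not visible to the parent.

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int counter = 0; /* after fork(), each process gets its own private copy */

int main(void)
{
    pid_t pid = fork();   /* system call that creates a new, isolated process */

    if (pid == 0) {
        counter++;                                  /* modifies only the child's copy */
        printf("child:  counter = %d\n", counter);  /* prints 1 */
    } else {
        wait(NULL);                                 /* wait for the child to finish */
        printf("parent: counter = %d\n", counter);  /* still prints 0 */
    }
    return 0;
}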
Definition of Thread:
A thread is a unit of program execution that uses the resources of its
process to accomplish a task. All threads within a single program are
logically contained within one process. The kernel allocates a stack and a
thread control block (TCB) to each thread. When switching between threads
of the same process, the operating system saves only the stack pointer and
the CPU state.
Properties of a Thread:
A single system call can create more than one thread (a thread is a
lightweight process).
Threads share data and information.
Threads share the instruction, global and heap regions, but each thread has
its own stack and registers.
Thread management consumes few or no system calls, because communication
between threads can be achieved through shared memory.
By contrast, the isolation property of a process increases its overhead in
terms of resource consumption.
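The sharing between threads can be sketched with POSIX threads (the names worker, counter and lock below are illustrative, not part of any prescribed API): both threads update the same global variable, while the argument each thread works on lives on its own private stack. Compile with -pthread.

#include <stdio.h>
#include <pthread.h>

int counter = 0;                                  /* shared global region */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER; /* protects the shared data */

void *worker(void *arg)
{
    int local = (int)(long)arg;   /* lives on this thread's private stack */
    pthread_mutex_lock(&lock);
    counter += local;             /* all threads update the same variable */
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_create(&t2, NULL, worker, (void *)2L);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", counter);  /* 3: both updates are visible */
    return 0;
}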
Conclusion:
Processes are used to execute programs in a concurrent and sequential
manner, while a thread is a unit of program execution that uses the
environment of its process. When many threads use the environment of the
same process, they share its code, data and resources. The operating system
uses this fact to reduce overhead and improve computation.
What is Context Switching?
Context switching is the procedure by which the CPU switches from one
process or task to another. The kernel suspends the execution of the
process that is currently in the running state, and the CPU executes
another process taken from the ready state.
It is one of the essential features of a multitasking operating system.
Processes are switched so quickly that the user gets the illusion that all
processes are executing at the same time.
Context switching, however, involves a number of steps that must be
followed. A process cannot simply be moved from the running state to the
ready state; its context must be saved first. If the context of a process P
is not saved, then the next time P is given the CPU it will start executing
from the beginning, whereas it should continue from the point where it left
the CPU in its previous execution. So the context of the running process
must be saved before any other process is put into the running state.
A context is the contents of a CPU's registers and program
counter at any point in time. Context switching can
happen due to the following reasons:
When a higher-priority process enters the ready state. In this case, the
execution of the running process should be stopped and the higher-priority
process should be given the CPU.
When an interrupt occurs, the process in the running state should be
stopped and the CPU should handle the interrupt before doing anything else.
When a transition between user mode and kernel mode is required, a context
switch has to be performed.
Steps involved in Context Switching
Context switching involves a number of steps. Consider two processes P1
and P2, where P1 is initially in the running state and P2 is in the ready
state. When an interrupt occurs, P1 has to be moved from the running state
to the ready state after its context is saved, and P2 has to be moved from
the ready state to the running state. The following steps are performed:
1. First, the context of process P1, i.e. the process in the running
state, is saved in the Process Control Block of P1, i.e. PCB1.
2. PCB1 is then moved to the relevant queue, i.e. the ready queue, I/O
queue, waiting queue, etc.
3. From the ready state, the new process to be executed is selected, i.e.
process P2.
4. The Process Control Block of P2, i.e. PCB2, is updated by setting its
process state to running. If P2 was executed by the CPU earlier, the
position of its last executed instruction can be retrieved so that its
execution can be resumed from that point.
5. Similarly, to execute process P1 again, the same steps (1 to 4) are
followed.
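As a rough, purely illustrative sketch of steps 1 to 4 (the struct fields and the function below are hypothetical and not taken from any real kernel; the actual register save and restore is done in architecture-specific assembly):

#include <stdint.h>

enum proc_state { READY, RUNNING, WAITING };

struct pcb {                          /* hypothetical Process Control Block */
    int             pid;
    enum proc_state state;
    uint64_t        program_counter;  /* where to resume execution */
    uint64_t        stack_pointer;    /* top of the process's stack */
    uint64_t        registers[16];    /* saved general-purpose registers */
};

void context_switch(struct pcb *pcb_old, struct pcb *pcb_new)
{
    /* Step 1: the running process's CPU context (registers, PC, stack
       pointer) is saved into pcb_old. */
    pcb_old->state = READY;     /* Step 2: it is placed back on a queue */

    /* Steps 3-4: the scheduler has picked pcb_new; its saved context is
       restored so it resumes at the instruction where it left the CPU. */
    pcb_new->state = RUNNING;
}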
In general, at least two processes are required for context switching to
happen; in the case of the round-robin algorithm, a context switch can be
performed with the help of only one process.
The time involved in switching the CPU from one process to another is
called the context switching time.
An operating system provides the following services:
Program execution
I/O operations
File System manipulation
Communication
Error Detection
Resource Allocation
Protection
Program execution:
Operating systems handle many kinds of activities, from user programs to
system programs such as the printer spooler, name servers, file servers,
etc. Each of these activities is encapsulated as a process.
Safety Algorithm:
The algorithm for finding out whether or not a system is in
a safe state can be described as follows:
1) Let Work and Finish be vectors of length m and n respectively.
Initialize: Work = Available
Finish[i] = false for i = 1, 2, 3, ..., n
2) Find an i such that both
a) Finish[i] = false
b) Need[i] <= Work
If no such i exists, go to step (4).
3) Work = Work + Allocation[i]
Finish[i] = true
Go to step (2).
4) If Finish[i] = true for all i, then the system is in a safe state.
Resource-Request Algorithm:
Let Request[i] be the request array for process Pi. Request[i][j] = k
means process Pi wants k instances of resource type Rj. When process Pi
makes a request for resources, the following actions are taken:
1) If Request[i] <= Need[i], go to step (2); otherwise, raise an error
condition, since the process has exceeded its maximum claim.
2) If Request[i] <= Available, go to step (3); otherwise, Pi must wait,
since the resources are not available.
3) Have the system pretend to have allocated the requested resources to
process Pi by modifying the state as follows:
Available = Available - Request[i]
Allocation[i] = Allocation[i] + Request[i]
Need[i] = Need[i] - Request[i]
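A minimal C sketch of this resource-request check for one process Pi might look as follows (the function name request_resources and the constant M are assumptions made here for illustration, not part of the algorithm's definition):

#include <stdbool.h>
#include <stdio.h>

#define M 3   /* number of resource types, matching the A, B, C example below */

bool request_resources(int request[M], int need[M], int allocation[M],
                       int available[M])
{
    for (int j = 0; j < M; j++)          /* step 1: Request[i] <= Need[i]?   */
        if (request[j] > need[j]) {
            printf("Error: process exceeded its maximum claim\n");
            return false;
        }

    for (int j = 0; j < M; j++)          /* step 2: Request[i] <= Available? */
        if (request[j] > available[j])
            return false;                /* Pi must wait */

    for (int j = 0; j < M; j++) {        /* step 3: pretend to allocate      */
        available[j]  -= request[j];
        allocation[j] += request[j];
        need[j]       -= request[j];
    }
    /* A full implementation would now run the safety algorithm and roll
       this state back if the resulting state is unsafe. */
    return true;
}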
Example:
Consider a system with five processes P0 through P4 and three resource
types A, B and C. Resource type A has 10 instances, B has 5 instances and
C has 7 instances. Suppose the following snapshot of the system has been
taken at time t0; the Allocation and Maximum matrices and the Available
vector of this snapshot appear in the program below:
#include <stdio.h>

int main()
{
    // P0, P1, P2, P3, P4 are the process names here
    int n, m, i, j, k;
    n = 5; // Number of processes
    m = 3; // Number of resources

    int alloc[5][3] = { { 0, 1, 0 },   // P0    // Allocation matrix
                        { 2, 0, 0 },   // P1
                        { 3, 0, 2 },   // P2
                        { 2, 1, 1 },   // P3
                        { 0, 0, 2 } }; // P4

    int max[5][3] = { { 7, 5, 3 },     // P0    // Maximum matrix
                      { 3, 2, 2 },     // P1
                      { 9, 0, 2 },     // P2
                      { 2, 2, 2 },     // P3
                      { 4, 3, 3 } };   // P4

    int avail[3] = { 3, 3, 2 };        // Available instances of A, B, C

    int f[5] = { 0 };                  // f[i] = 1 once process i has finished
    int ans[5], ind = 0, y;            // ans[] records the safe sequence

    int need[5][3];                    // Need = Max - Allocation
    for (i = 0; i < n; i++)
        for (j = 0; j < m; j++)
            need[i][j] = max[i][j] - alloc[i][j];

    for (k = 0; k < n; k++) {          // at most n passes over the processes
        for (i = 0; i < n; i++) {
            if (f[i] == 0) {           // process i has not finished yet
                int flag = 0;
                for (j = 0; j < m; j++) {
                    if (need[i][j] > avail[j]) {
                        flag = 1;      // its need cannot be met right now
                        break;
                    }
                }
                if (flag == 0) {       // need can be met: run it, then
                    ans[ind++] = i;    // release its allocated resources
                    for (y = 0; y < m; y++)
                        avail[y] += alloc[i][y];
                    f[i] = 1;
                }
            }
        }
    }

    printf("Following is the SAFE Sequence\n");
    for (i = 0; i < n - 1; i++)
        printf(" P%d ->", ans[i]);
    printf(" P%d\n", ans[n - 1]);

    return (0);
}
Output:
Following is the SAFE Sequence
P1 -> P3 -> P4 -> P0 -> P2
Wait operation
x.wait(): A process performing a wait operation on a condition variable is
suspended. The suspended process is placed in the block queue of that
condition variable.
Note: Each condition variable has its own unique block queue.
Signal operation
x.signal(): When a process performs a signal operation on a condition
variable, one of the blocked processes is given a chance to resume.
If (x's block queue is empty)
// Ignore the signal
else
// Resume a process from x's block queue
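A rough sketch of wait and signal using POSIX condition variables (the names monitor_lock, x, resource_free, acquire and release are illustrative; in a true language-level monitor the compiler would supply the entry lock implicitly):

#include <pthread.h>
#include <stdbool.h>

pthread_mutex_t monitor_lock = PTHREAD_MUTEX_INITIALIZER; /* monitor entry lock */
pthread_cond_t  x            = PTHREAD_COND_INITIALIZER;  /* condition variable */
bool resource_free = false;                               /* guarded condition  */

void acquire(void)
{
    pthread_mutex_lock(&monitor_lock);
    while (!resource_free)                /* x.wait(): suspend and join x's queue */
        pthread_cond_wait(&x, &monitor_lock);
    resource_free = false;
    pthread_mutex_unlock(&monitor_lock);
}

void release(void)
{
    pthread_mutex_lock(&monitor_lock);
    resource_free = true;
    pthread_cond_signal(&x);              /* x.signal(): resume one blocked
                                             process; has no effect if x's
                                             block queue is empty */
    pthread_mutex_unlock(&monitor_lock);
}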
Advantages of Monitor:
Monitors have the advantage of making parallel programming easier and less
error-prone than techniques such as semaphores.
Disadvantages of Monitor:
Monitors have to be implemented as part of the programming language, and
the compiler must generate code for them. This gives the compiler the
additional burden of having to know which operating system facilities are
available to control access to critical sections in concurrent processes.
Some languages that support monitors are Java, C#, Visual Basic, Ada and
Concurrent Euclid.
Q-9 Show that, if the wait() and signal() semaphore operations are not
executed atomically, then mutual exclusion may be violated.
Ans. A wait operation atomically decrements the value associated with a
semaphore. Suppose two wait operations are executed on a semaphore whose
value is 1. If the two operations are not performed atomically, both may
read the value 1 and proceed to decrement the semaphore, so both processes
enter the critical section, thereby violating mutual exclusion.
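A sketch of a deliberately non-atomic wait makes the race visible (broken_wait and signal_op are hypothetical names used only for this illustration; real implementations rely on atomic instructions or on disabling interrupts):

int s = 1;                /* semaphore value protecting a critical section */

void broken_wait(void)
{
    while (s <= 0)
        ;                 /* busy-wait until the semaphore is positive */
    /* If a second process is scheduled at this point, it also sees s == 1, */
    s = s - 1;            /* so both decrement s and both enter the critical
                             section: mutual exclusion is violated.          */
}

void signal_op(void)
{
    s = s + 1;
}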