Chapter 2
SYLLABUS :
Process Concepts: What is a process? – Process and Program – Process States – Process
Control Block – Operations on a process - Co-operating processes – Inter-process
communication
Process scheduling: Scheduler – Types of schedulers – Scheduling criteria – Pre-
emptive and non-pre-emptive scheduling – Scheduling algorithms (FCFS, SJF, Round-
robin, Priority)
Threads: Overview of Threads - Multi-threading model
PROCESS CONCEPTS
WHAT IS A PROCESS?
A process is a program in execution: it performs the actions specified in that program and can be defined as the unit of execution in which a program runs. The OS creates, schedules, and terminates processes, which are executed by the CPU. A process created by another process is called a child process, and the creating process is its parent.
Process operations are controlled with the help of the PCB (Process Control Block). You can consider it the brain of the process: it contains all the crucial information related to the process, such as the process ID, priority, state, CPU registers, etc.
It is the job of the OS to manage all the running processes of the system. It does so by performing tasks such as process scheduling and resource allocation.
Process memory is divided into four sections for efficient working, shown in the diagram below.
Figure: Process in memory
➢ Stack: stores temporary data such as function parameters, return addresses, and local variables.
➢ Heap: contains memory that is dynamically allocated at run time.
➢ Data: contains global and static variables.
➢ Text: contains the program code; the current activity is represented by the value of the Program Counter.
STATES OF A PROCESS
During its execution, a process passes through a number of states. This section describes the various states of a process during its lifecycle.
➢ New (Create) – The process is about to be created but has not yet been created; it is the program, present in secondary memory, that the OS will pick up to create the process.
➢ Ready – After its creation, the process enters the ready state, i.e. it is loaded into main memory. The process is ready to run and waits for CPU time to execute. Processes that are ready for execution by the CPU are maintained in the ready queue.
➢ Run – The process is chosen by CPU for execution and the instructions within the
process are executed by any one of the available CPU cores.
➢ Blocked or wait – Whenever the process requests access to I/O or needs input from
the user or needs access to a critical region(the lock for which is already acquired) it
enters the blocked or wait state. The process continues to wait in the main memory and
does not require CPU. Once the I/O operation is completed the process goes to the ready
state.
➢ Terminated or completed – The process is killed and its PCB is deleted.
➢ Suspend ready – A process that was initially in the ready state but was swapped out of main memory (see the Virtual Memory topic) and placed in external storage by the scheduler is said to be in the suspend ready state. The process transitions back to the ready state when it is brought into main memory again.
➢ Suspend wait or suspend blocked – Similar to suspend ready, but applies to a process that was performing an I/O operation when a shortage of main memory caused it to be moved to secondary memory. When its work is finished, it may move to the suspend ready state.
PROCESS CONTROL BLOCK (PCB)
The Process Control Block is the data structure the OS keeps for every process. Among other things, it stores the contents of the processor registers, which are saved when the process leaves the running state and restored when it returns to it. The OS updates the information in the PCB as soon as the process makes a state transition.
1. Process ID (PID) – a unique integer identifying the process at every stage of its execution.
2. Process State – the state the process is currently in, such as ready, wait, exit, etc.
3. Process Privileges – the special access rights the process has to memory or devices.
4. Pointer – a pointer to the parent process.
5. Program Counter – always holds the address of the next instruction to be executed for the process.
6. CPU Registers – the register contents that must be saved and restored so the process can resume execution correctly.
7. Scheduling Information – the process priority and the other information needed by the scheduling algorithm that will select the process.
8. Memory Management Information – information the OS needs to manage the process's memory, such as the page table, memory limits, and segment table.
9. Accounting Information – as the name suggests, it contains information such as the CPU time consumed by the process, execution IDs, time limits, etc.
10. I/O Status Information – the list of I/O devices allocated to the process.
NOTE: The PCB structure differs from one OS to another. The above is the most generalized layout.
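As an illustration only, the generalized PCB fields above can be modelled as a small record. The field names here are ours, chosen to mirror the list, and are not taken from any real operating system:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Toy model of the generalized PCB fields listed above."""
    pid: int                                        # 1. Process ID
    state: str = "new"                              # 2. Process state
    program_counter: int = 0                        # 5. next instruction address
    registers: dict = field(default_factory=dict)   # 6. saved CPU registers
    priority: int = 0                               # 7. scheduling information
    open_files: list = field(default_factory=list)  # 10. I/O status

pcb = PCB(pid=101)
pcb.state = "ready"          # the OS updates the PCB on each state transition
print(pcb.pid, pcb.state)    # -> 101 ready
```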
OPERATIONS ON A PROCESS
Process Creation
➢ A process may create several new processes, via a create process system call,
during the course of execution.
➢ The creating process is called the parent process, and the new processes are called
the children of that process.
Process Deletion
➢ A process terminates when it finishes executing its final statement and asks the
operating system to delete it using the exit() system call.
➢ At that point, the process may return a status value (an integer) to its parent process via the wait() system call. All the resources of the process are then de-allocated by the operating system.
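On a POSIX system, process creation and deletion can be sketched with Python's os module. This is a Unix-only illustration; fork(), _exit(), and waitpid() map onto the create, exit(), and wait() operations described above:

```python
import os

def spawn_child():
    """Fork a child that terminates with status 7; the parent
    collects that status via wait()."""
    pid = os.fork()              # create a child process (Unix-only)
    if pid == 0:
        # Child: do its work, then terminate.
        # os._exit() is the safe way to exit a forked child.
        os._exit(7)              # status value returned to the parent
    # Parent: block until the child terminates, then read its status.
    done_pid, status = os.waitpid(pid, 0)
    return done_pid == pid, os.WEXITSTATUS(status)

reaped, code = spawn_child()
print(reaped, code)              # -> True 7
```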
CO-OPERATING PROCESSES
In a computer system there are many processes, which may be either independent processes or cooperating processes running in the operating system.
There are several reasons for providing an environment that allows process
cooperation:
➢ Information sharing
Several users may want the same piece of information at the same time (for instance, a shared file), so we must provide an environment that allows concurrent access to such resources.
➢ Computation speedup
If we want a task to run faster, we break it into subtasks, each of which executes in parallel with the others. Note that such a speedup can be achieved only if the computer has multiple processing elements (such as CPUs or I/O channels).
➢ Modularity
We may wish to construct the system in a modular fashion, dividing its functions into separate processes.
➢ Convenience
An individual user may have many tasks to perform at the same time, such as editing, printing, and compiling.
A common example is the producer-consumer relationship: a producer produces an item and places it into a buffer, while a consumer removes and consumes that item. For example, a print program produces characters that are consumed by the printer driver. A compiler may produce assembly code, which is consumed by an assembler. The assembler, in turn, may produce object modules, which are consumed by the loader.
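The producer-consumer idea above can be sketched with a bounded buffer. Python's queue.Queue stands in for the shared buffer, and the None sentinel is our own convention for "no more items":

```python
import queue
import threading

def produce(buf, items):
    # Producer: place each item into the shared buffer
    for item in items:
        buf.put(item)            # blocks if the buffer is full
    buf.put(None)                # sentinel: tells the consumer to stop

def consume(buf, out):
    # Consumer: remove items from the buffer until the sentinel arrives
    while True:
        item = buf.get()         # blocks if the buffer is empty
        if item is None:
            break
        out.append(item.upper()) # "consuming" = transforming the item

buf = queue.Queue(maxsize=2)     # bounded buffer of capacity 2
out = []
p = threading.Thread(target=produce, args=(buf, ["a", "b", "c"]))
c = threading.Thread(target=consume, args=(buf, out))
p.start(); c.start(); p.join(); c.join()
print(out)                       # -> ['A', 'B', 'C']
```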
INTER-PROCESS COMMUNICATION
Processes in an operating system often need to communicate with each other; this is called inter-process communication (IPC). IPC is a mechanism, usually provided by the operating system, whose goal is to enable communication between several processes. It is used for exchanging data between multiple threads in one or more processes or programs. There are two fundamental models:
1. Message passing
2. Shared memory
1. Message passing
One important way inter-process communication takes place is via message passing. When two or more processes participate in IPC, each process sends messages to the others via the kernel. As an example of sending a message between two processes: Process A sends a message "M" to the OS kernel; the message is then read by Process B. A communication link is required between the two processes for a successful message exchange, and there are several ways to establish such links.
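A minimal sketch of this exchange, using Python's multiprocessing.Pipe as the kernel-mediated link (the function names here are ours):

```python
from multiprocessing import Process, Pipe

def sender(conn):
    # "Process A" hands the message "M" to the communication link
    conn.send("M")
    conn.close()

def exchange():
    a_end, b_end = Pipe()        # the communication link between A and B
    p = Process(target=sender, args=(b_end,))
    p.start()
    msg = a_end.recv()           # "Process B" reads the message
    p.join()
    return msg

print(exchange())                # -> M
```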
2. Shared memory
Shared memory is a region of memory established by two or more processes and shared between them. Access to this region must be synchronized so that the processes do not interfere with each other. Two processes A and B can set up a shared memory segment and exchange data through this shared memory area.
Suppose process A wants to communicate with process B. Each process attaches the shared memory segment to its address space; process A writes a message into the shared memory, and process B reads that message from it. The processes themselves are responsible for synchronization, ensuring that both do not write to the same location at the same time.
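A minimal shared-memory sketch, using Python's multiprocessing.Value as the shared segment; the get_lock() calls show the synchronization the text requires:

```python
from multiprocessing import Process, Value

def writer(shared):
    # "Process A" writes into the shared memory segment
    with shared.get_lock():      # synchronized access
        shared.value = 42

def demo():
    shared = Value("i", 0)       # one shared integer, initially 0
    p = Process(target=writer, args=(shared,))
    p.start()
    p.join()
    with shared.get_lock():
        return shared.value      # "Process B" reads what A wrote

print(demo())                    # -> 42
```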
PROCESS SCHEDULING
Process scheduling allows the OS to allocate an interval of CPU execution time to each process. Another important reason for using a process scheduling system is that it keeps the CPU busy at all times, which helps achieve minimum response times for programs.
OBJECTIVES OF SCHEDULING:
The fundamental objective of scheduling is to keep the CPU as busy as possible, share it fairly among the ready processes, maximize throughput, and keep waiting time and response time to a minimum.
1. Long-term scheduler :
The long-term scheduler is also known as the job scheduler. It chooses processes from the pool (secondary memory) and keeps them in the ready queue maintained in primary memory.
Long Term scheduler mainly controls the degree of Multiprogramming. The purpose of
long term scheduler is to choose a perfect mix of IO bound and CPU bound processes
among the jobs present in the pool.
If the job scheduler chooses more IO bound processes then all of the jobs may reside in
the blocked state all the time and the CPU will remain idle most of the time. This will
reduce the degree of Multiprogramming. Therefore, the Job of long term scheduler is very
critical and may affect the system for a very long time.
2. Short-term scheduler :
It is responsible for selecting one process from the ready state and scheduling it into the running state. Note: the short-term scheduler only selects the process to schedule; it does not itself load the process onto the CPU. This is where the scheduling algorithms are applied. The CPU scheduler is also responsible for ensuring that processes with high burst times do not starve the others.
3. Medium-term scheduler :
It is responsible for suspending and resuming the process. It mainly does swapping
(moving processes from main memory to disk and vice versa). Swapping may be
necessary to improve the process mix or because a change in memory requirements has
overcommitted available memory, requiring memory to be freed up. It is helpful in
maintaining a perfect balance between the I/O bound and the CPU bound. It reduces the
degree of multiprogramming.
SCHEDULING CRITERIA IN OS
The aim of the scheduling algorithm is to maximize or minimize the following:
Maximize:
• CPU utilization - it makes sure that the CPU is kept as busy as possible.
• Throughput - the number of processes that complete their execution per unit of time.
Minimize:
• Waiting time, turnaround time, and response time, as defined below.
Scheduling Criteria
➢ CPU Utilization
The scheduling algorithm should be designed in such a way that the usage of the
CPU should be as efficient as possible.
➢ Throughput
It can be defined as the number of processes executed by the CPU in a given
amount of time. It is used to find the efficiency of a CPU.
➢ Response Time
Response time is the time from when a job enters the ready queue until it starts executing for the first time; the scheduler should try to minimize it.
Response time = Time at which the process gets the CPU for the first time - Arrival time
➢ Turnaround time
Turnaround time is the total amount of time spent by the process from coming in
the ready state for the first time to its completion.
Turnaround time = Burst time + Waiting time
or
Turnaround time = Exit time - Arrival time
➢ Waiting time
Waiting time is the total time a process spends in the ready queue while many jobs compete for execution; it should be minimized.
Waiting time = Turnaround time - Burst time
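The formulas above can be checked with a tiny helper; the numbers in the example are made up for illustration:

```python
def criteria(arrival, first_cpu, exit_time, burst):
    """Apply the turnaround, waiting, and response time formulas."""
    turnaround = exit_time - arrival       # Turnaround = Exit - Arrival
    waiting = turnaround - burst           # Waiting = Turnaround - Burst
    response = first_cpu - arrival         # Response = First CPU - Arrival
    return turnaround, waiting, response

# A process arrives at t=0, first gets the CPU at t=2,
# has a burst of 4 units, and exits at t=9.
print(criteria(0, 2, 9, 4))   # -> (9, 5, 2)
```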
Preemptive Scheduling
In preemptive scheduling, tasks are usually assigned priorities. Sometimes it is necessary to run a higher-priority task before a lower-priority one, even while the lower-priority task is still running: the lower-priority task is paused for a while and resumes when the higher-priority task finishes its execution.
Non-Preemptive Scheduling
In non-preemptive scheduling, once the CPU has been allocated to a process, the process keeps it until it releases the CPU by switching context or by terminating. It is the only method usable on every hardware platform, because it does not need the special hardware (for example, a timer) that preemptive scheduling requires.
1. FIRST COME FIRST SERVE (FCFS):
In FCFS, the process that requests the CPU first is allocated the CPU first; ready processes are served strictly in arrival order.
Characteristics of FCFS:
➢ FCFS is a non-preemptive scheduling algorithm: once a process gets the CPU, it keeps it until its burst completes.
➢ Tasks are always executed on a First-come, First-serve concept.
➢ FCFS is easy to implement and use.
➢ This algorithm is not much efficient in performance, and the wait time is quite
high.
Advantages of FCFS:
➢ Easy to implement
➢ First come, first serve method
Disadvantages of FCFS:
➢ FCFS suffers from Convoy effect.
➢ The average waiting time is much higher than the other algorithms.
➢ Because of its simplicity, FCFS is not very efficient.
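The waiting-time behaviour of FCFS can be sketched in a few lines; this is a toy simulation assuming all processes arrive at t = 0:

```python
def fcfs_waiting_times(bursts):
    """Waiting time of each process when served in arrival order
    (all processes assumed to arrive at t=0)."""
    waits, clock = [], 0
    for burst in bursts:
        waits.append(clock)      # each process waits until the CPU frees up
        clock += burst
    return waits

bursts = [24, 3, 3]              # one long job arriving first
waits = fcfs_waiting_times(bursts)
print(waits, sum(waits) / len(waits))   # -> [0, 24, 27] 17.0
```

Note how the long first job drags the average waiting time up to 17; serving the two short jobs first would reduce it sharply, which is the convoy effect in miniature.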
2. SHORTEST JOB FIRST (SJF):
Shortest Job First (SJF) is a scheduling policy that selects the waiting process with the smallest execution time to execute next. This scheduling method may be either preemptive or non-preemptive. It significantly reduces the average waiting time of the processes waiting to be executed.
Characteristics of SJF:
➢ Shortest Job first has the advantage of having a minimum average waiting
time among all operating system scheduling algorithms.
➢ Associated with each task is the length of its next CPU burst, which is used to decide which process runs next.
➢ It may cause starvation if shorter processes keep coming. This problem
can be solved using the concept of ageing.
Advantages of Shortest Job first:
➢ As SJF reduces the average waiting time thus, it is better than the first
come first serve scheduling algorithm.
➢ SJF is generally used for long term scheduling
Disadvantages of SJF:
➢ One of the demerits of SJF is starvation.
➢ It is often complicated to predict the length of the upcoming CPU request.
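Non-preemptive SJF can be sketched the same way as FCFS; this toy simulation assumes all processes arrive at t = 0 and their burst lengths are known:

```python
def sjf_waiting_times(bursts):
    """Non-preemptive SJF for processes that all arrive at t=0:
    serving the shortest bursts first minimizes average waiting time."""
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    waits, clock = [0] * len(bursts), 0
    for i in order:              # run jobs shortest-first
        waits[i] = clock
        clock += bursts[i]
    return waits

bursts = [6, 8, 7, 3]
waits = sjf_waiting_times(bursts)
print(waits, sum(waits) / len(waits))   # -> [3, 16, 9, 0] 7.0
```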
3. PRIORITY SCHEDULING:
In priority scheduling, each process is assigned a priority, and the CPU is allocated to the ready process with the highest priority. As a result, processes with lower priority may have to wait for a longer amount of time to get scheduled onto the CPU. This condition is called the starvation problem.
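A minimal sketch of non-preemptive priority scheduling; the process names are hypothetical, all processes are assumed to arrive at t = 0, and we follow the common convention that a lower number means a higher priority:

```python
def priority_order(procs):
    """Run order under non-preemptive priority scheduling
    (lower number = higher priority, all arrive at t=0)."""
    return [name for name, prio in sorted(procs, key=lambda p: p[1])]

procs = [("P1", 3), ("P2", 1), ("P3", 4), ("P4", 2)]
print(priority_order(procs))   # -> ['P2', 'P4', 'P1', 'P3']
```

In this sketch P3, the lowest-priority process, always runs last; if higher-priority processes kept arriving, it would starve, which is exactly the problem described above.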
4. ROUND ROBIN:
Round Robin is a CPU scheduling algorithm where each process is cyclically assigned
a fixed time slot. It is the preemptive version of First come First Serve CPU Scheduling
algorithm. Round Robin CPU Algorithm generally focuses on Time Sharing technique.
Characteristics of Round robin:
• It is simple, easy to use, and starvation-free, as all processes get a fair share of the CPU.
• It is one of the most widely used CPU scheduling algorithms.
• It is preemptive, as each process is given the CPU for only a limited time slice.
Advantages of Round robin:
• Round robin seems to be fair as every process gets an equal share of CPU.
• The newly created process is added to the end of the ready queue.
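The cycling behaviour of Round Robin can be sketched with a ready queue; this toy simulation assumes all processes arrive at t = 0 and uses a fixed time quantum:

```python
from collections import deque

def round_robin_completion(bursts, quantum):
    """Completion time of each process under Round Robin
    (all processes arrive at t=0, fixed time quantum)."""
    remaining = list(bursts)
    done = [0] * len(bursts)
    ready = deque(range(len(bursts)))    # the ready queue
    clock = 0
    while ready:
        i = ready.popleft()
        run = min(quantum, remaining[i]) # run for at most one quantum
        clock += run
        remaining[i] -= run
        if remaining[i] > 0:
            ready.append(i)              # preempted: back of the queue
        else:
            done[i] = clock              # finished
    return done

print(round_robin_completion([5, 3, 1], quantum=2))   # -> [9, 8, 5]
```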
What is Dispatcher?
The dispatcher is the module that gives control of the CPU to the process selected by the short-term scheduler. It must be fast, since it runs on every context switch. Dispatch latency is the amount of time the dispatcher needs to stop one process and start another. The dispatcher's functions include:
• Context switching
• Switching to user mode
• Jumping to the proper location in the newly loaded program to restart it
FCFS may suffer from the convoy effect if the burst time of the first job is the highest among all. As in real life, if a convoy is passing along a road, other vehicles may be blocked until it has passed completely. The same situation can arise in the operating system.
If processes with high burst times reach the front of the ready queue, the processes with lower burst times get blocked behind them, and they may never get the CPU if the job currently executing has a very high burst time. This is called the convoy effect, and it can lead to starvation.
REVIEW QUESTIONS
TWO-MARKS QUESTIONS
1. Differentiate Process and Program
2. List the objectives of scheduling
3. What is context switching?
4. What is starvation and aging?
5. Define Convoy effect
6. What is dispatcher
7. What is co-operative and independent process
8. Explain the operations on a process
FOUR-MARKS QUESTIONS
1. What is process? Explain process state transition with neat diagram
2. Explain PCB and its contents
3. What is process scheduling? Explain process scheduling criteria
4. Differentiate between pre-emptive and non-pre-emptive scheduling algorithms
EIGHT-MARKS QUESTIONS
1. What is scheduler? Explain different types of schedulers
2. Describe SJF and RR scheduling algorithm with advantages and disadvantages
3. Explain FCFS and Priority scheduling algorithm with advantages and disadvantages
4. Write a note on
a) Co-operating Process
b) Explain Inter-Process communication