OS Unit-2 Notes
UNIT-II
Process and CPU Scheduling - Process concepts and scheduling, Operations on
processes, Cooperating Processes, Threads, and Interprocess Communication,
Scheduling Criteria, Scheduling Algorithms, Multiple-Processor Scheduling.
System call interface for process management-fork, exit, wait, waitpid, exec
1. Process Concepts:
Process is a program in execution. A program by itself is not a process; a
program is a passive entity, such as a file containing a list of instructions stored on
disk (often called an executable file), whereas a process is an active entity, with a
program counter specifying the next instruction to execute and a set of associated
resources. A program becomes a process when an executable file is loaded into
memory. Two common techniques for loading executable files are double-clicking
an icon representing the executable file and entering the name of the executable
file on the command line (as in prog.exe or a.out).
Fig: A program (passive, on disk) becomes a process in memory, with sections: Stack, Heap, Data, Text
• The Text section contains the compiled program code, read in from non-volatile storage when the program is launched.
• The Data section contains the global and static variables required to run the process.
• The Heap is used for dynamic memory allocation, and is managed via calls to new, delete, malloc, free, etc.
• The Stack is used for local variables and function-call management. Space on the stack is reserved for local variables when they are declared.
Processes in the operating system can be in any of the following five states:
• NEW: The process is being created.
• READY: The process is loaded into main memory and is waiting to be assigned to the CPU.
• RUNNING: Instructions of the process are being executed on the CPU.
• WAITING: Whenever the running process requests access to I/O or needs other events to occur, it enters the waiting state.
• TERMINATED: The process has finished execution.
Process Control Block (PCB) is a data structure that contains information related to a process. The PCB stores many data items that are needed for efficient process management. These data items are explained with the help of the given diagram:
Fig: Process control block (fields: Process ID, Process State, Program Counter, CPU Registers, List of Open Files, CPU Scheduling Information, Memory Management Information, I/O Status Information, Accounting Information)
1. Process ID: A unique identifier assigned to the process by the operating system.
2. Process State: This specifies the process state, i.e. new, ready, running, waiting or terminated.
3. Program Counter: This contains the address of the next instruction to be executed in the process.
4. CPU Registers: This specifies the registers used by the process. They must be saved when the process leaves the CPU so that its execution can resume correctly later.
PROCESS SCHEDULING
The act of determining which process in the ready state should be moved to the
running state is known as Process Scheduling.
The prime aim of the process scheduling system is to keep the CPU busy all the
time and to deliver minimum response time for all programs. For achieving this, the
scheduler must apply appropriate rules for swapping processes IN and OUT of
CPU. Scheduling falls into one of two general categories:
• Non-preemptive: once the CPU has been allocated to a process, the process keeps the CPU until it releases it, either by terminating or by switching to the waiting state.
• Preemptive: the scheduler can take the CPU back from the process at any time during the execution of the process.
Scheduling Queues
• All processes, upon entering the system, are stored in the Job Queue.
• Processes residing in main memory that are ready to execute are kept in the Ready Queue.
• A process could issue an I/O request and then be placed in an I/O queue (the process is placed in the waiting state).
• A process could create a sub-process and wait for its termination (the process is placed in the waiting state).
• A process could be removed forcibly from the CPU as a result of an interrupt and be put back in the ready queue (the process is placed in the ready state).
Types of Schedulers
Long Term Scheduler: It is also called the job scheduler. It selects processes from the job queue and loads them into memory for execution. The primary objective of the job scheduler is to provide a balanced mix of jobs, such as I/O-bound and processor-bound. It also controls the degree of multiprogramming.
Short Term Scheduler: It is also called the CPU scheduler. It selects one of the processes in the ready queue as per the scheduling algorithm and allocates the processor to execute it. It shifts the process from the ready state to the running state. Its main objective is to increase system performance. Short-term schedulers are faster than long-term schedulers.
CONTEXT SWITCH
A context switch is the mechanism of storing and restoring the state, or context, of the CPU in the PCB (Process Control Block) so that process execution can be resumed from the same point at a later time. Using this technique, a context switcher enables multiple processes to share a single CPU. Context switching is an essential part of a multitasking operating system. When the scheduler switches the CPU from one process to another, the state of the currently running process is stored into its PCB. After this, the state of the process to run next is loaded from its own PCB and used to set the program counter, registers, etc. At that point, the second process can start executing.
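The save/restore sequence above can be sketched with a toy model in Python. This is illustrative pseudocode only, not real kernel code; the dictionary field names are assumptions chosen to mirror the PCB fields described earlier.

```python
# Toy model of a context switch: each PCB dict keeps the saved CPU state
# (program counter and registers) of one process.

def save_context(cpu, pcb):
    pcb["program_counter"] = cpu["pc"]    # save PC of the running process into its PCB
    pcb["registers"] = dict(cpu["regs"])  # save a copy of the register contents

def restore_context(cpu, pcb):
    cpu["pc"] = pcb["program_counter"]    # load PC of the next process from its PCB
    cpu["regs"] = dict(pcb["registers"])  # load its saved registers

def context_switch(cpu, old_pcb, new_pcb):
    save_context(cpu, old_pcb)            # state of current process -> its PCB
    restore_context(cpu, new_pcb)         # state of next process <- its PCB

# Process A is running (pc=104); process B was previously saved at pc=500.
cpu = {"pc": 104, "regs": {"r0": 7}}
pcb_a = {"program_counter": 0, "registers": {}}
pcb_b = {"program_counter": 500, "registers": {"r0": 42}}
context_switch(cpu, pcb_a, pcb_b)         # CPU now resumes process B
```

After the switch, the CPU holds B's saved state, and A's state is preserved in its PCB for a later resume.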
2. OPERATIONS ON PROCESSES
• Process Blocking: When the running process enters the blocked or waiting state during execution (for example, while waiting for I/O), the processor starts executing the other processes.
• Process Termination: Once the purpose of the process has been served, the OS kills the process. The context of the process (its PCB) is deleted and the process is terminated by the operating system. A running process can enter the terminated state by using the exit() system call. Normally, a parent process terminates only after terminating all its child processes. If the parent process terminates before its child processes do, the surviving children are called orphan processes (a terminated child whose exit status has not yet been collected by its parent is called a zombie process). Orphan processes are adopted and eventually cleaned up by the operating system.
3. INTERPROCESS COMMUNICATION (IPC)
There may be many reasons for requiring cooperating processes. Some of these are given as follows:
• Modularity: Modularity involves dividing complicated tasks into smaller subtasks. These subtasks can be completed by different cooperating processes. This leads to faster and more efficient completion of the required tasks.
• Information Sharing: Several processes may need to share the same information (for example, a shared file); cooperating processes make this possible.
Cooperating processes can coordinate and communicate with each other using shared data or messages. Details about these are given as follows:
Fig: Shared memory model (Process P1 and Process P2 communicate through a region of shared data set up with help from the kernel)
In the shared memory model, processes P1 and P2 can cooperate with each other using shared data such as memory, variables, files, databases, etc.
Fig: Message passing model (Process P1 sends message M to the kernel; the kernel delivers M to Process P2)
In the message passing model, processes P1 and P2 cooperate by exchanging messages. Process P1 sends message 'M' to the kernel, and the kernel delivers the message 'M' to process P2.
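The message passing model can be sketched with Python's multiprocessing module, where a kernel-managed Queue plays the role of the kernel mailbox in the figure. The function names are illustrative, and a Unix-like fork start method is assumed.

```python
from multiprocessing import Process, Queue

def p1_sender(mailbox):
    # P1 hands message 'M' to the kernel-managed queue (the "kernel" in the figure)
    mailbox.put("M")

def p2_receiver(mailbox, result):
    # P2 receives the message from the kernel-managed queue and reports it
    result.put("P2 got " + mailbox.get())

def exchange():
    mailbox, result = Queue(), Queue()
    p1 = Process(target=p1_sender, args=(mailbox,))
    p2 = Process(target=p2_receiver, args=(mailbox, result))
    p1.start(); p2.start()
    p1.join(); p2.join()          # wait for both processes to terminate
    return result.get()
```

Note that P1 and P2 are separate processes with separate address spaces; the only data they exchange travels through the queues, which is exactly the message passing discipline described above.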
4. THREADS
A thread is an execution unit which consists of its own program counter, a stack, and a set of registers. Threads are also known as lightweight processes. Threads are a popular way to improve application performance through parallelism. The CPU switches rapidly back and forth among the threads, giving the illusion that the threads are running in parallel.
Because each thread maintains its own program counter, stack and registers, a single process can make progress on several tasks at the same time by increasing the number of threads.
Types of Threads
1. User Threads: User threads are implemented above the kernel, without kernel support. These are the threads that application programmers use in their programs.
2. Kernel Threads: Kernel threads are supported within the kernel of the OS itself. All modern OSs support kernel-level threads, allowing the kernel to perform multiple simultaneous tasks and/or to service multiple kernel system calls simultaneously.
Multithreading Models: The user threads must be mapped to kernel threads, by one
of the following strategies:
Many to One Model
In the many to one model, many user-level threads are all mapped onto a single kernel thread. Thread management is handled by the thread library in user space, which is efficient in nature. However, if a thread makes a blocking system call the entire process blocks, and only one thread can access the kernel at a time.
Ravindar.M,Asso.Prof,CSEDept,JITS-KNR
R20 CMR Technical Campus
One to One Model
The one to one model creates a separate kernel thread to handle each and every user thread.
Most implementations of this model place a limit on how many threads can be
created.
Linux, and Windows from 95 to XP, implement the one-to-one model for threads.
Many to Many Model
• The many to many model multiplexes any number of user threads onto an equal or smaller number of kernel threads, combining the best features of the one-to-one and many-to-one models.
• Users can create any number of threads.
• A blocking kernel system call does not block the entire process.
• Processes can be split across multiple processors.
Thread Libraries: Thread libraries provide programmers with an API for the creation and management of threads. A thread library may be implemented either in user space or in kernel space. A user-space library involves API functions implemented solely within user space, with no kernel support. A kernel-space library involves system calls, and requires a kernel with thread library support.
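As a sketch of a thread-library API, Python's threading module (which on most platforms maps each Python thread onto a kernel thread) can create, start and join threads as follows. The worker function and the slicing scheme are illustrative choices, not part of any standard API.

```python
import threading

def partial_sum(nums, results, index):
    # Each thread computes the sum of one slice and records it in a shared list
    results[index] = sum(nums)

def threaded_sum(nums, nthreads=4):
    chunk = (len(nums) + nthreads - 1) // nthreads   # ceiling division
    results = [0] * nthreads
    threads = [threading.Thread(target=partial_sum,
                                args=(nums[i*chunk:(i+1)*chunk], results, i))
               for i in range(nthreads)]
    for t in threads:
        t.start()        # create and start the threads
    for t in threads:
        t.join()         # wait for all threads to terminate
    return sum(results)
```

Because the threads share the process's memory, they can all write into the same `results` list, illustrating the resource sharing that distinguishes threads from separate processes.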
Benefits of Multithreading
1. Responsiveness: A multithreaded application can keep running even if part of it is blocked or performing a lengthy operation.
2. Resource Sharing: Threads share the memory and resources of the process to which they belong.
3. Economy: Creating and context-switching threads is cheaper than creating and switching whole processes.
4. Scalability: Threads can run in parallel on multiple processor cores.
5. SCHEDULING CRITERIA
The "best" scheduling algorithm can be decided based on many criteria. They are:
• CPU Utilization: CPU utilization is the amount of time the processor is busy over a period of time. It is also used to estimate system performance. To make the best use of the CPU and not waste any CPU cycle, the CPU should be working most of the time (ideally 100% of the time). In a real system, CPU usage should range from 40% (lightly loaded) to 90% (heavily loaded).
• Throughput: It is the total number of processes completed per unit time, i.e. the total amount of work done in a unit of time.
• Turnaround Time: It is the amount of time taken to execute a particular process, i.e. the interval from the time of submission of the process (arrival time) to the time of completion of the process (finish time).
Turnaround Time = Finish Time - Arrival Time
• Waiting Time: The sum of the periods of time the process spends waiting in the ready queue for the CPU.
Waiting Time = Turnaround Time - Burst Time
• Load Average: It is the average number of processes residing in the ready queue
waiting for their turn to get into the CPU.
• Response Time: Amount of time it takes from when a request was submitted until
the first response is produced. Remember, it is the time till the first response and
not the completion of process execution (final response).
Note: In general CPU utilization and Throughput are maximized and other factors
are reduced for proper optimization.
6. CPU SCHEDULING ALGORITHMS
A CPU scheduling algorithm decides the order in which the processes in the ready queue are allocated the CPU.
Important Terminologies
• Finish Time: It is the time at which the process completes the execution.
• Turnaround Time: It is the time interval from arrival time of the process to
the finish time of the process.
• Average waiting time: It is the average of the waiting times of all the
processes executed over a period of time.
• Average turnaround time: It is the average of the turnaround times of all
the processes executed over a period of time.
• Response time: Amount of time it takes from when a request was
submitted until the first response is produced. Remember, it is the time till
the first response and not the completion of process execution (final
response).
i. First Come First Serve Scheduling: The First Come First Served (FCFS) scheduling algorithm schedules the execution of processes based on their arrival time. The process which arrives first gets executed first, the process that arrives second gets executed next, and so on. It is implemented by using a FIFO (First In First Out) queue: a new process enters through the tail of the queue, and the scheduler selects the process at the head of the queue for execution. A perfect real-life example of FCFS scheduling is buying tickets at a ticket counter.
• It is the easiest and most simple CPU scheduling algorithm.
• Average waiting time is very high compared to other algorithms.
• It is a non-preemptive algorithm.
Advantages of FCFS
1. Simple to understand
2. Easy to implement
Disadvantages of FCFS
1. The scheduling method is non preemptive, the process will run to the completion.
2. Due to the non-preemptive nature of the algorithm, the problem of starvation may
occur.
3. Although it is easy to implement, it is poor in performance, since the average
waiting time is higher compared to other scheduling algorithms.
4. It suffers from Convoy Effect.
FCFS may suffer from the convoy effect if the burst time of the first job is the highest among all. Just as in real life, where other vehicles are blocked until a passing convoy has gone by completely, the same situation arises in the operating system: if processes with higher burst times sit at the front of the ready queue, the processes with lower burst times behind them get blocked, and may wait indefinitely if the job in execution has a very high burst time. This is called the convoy effect, and in the extreme it amounts to starvation.
PROBLEMS:
Question 1: Consider the following five processes (P1, P2, P3, P4, P5) with arrival times (0, 1, 2, 3, 4) and burst times (2, 6, 4, 9, 12) respectively. Find the average waiting time and average turnaround time for the above processes using the FCFS CPU scheduling algorithm.
Solution:
Process | Arrival Time | Burst Time | Start Time | Finish Time | Turnaround Time | Waiting Time
P1      |      0       |     2      |     0      |      2      |        2        |      0
P2      |      1       |     6      |     2      |      8      |        7        |      1
P3      |      2       |     4      |     8      |     12      |       10        |      6
P4      |      3       |     9      |    12      |     21      |       18        |      9
P5      |      4       |    12      |    21      |     33      |       29        |     17

Average turnaround time = (2 + 7 + 10 + 18 + 29) / 5 = 66 / 5 = 13.2
Average waiting time = (0 + 1 + 6 + 9 + 17) / 5 = 33 / 5 = 6.6
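The FCFS computation in the worked table above can be reproduced with a short simulation. The function name and the (name, arrival, burst) tuple layout are illustrative choices.

```python
def fcfs(processes):
    """FCFS simulation: processes is a list of (name, arrival, burst) tuples."""
    time, rows = 0, []
    for name, arrival, burst in sorted(processes, key=lambda p: p[1]):
        start = max(time, arrival)        # CPU may sit idle until the process arrives
        finish = start + burst
        turnaround = finish - arrival     # Turnaround Time = Finish Time - Arrival Time
        waiting = turnaround - burst      # Waiting Time = Turnaround Time - Burst Time
        rows.append((name, start, finish, turnaround, waiting))
        time = finish
    avg_tat = sum(r[3] for r in rows) / len(rows)
    avg_wt = sum(r[4] for r in rows) / len(rows)
    return rows, avg_tat, avg_wt

# The worked example: arrivals (0,1,2,3,4), bursts (2,6,4,9,12)
rows, avg_tat, avg_wt = fcfs(
    [("P1", 0, 2), ("P2", 1, 6), ("P3", 2, 4), ("P4", 3, 9), ("P5", 4, 12)])
```

Running this reproduces the start/finish times in the table and the averages 13.2 and 6.6.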
ii. Shortest Job First (SJF) Scheduling: The Shortest Job First (SJF) scheduling algorithm schedules the execution of processes based on their burst time. The process with the lowest burst time among the available processes is executed first, the process with the second shortest burst time is executed next, and so on. If two processes have the same burst time, then FCFS is used to break the tie. This scheduling algorithm can be preemptive or non-preemptive.
Advantages of SJF
1. Maximum throughput
2. Minimum average waiting and turnaround time
Disadvantages of SJF
1. Longer processes may starve if short processes keep arriving.
2. The burst time of a process must be known in advance, which is not practical in most systems.
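A minimal non-preemptive SJF simulation, using a small assumed process set (the data, function name and tuple layout are illustrative, not taken from the notes):

```python
def sjf(processes):
    """Non-preemptive SJF: at each scheduling point pick the ready process with
    the smallest burst time; FCFS (earlier arrival) breaks ties."""
    remaining = sorted(processes, key=lambda p: p[1])   # (name, arrival, burst)
    time, stats = 0, {}
    while remaining:
        ready = [p for p in remaining if p[1] <= time]
        if not ready:
            time = min(p[1] for p in remaining)         # CPU idle: jump to next arrival
            continue
        proc = min(ready, key=lambda p: (p[2], p[1]))   # shortest burst, then earliest arrival
        name, arrival, burst = proc
        time += burst                                    # run the chosen job to completion
        stats[name] = (time - arrival, time - arrival - burst)  # (turnaround, waiting)
        remaining.remove(proc)
    return stats

# Assumed example: P1 runs first (only job at t=0); at t=7 the shortest job P3 wins.
stats = sjf([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)])
```

Here P1 occupies the CPU until t=7, after which P3 (burst 1) jumps ahead of the longer jobs P2 and P4, illustrating how SJF favours short jobs.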
iii. Shortest Remaining Time First (SRTF): SRTF is the preemptive version of the Shortest Job First (SJF) algorithm. Initially it proceeds like SJF, but whenever a new process arrives with a burst time shorter than the remaining burst time of the currently executing process, the currently executing process is preempted and the new process is scheduled.
Gantt chart time points: 0, 2, 3, 5, 8, 12, 18, 27

Average turnaround time = (27 + 18 + 8 + 5 + 12) / 5 = 14
Average waiting time = (18 + 10 + 2 + 0 + 3) / 5 = 6.6
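A minimal SRTF sketch that advances time one unit at a time and always runs the process with the least remaining burst time. The three-process data set is an assumed example for illustration, not from the notes.

```python
def srtf(processes):
    """Preemptive SJF (SRTF): each time unit, run the ready process with the
    least remaining time; earlier arrival breaks ties."""
    rem = {name: burst for name, arrival, burst in processes}
    arrival = {name: a for name, a, b in processes}
    time, finish = 0, {}
    while rem:
        ready = [n for n in rem if arrival[n] <= time]
        if not ready:
            time += 1                    # CPU idle: no process has arrived yet
            continue
        n = min(ready, key=lambda x: (rem[x], arrival[x]))
        rem[n] -= 1                      # run the chosen process for one time unit
        time += 1
        if rem[n] == 0:
            finish[n] = time
            del rem[n]
    stats = {}
    for name, a, b in processes:
        tat = finish[name] - a
        stats[name] = (tat, tat - b)     # (turnaround, waiting)
    return stats

# Assumed example: P2 preempts P1 at t=1, P3 preempts P2 at t=2.
stats = srtf([("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 2)])
```

The longest job P1 is preempted twice and finishes last, which is exactly the behaviour (and the starvation risk) described for SRTF above.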
iv. Priority Scheduling: Priority scheduling algorithm schedules the execution of the
processes based on their priorities. A priority value is associated with each process.
The process with the highest priority among the list of available processes is
executed first, second highest priority process executed next and so on. If two
processes have the same priority then FCFS is used to break the tie. This scheduling
algorithm can be preemptive or non-preemptive.
Question: Consider the following five processes (P1, P2, P3, P4, P5) with arrival times (0, 0, 2, 3, 5), burst times (9, 8, 4, 2, 4) and priority values (4, 3, 2, 5, 1) respectively, where a lower value means a higher priority. Find the average waiting time and average turnaround time for the above processes using the Priority CPU scheduling algorithm.
Solution (non-preemptive priority):
Gantt chart time points: 0, 8, 12, 16, 25, 27
Average turnaround time = (25 + 8 + 14 + 24 + 7) / 5 = 15.6
Average waiting time = (16 + 0 + 10 + 22 + 3) / 5 = 10.2
Solution (preemptive priority):
Average turnaround time = (25 + 16 + 8 + 24 + 4) / 5 = 15.4
Average waiting time = (16 + 8 + 4 + 22 + 0) / 5 = 10.0
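The non-preemptive priority answer above can be checked with a short simulation, assuming (as the worked numbers imply) that a lower priority value means a higher priority. Function name and tuple layout are illustrative.

```python
def priority_np(processes):
    """Non-preemptive priority scheduling. Each tuple is (name, arrival, burst,
    priority); a LOWER priority number means HIGHER priority, FCFS breaks ties."""
    remaining = list(processes)
    time, stats = 0, {}
    while remaining:
        ready = [p for p in remaining if p[1] <= time]
        if not ready:
            time = min(p[1] for p in remaining)        # CPU idle until next arrival
            continue
        proc = min(ready, key=lambda p: (p[3], p[1]))  # highest priority first
        name, arrival, burst, _ = proc
        time += burst                                   # run to completion (non-preemptive)
        stats[name] = (time - arrival, time - arrival - burst)  # (TAT, WT)
        remaining.remove(proc)
    n = len(stats)
    avg_tat = sum(t for t, w in stats.values()) / n
    avg_wt = sum(w for t, w in stats.values()) / n
    return stats, avg_tat, avg_wt

# The worked example: arrivals (0,0,2,3,5), bursts (9,8,4,2,4), priorities (4,3,2,5,1)
stats, avg_tat, avg_wt = priority_np(
    [("P1", 0, 9, 4), ("P2", 0, 8, 3), ("P3", 2, 4, 2), ("P4", 3, 2, 5), ("P5", 5, 4, 1)])
```

This reproduces the Gantt order P2, P5, P3, P1, P4 and the averages 15.6 and 10.2 from the solution above.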
vi. Round Robin Scheduling: A certain fixed time, called the time quantum or time slice, is defined in the system. Each process present in the ready queue is executed for at most that time quantum; if the execution of the process completes within that time, the process terminates, otherwise the process goes back to the ready queue and waits for the next round to complete its remaining execution. Context switching is used to save the execution state of the preempted process. This algorithm avoids the starvation problem.
Fig: Round Robin scheduling (a new process enters the ready queue; the CPU runs it for one time quantum; if execution is completed the process exits, otherwise it re-enters the ready queue)
Advantages
1. Every process gets a fair share of the CPU, so no process starves.
2. It gives good response times, which makes it suitable for time-sharing systems.
Disadvantages
1. The higher the time quantum, the higher the response time in the system.
2. The lower the time quantum, the higher the context switching overhead in the
system.
3. Deciding a perfect time quantum is really a very difficult task in the system.
Question: Consider the following five processes (P1, P2, P3, P4, P5) with arrival times (0, 2, 3, 4, 7) and burst times (9, 8, 4, 6, 8) respectively. Find the average waiting time and average turnaround time for the above processes using the Round Robin CPU scheduling algorithm. Use time quantum / time slice = 3.
Gantt chart time points: 0, 3, 6, 9, 12, 15, 18, 21, 22, 25, 28, 30, 33, 35
Average turnaround time = (25 + 28 + 19 + 24 + 28) / 5 = 24.8
Average waiting time = (16 + 20 + 15 + 18 + 20) / 5 = 17.8
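The Round Robin answer above can be reproduced with a queue-based simulation. Tuple layout and names are illustrative; the convention used (a process arriving during a slice enters the queue before the preempted process re-enters it) matches the worked answer.

```python
from collections import deque

def round_robin(processes, quantum):
    """Round Robin: each (name, arrival, burst) process runs for at most
    `quantum` units, then goes back to the tail of the ready queue."""
    procs = sorted(processes, key=lambda p: p[1])
    rem = {name: burst for name, arr, burst in procs}
    arrival = {name: arr for name, arr, burst in procs}
    queue, time, i, finish = deque(), 0, 0, {}
    while len(finish) < len(procs):
        while i < len(procs) and procs[i][1] <= time:   # admit arrived processes
            queue.append(procs[i][0]); i += 1
        if not queue:
            time = procs[i][1]                          # CPU idle: jump to next arrival
            continue
        name = queue.popleft()
        run = min(quantum, rem[name])                   # one time slice (or less)
        time += run
        rem[name] -= run
        while i < len(procs) and procs[i][1] <= time:   # arrivals during the slice
            queue.append(procs[i][0]); i += 1
        if rem[name] == 0:
            finish[name] = time                         # process is done
        else:
            queue.append(name)                          # back to the tail of the queue
    stats = {}
    for name, arr, burst in procs:
        tat = finish[name] - arr
        stats[name] = (tat, tat - burst)                # (turnaround, waiting)
    n = len(procs)
    return stats, sum(t for t, w in stats.values()) / n, sum(w for t, w in stats.values()) / n

# The worked example: arrivals (0,2,3,4,7), bursts (9,8,4,6,8), quantum 3
stats, avg_tat, avg_wt = round_robin(
    [("P1", 0, 9), ("P2", 2, 8), ("P3", 3, 4), ("P4", 4, 6), ("P5", 7, 8)], 3)
```

This reproduces the Gantt boundaries above and the averages 24.8 and 17.8.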
vii. Multilevel Queue Scheduling: The ready queue is partitioned into several separate queues (for example: system processes, interactive processes, interactive editing processes, and batch processes), and each process is permanently assigned to one queue. Each queue has absolute priority over lower-priority queues. For example, no process in the batch queue could run unless the queues for system processes, interactive processes, and interactive editing processes were all empty. If an interactive editing process entered the ready queue while a batch process was running, the batch process would be preempted.
Advantages:
• The processes are permanently assigned to their queue, so it has the advantage of low scheduling overhead.
Disadvantages:
• Some processes may starve for the CPU if the higher-priority queues never become empty.
• It is inflexible in nature.
7. MULTIPLE-PROCESSOR SCHEDULING
In multiple-processor scheduling, multiple CPUs are available, and the OS can use any available processor to run any process in the queue. Hence multiple processes can be executed in parallel. However, multiple-processor scheduling is more complex than single-processor scheduling.
i. Asymmetric and Symmetric Multiprocessing: In Asymmetric Multiprocessing, all the scheduling decisions and I/O processing are handled by a single processor, called the master server, while the other processors execute only user code. This is simple and reduces the need for data sharing.
In Symmetric Multiprocessing, each processor is self scheduling. All
processes may be in a common ready queue or each processor may maintain its
own private queue for ready processes. Scheduling proceeds by having the scheduler for each processor examine the ready queue and select a process to execute.
ii. Processor Affinity: When a process runs on a specific processor, there are certain effects on the cache memory. The data most recently accessed by the process populate the cache for that processor, and as a result successive memory accesses by the process are often satisfied from the cache. Now, if the process migrates to another processor, the contents of the cache memory of the first processor must be invalidated and the cache of the second processor must be repopulated. Because of the high cost of invalidating and repopulating caches, most SMP (symmetric multiprocessing) systems try to avoid migrating processes from one processor to another and instead try to keep a process running on the same processor. This is known as PROCESSOR AFFINITY. There are two types of processor affinity:
a. Soft Affinity: When an operating system has a policy of attempting to keep
a process running on the same processor but not guaranteeing it will do so,
this situation is calledsoftaffinity.
b. Hard Affinity: Some operating systems, such as Linux, provide system calls (e.g. sched_setaffinity()) that allow a process to specify that it must not migrate away from a given set of processors; this guarantee is called hard affinity.
iii. Multicore Processors: A recent trend is to place multiple processor cores on the same physical chip. Each core has a register set to maintain its architectural state and thus appears to the operating system as a separate physical processor. SMP systems that use multicore processors are faster and consume less power than systems in which each processor has its own physical chip. However, multicore processors may complicate the scheduling problems.
When a processor accesses memory, it may spend a significant amount of time waiting for the data to become available. This situation is called a MEMORY STALL. It occurs for various reasons, such as a cache miss (accessing data that is not in the cache memory). In such cases the processor can spend up to fifty percent of its time waiting for data to become available from memory. To solve this problem, recent hardware designs have implemented multithreaded processor cores in which two or more hardware threads are assigned to each core. Therefore, if one thread stalls while waiting for memory, the core can switch to another thread. There are two ways to multithread a processor:
a. Coarse-Grained Multithreading: In coarse-grained multithreading, a thread executes on a core until a long-latency event such as a memory stall occurs; the core then switches to another thread, which is relatively expensive because the instruction pipeline must be flushed.
b. Fine-Grained Multithreading: In fine-grained multithreading, the core switches between threads at a much finer granularity, typically at the boundary of an instruction cycle, so the cost of switching between threads is small.
iv. Scheduling and Virtualization: A system with virtualization, even a single-CPU system, acts like a multiple-processor system. The virtualization layer presents one or more virtual CPUs to each of the
virtual machines running on the system and then schedules the use of physical
CPU among the virtual machines. Most virtualized environments have one host
operating system and many guest operating systems. The host operating system
creates and manages the virtual machines. Each virtual machine has a guest
operating system installed and applications run within that guest. Each guest
operating system may be assigned for specific use cases, applications or users
including time sharing or even real-time operation. Any guest operating-system
scheduling algorithm that assumes a certain amount of progress in a given amount
of time will be negatively impacted by the virtualization. The net effect of such
scheduling layering is that individual virtualized operating systems receive only a
portion of the available CPU cycles, even though they believe they are receiving
all cycles and that they are scheduling all of those cycles. Commonly, the time-of-day clocks in virtual machines are incorrect because timers take longer to trigger than they would on dedicated CPUs. Virtualization can thus undo the good scheduling-algorithm efforts of the operating systems within virtual machines.
8. SYSTEM CALL INTERFACE FOR PROCESS MANAGEMENT
Process management is done with a number of system calls, each with a single (simple) purpose. These system calls can be combined to implement more complex behaviors. The basic process management system calls are fork(), exec(), wait(), waitpid(), exit(), etc.
• fork( ): This system call is used to create a new process. A parent process uses fork to create a new child process. The child process is an almost-exact duplicate of the parent process. After fork, both parent and child execute the same program, but in separate processes. fork() returns 0 in the child and the child's PID in the parent. The signature of fork() is:
pid_t fork(void);
• exec( ): The exec family of system calls (execl(), execv(), execvp(), etc.) replaces the current process image with a new program; it is typically called in the child process after fork() to run a different program. For example, the signature of execv() is:
int execv(const char *path, char *const argv[]);
• wait( ): This system call is used by the parent process to obtain the exit status of a child process. When this system call is used, the execution of the parent process is suspended until one of its children terminates. The signature of wait() is:
pid_t wait(int *status);
• waitpid( ): When a parent process has more than one child process, the waitpid() system call is used by the parent process to obtain the termination status of a particular child. The signature of waitpid() is:
pid_t waitpid(pid_t pid, int *status, int options);
• exit( ): The exit() system call is used by a program to terminate its execution. The operating system reclaims the resources that were used by the process after the exit() system call. The signature of exit() is:
void exit(int status);
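On Unix-like systems these calls can be exercised from Python's os module, which wraps the same fork()/waitpid()/_exit() system calls described above (a sketch, Unix only; the function name is illustrative):

```python
import os

def run_child_and_wait():
    # fork() creates a near-duplicate child process; it returns 0 in the
    # child and the child's PID in the parent.
    pid = os.fork()
    if pid == 0:
        # Child process: do its work, then terminate with exit status 7.
        os._exit(7)
    # Parent process: waitpid() suspends us until that particular child
    # terminates, then hands back its PID and encoded exit status.
    done_pid, status = os.waitpid(pid, 0)
    exit_code = os.WEXITSTATUS(status) if os.WIFEXITED(status) else -1
    return done_pid == pid, exit_code

matched, code = run_child_and_wait()
```

Because the parent calls waitpid(), the child's exit status is collected and no zombie process is left behind; replacing the child's work with an os.execv() call would run a different program in the child, mirroring the fork-then-exec pattern described above.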