Ch2 - Process and Process Management
2.1 Process concepts
In a single-user system such as Microsoft Windows, a user may be
able to run several programs at one time.
All these running activities are similar in nature, so we call each of them a process.
2. Execution of a process-creation system call by a running
process (fork in Unix and CreateProcess in Windows)
Process Termination
Events that cause process termination:
1. Normal exit (voluntary): the process has finished execution.
4. Killed by another process (involuntary): the killing process must have
authorization (kill in Unix and TerminateProcess in Windows)
• Process synchronization
• Process communication
2.1.1. Process state and process state transition
As the process executes, it changes state.
A process may be in one of the following states.
The three main states are described as follows.
Process State Transitions
There are four state transitions; two of them are
caused by the process scheduler.
The process scheduler is the part of the OS that decides
which process should get the CPU next.
1. Ready → Running
– The assignment of the CPU to the first process on
the ready list is called dispatching, and it is
performed by a system entity called the dispatcher.
Dispatch(process_name): Ready → Running
2. Running → Ready
– A process may be stopped by a clock interrupt if
it does not relinquish (release) the CPU before its
allocated time expires.
Timeout(process_name): Running → Ready
3. Running → Waiting
– During I/O, a process voluntarily relinquishes the
CPU.
Wait(process_name): Running → Waiting
4. Waiting → Ready
– When the I/O operation finishes.
Wakeup(process_name): Waiting → Ready
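The four transitions above can be sketched as a small lookup table; the state and event names are illustrative, not taken from any real OS API:

```python
# Illustrative sketch: the four scheduler-related state transitions
# modeled as a (state, event) -> next-state lookup table.
TRANSITIONS = {
    ("ready", "dispatch"): "running",   # dispatcher assigns the CPU
    ("running", "timeout"): "ready",    # clock interrupt: time slice expired
    ("running", "wait"): "waiting",     # process blocks for I/O
    ("waiting", "wakeup"): "ready",     # I/O completes
}

def transition(state, event):
    """Return the next state, or raise if the transition is illegal."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"illegal transition: {event} in state {state}")

# Example: a process is dispatched, blocks for I/O, and is woken up.
s = "ready"
for event in ("dispatch", "wait", "wakeup"):
    s = transition(s, event)
print(s)  # the process ends up back on the ready list
```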
Implementation of Processes
The OS maintains a table (an array of structures) called the
Process Table; each entry is a Process Control Block (PCB).
The PCB is an area of memory containing all the relevant information
associated with a process.
It enables an interrupted process to resume normally.
These may include:
– Process state
– Program counter
– Stack pointer
– Values of the CPU registers when the process is
suspended
– CPU scheduling information, such as priority
– The area of memory used by the process (memory
management information)
– Accounting information: amount of CPU time
used, time limits, process number, etc.
– I/O status information: a list of I/O devices such as
tape drives allocated to the process, a list of open
files, etc.
– Pointer to the next process's PCB
[Figure: PCB layout – pointer, process state, process number, program counter, registers, memory limits, …]
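As a rough sketch, a PCB can be represented as a record whose fields follow the list above; the field names and layout here are illustrative assumptions, not any real OS's process table:

```python
# Illustrative sketch of a PCB; field names follow the list above.
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int                       # process number
    state: str = "ready"           # process state
    program_counter: int = 0
    stack_pointer: int = 0
    registers: dict = field(default_factory=dict)
    priority: int = 0              # CPU scheduling information
    memory_limits: tuple = (0, 0)  # memory management information
    cpu_time_used: int = 0         # accounting information
    open_files: list = field(default_factory=list)  # I/O status information
    next_pcb: "PCB | None" = None  # pointer to the next process's PCB

# The process table is simply a collection of PCBs indexed by process number.
process_table = {p.pid: p for p in (PCB(1), PCB(2), PCB(3))}
process_table[2].state = "running"
print(process_table[2])
```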
2.2 Threads
• A thread is a flow of execution through the process's code, with its
own program counter, registers and stack.
• Threads provide a way to improve application performance through
parallelism.
• Threads are a software approach to improving operating-system
performance by reducing overhead; in many respects a thread behaves
like a classical process.
• Each thread represents a separate flow of control.
Cont'd
• They also provide a suitable foundation for parallel execution
of applications on shared memory multiprocessors.
• Thread allows multiple executions to take place in the same
process environment, called multithreading.
– (a) Three processes, each with one thread. (b) One
process with three threads.
Each thread has its own program counter, registers, stack,
and state; but all threads of a process share the address
space, global variables and other resources such as
open files.
[Table: items shared by all threads in a process (address space, global variables, open files) vs. items private to each thread (program counter, registers, stack, state).]
• The first column lists some items shared by all threads in a
process.
• The second one lists some items private to each thread.
• Threads are sometimes called lightweight processes.
• Threads take turns running.
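A minimal sketch of this sharing, using Python's threading module: three threads update the same global variable, showing that all of them see the process's one address space (a lock keeps the counter consistent):

```python
# Three threads of one process incrementing a shared global counter.
import threading

counter = 0            # shared: lives in the process's address space
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:     # stacks and registers are per-thread; globals are shared
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 30000: all three threads updated the same variable
```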
Thread Usage-Why do we need threads?
Cont'd
1. They simplify the programming model when many activities
are going on at once.
2. They are easier to create and destroy than processes, since
they do not have resources attached to them.
3. Performance improves by overlapping activities when there is
substantial I/O.
4. Real parallelism is possible if there are multiple CPUs.
2.3 Interprocess communication
• Processes frequently need to communicate with other processes.
– These issues also apply to threads; the first
is easy for threads since they share a common
address space.
Race conditions
– Arise as a result of sharing resources
– E.g., the printer spooler
– When a process wants to print a file, it enters the
file name in a special spooler directory
– Another process, the printer daemon, periodically
checks whether there are any files to be printed; if
there are, it prints them and removes them from the directory.
– Recall that a daemon is a process running in the
background, started automatically when the system is
booted.
– Assume that the spooler directory has a large number
of slots, numbered 0, 1, 2, 3, …, n, each capable of
holding a file name.
– There are two shared variables (shared by all
processes):
– out – points to the next file to be printed
– in – points to the next free slot in the directory
• Then the following may happen:
– Process A reads in and stores the value 7 in a local
variable.
– A clock interrupt occurs and the CPU is switched
to B.
– B also reads in and stores the value 7 in a local
variable.
– B stores the name of its file in slot 7 and updates in
to be 8.
– A runs again; it writes its file name in slot 7, erasing
the file name that B wrote.
– It updates in to be 8.
• The printer daemon will notice nothing wrong, but B will never
receive any output.
• Situations like this, where two or more processes are reading and
writing some shared data and the final result depends on who
runs precisely when, are called race conditions.
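The interleaving above can be replayed deterministically as straight-line code; the slot contents and the explicit read-then-write split are illustrative assumptions:

```python
# Deterministic replay of the spooler race described above: A reads the
# shared variable, the CPU "switches" to B, and A later resumes with a
# stale local copy.
spooler = {}      # slot number -> file name
in_slot = 7       # shared: next free slot in the directory

# Process A reads `in_slot` into a local variable ...
a_local = in_slot
# ... clock interrupt: B runs, reads the same value, and completes its update.
b_local = in_slot
spooler[b_local] = "B's file"
in_slot = b_local + 1
# A resumes with its stale local copy and overwrites B's entry.
spooler[a_local] = "A's file"
in_slot = a_local + 1

print(spooler)   # {7: "A's file"} -- B's file name was silently erased
print(in_slot)   # 8
```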
2.4. Process Scheduling
• When a system is multiprogrammed, it frequently has multiple
processes competing for the CPU at the same time.
• Multiprogramming – aims to increase throughput
• Time sharing – aims to allow all users to use the CPU equitably
Scheduling Queues
Scheduling levels
• Short-term (CPU scheduler)—selects from jobs in memory those
jobs that are ready to execute and allocates the CPU to them.
• Medium-term—used especially with time-sharing systems as an
intermediate scheduling level.
– A swapping scheme is implemented to remove partially run
programs from memory and reinstate them later to continue
where they left off.
• Long-term (job scheduler)—determines which jobs are brought
into memory for processing.
Context switching
• Switching the CPU to another process requires saving the
environment of the old process and loading the saved
environment of the new process. This task is called context
switching.
Scheduling criteria
• A good scheduling algorithm must ensure the following
– Fairness- make sure that each process gets its fair share of the
CPU.
– Efficiency (CPU utilization)- keep the CPU as busy as possible.
– Response time- minimize the response time for interactive
users
– Turnaround time- minimize the time batch users must wait for
output.
– Throughput- maximize the number of processes executed per
unit of time.
• Some of these goals are contradictory (e.g., maintaining fairness
increases context switching, which decreases efficiency)
Scheduling algorithms
• There are two types of scheduling algorithms: non-preemptive
and preemptive
First Come First Served Scheduling (FCFSS)
• Case i. If they arrive in the order P1, P2, P3
Process          P1   P2   P3
Response time     0   24   27
Average response time = (0+24+27)/3 = 17
Average turn around time = (24+27+30)/3 = 27
Cont'd
• Case ii. If they arrive in the order P3, P2, P1
Process          P3   P2   P1
Response time     0    3    6
Average response time = (0+3+6)/3 = 3
Average turn around time = (3+6+30)/3 = 13
• Advantages
– It is the simplest of all non-preemptive scheduling algorithms: process
selection and maintenance of the queue are simple
– It is often combined with priority scheduling to improve efficiency
• Drawbacks
– Poor CPU and I/O utilization: the CPU is idle while a process is blocked
on an I/O operation
– Poor and unpredictable performance: it depends on the order in which
processes arrive
– Unfair CPU allocation: if a big process is executing, all other processes
are forced to wait until it releases the CPU, so it performs much better
for long processes than for short ones
Shortest Job First Scheduling (SJFS)
• Basic Concept
– The process with the shortest expected processing time (CPU
burst) is selected next.
– Its selection function is total execution time, and it uses a
non-preemptive decision mode.
• Illustration
– Consider the following processes, all arriving at time 0:
• Process             P1   P2   P3
• CPU Burst (in ms)   24    3    3
Case i. FCFSS                          (TAT = CT − AT)
Process             P1   P2   P3       (WT = TAT − BT)
Turn around time    24   27   30
Response time        0   24   27
Average response time = (0+24+27)/3 = 17
Average turn around time = (24+27+30)/3 = 27
Throughput = 3/30
Case ii. SJFS
Process             P3   P2   P1
Turn around time     3    6   30
Response time        0    3    6
Average response time = (0+3+6)/3 = 3
Average turn around time = (3+6+30)/3 = 13
Throughput = 3/30
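The figures in both cases can be checked with a short simulation. All three processes arrive at time 0, so the response time is just the start time and the turnaround time is the completion time:

```python
# Verify the FCFSS/SJFS comparison above: bursts of 24, 3 and 3 ms,
# all arriving at time 0; only the service order differs.
def run_in_order(bursts):
    """Non-preemptive run; returns (response times, turnaround times)."""
    t, response, turnaround = 0, [], []
    for burst in bursts:
        response.append(t)        # arrival at time 0: response = start time
        t += burst
        turnaround.append(t)      # turnaround = completion time
    return response, turnaround

# Case i (FCFS order P1, P2, P3) and case ii (SJF order P3, P2, P1):
for label, order in (("FCFS", [24, 3, 3]), ("SJF", [3, 3, 24])):
    resp, tat = run_in_order(order)
    print(label, sum(resp) / 3, sum(tat) / 3)
# FCFS 17.0 27.0
# SJF 3.0 13.0
```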
Shortest Job First Scheduling (SJFS)
Eg1. Consider the following processes arriving at times 0, 2, 4, 6, 8 respectively:
Process                 P1   P2   P3   P4   P5
Arrival Time (Ta)        0    2    4    6    8
Service Time (Ts)        3    6    4    5    2
Turn around time (Tr)    3    7   11   14    3
Response time            0    1    7    9    1
Average response time = (0+1+7+9+1)/5 = 3.6
Average turn around time = (3+7+11+14+3)/5 = 7.6
Throughput = 5/20
Eg2. Consider the following processes arriving at times 2, 1, 4, 0, 2 respectively:
Process   AT   BT   CT   TAT   WT   RT
P1         2    1    7     5    4    4
P2         1    5   16    15   10   10
P3         4    1    8     4    3    3
P4         0    6    6     6    0    0
P5         2    3   11     9    6    6
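A non-preemptive SJF simulation, written as a sketch, reproduces the Eg2 table; breaking ties by arrival time is an assumption (here it decides between P1 and P3, which have equal bursts):

```python
# Non-preemptive SJF: CT = completion, TAT = CT - AT, WT = TAT - BT,
# RT = start - arrival (for non-preemptive scheduling RT equals WT).
def sjf(jobs):
    """jobs: {pid: (arrival, burst)}; returns {pid: (ct, tat, wt, rt)}."""
    remaining, t, out = dict(jobs), 0, {}
    while remaining:
        ready = {p: ab for p, ab in remaining.items() if ab[0] <= t}
        if not ready:                          # CPU idle until next arrival
            t = min(ab[0] for ab in remaining.values())
            continue
        # shortest burst first; ties broken by arrival time
        pid = min(ready, key=lambda p: (ready[p][1], ready[p][0]))
        arrival, burst = remaining.pop(pid)
        start = t
        t += burst
        out[pid] = (t, t - arrival, t - arrival - burst, start - arrival)
    return out

jobs = {"P1": (2, 1), "P2": (1, 5), "P3": (4, 1), "P4": (0, 6), "P5": (2, 3)}
for pid, row in sorted(sjf(jobs).items()):
    print(pid, *row)
```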
• Advantages
Shortest Remaining Time Scheduling (SRTS)
• Basic Concept
– The process with the shortest expected remaining processing
time is selected next.
– If a new process arrives with a shorter next CPU burst than
what is left of the currently executing process, the new
process gets the CPU.
– Its selection function is remaining execution time, and it uses
a preemptive decision mode.
• Illustration
– Consider the following processes arriving at times 0, 2, 4, 6, 8
respectively
Process          P1   P2   P3   P4   P5
Response time     0    1    0    9    0
Process   AT   BT   CT   TAT   WT   RT
P1         2    1    3     1    0    0
P2         1    5   16    15   10   10
P3         4    1    5     1    0    0
P4         0    6   11    11    5    0
P5         2    3    7     5    2    1
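The table can be reproduced with a tick-by-tick preemptive simulation; breaking ties by arrival time is an assumption:

```python
# Preemptive shortest-remaining-time simulation in 1 ms ticks, for the
# same five processes as the SJFS example (Eg2).
def srt(jobs):
    """jobs: {pid: (arrival, burst)}; returns {pid: (ct, tat, wt, rt)}."""
    rem = {p: b for p, (a, b) in jobs.items()}
    first_run, ct, t = {}, {}, 0
    while rem:
        ready = [p for p in rem if jobs[p][0] <= t]
        if not ready:                          # CPU idle until next arrival
            t += 1
            continue
        # shortest remaining time; ties broken by arrival time
        pid = min(ready, key=lambda p: (rem[p], jobs[p][0]))
        first_run.setdefault(pid, t)
        rem[pid] -= 1
        t += 1
        if rem[pid] == 0:
            ct[pid] = t
            del rem[pid]
    return {p: (ct[p], ct[p] - a, ct[p] - a - b, first_run[p] - a)
            for p, (a, b) in jobs.items()}

jobs = {"P1": (2, 1), "P2": (1, 5), "P3": (4, 1), "P4": (0, 6), "P5": (2, 3)}
for pid, row in sorted(srt(jobs).items()):
    print(pid, *row)
```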
• Advantages
– It gives superior turnaround time performance compared to SJFS,
because a short job is given immediate preference over a
longer running process
• Drawbacks
Round Robin Scheduling (RRS)
• Basic Concept
– A small amount of time called a quantum or time slice is
defined.
– When the timer interrupt occurs, the currently running process is
placed in the ready queue, and the next ready process is
selected on a FCFS basis.
– The ready queue is treated as a circular queue.
– Its selection function is based on the quantum, and it uses a
preemptive decision mode.
Process   AT   BT   CT   TAT   WT   RT
P1         0    8   22    22   14    0
P2         5    2   11     6    4    4
P3         1    7   23    22   15    2
P4         6    3   14     8    5    5
P5         8    5   25    17   12    9
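The quantum is not stated on the slide; a quantum of 3 ms, with an arriving process entering the ready queue ahead of the process it preempts, reproduces the table above. Both conventions are assumptions of this sketch:

```python
# Round-robin simulation; jobs: {pid: (arrival, burst)}.
from collections import deque

def rr(jobs, quantum):
    """Returns {pid: (ct, tat, wt, rt)}."""
    order = sorted(jobs, key=lambda p: jobs[p][0])   # by arrival time
    rem = {p: b for p, (a, b) in jobs.items()}
    queue, first_run, ct = deque(), {}, {}
    t, i = 0, 0
    while queue or i < len(order):
        while i < len(order) and jobs[order[i]][0] <= t:
            queue.append(order[i]); i += 1           # admit arrivals
        if not queue:                                # CPU idle: jump ahead
            t = jobs[order[i]][0]
            continue
        pid = queue.popleft()
        first_run.setdefault(pid, t)
        run = min(quantum, rem[pid])
        t += run
        rem[pid] -= run
        while i < len(order) and jobs[order[i]][0] <= t:
            queue.append(order[i]); i += 1           # arrivals queue first
        if rem[pid]:
            queue.append(pid)        # quantum expired: back of the queue
        else:
            ct[pid] = t
    return {p: (ct[p], ct[p] - a, ct[p] - a - b, first_run[p] - a)
            for p, (a, b) in jobs.items()}

jobs = {"P1": (0, 8), "P2": (5, 2), "P3": (1, 7), "P4": (6, 3), "P5": (8, 5)}
for pid, row in sorted(rr(jobs, 3).items()):
    print(pid, *row)
```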
• Features
– The oldest, simplest, fairest and most widely used preemptive
scheduling algorithm
• Drawbacks
– CPU-bound processes tend to receive an unfair portion of CPU time,
which results in poor performance for I/O-bound processes
– High overhead due to frequent process switches
• Setting the quantum too short causes
– Poor CPU utilization
– Good interactivity
• Setting the quantum too long causes
– Improved CPU utilization
– Poor interactivity
Priority Scheduling (PS)
• Basic Concept
– Each process is assigned a priority, and the runnable
process with the highest priority is allowed to run, i.e., a
ready process with the highest priority is given the CPU.
– Consider the following priority classes (non-preemptive; a smaller
number means a higher priority)
Process   Priority   AT   BT   CT   TAT   WT   RT
P1           3        0    8    8     8    0    0
P2           4        1    2   17    16   14   14
P3           4        3    4   21    18   14   14
P4           5        4    1   22    18   17   17
P5           2        5    6   14     9    3    3
P6           6        6    5   27    21   16   16
P7           1       10    1   15     5    4    4
– Consider the following priority classes (preemptive)
Process   Priority   AT   BT   CT   TAT   WT   RT
P1           3        0    8   15    15    7    0
P2           4        1    2   17    16   14   14
P3           4        3    4   21    18   14   14
P4           5        4    1   22    18   17   17
P5           2        5    6   12     7    1    0
P6           6        6    5   27    21   16   16
P7           1       10    1   11     1    0    0
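A tick-by-tick sketch of preemptive priority scheduling reproduces this table; the conventions that a smaller number means a higher priority and that ties are broken by arrival time are assumptions read off the figures:

```python
# Preemptive priority scheduling in 1 ms ticks.
def preemptive_priority(jobs):
    """jobs: {pid: (priority, arrival, burst)}, smaller number = higher
    priority; returns {pid: (ct, tat, wt, rt)}."""
    rem = {p: b for p, (pr, a, b) in jobs.items()}
    first_run, ct, t = {}, {}, 0
    while rem:
        ready = [p for p in rem if jobs[p][1] <= t]
        if not ready:                          # CPU idle until next arrival
            t += 1
            continue
        # highest priority first; ties broken by arrival time
        pid = min(ready, key=lambda p: (jobs[p][0], jobs[p][1]))
        first_run.setdefault(pid, t)
        rem[pid] -= 1
        t += 1
        if rem[pid] == 0:
            ct[pid] = t
            del rem[pid]
    return {p: (ct[p], ct[p] - a, ct[p] - a - b, first_run[p] - a)
            for p, (pr, a, b) in jobs.items()}

jobs = {"P1": (3, 0, 8), "P2": (4, 1, 2), "P3": (4, 3, 4), "P4": (5, 4, 1),
        "P5": (2, 5, 6), "P6": (6, 6, 5), "P7": (1, 10, 1)}
for pid, row in sorted(preemptive_priority(jobs).items()):
    print(pid, *row)
```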
• Drawbacks
– A high-priority process may run indefinitely and prevent
all other processes from running.
– This causes starvation of the other processes.
Multilevel Queues Scheduling (MLQS)
– Processes can be classified into different groups, e.g.
• Batch processes………………FCFSS
Multilevel Feed Back Queues Scheduling (MLFBQS)
Defn: A set of processes is deadlocked if each process in the set is
waiting for an event that only another process in the set can
cause; hence none of the processes can run, release resources, or be
awakened.
– Usually the event is the release of a resource
– E.g., the system has two tape drives; P1 and P2 each hold one
tape drive and each needs the other one to proceed.
• Sequence of events required to use a resource:
Deadlock Modeling (resource allocation graphs)
– The four conditions can be modeled using a directed graph
with two kinds of nodes: processes (circles) and resources
(squares)
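The tape-drive example can be modeled as such a graph and checked for a cycle; the node names and the adjacency-list representation are illustrative:

```python
# Resource-allocation graph for the tape-drive example:
# edges R -> P mean "R is assigned to P"; P -> R mean "P requests R".
graph = {
    "T1": ["P1"],   # tape drive T1 assigned to P1
    "P1": ["T2"],   # P1 requests tape drive T2
    "T2": ["P2"],   # T2 assigned to P2
    "P2": ["T1"],   # P2 requests T1 -> closes a cycle
}

def has_cycle(g):
    """Depth-first search for a cycle in a directed graph."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {n: WHITE for n in g}
    def visit(n):
        color[n] = GRAY
        for m in g.get(n, []):
            if color.get(m, WHITE) == GRAY:
                return True          # back edge: cycle found
            if color.get(m, WHITE) == WHITE and visit(m):
                return True
        color[n] = BLACK
        return False
    return any(color[n] == WHITE and visit(n) for n in g)

print(has_cycle(graph))  # True: the set {P1, P2} is deadlocked
```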
Dealing with the deadlock problem
• In general there are four strategies for dealing with the deadlock problem
– Take away a resource from its current owner and give it to another
process
– Whether this is possible depends on the nature of the resource
3. Deadlock Avoidance
• Avoid deadlock by careful resource scheduling.
• Requires that the system has some additional prior information
available.
• The deadlock avoidance algorithm dynamically examines the
resource allocation state to ensure that there can never be a
circular wait.
• The resource-allocation state is defined by the number of
available and allocated resources and the maximum demands
of the processes.
4. Deadlock Prevention
• Prevent deadlock through resource scheduling so as to negate at
least one of the four necessary conditions
– Attacking the mutual exclusion condition
– Attacking the hold and wait condition
– Attacking the no preemption condition
– Attacking the circular wait condition
• To avoid starvation, processes may be separated according to their
CPU-burst characteristics.
• If a process uses too much CPU time, it is moved
to a lower-priority queue. This leaves I/O-bound and
interactive processes in the higher-priority queues.
• A process that waits too long in a lower-priority queue
may be moved to a higher-priority queue; this prevents
starvation.
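These feedback rules can be sketched as a single level-adjustment function; the number of levels and the aging threshold are illustrative assumptions:

```python
# Multilevel feedback queue level adjustment: level 0 is the highest
# priority; demote CPU-bound processes, promote (age) waiting ones.
def adjust_level(level, used_full_quantum, wait_time,
                 num_levels=3, aging_threshold=100):
    if used_full_quantum:                 # CPU-bound: demote
        return min(level + 1, num_levels - 1)
    if wait_time > aging_threshold:       # waiting too long: promote
        return max(level - 1, 0)
    return level                          # interactive/I/O-bound: stay put

print(adjust_level(0, True, 0))     # 1: demoted after a full quantum
print(adjust_level(2, False, 150))  # 1: promoted after waiting too long
```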
Many
Thanks!!