Ch2 - Process and Process Management

The document discusses key concepts related to processes and process management. It defines a process as a dynamic entity that exists for a limited time to execute a program. Processes can run sequentially or concurrently. The operating system is responsible for process creation, termination, suspension, resumption, synchronization, and communication. Processes transition between different states like ready, running, waiting, and terminated. The OS maintains process information in a process table. Threads represent multiple flows of execution within a process and allow for parallelism. Interprocess communication allows processes to share information and resources.

Uploaded by

Habtie Tesfahun
Copyright © All Rights Reserved

Chapter Two

Process and Process Management

2.1 Process concepts
• In a single-user system such as Microsoft Windows, a user may be able to run several programs at one time.
• All these activities are similar, so we call all of them processes.
• A process is a dynamic entity, i.e. a program in execution.
• A process exists for a limited span of time.
• A process has an object program, data, resources, and an execution status.
• A process is not a closed system; processes communicate through a shared environment.
Programs vs. Processes
• Program: a static entity; exists in a single space and continues to exist; contains instructions.
• Process: a dynamic entity; exists for a limited span of time; executes instructions.
Types of process
1. Sequential process:
the execution is in sequential fashion, i.e. at any time one
instruction is executed.
2. Concurrent process
a) True concurrency: two or more processes executing at the
same time; it implies a need for more than one processor.
b) Apparent concurrency: switching from one process to the
other.
In both cases if a snapshot of the system is taken, several
processes will be found in partial execution.
Process Creation
There are four principal events that cause processes to be created:
1. System initialization: when the OS boots.
– Some are foreground processes that interact with users and do work for them.
– Others are background processes with specific functions, e.g. a process that checks for incoming email.
2. Execution of a process-creation system call by a running process (fork in Unix, CreateProcess in Windows).
3. A user request to create a new process.
4. Initiation of a batch job (on large mainframes).
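The Unix fork call named above can be sketched in a few lines. This is a minimal, POSIX-only illustration (the child's exit code 7 is an arbitrary choice for the demo), not how any particular OS implements process creation internally.

```python
import os

# Minimal POSIX sketch of process creation with fork().
# fork() returns 0 in the child and the child's PID in the parent.
pid = os.fork()
if pid == 0:
    # Child: a new process with a copy of the parent's address space.
    os._exit(7)                       # arbitrary exit status for the demo
else:
    # Parent: wait for the child to terminate (the "normal exit" event).
    _, status = os.waitpid(pid, 0)
    print(os.waitstatus_to_exitcode(status))  # prints 7
```

Note that the same code runs in both processes after fork(); the return value is the only way each copy learns which one it is.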
Process Termination
Events that cause process termination:
1. Normal exit (voluntary): the process has finished execution.
2. Error exit (voluntary): a fatal error has occurred and the process cannot continue; e.g. cc abc.c when the file abc.c does not exist.
3. Fatal error (involuntary): mostly due to a program bug; e.g. executing an illegal instruction, referencing non-existent memory, division by zero, etc.
4. Killed by another process (involuntary): the killer must have authorization (kill in Unix, TerminateProcess in Windows).

The operating system is responsible for the following activities in relation to process management:
– Process creation and termination
– Process suspension and resumption
– Provision of mechanisms for
• Process synchronization
• Process communication
2.1.1. Process state and process state transition
As a process executes, it changes state. A process may be in one of the following states.

Figure: a process in its different states

– New: the process is being created.
– Running: a process is said to be running if it holds the CPU, that is, it is actually using the CPU at that instant.
– Blocked (or waiting): a process is said to be blocked if it is waiting for some event to happen, such as an I/O completion, before it can proceed. A blocked process is unable to run until some external event happens.
– Ready: a process is said to be ready if it could use the CPU were one available. A ready process is runnable but temporarily stopped to let another process run.
– Terminated: the process has finished execution.
The three main states are related as described below.

The operating system maintains lists of ready and waiting processes.

Figure: the ready list (ordered: P1, P11, P20, …) and the waiting lists (unordered: P10, P9, P4, …)
Process state transitions
There are four state transitions; two of them are caused by the process scheduler. The process scheduler is the part of the OS that decides which process should get the CPU next.
1. Ready → Running
– The assignment of the CPU to the first process on the ready list is called dispatching; it is performed by a system entity called the dispatcher.
Dispatch(process_name): Ready → Running
2. Running → Ready
– A process may be stopped by a clock interrupt if it does not relinquish (release) the CPU before its allocated time expires.
Timeout(process_name): Running → Ready
3. Running → Waiting
– During I/O, a process voluntarily relinquishes the CPU.
Wait(process_name): Running → Waiting
4. Waiting → Ready
– When I/O finishes.
Wakeup(process_name): Waiting → Ready
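The four transitions above can be modeled as a small lookup table. This is an illustrative sketch (event and state names follow the slides), not a real scheduler data structure.

```python
# The four legal transitions, keyed by (current state, event).
TRANSITIONS = {
    ("ready",   "dispatch"): "running",
    ("running", "timeout"):  "ready",
    ("running", "wait"):     "waiting",
    ("waiting", "wakeup"):   "ready",
}

def transition(state, event):
    """Apply one event; reject transitions the model does not allow."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"illegal transition: {event} in state {state}")

# Dispatch, block on I/O, wake up, dispatch again, then time out.
s = "ready"
for e in ("dispatch", "wait", "wakeup", "dispatch", "timeout"):
    s = transition(s, e)
print(s)  # ready
```

Note that there is deliberately no waiting → running entry: a woken process must pass through the ready list and be dispatched.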
Implementation of Processes
The OS maintains a table (an array of structures) called the process table; each entry is a Process Control Block (PCB).
The PCB enables an interrupted process to resume normally.
It is an area of memory containing all the relevant information associated with a process.
These may include
– Process state
– Program counter
– Stack pointer
– Value of CPU registers when the process is
suspended.
– CPU scheduling information, such as priority
– The area of memory used by the process (memory-management information)
– Accounting information: amount of CPU time used, time limits, process number, etc.
– I/O status information: a list of I/O devices (such as tape drives) allocated to the process, a list of open files, etc.
– Pointer to the next process's PCB
Figure: PCB layout (pointer, process state, process number, program counter, registers, memory limits, list of open files, …)
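The PCB fields listed above can be mirrored in a small record type. This is a hypothetical sketch with illustrative field names; a real kernel keeps a much larger C structure per process.

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """A toy process control block mirroring the fields listed above."""
    pid: int
    state: str = "new"
    program_counter: int = 0
    stack_pointer: int = 0
    registers: dict = field(default_factory=dict)   # saved on suspension
    priority: int = 0                               # scheduling information
    open_files: list = field(default_factory=list)  # I/O status information

# A toy process table: one PCB per process, indexed by process number.
process_table = {p.pid: p for p in (PCB(1), PCB(2, state="ready"))}
print(process_table[2].state)  # ready
```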
2.2 Threads
• A thread is a flow of execution through the process code, with its
own program counter, system registers and stack.
• Threads provide a way to improve application performance through parallelism.
• Threads are a software approach to improving operating-system performance by reducing overhead; in most other respects a thread behaves like a classical process.
• Each thread represents a separate flow of control.

Cont'd
• They also provide a suitable foundation for parallel execution
of applications on shared memory multiprocessors.
• Threads allow multiple flows of execution to take place in the same process environment; this is called multithreading.

Figure: (a) three processes, each with one thread; (b) one process with three threads.
Each thread has its own program counter, registers, stack, and state, but all threads of a process share the address space, global variables, and other resources such as open files.
• In the usual comparison, per-process items (shared by all threads) include the address space, global variables, and open files, while per-thread (private) items include the program counter, registers, stack, and state.
• Threads are sometimes called lightweight processes.
• Threads take turns running.
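The shared-versus-private split above can be demonstrated with Python's threading module. This is a sketch, not a claim about any particular threaded application: module-level data is shared by all threads, while local variables live on each thread's own stack.

```python
import threading

# Shared by all threads in the process (like the global variables above).
shared = []
lock = threading.Lock()

def worker(n):
    local = n * n          # private: lives on this thread's own stack
    with lock:             # serialize access to avoid a race on `shared`
        shared.append(local)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(shared))  # [0, 1, 4, 9]
```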
Thread Usage: Why do we need threads?
• E.g., a word processor has different parts, for:
– Interacting with the user
– Formatting the page as soon as changes are made
– Timed saving (for auto-recovery)
– Spelling and grammar checking
Cont'd
1. Simplifying the programming model since many activities
are going on at once.
2. They are easier to create and destroy than processes since
they don't have any resources attached to them
3. Performance improves by overlapping activities if there is
too much I/O
4. Real parallelism is possible if there are multiple CPUs

2.3 Interprocess communication
• Processes frequently need to communicate with other processes.

• Processes may share a memory area or a file for communication


• There are three issues related to IPC:
1. How can one process pass information to another?
2. How can we make sure that two or more processes do not interfere with each other when engaged in critical activities?
3. How do we sequence events when dependencies exist? E.g., one process produces data and another process consumes it.
– These issues also apply to threads; the first is easy for threads, since they share a common address space.

Race conditions
– Arise as a result of sharing resources.
– E.g. the printer spooler.
– When a process wants to print a file, it enters the file name in a special spooler directory.
– Another process, the printer daemon, periodically checks whether there are any files to be printed; if there are, it prints them and removes them from the directory.
– Recall that a daemon is a process running in the background, started automatically when the system is booted.
– Assume that the spooler directory has a large number of slots, numbered 0, 1, 2, …, n, each capable of holding a file name.
– There are two shared variables (shared by all processes):
– out: points to the next file to be printed.
– in: points to the next free slot in the directory.
• Then the following may happen:
– Process A reads in and stores the value 7 in a local variable.
– A clock interrupt occurs and the CPU is switched to B.
– B also reads in and stores the value 7 in a local variable.
– B stores the name of its file in slot 7 and updates in to 8.
– A runs again; it writes its file name in slot 7, erasing the file name that B wrote.
– A updates in to 8.
• The printer daemon will not notice anything wrong, but B will never receive any output.
• Situations like this, where two or more processes read and write shared data and the final result depends on exactly who runs when, are called race conditions.
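The interleaving described above can be replayed deterministically. This sketch simulates the two "processes" in straight-line code (the file names are made up; `in_ptr` stands in for the slide's `in`, which is a Python keyword) just to show why B's entry is lost.

```python
# Shared state: the spooler directory and the `in` pointer from the slides.
slots = [None] * 10
in_ptr = 7                 # next free slot

# Process A reads `in` into a local variable ... then a clock interrupt
# switches the CPU to B.
a_local = in_ptr
# Process B also reads 7, stores its file name, and advances `in` to 8.
b_local = in_ptr
slots[b_local] = "B.txt"
in_ptr = b_local + 1
# A resumes with its stale local copy, overwriting B's entry.
slots[a_local] = "A.txt"
in_ptr = a_local + 1

print(slots[7], in_ptr)    # A.txt 8  (B's file name is lost)
```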
2.4. Process Scheduling
• In a multiprogrammed system, multiple processes frequently compete for the CPU at the same time.
• Multiprogramming aims to increase throughput.
• Time sharing aims to let all users share the CPU equitably.
Scheduling Queues
• When a process enters the system, or when a running process is interrupted, it is put into the ready queue.
• There are also device queues (waiting queues); each device has its own device queue.
• All are generally stored as queues (linked lists), though not necessarily FIFO queues.
Scheduling levels
• Short-term (CPU scheduler)—selects from jobs in memory those
jobs that are ready to execute and allocates the CPU to them.
• Medium-term—used especially with time-sharing systems as an
intermediate scheduling level.
– A swapping scheme is implemented to remove partially run
programs from memory and reinstate them later to continue
where they left off.
• Long-term (job scheduler)—determines which jobs are brought
into memory for processing.
Context switching
• Switching the CPU to another process requires saving the
environment of the old process and loading the saved
environment of the new process. This task is called context
switching.

Scheduling criteria
• A good scheduling algorithm must ensure the following
– Fairness- make sure that each process gets its fair share of the
CPU.
– Efficiency (CPU utilization)- keep the CPU as busy as possible.
– Response time- minimize the response time for interactive
users
– Turnaround time- minimize the time batch users must wait for
output.
– Throughput: maximize the number of processes executed per unit of time.
• Some of these goals are contradictory (e.g. maintaining fairness increases context switching, which decreases efficiency).
Scheduling algorithms
• There are two types of scheduling algorithms:
A. Preemptive: the OS can stop the running process even though it has requested no I/O; most OSs are based on this. (Fairness and response time are good, but it is less efficient.)
B. Non-preemptive: once the CPU has been allocated to a process, the process keeps it until it releases it, either by terminating or by requesting I/O.
– Non-preemptive algorithms are simple and easy.
– They are efficient, but unfair, and response time is high.
First-Come, First-Served Scheduling (FCFSS)
• Basic concept
– The process that requests the CPU first is allocated the CPU and keeps it until it releases it, either on completion or on requesting an I/O operation.
– Its selection function is waiting time, and it uses a non-preemptive scheduling/decision mode.
• Illustration
– Consider the following processes, all arriving at time 0:

  Process                        P1   P2   P3
  CPU burst/service time (ms)    24    3    3
• Case i. If they arrive in the order P1, P2, P3:

  Process                 P1   P2   P3
  Service time (Ts)       24    3    3
  Turnaround time (Tr)    24   27   30
  Response time            0   24   27

Average response time = (0+24+27)/3 = 17
Average turnaround time = (24+27+30)/3 = 27
Throughput = 3/30 = 1/10
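As a sanity check, the Case i figures can be recomputed with a few lines of Python; a sketch assuming all processes arrive at time 0, as in the illustration.

```python
def fcfs(bursts):
    """Response and turnaround times under FCFS, all arrivals at t = 0."""
    t, resp, tat = 0, [], []
    for b in bursts:
        resp.append(t)     # response time = start time - arrival (0)
        t += b
        tat.append(t)      # turnaround   = completion - arrival (0)
    return resp, tat

resp, tat = fcfs([24, 3, 3])           # order P1, P2, P3
print(sum(resp) / 3, sum(tat) / 3)     # 17.0 27.0
```

Running the same function on the reversed order [3, 3, 24] reproduces the Case ii averages of 3 and 13.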
Cont'd
• Case ii. If they arrive in the order P3, P2, P1:

  Process                 P3   P2   P1
  Service time (Ts)        3    3   24
  Turnaround time (Tr)     3    6   30
  Response time            0    3    6

Average response time = (0+3+6)/3 = 3
Average turnaround time = (3+6+30)/3 = 13
Throughput = 3/30 = 1/10
– Consider the following processes arriving at times 0, 1, 2, 3 respectively:

  Process           P1   P2   P3   P4
  Burst time (Ts)    1  100    1  100
• Advantages
– It is the simplest of all non-preemptive scheduling algorithms: process
selection & maintenance of the queue is simple
– It is often combined with priority scheduling to provide efficiency
• Drawbacks

– Poor CPU and I/O utilization: CPU will be idle when a process is blocked
for some I/O operation
– Poor and unpredictable performance: it depends on the arrival of processes
– Unfair CPU allocation: If a big process is executing, all other processes
will be forced to wait for a long time until the process releases the CPU. It
performs much better for long processes than short ones
Shortest Job First Scheduling (SJFS)
• Basic concept
– The process with the shortest expected processing time (next CPU burst) is selected.
– Its selection function is execution time, and it uses a non-preemptive scheduling/decision mode.
• Illustration
– Consider the following processes, all arriving at time 0:

  Process          P1   P2   P3
  CPU burst (ms)   24    3    3
Case i. FCFSS
  Process           P1   P2   P3
  Turnaround time   24   27   30
  Response time      0   24   27
  (TAT = CT − AT, WT = TAT − BT)
Average response time = (0+24+27)/3 = 17
Average turnaround time = (24+27+30)/3 = 27
Throughput = 3/30

Case ii. SJFS
  Process           P3   P2   P1
  Turnaround time    3    6   30
  Response time      0    3    6
Average response time = (0+3+6)/3 = 3
Average turnaround time = (3+6+30)/3 = 13
Throughput = 3/30
Shortest Job First Scheduling (SJFS)
Eg1. Consider the following processes arriving at times 0, 2, 4, 6, 8 respectively:

  Process                P1   P2   P3   P4   P5
  Arrival time (Ta)       0    2    4    6    8
  Service time (Ts)       3    6    4    5    2
  Turnaround time (Tr)    3    7   11   14    3
  Response time           0    1    7    9    1

Average response time = (0+1+7+9+1)/5 = 3.6
Average turnaround time = (3+7+11+14+3)/5 = 7.6
Throughput = 5/20
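Eg1 can be reproduced with a small non-preemptive SJF simulator; a sketch (ties broken by burst length only) that recomputes the table above.

```python
def sjf(jobs):
    """Non-preemptive SJF; jobs are (name, arrival, burst) tuples."""
    t, resp, tat = 0, {}, {}
    pending = sorted(jobs, key=lambda j: j[1])     # by arrival time
    while pending:
        ready = [j for j in pending if j[1] <= t]
        if not ready:                  # CPU idle until the next arrival
            t = pending[0][1]
            continue
        job = min(ready, key=lambda j: j[2])       # shortest burst next
        name, arrival, burst = job
        resp[name] = t - arrival
        t += burst
        tat[name] = t - arrival
        pending.remove(job)
    return resp, tat

resp, tat = sjf([("P1", 0, 3), ("P2", 2, 6), ("P3", 4, 4),
                 ("P4", 6, 5), ("P5", 8, 2)])
print(sum(resp.values()) / 5, sum(tat.values()) / 5)  # 3.6 7.6
```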
Eg2. Consider the following processes arriving at times 2, 1, 4, 0, 2 respectively:

  Process   AT   BT   CT   TAT   WT   RT
  P1         2    1    7     5    4    4
  P2         1    5   16    15   10   10
  P3         4    1    8     4    3    3
  P4         0    6    6     6    0    0
  P5         2    3   11     9    6    6
• Advantages
– It produces optimal average turnaround time and average response time.
– There is minimal overhead.
• Drawbacks
– Starvation: some processes may never get the CPU as long as there is a steady supply of shorter processes.
– It is not desirable for a time-sharing or transaction-processing environment because of its lack of preemption.
Shortest Remaining Time Scheduling (SRTS)
• Basic concept
– The process that has the shortest expected remaining processing time is selected.
– If a new process arrives with a shorter next CPU burst than what is left of the currently executing process, the new process gets the CPU.
– Its selection function is remaining execution time, and it uses a preemptive decision mode.
• Illustration
– Consider the following processes arriving at times 0, 2, 4, 6, 8 respectively:
  Process                P1   P2   P3   P4   P5
  Arrival time (Ta)       0    2    4    6    8
  Service time (Ts)       3    6    4    5    2
  Turnaround time (Tr)    3   13    4   14    2
  Response time           0    1    0    9    0

Average response time = (0+1+0+9+0)/5 = 2
Average turnaround time = (3+13+4+14+2)/5 = 7.2
Throughput = 5/20
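The preemptive schedule above can likewise be recomputed by stepping one time unit at a time; a sketch with ties broken in favour of the earlier arrival.

```python
def srt(jobs):
    """Preemptive shortest-remaining-time; jobs are (name, arrival, burst)."""
    arr = {n: a for n, a, b in jobs}
    rem = {n: b for n, a, b in jobs}
    first, finish, t = {}, {}, 0
    while rem:
        ready = [n for n in rem if arr[n] <= t]
        if not ready:                  # CPU idle until the next arrival
            t += 1
            continue
        # Shortest remaining time; ties go to the earlier arrival.
        n = min(ready, key=lambda p: (rem[p], arr[p]))
        first.setdefault(n, t)         # first time on the CPU
        rem[n] -= 1
        t += 1
        if rem[n] == 0:
            del rem[n]
            finish[n] = t
    resp = {n: first[n] - arr[n] for n in arr}
    tat = {n: finish[n] - arr[n] for n in arr}
    return resp, tat

resp, tat = srt([("P1", 0, 3), ("P2", 2, 6), ("P3", 4, 4),
                 ("P4", 6, 5), ("P5", 8, 2)])
print(sum(resp.values()) / 5, sum(tat.values()) / 5)  # 2.0 7.2
```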
– The processes of Eg2 under SRTS:

  Process   AT   BT   CT   TAT   WT   RT
  P1         2    1    3     1    0    0
  P2         1    5   16    15   10   10
  P3         4    1    5     1    0    0
  P4         0    6   11    11    5    0
  P5         2    3    7     5    2    1
• Advantages
– It gives better turnaround-time performance than SJFS, because a short job is given immediate preference over a longer running process.
• Drawbacks
– There is a risk of starvation of longer processes.
– High overhead due to frequent process switches.
• Difficulty with SJFS
– Figuring out the required processing time of each process.
Round Robin Scheduling (RRS)
• Basic concept
– A small unit of time, called a quantum or time slice, is defined.
– When the quantum expires (clock interrupt), the currently running process is placed in the ready queue, and the next ready process is selected on a FCFS basis.
– The ready queue is treated as a circular queue.
– Its selection function is based on the quantum, and it uses a preemptive decision mode.
  Process   AT   BT   CT   TAT   WT   RT
  P1         0    8   22    22   14    0
  P2         5    2   11     6    4    4
  P3         1    7   23    22   15    2
  P4         6    3   14     8    5    5
  P5         8    5   25    17   12    9
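The table above does not state the quantum used; a quantum of 3 reproduces its completion times, so the following sketch assumes q = 3 and queues new arrivals ahead of a preempted process.

```python
from collections import deque

def rr(jobs, q):
    """Round robin; jobs are (name, arrival, burst), q is the quantum."""
    jobs = sorted(jobs, key=lambda j: j[1])
    rem = {n: b for n, a, b in jobs}
    queue, finish = deque(), {}
    arr_i, t = 0, 0
    while rem:
        while arr_i < len(jobs) and jobs[arr_i][1] <= t:
            queue.append(jobs[arr_i][0]); arr_i += 1
        if not queue:                  # idle until the next arrival
            t = jobs[arr_i][1]
            continue
        n = queue.popleft()
        run = min(q, rem[n])
        t += run
        rem[n] -= run
        while arr_i < len(jobs) and jobs[arr_i][1] <= t:
            queue.append(jobs[arr_i][0]); arr_i += 1   # arrived during slice
        if rem[n]:
            queue.append(n)            # preempted: back of the queue
        else:
            del rem[n]
            finish[n] = t
    return finish

finish = rr([("P1", 0, 8), ("P2", 5, 2), ("P3", 1, 7),
             ("P4", 6, 3), ("P5", 8, 5)], q=3)
print(finish["P1"], finish["P5"])  # 22 25
```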
• Features
– The oldest, simplest, fairest and most widely used preemptive
scheduling
• Drawbacks
– CPU-bound processes tend to receive unfair portion of CPU time,
which results in poor performance for I/O bound processes
– Maximum overhead due to frequent process switch
• Setting the quantum too short causes
– Poor CPU utilization
– Good interactivity
• Setting the quantum too long causes
– Improved CPU utilization
– Poor interactivity

Priority Scheduling (PS)
• Basic Concept
– Each process is assigned a priority and the runnable
process with the highest priority is allowed to run i.e. a
ready process with highest priority is given the CPU.

– Consider the following priority classes (non-preemptive; in these tables a smaller number means a higher priority):

  Process   Priority   AT   BT   CT   TAT   WT   RT
  P1            3       0    8    8     8    0    0
  P2            4       1    2   17    16   14   14
  P3            4       3    4   21    18   14   14
  P4            5       4    1   22    18   17   17
  P5            2       5    6   14     9    3    3
  P6            6       6    5   27    21   16   16
  P7            1      10    1   15     5    4    4
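The non-preemptive table above can be recomputed with a short scheduler; a sketch assuming, as the completion times imply, that a smaller number means a higher priority and that ties go to the earlier arrival.

```python
def priority_np(jobs):
    """Non-preemptive priority; jobs are (name, priority, arrival, burst)."""
    t, pending, finish = 0, list(jobs), {}
    while pending:
        ready = [j for j in pending if j[2] <= t]
        if not ready:                  # idle until the next arrival
            t = min(j[2] for j in pending)
            continue
        # Smallest priority number wins; ties go to the earlier arrival.
        j = min(ready, key=lambda j: (j[1], j[2]))
        t += j[3]
        finish[j[0]] = t
        pending.remove(j)
    return finish

finish = priority_np([("P1", 3, 0, 8), ("P2", 4, 1, 2), ("P3", 4, 3, 4),
                      ("P4", 5, 4, 1), ("P5", 2, 5, 6), ("P6", 6, 6, 5),
                      ("P7", 1, 10, 1)])
print(finish["P5"], finish["P7"])  # 14 15
```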
– Consider the same priority classes under preemptive scheduling:

  Process   Priority   AT   BT   CT   TAT   WT   RT
  P1            3       0    8   15    15    7    0
  P2            4       1    2   17    16   14   14
  P3            4       3    4   21    18   14   14
  P4            5       4    1   22    18   17   17
  P5            2       5    6   12     7    1    0
  P6            6       6    5   27    21   16   16
  P7            1      10    1   11     1    0    0
• Drawbacks
– A high-priority process may run indefinitely, preventing all other processes from running.
– This causes starvation of the other processes.
Multilevel Queues Scheduling (MLQS)
– Processes can be classified into different groups.
– The ready queue is partitioned into several separate queues.
– Each process is permanently assigned to one queue, based on some property of the process such as memory size, process priority, process type, etc.
• System processes ……………… PS
• Interactive processes ………… RRS
• Batch processes ……………… FCFSS
Multilevel Feedback Queues Scheduling (MLFBQS)
– It allows processes to move between queues.
– A multilevel-feedback-queue scheduler is defined by the following parameters:
• Number of queues
• Scheduling algorithm for each queue
• Method used to determine when to upgrade a process
• Method used to determine when to demote a process
• Method used to determine which queue a process will enter when it needs service
Deadlocks
• Deadlock can occur when processes are granted temporary exclusive access to resources.
• Resources can be of two types:
• Preemptable: can be taken away from a process without causing an ill effect.
• Non-preemptable: cannot be taken away from its current owner without causing the computation to fail.
Defn: A set of processes is deadlocked if each process in the set is waiting for an event that only another process in the set can cause, but none of the processes can run, release resources, or be awakened.
– Usually the event is the release of a resource.
– E.g., the system has two tape drives; P1 and P2 each hold one tape drive, and each needs the other one to proceed.
• Sequence of events required to use a resource:
1. Request the resource.
2. Use the resource.
3. Release the resource.
Conditions for deadlock
The following four conditions must all hold for a deadlock to occur:
1. Mutual exclusion condition: each resource is assigned to exactly one process.
2. Hold and wait condition: processes holding resources can request additional resources.
3. No preemption condition: previously granted resources cannot be forcibly taken away; only the holding process can voluntarily release a resource.
4. Circular wait condition: there must be a circular chain of two or more processes, each of which is waiting for a resource held by the next member of the chain.
e.g. traffic deadlock

Deadlock Modeling (resource allocation)
– The four conditions can be modeled using a directed graph with two kinds of nodes: processes (circles) and resources (squares).

a) R is assigned to and currently held by A.
b) B is requesting/waiting for S (B is blocked).

• If the graph contains no cycle => no deadlock.
• If the graph contains a cycle =>
– If there is one instance per resource type: deadlock.
– If there are several instances per resource type: possibility of deadlock.
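With one instance per resource type, the rule above reduces deadlock detection to finding a cycle in the directed graph. A sketch using depth-first search; the two-process tape-drive example is encoded with resource → process edges for assignments and process → resource edges for requests.

```python
def has_cycle(graph):
    """DFS cycle detection; graph maps a node to its successor nodes."""
    WHITE, GREY, BLACK = 0, 1, 2       # unvisited / on stack / finished
    color = {n: WHITE for n in graph}

    def dfs(n):
        color[n] = GREY
        for m in graph.get(n, []):
            c = color.get(m, WHITE)
            if c == GREY:              # back edge: a cycle exists
                return True
            if c == WHITE and dfs(m):
                return True
        color[n] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in graph)

# Tape-drive example: A holds R and wants S; B holds S and wants R.
g = {"R": ["A"], "A": ["S"], "S": ["B"], "B": ["R"]}
print(has_cycle(g))  # True
```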
Dealing with the deadlock problem
• In general there are four strategies for dealing with the deadlock problem.
1. The ostrich approach: just ignore the deadlock problem altogether.
– It is reasonable if
• Deadlocks occur very rarely and their effects are not catastrophic.
• The cost of prevention is high.
– Unix and Windows use this approach.
2. Deadlock detection and recovery
– Let deadlocks occur, detect them, and take an action.
Recovery from Deadlock
– Once a deadlock is detected, what can be done to recover?
• Recovery through preemption
– Take a resource away from its current owner and give it to another process.
– Whether this is possible depends on the nature of the resource.
• Recovery through rollback
– Possible if processes are arranged to checkpoint periodically.
– Checkpointing means writing the state of a process (memory, resources assigned, etc.) to a file so that it can be restarted later.
– Then roll a process back to a point in time before it acquired the resource that is now taken away.
• Recovery through killing processes
– The crudest but simplest way to break a deadlock.
– Kill one of the processes in the deadlock cycle; if that does not resolve it, continue killing processes until the deadlock is broken.
– The process to be killed has to be selected carefully.
3. Deadlock Avoidance
• Avoid deadlock by careful resource scheduling.
• Requires that the system has some additional prior information
available.
• The deadlock avoidance algorithm dynamically examines the
resource allocation state to ensure that there can never be a
circular wait.
• Resource allocation state is defined by the number of
available and allocated resources, and the maximum demands
of the process.
4. Deadlock Prevention
• Prevent deadlock by resource scheduling so as to negate at
least one of the four conditions
– Attacking the mutual exclusion condition
– Attacking the hold and wait condition
– Attacking the no preemption condition

– Attacking the circular wait condition

• To avoid starvation in multilevel feedback queues, processes may be separated by their CPU-burst characteristics.
• If a process uses too much CPU time, it is moved to a lower-priority queue. This leaves I/O-bound and interactive processes in the higher-priority queues.
• A process that waits too long in a lower-priority queue may be moved to a higher queue, and this prevents starvation.
Many
Thanks!!