Unit IV : Deadlocks

4.1 DEADLOCKS
When a process requests a resource and the resource is not available at that time, the
process enters a waiting state. A waiting process may never change its state again, because
the resources it has requested are held by other processes. This situation is called deadlock.
The situation where processes wait for each other to release the resources held by the other
process is called deadlock.

4.2 SYSTEM MODEL


A system consists of a finite number of resources distributed among a number of
processes.
A process must request a resource before using it, and it must release the resource after
using it. It can request any number of resources to carry out a designated task. The number of
resources requested may not exceed the total number of resources available.

A process may utilize a resource only in the following sequence:


1. Request: If the request cannot be granted immediately, then the requesting process must
wait until it can acquire the resource.
2. Use: The process can operate on the resource.
3. Release: The process releases the resource after using it.

Deadlock may involve different types of resources.


For eg: Consider a system with one printer (R1) and one tape drive (R2). Suppose
process Pi currently holds the printer R1 and process Pj holds the tape drive R2. If
process Pi requests the tape drive and process Pj requests the printer, then a deadlock
occurs.

Prof. Sreelatha P K, CSE, SVIT Page 2

4.3 DEADLOCK CHARACTERIZATION

A deadlock situation can occur if the following four conditions hold simultaneously in a
system:

1. Mutual Exclusion: Only one process at a time can hold the resource. If any other process
requests the resource, the requesting process must wait until the resource has been
released.

2. Hold and Wait: A process must be holding at least one resource and waiting to acquire
additional resources that are currently being held by other processes.

3. No Preemption: Resources cannot be preempted, i.e., a resource can be released only
voluntarily by the process holding it, after that process has completed its task.

4. Circular Wait: A set {P0, P1, ..., Pn} of waiting processes must exist such that P0 is
waiting for a resource held by P1, P1 is waiting for a resource held by P2, ..., Pn-1 is
waiting for a resource held by Pn, and Pn is waiting for a resource held by P0.

All four conditions must hold for a deadlock to occur.



Resource Allocation Graph :


Deadlocks can be described using a directed graph called a system resource-allocation graph.
The graph consists of a set of vertices (V) and a set of edges (E).
The set of vertices (V) is partitioned into two different types of nodes:
1. P = {P1, P2, ..., Pn}, the set consisting of all active processes, each represented by a
circle.
2. R = {R1, R2, ..., Rm}, the set consisting of all resource types in the system, each
represented by a rectangle, with a dot representing each instance of that resource type.

There are two types of edges:

1. A directed edge from process Pi to resource type Rj, denoted Pi -> Rj, indicates that
Pi has requested an instance of resource type Rj and is waiting. This edge is called a
Request edge.
2. A directed edge Rj -> Pi signifies that an instance of resource type Rj is held by
process Pi. This is called an Assignment edge (Allocation edge).

The sets P, R, and E:

P = {P1, P2, P3}
R = {R1, R2, R3, R4}
E = {P1 -> R1, P2 -> R3, R1 -> P2, R2 -> P2, R2 -> P1, R3 -> P3}

Resource instances:
o One instance of resource type R1
o Two instances of resource type R2
o One instance of resource type R3
o Three instances of resource type R4

If the graph contains no cycle, then no process in the system is deadlocked. If the graph
contains a cycle, a deadlock may exist.
If each resource type has exactly one instance, then a cycle implies that a deadlock has
occurred. If each resource type has several instances, then a cycle does not necessarily imply
that a deadlock has occurred.
A request edge can be converted into an assignment edge by reversing the direction of the
arc when the request is granted.

4.4 METHODS FOR HANDLING DEADLOCKS

The deadlock problem can be solved in one of three ways:

o Use a protocol to prevent or avoid deadlocks, ensuring that the system will never enter a
deadlock state.
o Allow the system to enter a deadlock state, detect it, and recover.
o Ignore the problem altogether and pretend that deadlocks never occur in the system.

4.5 DEADLOCK PREVENTION

For a deadlock to occur, each of the four necessary conditions must hold. If at least one of the
conditions does not hold, then we can prevent the occurrence of deadlock.

1. Mutual Exclusion: This condition holds for non-sharable resources. Eg: A printer can be
used by only one process at a time.
Mutual exclusion does not hold for sharable resources. Eg: Read-only files can be shared
by several processes, so a process never waits to access a sharable resource. But some
resources are intrinsically non-sharable, so we cannot prevent deadlock by denying the
mutual-exclusion condition.

2. Hold and Wait: This condition can be eliminated by ensuring that a process never holds
resources while waiting for others. Two protocols can be used:
o Request all its resources before execution begins.
Eg: Consider a process that copies data from a tape drive to a disk file, sorts the file,
and then prints the results to a printer. If all the resources are allocated at the beginning,
then the tape drive, disk file and printer are assigned to the process for its entire run.
The main problem with this is low resource utilization: the printer is needed only at
the end, but it is allocated from the beginning, so no other process can use it.

o Request a resource only when the process holds none.
The process is first allocated the tape drive and disk file. It performs the required
operation and releases both. Then the process once again requests the disk file and the
printer.
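The first protocol (request everything up front) can be sketched as an all-or-nothing acquisition. This is an illustrative Python sketch, not from the notes; the resource names are hypothetical, and a single allocation mutex makes the multi-resource request atomic:

```python
import threading

tape, disk, printer = threading.Lock(), threading.Lock(), threading.Lock()

# One global allocation lock makes the multi-resource request atomic,
# so a process never holds some resources while waiting for others.
alloc_mutex = threading.Lock()

def acquire_all(*resources):
    """All-or-nothing acquisition: either get every resource or none."""
    with alloc_mutex:
        got = []
        for r in resources:
            if not r.acquire(blocking=False):
                for g in got:          # back out, releasing what we took
                    g.release()
                return False
            got.append(r)
        return True

# Request all resources before execution begins:
if acquire_all(tape, disk, printer):
    # ... copy from tape to disk, sort the file, print the results ...
    for r in (tape, disk, printer):
        r.release()
```

Because a failed request releases everything it took, the hold-and-wait condition can never arise; the cost, as noted above, is that the printer is tied up long before it is used.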


3. No Preemption: To ensure that this condition never holds, resources must be preemptible.
The following protocols can be used:
o If a process is holding some resources (R1, R2, R3) and requests another resource that
cannot be immediately allocated to it, then all the resources currently held by the
requesting process are preempted and added to the list of resources for which other
processes may be waiting. The process will be restarted only when it regains its old
resources together with the new resource it is requesting.
o When a process (P1) requests resources, we check whether they are available. If
they are available, we allocate them; otherwise we check whether they are allocated to
some other waiting process (P2). If so, we preempt the resources from the waiting
process (P2) and allocate them to the requesting process (P1). If the resources are
neither available nor held by a waiting process, P1 has to wait.

4. Circular Wait: The fourth and final condition for deadlock is the circular-wait
condition. One way to ensure that this condition never holds is to impose a total ordering on all
resource types and require that each process requests resources in an increasing order.
- Order all resource types by assigning each type a unique number.
- A process should request resources in an increasing order of enumeration.

Let R = {R1, R2, ..., Rn} be the set of resource types. We assign each
resource type a unique integer value. This allows us to compare two
resources and determine whether one precedes the other in the ordering.

Eg: We can define a one-to-one function F: R -> N, where N is the set of natural numbers,
as follows:

F(tape drive) = 1
F(disk drive) = 5
F(printer) = 12

A process holding the disk drive cannot then request the tape drive, but it can request the printer.

Deadlock can be prevented by using the following protocol:

o Each process requests resources in increasing order of enumeration. A process
can initially request any number of instances of a resource type, say Ri;
after that it can request instances of resource type Rj only if F(Rj) >
F(Ri).
o Alternatively, a process that wants an instance of a lower-numbered resource
must first release all the higher-numbered resources it holds.

If these two protocols are used, then the circular wait cannot hold.

Eg: If a process holds an instance of the disk drive, whose F value is 5:
- It can request only the printer, because F(printer) > F(disk drive).
- If the process wants the tape drive, it has to release all the resources it holds.
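The ordering protocol is easy to enforce mechanically. The sketch below (illustrative Python, using the enumeration F from the example above) sorts whatever resources a caller asks for into increasing F order before locking them, so no two processes can ever lock the same pair in opposite orders:

```python
import threading

# The enumeration F from the example above.
F = {"tape drive": 1, "disk drive": 5, "printer": 12}
locks = {name: threading.Lock() for name in F}

def acquire_in_order(*names):
    """Always lock resources in increasing F order, regardless of the
    order the caller names them in; a circular wait can then never form."""
    ordered = sorted(names, key=lambda n: F[n])
    for n in ordered:
        locks[n].acquire()
    return ordered            # the order in which the locks were taken

def release_all(names):
    for n in names:
        locks[n].release()

held = acquire_in_order("printer", "tape drive")
print(held)            # ['tape drive', 'printer'] - F order, not call order
release_all(held)
```

If every process uses acquire_in_order, any wait chain runs strictly upward in F, so it can never close into a cycle.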

4.6 DEADLOCK AVOIDANCE

Deadlock-prevention algorithms may lead to low device utilization and reduced system throughput.
Avoiding deadlocks requires additional information about the sequence in which the
resources are to be requested. With knowledge of the complete sequence of requests and
releases, we can decide for each request whether or not the process should wait.
For each request, the system checks whether the resources are currently available. If they
are available, the OS checks whether assigning them would lead the system to a deadlocked
state. Only if not is the actual allocation done.

Safe State:
A safe state is a state in which there exists at least one order in which all the processes can
run to completion without resulting in a deadlock.
A system is in a safe state if there exists a safe sequence.
A sequence of processes <P1, P2, ..., Pn> is a safe sequence for the current allocation state
if, for each Pi, the resource requests that Pi can still make can be satisfied by the currently
available resources together with the resources held by all Pj, with j < i.
If the resources that Pi requests are not currently available, then Pi can wait until all Pj
have finished; it can then obtain all of its needed resources, complete its designated task, and
release them.
When a process requests an available resource, the system must decide whether the
resource can be allocated immediately or whether the process must wait. The request is
granted only if the allocation leaves the system in a safe state.


Resource Allocation Graph Algorithm:

This algorithm is used only if each resource type has a single instance. In addition to the
request edge and the assignment edge, a new edge called a claim edge is used. For eg: a claim
edge Pi --> Rj indicates that process Pi may request Rj in the future. The claim edge is
represented by a dotted line.

When a process Pi requests the resource Rj, the claim edge Pi --> Rj is converted to a
request edge. When resource Rj is released by process Pi, the assignment edge Rj --> Pi is
replaced by the claim edge Pi --> Rj.

When a process Pi requests resource Rj, the request is granted only if converting the
request edge Pi --> Rj to an assignment edge Rj --> Pi does not result in a cycle. A cycle-
detection algorithm is used to detect the cycle. If there are no cycles, then the allocation of
the resource leaves the system in a safe state.

Banker's Algorithm:
This algorithm is applicable to systems with multiple instances of each resource type.

The Banker's Algorithm gets its name because it is a method that bankers could use to assure
that when they lend out resources they will still be able to satisfy all their clients. (A banker
won't loan out a little money to start building a house unless assured that they will later
be able to loan out the rest of the money to finish the house.)

When a process starts up, it must state in advance the maximum allocation of resources it may
request, up to the amount available on the system.

When a request is made, the scheduler determines whether granting the request would leave the
system in a safe state. If not, then the process must wait until the request can be granted safely.

Several data structures are used to implement the Banker's Algorithm.

Let 'n' be the number of processes in the system
and 'm' be the number of resource types.

We need the following data structures:

o Available: A vector of length m indicating the number of available resources of each
type. If Available[j] = k, then k instances of resource type Rj are available.
o Max: An n x m matrix defining the maximum demand of each process. If
Max[i][j] = k, then Pi may request at most k instances of resource type Rj.
o Allocation: An n x m matrix defining the number of resources of each type currently
allocated to each process. If Allocation[i][j] = k, then Pi is currently allocated k
instances of resource type Rj.
o Need: An n x m matrix indicating the remaining resource need of each process. If
Need[i][j] = k, then Pi may need k more instances of resource type Rj to complete its
task.

So Need[i][j] = Max[i][j] - Allocation[i][j]

Safety Algorithm:
This algorithm is used to find out whether or not the system is in a safe state.
Step 1. Let Work and Finish be two vectors of length m and n respectively.
Initialize Work = Available and Finish[i] = false for i = 1, 2, 3, ..., n.
Step 2. Find an i such that both Finish[i] = false and Need(i) <= Work.
If no such i exists, go to step 4.
Step 3. Work = Work + Allocation(i); Finish[i] = true; go to step 2.
Step 4. If Finish[i] = true for all i, then the system is in a safe state. This algorithm may
require an order of m x n^2 operations to decide whether a state is safe.
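The four steps above transcribe directly into code. This is an illustrative Python sketch (the matrices are the classic 5-process, 3-resource example; the function name is our own), returning the safe sequence found, or None if the state is unsafe:

```python
def is_safe(available, allocation, need):
    """Safety algorithm: return a safe sequence of process indices,
    or None if no such sequence exists."""
    n, m = len(allocation), len(available)
    work = list(available)              # Step 1: Work = Available
    finish = [False] * n
    sequence = []
    progressed = True
    while progressed:
        progressed = False
        for i in range(n):              # Step 2: find a runnable Pi
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):      # Step 3: Pi finishes, returns resources
                    work[j] += allocation[i][j]
                finish[i] = True
                sequence.append(i)
                progressed = True
    # Step 4: safe iff every process could finish.
    return sequence if all(finish) else None

# Classic example: 5 processes, resource types A, B, C.
allocation = [[0,1,0], [2,0,0], [3,0,2], [2,1,1], [0,0,2]]
maximum    = [[7,5,3], [3,2,2], [9,0,2], [2,2,2], [4,3,3]]
need = [[maximum[i][j] - allocation[i][j] for j in range(3)] for i in range(5)]
print(is_safe([3,3,2], allocation, need))   # [1, 3, 4, 0, 2]
```

The returned list [1, 3, 4, 0, 2] is the safe sequence <P1, P3, P4, P0, P2>.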

Resource Request Algorithm:

Let Request(i) be the request vector of process Pi. If Request[i][j] = k, then process Pi wants k
instances of resource type Rj. When a request for resources is made by process Pi, the
following actions are taken:
1) If Request(i) <= Need(i), go to step 2;
otherwise raise an error condition, since the process has exceeded its maximum claim.
2) If Request(i) <= Available, go to step 3;
otherwise Pi must wait, since the resources are not available.
3) If the system wants to allocate the requested resources to process Pi, then modify the state as
follows:
Available = Available - Request(i)
Allocation(i) = Allocation(i) + Request(i)
Need(i) = Need(i) - Request(i)
If the resulting resource-allocation state is safe (for this, the safety algorithm has to be
run), the transaction is completed and Pi is allocated its resources. If the new state is
unsafe, then Pi must wait for Request(i) and the old resource-allocation state is restored.
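The three steps, including the pretend-allocate-then-check-safety trick and the rollback, can be sketched in Python as follows (illustrative only; the helper names are ours, and the demo data is the classic 5-process example):

```python
def safe(available, allocation, need):
    """Compact safety check: can every process still finish?"""
    work, finish = list(available), [False] * len(allocation)
    changed = True
    while changed:
        changed = False
        for i, f in enumerate(finish):
            if not f and all(n <= w for n, w in zip(need[i], work)):
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = changed = True
    return all(finish)

def request_resources(i, request, available, allocation, need):
    """Banker's resource-request algorithm for process Pi.
    Mutates the state and returns True iff the request is granted."""
    m = len(available)
    if any(request[j] > need[i][j] for j in range(m)):     # step 1
        raise ValueError("process exceeded its maximum claim")
    if any(request[j] > available[j] for j in range(m)):   # step 2
        return False                                       # Pi must wait
    for j in range(m):                                     # step 3: pretend
        available[j] -= request[j]
        allocation[i][j] += request[j]
        need[i][j] -= request[j]
    if safe(available, allocation, need):
        return True
    for j in range(m):          # unsafe: restore the old allocation state
        available[j] += request[j]
        allocation[i][j] -= request[j]
        need[i][j] += request[j]
    return False

available  = [3,3,2]
allocation = [[0,1,0], [2,0,0], [3,0,2], [2,1,1], [0,0,2]]
maximum    = [[7,5,3], [3,2,2], [9,0,2], [2,2,2], [4,3,3]]
need = [[m_ - a for m_, a in zip(maximum[i], allocation[i])] for i in range(5)]

print(request_resources(1, [1,0,2], available, allocation, need))  # True: state stays safe
print(available)                                                   # [2, 3, 0]
print(request_resources(4, [3,3,0], available, allocation, need))  # False: not available
print(request_resources(0, [0,2,0], available, allocation, need))  # False: would be unsafe
```

Note that the failed requests leave the state exactly as it was, which is what "the old resource-allocation state is restored" requires.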

The following is a worked example of the Banker's Algorithm. Consider a system with five
processes P0 through P4 and three resource types A, B and C, with the following snapshot:

Process   Allocation   Max       Available
          A  B  C      A  B  C   A  B  C
P0        0  1  0      7  5  3   3  3  2
P1        2  0  0      3  2  2
P2        3  0  2      9  0  2
P3        2  1  1      2  2  2
P4        0  0  2      4  3  3

Need = Max - Allocation:

Process   Need
          A  B  C
P0        7  4  3
P1        1  2  2
P2        6  0  0
P3        0  1  1
P4        4  3  1

Is the system in a safe state? Run the safety algorithm with Work = Available = (3,3,2):
- Need(P0) = (7,4,3) > Work, so P0 must wait.
- Need(P1) = (1,2,2) <= (3,3,2): P1 can finish. Work = (3,3,2) + (2,0,0) = (5,3,2).
- Need(P2) = (6,0,0) > Work, so P2 must wait.
- Need(P3) = (0,1,1) <= (5,3,2): P3 can finish. Work = (5,3,2) + (2,1,1) = (7,4,3).
- Need(P4) = (4,3,1) <= (7,4,3): P4 can finish. Work = (7,4,3) + (0,0,2) = (7,4,5).
- Need(P0) = (7,4,3) <= (7,4,5): P0 can now finish. Work = (7,4,5) + (0,1,0) = (7,5,5).
- Need(P2) = (6,0,0) <= (7,5,5): P2 can finish. Work = (7,5,5) + (3,0,2) = (10,5,7).
Finish[i] = true for all i, so the system is in a safe state with safe sequence
<P1, P3, P4, P0, P2>.

Now suppose P1 requests (1,0,2):
- Request(1) = (1,0,2) <= Need(1) = (1,2,2): step 1 passes.
- Request(1) = (1,0,2) <= Available = (3,3,2): step 2 passes.
- Pretend the allocation is made:
Available = (3,3,2) - (1,0,2) = (2,3,0)
Allocation(1) = (2,0,0) + (1,0,2) = (3,0,2)
Need(1) = (1,2,2) - (1,0,2) = (0,2,0)
Running the safety algorithm on this new state again finds the safe sequence
<P1, P3, P4, P0, P2>, so the request can be granted immediately.

With this new state:
- A request by P4 for (3,3,0) cannot be granted, since the resources are not
available (Available = (2,3,0)).
- A request by P0 for (0,2,0) passes steps 1 and 2, but pretending to grant it
leaves Available = (2,1,0), and the safety algorithm then finds no process whose
Need can be satisfied. The resulting state is unsafe, so P0 must wait and the old
resource-allocation state is restored.
4.7 DEADLOCK DETECTION

Another way of handling deadlock is to allow the deadlock to occur, then detect it and
recover from it. In this environment the system may provide:
o An algorithm that examines the state of the system to determine whether a deadlock has
occurred.
o An algorithm to recover from the deadlock.

If all the resources have only a single instance, then we can define a deadlock-detection
algorithm that uses a variant of the resource-allocation graph, called a wait-for graph. This
graph is obtained from the resource-allocation graph by removing the nodes of type resource
and collapsing the appropriate edges.
An edge from Pi to Pj in a wait-for graph implies that Pi is waiting for Pj to release a
resource that Pi needs.

An edge from Pi to Pj exists in a wait-for graph if and only if the corresponding resource-
allocation graph contains the two edges Pi -> Rq and Rq -> Pj for some resource Rq.
A deadlock exists in the system if and only if the wait-for graph contains a cycle.

Figure 7.8 (a) Resource-allocation graph. (b) Corresponding wait-for graph.

Several Instances of a Resource Type:
The wait-for graph scheme is applicable only when each resource type has a single instance.
The following algorithm applies if there are several instances of a resource type. These data
structures are used:

o Available: A vector of length m indicating the number of available resources of
each type.
o Allocation: An n x m matrix which defines the number of resources of each type
currently allocated to each process.
o Request: An n x m matrix indicating the current request of each process. If
Request[i][j] = k, then Pi is requesting k more instances of resource type Rj.

Step 1. Let Work and Finish be vectors of length m and n respectively.
Initialize Work = Available.
For i = 0, 1, 2, ..., n-1:
if Allocation(i) != 0, then Finish[i] = false,
else Finish[i] = true.
Step 2. Find an index i such that both Finish[i] = false and Request(i) <= Work.
If no such i exists, go to step 4.
Step 3. Work = Work + Allocation(i); Finish[i] = true; go to step 2.
Step 4. If Finish[i] = false for some i, 0 <= i < n, then the system is in a deadlocked
state (and process Pi is deadlocked). This algorithm needs an order of m x n^2
operations to detect whether the system is in a deadlocked state or not.
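The detection steps above can be sketched in Python (illustrative; the data is the classic 5-process detection example, and the function name is ours). Note how it differs from the safety algorithm only in its initialization of Finish and in comparing Request rather than Need:

```python
def detect_deadlock(available, allocation, request):
    """Detection algorithm: return the list of deadlocked process
    indices (an empty list means no deadlock)."""
    n = len(allocation)
    work = list(available)
    # Step 1: a process holding nothing cannot be part of a deadlock.
    finish = [all(a == 0 for a in allocation[i]) for i in range(n)]
    changed = True
    while changed:                      # steps 2 and 3
        changed = False
        for i in range(n):
            if not finish[i] and all(r <= w for r, w in zip(request[i], work)):
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = changed = True
    # Step 4: whoever could not finish is deadlocked.
    return [i for i in range(n) if not finish[i]]

# The worked example below: 5 processes, resource types A, B, C.
allocation = [[0,1,0], [2,0,0], [3,0,3], [2,1,1], [0,0,2]]
request    = [[0,0,0], [2,0,2], [0,0,0], [1,0,0], [0,0,2]]
print(detect_deadlock([0,0,0], allocation, request))   # [] -> no deadlock
```

If P2 were instead requesting one more instance of C, only P0 could finish and the call would report P1 through P4 as deadlocked.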

Worked example: consider a system with five processes P0 through P4 and three resource
types A, B and C (seven instances of A, two of B and six of C), with this snapshot:

Process   Allocation   Request   Available
          A  B  C      A  B  C   A  B  C
P0        0  1  0      0  0  0   0  0  0
P1        2  0  0      2  0  2
P2        3  0  3      0  0  0
P3        2  1  1      1  0  0
P4        0  0  2      0  0  2

Run the detection algorithm with Work = Available = (0,0,0):
- Request(P0) = (0,0,0) <= Work: P0 can finish. Work = (0,0,0) + (0,1,0) = (0,1,0).
- Request(P1) = (2,0,2) > Work, so skip P1 for now.
- Request(P2) = (0,0,0) <= Work: P2 can finish. Work = (0,1,0) + (3,0,3) = (3,1,3).
- Request(P3) = (1,0,0) <= Work: P3 can finish. Work = (3,1,3) + (2,1,1) = (5,2,4).
- Request(P1) = (2,0,2) <= Work: P1 can finish. Work = (5,2,4) + (2,0,0) = (7,2,4).
- Request(P4) = (0,0,2) <= Work: P4 can finish. Work = (7,2,4) + (0,0,2) = (7,2,6).
Finish[i] = true for all i, so the system is not in a deadlocked state: the sequence
<P0, P2, P3, P1, P4> results in Finish[i] = true for all i.

7.8 Recovery From Deadlock


There are two basic approaches to recovery from deadlock:
1. Process Termination
2. Resource Preemption
7.8.1 Process Termination

. Two basic approaches to terminate processes:


o Abort all deadlocked processes. This method breaks the deadlock cycle by terminating
all the processes involved in the deadlock. But this comes at great expense: the deadlocked
processes may have computed for a long time, and the results of these partial computations
must be discarded and probably will have to be recomputed later.
o Abort one process at a time until the deadlock cycle is eliminated. This method incurs
considerable overhead, since after each process is aborted a deadlock-detection
algorithm must be invoked to determine whether any processes are still deadlocked.

In the latter case there are many factors that can go into deciding which process to
terminate next:
1. Process priorities.
2. How long the process has been running, and how close it is to finishing.
3. How many and what type of resources the process is holding. (Are they easy to
preempt and restore?)
4. How many more resources the process needs to complete.
5. How many processes will need to be terminated.
6. Whether the process is interactive or batch.
7. (Whether or not the process has made non-restorable changes to any resource.)
7.8.2 Resource Preemption

When preempting resources to relieve deadlock, there are three important issues to be
addressed:
1. Selecting a victim - Deciding which resources to preempt from which processes
involves many of the same decision criteria outlined above.
2. Rollback - Ideally one would like to roll back a preempted process to a safe state
prior to the point at which that resource was originally allocated to the process.
Unfortunately it can be difficult or impossible to determine what such a safe state is,
and so the only safe rollback is to roll back all the way to the beginning. (i.e.
abort the process and make it start over.)
3. Starvation - There is a chance that the same process is picked as a victim every
time a deadlock occurs, and this continues indefinitely. This is starvation. A count
can be kept of the number of rollbacks of a process, and the process can be made a
victim only a finite number of times.



Module III –Memory Management

Main Memory Management Strategies
• Every program to be executed must first be brought into memory. Each instruction must be fetched
from memory before it is executed.
• In multi-tasking OS memory management is complex, because as processes are swapped in and out of
the CPU, their code and data must be swapped in and out of memory.

Basic Hardware

Main memory, cache and CPU registers are the only storage spaces that the CPU can
access directly.
The program and data must be brought into memory from the disk for the process to run. Each
process has a separate memory space and must access only this range of legal addresses. Protection of
memory is required to ensure correct operation. This protection is provided by hardware, using
two registers - a base register and a limit register. The base register holds the
smallest legal physical memory address; the limit register specifies the size of the range.

For example, if the base register holds 1000 and the limit register holds 500, then the program can
legally access all addresses from 1000 through 1499 (inclusive).
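The hardware check amounts to two comparisons per memory access. A minimal sketch (illustrative Python, using the base and limit values from the example above):

```python
def legal(address, base, limit):
    """Hardware protection sketch: a CPU-generated address is legal only
    if base <= address < base + limit; otherwise the MMU traps to the OS."""
    return base <= address < base + limit

BASE, LIMIT = 1000, 500            # values from the example above
print(legal(1000, BASE, LIMIT))    # True  - first legal address
print(legal(1499, BASE, LIMIT))    # True  - last legal address
print(legal(1500, BASE, LIMIT))    # False - one past the range, trap
```

In real hardware the failing case does not return False; it raises a trap that the operating system treats as a fatal addressing error.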

Protection of memory space is done. Any attempt by an executing program to access operating-
system memory or other program memory results in a trap to the operating system, which treats the
attempt as a fatal error. This scheme prevents a user program from (accidentally or deliberately)
modifying the code or data structures of either the operating system or other users.
The base and limit registers can be loaded only by the operating system, which uses a special
privileged instruction. Since privileged instructions can be executed only in kernel mode, only the
operating system can load the base and limit registers.

Prof. Sreelatha.P.K , Dept. of CSE, SVIT Page 1



Address Binding

• User programs typically refer to memory addresses with symbolic names such as "i", "count", and
"average Temperature". These symbolic names must be mapped or bound to physical memory
addresses, which typically occurs in several stages:
o Compile Time - If it is known at compile time where a program will reside in physical memory,
then absolute code can be generated by the compiler, containing actual physical addresses. However
if the load address changes at some later time, then the program will have to be recompiled.
o Load Time - If the location at which a program will be loaded is not known at compile time, then
the compiler must generate relocatable code, which references addresses relative to the start of the
program. If that starting address changes, then the program must be reloaded but not recompiled.
o Execution Time - If a program can be moved around in memory during the course of its execution,
then binding must be delayed until execution time.
• Figure 8.3 shows the various stages of the binding processes and the units involved in each stage:




Logical Versus Physical Address Space

• The address generated by the CPU is a logical address, whereas the memory address where programs
are actually stored is a physical address.
• The set of all logical addresses used by a program composes the logical address space, and the set of all
corresponding physical addresses composes the physical address space.
• The run time mapping of logical to physical addresses is handled by the memory-management unit,
MMU.

o The MMU can take on many forms. One of the simplest is a modification of the base-register
scheme described earlier.
o The base register is now termed a relocation register, whose value is added to every memory
request at the hardware level.
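The relocation-register scheme can be sketched in a few lines (illustrative Python; the register values are hypothetical, chosen only to show the arithmetic):

```python
RELOCATION = 14000   # hypothetical relocation (base) register value
LIMIT      = 500     # hypothetical limit register value

def translate(logical):
    """MMU sketch: check the logical address against the limit register,
    then add the relocation register to form the physical address."""
    if not (0 <= logical < LIMIT):
        raise MemoryError("trap: addressing error")
    return logical + RELOCATION   # physical address

print(translate(346))   # 14346
```

The user program deals only with logical addresses 0 to LIMIT-1; it never sees the physical addresses, which is what lets the OS relocate the process simply by changing the relocation register.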

8.1.4 Dynamic Loading

• Rather than loading an entire program into memory at once, dynamic loading loads up each routine as it
is called. The advantage is that unused routines need not be loaded, thus reducing total memory usage
and generating faster program startup times. The disadvantage is the added complexity and overhead of
checking to see if a routine is loaded every time it is called and then loading it up if it is not already
loaded.

8.1.5 Dynamic Linking and Shared Libraries

• With static linking library modules get fully included in executable modules, wasting both disk space
and main memory usage, because every program that included a certain routine from the library would
have to have their own copy of that routine linked into their executable code.
• With dynamic linking, however, only a stub is linked into the executable module, containing references
to the actual library module linked in at run time.




o This method saves disk space, because the library routines do not need to be fully included in the
executable modules, only the stubs.
o An added benefit of dynamically linked libraries (DLLs, also known as shared libraries or shared
objects on UNIX systems) involves easy upgrades and updates.

8.2 Swapping

• A process must be loaded into memory in order to execute.


• If there is not enough memory available to keep all running processes in memory at the same time, then
some processes that are not currently using the CPU may have their memory swapped out to a fast local
disk called the backing store.
• Swapping is the process of moving a process from memory to backing store and moving another
process from backing store to memory. Swapping is a very slow process compared to other
operations.
• It is important to swap processes out of memory only when they are idle, or more to the point, only
when there are no pending I/O operations. (Otherwise the pending I/O operation could write into the
wrong process's memory space.) The solution is to either swap only totally idle processes, or do I/O
operations only into and out of OS buffers, which are then transferred to or from process's main memory
as a second step.
• Most modern OSes no longer use swapping, because it is too slow and there are faster alternatives
available. (e.g. Paging. ) However some UNIX systems will still invoke swapping if the system gets
extremely full, and then discontinue swapping when the load reduces again. Windows 3.1 would use a
modified version of swapping that was somewhat controlled by the user, swapping process's out if
necessary and then only swapping them back in when the user focused on that particular window.

8.3 Contiguous Memory Allocation


• One approach to memory management is to load each process into a contiguous space. The operating
system is allocated space first, usually at either low or high memory locations, and then the remaining
available memory is allocated to processes as needed. (The OS is usually loaded low, because that is
where the interrupt vectors are located.) Here each process is contained in a single contiguous section of
memory.

8.3.1 Memory Mapping and Protection

• The system shown in figure below allows protection against user programs accessing areas that they
should not, allows programs to be relocated to different memory starting addresses as needed, and
allows the memory space devoted to the OS to grow or shrink dynamically as needs change.

8.3.2 Memory Allocation

• One method of allocating contiguous memory is to divide all available memory into equal sized
partitions, and to assign each process to its own partition (known as MFT). This restricts both the
number of simultaneous processes and the maximum size of each process, and is no longer used.
• An alternate approach is to keep a list of unused (free) memory blocks (holes), and to find a hole of a
suitable size whenever a process needs to be loaded into memory (known as MVT). There are many
different strategies for finding the "best" allocation of memory to processes, including the three most
commonly discussed:
1. First fit - Search the list of holes until one is found that is big enough to satisfy the request,
and assign a portion of that hole to that process. Whatever fraction of the hole not needed by
the request is left on the free list as a smaller hole. Subsequent requests may start looking
either from the beginning of the list or from the point at which this search ended.
2. Best fit - Allocate the smallest hole that is big enough to satisfy the request. This saves large
holes for other process requests that may need them later, but the resulting unused portions of
holes may be too small to be of any use, and will therefore be wasted. Keeping the free list
sorted can speed up the process of finding the right hole.
3. Worst fit - Allocate the largest hole available, thereby increasing the likelihood that the
remaining portion will be usable for satisfying future requests.
• Simulations show that either first or best fit are better than worst fit in terms of both time and storage
utilization. First and best fits are about equal in terms of storage utilization, but first fit is faster.
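As a minimal sketch of the three hole-selection strategies (the hole sizes and the request size below are made-up illustrative values, not from the notes):

```python
# Sketch of first fit, best fit, and worst fit over a free list of holes.
# Each function returns the index of the chosen hole, or None on failure.

def first_fit(holes, request):
    """First hole that is big enough."""
    for i, size in enumerate(holes):
        if size >= request:
            return i
    return None

def best_fit(holes, request):
    """Smallest hole that is still big enough."""
    candidates = [(size, i) for i, size in enumerate(holes) if size >= request]
    return min(candidates)[1] if candidates else None

def worst_fit(holes, request):
    """Largest hole available, if it fits at all."""
    size, i = max((size, i) for i, size in enumerate(holes))
    return i if size >= request else None

holes = [100, 500, 200, 300, 600]   # free block sizes in KB (illustrative)
print(first_fit(holes, 212))        # 1  (the 500 KB hole is the first that fits)
print(best_fit(holes, 212))         # 3  (the 300 KB hole is the tightest fit)
print(worst_fit(holes, 212))        # 4  (the 600 KB hole is the largest)
```

In a real allocator the chosen hole would then be split, with the leftover fragment returned to the free list as a smaller hole.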

8.3.3. Fragmentation

The allocation of memory to processes leads to fragmentation of memory. A hole is the free space
available within memory. The two types of fragmentation are –

o External fragmentation – holes present between processes

o Internal fragmentation – free space present within a process itself, i.e. part of the block
allocated to a process goes unused.

• Internal fragmentation occurs with all memory allocation strategies. This is caused by the fact that
memory is allocated in blocks of a fixed size, whereas the actual memory needed will rarely be that
exact size.
• If the programs in memory are relocatable, ( using execution-time address binding ), then the external
fragmentation problem can be reduced via compaction, i.e. moving all processes down to one end of
physical memory so as to place all free memory together to get a large free block. This only involves
updating the relocation register for each process, as all internal work is done using logical addresses.
• Another solution to external fragmentation is to allow processes to use non-contiguous blocks of
physical memory- Paging and Segmentation.

8.4 Paging

• Paging is a memory management scheme that allows a process to be stored in physical memory
noncontiguously. It eliminates external fragmentation by allocating memory in equal sized blocks
known as pages.
• Paging eliminates most of the problems of the other methods discussed previously, and is the
predominant memory management technique used today.

8.4.1 Basic Method

• The basic idea behind paging is to divide physical memory into a number of equal sized blocks
called frames, and to divide a program’s logical memory space into blocks of the same size
called pages.
• Any page ( from any process ) can be placed into any
available frame.
• The page table is used to look up which frame a
particular page is stored in at the moment. In the
following example, for instance, page 2 of the program's
logical memory is currently stored in frame 3 of physical
memory.

• A logical address consists of two parts: a page number in which the address resides, and an offset
from the beginning of that page. (The number of bits in the page number limits how many pages a
single process can address. The number of bits in the offset determines the maximum size of each
page, and should correspond to the system frame size.)
• The page table maps the page number to a frame number, to yield a physical address which also has two
parts: The frame number and the offset within that frame. The number of bits in the frame number
determines how many frames the system can address, and the number of bits in the offset determines the
size of each frame.
• Page numbers, frame numbers, and frame sizes are determined by the architecture, but are typically
powers of two, allowing addresses to be split at a certain number of bits. For example, if the size of the
logical address space is 2^m and the page size is 2^n, then the high-order m-n bits of a logical address
designate the page number and the remaining n bits represent the offset.
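The page/offset split and the page-table lookup can be sketched as follows, assuming an illustrative 1 KB page size and a made-up page table (values not from the notes):

```python
# Splitting a logical address into (page number, offset) and translating
# it to a physical address through a page table. Page size = 2^10 bytes.

PAGE_BITS = 10                  # n: page size = 2^10 = 1024 bytes
PAGE_SIZE = 1 << PAGE_BITS

def split(logical_address):
    page = logical_address >> PAGE_BITS          # high-order m-n bits
    offset = logical_address & (PAGE_SIZE - 1)   # low-order n bits
    return page, offset

def translate(logical_address, page_table):
    page, offset = split(logical_address)
    frame = page_table[page]                     # page-table lookup
    return frame * PAGE_SIZE + offset            # physical address

page_table = {0: 5, 1: 2, 2: 7}                  # page -> frame (made up)
print(split(2060))                   # (2, 12): 2060 = 2*1024 + 12
print(translate(2060, page_table))   # 7*1024 + 12 = 7180
```

Because the page size is a power of two, the split needs only a shift and a mask rather than a division, which is why architectures choose power-of-two page sizes.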

• Note that paging is like having a table of relocation registers, one for each page of the logical memory.
• There is no external fragmentation with paging. All blocks of physical memory are used, and there are
no gaps in between and no problems with finding the right sized hole for a particular chunk of memory.
• There is, however, internal fragmentation. Memory is allocated in chunks the size of a page, and on the
average, the last page will only be half full, wasting on the average half a page of memory per process.
• Larger page sizes waste more memory, but are more efficient in terms of overhead. Modern trends have
been to increase page sizes, and some systems even have multiple size pages to try and make the best of
both worlds.

• Consider the following example, in which a process has 16 bytes of logical memory, mapped in 4 byte
pages into 32 bytes of physical memory. (Presumably some other processes would be consuming the
remaining 16 bytes of physical memory. )

• When a process requests memory ( e.g. when its code is loaded in from disk ), free frames are allocated
from a free-frame list, and inserted into that process's page table.
• Processes are blocked from accessing anyone else's memory because all of their memory requests are
mapped through their page table. There is no way for them to generate an address that maps into any
other process's memory space.
• The operating system must keep track of each individual process's page table, updating it whenever the
process's pages get moved in and out of memory, and applying the correct page table when processing
system calls for a particular process. This all increases the overhead involved when swapping processes
in and out of the CPU. ( The currently active page table must be updated to reflect the process that is
currently running. )

8.4.2 Hardware Support

• Page lookups must be done for every memory reference, and whenever a process gets swapped
in or out of the CPU, its page table must be swapped in and out too, along with the instruction
registers, etc. It is therefore appropriate to provide hardware support for this operation, in order
to make it as fast as possible and to make process switches as fast as possible also.
• One option is to use a set of dedicated registers for the page table. The registers are loaded
when the program is loaded into memory. For example, the DEC PDP-11 uses 16-bit
addressing and 8 KB pages, resulting in only 8 pages per process. (It takes 13 bits to address 8
KB of offset, leaving only 3 bits to define a page number.)

(Figure: the page table held in a set of dedicated CPU registers.)

• An alternate option is to store the page table in main memory, and to use a single register (called
the page-table base register, PTBR) to record the address of the page table in memory.
• Process switching is fast, because only the single register needs to be changed.
• However memory access is slow, because every memory access now
requires two memory accesses – one to fetch the frame number from the page table in memory,
and then another to access the desired memory location.

(Figure: the CPU uses the PTBR to locate the page table (PT), which resides in main memory alongside the process's pages.)

• The solution to this problem is to use a very special high-speed memory device called
the translation look-aside buffer, TLB.
▪ The benefit of the TLB is that it can search an entire table for a key value in parallel, and
if it is found anywhere in the table, then the corresponding lookup value is returned.
▪ It is used as a cache device.
▪ Addresses are first checked against the TLB, and if the page is not there ( a TLB miss ),
then the frame is looked up from main memory and the TLB is updated.
▪ If the TLB is full, then replacement strategies range from least-recently used, LRU to
random.
▪ Some TLBs allow some entries to be wired down, which means that they cannot be
removed from the TLB. Typically these would be kernel frames.

▪ Some TLBs store address-space identifiers, ASIDs, to keep track of which process "owns" a particular
entry in the TLB. This allows entries from multiple processes to be stored simultaneously in the TLB
without granting one process access to some other process's memory location. Without this feature the
TLB has to be flushed clean with every process switch.
▪ The percentage of time that the desired information is found in the TLB is termed the hit ratio.
▪ For example, suppose that it takes 100 nanoseconds to access main memory, and only 20 nanoseconds
to search the TLB. So a TLB hit takes 120 nanoseconds total ( 20 to find the frame number and
then another 100 to go get the data ), and a TLB miss takes 220 ( 20 to search the TLB, 100 to go
get the frame number, and then another 100 to go get the data. ) So with an 80% TLB hit ratio, the
average memory access time would be:
0.80 * 120 + 0.20 * 220 = 140 nanoseconds
for a 40% slowdown to get the frame number. A 98% hit rate would yield 122 nanoseconds
average access time ( you should verify this ), for a 22% slowdown.
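The effective access time computation above can be checked with a short sketch, using the same figures as the text (20 ns TLB search, 100 ns memory access):

```python
# Effective access time (EAT) with a TLB.
# Hit:  TLB search + one memory access        = 20 + 100       = 120 ns
# Miss: TLB search + page table + data access = 20 + 100 + 100 = 220 ns

def effective_access_time(hit_ratio, tlb=20, mem=100):
    hit = tlb + mem
    miss = tlb + mem + mem
    return hit_ratio * hit + (1 - hit_ratio) * miss

print(round(effective_access_time(0.80), 1))   # 140.0 ns -> 40% slowdown
print(round(effective_access_time(0.98), 1))   # 122.0 ns -> 22% slowdown
```

This confirms the 122 ns figure the text asks the reader to verify: raising the hit ratio from 80% to 98% cuts the slowdown from 40% to 22%.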

8.4.3 Protection

• The page table can also help to protect processes from accessing memory that does not belong to them.
• A valid/invalid bit can be added to
each page table entry. The valid bit ‘v’
shows that the page is in the process's
logical address space and its mapping is
valid; the invalid bit ‘i’ shows that the
page is not in the process's logical
address space, or is not in physical memory.
• Note that the valid/invalid bits
described above cannot block all
illegal memory accesses, due to
internal fragmentation: many
processes do not use all of the address
space their page table entries map.
• Addresses of the pages 0,1,2,3,4 and
5 are mapped using the page table as
they are valid.
• Addresses of the pages 6 and 7 are
invalid and cannot be mapped. Any
attempt to access those pages will
cause a trap to the OS.
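A minimal sketch of the valid/invalid check on a lookup (the valid pages 0–5 and invalid pages 6–7 follow the example above; the frame numbers are made up):

```python
# Each page table entry holds (frame number, valid bit).
# Accessing an invalid page traps to the OS, modeled here as an exception.

VALID, INVALID = 'v', 'i'

page_table = {0: (2, VALID), 1: (3, VALID), 2: (4, VALID),
              3: (7, VALID), 4: (8, VALID), 5: (9, VALID),
              6: (0, INVALID), 7: (0, INVALID)}

def lookup(page):
    frame, bit = page_table[page]
    if bit == INVALID:
        raise MemoryError(f"trap to OS: page {page} is invalid")
    return frame

print(lookup(2))       # 4 (page 2 is valid, mapped to frame 4)
try:
    lookup(6)
except MemoryError as e:
    print(e)           # trap to OS: page 6 is invalid
```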

8.4.4 Shared Pages

• Paging systems can make it very easy to share blocks of memory, by simply mapping the same
frame numbers in multiple processes' page tables. This may be done with either code or data.
• If code is reentrant (read-only), it does not write to or change itself in any
way. More importantly, it means the code can be shared by multiple processes, so long as each
has its own copy of the data and registers, including the instruction register.
• In the example given below, three different users are running the editor simultaneously, but the
code is only loaded into memory ( in the page frames ) one time.
• Some systems also implement shared memory in this fashion.

8.5 Structure of the Page Table

8.5.1 Hierarchical Paging

• This structure supports two or more page tables at different levels (tiers).
• Most modern computer systems support logical address spaces of 2^32 to 2^64.

• The VAX architecture divides 32-bit addresses into 4 equal sized sections, and each page is 512 bytes,
yielding an address form of: a 2-bit section number, a 21-bit page number, and a 9-bit offset.

• With a 64-bit logical address space and 4K pages, there are 52 bits worth of page numbers, which is still
too many even for two-level paging. One could increase the paging level, but with 10-bit page tables it
would take 7 levels of indirection, which would be prohibitively slow memory access. So some other
approach must be used.

8.5.2 Hashed Page Tables

• One common data structure for accessing data that is sparsely distributed over a broad range of
possible values is with hash tables. Figure 8.16 below illustrates a hashed page table using
chain-and-bucket hashing:
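In code form, a hashed page table with chained buckets can be sketched as follows (the bucket count and the entries are illustrative, not taken from the figure):

```python
# Hashed page table: the page number hashes to a bucket, and each bucket
# holds a chain of (page, frame) pairs to resolve collisions.

NUM_BUCKETS = 8
buckets = [[] for _ in range(NUM_BUCKETS)]   # each bucket: list of (page, frame)

def insert(page, frame):
    buckets[hash(page) % NUM_BUCKETS].append((page, frame))

def lookup(page):
    # Walk the chain in the page's bucket until the key matches.
    for p, frame in buckets[hash(page) % NUM_BUCKETS]:
        if p == page:
            return frame
    return None   # not resident: a page fault would be raised here

insert(5, 12)
insert(13, 3)    # lands in the same bucket as page 5 here, exercising the chain
print(lookup(5))    # 12
print(lookup(13))   # 3
print(lookup(99))   # None
```

The chain keeps the table small even when the logical address space is sparse, at the cost of a short linear search within each bucket.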

8.5.3 Inverted Page Tables

• Another approach is to use an inverted page table. Instead of a table listing all of the pages for a
particular process, an inverted page table lists all of the pages currently loaded in memory, for all
processes. ( i.e. there is one entry per frame instead of one entry per page. )
• Access to an inverted page table can be slow, as it may be necessary to search the entire table in order to
find the desired page.
• The id of the process running in each frame, together with its corresponding page number, is stored in the table.
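A sketch of the per-frame layout and the linear search it implies (process ids and page numbers are made-up values):

```python
# Inverted page table: one entry per physical frame, storing the
# (process id, page number) currently occupying that frame.

inverted = [("P1", 0), ("P2", 0), ("P1", 1), ("P3", 2)]   # index = frame number

def lookup(pid, page):
    # Linear search over all frames -- this is why lookups can be slow
    # and why real systems pair the table with hashing.
    for frame, entry in enumerate(inverted):
        if entry == (pid, page):
            return frame
    return None   # not resident: page fault

print(lookup("P1", 1))   # 2 (page 1 of P1 is in frame 2)
print(lookup("P2", 5))   # None
```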

8.6 Segmentation

• Most users (programmers) do not think of their programs as existing in one continuous linear
address space.
• Rather they tend to think of their memory in
multiple segments, each dedicated to a particular use, such
as code, data, the stack, the heap, etc.
• Memory segmentation supports this view by providing
addresses with a segment number ( mapped to a segment
base address ) and an offset from the beginning of that
segment.
• The logical address is a two-tuple:

<segment-number, offset>

• For example, a C compiler might generate 5 segments for the user code, library code, global (static)
variables, the stack, and the heap, as shown in Figure 8.18:

8.6.2 Hardware

• A segment table maps segment-offset addresses to physical addresses, and simultaneously checks for
invalid addresses.
• Each entry in the segment table has a segment base and a segment limit. The segment base contains the
starting physical address where the segment resides in memory, whereas the segment limit specifies
the length of the segment.
• A logical address consists of two parts: a segment number, s, and an offset into that segment, d. The
segment number is used as an index to the segment table. The offset d of the logical address must be
between 0 and the segment limit. When an offset is legal, it is added to the segment base to produce the
address in physical memory of the desired byte. The segment table is thus essentially an array of base
and limit register pairs.
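The base/limit check described above can be sketched as follows (the segment table contents are illustrative values):

```python
# Segment-table translation: index by segment number s, check the offset d
# against the limit, then add the base to form the physical address.

segment_table = [
    (1400, 1000),   # segment 0: (base, limit)
    (6300, 400),    # segment 1
    (4300, 1100),   # segment 2
]

def translate(s, d):
    base, limit = segment_table[s]
    if not (0 <= d < limit):
        raise MemoryError(f"trap: offset {d} out of range for segment {s}")
    return base + d

print(translate(2, 53))    # 4300 + 53 = 4353
try:
    translate(1, 852)      # offset 852 exceeds segment 1's limit of 400
except MemoryError as e:
    print(e)
```

Each entry behaves exactly like a base/limit register pair, as the text notes; the trap on an out-of-range offset is what protects neighboring segments.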
