Unit IV : Deadlocks
4.1 DEADLOCKS
When a process requests a resource and the resource is not available at that time, the process enters a waiting state. A waiting process may never change state again, because the resources it has requested are held by other processes. This situation is called a deadlock: the situation where a set of processes wait on each other to release the resources held by the other processes.
Prof. Sreelatha P K, CSE, SVIT Page 2
Deadlock can occur only if the following four necessary conditions hold simultaneously:
1. Mutual Exclusion: At least one resource must be held in a non-sharable mode; only one process at a time can use the resource.
2. Hold and Wait: A process must be holding at least one resource and waiting to acquire additional resources that are currently being held by other processes.
3. No Preemption: Resources cannot be preempted, i.e., only the process holding a resource can release it, after the process has completed its task.
4. Circular Wait: A set {P0, P1, ..., Pn} of waiting processes must exist such that P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, ..., Pn-1 is waiting for a resource held by Pn, and Pn is waiting for a resource held by P0.
If the resource-allocation graph contains no cycle, then no process in the system is deadlocked. If the graph does contain a cycle, a deadlock may exist.
4.5 DEADLOCK PREVENTION
For a deadlock to occur, each of the four necessary conditions must hold. If at least one of the conditions does not hold, then we can prevent the occurrence of deadlock.
1. Mutual Exclusion: This condition holds for non-sharable resources. Eg: A printer can be used by only one process at a time. Mutual exclusion is not possible for sharable resources (such as read-only files), so they cannot be involved in a deadlock; a process never waits to access a sharable resource. Therefore we cannot prevent deadlock by denying the mutual exclusion condition, since it must hold for non-sharable resources.
2. Hold and Wait: This condition can be eliminated by forcing a process to release all the resources held by it whenever it requests a resource that is not available. Alternatively, the process must:
• Request all its resources before execution begins.
Eg: Consider a process that copies data from a tape drive to the disk, sorts the file, and then prints the results to a printer. If all the resources are allocated at the beginning, then the tape drive, disk file and printer are assigned to the process. The main problem with this is low resource utilization: the printer is required only at the end, yet it is allocated from the beginning, so no other process can use it.
3. No Preemption: To ensure that this condition never holds, the resources must be preemptible. One of the following protocols can be used:
• If a process is holding some resources and requests another resource that cannot be immediately allocated to it, then all the resources currently held by the requesting process are preempted and added to the list of resources for which other processes may be waiting. The process will be restarted only when it regains the old resources along with the new resource that it is requesting.
• When a process (P1) requests resources, we check whether they are available. If they are available, we allocate them; else we check whether they are allocated to some other waiting process (P2). If so, we preempt the resources from the waiting process (P2) and allocate them to the requesting process (P1). If the resources are neither available nor held by a waiting process, P1 has to wait.
4. Circular Wait: The fourth and final condition for deadlock is the circular-wait condition. One way to ensure that this condition never holds is to impose a total ordering on all resource types and require that each process requests resources in increasing order.
- Assign each resource type a unique number via an enumeration function F.
- A process must request resources in increasing order of this enumeration.
Eg: F(tape drive) = 1, F(disk drive) = 5, F(printer) = 12.
A process holding a disk drive cannot request a tape drive, but it can request a printer. If this protocol is used, the circular wait cannot hold.
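The ordering discipline above can be sketched in Python using locks as stand-in resources. This is a minimal illustration; the resource names and F-values below simply mirror the enumeration example and are not part of any real API.

```python
# Sketch of circular-wait prevention by total resource ordering.
# F-values mirror the enumeration example: F(tape) < F(disk) < F(printer).
import threading

F = {"tape_drive": 1, "disk_drive": 5, "printer": 12}
locks = {name: threading.Lock() for name in F}

def acquire_in_order(*names):
    """Acquire the requested resources strictly in increasing F-order,
    so no circular wait can form among processes using this helper."""
    ordered = sorted(names, key=lambda n: F[n])
    for name in ordered:
        locks[name].acquire()
    return ordered

def release_all(names):
    for name in reversed(names):
        locks[name].release()

held = acquire_in_order("printer", "disk_drive")  # acquired as disk, then printer
release_all(held)
```

Because every process acquires resources in the same global order, no cycle of "Pi holds X and waits for Y held by Pj" can close.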
Deadlock prevention algorithms may lead to low device utilization and reduced system throughput.

4.6 DEADLOCK AVOIDANCE
Avoiding deadlocks requires additional information about the sequence in which the resources are to be requested. With knowledge of the complete sequence of requests and releases, we can decide for each request whether or not the process should wait. For each request, the system checks whether the resources are currently available. If they are, the OS checks whether assigning them would lead the system to a deadlock state; if not, the actual allocation is done.
Safe State:
A safe state is a state in which there exists at least one order in which all the processes can run to completion without resulting in a deadlock.
A system is in a safe state if there exists a safe sequence.
A sequence of processes <P1, P2, ..., Pn> is a safe sequence for the current allocation state if, for each Pi, the requests that Pi can still make can be satisfied by the currently available resources plus the resources held by all Pj with j < i. If the resources that Pi requests are not currently available, then Pi can wait until the earlier processes finish, after which it can obtain all of its needed resources to complete its designated task.
Resource-Allocation-Graph Algorithm:
This algorithm is used only if we have one instance of each resource type. In addition to the request edge and the assignment edge, a new edge called a claim edge is used. Eg: A claim edge Pi → Rj indicates that process Pi may request Rj in the future. The claim edge is represented by a dotted line.
(Figure: a resource-allocation graph with claim edges drawn as dotted lines.)
When a process Pi requests the resource Rj, the claim edge Pi → Rj is converted to a request edge. When resource Rj is released by process Pi, the assignment edge Rj → Pi is reconverted to a claim edge. A request can be granted only if converting the request edge to an assignment edge does not result in a cycle in the graph.
Banker's Algorithm:
This algorithm is applicable to systems with multiple instances of each resource type. The Banker's Algorithm gets its name because it is a method that bankers could use to assure that when they lend out resources they will still be able to satisfy all their clients. (A banker won't loan out a little money to start building a house unless assured that they will later be able to loan out the rest of the money to finish the house.)
When a process starts up, it must state in advance the maximum allocation of resources it may request. When a request is made, the scheduler determines whether granting the request would leave the system in a safe state. If not, then the process must wait until the request can be granted safely.
The algorithm uses the following data structures, for n processes and m resource types:
- Available: a vector of length m giving the number of available instances of each resource type.
- Max: an n x m matrix giving the maximum demand of each process.
- Allocation: an n x m matrix giving the resources currently allocated to each process.
- Need: an n x m matrix giving the remaining need of each process, where Need(i) = Max(i) - Allocation(i).

Safety Algorithm:
1. Let Work and Finish be vectors of length m and n. Initialize Work = Available and Finish[i] = false for all i.
2. Find an index i such that Finish[i] = false and Need(i) <= Work. If no such i exists, go to step 4.
3. Work = Work + Allocation(i); Finish[i] = true; go to step 2.
4. If Finish[i] = true for all i, the system is in a safe state.

Resource-Request Algorithm (when process Pi makes a request Request(i)):
1. If Request(i) <= Need(i), go to step 2. Otherwise raise an error, since the process has exceeded its maximum claim.
2. If Request(i) <= Available, go to step 3. Otherwise Pi must wait, since the resources are not available.
3. Pretend the allocation is made by updating the state:
   Available = Available - Request(i)
   Allocation(i) = Allocation(i) + Request(i)
   Need(i) = Need(i) - Request(i)
   Then run the safety algorithm. If the resulting state is safe, the transaction is completed and Pi is allocated its resources. If the new state is unsafe, the old state is restored and Pi must wait.

(The original notes work through a handwritten numerical example here, repeatedly finding a process with Need(i) <= Work, setting Work = Work + Allocation(i), and building up a safe sequence.)
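The safety check can be sketched in Python. This is a minimal illustration; the state below is a classic 5-process, 3-resource-type textbook example, not the handwritten one from the notes.

```python
# Sketch of the Banker's safety algorithm described above.

def is_safe(available, allocation, need):
    """Return (is_safe, safe_sequence) for the current state."""
    n, m = len(allocation), len(available)
    work = list(available)                  # Step 1: Work = Available
    finish = [False] * n
    sequence = []
    progressed = True
    while progressed:                       # Steps 2-3
        progressed = False
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                work = [work[j] + allocation[i][j] for j in range(m)]
                finish[i] = True
                sequence.append(i)
                progressed = True
    return all(finish), sequence            # Step 4

available  = [3, 3, 2]
allocation = [[0,1,0], [2,0,0], [3,0,2], [2,1,1], [0,0,2]]
maximum    = [[7,5,3], [3,2,2], [9,0,2], [2,2,2], [4,3,3]]
need = [[maximum[i][j] - allocation[i][j] for j in range(3)] for i in range(5)]

safe, seq = is_safe(available, allocation, need)
print(safe, seq)   # True [1, 3, 4, 0, 2] -- one possible safe sequence
```

A request Request(i) would first be checked against need[i] and available, provisionally applied as in step 3 above, and then validated with is_safe before being granted.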
4.7 DEADLOCK DETECTION
Single Instance of Each Resource Type:
If all the resources have only a single instance, then we can define a deadlock-detection algorithm that uses a variant of the resource-allocation graph called a wait-for graph. This graph is obtained from the resource-allocation graph by removing the nodes of type resource and collapsing the appropriate edges. An edge from Pi to Pj in a wait-for graph implies that Pi is waiting for Pj to release a resource that Pi needs. A deadlock exists in the system if and only if the wait-for graph contains a cycle.
Several Instances of a Resource Type:
The wait-for graph is applicable only to systems with a single instance of each resource type. For multiple instances, a detection algorithm similar to the Banker's safety algorithm is used, with Allocation and Request matrices and an Available vector:
1. Initialize Work = Available. For each process i, set Finish[i] = false if Allocation(i) is non-zero, otherwise Finish[i] = true.
2. Find an index i such that Finish[i] = false and Request(i) <= Work. If no such i exists, go to step 4.
3. Work = Work + Allocation(i); Finish[i] = true; go to step 2.
4. If Finish[i] = false for some i, where 1 <= i <= n, the system is in a deadlocked state, and each process Pi with Finish[i] = false is deadlocked. This algorithm needs an order of m x n^2 operations to detect whether the system is in a deadlocked state or not.
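The detection steps can be sketched in Python. This is a minimal illustration with made-up matrices, not the handwritten example from the notes.

```python
# Sketch of the multiple-instance detection algorithm described above,
# using Allocation and Request matrices (Need is not used here).

def detect_deadlock(available, allocation, request):
    n, m = len(allocation), len(available)
    work = list(available)                       # Step 1: Work = Available
    # Finish[i] starts true only for processes holding no resources
    finish = [all(a == 0 for a in allocation[i]) for i in range(n)]
    progressed = True
    while progressed:                            # Steps 2-3
        progressed = False
        for i in range(n):
            if not finish[i] and all(request[i][j] <= work[j] for j in range(m)):
                work = [work[j] + allocation[i][j] for j in range(m)]
                finish[i] = True
                progressed = True
    # Step 4: every process with Finish[i] = false is deadlocked
    return [i for i in range(n) if not finish[i]]

# Here P0's request can be granted; it finishes and releases its
# resources, after which P1 can also finish: no deadlock.
print(detect_deadlock([0, 0, 0],
                      [[0, 1, 0], [2, 0, 0]],
                      [[0, 0, 0], [0, 1, 0]]))   # []
```

If P1 instead requested more than P0 could ever release, the returned list would contain P1's index, identifying the deadlocked set.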
(The original notes work through a handwritten numerical example of the detection algorithm here, starting from Available = (0, 0, 0) and repeatedly finding a process whose Request(i) <= Work, adding its Allocation to Work, until either all processes can finish or a deadlocked set remains.)

7.7.1 Process Termination
To eliminate a deadlock by aborting processes, we can either abort all deadlocked processes, or abort one process at a time until the deadlock cycle is eliminated.
In the latter case there are many factors that can go into deciding which process to terminate next:
1. Process priorities.
2. How long the process has been running, and how close it is to finishing.
3. How many and what type of resources the process is holding. (Are they easy to preempt and restore?)
4. How many more resources the process needs to complete.
5. How many processes will need to be terminated.
6. Whether the process is interactive or batch.
7. Whether or not the process has made non-restorable changes to any resource.
7.7.2 Resource Preemption
When preempting resources to relieve deadlock, there are three important issues to be addressed:
1. Selecting a victim - Deciding which resources to preempt from which processes involves many of the same decision criteria outlined above.
2. Rollback - Ideally one would like to roll back a preempted process to a safe state prior to the point at which that resource was originally allocated to the process. Unfortunately it can be difficult or impossible to determine what such a safe state is, and so the only safe rollback is to roll back all the way to the beginning. (i.e., abort the process and make it start over.)
3. Starvation - There is a chance that the same process is picked as a victim every time a deadlock occurs, and that this continues indefinitely. This is starvation. A count can be kept of the number of rollbacks of a process, and a process may be chosen as a victim only a finite number of times.
Module III
Main Memory Management Strategies
• Every program to be executed must be in memory. Each instruction must be fetched
from memory before it is executed.
• In multi-tasking OS memory management is complex, because as processes are swapped in and out of
the CPU, their code and data must be swapped in and out of memory.
Basic Hardware
Main memory, cache and CPU registers in the processors are the only storage spaces that CPU can
access directly.
The program and data must be brought into memory from the disk for the process to run. Each
process has a separate memory space and must access only this range of legal addresses. Protection of
memory is required to ensure correct operation. This prevention is provided by hardware
implementation. Two registers are used - a base register and a limit register. The base register holds the
smallest legal physical memory address; the limit register specifies the size of the range.
For example, if the base register holds 1000 and the limit register holds 500, then the program can
legally access all addresses from 1000 through 1499 (that is, from base up to, but not including, base + limit = 1500).
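The hardware check described above can be sketched in a few lines of Python, using the base and limit values from the running example (the function name is illustrative):

```python
# Sketch of the base/limit protection check: a reference is legal
# iff BASE <= address < BASE + LIMIT; anything else traps to the OS.
BASE, LIMIT = 1000, 500   # values from the running example

def is_legal(address):
    return BASE <= address < BASE + LIMIT

assert is_legal(1000) and is_legal(1499)
assert not is_legal(999) and not is_legal(1500)
```

In hardware this comparison happens on every memory reference generated in user mode, with no software overhead.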
Protection of memory space is done. Any attempt by an executing program to access operating-
system memory or other program memory results in a trap to the operating system, which treats the
attempt as a fatal error. This scheme prevents a user program from (accidentally or deliberately)
modifying the code or data structures of either the operating system or other users.
The base and limit registers can be loaded only by the operating system, which uses a special
privileged instruction. Since privileged instructions can be executed only in kernel mode, only the
operating system can load the base and limit registers.
Address Binding
• User programs typically refer to memory addresses with symbolic names such as "i", "count", and
"average Temperature". These symbolic names must be mapped or bound to physical memory
addresses, which typically occurs in several stages:
o Compile Time - If it is known at compile time where a program will reside in physical memory,
then absolute code can be generated by the compiler, containing actual physical addresses. However
if the load address changes at some later time, then the program will have to be recompiled.
o Load Time - If the location at which a program will be loaded is not known at compile time, then
the compiler must generate relocatable code, which references addresses relative to the start of the
program. If that starting address changes, then the program must be reloaded but not recompiled.
o Execution Time - If a program can be moved around in memory during the course of its execution,
then binding must be delayed until execution time.
• Figure 8.3 shows the various stages of the binding processes and the units involved in each stage:
• The address generated by the CPU is a logical address, whereas the memory address where programs
are actually stored is a physical address.
• The set of all logical addresses used by a program composes the logical address space, and the set of all
corresponding physical addresses composes the physical address space.
• The run time mapping of logical to physical addresses is handled by the memory-management unit,
MMU.
o The MMU can take on many forms. One of the simplest is a modification of the base-register
scheme described earlier.
o The base register is now termed a relocation register, whose value is added to every memory
request at the hardware level.
• Rather than loading an entire program into memory at once, dynamic loading loads up each routine as it
is called. The advantage is that unused routines need not be loaded, thus reducing total memory usage
and generating faster program startup times. The disadvantage is the added complexity and overhead of
checking to see if a routine is loaded every time it is called and then loading it up if it is not already
loaded.
• With static linking library modules get fully included in executable modules, wasting both disk space
and main memory usage, because every program that included a certain routine from the library would
have to have their own copy of that routine linked into their executable code.
• With dynamic linking, however, only a stub is linked into the executable module, containing references
to the actual library module linked in at run time.
o This method saves disk space, because the library routines do not need to be fully included in the
executable modules, only the stubs.
o An added benefit of dynamically linked libraries (DLLs, also known as shared libraries or shared
objects on UNIX systems) involves easy upgrades and updates.
8.2 Swapping
• Swapping is the moving of a process temporarily out of main memory to a backing store, and then bringing it back into memory for continued execution.
8.3 Contiguous Memory Allocation
• The main memory must accommodate both the operating system and the various user processes. Memory is usually divided into two partitions: one for the resident operating system and one for the user processes. The OS is usually placed in low memory (where the interrupt vectors are located). Here each process is contained in a single contiguous section of
memory.
• The system shown in figure below allows protection against user programs accessing areas that they
should not, allows programs to be relocated to different memory starting addresses as needed, and
allows the memory space devoted to the OS to grow or shrink dynamically as needs change.
• One method of allocating contiguous memory is to divide all available memory into equal sized
partitions, and to assign each process to their own partition (called as MFT). This restricts both the
number of simultaneous processes and the maximum size of each process, and is no longer used.
• An alternate approach is to keep a list of unused (free) memory blocks ( holes ), and to find a hole of a
suitable size whenever a process needs to be loaded into memory (called as MVT). There are many
different strategies for finding the "best" allocation of memory to processes, including the three most
commonly discussed:
1. First fit - Search the list of holes until one is found that is big enough to satisfy the request,
and assign a portion of that hole to that process. Whatever fraction of the hole not needed by
the request is left on the free list as a smaller hole. Subsequent requests may start looking
either from the beginning of the list or from the point at which this search ended.
2. Best fit - Allocate the smallest hole that is big enough to satisfy the request. This saves large
holes for other process requests that may need them later, but the resulting unused portions of
holes may be too small to be of any use, and will therefore be wasted. Keeping the free list
sorted can speed up the process of finding the right hole.
3. Worst fit - Allocate the largest hole available, thereby increasing the likelihood that the
remaining portion will be usable for satisfying future requests.
• Simulations show that either first or best fit are better than worst fit in terms of both time and storage
utilization. First and best fits are about equal in terms of storage utilization, but first fit is faster.
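The three hole-selection strategies can be sketched over a free list of (start, size) holes. The hole sizes and request below are illustrative:

```python
# Sketch of first-fit, best-fit and worst-fit hole selection over a
# free list of (start, size) holes.

def first_fit(holes, size):
    # First hole in list order that is big enough.
    return next((h for h in holes if h[1] >= size), None)

def best_fit(holes, size):
    # Smallest hole that is big enough.
    fits = [h for h in holes if h[1] >= size]
    return min(fits, key=lambda h: h[1], default=None)

def worst_fit(holes, size):
    # Largest hole available.
    fits = [h for h in holes if h[1] >= size]
    return max(fits, key=lambda h: h[1], default=None)

holes = [(0, 100), (200, 500), (800, 212)]   # free blocks in memory order
print(first_fit(holes, 150))   # (200, 500): first big-enough hole
print(best_fit(holes, 150))    # (800, 212): smallest big-enough hole
print(worst_fit(holes, 150))   # (200, 500): largest hole
```

A real allocator would also split the chosen hole, returning the leftover fraction to the free list as a smaller hole, as described above.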
8.3.3. Fragmentation
The allocation of memory to process leads to fragmentation of memory. A hole is the free space
available within memory. The two types of fragmentation are –
• Internal fragmentation is caused by the fact that memory is allocated in blocks of a fixed size, whereas the actual memory needed will rarely be exactly that size; the unused space inside an allocated block is wasted.
• External fragmentation occurs when the total free memory is large enough to satisfy a request but is not contiguous: the free space is broken into a large number of small holes.
• If the programs in memory are relocatable, ( using execution-time address binding ), then the external
fragmentation problem can be reduced via compaction, i.e. moving all processes down to one end of
physical memory so as to place all free memory together to get a large free block. This only involves
updating the relocation register for each process, as all internal work is done using logical addresses.
• Another solution to external fragmentation is to allow processes to use non-contiguous blocks of
physical memory- Paging and Segmentation.
8.4 Paging
• Paging is a memory management scheme that allows processes to be stored in physical memory
discontinuously. It eliminates problems with fragmentation by allocating memory in equal sized blocks
known as pages.
• Paging eliminates most of the problems of the other methods discussed previously, and is the
predominant memory management technique used today.
• The basic idea behind paging is to divide physical memory into a number of equal sized blocks
called frames, and to divide a program’s logical memory space into blocks of the same size
called pages.
• Any page ( from any process ) can be placed into any
available frame.
• The page table is used to look up which frame a
particular page is stored in at the moment. In the
following example, for instance, page 2 of the program's
logical memory is currently stored in frame 3 of physical
memory.
• A logical address is divided into two parts: a page number and a page offset. (The number of bits in the offset determines the
maximum size of each page, and should correspond to the system frame size. )
• The page table maps the page number to a frame number, to yield a physical address which also has two
parts: The frame number and the offset within that frame. The number of bits in the frame number
determines how many frames the system can address, and the number of bits in the offset determines the
size of each frame.
• Page numbers, frame numbers, and frame sizes are determined by the architecture, but are typically
powers of two, allowing addresses to be split at a certain number of bits. For example, if the logical
address size is 2^m and the page size is 2^n, then the high-order m-n bits of a logical address designate
the page number and the remaining n bits represent the offset.
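The page-number/offset split is just a bit-shift and a mask. The sketch below uses m = 4 and n = 2, matching the 16-byte logical memory with 4-byte pages in the example that follows; the frame numbers other than page 2 → frame 3 (which is taken from the text) are hypothetical:

```python
# Splitting a logical address into page number and offset,
# with m = 4 address bits and n = 2 offset bits (4-byte pages).
M, N = 4, 2
PAGE_SIZE = 1 << N                       # 2^n = 4 bytes

def split(logical):
    page = logical >> N                  # high-order m-n bits
    offset = logical & (PAGE_SIZE - 1)   # low-order n bits
    return page, offset

# Page table: page 2 -> frame 3 as in the text; other frames hypothetical.
page_table = {0: 5, 1: 6, 2: 3, 3: 2}

page, offset = split(11)                 # binary 1011 -> page 2, offset 3
physical = page_table[page] * PAGE_SIZE + offset
print(page, offset, physical)            # 2 3 15
```

The MMU performs exactly this computation in hardware, which is why page sizes are always powers of two.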
• Note that paging is like having a table of relocation registers, one for each page of the logical memory.
• There is no external fragmentation with paging. All blocks of physical memory are used, and there are
no gaps in between and no problems with finding the right sized hole for a particular chunk of memory.
• There is, however, internal fragmentation. Memory is allocated in chunks the size of a page, and on the
average, the last page will only be half full, wasting on the average half a page of memory per process.
• Larger page sizes waste more memory, but are more efficient in terms of overhead. Modern trends have
been to increase page sizes, and some systems even have multiple size pages to try and make the best of
both worlds.
• Consider the following example, in which a process has 16 bytes of logical memory, mapped in 4 byte
pages into 32 bytes of physical memory. (Presumably some other processes would be consuming the
remaining 16 bytes of physical memory. )
• When a process requests memory ( e.g. when its code is loaded in from disk ), free frames are allocated
from a free-frame list, and inserted into that process's page table.
• Processes are blocked from accessing anyone else's memory because all of their memory requests are
mapped through their page table. There is no way for them to generate an address that maps into any
other process's memory space.
• The operating system must keep track of each individual process's page table, updating it whenever the
process's pages get moved in and out of memory, and applying the correct page table when processing
system calls for a particular process. This all increases the overhead involved when swapping processes
in and out of the CPU. ( The currently active page table must be updated to reflect the process that is
currently running. )
• Page lookups must be done for every memory reference, and whenever a process gets swapped
in or out of the CPU, its page table must be swapped in and out too, along with the instruction
registers, etc. It is therefore appropriate to provide hardware support for this operation, in order
to make it as fast as possible and to make process switches as fast as possible also.
• One option is to use a set of dedicated registers for the page table. Here each register content is
loaded, when the program is loaded into memory. For example, the DEC PDP-11 uses 16-bit
addressing and 8 KB pages, resulting in only 8 pages per process. ( It takes 13 bits to address 8
KB of offset, leaving only 3 bits to define a page number. )
(Figure: the page table held in a set of dedicated CPU registers.)
• An alternate option is to store the page table in main memory, and to use a single register (called
the page-table base register, PTBR) to record the address of the page table in memory.
• Process switching is fast, because only the single register needs to be changed.
• However memory access is slow, because every memory access now
requires two memory accesses - One to fetch the frame number from memory and then
another one to access the desired memory location.
(Figure: the CPU using the PTBR to locate the page table in main memory.)
• The solution to this problem is to use a very special high-speed memory device called
the translation look-aside buffer, TLB.
▪ The benefit of the TLB is that it can search an entire table for a key value in parallel, and
if it is found anywhere in the table, then the corresponding lookup value is returned.
▪ It is used as a cache device.
▪ Addresses are first checked against the TLB, and if the page is not there ( a TLB miss ),
then the frame is looked up from main memory and the TLB is updated.
▪ If the TLB is full, then replacement strategies range from least-recently used, LRU to
random.
▪ Some TLBs allow some entries to be wired down, which means that they cannot be
removed from the TLB. Typically these would be kernel frames.
▪ Some TLBs store address-space identifiers, ASIDs, to keep track of which process "owns" a particular
entry in the TLB. This allows entries from multiple processes to be stored simultaneously in the TLB
without granting one process access to some other process's memory location. Without this feature the
TLB has to be flushed clean with every process switch.
▪ The percentage of time that the desired information is found in the TLB is termed the hit ratio.
▪ For example, suppose that it takes 100 nanoseconds to access main memory, and only 20 nanoseconds
to search the TLB. So a TLB hit takes 120 nanoseconds total ( 20 to find the frame number and
then another 100 to go get the data ), and a TLB miss takes 220 ( 20 to search the TLB, 100 to go
get the frame number, and then another 100 to go get the data. ) So with an 80% TLB hit ratio, the
average memory access time would be:
0.80 * 120 + 0.20 * 220 = 140 nanoseconds
for a 40% slowdown to get the frame number. A 98% hit rate would yield 122 nanoseconds
average access time ( you should verify this ), for a 22% slowdown.
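The effective-access-time arithmetic above can be written out directly; the 100 ns and 20 ns figures are the ones assumed in the example:

```python
# Effective access time with a TLB: 100 ns per memory access,
# 20 ns per TLB search, as in the example above.
MEM, TLB = 100, 20

def effective_access_time(hit_ratio):
    hit = TLB + MEM          # 120 ns: TLB search + one memory access
    miss = TLB + MEM + MEM   # 220 ns: TLB search + page-table access + data access
    return hit_ratio * hit + (1 - hit_ratio) * miss

print(round(effective_access_time(0.80), 2))  # 140.0 ns
print(round(effective_access_time(0.98), 2))  # 122.0 ns
```

This confirms the 40% slowdown at an 80% hit ratio and the 22% slowdown at 98%.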
8.4.3 Protection
• The page table can also help to protect processes from accessing memory.
• A valid/invalid bit can be added to each page-table entry. The valid bit ‘v’ indicates that the associated page is in the process's logical address space and is therefore a legal page; the invalid bit ‘i’ indicates that the page is not in the process's logical address space, or is not currently in physical memory.
• Note that the valid/invalid bits described above cannot block all illegal memory accesses, due to internal fragmentation: a process rarely uses the whole of its last page, yet references into the unused portion of that page still appear valid.
• Addresses of the pages 0,1,2,3,4 and
5 are mapped using the page table as
they are valid.
• Addresses of the pages 6 and 7 are
invalid and cannot be mapped. Any
attempt to access those pages will
send a trap to the OS.
• Paging systems can make it very easy to share blocks of memory, by simply having the page tables of multiple processes map pages to the same frames. This may be done with either code or data.
• If code is reentrant (read-only, non-self-modifying), it does not write to or change itself in any
way. More importantly, it means the code can be shared by multiple processes, so long as each
has its own copy of the data and registers, including the instruction register.
• In the example given below, three different users are running the editor simultaneously, but the
code is only loaded into memory ( in the page frames ) one time.
• Some systems also implement shared memory in this fashion.
8.5 Structure of the Page Table
Hierarchical Paging
• This structure supports two or more page tables at different levels (tiers), with an outer page table indexing the inner page tables.
• Most modern computer systems support logical address spaces of 2^32 to 2^64.
• The VAX architecture divides 32-bit addresses into 4 equal-sized sections, and each page is 512 bytes,
yielding an address form of a 2-bit section number, a 21-bit page number, and a 9-bit offset.
• With a 64-bit logical address space and 4K pages, there are 52 bits worth of page numbers, which is still
too many even for two-level paging. One could increase the paging level, but with 10-bit page tables it
would take 7 levels of indirection, which would be prohibitively slow memory access. So some other
approach must be used.
• One common data structure for accessing data that is sparsely distributed over a broad range of
possible values is with hash tables. Figure 8.16 below illustrates a hashed page table using
chain-and-bucket hashing:
• Another approach is to use an inverted page table. Instead of a table listing all of the pages for a
particular process, an inverted page table lists all of the pages currently loaded in memory, for all
processes. ( i.e. there is one entry per frame instead of one entry per page. )
• Access to an inverted page table can be slow, as it may be necessary to search the entire table in order to
find the desired page.
• The ‘id’ of process running in each frame and its corresponding page number is stored in the page table.
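An inverted page table can be sketched as one entry per frame; the process ids, pages and frame size below are illustrative:

```python
# Sketch of an inverted page table: one entry per physical frame,
# each holding (process id, page number). Lookup is a linear search
# over frames, which is why it can be slow without hashing.

FRAME_SIZE = 4096
inverted = [("p1", 0), ("p2", 0), ("p1", 1), ("p3", 2)]  # frame i -> (pid, page)

def translate(pid, page, offset):
    for frame, entry in enumerate(inverted):
        if entry == (pid, page):
            return frame * FRAME_SIZE + offset   # physical address
    raise KeyError("page fault: page not resident")

print(translate("p1", 1, 100))   # frame 2 -> 2*4096 + 100 = 8292
```

Note that the table's size depends only on physical memory (one entry per frame), not on the number or size of processes.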
8.6 Segmentation
• Most users (programmers) do not think of their programs as existing in one continuous linear
address space.
• Rather they tend to think of their memory in
multiple segments, each dedicated to a particular use, such
as code, data, the stack, the heap, etc.
• Memory segmentation supports this view by providing
addresses with a segment number ( mapped to a segment
base address ) and an offset from the beginning of that
segment.
• The logical address consists of a 2-tuple:
<segment-number, offset>
8.6.2 Hardware
• A segment table maps segment-offset addresses to physical addresses, and simultaneously checks for
invalid addresses.
• Each entry in the segment table has a segment base and a segment limit. The segment base contains the
starting physical address where the segment resides in memory, whereas the segment limit specifies
the length of the segment.
• A logical address consists of two parts: a segment number, s, and an offset into that segment, d. The
segment number is used as an index to the segment table. The offset d of the logical address must be
between 0 and the segment limit. When an offset is legal, it is added to the segment base to produce the
address in physical memory of the desired byte. The segment table is thus essentially an array of base
and limit register pairs.
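The base-and-limit translation described above can be sketched directly; the segment-table values are illustrative:

```python
# Segment-table translation: each entry is a (base, limit) pair.
# An offset outside [0, limit) traps, as described above.

segment_table = [(1400, 1000), (6300, 400), (4300, 1100)]  # illustrative values

def translate(s, d):
    base, limit = segment_table[s]
    if not 0 <= d < limit:
        raise MemoryError("trap: offset out of segment bounds")
    return base + d                 # physical address of the desired byte

print(translate(2, 53))    # segment 2: 4300 + 53 = 4353
print(translate(1, 399))   # segment 1: 6300 + 399 = 6699
```

A reference such as translate(1, 400) would trap, since the offset equals the segment limit.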