Operating System (5th Semester) : Prepared by Sanjit Kumar Barik (Asst Prof, Cse) Module-Iii
MODULE-III
TEXT BOOK:
DISCLAIMER:
“THIS DOCUMENT DOES NOT CLAIM ANY ORIGINALITY AND CANNOT BE USED AS A
SUBSTITUTE FOR PRESCRIBED TEXTBOOKS. THE INFORMATION PRESENTED HERE IS
MERELY A COLLECTION FROM DIFFERENT REFERENCE BOOKS AND INTERNET
CONTENTS. THE OWNERSHIP OF THE INFORMATION LIES WITH THE RESPECTIVE
AUTHORS OR INSTITUTIONS.”
Memory Management
Memory protection:
We can provide memory protection by using two registers:
Base register: holds the smallest legal physical memory address.
Limit register: specifies the size of the range of the process.
Ex: If the base register holds 300040 and the limit register holds 120900, then the program can legally access all addresses from 300040 through 420939 (inclusive).
[Figure: memory layout. The OS occupies addresses 0 to 256000; the process with base register 300040 and limit register 120900 occupies addresses 300040 to 420940; other processes occupy the remaining memory up to 880000.]
Memory Management Basic Concepts:
Address Binding:
In a system, user programs use symbolic names for addresses (like ptr, addr, and so on). When the program is compiled, the compiler binds symbolic addresses to relocatable addresses, and then the linkage editor or loader binds relocatable addresses to absolute addresses. At each step, binding is a mapping from one address space to another. The binding of a user program (instructions and data) to memory addresses can be done at any of three times: compile time, load time, or execution time.
A. Compile-time binding: If the location of the user program in memory is known at compile time, the compiler can generate absolute code; if the starting location later changes, the program must be recompiled.
B. Load-time binding: If the location of the user program in memory is not known at compile time, the compiler generates relocatable code. If the location of the user program changes, the program need not be recompiled; only the user code needs to be reloaded to integrate the change.
C. Execution-time binding: Programs (or processes) may need to be relocated during run time from one memory segment to another. Run-time (execution-time) binding is the most popular and flexible scheme, provided we have the required hardware support available in the system.
Logical and Physical address:
With compile-time and load-time address binding, the logical and physical addresses generated are the same; with execution-time address binding, the logical and physical addresses differ.
The set of all logical addresses defined and referenced by a user program is called the logical address space, and the set of all physical addresses corresponding to these logical addresses is called the physical address space.
In this scheme there are two registers: the base register and the limit register. The base register (also called the relocation register) contains the starting (base) address where the user program begins in memory.
Ex: If the user program starts at address 1000 in memory, then the base register contains the value 1000. The limit register contains a value that specifies the range (or boundary), so that the user program can access addresses within that range only. In other words, the limit register specifies the range of logical addresses that may be used by the user program.
Ex: If the limit register contains the value 1500 and the base register contains the value 1000, then the user program can access addresses in the range 1000 to 2499.
[Figure: address translation by the MMU. The CPU generates logical address 250; it is compared against the limit register (1500). Since 250 < 1500, the base register (1000) is added, giving physical address 250 + 1000 = 1250 in computer memory. If the logical address is not less than the limit, the MMU traps to the OS.]
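The limit check and relocation performed by the MMU can be sketched in Python; the register values follow the example above, and the function name is illustrative:

```python
def translate(logical, base=1000, limit=1500):
    """MMU sketch: compare the logical address with the limit register,
    then relocate it by adding the base register (values from the example)."""
    if logical >= limit:       # outside the process's legal range
        raise MemoryError("addressing error: trap to the OS")
    return logical + base      # physical address

print(translate(250))   # → 1250, as in the figure
```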
1. Internal fragmentation
2. Limit on process size (e.g., a 32 MB process cannot be accommodated in an 8 MB partition)
Earlier, in multiprogramming systems, main memory was divided into fixed partitions, where each partition was capable of holding one process. Fixed partitions can be of equal or unequal sizes, but once fixed, their sizes cannot be changed: the starting and ending address of each partition is fixed in advance.
In case of equal-size fixed partitions, there is a single process queue in which all processes waiting to come into main memory for execution are maintained. If a partition is available, the process is loaded into that partition. Since all partitions are of equal size, it does not matter which partition is used.
[Figure: equal-size fixed partitioning. A new process that enters the system is stored in a single queue; the partitions (partition 1, partition 2, partition 3, ...) are all of equal size, so a waiting process may be loaded into any free partition.]
In case of unequal-size fixed partitions, the size of each partition is fixed once at the start and cannot be changed afterwards.
Ex: Suppose the system has memory of size 64 KB. Out of this, 16 KB is occupied by the OS and the remaining 48 KB is available for user processes. This is divided into five partitions as shown:
[Figure: unequal-size fixed partitioning of 64 KB of memory. The OS occupies 16 KB; the remaining 48 KB is divided into five partitions of 4 KB, 4 KB, 8 KB, 16 KB, and 16 KB. Each partition has its own process queue; a new process that enters the system is stored in the queue of the partition it best fits into. The partition sizes, once fixed, cannot be changed.]
In both cases (equal- or unequal-size partitions), when a program completes its execution and terminates, its allocated memory partition is freed for another program waiting in a queue. But this style of multiprogramming, with its restriction of fixed sizes, causes wastage of memory and leads to internal fragmentation.
Variable-size (dynamic) partitioning:
There is a process queue in which all processes waiting to come into main memory for execution are maintained. The OS uses a scheduling algorithm to dispatch processes one by one from the process queue. If a memory area large enough to accommodate the process is available, the OS loads the process into it. When a program completes its execution and terminates, its allocated memory area is freed. The available memory areas are called holes. As memory is allocated and de-allocated, holes appear in the memory; holes of various sizes are spread throughout the memory.
[Figure: holes appearing in memory. When P1 completes, its allocated memory area is freed and becomes a hole; similarly, when P3 completes, its area becomes another hole. The memory now holds the OS, the running processes P2, P4, and P5, and holes scattered between them.]
[Figure: coalescing. When P3 completes, the OS merges its freed area together with the adjacent 2 MB free holes to form one big 6 MB hole.]
1. First fit: There may be many holes available in memory. In this approach, the OS does not waste time searching for the best hole to accommodate the given process; instead it allocates the first hole it finds that is large enough to satisfy the request.
2. Best fit: In this approach, the OS maintains an ordered list of holes. When a process arrives, the OS allocates the smallest hole from the available holes (kept in sorted order) that is large enough to accommodate the given process. Using this method, the memory has the smallest leftover hole (allocate the smallest hole that is big enough).
3. Worst fit: When a process arrives, the OS allocates the largest hole from the list of available holes. Using this method, the leftover hole is the largest; this largest leftover hole may be more useful than the smallest leftover hole, because the smallest leftover hole may be too small to accommodate any process (allocate the largest hole).
[Figure: external fragmentation. Memory holds the OS and processes P1, P3, P4, P6, and P10, with two scattered holes of 2 MB and 6 MB. A new process P7 of size 7 MB arrives; the scattered holes together could accommodate P7, but since they are not contiguous, they are of no use.]
Ex: First-fit, Best-fit, Worst-fit.
Memory contains holes of 40 KB, 100 KB, 20 KB, and 10 KB (in that order), separated by allocated processes (P11, P12, P13, P4, P15). For a request P1 = 15 KB:
First fit allocates the 40 KB hole (the first one large enough), leaving a 25 KB hole.
Best fit allocates the 20 KB hole, leaving only 5 KB.
Worst fit allocates the 100 KB hole, leaving 85 KB.
First fit: simple and fast. Best fit: slow, but leftover fragmentation is less. Worst fit: slow.
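The three placement strategies can be sketched as follows; the hole sizes follow the example above, and the function names are illustrative:

```python
def first_fit(holes, size):
    """Return the index of the first hole large enough for the request."""
    for i, h in enumerate(holes):
        if h >= size:
            return i
    return None

def best_fit(holes, size):
    """Return the index of the smallest hole that is large enough."""
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return min(fits)[1] if fits else None

def worst_fit(holes, size):
    """Return the index of the largest hole, provided it is large enough."""
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return max(fits)[1] if fits else None

holes = [40, 100, 20, 10]    # KB, in memory order, as in the example
print(first_fit(holes, 15))  # → 0 (the 40 KB hole, leaving 25 KB)
print(best_fit(holes, 15))   # → 2 (the 20 KB hole, leaving 5 KB)
print(worst_fit(holes, 15))  # → 1 (the 100 KB hole, leaving 85 KB)
```

First fit only scans until the first match, which is why it is the fastest of the three; best fit and worst fit must examine (or keep sorted) the whole hole list.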
Drawbacks of compaction:
1. Much CPU time and other system resources get wasted in the compaction procedure.
2. During compaction, the system's normal tasks need to be halted, which can create problematic situations, especially in interactive and real-time systems.
3. Compaction requires relocation of processes, i.e., moving a process and its associated data to a new address. This changes the base address of the process and requires changing the base register value to hold the new value. Hence the system needs to maintain all this relocation information, which is itself a big overhead. Also, if relocation is static (done at load time), compaction is not possible; compaction is possible only when relocation is dynamic (done at run time).
Drawbacks:
1. External fragmentation
2. Allocation and de-allocation are complex
PAGING:
1. Basic paging method
2. Paging hardware: TLB (Translation Look-aside Buffer)
3. Different types of page table
a. Hierarchical paging
b. Hashed page table
c. Inverted page table
Paging is one of the memory management schemes used in most operating systems. Paging is a good solution to the problem of external fragmentation. In paging, both physical and logical (virtual) memory are divided into fixed-size blocks. These fixed-size blocks are called frames in physical memory and pages in logical memory. Frames and pages are of the same size.
Page size is decided by the paging hardware. Page size is of the form 2^x (a power of 2, e.g., 2^4 = 16, and so on). If the logical address space is of size 2^m (units can be bits, bytes, or words) and the page size is 2^n, then the higher-order (m - n) bits of the logical address denote the page number and the n lower-order bits denote the page offset.
[Figure: a 16-bit logical address (m = 16) split into a page number of (m - n) bits and a page offset of n bits.]
Consider an example:
Physical memory is of size 16 bytes. Pages (and frames) are of size 2 bytes each. So the total number of frames in physical memory = physical memory size / page size = 16/2 = 8 frames. (The logical address here splits into a 3-bit page number and a 1-bit offset.)
The formula to find the physical address corresponding to a logical address is:
physical address = (frame number * page size) + page offset
Ex: Logical address 0 is page no. 0, page offset 0, and page 0 is in frame no. 4:
physical address = (4 * 2) + 0 = 8
Logical address 0 maps to physical address 8, and the data here is "ab".
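The translation formula can be sketched as follows; page 0 is mapped to frame 4 as in the example, while the other page-table entries are assumed purely for illustration:

```python
PAGE_SIZE = 2   # bytes, as in the example above

# Page table: page number -> frame number. Page 0 -> frame 4 is from the
# example; the remaining entries are assumed for illustration.
page_table = {0: 4, 1: 6, 2: 1, 3: 2}

def translate(logical):
    page = logical // PAGE_SIZE        # high-order bits: page number
    offset = logical % PAGE_SIZE       # low-order bits: page offset
    frame = page_table[page]
    return frame * PAGE_SIZE + offset  # (frame number * page size) + offset

print(translate(0))   # → (4 * 2) + 0 = 8, the address of "ab"
```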
EXAMPLE:
Assignment:
No. of pages=?
No. of frames=?
To overcome this problem, the solution is to use a small, fast-access hardware cache known as the TLB (Translation Look-aside Buffer). The TLB is part of the MMU and is used for virtual-to-physical address translation.
TLB construction:
The TLB is in the form of a table where each row consists of a page number and its associated frame number. We know that the logical address generated by the CPU has two parts: page number and page offset. So whenever the CPU generates a logical address, the page number part of this address is searched in the TLB. If the required page number is found in the TLB, this is known as a TLB hit, and its associated frame number is used to map the logical address to the physical address.
If the page number is not found in the TLB, this is known as a TLB miss, and then the page table is referenced. From the page table, the frame number used for mapping is obtained, and this page number and frame number entry is also made in the TLB so that it can be found when the TLB is referenced next time. If the TLB is already full, the OS uses a replacement policy such as LRU or FIFO to evict one old entry so as to make room for the new entry, as shown in the figure on paging with a TLB.
There is one important thing that needs to be taken care of with TLBs. The TLB contains virtual-to-physical translation entries that are valid only for the currently executing process, and these translation entries are of no use for other processes. Therefore, when a context switch takes place from one process to another, the paging hardware and the OS must make sure that the process that is going to execute next does not accidentally use TLB entries of some previously executed process. To solve this problem there are two approaches:
1. Flush (erase):
Flush the TLB on each context switch, i.e., clear off the TLB entries before the process that is going to execute next starts using the TLB.
2. Address space identifiers (ASIDs):
Add one more field in the TLB, called the address space identifier (ASID), alongside each page number and frame number entry. An ASID uniquely identifies a process and is used to differentiate one process from another. An ASID is somewhat like a process identifier (PID), but usually an ASID has fewer bits than a PID. Hence, with ASIDs the TLB can hold translation entries of different processes at the same time without any problem. When a page number entry is found in the TLB, the ASID associated with it is also matched against the currently executing process's ASID. If the TLB entry's ASID matches the currently executing process's ASID, it is a TLB hit; otherwise it is a TLB miss.
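A TLB lookup keyed by (ASID, page number) can be sketched like this; a plain dictionary stands in for the hardware TLB, and the page-table contents are assumed for illustration:

```python
# The TLB is modeled as a dictionary keyed by (ASID, page number);
# the per-process page tables below are assumed for illustration.
tlb = {}
page_tables = {7: {0: 3, 1: 9}}    # ASID 7 -> its page table

def lookup(asid, page):
    """Return (frame number, hit?). On a miss, reference the page
    table and install the translation in the TLB for next time."""
    if (asid, page) in tlb:            # TLB hit: page AND ASID match
        return tlb[(asid, page)], True
    frame = page_tables[asid][page]    # TLB miss: walk the page table
    tlb[(asid, page)] = frame          # cache the entry
    return frame, False

print(lookup(7, 0))   # → (3, False): first reference misses
print(lookup(7, 0))   # → (3, True): now it hits
```

Because the ASID is part of the key, entries belonging to different processes can coexist in the TLB, so the TLB need not be flushed on every context switch.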
TLB problem:
Assume that the TLB search time is 9 ns and the memory access time is 100 ns. If the TLB is not used, then the memory access time = 2 * 100 ns = 200 ns (one access for the page table and one for the data).
Let the TLB hit ratio be 80%, i.e., for 80% of accesses the page number is found in the TLB, and for 20% of accesses it is not found there (a 20% TLB miss ratio).
General formula:
EAT = H * (T + t) + (1 - H) * (2T + t)
where H = probability that the intended frame number is found in the TLB itself (H is proportional to the size of the TLB),
T = RAM access time,
t = TLB access time,
while the effective memory access time without a TLB = 2T.
Q. A paging scheme uses a TLB. A TLB access takes 10 ns and a main memory access takes 50 ns. What is the effective memory access time (in ns) if the TLB hit ratio is 90% and there are no page faults?
EAT = 90% * (50 + 10) + 10% * (2 * 50 + 10) = 54 + 11 = 65 ns
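Both worked examples can be checked directly with the general formula:

```python
def eat(hit_ratio, mem_ns, tlb_ns):
    """Effective access time: EAT = H*(T + t) + (1 - H)*(2T + t)."""
    return hit_ratio * (mem_ns + tlb_ns) + (1 - hit_ratio) * (2 * mem_ns + tlb_ns)

print(round(eat(0.90, 50, 10), 1))   # → 65.0 ns, matching the answer above
print(round(eat(0.80, 100, 9), 1))   # → 129.0 ns for the 80% example
```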
Segmentation:
In segmentation, each segment in the logical address space has a specific number and a length associated with it. Each segment has a starting address (base) and a limit that sets the range of that segment. In paging, the user specifies a single logical address, which is divided into a page number and page offset by the paging hardware; but in segmentation, the user specifies the logical address using two dimensions:
1. Base, which points to the starting address of the segment in physical memory.
2. Limit, which specifies the length of the segment.
Segmentation with Paging (Paged Segmentation):
To take advantage of both paging and segmentation, some systems combine the two approaches. This approach is called paged segmentation, or segmentation with paging. In this technique, a segment is viewed as a collection of pages. The logical address generated by the CPU is divided into three parts: the segment, the page, and the offset, as shown in the figure.
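Translation under segmentation with paging can be sketched as follows; the page size and all table contents are assumed purely for illustration:

```python
PAGE_SIZE = 1024   # bytes (assumed)

# Each segment has its own page table (segment -> page -> frame); the
# contents below are illustrative only.
segment_tables = {
    0: {0: 5, 1: 2},   # segment 0 has pages 0 and 1
    1: {0: 7},         # segment 1 has page 0
}

def translate(segment, page, offset):
    """The segment part selects a page table, the page part selects a
    frame within it, and the offset is carried over unchanged."""
    frame = segment_tables[segment][page]
    return frame * PAGE_SIZE + offset

print(translate(0, 1, 12))   # frame 2 → 2*1024 + 12 = 2060
```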
Virtual memory
(Introduction)
The OS implements the virtual memory concept by loading, at any given time, only that portion of the user program from disk storage (secondary memory) into main memory that currently needs to be executed, instead of loading the full program into main memory. Also, the portion of the user program that is not currently required is temporarily transferred back from main memory to disk storage (secondary memory) to make space for other programs. For example, the following are situations when the entire program is not required to be loaded fully in main memory:
Parts of the program called error-handling routines are used only when an error occurs.
Some functions and procedures of a program may be seldom used.
Many data structures like arrays, structures, tables, etc. are assigned a fixed amount of memory space in user programs, but actually only a small amount of the memory assigned to these data structures is used.
Thus virtual memory gives the facility to execute a program by loading only a portion of the program into main memory instead of the entire program. This concept provides many benefits:
Demand Paging:
The concept of virtual memory is generally implemented by demand paging. A demand-paging system is much like a paging system, but with the additional feature of swapping. The user process resides in secondary memory and is considered as a set of pages. The basic concept of demand paging is that when we want to execute a process, then instead of bringing (swapping in) the entire process from secondary memory into main memory, we use a lazy swapper called a pager, which swaps in only those pages that are currently needed. Thus, pages that are not needed in the process's execution are not brought into main memory. This considerably reduces the time required for swapping and the amount of physical memory needed by a process.
To implement demand paging, hardware support is required to identify which pages are in memory and which pages are on the disk. To do this page identification, the method of the valid-invalid bit is used. Each page has a bit associated with it in the page table. The pages that are currently in main memory and are legally valid have their bit set to "valid" (v). The pages that are currently not in main memory (they are in secondary memory), or that are legally not valid, have their bit set to "invalid" (i). A page is also invalid if it is not part of the process's logical address space. Secondary memory holds the pages that are not in main memory, and the swapping of pages in and out is done between main and secondary memory.
Step 6. The instruction that was interrupted due to the page fault is now restarted, as the required page is now available in main memory.
Performance of Demand Paging:
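A standard way to quantify demand-paging performance (not worked out above, so treat the figures as assumptions) is the effective access time with a page-fault rate p: EAT = (1 - p) * memory access time + p * page-fault service time. A sketch with assumed values of 200 ns per memory access and 8 ms per page-fault service:

```python
def demand_eat(p, mem_ns=200, fault_ns=8_000_000):
    """Effective access time under demand paging (times in ns);
    the 200 ns / 8 ms figures are assumed for illustration."""
    return (1 - p) * mem_ns + p * fault_ns

# Even one fault per thousand accesses dominates the access time:
print(round(demand_eat(0.001), 1))   # ≈ 8199.8 ns, vs. 200 ns with no faults
```

This is why keeping the page-fault rate very low matters so much for demand paging.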
Page replacement Algorithms
Introduction
FIFO Page replacement Algorithm
Optimal page replacement algorithm
LRU page replacement algorithm
LRU-approximation page replacement
Counting-based page replacement
Page-buffering Algorithms.
Introduction:
FIFO PAGE REPLACEMENT ALGORITHM:
The First-In First-Out (FIFO) page replacement algorithm is one of the easiest algorithms to implement. As the name suggests, the operating system maintains a FIFO queue to store the pages in memory. The pages are arranged in the FIFO queue according to their arrival. As a new page is brought in from secondary memory, it is placed at the tail (end) of the queue. When a page needs to be swapped out, the page at the head (front) of the queue (the oldest page) is selected.
Optimal page replacement algorithm:
The optimal page replacement algorithm (also known as MIN) has the lowest possible page-fault rate among all page replacement algorithms. When a page needs to be swapped out so as to swap in a required page from secondary memory, the OS chooses for replacement the page whose next use is predicted to be farthest in the future. The problem with this algorithm is that it is not practical to implement in real situations, because the OS would need to know in advance when each page will be referenced again; that is, the OS would have to know the reference string beforehand. The optimal page replacement algorithm does not suffer from the Belady's anomaly phenomenon.
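The optimal policy can be simulated directly when the whole reference string is known in advance, which is exactly the information a real OS lacks; a minimal sketch:

```python
def optimal_faults(refs, nframes):
    """Count page faults under optimal (MIN) replacement: on a fault
    with full frames, evict the page whose next use is farthest away."""
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            continue                      # hit
        faults += 1
        if len(frames) < nframes:
            frames.append(page)           # free frame available
        else:
            # Distance to each resident page's next use (infinite if none).
            def next_use(p):
                future = refs[i + 1:]
                return future.index(p) if p in future else float("inf")
            frames.remove(max(frames, key=next_use))
            frames.append(page)
    return faults

print(optimal_faults([0, 1, 5, 3, 0, 1, 4, 0, 1, 5, 3, 4], 3))  # → 7 faults
```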
Numerical on Optimal and FIFO:
Reference string: 0 1 5 3 0 1 4 0 1 5 3 4

FIFO with 3 frames:
Request:  0  1  5  3  0  1  4  0  1  5  3  4
Frame 3:  -  -  5  5  5  1  1  1  1  1  3  3
Frame 2:  -  1  1  1  0  0  0  0  0  5  5  5
Frame 1:  0  0  0  3  3  3  4  4  4  4  4  4
Miss/Hit: M  M  M  M  M  M  M  H  H  M  M  H   (9 page faults)

FIFO with 4 frames:
Request:  0  1  5  3  0  1  4  0  1  5  3  4
Frame 4:  -  -  -  3  3  3  3  3  3  5  5  5
Frame 3:  -  -  5  5  5  5  5  5  1  1  1  1
Frame 2:  -  1  1  1  1  1  1  0  0  0  0  4
Frame 1:  0  0  0  0  0  0  4  4  4  4  3  3
Miss/Hit: M  M  M  M  H  H  M  M  M  M  M  M   (10 page faults)
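The FIFO behaviour in the tables above can be checked with a short simulation; with this reference string, adding a frame increases the fault count from 9 to 10, which is Belady's anomaly:

```python
from collections import deque

def fifo_faults(refs, nframes):
    """Count page faults under FIFO replacement."""
    queue, faults = deque(), 0   # head of the queue = oldest page
    for page in refs:
        if page in queue:
            continue             # hit
        faults += 1
        if len(queue) == nframes:
            queue.popleft()      # evict the oldest page
        queue.append(page)
    return faults

refs = [0, 1, 5, 3, 0, 1, 4, 0, 1, 5, 3, 4]
print(fifo_faults(refs, 3))   # → 9
print(fifo_faults(refs, 4))   # → 10 (more frames, yet more faults)
```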
Ans:
Reference string: 2 3 2 1 5 2 4 5 3 2 5 2, with a reference ("set") bit maintained per frame (each cell shows page/bit):

Request:  2    3    2    1    5    2    4    5    3    2    5    2
F1:       2/0  2/0  2/1  2/1  2/0  2/1  2/0  2/0  3/0  3/0  3/0  3
F2:       -    3/0  3/0  3/0  5/0  5/0  5/0  5/1  5/1  5/0  5/1  5
F3:       -    -    -    1/0  1/0  1/0  4/0  4/0  4/0  2/0  2/0  2