Memory Management
In a monoprogramming (uniprogramming) system, the main memory is divided into two parts:
one part for the operating system and the other for the job that is currently executing. Consider
the figure below for better understanding.
Figure 2
The run-time mapping from logical to physical addresses is done by the memory management
unit (MMU), which is a hardware device. The base register is also called the relocation register.
The value in the relocation register is added to every address generated by a user process at the
time it is sent to memory. For example, if the base is at 2100, then an attempt by the user to
address location 0 is dynamically relocated to location 2100, and an access to location 100 is
mapped to location 2200.
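The add-the-base mapping above can be sketched as follows (a minimal illustration; the function name is my own, not from the text):

```python
def to_physical(logical_addr, relocation_base):
    """The relocation (base) register is added to every CPU-generated address."""
    return relocation_base + logical_addr

# With the base at 2100, logical address 0 maps to 2100 and 100 maps to 2200.
```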
SWAPPING
Swapping is a method to improve main-memory utilization. For example, suppose the main
memory holds 10 processes, which is its maximum capacity, and the CPU is currently executing
process 9. If, in the middle of its execution, process 9 needs I/O, then the CPU switches
to another job: process 9 is moved to disk and another process is loaded into main memory
in its place. When process 9 has completed its I/O operation, it is moved back into main
memory from the disk. Moving a process from main memory to disk is said to be swap-out, and
moving it from disk to main memory is said to be swap-in. This mechanism is said to be
swapping. With swapping we can achieve efficient memory utilization.
Swapping requires a backing store, which is commonly a fast disk. It must be
large enough to accommodate copies of the memory images of all users' processes. When a process is
swapped out, its executable image is copied to the backing store; when it is swapped in, it is copied
into the new block allocated by the memory manager.
Memory Management Requirements
There are many methods and policies for memory management; observing these methods suggests
five requirements:
Relocation
Protection
Sharing
Logical Organization
Physical Organization
Relocation:
Relocation is the mechanism that converts a logical address into a physical address. An address
generated by the CPU is said to be a logical address; an address generated by the memory manager is
said to be a physical address.
Physical address= Contents of Relocation register + logical address.
Relocation is necessary at the time a process is swapped in from the backing store to main
memory. In most cases the process occupies the same location at the time of swap-in, but
sometimes this is not possible, and in that case relocation is required.
Figure 3 Relocation
Protection
The word protection means providing security from unauthorized use of memory. The operating
system can protect memory with the help of the base and limit registers. The base register contains
the starting address of the process; the limit register specifies the boundary of that job, so the
limit register is also said to be the fencing register. For better understanding consider the figure.
The base register holds the smallest legal physical memory address, and the limit register
contains the size of the process. Figure 6.6 depicts the hardware protection
mechanism with base and limit registers. The logical address is first compared with the contents
of the limit register: if it is less than the limit, the access is authorized and the physical address
(logical + base) causes no problem; otherwise it is a trap to the operating system.
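The base/limit check can be sketched like this (an illustrative model, assuming the limit register holds the process size and that an illegal access traps to the operating system):

```python
def access(logical_addr, base, limit):
    """Compare the logical address with the limit register; if it is within
    bounds, relocate it by the base register, otherwise trap to the OS."""
    if logical_addr < 0 or logical_addr >= limit:
        raise MemoryError("addressing error: trap to operating system")
    return base + logical_addr

# A process of size 500 loaded at base 2100 may touch physical 2100..2599.
```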
Sharing
Protection mechanisms are generally required when several processes access the same portion
of main memory. Accessing the same portion of main memory by a number of processes is said to
be "sharing". If a number of processes are executing the same program, it is advantageous to
allow each process to access the same copy of the program rather than have its own copy. If
each process maintained a separate copy of that program, a lot of main memory would be wasted.
For example, if three users want to prepare their resumes using a word processor, the three
users share the one copy of the word processor on the server in place of individual copies.
Consider the figure; it depicts the sharing of a word processor.
Figure 6 Memory Sharing
Logical organization
We know that main memory in a computer system is organized as a linear, or one-dimensional, address
space that consists of a sequence of bytes or words. Secondary memory, at its physical level, is similarly
organized. Although this organization closely mirrors the actual machine hardware, it does not correspond
to the way in which programs are typically constructed. Most programs are organized into modules, some
of which are unmodifiable (read-only, execute-only) and some of which contain data that may be modified.
If the operating system and computer hardware can effectively deal with user programs and data in the form
of modules of some sort, then a number of advantages can be realized.
Physical organization
A computer memory is organized into at least two levels: main memory and secondary memory. Main
memory provides fast access at relatively high cost. In addition, main memory is volatile: that is, it does
not provide permanent storage. Secondary memory is slower and cheaper than main memory, and it is
usually not volatile. Thus secondary memory of large capacity can be provided to allow for long-term
storage of programs and data, while a smaller main memory holds programs and data currently in use.
DYNAMIC LOADING AND DYNAMIC LINKING
The word 'loading' means to load a program (or module) from a secondary storage device (disk) into main
memory. Loading is of two types: compile-time loading and run-time loading. If all the routines are loaded
into main memory at compilation time, this is said to be static loading (or compile-time loading); if the
routines are loaded into main memory at execution (run) time, it is said to be dynamic loading. For
example, consider a small program in the C language:

#include <stdio.h>

int main(void)
{
    printf("Hello\n");    /* printf is a library routine */
    return 0;
}
Linking the library files before execution is said to be static linking; linking at the time of execution is
said to be dynamic linking. You can observe these kinds of linking in figure 7. Most operating systems support
only static linking. We can state the definition of dynamic linking in a single statement: "the linking
is postponed until execution time".
MEMORY ALLOCATION METHOD
The main memory must accommodate both the operating system and the
various user processes. The operating system is placed in either low
memory or high memory; it is common to place the operating system
in low memory. Consider the figure below.
There are many methods available for memory allocation. We are
going to discuss these methods in detail.
Figure 8
Fixed (unequal-size) partition memory management: In this scheme the user space of main memory is divided into a number of
partitions of different lengths. The operating system keeps a table indicating which
partitions of memory are available and which are occupied. When a process arrives and needs memory, we
search for a partition large enough for this process; if we find one, we allocate that partition to the process. For
example, assume that we have 4000 KB of main memory available, and the operating system occupies 500 KB.
The remaining 3500 KB is left for user processes, as shown in the figures below.
Job Queue

Job    Size      Arrival time
J1     825 KB    10 ms
J2     600 KB     5 ms
J3    1200 KB    20 ms
J4     450 KB    30 ms
J5     650 KB    15 ms

Partitions

Partition   Size
P1          700 KB
P2          400 KB
P3          525 KB
P4          900 KB
P5          350 KB
P6          625 KB

Figure: J2, J1 and J4 loaded into partitions; J3 and J5 wait in the ready queue.
1. Of the 5 jobs, J2 arrives first (5 ms). The size of J2 is 600 KB, so search all partitions from low
memory to high memory for a large-enough partition; P1 is larger than J2, so load J2 into P1.
2. Of the remaining jobs, J1 arrives next (10 ms). Its size is 825 KB, so search for a large-enough
partition; P4 is large enough, so load J1 into P4.
3. J5 arrives next. Its size is 650 KB, and there is no large-enough free partition, so J5 has
to wait until an adequate partition is available.
4. J3 arrives next. Its size is 1200 KB; there is no large-enough partition to load it.
5. J4 arrives last. Its size is 450 KB, and partition P3 is large enough, so load J4 into P3.
Partitions P2, P5 and P6 are totally free; there are no processes in these partitions. This
wasted memory is said to be external fragmentation. The total external fragmentation is 1375 KB (400
+ 350 + 625). The total internal fragmentation is
(700 - 600) + (525 - 450) + (900 - 825) = 250 KB.
The reader may have a doubt: the size of J2 is 600 KB and it is loaded into partition P1 in the figure,
giving an internal fragmentation of 100 KB. Had it been loaded into P6, the internal fragmentation would be only 25 KB.
Why is it loaded into P1? Three algorithms are available to answer this question: the first-fit, best-fit and
worst-fit algorithms.
First-fit: Allocate the first partition that is big enough. Searching can start from either low memory or
high memory; we can stop searching as soon as we find a free partition that is large enough.
Best-fit: Allocate the smallest partition that is big enough, i.e. select the partition giving the
least internal fragmentation.
Worst-fit: Search all the partitions and select the one that is the largest of all, i.e. the
partition giving the maximum internal fragmentation.
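Under these definitions, the three placement algorithms can be sketched as follows (an illustration, not the book's code; list indices stand in for partition numbers):

```python
def first_fit(partitions, size):
    """Index of the first partition large enough for the job, else None."""
    for i, p in enumerate(partitions):
        if p >= size:
            return i
    return None

def best_fit(partitions, size):
    """Smallest partition that is big enough (least internal fragmentation)."""
    fits = [(p, i) for i, p in enumerate(partitions) if p >= size]
    return min(fits)[1] if fits else None

def worst_fit(partitions, size):
    """Largest partition of all (maximum internal fragmentation)."""
    fits = [(p, i) for i, p in enumerate(partitions) if p >= size]
    return max(fits)[1] if fits else None

partitions = [700, 400, 525, 900, 350, 625]  # P1..P6 from the example (KB)
# For J2 (600 KB): first-fit picks P1, best-fit picks P6, worst-fit picks P4.
```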
Advantages
1. This scheme supports multiprogramming.
2. Efficient processor and memory utilization are possible.
3. Simple and easy to implement.
Disadvantages
1. This scheme suffers from both internal and external fragmentation.
2. Large external fragmentation is possible.
Dynamic-partition memory management systems: To eliminate some of the problems of fixed
partitions, an approach known as dynamic partitioning was developed. In this method partitions are created
dynamically, so that each process is loaded into a partition of exactly the same size as that process. In this
scheme the entire user space is initially treated as one 'big hole'. The boundaries of partitions change dynamically,
and they depend on the sizes of the processes. Consider the previous example.
Job Queue

Job    Size      Arrival time
J1     825 KB    10 ms
J2     600 KB     5 ms
J3    1200 KB    20 ms
J4     450 KB    30 ms
J5     650 KB    15 ms

Figure: jobs J2, J1, J5 and J3 loaded into dynamically created partitions; J4 waits.
Job J2 arrives first, so load J2 into memory. Next J1 arrives; load J1 into memory, then
J5, J3 and J4. Consider the figure above for better understanding.
In figures 6.15 (a), (b), (c), (d) jobs J2, J1, J5 and J3 are loaded. The last job is J4, whose size is
450 KB, but the available memory is 225 KB. This is not enough to load J4, so J4 has to wait until
memory becomes available. Assume that after some time J5 finishes and releases its memory; the
available memory is then 225 + 650 = 875 KB, which is enough to load J4. Consider figures 6.15 (e) & (f).
Advantages
1. In this scheme partitions are changed dynamically,
so the scheme doesn't suffer from internal fragmentation.
2. Efficient memory and processor utilization.
Disadvantages
1. This scheme suffers from external fragmentation.
Compaction
Compaction is a technique of collecting all the free spaces together into one block; this block (or
partition) can then be allotted to some other job. For example, consider the figure; it shows the memory
allocation for the previous example.
The total internal fragmentation in this scheme is 100 + 75 + 75 = 250 KB, and the total external
fragmentation is 400 + 350 + 625 = 1375 KB. Collecting the internal and external fragmentation together into
one block gives 250 + 1375 = 1625 KB. This mechanism is said to be compaction; the compacted
memory is now 1625 KB. Consider the figure.
Now the scheduler can load job J3 (1200 KB) into the compacted memory, so efficient memory utilization is
possible using compaction.
Figure 12
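Compaction can be sketched as sliding the allocated blocks together and merging every hole into one free block (a simplified model; the block layout and names are my own, based on the example's fragment sizes):

```python
def compact(blocks):
    """blocks: list of (owner, size_kb); owner None marks a free hole.
    Slide the allocated blocks together and merge all holes into one
    big free block at the high end of memory."""
    allocated = [b for b in blocks if b[0] is not None]
    free_total = sum(size for owner, size in blocks if owner is None)
    return allocated + [(None, free_total)]

# Layout from the fixed-partition example: the loaded jobs plus the
# 250 KB of internal and 1375 KB of external fragments (sizes in KB).
memory = [("J2", 600), (None, 100), (None, 400), ("J4", 450), (None, 75),
          ("J1", 825), (None, 75), (None, 350), (None, 625)]
# compact(memory) ends with a single (None, 1625) hole, enough for J3.
```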
Paging
Paging is an efficient memory management scheme because it is a non-contiguous memory allocation
method. The previous (single-partition, multiple-partition) methods support contiguous memory
allocation: the entire process is loaded into the same partition. In paging, the process is divided into small
parts, and these are loaded anywhere in main memory.
The basic idea of paging is that physical memory (main memory) is divided into fixed-size blocks
called frames, and the logical address space (the user job) is divided into fixed-size blocks called pages;
page size and frame size must be equal. The size of a frame or page depends on the operating system.
Generally the page size is 4 KB.
In this scheme the operating system maintains a data structure called the page table, which is used for
mapping. The page table carries useful information: it tells which frames are allocated,
which frames are available, how many total frames there are, and so on. The general page table
consists of two fields: the page number and the frame number. Each operating system has its
own method for storing page tables; most allocate a page table for each process.
Every address generated by the CPU is divided into two parts: the 'page number' and the
'page offset' (displacement). The page number is used as an index into the page table. For better
understanding consider figure 6.17. The logical address space, i.e. the CPU-generated address space,
is divided into pages, each having a page number (P) and a displacement (D). The pages are loaded
into available free frames in physical memory.
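The page-number/offset split can be sketched as follows (a toy translation using byte addresses and the 4 KB page size from the text; the page table here is just a Python dict):

```python
PAGE_SIZE = 4096  # 4 KB pages, as in the text

def translate(logical_addr, page_table):
    """Split a logical address into (P, D), look up P in the page map
    table to get the frame, and keep the displacement D unchanged."""
    page, offset = divmod(logical_addr, PAGE_SIZE)
    frame = page_table[page]
    return frame * PAGE_SIZE + offset

# If page 0 sits in frame 5, logical address 100 maps to 5*4096 + 100.
```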
The mapping between page number and frame number is done by the page map table. The page map
table specifies which page is loaded into which frame; the displacement is carried over unchanged. For better
understanding consider the example below. There are two jobs in the ready queue, of sizes 16 KB and 24 KB, and the
page size is 4 KB. The available main memory is 72 KB (18 frames), so job 1 is divided into 4 pages and job 2
is divided into 6 pages. Each process maintains a page table. Consider the figure for better understanding.
The four pages of job 1 are loaded into different locations in main memory. The OS provides a page table
for each process; the page table specifies each page's location in main memory. The capacity of main memory in
our example is 18 frames, but the two available jobs occupy only 10 pages, so the remaining 8 frames are free.
The scheduler can use these free frames for other jobs.
Advantages
1. It supports non-contiguous allocation: a process need not be loaded as one block.
2. It does not suffer from external fragmentation.
3. Swapping is easy, because pages and frames are the same size.
4. Efficient memory utilization is possible.
Disadvantages
1. This scheme may suffer from 'page breaks'. For example, if the logical address space is 17 KB
and the page size is 4 KB, the job requires 5 frames, but the fifth frame contains only one KB;
the remaining 3 KB of that frame is wasted. This is said to be a page break.
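The 17 KB example works out as follows (a small helper of my own, not from the text):

```python
import math

def page_break_waste(job_kb, page_kb=4):
    """Frames needed for a job, and the KB wasted in the last frame."""
    frames = math.ceil(job_kb / page_kb)
    return frames, frames * page_kb - job_kb

# A 17 KB job with 4 KB pages needs 5 frames and wastes 3 KB.
```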
Segmentation
A segment can be defined as a logical grouping of instructions, such as a subroutine, array, or a data area.
Every program (job) is a collection of these segments. Segmentation is the technique for managing these
segments. For example consider the figure.
Each segment has a name and a length. An address specifies both the segment name
and the offset within the segment. For example, the segment 'Main' has length 100 KB; 'Main' is the
name of the segment. The operating system searches the entire main memory for free space to load a segment;
this mapping is done via the segment table. Each entry of the segment table has a
segment 'base' and a segment 'limit'.
The segment base contains the starting physical address where the segment resides in memory, and the
segment limit specifies the length of the segment. See the figure for the segmentation hardware.
The logical address consists of 2 parts: a segment number (S) and an offset into that segment (D). The
segment number is used as an index into the segment table. For example, consider the figure.
The logical address space (a job) is divided into 4 segments, numbered 0 to 3. Each segment has an
entry in the segment table: the limit specifies the size of the segment and the base specifies the starting
address of the segment. For example, segment 0 is loaded in main memory from 1500 KB to 2500 KB,
so 1500 KB is the base and 2500 - 1500 = 1000 KB is the limit.
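Segment translation with such base/limit entries can be sketched as follows (illustrative; the table here is a dict mapping segment number to (base, limit) in KB, and an out-of-range offset traps):

```python
def seg_translate(s, d, segment_table):
    """Index the segment table by segment number S; a legal offset D is
    below the limit and is relocated by the base, others trap to the OS."""
    base, limit = segment_table[s]
    if d < 0 or d >= limit:
        raise MemoryError("offset beyond segment limit: trap to OS")
    return base + d

segment_table = {0: (1500, 1000)}  # segment 0: base 1500 KB, limit 1000 KB
```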
Advantages of segmentation
Segmentation supports the user's view of memory, allows sharing and protection at the level of
logical units, and does not suffer from internal fragmentation.
Protection
The word 'protection' means security from unauthorized references to main memory. To provide
protection the operating system uses different types of mechanisms. In the paging memory
management scheme, the operating system provides protection bits that are associated with each
frame. Generally these bits are kept in the page table; one such bit is said to be the
valid/invalid bit.
Consider the below figure for better understanding.
The page map table contains a dedicated field, the valid/invalid bit. If the bit shows valid, it
means the particular page is loaded in main memory; if it shows invalid, it means the page is not in the
process's logical address space. Illegal addresses are trapped by using the valid/invalid bit. The
operating system sets this bit for each page to allow or disallow access to that page.
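The valid/invalid check can be sketched as follows (a toy model; the entry layout is my own, not the book's):

```python
def page_lookup(page, page_table):
    """page_table maps page number -> (frame, valid bit); a reference to
    an invalid page traps to the operating system."""
    frame, valid = page_table[page]
    if not valid:
        raise MemoryError("invalid page reference: trap to OS")
    return frame

page_table = {0: (3, True), 1: (7, True), 2: (0, False)}  # page 2 not loaded
```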
Paging vs. Segmentation

1. In paging, main memory is partitioned into frames (blocks); in segmentation, it is partitioned
into segments.
2. In paging, the logical address space is divided by the compiler or the memory management unit
(MMU); in segmentation, it is divided into segments specified by the programmer.
3. Paging suffers from internal fragmentation (page breaks); segmentation suffers from external
fragmentation.
4. In paging, the operating system maintains a free-frame list and need not search for a free
frame; in segmentation, it maintains the particulars of available memory.
5. In paging, the operating system maintains a page map table for mapping between pages and
frames; in segmentation, it maintains a segment map table for mapping purposes.
6. Paging does not support the user's view of memory; segmentation does.
7. In paging, the processor uses the page number and displacement (P, D) to calculate the
absolute address; in segmentation, the segment number and displacement (S, D).
8. Multilevel paging is possible.