Memory Management
1 Symbolic addresses
These are the addresses used in source code. Variable names, constants, and
instruction labels are the basic elements of the symbolic address space.
2 Relative addresses
At the time of compilation, a compiler converts symbolic addresses into
relative addresses.
3 Physical addresses
The loader generates these addresses at the time when a program is loaded
into main memory.
Virtual and physical addresses are the same in compile-time and load-
time address-binding schemes; they differ in the execution-time
address-binding scheme.
The set of all logical addresses generated by a program is referred to as
a logical address space. The set of all physical addresses corresponding
to these logical addresses is referred to as a physical address space.
The runtime mapping from virtual to physical addresses is done by the
memory management unit (MMU), which is a hardware device. The MMU
uses the following mechanism to convert a virtual address to a physical
address: the value in the base register is added to every address generated
by a user process, which is treated as an offset at the time it is sent to
memory. For example, if the base register's value is 10000, then an
attempt by the user to access location 100 will be dynamically relocated
to location 10100.
The user program deals with virtual addresses; it never sees the
real physical addresses.
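A minimal sketch of this mapping in Python, using the numbers from the
example above (the helper name mmu_translate is purely illustrative, not
part of any real MMU interface):

    def mmu_translate(logical_address, base_register):
        # The MMU adds the base (relocation) register to every logical
        # address the process generates before it reaches memory.
        return base_register + logical_address

    base_register = 10000
    print(mmu_translate(100, base_register))   # prints 10100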
Swapping
Swapping is a mechanism in which a process can be temporarily swapped
out of main memory (moved) to secondary storage (disk), making that
memory available to other processes. At some later time, the system
swaps the process back from secondary storage into main memory.
Though performance is usually affected by the swapping process, it helps
in running multiple large processes in parallel, and for that reason
swapping is also known as a technique for memory compaction.
The total time taken by the swapping process includes the time it takes to
move the entire process to the secondary disk and then to copy the process
back to memory, as well as the time the process takes to regain main
memory.
Let us assume that the user process is of size 2048 KB and the standard
hard disk where swapping will take place has a data transfer rate of around
1 MB per second. The actual transfer of the 2048 KB process to or from
memory will take
2048 KB / 1024 KB per second = 2 seconds = 2000 milliseconds
Counting both the swap out and the swap in, the transfer alone takes about
4000 milliseconds, plus whatever other overhead the process incurs while
it competes to regain main memory.
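The same arithmetic as a small Python sketch (the variable names and the
1024 KB/s approximation of 1 MB per second are assumptions made here
for illustration):

    process_size_kb = 2048        # size of the user process
    transfer_rate_kb_s = 1024     # roughly 1 MB per second

    one_way_s = process_size_kb / transfer_rate_kb_s   # swap out OR swap in
    round_trip_s = 2 * one_way_s                       # swap out AND swap in
    print(one_way_s, round_trip_s)                     # 2.0 seconds, 4.0 seconds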
Memory is a large array of bytes, where each byte has its own address.
Memory allocation can be classified into two methods: contiguous
memory allocation and non-contiguous memory allocation. The major
difference between contiguous and non-contiguous memory allocation is
that contiguous memory allocation assigns consecutive blocks of memory
to a process requesting memory, whereas non-contiguous memory
allocation assigns separate memory blocks at different locations in the
memory space, in a non-consecutive manner, to a process requesting
memory. We will discuss some more differences between contiguous and
non-contiguous memory allocation with the help of the comparison chart
shown below.
Comparison Chart
Solution
Contiguous memory allocation: The memory space must be divided into
fixed-sized partitions and each partition is allocated to a single process
only.
Non-contiguous memory allocation: Divide the process into several
blocks and place them in different parts of memory according to the
available memory space.
Definition Contiguous Memory Allocation
The operating system and the user's processes both must be
accommodated in the main memory. Hence the main memory is divided
into two partitions: in one partition the operating system resides and in
the other the user processes reside. Usually, several user processes must
reside in the memory at the same time, and therefore it is important to
consider how memory is allocated to the processes.
Contiguous memory allocation is one of the methods of memory
allocation. In contiguous memory allocation, when a process requests
memory, a single contiguous section of memory blocks is assigned to the
process according to its requirement.
Contiguous memory allocation can be achieved by dividing the memory
into fixed-sized partitions and allocating each partition to a single process
only. However, this bounds the degree of multiprogramming to the
number of fixed partitions made in the memory. Contiguous memory
allocation also leads to internal fragmentation: if a fixed-sized memory
block allocated to a process is slightly larger than its requirement, the
leftover memory space in the block is called internal fragmentation. When
the process residing in a partition terminates, the partition becomes
available for another process.
In the variable partitioning scheme, the operating system maintains a
table indicating which partitions of memory are free and which are
occupied by processes. Contiguous memory allocation speeds up the
execution of a process by reducing the overhead of address translation.
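As a rough sketch of such a table, the following Python fragment keeps a
list of (start, size, owner) partitions and allocates with a first-fit policy;
first-fit is just one possible placement policy, chosen here for illustration
rather than prescribed by the text:

    def first_fit(partitions, pid, size):
        # partitions: list of (start, size, owner); owner is None for a free hole
        for i, (start, psize, owner) in enumerate(partitions):
            if owner is None and psize >= size:
                leftover = psize - size
                partitions[i] = (start, size, pid)      # allocate the front of the hole
                if leftover:
                    # keep the remainder as a new free hole
                    partitions.insert(i + 1, (start + size, leftover, None))
                return start
        return None                                     # no hole is large enough

    table = [(0, 100, "OS"), (100, 400, None)]          # one free partition of 400 units
    print(first_fit(table, "P1", 150))                  # prints 100
    print(table)   # [(0, 100, 'OS'), (100, 150, 'P1'), (250, 250, None)]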
Definition Non-Contiguous Memory Allocation
Non-contiguous memory allocation allows a process to acquire several
memory blocks at different locations in memory according to its
requirement. Non-contiguous memory allocation also reduces the memory
wastage caused by internal and external fragmentation, since it can make
use of the memory holes created by fragmentation.
Paging and segmentation are the two ways that allow a process's physical
address space to be non-contiguous. In non-contiguous memory
allocation, the process is divided into blocks (pages or segments) which
are placed into different areas of the memory space according to the
availability of memory.
Non-contiguous memory allocation has the advantage of reducing
memory wastage, but it increases the overhead of address translation.
Because the parts of the process are placed at different locations in
memory, execution slows down, since time is consumed in address
translation.
Here, the operating system needs to maintain a table for each process
which contains the base address of each block acquired by the process in
the memory space.
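A tiny sketch of such a per-process table and the extra lookup it forces on
every memory reference (the block numbers and base addresses below are
made up purely for illustration):

    # Hypothetical table for one process: block index -> base address in memory
    block_table = {0: 7168, 1: 1024, 2: 5120}

    def translate(block, offset):
        # Every reference pays for this extra lookup: that is the
        # address-translation overhead of non-contiguous allocation.
        return block_table[block] + offset

    print(translate(1, 40))   # prints 1064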
Conclusion:
Contiguous memory allocation does not create address-translation
overhead and speeds up the execution of a process, but it increases
memory wastage. In turn, non-contiguous memory allocation creates
address-translation overhead and reduces the execution speed of a
process, but it increases memory utilization. So there are pros and cons of
both allocation methods.
Memory Allocation
Main memory usually has two partitions −
Low Memory − Operating system resides in this memory.
High Memory − User processes are held in high memory.
1 Single-partition allocation
In this type of allocation, the relocation-register scheme is used to protect
user processes from each other, and from changing operating-system code
and data. The relocation register contains the value of the smallest
physical address, whereas the limit register contains the range of logical
addresses. Each logical address must be less than the limit register (see
the sketch after this list).
2 Multiple-partition allocation
In this type of allocation, main memory is divided into a number of
fixed-sized partitions where each partition should contain only one
process. When a partition is free, a process is selected from the input
queue and is loaded into the free partition. When the process
terminates, the partition becomes available for another process.
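A minimal sketch of the relocation-register check from the
single-partition scheme above (the register values are made-up example
numbers):

    def map_address(logical, relocation, limit):
        # Every logical address must be less than the limit register;
        # valid addresses are relocated by adding the relocation register.
        if logical >= limit:
            raise MemoryError("addressing error: outside the process bounds")
        return relocation + logical

    print(map_address(300, 14000, 1200))   # prints 14300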
Fragmentation
As processes are loaded and removed from memory, the free memory
space is broken into little pieces. After some time, processes cannot be
allocated to memory blocks because the remaining blocks are too small,
and those memory blocks remain unused. This problem is known as
fragmentation.
Fragmentation is of two types −
1 External fragmentation
Total free memory space is enough to satisfy a request or to hold a
process, but it is not contiguous, so it cannot be used.
2 Internal fragmentation
The memory block assigned to a process is bigger than requested. Some
portion of the block is left unused, as it cannot be used by another
process (see the sketch below).
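A one-line illustration of internal fragmentation under fixed-sized blocks
(the block and process sizes are arbitrary example values):

    block_size = 4096      # fixed-sized memory block, in bytes
    process_size = 3300    # what the process actually needs
    internal_fragmentation = block_size - process_size
    print(internal_fragmentation)   # 796 bytes wasted inside the block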
Paging
A computer can address more memory than the amount physically
installed on the system. This extra memory is actually called virtual
memory, and it is a section of a hard disk that's set up to emulate the
computer's RAM. The paging technique plays an important role in
implementing virtual memory.
Paging is a memory management technique in which process address
space is broken into blocks of the same size called pages (size is power
of 2, between 512 bytes and 8192 bytes). The size of the process is
measured in the number of pages.
Similarly, main memory is divided into small fixed-sized blocks of
(physical) memory called frames and the size of a frame is kept the
same as that of a page to have optimum utilization of the main memory
and to avoid external fragmentation.
Address Translation
A page address is called a logical address and is represented by a page
number and an offset.
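As a sketch, assuming a hypothetical 4096-byte page size and a made-up
page table, a logical address splits into a page number and an offset like
this:

    PAGE_SIZE = 4096                         # assumed page size (a power of 2)

    def split(logical):
        page_number = logical // PAGE_SIZE   # which page the address falls in
        offset = logical % PAGE_SIZE         # position inside that page
        return page_number, offset

    page_table = {0: 5, 1: 2, 2: 7}          # page -> frame, purely illustrative

    def to_physical(logical):
        page, offset = split(logical)
        return page_table[page] * PAGE_SIZE + offset

    print(split(9300))        # prints (2, 1108)
    print(to_physical(9300))  # prints 29780  (frame 7 * 4096 + 1108)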