Day 18
Objective:
Outcomes:
Theory:
I. CONTIGUOUS MEMORY ALLOCATION:
Reference Link:
Contiguous memory allocation (CMA) is needed for I/O devices that can only work with
contiguous ranges of physical memory; such devices are built that way to simplify their design.
On systems with an I/O memory management unit (IOMMU), this would not be an issue,
because a buffer that is contiguous in the device address space can be mapped by the IOMMU to
non-contiguous regions of physical memory. Without an IOMMU, there are two traditional ways
to obtain such a buffer:
● The device driver can allocate a chunk of physical memory at boot-time. This is reliable
because most of the physical memory would be available at boot-time. However, if the
I/O device is not used, then the allocated physical memory is just wasted.
● A chunk of physical memory can be allocated on demand, but it may be difficult to
find a contiguous free range of the required size. The advantage, though, is that
memory is only allocated when needed.
CMA solves this exact problem by providing the advantages of both of these approaches with
none of their downsides. The basic idea is to make it possible to migrate allocated physical pages
to create enough space for a contiguous buffer.
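As an illustration, a driver would normally obtain such a buffer through the DMA mapping API rather than by calling the CMA allocator directly; on kernels built with CONFIG_DMA_CMA, a large request of this kind can be satisfied from the CMA area. A minimal sketch (the function names are placeholders, not part of any real driver):

#include <linux/dma-mapping.h>

/* Allocate a buffer that is contiguous from the device's point of view.
 * On kernels configured with DMA_CMA, large requests like this one can
 * be carved out of the CMA area. */
static void *alloc_device_buffer(struct device *dev, size_t size,
                                 dma_addr_t *dma_handle)
{
        return dma_alloc_coherent(dev, size, dma_handle, GFP_KERNEL);
}

static void free_device_buffer(struct device *dev, size_t size,
                               void *cpu_addr, dma_addr_t dma_handle)
{
        dma_free_coherent(dev, size, cpu_addr, dma_handle);
}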
II. MEMORY ALLOCATION:
Reference Link:
Linux provides a variety of APIs for memory allocation. You can allocate small chunks using the
kmalloc or kmem_cache_alloc families, large virtually contiguous areas using vmalloc and its
derivatives, or you can directly request pages from the page allocator with alloc_pages. It is also
possible to use more specialized allocators, for instance cma_alloc or zs_malloc.
Most of the memory allocation APIs use GFP flags to express how that memory should be
allocated. The GFP acronym stands for “get free pages”, the underlying memory allocation
function.
kzalloc(<size>, GFP_KERNEL);
Of course there are cases when other allocation APIs and different GFP flags must be used.
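For example, code that runs in interrupt context is not allowed to sleep and therefore cannot use GFP_KERNEL. A rough sketch (the helper name is invented):

#include <linux/slab.h>
#include <linux/types.h>

/* Illustrative only: pick the GFP flags according to the calling context. */
static void *ctx_alloc(size_t size, bool may_sleep)
{
        /* GFP_KERNEL may block while reclaiming memory; GFP_ATOMIC must be
         * used where sleeping is not allowed, e.g. in interrupt handlers. */
        return kzalloc(size, may_sleep ? GFP_KERNEL : GFP_ATOMIC);
}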
The most straightforward way to allocate memory is to use a function from the kmalloc()
family. And, to be on the safe side it’s best to use routines that set memory to zero, like
kzalloc(). If you need to allocate memory for an array, there are kmalloc_array() and kcalloc()
helpers. The helpers struct_size(), array_size() and array3_size() can be used to safely
calculate object sizes without overflowing.
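A short sketch of how these helpers fit together, using a made-up struct foo (the types and function names are illustrative):

#include <linux/slab.h>
#include <linux/overflow.h>
#include <linux/types.h>

struct foo {                            /* made-up object type */
        u32 id;
        u64 payload;
};

struct foo_table {
        unsigned int count;
        struct foo entries[];           /* flexible array member */
};

/* kcalloc() zeroes the array and checks n * sizeof() for overflow. */
static struct foo *alloc_foo_array(unsigned int n)
{
        return kcalloc(n, sizeof(struct foo), GFP_KERNEL);
}

/* struct_size() computes sizeof(header) + n * sizeof(entries[0]) safely. */
static struct foo_table *alloc_foo_table(unsigned int n)
{
        struct foo_table *t = kzalloc(struct_size(t, entries, n), GFP_KERNEL);

        if (t)
                t->count = n;
        return t;
}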
The maximal size of a chunk that can be allocated with kmalloc is limited. The actual limit
depends on the hardware and the kernel configuration, but it is a good practice to use kmalloc for
objects smaller than page size.
Chunks allocated with kmalloc() can be resized with krealloc(). Similarly to kmalloc_array(), a
helper for resizing arrays is provided in the form of krealloc_array().
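A small sketch (the helper name is invented):

#include <linux/slab.h>

/* Grow an int array to new_n elements; existing contents are preserved.
 * On failure NULL is returned and the original buffer is left untouched. */
static int *grow_int_array(int *arr, size_t new_n)
{
        return krealloc_array(arr, new_n, sizeof(*arr), GFP_KERNEL);
}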
For large allocations you can use vmalloc() and vzalloc(), or directly request pages from the
page allocator. The memory allocated by vmalloc and related functions is not physically
contiguous.
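A brief sketch (the 4 MiB size is arbitrary):

#include <linux/vmalloc.h>

/* A few megabytes of zeroed, virtually contiguous memory; the backing
 * pages need not be physically contiguous. Release it with vfree(). */
static void *alloc_lookup_table(void)
{
        return vzalloc(4 * 1024 * 1024);
}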
If you are not sure whether the allocation size is too large for kmalloc, it is possible to use
kvmalloc() and its derivatives. It will try to allocate memory with kmalloc and if the allocation
fails it will be retried with vmalloc. There are restrictions on which GFP flags can be used with
kvmalloc; please see kvmalloc_node() reference documentation. Note that kvmalloc may return
memory that is not physically contiguous.
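A sketch of this pattern (the function names are invented; the exact header that declares kvmalloc and kvfree differs between kernel versions):

#include <linux/slab.h>
#include <linux/mm.h>

/* Tries the slab allocator first and falls back to vmalloc for large sizes,
 * so the result may not be physically contiguous. */
static void *alloc_flexible(size_t size)
{
        return kvzalloc(size, GFP_KERNEL);
}

static void free_flexible(void *p)
{
        kvfree(p);      /* correct for either underlying allocator */
}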
If you need to allocate many identical objects you can use the slab cache allocator. The cache
should be set up with kmem_cache_create() or kmem_cache_create_usercopy() before it can
be used. The second function should be used if a part of the cache might be copied to the
userspace. After the cache is created kmem_cache_alloc() and its convenience wrappers can
allocate memory from that cache.
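A sketch of the whole life cycle of such a cache, using a made-up struct session as the cached object (names are illustrative):

#include <linux/slab.h>
#include <linux/errno.h>

struct session {                        /* made-up object type */
        int id;
        char name[32];
};

static struct kmem_cache *session_cache;

static int session_cache_init(void)
{
        session_cache = kmem_cache_create("session_cache",
                                          sizeof(struct session), 0,
                                          SLAB_HWCACHE_ALIGN, NULL);
        return session_cache ? 0 : -ENOMEM;
}

static struct session *session_new(void)
{
        /* kmem_cache_zalloc() is the zeroing convenience wrapper. */
        return kmem_cache_zalloc(session_cache, GFP_KERNEL);
}

static void session_free(struct session *s)
{
        kmem_cache_free(session_cache, s);
}

static void session_cache_exit(void)
{
        kmem_cache_destroy(session_cache);
}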
When the allocated memory is no longer needed it must be freed. Memory allocated with
kmalloc, vmalloc and kvmalloc can be released with kvfree(). Objects allocated from a slab cache
are returned to it with kmem_cache_free(), and once the cache itself is no longer needed it must
be destroyed with kmem_cache_destroy().
The following program (firstfit.cpp, compiled later in this exercise with g++) demonstrates the First Fit allocation strategy:
#include <stdio.h>

// First Fit: place each process in the first memory block large enough to hold it.
// (Processes that fit in no block are simply left unallocated.)
void implimentFirstFit(int blockSize[], int m, int processSize[], int n)
{
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < m; j++) {
            if (blockSize[j] >= processSize[i]) {
                printf("Process %d (size %d) -> Block %d\n", i + 1, processSize[i], j + 1);
                blockSize[j] -= processSize[i];   // the remainder of the block stays available
                break;
            }
        }
    }
}

int main()
{
    int blockSize[] = {100, 50, 30, 120, 35};
    int processSize[] = {20, 60, 70, 40};
    int m = sizeof(blockSize)/sizeof(blockSize[0]);
    int n = sizeof(processSize)/sizeof(processSize[0]);
    implimentFirstFit(blockSize, m, processSize, n);
    return 0;
}
Procedure:
Output:
III. FRAGMENTATION
Fragmentation in Linux occurs when free memory space is broken up into smaller,
unusable pieces that cannot be used to satisfy larger memory allocation requests. This can lead to
inefficient memory usage and reduced system performance.
There are two main types of fragmentation in Linux: internal fragmentation and external
fragmentation.
Internal fragmentation: This occurs when a process is allocated more memory than it actually
needs. The unused portion of the memory becomes unavailable for other processes, leading to
waste of memory resources.
External fragmentation: This occurs when there is enough free memory to satisfy a memory
request, but the available free memory is divided into small, scattered pieces that cannot be used
to fulfill a larger memory request. This type of fragmentation is more common in systems with
dynamic memory allocation, where memory is allocated and freed frequently.
To mitigate fragmentation, Linux uses various memory allocation algorithms, such as First Fit,
Best Fit, and Worst Fit, to allocate memory to processes. In addition, memory compaction
techniques, such as defragmentation, can also be used to consolidate fragmented memory blocks
and free up unused memory. Memory compaction involves moving allocated memory blocks so
that they are contiguous, and freeing up any unused memory at the end of the memory space.
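For comparison with the First Fit program shown earlier, the following is a minimal Best Fit sketch (same block and process sizes; this program is illustrative and not part of the original exercise code):

#include <stdio.h>

// Best Fit: place each process in the smallest block that can still hold it.
void implementBestFit(int blockSize[], int m, int processSize[], int n)
{
    for (int i = 0; i < n; i++) {
        int best = -1;
        for (int j = 0; j < m; j++) {
            if (blockSize[j] >= processSize[i] &&
                (best == -1 || blockSize[j] < blockSize[best]))
                best = j;
        }
        if (best != -1) {
            printf("Process %d (size %d) -> Block %d\n", i + 1, processSize[i], best + 1);
            blockSize[best] -= processSize[i];   // leftover space stays in the block
        } else {
            printf("Process %d (size %d) not allocated\n", i + 1, processSize[i]);
        }
    }
}

int main()
{
    int blockSize[] = {100, 50, 30, 120, 35};
    int processSize[] = {20, 60, 70, 40};
    int m = sizeof(blockSize)/sizeof(blockSize[0]);
    int n = sizeof(processSize)/sizeof(processSize[0]);
    implementBestFit(blockSize, m, processSize, n);
    return 0;
}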
Overall, efficient memory usage and management is essential for optimal system performance in
Linux. Several commands can be used to check memory allocation and fragmentation. Here are
some commonly used commands:
● pmap: This command shows the memory map of a process, including the amount of
memory allocated to each section of the process.
● smem: This command provides a summary of memory usage by each process and can
help you identify processes that are consuming excessive memory.
● Smem installation: sudo apt-get install smem
● Compilation: g++ -g firstfit.cpp -o firstfit
● Run: ./firstfit, then inspect its memory usage with smem while it is running (see the
example after the metric descriptions below; smem reports on running processes rather
than launching a program itself).
● When using the smem command for memory management, Swap refers to the space on
the hard drive used to store data not currently being used in RAM, while PSS, RSS, and
USS are different metrics used to measure a process's memory usage.
● PSS: PSS stands for "Proportional Set Size". It is the process's unshared memory plus its
proportional share of any memory it shares with other processes, so the PSS values of all
processes add up to the total memory actually in use.
● RSS: RSS stands for "Resident Set Size". It is the portion of a process's memory that is
held in RAM.
● USS: USS stands for "Unique Set Size". It is the portion of a process's memory that is
dedicated solely to that process and is not shared with any other process.
You can use various options to customize the output of the smem command, for example to
filter the process list or change how sizes are reported. To get help with the smem command and
see a full list of options, run smem --help.
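As an illustration with the firstfit program built above (the program exits almost immediately, so you may want to add a short pause, such as a getchar() before return, to keep it resident while you inspect it from another terminal):
./firstfit &
smem | grep firstfit
The matching row shows the Swap, USS, PSS and RSS figures for the firstfit process.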
● valgrind: This is a tool for memory profiling and debugging that can help identify
memory leaks and other memory-related issues.
Procedure:
1. Install Valgrind on your Linux system. You can install it using your system's
package manager. For example, on Ubuntu or Debian, you can install Valgrind by
running the command: sudo apt-get install valgrind.
2. Compile your C++ program with debugging symbols using the -g flag. For
example, if your program is named myprogram.cpp, compile it using the
following command: g++ -g myprogram.cpp -o myprogram.
3. Run Valgrind on your compiled program using the following command: valgrind
./myprogram. This will start your program under Valgrind's supervision and
allow Valgrind to monitor its memory usage.
4. Valgrind will report any memory leaks or other memory-related issues it finds in
your program. You can review these issues and take corrective actions to fix
them. Valgrind provides detailed information about each issue, including the
location in your code where the issue occurs and a stack trace that shows the call
stack leading up to the memory allocation or deallocation.
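For a more detailed report, Valgrind's default memcheck tool can be asked to describe every leak it finds:
valgrind --leak-check=full ./myprogram
This lists the size of each leaked block together with the call stack of the allocation that created it.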
● strace: This command allows you to trace the system calls made by a process, including the
memory-related calls (such as brk, mmap and munmap) that allocation functions like malloc
and free use internally.
Procedure:
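A possible procedure (the exact option set may vary slightly between strace versions):
1. Install strace if it is not already present, for example: sudo apt-get install strace.
2. Compile your program with debugging symbols as before: g++ -g myprogram.cpp -o
myprogram.
3. Run strace ./myprogram to see every system call the program makes, or limit the
output to memory-related calls with strace -e trace=memory ./myprogram; this
shows the brk, mmap and munmap calls issued on behalf of malloc and free.
4. To save the trace for later review, write it to a file with strace -o trace.log
./myprogram.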
● heaptrack: This is a tool for memory profiling that can help you identify memory
allocation hotspots and optimize memory usage.
Procedure:
1. Install heaptrack on your Linux system. You can install it using your system's
package manager. For example, on Ubuntu or Debian, you can install heaptrack
by running the command: sudo apt-get install heaptrack.
2. Compile your C++ program with debugging symbols using the -g flag. For
example, if your program is named myprogram.cpp, compile it using the
following command: g++ -g myprogram.cpp -o myprogram.
3. Run heaptrack on your compiled program using the following command:
heaptrack ./myprogram. This will start your program under heaptrack's
supervision and allow heaptrack to monitor its memory usage.
4. Once your program has completed execution, heaptrack will generate a report file
that provides detailed information about the memory usage of your program,
including memory allocations and deallocations made by your program and any
memory leaks or other memory-related issues that were detected.
5. You can review the report file to identify any memory-related issues in your
program and take corrective actions to fix them. The report file generated by
heaptrack provides detailed information about each memory allocation and
deallocation, including the location in your code where the allocation or
deallocation occurred, the size of the allocation, and the call stack leading up to
the allocation or deallocation.
● gdb: This is a debugger that allows you to debug memory-related issues in your program,
including memory allocation and deallocation errors.
Procedure:
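A possible procedure (a generic sketch; adapt the breakpoints to your own program):
1. Install gdb if needed: sudo apt-get install gdb.
2. Compile your program with debugging symbols: g++ -g myprogram.cpp -o myprogram.
3. Start the debugger with gdb ./myprogram, set a breakpoint with break main (or
break <function>), and start execution with run.
4. When the program stops or crashes, use backtrace to inspect the call stack, print
<variable> to examine values, and watch <variable> to stop whenever a particular
memory location changes. This helps pinpoint invalid frees, double frees and other
allocation errors reported by tools such as Valgrind.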
By using memory allocation algorithms such as Best Fit, Worst Fit, and First Fit, you can reduce
external fragmentation and make more efficient use of the available memory.
Assessment Pattern:
Sl. No. | Aspect Type (M or J) | Aspect of Description | Extra Aspect of Description (only for J) | Requirement | Max. Marks
0 - Exceeded 90 mins
1 - Completed within 60 mins
Total Marks: 10