Booting and Dual Booting of Operating System
Process
A program under execution is called a process.
Mode Switch
A mode switch changes the CPU's privilege level between user mode and kernel mode, for example on a system call or interrupt, without necessarily changing the running process.
Context Switch
A context switch saves the state of the currently running process and restores the saved state of another, changing which process the CPU executes.
Key differences:
- Mode switch changes CPU privilege level, while context switch changes the executing process.
- Mode switch is a prerequisite for context switch.
- Only the kernel can initiate a context switch, while a mode switch can occur due to user process
actions (system calls, faults).
The kernel is like the car's owner, controlling who drives (executes) and when to shift gears (change
privilege level).
CPU-Bound vs I/O-Bound Processes
A CPU-bound process requires more CPU time than I/O time, so it spends most of its time in the running state. An I/O-bound process requires more I/O time than CPU time, so it spends most of its time in the waiting state.
4. Priority Scheduling:
• Definition: Each process is assigned a priority, and the CPU is allocated to the process with
the highest priority.
• Characteristics:
• Can be preemptive or non-preemptive.
• Processes with the same priority are scheduled using FCFS.
• Pros:
• Reduces waiting time for high-priority processes.
• Cons:
• Can lead to starvation of low-priority processes.
• May require aging to prevent starvation.
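The scheduling rules above can be sketched in a few lines of Python. This is a minimal non-preemptive version with illustrative process data (lower number = higher priority); Python's stable sort preserves arrival order among equal priorities, matching the FCFS tie-breaking rule:

```python
# Non-preemptive priority scheduling sketch (lower number = higher priority).
# Process names, burst times, and priorities below are illustrative.
def priority_schedule(processes):
    """processes: list of (name, burst_time, priority) tuples, in arrival order.
    Returns the execution order and per-process waiting times."""
    order = sorted(processes, key=lambda p: p[2])  # stable: FCFS among ties
    waiting, elapsed = {}, 0
    for name, burst, _ in order:
        waiting[name] = elapsed   # time spent waiting before running
        elapsed += burst
    return [p[0] for p in order], waiting

order, waiting = priority_schedule([("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 2)])
print(order)    # ['P2', 'P3', 'P1']
print(waiting)  # {'P2': 0, 'P3': 1, 'P1': 3}
```

Note how P1, despite arriving first, waits for both higher-priority processes; with a steady stream of high-priority arrivals this is exactly the starvation the cons list mentions.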
Memory Management
Main memory
Main memory, also known as RAM (Random Access Memory), is a computer's primary storage
used to temporarily store data and instructions that the CPU needs to access quickly. It is volatile,
meaning its contents are lost when the computer is powered off.
Memory Management
Memory Management is a technique to efficiently utilize the fixed amount of main memory by allocating it to various processes for their execution. It is needed to reduce memory wastage and fragmentation and to protect the processes, which reside in main memory together, from each other.
• Static Linking: Combines all necessary program modules into a single executable at
compile time, with no runtime dependencies. The entire program, including libraries, is
bundled together.
• Dynamic Linking: The basic concept is similar to dynamic loading. In dynamic linking, a small piece of code called a "stub" is included for each library routine reference. When the stub executes, it checks whether the needed routine is already in memory; if not, the program loads the routine into memory.
Swapping
A process must reside in main memory in order to execute. Swapping temporarily moves a process out of main memory (which is fast) into secondary memory (which is slower), and later back in, freeing main memory for other processes.
Contiguous Memory Allocation
Contiguous Memory Allocation is a technique in which a process is allocated a single contiguous
block or section of the memory according to its need.
1. First Fit - In this technique, the process is allocated to the first hole which has a size
greater than or equal to the process size.
2. Next Fit - This is similar to the First Fit. The only difference is it keeps a pointer on
the last allocated block and begins searching from there to allocate the process to a
hole.
3. Best Fit - In Best Fit, every block is checked for every process and the process is
allocated to that hole that has the least leftover space after the process allocation.
4. Worst Fit - This is opposite to the Best Fit and the process is allocated to that hole
that has the maximum leftover space after the process allocation.
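The four placement strategies can be sketched as functions over a list of hole sizes. The hole sizes and request size below are illustrative; each function returns the index of the chosen hole, or None if nothing fits:

```python
# Hole-placement strategy sketches. `holes` is a list of free-block sizes.
def first_fit(holes, size):
    for i, h in enumerate(holes):
        if h >= size:
            return i          # first hole large enough
    return None

def next_fit(holes, size, start):
    n = len(holes)
    for k in range(n):        # wrap around from the last allocation point
        i = (start + k) % n
        if holes[i] >= size:
            return i
    return None

def best_fit(holes, size):
    fits = [(h - size, i) for i, h in enumerate(holes) if h >= size]
    return min(fits)[1] if fits else None   # least leftover space

def worst_fit(holes, size):
    fits = [(h - size, i) for i, h in enumerate(holes) if h >= size]
    return max(fits)[1] if fits else None   # most leftover space

holes = [100, 500, 200, 300, 600]
print(first_fit(holes, 212))          # 1 (the 500 hole)
print(next_fit(holes, 212, start=2))  # 3 (searching onward from index 2)
print(best_fit(holes, 212))           # 3 (300 leaves the least waste, 88)
print(worst_fit(holes, 212))          # 4 (600 leaves the most waste, 388)
```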
Fragmentation is the phenomenon where free memory spaces are created in a system, but these
spaces are either too small or too scattered to be effectively used for new processes. This leads to
inefficient use of memory and can reduce the system's performance.
• Internal Fragmentation: Occurs when a process is allocated more memory than it needs,
leaving small, unused portions within allocated blocks.
• External Fragmentation: Occurs when free memory is divided into non-contiguous blocks,
preventing the allocation of memory to new processes even though sufficient total space
exists.
Solutions for External Fragmentation:
1. Compaction: Reorganizes memory by moving allocated blocks to consolidate free space
into one large contiguous block, making it available for new processes.
2. Paging and Segmentation: Allows the logical address space of processes to be non-
contiguous, enabling the allocation of physical memory in separate chunks, thus reducing
the impact of external fragmentation.
**TLB (Translation Lookaside Buffer):** A high-speed cache used to store recent translations of
virtual addresses to physical addresses, speeding up memory access by reducing the need to consult
the page table. Each TLB entry consists of a tag (representing the virtual address) and a value
(representing the corresponding physical address), and the TLB can quickly provide the physical
address if the virtual address is found in the cache.
• Page Number (p): The portion of the logical address that identifies a specific page within
the logical address space. It indicates which page a particular address belongs to.
• Page Offset (d): The portion of the logical address that specifies the exact location within a
page. It determines the specific word or byte within the page.
Physical Address is divided into:
• Frame Number (f): The portion of the physical address that identifies a specific frame
within the physical address space. It indicates which frame in physical memory contains the
data.
• Frame Offset (d): The portion of the physical address that specifies the exact location
within a frame. It determines the specific word or byte within the frame.
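With a page size of 2^n bytes, the (p, d) and (f, d) splits fall out of simple division. A small sketch, assuming an illustrative 4 KB page size and a hypothetical one-entry page table:

```python
# Logical-to-physical address translation sketch.
PAGE_SIZE = 4096  # 2^12 bytes, illustrative; offset d uses the low 12 bits

def split(logical_address):
    page_number = logical_address // PAGE_SIZE   # p
    offset = logical_address % PAGE_SIZE         # d
    return page_number, offset

p, d = split(8196)
print(p, d)  # page 2, offset 4

# Hypothetical page-table entry: page 2 maps to frame 5.
page_table = {2: 5}
physical = page_table[p] * PAGE_SIZE + d   # f * page_size + d
print(physical)  # 20484
```

The offset d is carried over unchanged; only the page number p is translated to a frame number f.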
Paging and Segmentation are both memory management techniques used in operating
systems, but they differ in how they divide and manage a process's memory.
Summary:
• Paging focuses on dividing memory into fixed-size blocks for efficient management,
primarily dealing with hardware aspects.
• Segmentation divides memory into logical, variable-sized units that correspond to different
parts of a program, offering a more user-oriented view of memory.
Virtual Memory: This is a memory management technique that creates an "illusion" of a very large
memory space by using both physical RAM and disk storage. When the RAM is full, parts of the
data or programs that are not actively being used are temporarily moved to a space on the hard drive
or SSD called the page file or swap space. This allows the system to run larger programs or multiple
programs simultaneously without running out of RAM.
Demand Paging is a memory management scheme used in operating systems where pages of data
are loaded into memory only when they are needed, rather than preloading all pages in advance.
This technique allows the system to save memory and reduce load times by only bringing in the
portions of a program that are actively being used.
Page Fault: A page fault occurs when a program tries to access a page not currently in physical
memory, causing the operating system to retrieve the missing page from secondary storage.
Page Hit: A page hit happens when a program accesses a page that is already present in physical
memory, allowing the operation to proceed without delay.
8. Random
• Definition: The Random algorithm replaces a randomly selected page from the memory
without any regard for the page's age or frequency of use.
• Pros:
• Simple to implement and requires minimal bookkeeping.
• Often performs surprisingly well in practice, as it avoids the overhead of more
complex algorithms.
• Cons:
• May lead to suboptimal performance because it does not use any strategy to predict
future page references.
• Can result in high page fault rates if the randomly selected pages are still in use or
needed soon.
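The Random policy can be sketched over a reference string. Frame count and references below are illustrative, and the RNG is seeded only so the run is reproducible:

```python
import random

# Random page replacement sketch: on a fault with all frames full,
# evict a frame chosen uniformly at random.
def random_replace(references, num_frames, seed=0):
    rng = random.Random(seed)   # seeded for reproducibility
    frames, faults = [], 0
    for page in references:
        if page in frames:
            continue            # page hit: no work needed
        faults += 1             # page fault
        if len(frames) < num_frames:
            frames.append(page)               # free frame available
        else:
            frames[rng.randrange(num_frames)] = page  # random eviction
    return faults

print(random_replace([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))
```

With 5 distinct pages the fault count can never drop below 5, and it varies between runs with different seeds, which is exactly the unpredictability listed under the cons.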
Process Synchronization
Race Condition
A race condition occurs when more than one process tries to access the same data at the same time and manipulate it. The outcome of the execution depends on the order in which the processes access the data.
A critical section is a segment of code that accesses shared resources and must be executed by only
one process or thread at a time to avoid conflicts or inconsistencies. Proper management of critical
sections is essential to ensure mutual exclusion and synchronization in concurrent programming.
Process synchronization in an operating system is a method used to ensure that multiple processes
or threads can work together without interfering with each other, especially when they share
resources like memory or files. It prevents issues like data corruption or inconsistency by
coordinating the execution order of processes to ensure they don't access shared resources at the
same time.
Mutex locks are synchronization tools used to ensure that only one thread or process can access a
shared resource or critical section at a time. They prevent concurrent access, thereby avoiding
conflicts and ensuring data consistency.
A **semaphore** is a synchronization mechanism used to control access to shared resources by
multiple processes or threads. It uses integer values to signal and manage the availability of
resources, with operations typically including **wait** (decrement) and **signal** (increment) to
coordinate access.
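The wait/signal pair maps directly onto acquire/release in Python's `threading.Semaphore`. A minimal sketch, with illustrative thread and slot counts, showing that at most two threads ever hold the resource at once:

```python
import threading
import time

# Counting-semaphore sketch: 5 workers compete for 2 resource slots.
sem = threading.Semaphore(2)     # integer value starts at 2
active, peak = 0, 0
lock = threading.Lock()          # protects the bookkeeping counters

def worker():
    global active, peak
    sem.acquire()                # wait(): decrement, block if zero
    with lock:
        active += 1
        peak = max(peak, active)
    time.sleep(0.01)             # hold the "resource" briefly
    with lock:
        active -= 1
    sem.release()                # signal(): increment, wake a waiter

threads = [threading.Thread(target=worker) for _ in range(5)]
for t in threads: t.start()
for t in threads: t.join()
print(peak)  # never exceeds 2
```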
Monitors are high-level synchronization constructs that provide a safe way to manage access to shared resources when solving the critical section problem. A monitor encapsulates shared resources and the operations on them, ensuring that only one process can execute a monitor's procedures at a time, which prevents race conditions.
Peterson's solution is a classic algorithm for achieving mutual exclusion in concurrent
programming, ensuring that only one of two processes can enter their critical sections at a time. It
uses two shared variables to communicate and coordinate between processes to prevent
simultaneous access.
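The two shared variables are a per-process flag array and a turn variable. A sketch in Python (variable names illustrative); note this relies on CPython's GIL giving sequentially consistent memory, whereas a C version would need memory barriers:

```python
import threading

# Peterson's algorithm sketch for two threads (ids 0 and 1).
flag = [False, False]   # flag[i]: thread i wants to enter
turn = 0                # whose turn it is to yield
counter = 0             # shared data updated in the critical section

def worker(i):
    global turn, counter
    other = 1 - i
    for _ in range(2000):
        flag[i] = True
        turn = other                        # politely defer to the other
        while flag[other] and turn == other:
            pass                            # busy-wait at the entry section
        counter += 1                        # critical section
        flag[i] = False                     # exit section

t0 = threading.Thread(target=worker, args=(0,))
t1 = threading.Thread(target=worker, args=(1,))
t0.start(); t1.start()
t0.join(); t1.join()
print(counter)  # 4000: no increment was lost to a race
```

Without the entry protocol, the unprotected `counter += 1` could lose updates when the threads interleave.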
The Producer-Consumer Problem is a classic synchronization issue where one or more producer
processes generate data and place it into a shared buffer, while one or more consumer processes
retrieve and use that data. The challenge is to manage access to the buffer to prevent data
corruption, ensure the buffer doesn’t overflow or underflow, and maintain proper coordination
between producers and consumers.
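A sketch of the bounded buffer using Python's `queue.Queue`, whose blocking put/get handle the overflow/underflow bookkeeping internally (buffer size and item count are illustrative):

```python
import threading
import queue

# Producer-consumer sketch with a bounded buffer of 3 slots.
buffer = queue.Queue(maxsize=3)   # put() blocks when full: no overflow
results = []

def producer():
    for i in range(10):
        buffer.put(i)             # waits if the consumer is behind
    buffer.put(None)              # sentinel: tell the consumer to stop

def consumer():
    while True:
        item = buffer.get()       # blocks when empty: no underflow
        if item is None:
            break
        results.append(item)

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
print(results)  # [0, 1, 2, ..., 9]: every item delivered once, in order
```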
The **Reader-Writer Problem** is a synchronization problem that involves managing access to a
shared resource, such as a database, where multiple processes or threads are either reading from or
writing to the resource:
- **Readers:** These processes only read data and do not modify it. Multiple readers can access the
resource simultaneously as long as no writer is active.
- **Writers:** These processes modify the data and require exclusive access to the resource. Only
one writer can access the resource at a time, and no readers can access the resource while a writer is
active.
The challenge is to ensure that readers and writers do not interfere with each other, preventing
issues such as data inconsistency or starvation of either readers or writers.
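A first-readers-preference sketch of this scheme: readers share a counter guarded by a mutex, the first reader locks writers out, and the last reader lets them back in. All names are illustrative, and note that this variant can starve writers, one of the pitfalls mentioned above:

```python
import threading

# Reader-writer sketch (readers-preference variant).
resource = threading.Lock()        # held by a writer, or by the reader group
read_count = 0
read_count_lock = threading.Lock() # protects read_count
data = 0

def reader(out):
    global read_count
    with read_count_lock:
        read_count += 1
        if read_count == 1:
            resource.acquire()     # first reader blocks writers
    out.append(data)               # many readers may read concurrently
    with read_count_lock:
        read_count -= 1
        if read_count == 0:
            resource.release()     # last reader readmits writers

def writer(value):
    global data
    with resource:                 # exclusive access while writing
        data = value

seen = []
threads = [threading.Thread(target=writer, args=(42,))] + \
          [threading.Thread(target=reader, args=(seen,)) for _ in range(3)]
for t in threads: t.start()
for t in threads: t.join()
print(seen)  # each reader saw either 0 or 42, never a half-written value
```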
The Dining Philosophers Problem is a classic synchronization problem used to illustrate and study
issues related to resource sharing and deadlock in concurrent systems. The problem involves five
philosophers sitting around a circular table with a fork placed between each pair of adjacent
philosophers. Each philosopher alternates between thinking and eating. To eat, a philosopher needs
to use the two forks adjacent to them.
Key Challenges:
• Deadlock: All philosophers may simultaneously pick up one fork and wait for the other,
resulting in a situation where none can eat.
• Starvation: A philosopher might never get both forks if other philosophers continuously eat.
Solutions:
• Resource Hierarchy: Number the forks and ensure philosophers pick up the lower-
numbered fork first, then the higher-numbered one.
• Waiter Solution: Introduce a central authority (waiter) to control the access to forks and
avoid deadlock.
• Allow at most four philosophers to sit at the table simultaneously.
• Allow a philosopher to pick up her forks only if both forks are available (to do this, she must pick them up inside a critical section).
• Use an asymmetric solution: an odd-numbered philosopher picks up her left fork first and then her right fork, whereas an even-numbered philosopher picks up her right fork first and then her left fork.
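The resource-hierarchy solution can be sketched directly: number the forks and always acquire the lower-numbered one first, which breaks the circular wait. Philosopher count and meal counts below are illustrative:

```python
import threading

# Dining philosophers via resource hierarchy: 5 philosophers, 5 forks.
N = 5
forks = [threading.Lock() for _ in range(N)]
meals = [0] * N

def philosopher(i):
    left, right = i, (i + 1) % N
    # Always grab the lower-numbered fork first: no circular wait possible.
    first, second = min(left, right), max(left, right)
    for _ in range(10):
        with forks[first]:
            with forks[second]:
                meals[i] += 1    # eating with both forks held

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(meals)  # [10, 10, 10, 10, 10]: everyone ate, no deadlock
```

If every philosopher instead grabbed her left fork first, all five could hold one fork and wait forever, which is exactly the deadlock described above.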
The Cigarette Smoker Problem is a synchronization problem that involves three smokers and a
central agent. Each smoker has a specific ingredient (tobacco, paper, or matches) and can only
smoke if they have all three ingredients. The central agent places two of the ingredients on the table,
and the smokers must collaborate to complete the set.
Problem Description:
• Smokers: There are three smokers, each with a unique ingredient. They can only smoke if
they have all three ingredients: tobacco, paper, and matches.
• Agent: The central agent places two of the three ingredients on the table at random.
• Goal: Smokers must figure out when they have the missing ingredient (based on what the
agent has placed) and access the table to pick it up to complete the set.
Key Challenges:
• Mutual Exclusion: Ensuring that only one smoker accesses the table at a time to pick up the
missing ingredient.
• Synchronization: Properly coordinating the placement of ingredients by the agent and the
smoking actions of the smokers.
Solutions:
• Semaphore-based Synchronization: Use semaphores to manage access to the table and
ensure that only one smoker can pick up the ingredients at a time.
• Mutexes and Condition Variables: Employ mutexes to protect shared resources and
condition variables to signal when a smoker can proceed with smoking.
The Cigarette Smoker Problem is used to demonstrate and address synchronization issues in
concurrent programming, particularly in resource allocation scenarios.
Deadlock
What is Deadlock?
Deadlock is a situation in computing where two or more processes are unable to proceed because
each is waiting for another to release resources, with the waiting forming a circular chain.
Deadlock can arise if the following four conditions hold simultaneously (Necessary
Conditions)
• Mutual Exclusion: Two or more resources are non-shareable (only one process can use a resource at a time).
• Hold and Wait: A process is holding at least one resource while waiting to acquire additional resources.
• No Preemption: A resource cannot be taken from a process unless the process releases it.
• Circular Wait: A set of processes wait for each other in a circular chain.
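The circular-wait condition can be checked mechanically: build a wait-for graph (an edge A -> B means "A waits for a resource held by B") and look for a cycle. A sketch with illustrative process names:

```python
# Circular-wait detection via depth-first search over a wait-for graph.
def has_cycle(waits_for):
    """waits_for: dict mapping a process to the processes it waits on."""
    visited, on_stack = set(), set()

    def dfs(node):
        visited.add(node)
        on_stack.add(node)               # nodes on the current DFS path
        for nxt in waits_for.get(node, []):
            if nxt in on_stack or (nxt not in visited and dfs(nxt)):
                return True              # back edge: a cycle exists
        on_stack.discard(node)
        return False

    return any(dfs(p) for p in list(waits_for) if p not in visited)

print(has_cycle({"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}))  # True: deadlock
print(has_cycle({"P1": ["P2"], "P2": ["P3"]}))                # False: a chain
```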
File System
The File System is a method and data structure for storing data and files in the system so that they can be fetched easily when required.
File Directories
A File Directory is itself a file that stores information about other files. It is a collection of entries, one per file, holding details such as name, type, location, and protection.
Single-Level Directory:
• Structure: There is just one directory that holds all the files for all users. Each file has a
unique name within this directory.
• Advantages: Simple to implement and manage; requires minimal overhead.
• Disadvantages: Limited scalability as the number of files and users grows. With many files,
it becomes difficult to manage and locate files, and the risk of name collisions increases.
Two-Level Directory:
• Structure: Consists of a root directory and a subdirectory for each user. The root directory
contains subdirectories named after users, and each user’s subdirectory contains their own
files.
• Advantages: Improves organization by segregating files by user, reducing the risk of name
collisions and making file management easier. Each file is identified by a path name, e.g.,
/user1/file1 or /user2/file2.
• Disadvantages: While it provides better organization, it still has limitations in terms of
scalability and flexibility compared to more complex structures.
Tree-Structured Directory:
• Structure: This organizes directories in a hierarchical tree format. There is a root directory,
and from it, multiple branches (subdirectories) can be created, leading to further
subdirectories and files.
• Advantages: Highly scalable and flexible. It supports grouping of related files and
directories, which makes file management, searching, and organization more efficient.
Allows for complex structures and easier navigation.
• Disadvantages: More complex to implement and manage compared to single-level and two-
level directories. Requires more overhead in terms of maintaining the directory structure and
handling paths.
Allocation of Files
Continuous Allocation:
• Structure: Files are stored in a single contiguous block of disk space.
• Advantages: Simple and efficient for sequential access since the entire file is stored in a
single place.
• Disadvantages: Can lead to fragmentation and inefficient use of disk space as files grow
and shrink.
Linked Allocation:
• Structure: Files are divided into blocks scattered throughout the disk. Each block contains a
pointer to the next block.
• Advantages: No need for contiguous disk space, which reduces fragmentation and makes
dynamic file size adjustments easier.
• Disadvantages: Slower access times due to the need to follow pointers, and the overhead of
managing pointers can be significant.
Indexed Allocation:
• Structure: Each file has an index block that contains pointers to all the data blocks of the
file.
• Advantages: Combines benefits of contiguous and linked allocations by allowing efficient
access and management of file blocks. Supports random access and avoids external
fragmentation.
• Disadvantages: Requires additional disk space for index blocks and can become complex if
multiple levels of indexing are used for very large files.
Disk Scheduling
It is the scheduling of the I/O requests made by a process for the disk.
In disk scheduling, "disk" refers to a storage device used for storing digital data, typically a hard
disk drive (HDD) or a solid-state drive (SSD).
Components of a Disk:
1. Platters: In an HDD, these are the circular disks coated with magnetic material where data
is stored.
2. Read/Write Heads: These are the mechanisms that move over the platters to read from or
write data to them.
3. Tracks: Concentric circles on the platters where data is recorded.
4. Sectors: Subdivisions of tracks, typically the smallest unit of storage on the disk.
5. Cylinders: The collection of tracks that are vertically aligned across all platters.
Important Terms related to Disk Scheduling Algorithms
• Seek Time - It is the time taken by the disk arm to locate the desired track to perform read
and write operations.
• Rotational Latency - The time taken for the desired sector of the disk to rotate into position under the Read/Write head is called Rotational Latency.
• Transfer Time - It is the time taken to transfer the data requested by the processes.
• Disk Access Time - Disk Access time is the sum of the Seek Time, Rotational Latency, and
Transfer Time.
• Disk Response Time: The time a request spends waiting before it gets to perform its I/O operation. The Average Response Time is the response time averaged over all requests.
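The access-time sum above can be made concrete with a small worked example; all numbers are illustrative, not from any particular drive:

```python
# Disk access time = seek time + rotational latency + transfer time.
seek_time = 8.0          # ms: moving the arm to the right track
rpm = 7200               # rotations per minute of the platters
# One full rotation takes 60000/rpm ms; on average the desired sector
# is half a rotation away.
rotational_latency = (60_000 / rpm) / 2   # ms
transfer_time = 0.5      # ms: moving the requested bytes

access_time = seek_time + rotational_latency + transfer_time
print(round(access_time, 3))  # 12.667 ms
```

Note that seek time and rotational latency dominate; this is why disk scheduling algorithms focus on minimizing head movement.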