
Booting and Dual Booting of Operating System


Operating System: It is the interface between the user and the computer hardware.

Types of Operating System (OS):


1. Batch OS – A set of similar jobs are stored in the main memory for execution. A job gets
assigned to the CPU, only when the execution of the previous job completes.
2. Multiprogramming OS – The main memory consists of jobs waiting for CPU time. The OS
selects one of the processes and assigns it to the CPU. Whenever the executing process
needs to wait for any other operation (like I/O), the OS selects another process from the job
queue and assigns it to the CPU. This way, the CPU is never kept idle and the user gets the
flavor of getting multiple tasks done at once.
3. Multitasking OS – Multitasking OS combines the benefits of Multiprogramming OS and
CPU scheduling to perform quick switches between jobs. The switch is so quick that the
user can interact with each program as it runs.
4. Time Sharing OS – Time-sharing systems require interaction with the user to instruct the
OS to perform various tasks. The OS responds with an output. The instructions are usually
given through an input device like the keyboard.
5. Real Time OS – Real-Time OS are usually built for dedicated systems to accomplish a
specific set of tasks within deadlines.
What is a system call?
A system call is the mechanism by which a user program requests a service (such as file I/O, memory allocation, or process creation) from the operating system kernel.
In which mode do system calls execute? In kernel mode.
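A minimal sketch, assuming a POSIX system such as Linux: the C library's write() wrapper traps into the kernel, the request runs in kernel mode, and control then returns to user mode.

```c
#include <string.h>
#include <unistd.h>   /* write(): wrapper around the write system call */

int main(void) {
    const char msg[] = "hello from user mode\n";
    /* The wrapper issues the system call; the actual copy to the terminal
       happens in kernel mode, after which execution resumes in user mode. */
    write(STDOUT_FILENO, msg, strlen(msg));
    return 0;
}
```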
System Software: System software serves as the interface between the hardware and the user
applications, providing essential functions to manage and control computer hardware. It includes
operating systems, device drivers, utility programs, and system services that ensure the computer
operates efficiently.
Application Software: Application software consists of programs designed to help users perform
specific tasks or applications, such as word processing, web browsing, media playing, and database
management. These are built on top of the system software and directly address user needs and
productivity.

Booting and Dual Booting of Operating System
When a computer or any other computing device is in a powerless state, its operating system
remains stored in secondary storage like a hard disk or SSD. But, when the computer is started, the
operating system must be present in the main memory or RAM of the system.
What is Booting?
When a computer system is started, there is a mechanism in the system that loads the operating
system from the secondary storage into the main memory, or RAM, of the system. This is called the
booting process of the system.

What is Dual Booting?


When two operating systems are installed on a computer system, it is called dual booting. In fact,
multiple operating systems can be installed on such a system. But how does the system know which
operating system to boot? A boot loader that understands multiple file systems and multiple
operating systems can occupy the boot space. Once loaded, it can boot one of the operating systems
available on the disk. The disk can have multiple partitions, each containing a different type of
operating system. When a computer system turns on, a boot manager program displays a menu,
allowing the user to choose the operating system to use.

Process
A program under execution is called a process.

What is Process Management?


Process management is a key part of an operating system. It controls how processes are created, executed, and terminated, and keeps the computer running smoothly by handling the active processes. This includes suspending or stopping processes, deciding which processes should get more CPU attention, and more.

Context Switching of Process


The process of saving the context of one process and loading the context of another process is
known as Context Switching. In simple terms, it is like loading and unloading the process from the
running state to the ready state.

When Does Context Switching Happen?


Context switching typically happens when:
• A high-priority process arrives in the ready state (i.e., it has a higher priority than the running process).
• An interrupt occurs.
• A switch between user and kernel mode takes place (though this does not always require one).
• Preemptive CPU scheduling is used.

Mode Switch

- Occurs when the CPU privilege level changes


- Happens when a system call is made or a fault occurs
- Kernel works in a more privileged mode than a standard user task
- Allows a user process to access kernel-only resources
- Currently executing process remains the same
- From user mode to kernel mode (e.g., when a system call is made or a fault occurs)
- From kernel mode to user mode (e.g., when the kernel finishes handling an interrupt or system
call)
- Typically occurs before a context switch

Context Switch

- Occurs when the currently executing process is changed


- Involves switching between two processes or threads
- Requires a mode switch to occur first (to access kernel-only resources)
- Only the kernel can cause a context switch
- Involves saving and restoring process state (registers, memory, etc.)

Key differences:

- Mode switch changes CPU privilege level, while context switch changes the executing process.
- Mode switch is a prerequisite for context switch.
- Only the kernel can initiate a context switch, while a mode switch can occur due to user process
actions (system calls, faults).

Think of it like a car:

- Mode switch is like shifting gears (changing privilege level).


- Context switch is like changing drivers (switching processes).

The kernel is like the car's owner, controlling who drives (executes) and when to shift gears (change
privilege level).
CPU-Bound vs I/O-Bound Processes
A CPU-bound process requires more CPU time or spends more time in the running state. An I/O-
bound process requires more I/O time and less CPU time. An I/O-bound process spends more time
in the waiting state.

What is a Process?


In computing, a process is the instance of a computer program that is being executed by one or
many threads.

Terminologies Used in CPU Scheduling


• Arrival Time: Time at which the process arrives in the ready queue.
• Completion Time: Time at which process completes its execution.
• Burst Time: Time required by a process for CPU execution.
• Turn Around Time: Time Difference between completion time and arrival time.
Or
The time elapsed between the arrival of a process and its completion is known as turnaround
time.
• Turn Around Time = Completion Time – Arrival Time
• Waiting Time(W.T): Time Difference between turn around time and burst time.
This is a process’s duration in the ready queue before it begins executing
• Waiting Time = Turn Around Time – Burst Time

• Response Time: It is the duration between the arrival of a process and the first time it runs.
Response Time = Time it Started Executing – Arrival Time

1. First Come First Serve (FCFS):


• Definition: FCFS is the simplest scheduling algorithm where the process that arrives first is
executed first.
• Characteristics:
• Non-preemptive.
• Follows a First-In-First-Out (FIFO) queue.
• Pros:
• Simple and easy to implement.
• Cons:
• High average waiting time.
• Can lead to the convoy effect, where short processes wait for a long process to finish.
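A small worked sketch applying the terminology above under FCFS; the arrival and burst times are hypothetical example values.

```c
#include <stdio.h>

/* Hypothetical processes: arrival and burst times in ms. */
struct proc { int arrival, burst; };

int main(void) {
    struct proc p[] = { {0, 5}, {1, 3}, {2, 8} };
    int n = sizeof p / sizeof p[0];
    int time = 0;
    for (int i = 0; i < n; i++) {                   /* FCFS: run in arrival order */
        if (time < p[i].arrival) time = p[i].arrival;
        int completion = time + p[i].burst;
        int turnaround = completion - p[i].arrival; /* TAT = CT - AT */
        int waiting    = turnaround - p[i].burst;   /* WT  = TAT - BT */
        int response   = time - p[i].arrival;       /* RT: first run - arrival (non-preemptive) */
        printf("P%d: CT=%d TAT=%d WT=%d RT=%d\n",
               i + 1, completion, turnaround, waiting, response);
        time = completion;
    }
    return 0;
}
```

For these values, P3 waits 6 ms behind P1 and P2, which is the convoy effect in miniature.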
2. Shortest Job First (SJF):
• Definition: SJF schedules the process with the shortest execution time next, minimizing the
average waiting time.
• Characteristics:
• Can be preemptive or non-preemptive.
• Requires knowledge of the process's execution time.
• Pros:
• Minimizes average waiting time.
• Cons:
• May cause starvation if short processes keep arriving.
• Difficult to predict the exact execution time.

3. Longest Job First (LJF):


• Definition: LJF schedules the process with the longest execution time next, the opposite of
SJF.
• Characteristics:
• Generally non-preemptive.
• Prioritizes longer processes.
• Pros:
• Simple to implement.
• Cons:
• High average waiting time.
• May lead to the convoy effect.

4. Priority Scheduling:
• Definition: Each process is assigned a priority, and the CPU is allocated to the process with
the highest priority.
• Characteristics:
• Can be preemptive or non-preemptive.
• Processes with the same priority are scheduled using FCFS.
• Pros:
• Reduces waiting time for high-priority processes.
• Cons:
• Can lead to starvation of low-priority processes.
• May require aging to prevent starvation.

5. Round Robin (RR):


• Definition: RR is a preemptive scheduling algorithm where each process is assigned a fixed
time slot (quantum) and cycles through processes.
• Characteristics:
• Preemptive.
• Fair, as every process gets an equal share of CPU time.
• Pros:
• Simple and ensures no starvation.
• Cons:
• Context switching overhead can be high.
• Performance depends on the size of the time quantum.
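A minimal Round Robin sketch, assuming hypothetical burst times, all processes arriving at t = 0, and a time quantum of 2:

```c
#include <stdio.h>

#define QUANTUM 2   /* assumed time slice */

int main(void) {
    int burst[]     = { 5, 3, 8 };   /* hypothetical burst times */
    int remaining[] = { 5, 3, 8 };
    int n = 3, time = 0, done = 0;

    while (done < n) {
        for (int i = 0; i < n; i++) {           /* cycle through the ready queue */
            if (remaining[i] == 0) continue;
            int slice = remaining[i] < QUANTUM ? remaining[i] : QUANTUM;
            time += slice;                      /* run the process for one slice */
            remaining[i] -= slice;
            if (remaining[i] == 0) {
                done++;
                printf("P%d finishes at t=%d, TAT=%d, WT=%d\n",
                       i + 1, time, time, time - burst[i]);  /* all arrivals = 0 */
            }
        }
    }
    return 0;
}
```

A smaller quantum improves responsiveness but increases the number of switches; a very large quantum degenerates into FCFS.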

6. Shortest Remaining Time First (SRTF):


• Definition: SRTF is a preemptive version of SJF where the process with the smallest
remaining time is executed next.
• Characteristics:
• Preemptive.
• Minimizes average waiting time.
• Pros:
• Efficient for minimizing turnaround time.
• Cons:
• May cause starvation of longer processes.
• Frequent context switching can be costly.

7. Longest Remaining Time First (LRTF):


• Definition: LRTF is the preemptive version of LJF where the process with the longest
remaining time is executed next.
• Characteristics:
• Preemptive.
• Prioritizes processes with the longest remaining time.
• Pros:
• Maximizes throughput for long processes.
• Cons:
• High waiting time and potential starvation for shorter processes.

8. Highest Response Ratio Next (HRRN):


• Definition: HRRN schedules the process with the highest response ratio, which considers
both waiting time and service time.
• Characteristics:
• Non-preemptive.
• Balances between SJF and FCFS to prevent starvation.
• Pros:
• Fairer than SJF, reduces starvation.
• Cons:
• Requires calculation of response ratio for each process.
• Implementation can be complex.

9. Multilevel Queue Scheduling:


• Definition: Processes are divided into different queues based on priority or process type,
and each queue has its own scheduling algorithm.
• Characteristics:
• Non-preemptive or preemptive.
• Different queues for different types of processes (e.g., system, interactive).
• Pros:
• Allows different scheduling policies for different process types.
• Cons:
• Can be inflexible and lead to starvation of lower-priority queues.

10. Multilevel Feedback Queue Scheduling (MLFQ):


• Definition: MLFQ allows processes to move between different queues based on their
behavior and age.
• Characteristics:
• Preemptive.
• Dynamic adjustment of process priority based on behavior.
• Pros:
• More flexible and fairer than static multilevel queue scheduling.
• Cons:
• Complex to implement and manage.
• Can lead to high context-switching overhead.

What trade-offs are involved in selecting a scheduling algorithm?

1.Throughput vs. Waiting Time


• Throughput refers to the number of processes that complete their execution per time unit.
Waiting Time is the amount of time a process spends waiting in the ready queue before
getting executed.
• Trade-off Explanation: If you choose a scheduling algorithm that prioritizes throughput
(like Shortest Job First), it might increase the waiting time for longer processes. For
example, in SJF, processes with shorter burst times are executed first, maximizing the
number of processes completed in a given time. However, if a long process arrives, it may
have to wait for several shorter processes to finish, leading to an increase in its waiting time.

2. Response Time vs. CPU Utilization


• Response Time is the time it takes from when a request was submitted until the first
response is produced. CPU Utilization refers to the percentage of time the CPU is actively
working.
• Trade-off Explanation: Algorithms like Round Robin improve response time by frequently
switching between processes, ensuring that no process waits too long before getting some
CPU time. However, this frequent context switching can reduce CPU utilization because
more time is spent switching between processes rather than executing them.

3. Fairness vs. Turnaround Time


• Fairness ensures that every process gets an equal share of the CPU. Turnaround Time is
the total time taken to execute a process, from submission to completion.
• Trade-off Explanation: An algorithm like Round Robin is fair because it allocates CPU
time evenly across all processes. However, this fairness can lead to increased turnaround
times, especially for short processes that could have completed quickly but are instead
repeatedly interrupted by other processes.

Memory Management

Here, we will cover the following memory management topics:

• What is Main Memory?


• What is Memory Management?
• Why Memory Management is Required?
• Logical Address Space and Physical Address Space
• Static and Dynamic Loading
• Static and Dynamic Linking
• Swapping
• Contiguous Memory Allocation
• Memory Allocation
• First Fit
• Best Fit
• Worst Fit
• Fragmentation
• Internal Fragmentation
• External Fragmentation
• Paging

Main memory
Main memory, also known as RAM (Random Access Memory), is a computer's primary storage
used to temporarily store data and instructions that the CPU needs to access quickly. It is volatile,
meaning its contents are lost when the computer is powered off.

Memory Management
Memory Management is the technique of efficiently utilizing the fixed amount of main memory by allocating it to various processes for their execution. It is needed to reduce memory wastage and fragmentation and to protect the processes that reside in main memory together from one another.

Why Memory Management is Required?


• Allocate and de-allocate memory before and after process execution.
• To keep track of the memory space used by each process.
• To minimize fragmentation issues.
• To ensure proper utilization of main memory.
• To maintain data integrity while processes are executing.
Logical and Physical Address Space
• Logical Address Space: An address generated by the CPU is known as a “Logical
Address”. It is also known as a Virtual address. Logical address space can be defined as the
size of the process. A logical address can be changed.
• The physical address space refers to the set of all physical addresses that correspond to the
logical addresses mapped by the Memory Management Unit (MMU). It represents the actual
locations in memory hardware where data is stored, and these addresses remain constant.

Static and Dynamic Loading


Loading a process into the main memory is done by a loader. There are two different types of
loading :
• Static Loading: Static Loading is basically loading the entire program into a fixed address.
It requires more memory space.
• Dynamic loading is a memory management technique where a routine or part of a program
is not loaded into memory until it is actually needed during execution. This approach
optimizes memory usage by ensuring that only the necessary code is loaded, leaving unused
portions on disk.
• A routine is a specific block of code within a program designed to perform a particular task
or function.

Static and Dynamic Linking


To perform a linking task a linker is used. A linker is a program that takes one or more object files
generated by a compiler and combines them into a single executable file.

• Static Linking: Combines all necessary program modules into a single executable at
compile time, with no runtime dependencies. The entire program, including libraries, is
bundled together.
• Dynamic Linking: The basic concept of dynamic linking is similar to dynamic loading. In
dynamic linking, “Stub” is included for each appropriate library routine reference. A stub is
a small piece of code. When the stub is executed, it checks whether the needed routine is
already in memory or not. If not available then the program loads the routine into memory.
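As a hedged illustration of dynamic linking on a POSIX system, dlopen/dlsym resolve a routine only when it is needed at run time (the library name libm.so.6 is what typical Linux systems use; treat it as an assumption for other platforms):

```c
#include <dlfcn.h>    /* dlopen, dlsym, dlclose */
#include <stdio.h>

int main(void) {
    void *handle = dlopen("libm.so.6", RTLD_LAZY);   /* load the math library lazily */
    if (!handle) { fprintf(stderr, "%s\n", dlerror()); return 1; }

    /* Resolve the "cos" routine only now, at run time. */
    double (*cosine)(double) = (double (*)(double)) dlsym(handle, "cos");
    if (cosine) printf("cos(0) = %f\n", cosine(0.0));

    dlclose(handle);
    return 0;
}
```

On older glibc versions the program must be linked with -ldl.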

Swapping
When a process is executed, it must reside in main memory. Swapping is the technique of temporarily moving a process from main memory (which is fast) to secondary memory, and later bringing it back into main memory so it can continue execution.
Contiguous Memory Allocation
Contiguous Memory Allocation is a technique in which a process is allocated a single contiguous
block or section of the memory according to its need.

Memory Partition Technique


• Fixed Size Partition - In the Fixed Size Partition technique, the memory is divided into equal
fixed-sized partitions.
• Dynamic Size Partition - The memory is divided dynamically according to the process size.
It uses memory more efficiently.

1. First Fit - In this technique, the process is allocated to the first hole which has a size
greater than or equal to the process size.
2. Next Fit - This is similar to the First Fit. The only difference is it keeps a pointer on
the last allocated block and begins searching from there to allocate the process to a
hole.
3. Best Fit - In Best Fit, every block is checked for every process and the process is
allocated to that hole that has the least leftover space after the process allocation.
4. Worst Fit - This is opposite to the Best Fit and the process is allocated to that hole
that has the maximum leftover space after the process allocation.
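A minimal First Fit sketch over a fixed list of hole sizes (all values are hypothetical example data):

```c
#include <stdio.h>

/* First Fit: place each process in the first hole large enough to hold it. */
int main(void) {
    int holes[] = { 100, 500, 200, 300, 600 };   /* free-block sizes in KB */
    int procs[] = { 212, 417, 112, 426 };        /* process sizes in KB */
    int nh = 5, np = 4;

    for (int p = 0; p < np; p++) {
        int placed = -1;
        for (int h = 0; h < nh; h++) {
            if (holes[h] >= procs[p]) {          /* first hole that fits */
                holes[h] -= procs[p];            /* shrink the hole */
                placed = h;
                break;
            }
        }
        if (placed >= 0) printf("Process of %d KB -> hole %d\n", procs[p], placed);
        else             printf("Process of %d KB could not be allocated\n", procs[p]);
    }
    return 0;
}
```

Best Fit and Worst Fit differ only in the inner loop: instead of stopping at the first hole that fits, they scan all holes and pick the one with the least (or the most) leftover space.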
Fragmentation is the phenomenon where free memory spaces are created in a system, but these
spaces are either too small or too scattered to be effectively used for new processes. This leads to
inefficient use of memory and can reduce the system's performance.
• Internal Fragmentation: Occurs when a process is allocated more memory than it needs,
leaving small, unused portions within allocated blocks.
• External Fragmentation: Occurs when free memory is divided into non-contiguous blocks,
preventing the allocation of memory to new processes even though sufficient total space
exists.
Solutions for External Fragmentation:
1. Compaction: Reorganizes memory by moving allocated blocks to consolidate free space
into one large contiguous block, making it available for new processes.
2. Paging and Segmentation: Allows the logical address space of processes to be non-
contiguous, enabling the allocation of physical memory in separate chunks, thus reducing
the impact of external fragmentation.

Is malloc memory contiguous?

Answer:
Yes. malloc allocates a single contiguous block of memory (in the process's virtual address space), and it is deallocated with free when no longer needed.
What is a memory leak, and how does it affect system performance?
Answers:
A memory leak occurs when a program fails to release memory that it no longer needs,
resulting in wasted memory resources. Over time, if memory leaks accumulate, the
system’s available memory diminishes, leading to reduced performance and possibly
system crashes.
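A tiny illustration of a leak: memory obtained with malloc is never freed, so it stays allocated but unreachable until the process exits.

```c
#include <stdlib.h>

/* Each call allocates 1 MB and then discards the pointer, so free() can
   never be called on that block: a classic memory leak. */
static void leaky(void) {
    void *p = malloc(1024 * 1024);
    (void)p;                       /* pointer lost when the function returns */
}

int main(void) {
    for (int i = 0; i < 1000; i++)
        leaky();                   /* ~1 GB is now wasted until the process exits */
    return 0;
}
```

Tools such as Valgrind or AddressSanitizer detect exactly this pattern.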

TLB (Translation Lookaside Buffer): A high-speed cache used to store recent translations of
virtual addresses to physical addresses, speeding up memory access by reducing the need to consult
the page table. Each TLB entry consists of a tag (representing the virtual address) and a value
(representing the corresponding physical address), and the TLB can quickly provide the physical
address if the virtual address is found in the cache.

Non-contiguous Memory Allocation


Page Replacement Algorithms in Operating Systems
Paging, in the context of Page Replacement Algorithms in Operating Systems, is a memory
management technique where the system divides the Virtual Memory into fixed-size blocks called
pages and the physical memory into blocks of the same size called frames. When a process is
executed, its pages are loaded into available frames in physical memory. If the required page is not
in memory (a page fault occurs), the system must decide which page to replace using a page
replacement algorithm, ensuring efficient memory usage and reducing the overhead of page faults.

What is a page table?


A page table is a data structure used in paging to keep track of the mapping between
logical addresses (pages) and physical addresses (page frames). Each process has its
own page table.

The address generated by the CPU is divided into:

• Page Number (p): The portion of the logical address that identifies a specific page within
the logical address space. It indicates which page a particular address belongs to.
• Page Offset (d): The portion of the logical address that specifies the exact location within a
page. It determines the specific word or byte within the page.
Physical Address is divided into:
• Frame Number (f): The portion of the physical address that identifies a specific frame
within the physical address space. It indicates which frame in physical memory contains the
data.
• Frame Offset (d): The portion of the physical address that specifies the exact location
within a frame. It determines the specific word or byte within the frame.
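A short sketch of splitting a logical address, assuming a 4 KB page size (so the offset occupies the low 12 bits):

```c
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096u                      /* assumed page size: 4 KB = 2^12 bytes */

int main(void) {
    uint32_t logical = 0x00012ABC;           /* hypothetical logical address */
    uint32_t p = logical / PAGE_SIZE;        /* page number */
    uint32_t d = logical % PAGE_SIZE;        /* page offset */

    /* If the page table maps page p to frame f, the physical address is
       f * PAGE_SIZE + d. */
    printf("page number p = %u, offset d = %u\n", p, d);
    return 0;
}
```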
Paging and Segmentation are both memory management techniques used in operating
systems, but they differ in how they divide and manage a process's memory.

Key Differences Between Paging and Segmentation:


1. Division Basis:
• Paging: Divides the process's memory into fixed-size blocks called pages. The
physical memory is also divided into fixed-size blocks called frames. Pages and
frames are of equal size.
• Segmentation: Divides the process's memory into variable-sized blocks called
segments based on the logical divisions of a program, such as code, data, and stack
segments.
2. Size:
• Paging: All pages are of the same, fixed size, and so are the frames in physical
memory.
• Segmentation: Segments vary in size depending on the needs of the program.
3. Addressing:
• Paging: The logical address is divided into a page number and an offset. The page
number is used to index into the page table, which provides the corresponding frame
number in physical memory.
• Segmentation: The logical address is divided into a segment number and an offset.
The segment number is used to access the segment table, which provides the base
address and limit of the segment in physical memory.
4. Logical View:
• Paging: Does not provide a logical view of a process's memory; it is purely a
hardware-based technique for efficient memory management.
• Segmentation: Reflects the logical view of a process, as it divides memory
according to the logical structure of the program (e.g., separate segments for code,
data, stack).
5. Fragmentation:
• Paging: Eliminates external fragmentation but can cause internal fragmentation if the
last page of a process is not fully used.
• Segmentation: Eliminates internal fragmentation but can lead to external
fragmentation because segments are of variable size and need contiguous memory
allocation.
6. Protection:
• Paging: Provides simple protection mechanisms by associating protection bits with
each page.
• Segmentation: Offers more flexible and fine-grained protection by associating
different attributes (e.g., read, write, execute permissions) with each segment.
7. Usage:
• Paging: Typically used in systems requiring efficient memory management, such as
virtual memory systems.
• Segmentation: Used in systems where the logical structure of a program is
important, such as in systems with complex memory management requirements.

Summary:
• Paging focuses on dividing memory into fixed-size blocks for efficient management,
primarily dealing with hardware aspects.
• Segmentation divides memory into logical, variable-sized units that correspond to different
parts of a program, offering a more user-oriented view of memory.

Virtual Memory: This is a memory management technique that creates an "illusion" of a very large
memory space by using both physical RAM and disk storage. When the RAM is full, parts of the
data or programs that are not actively being used are temporarily moved to a space on the hard drive
or SSD called the page file or swap space. This allows the system to run larger programs or multiple
programs simultaneously without running out of RAM.
Demand Paging is a memory management scheme used in operating systems where pages of data
are loaded into memory only when they are needed, rather than preloading all pages in advance.
This technique allows the system to save memory and reduce load times by only bringing in the
portions of a program that are actively being used.

Page Fault: A page fault occurs when a program tries to access a page not currently in physical
memory, causing the operating system to retrieve the missing page from secondary storage.
Page Hit: A page hit happens when a program accesses a page that is already present in physical
memory, allowing the operation to proceed without delay.

1. FIFO (First-In, First-Out)


• Definition: FIFO replaces the oldest page in memory, i.e., the page that has been in memory
the longest.
• Pros:
• Simple to implement and understand.
• Minimal overhead.
• Cons:
• Can suffer from Belady’s Anomaly, where increasing the number of frames can
sometimes increase the number of page faults.
• Does not consider how frequently or recently a page has been used.
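A minimal FIFO page-replacement sketch that counts faults for a hypothetical reference string with 3 frames:

```c
#include <stdio.h>

#define FRAMES 3

int main(void) {
    int refs[] = { 7, 0, 1, 2, 0, 3, 0, 4 };   /* assumed reference string */
    int n = sizeof refs / sizeof refs[0];
    int frame[FRAMES] = { -1, -1, -1 };         /* -1 marks an empty frame */
    int next = 0, faults = 0;                   /* next: index of the oldest page */

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < FRAMES; j++)
            if (frame[j] == refs[i]) { hit = 1; break; }
        if (!hit) {
            frame[next] = refs[i];              /* evict the oldest page (FIFO) */
            next = (next + 1) % FRAMES;
            faults++;
        }
    }
    printf("page faults: %d\n", faults);
    return 0;
}
```

Swapping the eviction rule (e.g., tracking last-use times instead of load order) turns the same skeleton into LRU.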

2. Optimal Page Replacement


• Definition: The Optimal algorithm replaces the page that will not be used for the longest
period of time in the future.
• Pros:
• Minimizes the number of page faults since it has perfect knowledge of future
requests.
• Cons:
• Not feasible to implement in practice because it requires knowledge of future page
references.
• More theoretical, used mainly as a benchmark for other algorithms.

3. LRU (Least Recently Used)


• Definition: LRU replaces the page that has not been used for the longest duration of time.
• Pros:
• Generally provides a good approximation of the Optimal algorithm.
• Effective in scenarios where recent past usage is a good indicator of future usage.
• Cons:
• Requires maintaining a history of page accesses, which can be complex and
overhead-intensive.
• Can be expensive to implement in hardware due to the need to track access times or
maintain an ordered list.

4. MFU (Most Frequently Used)


• Definition: MFU replaces the page that has been used the most frequently, based on the
assumption that pages used more often are less likely to be needed soon.
• Pros:
• Useful in scenarios where frequently used pages are likely to remain useful.
• Can be effective in systems where some pages are accessed very often.
• Cons:
• Assumes that the most frequently used pages are less likely to be needed soon, which
may not always be the case.
• Can be inefficient if frequently used pages are actually on the verge of being
replaced.

5. MRU (Most Recently Used)


• Definition: MRU replaces the page that was most recently used, under the assumption that it
is less likely to be used again soon.
• Pros:
• Simple to implement and understand.
• Can be useful in certain specific situations, like when recently used pages are
unlikely to be needed again soon.
• Cons:
• Often performs poorly in general scenarios since recently used pages may still be
needed.
• Not ideal for workloads where recently accessed pages are likely to be reused soon.
6. LFU (Least Frequently Used)
• Definition: LFU replaces the page that has been used the least frequently, assuming that
pages that are less frequently accessed are less likely to be needed in the future.
• Pros:
• Effective in scenarios where page access patterns show clear trends in usage.
• Prioritizes keeping frequently used pages in memory.
• Cons:
• Requires tracking and maintaining frequency counts, which can be complex and
resource-intensive.
• May suffer if access patterns change over time, as it may keep infrequently used
pages around longer than necessary.

7. LIFO (Last-In, First-Out)


• Definition: LIFO replaces the most recently loaded page in memory, i.e., the page that was
most recently brought into memory.
• Pros:
• Simple to implement with minimal overhead.
• Easy to understand and manage.
• Cons:
• Can lead to poor performance in many scenarios since recently loaded pages may
still be needed.
• Does not consider the frequency or recency of page accesses beyond the last page
loaded, potentially causing high page fault rates.

8. Random
• Definition: The Random algorithm replaces a randomly selected page from the memory
without any regard for the page's age or frequency of use.
• Pros:
• Simple to implement and requires minimal bookkeeping.
• Often performs surprisingly well in practice, as it avoids the overhead of more
complex algorithms.
• Cons:
• May lead to suboptimal performance because it does not use any strategy to predict
future page references.
• Can result in high page fault rates if the randomly selected pages are still in use or
needed soon.
Process Synchronization
Race Condition
This condition occurs when more than one process tries to access the same data at the same time
and tries to manipulate it. The outcome of the execution depends on the order they access the data.

critical section is a segment of code that accesses shared resources and must be executed by only
one process or thread at a time to avoid conflicts or inconsistencies. Proper management of critical
sections is essential to ensure mutual exclusion and synchronization in concurrent programming.

Process synchronization in an operating system is a method used to ensure that multiple processes
or threads can work together without interfering with each other, especially when they share
resources like memory or files. It prevents issues like data corruption or inconsistency by
coordinating the execution order of processes to ensure they don't access shared resources at the
same time.
Mutex locks are synchronization tools used to ensure that only one thread or process can access a
shared resource or critical section at a time. They prevent concurrent access, thereby avoiding
conflicts and ensuring data consistency.
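A minimal sketch with POSIX threads: two threads increment a shared counter, and the mutex makes the increment a critical section (compile with -pthread).

```c
#include <pthread.h>
#include <stdio.h>

long counter = 0;
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);     /* enter the critical section */
        counter++;                     /* shared data, one thread at a time */
        pthread_mutex_unlock(&lock);   /* leave the critical section */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);   /* 200000 with the lock; often less without it */
    return 0;
}
```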
Semaphore: A semaphore is a synchronization mechanism used to control access to shared resources by multiple processes or threads. It uses an integer value to signal and manage the availability of resources, with operations typically including wait (decrement) and signal (increment) to coordinate access.
Monitors are high-level synchronization constructs that provide a safe way to manage access to
shared resources in solving the critical section problem. They encapsulate shared resources and the
operations on them, ensuring that only one process can execute a monitor's procedures at a time,
preventing race conditions. Monitors offer a higher-level, safer approach to synchronization by
automatically managing access to shared resources.
Peterson's solution is a classic algorithm for achieving mutual exclusion in concurrent
programming, ensuring that only one of two processes can enter their critical sections at a time. It
uses two shared variables to communicate and coordinate between processes to prevent
simultaneous access.
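A sketch of Peterson's logic for two processes (0 and 1); flag[] and turn are the two shared variables. On modern hardware a faithful implementation also needs atomics or memory barriers, so this is only an illustration of the idea.

```c
#include <stdbool.h>

volatile bool flag[2] = { false, false };   /* shared: who wants to enter */
volatile int  turn    = 0;                  /* shared: whose turn it is to yield */

void enter_region(int i) {                  /* i is 0 or 1 */
    int other = 1 - i;
    flag[i] = true;                         /* I want to enter */
    turn = other;                           /* but let the other go first if it also wants to */
    while (flag[other] && turn == other)
        ;                                   /* busy-wait until it is safe to enter */
}

void leave_region(int i) {
    flag[i] = false;                        /* I have left the critical section */
}

int main(void) {
    enter_region(0);
    /* ... critical section of process 0 ... */
    leave_region(0);
    return 0;
}
```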
The Producer-Consumer Problem is a classic synchronization issue where one or more producer
processes generate data and place it into a shared buffer, while one or more consumer processes
retrieve and use that data. The challenge is to manage access to the buffer to prevent data
corruption, ensure the buffer doesn’t overflow or underflow, and maintain proper coordination
between producers and consumers.
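A bounded-buffer sketch with POSIX semaphores and a mutex (buffer size and item count are arbitrary assumptions; compile with -pthread on Linux):

```c
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define SIZE  5
#define ITEMS 10

int buffer[SIZE];
int in = 0, out = 0;

sem_t empty_slots;                            /* counts free slots, starts at SIZE */
sem_t full_slots;                             /* counts filled slots, starts at 0  */
pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;

static void *producer(void *arg) {
    (void)arg;
    for (int i = 0; i < ITEMS; i++) {
        sem_wait(&empty_slots);               /* wait for a free slot */
        pthread_mutex_lock(&mtx);
        buffer[in] = i;                       /* place the item */
        in = (in + 1) % SIZE;
        pthread_mutex_unlock(&mtx);
        sem_post(&full_slots);                /* signal: one more item available */
    }
    return NULL;
}

static void *consumer(void *arg) {
    (void)arg;
    for (int i = 0; i < ITEMS; i++) {
        sem_wait(&full_slots);                /* wait for an item */
        pthread_mutex_lock(&mtx);
        int item = buffer[out];
        out = (out + 1) % SIZE;
        pthread_mutex_unlock(&mtx);
        sem_post(&empty_slots);               /* signal: one more free slot */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    sem_init(&empty_slots, 0, SIZE);
    sem_init(&full_slots, 0, 0);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}
```

The semaphores prevent overflow and underflow of the buffer, while the mutex protects the in/out indices themselves.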
The Reader-Writer Problem is a synchronization problem that involves managing access to a
shared resource, such as a database, where multiple processes or threads are either reading from or
writing to the resource:
• Readers: These processes only read data and do not modify it. Multiple readers can access the
resource simultaneously as long as no writer is active.
• Writers: These processes modify the data and require exclusive access to the resource. Only
one writer can access the resource at a time, and no readers can access the resource while a writer is
active.
The challenge is to ensure that readers and writers do not interfere with each other, preventing
issues such as data inconsistency or starvation of either readers or writers.
The Dining Philosophers Problem is a classic synchronization problem used to illustrate and study
issues related to resource sharing and deadlock in concurrent systems. The problem involves five
philosophers sitting around a circular table with a fork placed between each pair of adjacent
philosophers. Each philosopher alternates between thinking and eating. To eat, a philosopher needs
to use the two forks adjacent to them.

Key Challenges:
• Deadlock: All philosophers may simultaneously pick up one fork and wait for the other,
resulting in a situation where none can eat.
• Starvation: A philosopher might never get both forks if other philosophers continuously eat.

Solutions:
• Resource Hierarchy: Number the forks and ensure philosophers pick up the lower-
numbered fork first, then the higher-numbered one.
• Waiter Solution: Introduce a central authority (waiter) to control the access to forks and
avoid deadlock.
• Allow at most four philosophers to be sitting simultaneously at the table.
• Allow a philosopher to pick up her chopsticks only if both chopsticks are
available (to do this, she must pick them up in a critical section).
• Use an asymmetric solution—that is, an odd-numbered philosopher picks
up first her left chopstick and then her right chopstick, whereas an even-
numbered philosopher picks up her right chopstick and then her left
chopstick.

The Cigarette Smoker Problem is a synchronization problem that involves three smokers and a
central agent. Each smoker has a specific ingredient (tobacco, paper, or matches) and can only
smoke if they have all three ingredients. The central agent places two of the ingredients on the table,
and the smokers must collaborate to complete the set.

Problem Description:
• Smokers: There are three smokers, each with a unique ingredient. They can only smoke if
they have all three ingredients: tobacco, paper, and matches.
• Agent: The central agent places two of the three ingredients on the table at random.
• Goal: Smokers must figure out when they have the missing ingredient (based on what the
agent has placed) and access the table to pick it up to complete the set.

Key Challenges:
• Mutual Exclusion: Ensuring that only one smoker accesses the table at a time to pick up the
missing ingredient.
• Synchronization: Properly coordinating the placement of ingredients by the agent and the
smoking actions of the smokers.

Solutions:
• Semaphore-based Synchronization: Use semaphores to manage access to the table and
ensure that only one smoker can pick up the ingredients at a time.
• Mutexes and Condition Variables: Employ mutexes to protect shared resources and
condition variables to signal when a smoker can proceed with smoking.
The Cigarette Smoker Problem is used to demonstrate and address synchronization issues in
concurrent programming, particularly in resource allocation scenarios.

Deadlock

What is Deadlock?
Deadlock is a situation in computing where two or more processes are unable to proceed because
each is waiting for the other to release resources. The waiting occurs in a circular manner.
Deadlock can arise if the following four conditions hold simultaneously (Necessary Conditions):
• Mutual Exclusion: Two or more resources are non-shareable (only one process can use a resource at a time).
• Hold and Wait: A process is holding at least one resource while waiting for additional resources.
• No Pre-emption: A resource cannot be taken from a process unless the process releases it.
• Circular Wait: A set of processes wait for each other in a circular chain.
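A minimal two-thread sketch in which all four conditions can hold at once: each thread holds one lock and waits for the other (compile with -pthread).

```c
#include <pthread.h>
#include <unistd.h>

pthread_mutex_t A = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t B = PTHREAD_MUTEX_INITIALIZER;

static void *t1(void *arg) {
    (void)arg;
    pthread_mutex_lock(&A);   /* holds A ...           */
    sleep(1);
    pthread_mutex_lock(&B);   /* ... and waits for B   */
    pthread_mutex_unlock(&B);
    pthread_mutex_unlock(&A);
    return NULL;
}

static void *t2(void *arg) {
    (void)arg;
    pthread_mutex_lock(&B);   /* holds B ...           */
    sleep(1);
    pthread_mutex_lock(&A);   /* ... and waits for A: circular wait */
    pthread_mutex_unlock(&A);
    pthread_mutex_unlock(&B);
    return NULL;
}

int main(void) {
    pthread_t x, y;
    pthread_create(&x, NULL, t1, NULL);
    pthread_create(&y, NULL, t2, NULL);
    pthread_join(x, NULL);    /* with the sleeps, this usually never returns */
    pthread_join(y, NULL);
    return 0;
}
```

Acquiring the locks in one fixed global order (always A before B) removes the circular-wait condition and hence the deadlock.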

File System
The File System is a method or data structure that helps in storing the data or files in the system so
that it can be fetched easily when required.

File Directories
The File Directory is itself a file that contains information about the files. The File directory is the
collection of files. It contains a lot of information about the files like name, type, location,
protection, etc.

Types of File Directories

Single-Level Directory:
• Structure: There is just one directory that holds all the files for all users. Each file has a
unique name within this directory.
• Advantages: Simple to implement and manage; requires minimal overhead.
• Disadvantages: Limited scalability as the number of files and users grows. With many files,
it becomes difficult to manage and locate files, and the risk of name collisions increases.
Two-Level Directory:
• Structure: Consists of a root directory and a subdirectory for each user. The root directory
contains subdirectories named after users, and each user’s subdirectory contains their own
files.
• Advantages: Improves organization by segregating files by user, reducing the risk of name
collisions and making file management easier. Each file is identified by a path name, e.g.,
/user1/file1 or /user2/file2.
• Disadvantages: While it provides better organization, it still has limitations in terms of
scalability and flexibility compared to more complex structures.
Tree-Structured Directory:
• Structure: This organizes directories in a hierarchical tree format. There is a root directory,
and from it, multiple branches (subdirectories) can be created, leading to further
subdirectories and files.
• Advantages: Highly scalable and flexible. It supports grouping of related files and
directories, which makes file management, searching, and organization more efficient.
Allows for complex structures and easier navigation.
• Disadvantages: More complex to implement and manage compared to single-level and two-
level directories. Requires more overhead in terms of maintaining the directory structure and
handling paths.
Allocation of Files

Continuous Allocation:
• Structure: Files are stored in a single contiguous block of disk space.
• Advantages: Simple and efficient for sequential access since the entire file is stored in a
single place.
• Disadvantages: Can lead to fragmentation and inefficient use of disk space as files grow
and shrink.
Linked Allocation:
• Structure: Files are divided into blocks scattered throughout the disk. Each block contains a
pointer to the next block.
• Advantages: No need for contiguous disk space, which reduces fragmentation and makes
dynamic file size adjustments easier.
• Disadvantages: Slower access times due to the need to follow pointers, and the overhead of
managing pointers can be significant.
Indexed Allocation:
• Structure: Each file has an index block that contains pointers to all the data blocks of the
file.
• Advantages: Combines benefits of contiguous and linked allocations by allowing efficient
access and management of file blocks. Supports random access and avoids external
fragmentation.
• Disadvantages: Requires additional disk space for index blocks and can become complex if
multiple levels of indexing are used for very large files.

Disk Scheduling
It is the scheduling of the I/O requests made by a process for the disk.
In disk scheduling, "disk" refers to a storage device used for storing digital data, typically a hard
disk drive (HDD) or a solid-state drive (SSD).

Components of a Disk:
1. Platters: In an HDD, these are the circular disks coated with magnetic material where data
is stored.
2. Read/Write Heads: These are the mechanisms that move over the platters to read from or
write data to them.
3. Tracks: Concentric circles on the platters where data is recorded.
4. Sectors: Subdivisions of tracks, typically the smallest unit of storage on the disk.
5. Cylinders: The collection of tracks that are vertically aligned across all platters.
Important Terms related to Disk Scheduling Algorithms
• Seek Time - It is the time taken by the disk arm to locate the desired track to perform read
and write operations.
• Rotational Latency - The time taken by a desired sector of the disk to rotate itself to the
position where it can access the Read/Write heads is called Rotational Latency.
• Transfer Time - It is the time taken to transfer the data requested by the processes.
• Disk Access Time - Disk Access time is the sum of the Seek Time, Rotational Latency, and
Transfer Time.
• Disk Response Time - Response time is the time a request spends waiting before it gets to perform its I/O operation. The average response time is the average of the response times of all requests.

Disk Scheduling Algorithms


• First Come First Serve (FCFS) - In this algorithm, the requests are served in the order they
come. Those who come first are served first. This is the simplest disk scheduling algorithm.
• Shortest Seek Time First (SSTF) - In this algorithm, the shortest seek time is checked from
the current position and those requests which have the shortest seek time is served first. In
simple words, the closest request from the disk arm is served first.
• SCAN - The disk arm moves in a particular direction till the end and serves all the requests
in its path, then it returns to the opposite direction and moves till the last request is found in
that direction and serves all of them.
• LOOK - The disk arm moves in a particular direction, serving the requests in its path, but only up to the last request in that direction; it then reverses and serves the requests in the other direction, again only as far as the last request there. The only difference from SCAN is that the arm does not travel all the way to the end of the disk, only as far as the last pending request.
• C-SCAN - This algorithm is similar to SCAN. The arm moves in one direction to the end of the disk, serving the requests in its path; it then returns to the opposite end without serving any requests on the way back, and starts serving again in the same direction as before.
• C-LOOK - This algorithm is similar to LOOK. The arm moves in one direction up to the last request, serving the requests in its path; it then jumps back to the furthest request on the other side without serving any requests on the way back, and starts serving again in the same direction as before.
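A short sketch comparing total head movement for FCFS and SSTF on a hypothetical request queue, with the head starting at cylinder 50:

```c
#include <stdio.h>
#include <stdlib.h>

#define N 6

int main(void) {
    int reqs[N] = { 82, 170, 43, 140, 24, 16 };   /* assumed cylinder requests */
    int head = 50;

    /* FCFS: serve the requests in arrival order. */
    int fcfs = 0, pos = head;
    for (int i = 0; i < N; i++) { fcfs += abs(reqs[i] - pos); pos = reqs[i]; }

    /* SSTF: always serve the closest pending request next. */
    int done[N] = { 0 }, sstf = 0;
    pos = head;
    for (int served = 0; served < N; served++) {
        int best = -1;
        for (int i = 0; i < N; i++)
            if (!done[i] && (best < 0 || abs(reqs[i] - pos) < abs(reqs[best] - pos)))
                best = i;
        sstf += abs(reqs[best] - pos);
        pos = reqs[best];
        done[best] = 1;
    }
    printf("FCFS head movement: %d cylinders\n", fcfs);   /* 468 for these values */
    printf("SSTF head movement: %d cylinders\n", sstf);   /* 188 for these values */
    return 0;
}
```

SCAN, LOOK, and their circular variants can be simulated the same way by sorting the requests and sweeping in one direction.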
