Operating System
UNIT-I
Batch Operating System: In a batch system, similar jobs are grouped together and executed one after the other without user interaction. Jobs are submitted to the computer operator, who batches them with similar jobs.
Multiprogramming Operating System: Multiple programs reside in memory at the same time. The CPU switches from one process to another to maximize CPU utilization; when one program is waiting for I/O, the CPU executes another.
Disadvantages: More complex scheduling and memory management.
Time-Sharing Operating System: Each user or process is given a small time slice of the CPU. The system switches rapidly between processes, giving the illusion of simultaneous execution.
Distributed Operating System: Controls a group of distinct computers and makes them appear to the user as a single system.
Clustered Operating System: Multiple systems are connected to work together as a single unit for high availability.
Real-Time Operating System: Processes inputs within guaranteed time constraints. Two types:
Hard Real-Time OS: Strict deadlines (e.g., missile systems).
Soft Real-Time OS: Deadlines are important but not absolute (e.g., multimedia streaming).
1. Monolithic System: The entire operating system runs as a single program in kernel mode, with all services sharing one address space.
Example: MS-DOS.
2. Layered Approach: The OS is divided into layers, each built only on the layers below it; the bottom layer is the hardware and the top layer is the user interface.
Example: THE operating system.
3. Microkernel System:
Only essential functions (like communication, I/O) are in the kernel; other
services run in user space.
4. Modules: The kernel provides core services and links in additional services as loadable modules, either at boot time or at run time.
Example: Linux.
5. Hybrid Systems: Combine several approaches; for example, Windows and macOS mix monolithic performance with modular and microkernel-style components.
UNIT-II
1. Process Concept
A process is a program in execution. It is more than just the program code (which
is known as the text section). It includes the current activity, represented by the
value of the program counter, the contents of the processor’s registers, the
process stack (which contains temporary data such as function parameters,
return addresses, and local variables), and a data section that contains global
variables.
2. Process Life Cycle
The process life cycle describes the various stages a process goes through
during its lifetime. The main states are:
New: The process is being created.
Ready: The process is waiting to be assigned to the CPU.
Running: Instructions are being executed on the CPU.
Waiting (Blocked): The process is waiting for some event to occur (e.g., I/O completion).
Terminated: The process has finished execution.
Transitions between these states are managed by the operating system, which
schedules and allocates resources accordingly. State diagrams are used to
represent these transitions.
4. Process Control Block (PCB)
Each process is represented in the operating system by a Process Control Block (PCB), a data structure that stores information such as:
Process State
Program Counter
CPU Registers
Memory-Management Information
Accounting Information
I/O Status Information
The PCB is essential for context switching, as it stores the snapshot of a process
so it can resume execution correctly after being paused.
5. Process Scheduling
Process Scheduling is the activity of the operating system that handles the
execution of processes. The goal is to maximize CPU utilization and system
responsiveness.
Schedulers are classified into long-term, short-term, and medium-term schedulers.
Common scheduling algorithms include:
First-Come, First-Served (FCFS)
Shortest Job First (SJF)
Round Robin (RR)
Priority Scheduling
Each has its own advantages and is chosen based on the system’s requirements.
6. Process Synchronization
Process Synchronization involves coordinating processes that share resources to
avoid conflicts, such as race conditions, where multiple processes access and
modify shared data concurrently.
Critical-Section Problem
The critical section refers to the portion of a process that accesses shared
resources (like data structures, files, etc.). The critical-section problem arises
when multiple processes execute their critical sections simultaneously, leading to
inconsistent or corrupted data.
A solution to the critical-section problem must satisfy three requirements:
1. Mutual Exclusion: Only one process can be in the critical section at a time.
2. Progress: If no process is in the critical section, the selection of the next process to enter cannot be postponed indefinitely.
3. Bounded Waiting: A limit must exist on the number of times other processes
are allowed to enter the critical section after a process has made a request.
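The mutual-exclusion requirement can be illustrated with a short sketch (Python is used here only for readability; the names `counter` and `deposit` are illustrative). A `threading.Lock` plays the role of the entry and exit sections around a shared counter:

```python
import threading

counter = 0
lock = threading.Lock()

def deposit(n):
    """Increment the shared counter n times, each time inside a critical section."""
    global counter
    for _ in range(n):
        with lock:        # entry section: acquire the lock
            counter += 1  # critical section: touch shared data
        # exit section: the lock is released when the 'with' block ends

threads = [threading.Thread(target=deposit, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Because only one thread can hold the lock at a time, the final value is exactly 4 × 10,000; without the lock, lost updates could make it smaller.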
Synchronization Hardware
Hardware-based solutions use machine-level instructions to ensure mutual
exclusion. For example:
Test-and-Set
Swap Instruction
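A minimal sketch of how Test-and-Set builds a spinlock. Python has no such hardware primitive, so the instruction's atomicity is simulated here with an internal lock; on real hardware, reading the old value and setting the flag happen as one indivisible instruction:

```python
import threading

_atomic = threading.Lock()  # stands in for the hardware atomicity guarantee
lock_word = False           # the lock word: False = free, True = held

def test_and_set():
    """Return the old value of lock_word and set it to True, as one step."""
    global lock_word
    with _atomic:
        old = lock_word
        lock_word = True
        return old

def acquire():
    while test_and_set():  # spin (busy-wait) until the old value was False
        pass

def release():
    global lock_word
    lock_word = False

count = 0

def worker():
    global count
    for _ in range(1000):
        acquire()
        count += 1  # critical section protected by the spinlock
        release()

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The busy-waiting in `acquire` is the characteristic cost of spinlocks: they waste CPU while waiting, so they suit only short critical sections.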
Semaphores
A semaphore is a synchronization tool used to control access to shared
resources. It is an integer variable that is accessed through two atomic operations:
wait (P): Decrements the semaphore. If the value becomes negative, the
process is blocked.
signal (V): Increments the semaphore. If there are blocked processes, one of
them is unblocked.
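The negative-count semantics described above can be sketched in Python with a condition variable. The class and method names are illustrative; in practice Python's built-in `threading.Semaphore` would be used:

```python
import threading

class Semaphore:
    """Counting semaphore in the classic style: the value may go negative,
    and its absolute value then counts the blocked processes."""
    def __init__(self, value=1):
        self.value = value
        self._cond = threading.Condition()

    def wait(self):              # the P operation
        with self._cond:
            self.value -= 1
            if self.value < 0:   # no resource available: block the caller
                self._cond.wait()

    def signal(self):            # the V operation
        with self._cond:
            self.value += 1
            if self.value <= 0:  # someone is blocked: wake one of them
                self._cond.notify()

# With an initial value of 1, the semaphore behaves as a mutex:
mutex = Semaphore(1)
mutex.wait()          # enter the critical section (value drops to 0)
in_cs = mutex.value
mutex.signal()        # leave the critical section (value returns to 1)
```

A semaphore initialized to N > 1 instead limits access to N concurrent processes, which is how counting semaphores guard pools of identical resources.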
Critical Region
The critical region refers to the code that accesses shared resources and must be
protected by synchronization mechanisms. Only one process should execute in
this region at a time.
Monitors
A monitor is a high-level synchronization construct that provides a convenient and
effective mechanism for process synchronization. It allows only one process to be
active within the monitor at any given time. Monitors encapsulate shared
variables, operations, and the synchronization between concurrent process calls.
They use condition variables and two main operations on them: wait(), which suspends the calling process on a condition, and signal(), which resumes one process suspended on that condition.
Monitors are supported by high-level languages like Java and are easier to use
and less error-prone than semaphores.
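A monitor-style bounded buffer can be sketched in Python, with one lock standing in for the monitor's implicit mutual exclusion and two condition variables for its wait/signal operations (class and method names are illustrative):

```python
import threading

class BoundedBuffer:
    """Monitor-style bounded buffer: one lock guards all shared state, and
    condition variables express 'wait until not full' / 'wait until not empty'."""
    def __init__(self, capacity):
        self.items = []
        self.capacity = capacity
        self._lock = threading.Lock()
        self._not_full = threading.Condition(self._lock)
        self._not_empty = threading.Condition(self._lock)

    def put(self, item):
        with self._lock:                            # enter the monitor
            while len(self.items) >= self.capacity:
                self._not_full.wait()               # buffer full: wait
            self.items.append(item)
            self._not_empty.notify()                # wake one waiting consumer

    def get(self):
        with self._lock:                            # enter the monitor
            while not self.items:
                self._not_empty.wait()              # buffer empty: wait
            item = self.items.pop(0)
            self._not_full.notify()                 # wake one waiting producer
            return item

buf = BoundedBuffer(2)
buf.put("a")
buf.put("b")
```

The `while` loops (rather than `if`) around each wait are the standard monitor idiom: a woken process re-checks its condition before proceeding.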
UNIT-III
Deadlocks
A deadlock is a state in which a set of processes is blocked because each process holds a resource and waits for another resource held by another process in the set. As a result, none of the processes can proceed.
1. Deadlock Characterization
For a deadlock to occur, four necessary conditions must hold simultaneously:
1. Mutual Exclusion: At least one resource must be held in a non-sharable mode; only one process can use it at a time.
2. Hold and Wait: A process is holding at least one resource and waiting to acquire additional resources held by other processes.
3. No Preemption: Resources cannot be forcibly taken from a process; they are released only voluntarily.
4. Circular Wait: A set of processes exists such that each process is waiting for a resource held by the next process in the chain, forming a circular chain.
2. Methods for Handling Deadlocks
1. Deadlock Prevention: Ensure that at least one of the four necessary conditions can never hold.
2. Deadlock Avoidance: Use advance information about resource needs to keep the system in a safe state.
3. Deadlock Detection and Recovery: Allow deadlocks to occur, detect them, and then recover.
4. Ignore the Problem: Assume that deadlocks are rare and do nothing about them (used in many systems like UNIX).
3. Deadlock Prevention
This approach ensures at least one of the four necessary conditions never holds:
Mutual Exclusion: Make resources sharable where possible (not always feasible).
Hold and Wait: Require processes to request all resources at once, or to hold none while requesting.
No Preemption: Allow the system to forcibly remove resources from processes.
Circular Wait: Impose a total ordering on resource types and require processes to request resources in that order.
4. Deadlock Avoidance
In this strategy, the OS keeps track of the resource allocation state and makes
decisions to avoid unsafe states.
The most famous algorithm is Banker's Algorithm, which checks for safe
state before granting resource requests.
A safe state is one where the system can allocate resources to each process
in some order without leading to a deadlock.
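The safety check at the heart of the Banker's Algorithm can be sketched as follows (Python, illustrative names; the full algorithm also tentatively grants a request and re-runs this check before committing):

```python
def is_safe(available, allocation, maximum):
    """Banker's safety check: True if some ordering lets every process
    finish with the resources currently available.
    available: free units of each resource type.
    allocation[i]: units currently held by process i.
    maximum[i]: maximum units process i may ever request."""
    n, m = len(allocation), len(available)
    # need[i][j]: what process i may still request of resource j
    need = [[maximum[i][j] - allocation[i][j] for j in range(m)]
            for i in range(n)]
    work = list(available)
    finished = [False] * n
    progressed = True
    while progressed:
        progressed = False
        for i in range(n):
            # A process can finish if its remaining need fits in 'work'.
            if not finished[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):
                    work[j] += allocation[i][j]  # it then releases everything
                finished[i] = True
                progressed = True
    return all(finished)
```

On the classic textbook instance (5 processes, 3 resource types, available = [3, 3, 2]) this reports a safe state; with nothing available it reports unsafe.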
5. Deadlock Detection
If deadlocks are allowed to occur, the system must have a mechanism to detect
them:
For a single instance of each resource type, a wait-for graph can be used: a cycle in the graph means a deadlock exists.
For multiple instances of a resource type, a detection algorithm similar to the Banker's safety algorithm is applied periodically.
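For the single-instance case, cycle detection in the wait-for graph takes only a few lines. In this sketch the graph is a dict mapping each blocked process to the one process it waits on (with single-instance resources, each process waits on at most one other):

```python
def has_deadlock(wait_for):
    """Detect a cycle in a wait-for graph given as {process: process_it_waits_on}.
    A cycle means the processes on it are deadlocked."""
    def cycle_from(start):
        seen = set()
        p = start
        while p in wait_for:   # follow the wait-for chain
            if p in seen:      # revisited a process: found a cycle
                return True
            seen.add(p)
            p = wait_for[p]
        return False           # chain ended at a process that isn't waiting

    return any(cycle_from(p) for p in wait_for)
```

For example, P1 → P2 → P3 → P1 is a deadlock, while a chain P1 → P2 → P3 that ends at a running process is not.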
6. Deadlock Recovery
Once a deadlock is detected, recovery methods include:
Process Termination: Abort all deadlocked processes, or abort them one at a time until the deadlock cycle is broken.
Resource Preemption: Take resources away from some processes and give them to others until the deadlock is resolved (this may require rolling a process back).
CPU Scheduling
CPU Scheduling is the process of selecting a process from the ready queue and
allocating the CPU to it. It is one of the most important tasks of the operating
system to ensure fair and efficient use of the CPU.
1. CPU Schedulers
There are different types of schedulers:
Long-Term Scheduler (Job Scheduler): Decides which jobs enter the ready queue.
Short-Term Scheduler (CPU Scheduler): Selects which ready process runs next on the CPU.
Medium-Term Scheduler: Swaps processes out of and back into memory to control the degree of multiprogramming.
2. Scheduling Criteria
When evaluating scheduling algorithms, several criteria are considered:
CPU Utilization: Keep the CPU as busy as possible.
Throughput: Number of processes completed per unit of time.
Turnaround Time: Total time from submission to completion of a process.
Waiting Time: Time a process spends waiting in the ready queue.
Response Time: Time from submission to the first response (important for interactive systems).
3. Scheduling Algorithms
There are several CPU scheduling algorithms, each with its own pros and cons:
1. First-Come, First-Served (FCFS):
Non-preemptive.
Simple but may cause convoy effect (long waiting time for short processes).
2. Shortest Job First (SJF):
Non-preemptive or preemptive (the preemptive form is Shortest Remaining Time First).
Optimal in terms of average waiting time but difficult to predict job lengths.
3. Round Robin (RR):
Preemptive.
Each process runs for a fixed time quantum; well suited to time-sharing systems.
4. Priority Scheduling:
Preemptive or non-preemptive.
The CPU goes to the highest-priority process; starvation is possible and can be mitigated by aging.
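The FCFS vs. SJF trade-off can be checked numerically with a small sketch. It assumes all jobs arrive at time 0 (a simplification) and uses the classic burst times 24, 3, 3 with hypothetical job names:

```python
def avg_waiting_time(burst_times, order):
    """Average waiting time for a non-preemptive schedule where all jobs
    arrive at time 0: each job waits for the bursts of everything before it."""
    elapsed = 0
    total_wait = 0
    for job in order:
        total_wait += elapsed         # this job waited until 'elapsed'
        elapsed += burst_times[job]   # then it runs for its full burst
    return total_wait / len(order)

bursts = {"P1": 24, "P2": 3, "P3": 3}
fcfs = avg_waiting_time(bursts, ["P1", "P2", "P3"])  # arrival order: 17.0
sjf = avg_waiting_time(bursts, ["P2", "P3", "P1"])   # shortest first: 3.0
```

Running the long job first makes the two short jobs wait behind it (the convoy effect); SJF cuts the average wait from 17 to 3 time units on the same workload.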
UNIT-IV
Memory Management
Memory management is a crucial function of the operating system that manages primary memory. The OS is responsible for allocating and deallocating memory to various processes and ensuring efficient use of memory resources.
1. Swapping
Swapping is a memory management technique where processes are temporarily
moved from main memory to secondary storage (usually the hard disk) and
brought back when needed.
Swap space is the portion of the hard disk reserved for swapping.
2. Contiguous Allocation
In contiguous memory allocation, each process is allocated a single contiguous
block of memory.
a) Advantages:
Simple to implement.
b) Disadvantages:
Leads to fragmentation.
Internal Fragmentation
Occurs when allocated memory may be slightly larger than the requested
memory, leaving unused space within the allocated block.
External Fragmentation
Occurs when enough total memory is free, but it is not contiguous; thus, it can't
be used by a process needing a large block.
Compaction can be used to reduce external fragmentation by shifting
processes to make free space contiguous.
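A first-fit sketch shows both contiguous allocation and external fragmentation in action (the hole-list format and names are illustrative):

```python
def first_fit(holes, request):
    """First-fit contiguous allocation: scan the free-hole list and carve
    the request out of the first hole that is large enough.
    holes: list of (start, size) tuples.
    Returns (start, new_holes) on success, or (None, holes) on failure."""
    for i, (start, size) in enumerate(holes):
        if size >= request:
            new_holes = list(holes)
            if size == request:
                del new_holes[i]                       # hole used up entirely
            else:
                new_holes[i] = (start + request, size - request)  # shrink hole
            return start, new_holes
    return None, holes  # no single hole fits: external fragmentation

free = [(0, 100), (300, 50)]        # 150 units free in total
start, after = first_fit(free, 60)  # fits at the start of the first hole
blocked, _ = first_fit(free, 120)   # fails even though 150 units are free
```

The failed second request is exactly the external-fragmentation case in the text: enough total memory is free, but no contiguous hole is big enough, and only compaction would fix it.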
3. Non-Contiguous Allocation
Allows a process to be allocated memory in different locations, reducing
fragmentation and increasing flexibility.
a) Paging
Divides physical memory into fixed-size blocks called frames, and logical
memory into blocks of the same size called pages.
The OS keeps a page table that maps logical pages to physical frames.
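Address translation under paging is one divide-and-remainder step plus one table lookup. A sketch with an assumed 4 KB page size and a hypothetical page table:

```python
PAGE_SIZE = 4096  # bytes per page and per frame (assumed for illustration)

def translate(logical_addr, page_table):
    """Split a logical address into (page number, offset), map the page to
    its frame via the page table, and rebuild the physical address."""
    page, offset = divmod(logical_addr, PAGE_SIZE)
    frame = page_table[page]  # raises KeyError for an unmapped page (a "fault")
    return frame * PAGE_SIZE + offset

page_table = {0: 5, 1: 2, 2: 7}  # hypothetical mapping: page -> frame
```

For example, logical address 4100 is page 1 (offset 4); page 1 maps to frame 2, giving physical address 2 × 4096 + 4 = 8196.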
b) Demand Paging
Pages are loaded into memory only when they are needed, not in advance.
Advantages:
Lower memory use per process, faster process startup, and a higher degree of multiprogramming.
Disadvantages:
Each page fault adds I/O latency, and a high fault rate can degrade performance (thrashing).
c) Segmentation
Memory is divided based on logical divisions of a program: code, stack, heap,
data, etc.
Segmentation provides better logical organization of memory than paging.
4. Page Replacement
When a page is required and there are no free frames, the OS must choose a page
to remove and replace it with the required one. This is called page replacement.
Common page replacement algorithms:
1. FIFO (First-In, First-Out): Replaces the page that has been in memory the longest.
2. LRU (Least Recently Used): Replaces the page that hasn’t been used for the longest time.
3. Optimal (OPT): Replaces the page that will not be used for the longest time in the future (a theoretical benchmark, since future references are unknown).
4. Clock Algorithm: A practical approximation of LRU using a reference bit and a circular list of frames.
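FIFO and LRU can be compared by counting page faults on a reference string; a sketch with illustrative function names, run on the classic 20-reference textbook string with 3 frames:

```python
def count_faults_fifo(refs, frames):
    """Page faults under FIFO replacement: evict the oldest resident page."""
    memory, queue, faults = set(), [], 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                memory.discard(queue.pop(0))  # evict the oldest page
            memory.add(page)
            queue.append(page)
    return faults

def count_faults_lru(refs, frames):
    """Page faults under LRU: evict the least recently used page."""
    memory, faults = [], 0  # list kept ordered from LRU to MRU
    for page in refs:
        if page in memory:
            memory.remove(page)  # hit: refresh this page's recency
        else:
            faults += 1
            if len(memory) == frames:
                memory.pop(0)    # evict the least recently used page
        memory.append(page)      # this page is now the most recently used
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
fifo_faults = count_faults_fifo(refs, 3)  # 15 faults
lru_faults = count_faults_lru(refs, 3)    # 12 faults
```

On this string LRU saves three faults over FIFO because it keeps recently touched pages resident, which is why practical systems approximate LRU (e.g., with the clock algorithm).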
6. Allocation of Frames
The OS must decide how many frames to allocate to each process.
Strategies:
Equal allocation: every process gets the same number of frames.
Proportional allocation: frames are allocated in proportion to each process's size.
Priority allocation: higher-priority processes receive more frames.
7. Thrashing
Thrashing occurs when a process spends more time in page faults than
executing. It happens when:
The degree of multiprogramming is too high, leaving each process too few frames.
The working set (active pages) of the process is too large to fit in memory.
Solution:
Use working set model or page fault frequency to control memory allocation.
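The working-set model can be sketched directly from its definition: the working set at time t is the set of distinct pages referenced in the last Δ references (the window size Δ is a tuning parameter):

```python
def working_set(refs, t, delta):
    """Working set at time t: the distinct pages referenced in the last
    'delta' references (positions t-delta+1 .. t of the reference string)."""
    start = max(0, t - delta + 1)
    return set(refs[start:t + 1])

refs = [1, 2, 1, 3, 4, 4, 3]  # illustrative reference string
```

If the sum of working-set sizes across all processes exceeds the available frames, the OS is overcommitted and thrashing is likely; suspending a process shrinks that sum.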
UNIT-V
Linux
Linux is a powerful, open-source operating system based on UNIX. It was
originally developed by Linus Torvalds in 1991 and has since evolved into a robust
system used in servers, desktops, mobile devices, embedded systems, and
supercomputers.
1. Linux History
Origin: Started as a personal project by Linus Torvalds while studying at the
University of Helsinki.
Initially designed to work on x86 architecture, inspired by MINIX (a teaching
OS).
Licensed under the GNU General Public License (GPL), allowing users to
freely use, modify, and distribute it.
2. Design Principles
Linux follows several core design principles:
Modularity: Uses a modular kernel that can load and unload features
dynamically.
Open Source: The source code is freely available for inspection and
modification.
Security and Stability: Known for being secure, reliable, and rarely crashing.
3. Kernel Modules
A kernel module is a piece of code that can be loaded into the kernel
dynamically to extend its functionality without restarting the system.
Benefits: the base kernel stays small, drivers can be added or removed without rebooting, and new features are easier to develop and test.
Common module commands:
insmod – insert a module
rmmod – remove a module
lsmod – list loaded modules
modprobe – insert a module together with its dependencies
4. Process Management
Linux treats each process as a unique entity, represented internally by the task_struct structure, Linux's form of a process control block (PCB).
5. Scheduling
Linux uses preemptive multitasking; normal (time-sharing) tasks are scheduled by the Completely Fair Scheduler (CFS).
Supported scheduling classes include:
Real-time FIFO scheduling (SCHED_FIFO)
Real-time Round-Robin scheduling (SCHED_RR)
Time-sharing (SCHED_NORMAL, handled by CFS)
6. Memory Management
Linux uses virtual memory via paging and swapping.
Uses the concept of zones (DMA, Normal, HighMem) for physical memory
allocation.
Memory-related features: demand paging, copy-on-write on fork, memory-mapped files (mmap), and an out-of-memory (OOM) killer.
7. File Systems
Linux supports various file systems: ext2/ext3/ext4, XFS, Btrfs, FAT, and network file systems such as NFS.
Key concepts: inodes, directories, mount points, and the Virtual File System (VFS) layer, which gives all file systems a uniform interface.
Common file and directory commands: ls , cd , mkdir , rm , cp , mv , df , du
I/O devices are managed using device drivers loaded as kernel modules.
Linux provides several mechanisms for communication and synchronization
between processes:
Pipes and FIFOs
Message Queues
Semaphores
Shared Memory
Signals
Security Features:
File permissions and ownership (user/group/other, read/write/execute)
User and group IDs for access control
PAM (Pluggable Authentication Modules)
Mandatory access control via SELinux or AppArmor
Packet filtering via netfilter/iptables