The document provides an overview of operating systems, detailing their functions, types, services, and structures. It covers process management, including concepts like process life cycle, interprocess communication, and scheduling, as well as deadlocks and memory management techniques. Various scheduling algorithms and memory allocation strategies are discussed to highlight the complexities and functionalities of operating systems.

Operating System

UNIT-I

Introduction to Operating Systems


An Operating System (OS) is system software that acts as an intermediary
between computer hardware and the user. Its primary purpose is to manage
hardware resources efficiently and provide an environment in which applications
can run. It performs tasks like memory management, process scheduling, file
handling, input/output operations, and security control.
The operating system ensures optimal usage of hardware components and
provides a user-friendly interface, allowing users to interact with the system
through commands or graphical elements.

Types of Operating Systems


1. Batch Operating System

In a batch system, similar jobs are grouped together and executed one
after the other without user interaction.

Jobs are submitted to the computer operator, who batches them with
similar jobs.

Example: Early IBM systems.

Advantages: Efficient for large jobs, reduces idle time.

Disadvantages: No interaction with users during execution; difficult to debug.

2. Multiprogramming Operating System

Multiple programs reside in memory at the same time. The CPU switches
from one process to another to maximize CPU utilization.

When one program is waiting for I/O, the CPU executes another.

Advantages: Better resource utilization.

Disadvantages: More complex scheduling and memory management.

3. Time-Sharing Operating System

Also called Multitasking OS.

Each user or process is given a small time slice of the CPU. The system
switches rapidly between processes, giving the illusion of simultaneous
execution.

Examples: UNIX, Windows.

Advantages: Good for interactive systems, better response time.

Disadvantages: Complex to manage and maintain.

4. Distributed Operating System

Controls a group of distinct computers and makes them appear to the user
as a single system.

Resources are shared across multiple machines through a network.

Advantages: Resource sharing, increased reliability.

Disadvantages: Complex synchronization and communication mechanisms.

5. Clustered Operating System

Similar to distributed systems but tightly coupled.

Multiple systems are connected to work together as a single unit for high
availability.

Used in high-performance computing and server environments.

Advantages: Fault tolerance, load balancing.

Disadvantages: More expensive and complex setup.

6. Real-Time Operating System (RTOS)

Designed to process data and events within a fixed time constraint.

Used in embedded systems, robotics, medical systems, and more.

Two types:

Hard Real-Time OS: Strict deadlines (e.g., missile systems).

Soft Real-Time OS: Flexible timing (e.g., streaming).

Advantages: Predictable, responsive.

Disadvantages: Limited multitasking, resource-constrained.

Operating System Services


Operating systems provide several essential services for user programs and
system operations:

1. Program Execution: Loads programs into memory and executes them.

2. I/O Operations: Manages input and output with devices.

3. File System Manipulation: Provides access to files and directories.

4. Communication Services: Enables interprocess communication.

5. Error Detection: Monitors for hardware and software errors.

6. Resource Allocation: Manages CPU, memory, and I/O devices.

7. Security and Protection: Protects data and system integrity from unauthorized access.

Operating System Structure


The structure of an operating system defines how it is organized and how various
components interact. Common OS structures include:

1. Monolithic System:

Entire OS works in kernel space as a single unit.

All services are integrated into one large block.

Example: MS-DOS.

2. Layered Approach:

OS is divided into layers, each built on top of lower layers.

Separation of concerns makes it more manageable and easier to debug.

Example: THE operating system.

3. Microkernel System:

Only essential functions (like communication, I/O) are in the kernel; other
services run in user space.

Increases system security and stability.

Example: QNX, MINIX.

4. Modules:

Modern operating systems use a modular approach.

Core kernel with dynamically loadable modules.

Example: Linux.

5. Hybrid Systems:

Combine features of monolithic and microkernel systems.

Example: Windows, macOS.

UNIT-II

1. Process Concept
A process is a program in execution. It is more than just the program code (which
is known as the text section). It includes the current activity, represented by the
value of the program counter, the contents of the processor’s registers, the
process stack (which contains temporary data such as function parameters,
return addresses, and local variables), and a data section that contains global
variables.

A process is the unit of work in a modern time-sharing system. Systems allow multiple processes to run concurrently, sharing the CPU. Each process has a lifecycle and a unique identity, and the operating system manages all aspects of its execution.

2. Process Life Cycle

The process life cycle describes the various stages a process goes through
during its lifetime. The main states are:

New: The process is being created.

Ready: The process is waiting to be assigned to a processor.

Running: Instructions are being executed.

Waiting (Blocked): The process is waiting for some event to occur (e.g., I/O
completion).

Terminated: The process has finished execution.

Transitions between these states are managed by the operating system, which
schedules and allocates resources accordingly. State diagrams are used to
represent these transitions.

3. Interprocess Communication (IPC)


Interprocess Communication (IPC) refers to the mechanisms an operating
system provides to allow processes to manage shared data and communicate with
one another. It is essential in multitasking environments where processes must
work together.
There are two basic models of IPC:

Message Passing: Processes communicate by sending and receiving messages.

Shared Memory: Processes share a region of memory and read/write data directly.

IPC is crucial for synchronization, coordination, and data exchange between processes. Examples of IPC mechanisms include pipes, message queues, sockets, and shared memory.
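As a minimal sketch of the message-passing model, the snippet below sends one message from a child process to its parent through an unnamed pipe. It is POSIX-only (it assumes os.fork and os.pipe are available, as on Linux), and the message text is illustrative:

```python
import os

# One unnamed pipe: the child writes into one end, the parent reads the other.
read_end, write_end = os.pipe()
pid = os.fork()
if pid == 0:                          # child process: the sender
    os.close(read_end)                # close the end it does not use
    os.write(write_end, b"hello via pipe")
    os.close(write_end)
    os._exit(0)
else:                                 # parent process: the receiver
    os.close(write_end)
    message = os.read(read_end, 1024)
    os.close(read_end)
    os.waitpid(pid, 0)                # reap the child
    print(message.decode())           # hello via pipe
```

Named pipes, message queues, and sockets follow the same send/receive pattern with different addressing and lifetime rules.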

4. Process Control Block (PCB)


The Process Control Block (PCB) is a data structure maintained by the operating
system for every process. It contains all the information about a process,
including:

Process State

Program Counter

CPU Registers

Memory Management Information

Accounting Information

I/O Status Information

The PCB is essential for context switching, as it stores the snapshot of a process
so it can resume execution correctly after being paused.

5. Process Scheduling
Process Scheduling is the activity of the operating system that handles the
execution of processes. The goal is to maximize CPU utilization and system
responsiveness.
Schedulers are classified into:

Long-Term Scheduler (Job Scheduler): Decides which jobs or processes are admitted into the system.

Short-Term Scheduler (CPU Scheduler): Decides which of the ready processes gets the CPU next.

Medium-Term Scheduler: Temporarily removes processes from main memory and places them in secondary storage (swapping).

Common scheduling algorithms:

First-Come, First-Served (FCFS)

Shortest Job Next (SJN)

Round Robin (RR)

Priority Scheduling

Multilevel Queue Scheduling

Each has its own advantages and is chosen based on the system’s requirements.

6. Process Synchronization
Process Synchronization involves coordinating processes that share resources to
avoid conflicts, such as race conditions, where multiple processes access and
modify shared data concurrently.

Critical-Section Problem
The critical section refers to the portion of a process that accesses shared
resources (like data structures, files, etc.). The critical-section problem arises
when multiple processes execute their critical sections simultaneously, leading to
inconsistent or corrupted data.

To solve this, we need:

1. Mutual Exclusion: Only one process can be in the critical section at a time.

2. Progress: If no process is in the critical section, one of the waiting processes should be allowed to enter.

3. Bounded Waiting: A limit must exist on the number of times other processes
are allowed to enter the critical section after a process has made a request.

Synchronization Hardware
Hardware-based solutions use machine-level instructions to ensure mutual
exclusion. For example:

Test-and-Set

Swap Instruction

These are atomic operations provided by hardware to prevent race conditions without disabling interrupts.
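A software emulation can illustrate how Test-and-Set builds a spinlock: the instruction atomically returns the old value of a flag while setting it to True. This is only a sketch; a private lock stands in for the atomicity that real hardware gets from a single machine instruction.

```python
import threading

_flag = False                     # the lock flag a real TAS would operate on
_guard = threading.Lock()         # stands in for hardware atomicity

def test_and_set():
    """Atomically return the old flag value and set the flag to True."""
    global _flag
    with _guard:
        old = _flag
        _flag = True
        return old

def release():
    global _flag
    _flag = False

counter = 0
def critical_increment():
    global counter
    while test_and_set():         # spin until the flag was previously free
        pass
    counter += 1                  # critical section: safe from races
    release()

threads = [threading.Thread(target=critical_increment) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                    # 100: no increments were lost
```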

Semaphores
A semaphore is a synchronization tool used to control access to shared
resources. It is an integer variable that is accessed through two atomic operations:

wait (P): Decrements the semaphore. If the value becomes negative, the
process is blocked.

signal (V): Increments the semaphore. If there are blocked processes, one of
them is unblocked.

Semaphores can be:

Binary (mutex): Only 0 or 1; used for mutual exclusion.

Counting: Allows a resource to be accessed by a limited number of processes.
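A minimal sketch of a counting semaphore in use, with Python's threading.Semaphore and an illustrative limit of 2 concurrent holders out of 5 worker threads:

```python
import threading
import time

sem = threading.Semaphore(2)          # counting semaphore, initial value 2
active = 0                            # threads currently holding the resource
peak = 0                              # most threads that ever held it at once
state_lock = threading.Lock()

def worker():
    global active, peak
    with sem:                         # wait (P) on entry, signal (V) on exit
        with state_lock:
            active += 1
            peak = max(peak, active)
        time.sleep(0.01)              # simulate using the shared resource
        with state_lock:
            active -= 1

threads = [threading.Thread(target=worker) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("peak concurrent holders:", peak)   # never exceeds 2
```

Setting the initial value to 1 turns this into a binary semaphore (mutex).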

Critical Region
The critical region refers to the code that accesses shared resources and must be
protected by synchronization mechanisms. Only one process should execute in
this region at a time.

Monitors
A monitor is a high-level synchronization construct that provides a convenient and
effective mechanism for process synchronization. It allows only one process to be
active within the monitor at any given time. Monitors encapsulate shared
variables, operations, and the synchronization between concurrent process calls.
They use condition variables and two main operations:

wait(): Makes a process wait inside the monitor.

signal(): Resumes execution of a waiting process.

Monitors are supported by high-level languages like Java and are easier to use
and less error-prone than semaphores.
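A monitor-like construct can be sketched with Python's threading.Condition: the condition's internal lock ensures only one thread is active "inside the monitor" at a time, and wait()/notify() play the roles of wait()/signal(). The BoundedCounter class and its limit are illustrative, not from the text.

```python
import threading

class BoundedCounter:
    """Monitor sketch: value may never exceed limit."""
    def __init__(self, limit):
        self.value = 0
        self.limit = limit
        self.cond = threading.Condition()

    def increment(self):
        with self.cond:                      # enter the monitor
            while self.value >= self.limit:
                self.cond.wait()             # wait() until there is room
            self.value += 1

    def decrement(self):
        with self.cond:                      # enter the monitor
            self.value -= 1
            self.cond.notify()               # signal() one waiting thread

c = BoundedCounter(limit=1)
t = threading.Thread(target=lambda: (c.increment(), c.increment()))
t.start()            # the second increment blocks until a decrement occurs
c.decrement()
t.join()
print(c.value)       # 1
```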

UNIT-III

Deadlocks
A deadlock is a state in a system where a group of processes are blocked
because each process is holding a resource and waiting for another resource that
is being held by another process in the group. As a result, none of the processes
can proceed.

1. Deadlock Characterization
For a deadlock to occur, four necessary conditions must hold simultaneously:

1. Mutual Exclusion: At least one resource must be held in a non-shareable mode.

2. Hold and Wait: A process is holding at least one resource and waiting to
acquire additional resources held by other processes.

3. No Preemption: Resources cannot be forcibly taken from a process; they must be released voluntarily.

4. Circular Wait: A set of processes exists such that each process is waiting for
a resource held by the next process in the chain, forming a circular chain.

These conditions can be represented graphically using a Resource Allocation Graph (RAG).

2. Methods for Handling Deadlocks


There are four general strategies to handle deadlocks:

1. Deadlock Prevention: Proactively deny one of the necessary conditions for deadlocks.

2. Deadlock Avoidance: Dynamically examine the resource-allocation state to ensure that a circular wait condition does not occur.

3. Deadlock Detection and Recovery: Allow the system to enter a deadlock state, detect it, and recover from it.

4. Ignore the Problem: Assume that deadlocks are rare and do nothing about
them (used in many systems like UNIX).

3. Deadlock Prevention
This approach ensures at least one of the four necessary conditions never holds:

Mutual Exclusion: Difficult to prevent for non-shareable resources like printers.

Hold and Wait: Require processes to request all resources at once.

No Preemption: Allow the system to forcibly remove resources from
processes.

Circular Wait: Impose an ordering of resource acquisition to prevent circular chains.

4. Deadlock Avoidance
In this strategy, the OS keeps track of the resource allocation state and makes
decisions to avoid unsafe states.

The most famous algorithm is Banker's Algorithm, which checks for safe
state before granting resource requests.

A safe state is one where the system can allocate resources to each process
in some order without leading to a deadlock.
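The safety check at the heart of Banker's Algorithm can be sketched as follows. A state is safe if every process can finish in some order using only currently available resources plus what already-finished processes release; the matrices below are illustrative, textbook-style values.

```python
def is_safe(available, max_need, allocation):
    n = len(allocation)                            # number of processes
    # Need = Max - Allocation, per process and per resource type
    need = [[m - a for m, a in zip(max_need[i], allocation[i])]
            for i in range(n)]
    work = list(available)                         # resources free right now
    finished = [False] * n
    progressed = True
    while progressed:
        progressed = False
        for i in range(n):
            if not finished[i] and all(nj <= wj
                                       for nj, wj in zip(need[i], work)):
                # process i can run to completion, then release its allocation
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progressed = True
    return all(finished)

# Illustrative 5-process, 3-resource-type state.
avail = [3, 3, 2]
max_need = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
alloc = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
print(is_safe(avail, max_need, alloc))   # True: e.g. order P1, P3, P4, P2, P0
```

Before granting a request, the full algorithm tentatively applies it and runs this check; the request is granted only if the resulting state is still safe.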

5. Deadlock Detection
If deadlocks are allowed to occur, the system must have a mechanism to detect
them:

For a single instance of each resource type, a wait-for graph can be used.

For multiple instances, a detection algorithm similar to Banker's is used to track resource allocation and determine if a deadlock exists.

6. Deadlock Recovery
Once a deadlock is detected, recovery methods include:

Process Termination: Abort all deadlocked processes or one at a time until the deadlock is resolved.

Resource Preemption: Temporarily take resources from other processes and allocate them to those in need.

Rollback: Roll back processes to a safe state using checkpointing.

CPU Scheduling

CPU Scheduling is the process of selecting a process from the ready queue and
allocating the CPU to it. It is one of the most important tasks of the operating
system to ensure fair and efficient use of the CPU.

1. CPU Schedulers
There are different types of schedulers:

Long-Term Scheduler (Job Scheduler): Decides which jobs enter the ready
queue.

Short-Term Scheduler (CPU Scheduler): Selects from among the ready processes for execution.

Medium-Term Scheduler: Swaps processes in and out of memory to improve performance.

2. Scheduling Criteria
When evaluating scheduling algorithms, several criteria are considered:

CPU Utilization: Keep the CPU as busy as possible.

Throughput: Number of processes completed per unit of time.

Turnaround Time: Total time taken from submission to completion.

Waiting Time: Total time a process spends in the ready queue.

Response Time: Time from submission to the first response (important for
interactive systems).

Fairness: Each process should get a fair share of CPU time.

3. Scheduling Algorithms
There are several CPU scheduling algorithms, each with its own pros and cons:

1. First-Come, First-Served (FCFS):

Non-preemptive.

Processes are executed in the order they arrive.

Simple but may cause convoy effect (long waiting time for short
processes).

2. Shortest Job Next (SJN) / Shortest Job First (SJF):

Non-preemptive or preemptive.

Selects the process with the smallest execution time.

Optimal in terms of average waiting time but difficult to predict job lengths.

3. Round Robin (RR):

Preemptive.

Each process is given a fixed time quantum in a cyclic order.

Good for time-sharing systems, improves response time.

4. Priority Scheduling:

Processes are scheduled according to priority.

Can be preemptive or non-preemptive.

Risk of starvation for low-priority processes (solved by aging).

5. Multilevel Queue Scheduling:

Processes are divided into multiple queues (e.g., foreground, background), each with its own scheduling algorithm.

Priorities determine queue selection.

6. Multilevel Feedback Queue:

A more flexible version of multilevel queues.

Processes can move between queues based on behavior and history.
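The difference between FCFS and Round Robin can be made concrete by computing waiting times for three illustrative CPU bursts that all arrive at time 0:

```python
from collections import deque

bursts = {"P1": 24, "P2": 3, "P3": 3}        # burst lengths (time units)

def fcfs_waiting(bursts):
    t, waits = 0, {}
    for name, burst in bursts.items():       # run in arrival order
        waits[name] = t                      # waits for everything before it
        t += burst
    return waits

def rr_waiting(bursts, quantum):
    remaining = dict(bursts)
    queue = deque(bursts)
    t, finish = 0, {}
    while queue:
        name = queue.popleft()
        run = min(quantum, remaining[name])
        t += run
        remaining[name] -= run
        if remaining[name]:
            queue.append(name)               # preempted: back of the queue
        else:
            finish[name] = t
    # waiting time = completion time - arrival time (0) - burst length
    return {n: finish[n] - bursts[n] for n in bursts}

fw = fcfs_waiting(bursts)
rw = rr_waiting(bursts, quantum=4)
print(sum(fw.values()) / 3)                  # 17.0  (FCFS: convoy effect)
print(round(sum(rw.values()) / 3, 2))        # 5.67  (RR with q = 4)
```

The long P1 burst makes P2 and P3 wait 24 and 27 units under FCFS, while Round Robin lets them finish early at the cost of more context switches.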

UNIT-IV
Memory Management
Memory management is a crucial function of the operating system that manages primary memory. The OS is responsible for allocating and deallocating memory to various processes and ensuring efficient use of memory resources.

1. Swapping
Swapping is a memory management technique where processes are temporarily
moved from main memory to secondary storage (usually the hard disk) and
brought back when needed.

Used to increase the number of processes in memory.

Allows better CPU utilization.

Swap space is the portion of the hard disk reserved for swapping.

Disadvantage: Time-consuming due to disk I/O (known as swap time).

2. Contiguous Allocation
In contiguous memory allocation, each process is allocated a single contiguous
block of memory.

a) Advantages:
Simple to implement.

Minimal overhead in memory management.

b) Disadvantages:
Leads to fragmentation.

Internal Fragmentation
Occurs when allocated memory may be slightly larger than the requested
memory, leaving unused space within the allocated block.

External Fragmentation
Occurs when enough total memory is free, but it is not contiguous; thus, it can't
be used by a process needing a large block.

Compaction can be used to reduce external fragmentation by shifting
processes to make free space contiguous.

3. Non-Contiguous Allocation
Allows a process to be allocated memory in different locations, reducing
fragmentation and increasing flexibility.

a) Paging
Divides physical memory into fixed-size blocks called frames, and logical
memory into blocks of the same size called pages.

The OS keeps a page table that maps logical pages to physical frames.

Advantages: Eliminates external fragmentation.

Disadvantage: May lead to internal fragmentation.
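Address translation under paging can be sketched as follows, using an illustrative 1 KB page size and a made-up page table:

```python
PAGE_SIZE = 1024                       # bytes per page (and per frame)
page_table = {0: 5, 1: 2, 2: 7}        # page number -> frame number (made up)

def translate(logical):
    page = logical // PAGE_SIZE        # high-order bits select the page
    offset = logical % PAGE_SIZE       # low-order bits pass through unchanged
    return page_table[page] * PAGE_SIZE + offset

print(translate(1030))                 # page 1, offset 6 -> frame 2 -> 2054
```

Real hardware does the same split with bit masking and consults the page table through the MMU (often via a TLB cache) on every memory reference.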

b) Demand Paging
Pages are loaded into memory only when they are needed, not in advance.

Pages not currently in use are kept in secondary memory.

Page fault: Occurs when a requested page is not in memory.

Advantages:

Saves memory space.

Efficient use of resources.

Disadvantages:

May lead to performance issues if page faults occur frequently.

c) Segmentation
Memory is divided based on logical divisions of a program: code, stack, heap,
data, etc.

Each segment has a segment number and offset.

Segmentation provides better logical organization of memory than paging.

Can result in external fragmentation.

4. Page Replacement
When a page is required and there are no free frames, the OS must choose a page
to remove and replace it with the required one. This is called page replacement.

Page Replacement Algorithms


1. FIFO (First-In, First-Out):

Oldest page is replaced first.

Simple but may cause more page faults (Belady’s Anomaly).

2. LRU (Least Recently Used):

Replaces the page that hasn’t been used for the longest time.

Performs well but needs hardware support or software approximation.

3. Optimal Page Replacement:

Replaces the page that will not be used for the longest time in the future.

Theoretical; used for comparison.

4. Clock Algorithm:

Circular buffer with a "use" bit; simulates LRU in an efficient way.
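FIFO and LRU can be compared by counting page faults on the same reference string; the string and the 3-frame memory size below are illustrative.

```python
from collections import OrderedDict, deque

def fifo_faults(refs, frames):
    mem, order, faults = set(), deque(), 0
    for p in refs:
        if p not in mem:
            faults += 1
            if len(mem) == frames:
                mem.remove(order.popleft())   # evict the oldest-loaded page
            mem.add(p)
            order.append(p)
    return faults

def lru_faults(refs, frames):
    mem, faults = OrderedDict(), 0            # insertion order tracks recency
    for p in refs:
        if p in mem:
            mem.move_to_end(p)                # hit: mark most recently used
        else:
            faults += 1
            if len(mem) == frames:
                mem.popitem(last=False)       # evict the least recently used
            mem[p] = True
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print(fifo_faults(refs, 3), lru_faults(refs, 3))   # 10 9
```

Here LRU saves one fault by exploiting recency; on strings exhibiting Belady's Anomaly, FIFO can even get worse when given more frames.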

5. Performance of Demand Paging


Depends on the page fault rate.

The effective access time (EAT) can be calculated using:


EAT = (1 - p) * memory access time + p * page fault time

where p is the page fault rate.

High page fault rate leads to performance degradation.
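Plugging illustrative numbers into the formula shows how sharply faults dominate: with a 200 ns memory access and an 8 ms fault-service time (both assumed for the example), even p = 0.001 slows effective access by a factor of about 40.

```python
memory_access = 200            # nanoseconds (illustrative)
fault_service = 8_000_000      # 8 milliseconds, expressed in nanoseconds

def eat(p):
    """EAT = (1 - p) * memory access time + p * page fault time."""
    return (1 - p) * memory_access + p * fault_service

print(eat(0.0))                # 200.0 ns: no faults at all
print(eat(0.001))              # ~8199.8 ns: one fault per thousand accesses
```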

6. Allocation of Frames

The OS must decide how many frames to allocate to each process.

Strategies:

Equal Allocation: Each process gets the same number of frames.

Proportional Allocation: Frames are allocated based on the size of the process.

Global Allocation: A process can take frames from others.

Local Allocation: Each process is assigned a fixed number of frames.

7. Thrashing
Thrashing occurs when a process spends more time in page faults than
executing. It happens when:

There is insufficient memory.

Too many processes are running simultaneously.

The working set (active pages) of the process is too large to fit in memory.

Solution:

Reduce the degree of multiprogramming.

Use working set model or page fault frequency to control memory allocation.

UNIT-V
Linux
Linux is a powerful, open-source operating system based on UNIX. It was
originally developed by Linus Torvalds in 1991 and has since evolved into a robust
system used in servers, desktops, mobile devices, embedded systems, and
supercomputers.

1. Linux History
Origin: Started as a personal project by Linus Torvalds while studying at the
University of Helsinki.

Initially designed to work on x86 architecture, inspired by MINIX (a teaching
OS).

Licensed under the GNU General Public License (GPL), allowing users to
freely use, modify, and distribute it.

With contributions from thousands of developers worldwide, Linux has become the backbone of major systems like Android, Ubuntu, Red Hat, Fedora, and more.

2. Design Principles
Linux follows several core design principles:

Modularity: Uses a modular kernel that can load and unload features
dynamically.

Portability: Designed to run on various hardware platforms.

Multiuser and Multitasking: Supports multiple users and simultaneous process execution.

Open Source: The source code is freely available for inspection and
modification.

UNIX Compatibility: Compatible with most UNIX standards and tools.

Security and Stability: Known for being secure, reliable, and rarely crashing.

3. Kernel Modules
A kernel module is a piece of code that can be loaded into the kernel
dynamically to extend its functionality without restarting the system.

Examples: device drivers, file system support, network protocols.

Benefits:

Reduces kernel size.

Improves flexibility and maintainability.

Easier debugging and updating.

Linux provides tools like:

insmod – insert a module

rmmod – remove a module

lsmod – list loaded modules

4. Process Management
Linux treats each process as a unique entity, represented by a process
control block (PCB).

Each process has a unique PID (Process ID).

Supports foreground and background processes, zombie and orphan states.

Parent-child relationships are maintained using the fork() system call.

Important process commands:

ps, top, kill, nice, renice, killall
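A minimal sketch of fork() in action (POSIX-only, as on Linux): the call returns 0 in the child and the child's PID in the parent, and the parent reaps the child with waitpid() to avoid leaving a zombie. The exit status 7 is arbitrary.

```python
import os

pid = os.fork()
if pid == 0:                          # child: fork() returned 0
    os._exit(7)                       # terminate with an arbitrary status
else:                                 # parent: fork() returned the child PID
    _, status = os.waitpid(pid, 0)    # wait for (reap) the child
    exit_code = os.WEXITSTATUS(status)
    print("reaped child", pid, "exit code", exit_code)   # exit code 7
```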

5. Scheduling
Linux uses preemptive multitasking with advanced scheduling algorithms.

The Completely Fair Scheduler (CFS) is used in modern Linux versions.

Priorities are assigned using nice values.

Supports:

Real-time scheduling

Round-Robin

FIFO (First-In, First-Out)

Time-sharing

6. Memory Management
Linux uses virtual memory via paging and swapping.

Supports demand paging, page replacement algorithms, and thrashing control.

Uses the concept of zones (DMA, Normal, HighMem) for physical memory
allocation.

Memory-related features:

Buddy System for memory allocation.

Slab Allocator for caching frequently used objects.

7. File Systems
Linux supports various file systems:

Ext2, Ext3, Ext4

XFS, Btrfs, ReiserFS

NTFS (read-only/write with additional drivers)

Key concepts:

Everything is a file, including devices.

File types: regular, directory, symbolic links, character/block special files.

Uses inode to store metadata.

Mounting integrates file systems into the hierarchy.

File system commands:

ls, cd, mkdir, rm, cp, mv, df, du

8. Input and Output


Linux handles I/O via device files located in the /dev directory.

I/O devices are managed using device drivers loaded as kernel modules.

Uses a buffer cache and supports asynchronous I/O.

I/O redirection: >, <, >>, |

9. Interprocess Communication (IPC)

Linux provides several mechanisms for communication and synchronization
between processes:

Pipes (unnamed and named)

Message Queues

Semaphores

Shared Memory

Signals

Sockets (used in networking)

System calls and tools: pipe(), shmget(), semop(), msgsnd(), etc.

10. Network Structure


Linux has powerful networking support built into the kernel.

Includes support for:

TCP/IP, UDP, ICMP protocols

Network interfaces like Ethernet, Wi-Fi

Routing, bridging, firewalls

Tools: ifconfig, ip, ping, netstat, ss, iptables, nmap

11. Security Summary


Linux is considered secure due to its permission and ownership model.

Security Features:

User Authentication: via login, passwords, and /etc/passwd

File Permissions: read, write, execute (for user, group, others)

SELinux (Security-Enhanced Linux): Adds mandatory access control.

Firewall & Packet Filtering: via iptables, firewalld

Pluggable Authentication Modules (PAM): Manage authentication tasks.
