Live Seminar: Subject - Operating System Date - 07 Dec, 2022
The OS lets you communicate with the computer without knowing how to speak the
computer's language. It is not possible for the user to use any computer or
mobile device without an operating system. Operating systems also have drawbacks:
If any issue occurs in the OS, you may lose all the contents stored in your system.
Operating system software, for example Windows, is quite expensive for small
organizations, which adds to their burden.
It is never entirely secure, as a threat can occur at any time.
Dual mode operation
The dual-mode operation in the operating system protects the operating system
from illegal users. We accomplish this defense by designating the machine
instructions that can cause harm as privileged instructions.
An error in one program can adversely affect many processes: it might modify the
data of another program, or even affect the operating system itself. For example,
if a process gets stuck in an infinite loop, that loop could affect the correct
operation of other processes. So, to ensure the proper execution of the operating
system, there are two modes of operation:
User mode
When the computer system runs user applications, such as creating a text
document or using any application program, the system is in user mode. When the
user application requests a service from the operating system, or an interrupt
or system call occurs, there is a transition from user mode to kernel mode to
fulfill the request.
Kernel Mode
When the system boots, the hardware starts in kernel mode, and once the operating
system is loaded, it starts user applications in user mode. To protect the
hardware, privileged instructions execute only in kernel mode. If a user attempts
to run a privileged instruction in user mode, the hardware treats the
instruction as illegal and traps to the OS. Some of the privileged instructions are:
1. Handling Interrupts
2. To switch from user mode to kernel mode.
3. Input-Output management.
Batch Operating System
The purpose of this operating system was mainly to transfer control from one job
to the next as soon as a job was completed. It contained a small set of programs
called the resident monitor that always resided in one part of main memory.
The remaining part was used for servicing jobs.
In the 1970s, batch processing was very popular. In this technique, similar types
of jobs were batched together and executed together. People shared a single
computer, which was called a mainframe.
In Batch operating system, access is given to more than one person; they submit
their respective jobs to the system for the execution.
The system put all of the jobs in a queue on the basis of first come first serve and
then executes the jobs one by one. The users collect their respective output when
all the jobs get executed.
Step 1 − The user prepares the job, traditionally on punch cards.
Step 2 − The user submits the job to the computer operator.
Step 3 − The operator collects the jobs from different users and sorts the jobs
into batches with similar needs.
Step 4 − Finally, the operator submits the batches to the processor one by one.
Advantages
The time taken by the system to execute all the programs will be reduced.
It can be shared between multiple users.
Disadvantages
Multi-programming OS
Advantages
CPU utilization is high because the CPU never goes to the idle state.
Memory utilization is efficient.
CPU throughput is high and also supports multiple interactive user
terminals.
Disadvantages
CPU scheduling is compulsory because many jobs are ready to run on the CPU
simultaneously.
Users are not able to interact with a job while it is executing.
Programmers also cannot modify a program that is being executed.
Time-shared OS
Disadvantages :
1. Reliability problem.
2. One must take care of the security and integrity of user programs and data.
3. Data communication problem.
Multi-processor OS
In operating systems, to improve performance, more than one CPU can be used
within one computer system; this is called a multiprocessor operating system.
Multiple CPUs are interconnected so that a job can be divided among them for
faster execution. When a job finishes, results from all CPUs are collected and
compiled to give the final output. Jobs need to share main memory, and they may
also share other system resources among themselves. Multiple CPUs can also be
used to run multiple jobs simultaneously.
Advantages
Increased reliability: Due to the multiprocessing system, processing tasks
can be distributed among several processors. This increases reliability: if
one processor fails, the task can be given to another processor for
completion.
Increased throughput: As the number of processors increases, more work can be
done in less time.
Economy of scale: As multiprocessor systems share peripherals, secondary
storage devices, and power supplies, they are relatively cheaper than
multiple single-processor systems.
Disadvantages
Types of Multi-processor OS -:
Symmetric
Asymmetric
Asymmetric Multiprocessor
Every processor is assigned a specific task in this operating system, and a
master processor controls the entire system. It uses a master-slave
relationship between the processors.
Symmetric Multiprocessor
In this system, every processor runs an identical copy of the OS, and the
processors can communicate with one another. All processors are connected as
peers, meaning there is no master-slave relationship.
Real-time OS
A real-time operating system (RTOS) serves applications that must respond within
strict timing constraints. In a hard RTOS, all critical tasks must be completed
within the specified time duration, i.e., within the given deadline. Missing the
deadline would result in critical failures such as damage to equipment or even
loss of human life.
For Example,
Consider the airbags provided by carmakers in the steering wheel of the
driver's seat. When the driver brakes hard at a particular instant, the airbags
inflate and prevent the driver's head from hitting the wheel. A delay of even a
few milliseconds would result in an accident.
A soft RTOS tolerates a few delays. In this kind of RTOS, a deadline is assigned
to a particular job, but a delay of a small amount of time is acceptable. So,
deadlines are handled softly by this kind of RTOS.
For Example,
This type of system is used in online transaction systems and livestock price
quotation systems.
Advantages
It is easy to design, develop and execute real-time applications under a
real-time operating system.
Real-time operating systems are compact, so they require much less memory
space.
A real-time operating system makes maximum utilization of devices and
systems.
It focuses on running applications and gives less importance to applications
waiting in the queue.
Since the size of programs is small, an RTOS can also be used in embedded
systems such as those in transport and others.
Disadvantages
Real-time operating systems have complicated design principles and are very
costly to develop.
Real-time operating systems are very complex and can consume critical
CPU cycles.
Android Operating System
Among all the components, the Linux kernel provides the main operating system
functionality to smartphones, and the Dalvik Virtual Machine (DVM) provides the
platform for running Android applications. The main components of the Android
architecture are:
Applications
Application Framework
Android Runtime
Platform Libraries
Linux Kernel
Android Runtime
Like the Java Virtual Machine (JVM), the Dalvik Virtual Machine (DVM) is a
register-based virtual machine, specially designed and optimized for Android to
ensure that a device can run multiple instances efficiently. It depends on the
Linux kernel layer for threading and low-level memory management. The core
libraries enable us to implement Android applications using the standard Java or
Kotlin programming languages.
Platform libraries
The Platform Libraries include various C/C++ core libraries and Java-based
libraries such as Media, Graphics, Surface Manager, OpenGL, etc. to support
Android development.
The Media library provides support to play and record audio and video in
various formats.
The Surface Manager is responsible for managing access to the display subsystem.
SGL and OpenGL, both cross-language, cross-platform application programming
interfaces (APIs), are used for 2D and 3D computer graphics.
SQLite provides database support and FreeType provides font support.
WebKit: this open-source web browser engine provides all the functionality
to display web content and to simplify page loading.
SSL (Secure Sockets Layer) is a security technology used to establish an
encrypted link between a web server and a web browser.
Linux Kernel
The Linux kernel is the heart of the Android architecture. It manages all the
available drivers, such as display drivers, camera drivers, Bluetooth drivers,
audio drivers, memory drivers, etc., which are required during runtime.
MS-DOS
MS-DOS is one of the oldest and most widely used operating systems. DOS is a set
of computer programs whose major functions are file management, allocation of
system resources, and providing essential features to control hardware devices.
DOS commands fall into two categories:
Internal Commands − Commands such as DEL, COPY, TYPE, etc. are the
internal commands that remain stored in computer memory.
External Commands − Commands like FORMAT, DISKCOPY, etc. are
the external commands and remain stored on the disk.
Features of UNIX
Multi-user: The UNIX operating system supports more than one user accessing
computer resources like main memory, hard disk, tape drives, etc. Multiple users
can log on to the system from different terminals and run different jobs that
share the resources of a command terminal. It follows the principle of
time-sharing.
Time-sharing is done by a scheduler that divides the CPU time into several
segments, also called time slices, and assigns a segment to each user on a
scheduled basis. The time slice is tiny; when it expires, control passes to the
next user on the system. Each user executes their set of instructions within
their time slice.
Portability: This feature lets UNIX work on different machines and platforms,
with easy transfer of code to any computer system, since a significant portion
of UNIX is written in the C language and only a tiny portion is coded in
assembly language for specific hardware.
File Security and Protection: Being a multi-user system, UNIX gives special
consideration to file and system security. UNIX provides different levels of
security: assigning a username and password to individual users to ensure
authentication; file access permissions, viz. read, write and execute; and
lastly file encryption, which changes a file into an unreadable format.
Command Structure: UNIX commands are easy to understand and simple to use, for
example "cp", "mv", etc. While working in the UNIX environment, the UNIX
commands are case-sensitive and are entered in lower case.
Open Source: Many UNIX-like operating systems, such as Linux and the BSDs, are
open source: freely available to all and developed as community-based projects.
Accounting: UNIX keeps an account of the jobs created by each user. This feature
enhances system performance in terms of CPU monitoring and disk space checking.
It allows you to keep an account of the disk space used by each user, and disk
space can be limited per user: you can assign every user a different disk quota.
The root user can perform these accounting tasks using various commands such as
quota, df, du, etc.
UNIX Tools and Utilities: The UNIX system provides various tools and utilities
such as grep, sed and awk, etc. Some of the general-purpose tools are compilers,
interpreters, network applications, etc. It also includes various server
programs which provide remote access and administration services.
Operating System Services
An operating system provides services to both the users and the programs:
Program execution
I/O operations
File System manipulation
Communication
Error Detection
Resource Allocation
Protection
Program execution
Operating systems handle many kinds of activities from user programs to system
programs like printer spooler, name servers, file server, etc. Each of these activities
is encapsulated as a process.
I/O Operation
I/O operation means read or write operation with any file or any specific I/O
device.
The operating system provides access to the required I/O device when
required.
The operating system gives a program permission to operate on a file.
Permissions vary from read-only and read-write to denied, and so on.
File System Manipulation
The operating system provides an interface to the user to create/delete files.
The operating system provides an interface to the user to create/delete
directories.
The operating system provides an interface to create a backup of the file system.
Communication
In the case of distributed systems, which are collections of processors that do
not share memory, peripheral devices, or a clock, the operating system manages
communication between all the processes. Multiple processes communicate with
one another through communication lines in the network.
The OS handles routing and connection strategies, and the problems of contention
and security.
Error Detection
Errors can occur anytime and anywhere: in the CPU, in I/O devices, or in the
memory hardware. The operating system constantly checks for possible errors and
takes appropriate action to ensure correct and consistent computing.
Resource Management
Protection
System Calls
Processes execute normally in user mode until a system call interrupts this.
The system call is then executed on a priority basis in kernel mode. After the
execution of the system call, control returns to user mode and execution of the
user process can be resumed.
In general, system calls are required in the following situations −
If a file system requires the creation or deletion of files. Reading and writing
from files also require a system call.
Creation and management of new processes.
Network connections also require system calls. This includes sending and
receiving packets.
Access to hardware devices such as a printer, scanner, etc. requires a
system call.
There are mainly five types of system calls. These are explained in detail as
follows −
Process Control
These system calls deal with processes such as process creation, process
termination etc.
File Management
These system calls are responsible for file manipulation such as creating a file,
reading a file, writing into a file etc.
Device Management
These system calls are responsible for device manipulation such as reading from
device buffers, writing into device buffers etc.
Information Maintenance
These system calls handle information and its transfer between the operating
system and the user program.
Communication
These system calls are useful for interprocess communication. They also deal with
creating and deleting a communication connection.
Task Scheduler
Performance Monitor
Task Scheduler
The Task Scheduler is a tool included with Windows that allows predefined
actions to be automatically executed whenever a certain set of conditions is met.
For example, you can schedule a task to run a backup script every night, or send
you an e-mail whenever a certain system event occurs.
For most purposes, creating a Basic Task will serve your needs. To create one,
follow these steps.
1. In the Action menu, select Create basic task to open the Create
Basic Task wizard.
2. In the Name: field, enter a name for your task. You may optionally add a
description of the task in the Description: text box below. When this is done,
click Next.
3. Select a trigger from the options presented to define when you want your
task to run. When you're ready to continue, click Next. Depending on your
trigger selection, you may be prompted to choose a specific time, day, or
event. If so, make your choice and then click Next.
4. Select the Action you want to perform when the trigger occurs (Start a
program, Send an e-mail, or Display a message), then click Next.
5. Depending on the Action you chose in the previous step, fill out the relevant
information, then click Next.
6. Finally, the Finish screen displays your task as you configured it. If you
need to make changes, click the Back button to return to a previous step,
make your changes, then click Next until you return to the Finish screen.
7. To finish configuring your task, click Finish. The next time your trigger
occurs, the task runs.
Performance Monitor

A program in execution is called a process. Even the simplest C program becomes
a process when it runs:

#include <stdio.h>
int main() {
    printf("Hello, World!\n");
    return 0;
}
The process, from its creation to completion, passes through various states. The
minimum number of states is five.
The names of the states are not standardized; however, a process is generally in
one of the following states during execution.
1. New
A process that has just been created but has not yet been admitted to the ready
queue is in the new state.
2. Ready
The processes which are ready for the execution and reside in the main memory
are called ready state processes. There can be many processes present in the ready
state.
3. Running
One of the processes from the ready state will be chosen by the OS depending
upon the scheduling algorithm. Hence, if we have only one CPU in our system,
the number of running processes for a particular time will always be one. If we
have n processors in the system then we can have n processes running
simultaneously.
4. Block or wait
From the Running state, a process can make the transition to the block or wait
state depending upon the scheduling algorithm or the intrinsic behavior of the
process.
When a process waits for a certain resource to be assigned or for input from
the user, the OS moves this process to the block or wait state and assigns the
CPU to other processes.
5. Completion or termination
When a process finishes its execution, it enters the termination state. The
entire context of the process (its Process Control Block) is deleted, and the
process is terminated by the operating system.
6. Suspend ready
A process in the ready state which is moved from main memory to secondary
memory due to a lack of resources (mainly primary memory) is said to be in the
suspend ready state.
If the main memory is full and a higher-priority process arrives for execution,
the OS has to make room for it in main memory by moving a lower-priority
process out to secondary memory. The suspend ready processes remain in
secondary memory until main memory becomes available.
7. Suspend wait
Instead of removing a process from the ready queue, it is better to remove a
blocked process which is waiting for some resource in main memory. Since it is
already waiting for a resource to become available, it is better for it to wait
in secondary memory and make room for the higher-priority process. These
processes resume execution once main memory becomes available and their wait
is finished.
Process Control Block
A Process Control Block is a data structure that contains information about a
process. The process control block is also known as a task control block, an
entry of the process table, etc.
It is very important for process management as the data structuring for processes is
done in terms of the PCB. It also defines the current state of the operating system.
The process control block stores many data items that are needed for efficient
process management. Some of these data items are explained below.
Process State
This specifies the process state i.e. new, ready, running, waiting or terminated.
Process Number
This specifies the process number, or process ID, that uniquely identifies the
process.
Program Counter
This contains the address of the next instruction that needs to be executed in the
process.
Registers
This specifies the registers that are used by the process. They may include
accumulators, index registers, stack pointers, general purpose registers etc.
List of Open Files
These are the different files that are associated with the process.
CPU Scheduling Information
The process priority, pointers to scheduling queues, etc. form the CPU
scheduling information that is contained in the PCB. This may also include any
other scheduling parameters.
Memory Management Information
The memory management information includes the page tables or the segment
tables depending on the memory system used. It also contains the value of the base
registers, limit registers etc.
I/O Status Information
This information includes the list of I/O devices used by the process, the list
of open files, etc.
Accounting information
The time limits, account numbers, amount of CPU used, process numbers etc. are
all a part of the PCB accounting information.
The process control block is kept in a memory area that is protected from the
normal user access. This is done because it contains important process information.
Some of the operating systems place the PCB at the beginning of the kernel stack
for the process as it is a safe location.
Process Scheduling
The process scheduling is the activity of the process manager that handles the
removal of the running process from the CPU and the selection of another process
on the basis of a particular strategy.
The OS maintains the following process scheduling queues:
Job queue − This queue keeps all the processes in the system.
Ready queue − This queue keeps a set of all processes residing in main
memory, ready and waiting to execute. A new process is always put in this
queue.
Device queues − The processes which are blocked due to unavailability of
an I/O device constitute this queue.
The OS can use different policies to manage each queue (FIFO, round robin,
priority, etc.). The OS scheduler determines how to move processes between the
ready and run queues; the run queue can have only one entry per processor core
on the system.
Schedulers
Schedulers are special system software which handle process scheduling in various
ways. Their main task is to select the jobs to be submitted into the system and to
decide which process to run. Schedulers are of three types −
Long-Term Scheduler
Short-Term Scheduler
Medium-Term Scheduler
Context Switching
Context switching involves storing the context or state of a process so that it
can be reloaded when required and execution can be resumed from the same point
as earlier. This is a feature of a multitasking operating system and allows a
single CPU to be shared by multiple processes. The steps involved are:
Save the context of the process that is currently running on the CPU. Update
the process control block and other important fields.
Move the process control block of the above process into the relevant queue
such as the ready queue, I/O queue etc.
Select a new process for execution.
Update the process control block of the selected process. This includes
updating the process state to running.
Update the memory management data structures as required.
Restore the context of the process that was previously running when it is
loaded again on the processor. This is done by loading the previous values of
the process control block and registers.
Threads in OS
Need for Threads
It takes far less time to create a new thread in an existing process than to
create a new process.
Threads can share common data, so they do not need to use inter-process
communication.
Context switching is faster when working with threads.
It takes less time to terminate a thread than a process.
Types of Threads
Kernel level thread.
User-level thread.
User - Level Threads
User-level threads are implemented by users, and the kernel is not aware of
their existence; it handles them as if they were single-threaded processes.
User-level threads are small and much faster than kernel-level threads. They
are represented by a program counter (PC), stack, registers and a small
process control block. Also, there is no kernel involvement in synchronization
for user-level threads.
User-level threads are easier and faster to create than kernel-level threads.
They can also be more easily managed.
User-level threads can be run on any operating system.
There are no kernel mode privileges required for thread switching in user-
level threads.
Kernel-Level Threads
Kernel-level threads are handled by the operating system directly, and thread
management is done by the kernel. The context information for the process as
well as the process's threads is all managed by the kernel. Because of this,
kernel-level threads are slower to manage than user-level threads.
A mode switch to kernel mode is required to transfer control from one thread
to another in a process.
Kernel-level threads are slower to create as well as manage as compared to
user-level threads.
Preemptive & Non-Preemptive
scheduling
Preemptive Scheduling
Preemptive scheduling is used when a process switches from running state to ready
state or from the waiting state to ready state. The resources (mainly CPU cycles)
are allocated to the process for a limited amount of time and then taken away, and
the process is again placed back in the ready queue if that process still has CPU
burst time remaining. That process stays in the ready queue till it gets its next
chance to execute.
Non-Preemptive Scheduling
In non-preemptive scheduling, once the CPU has been allocated to a process, the
process keeps the CPU until it terminates or switches to the waiting state; the
CPU is not forcibly taken away.

Scheduling Algorithms
There are various algorithms which are used by the operating system to schedule
the processes on the processor in an efficient way.
FCFS Scheduling
The first come, first served (FCFS) scheduling algorithm simply schedules jobs
according to their arrival time. The job which enters the ready queue first
gets the CPU first: the earlier a job's arrival time, the sooner it gets the
CPU. FCFS scheduling may cause the problem of starvation if the burst time of
the first process is the longest among all the jobs.
Advantages of FCFS
Simple
Easy
First come, First serve
Disadvantages of FCFS
1. The scheduling method is non-preemptive; the process runs to
completion.
2. Due to the non-preemptive nature of the algorithm, the problem of starvation
may occur.
3. Although it is easy to implement, it is poor in performance, since the
average waiting time is higher compared to other scheduling algorithms.
-------------------------------------------------------------------------------------------
Shortest Job First (SJF) Scheduling
Until now, we were scheduling processes according to their arrival time (in
FCFS scheduling). The SJF scheduling algorithm, however, schedules processes
according to their burst time.
In SJF scheduling, the process with the lowest burst time, among the list of
available processes in the ready queue, is going to be scheduled next.
However, it is very difficult to predict the burst time a process needs, hence
this algorithm is very difficult to implement in practice.
Advantages of SJF
1. Maximum throughput
2. Minimum average waiting and turnaround time
Disadvantages of SJF
1. It can cause starvation: a long process may be postponed indefinitely if
shorter processes keep arriving.
2. The burst time of a process cannot be known in advance; it can only be
estimated.
-------------------------------------------------------------------------------------------
Priority Scheduling
In priority scheduling, a priority is assigned to each process, and the process
with the highest priority among the available processes is scheduled next.
Processes with equal priority are scheduled on a first-come, first-served basis.
-------------------------------------------------------------------------------------------
Round robin Scheduling
This is the preemptive version of first come, first served scheduling. The
algorithm focuses on time sharing: every process gets executed in a cyclic way.
A certain time slice, called the time quantum, is defined in the system. Each
process present in the ready queue is assigned the CPU for that time quantum;
if the execution of the process completes during that time, the process
terminates, otherwise it goes back to the ready queue and waits for its next
turn to complete its execution.
Advantages
1. It is practical to implement in the system because it does not depend
on the burst time.
2. It doesn't suffer from the problem of starvation or the convoy effect.
3. All the jobs get a fair allocation of CPU.
Disadvantages
1. The higher the time quantum, the higher the response time in the system.
2. The lower the time quantum, the higher the context switching overhead in
the system.
3. Deciding a perfect time quantum is really a very difficult task in the system.
Time for a problem:
-------------------------------------------------------------------------------------------
Deadlock
Deadlock is a situation where a set of processes are blocked because each process
is holding a resource and waiting for another resource acquired by some other
process.
Consider an example where two trains are coming toward each other on a single
track: neither train can move once they are in front of each other. A similar
situation occurs in operating systems when two or more processes hold some
resources and wait for resources held by the other(s).
For example, Process 1 is holding Resource 1 and waiting for Resource 2, which
is acquired by Process 2, while Process 2 is waiting for Resource 1.
Necessary conditions for Deadlocks
1. Mutual Exclusion
Only one process can use a resource at a time; the resource cannot be
shared.
2. Hold and Wait
A process waits for some resources while holding another resource at the
same time.
3. No preemption
A resource cannot be forcibly taken away from a process; it is released
only voluntarily by the process holding it.
4. Circular Wait
All the processes must be waiting for the resources in a cyclic manner so
that the last process is waiting for the resource which is being held by the
first process.
Advantages of Deadlock Handling Approaches
They work well for processes which perform a single burst of activity.
No preemption is needed.
They are convenient when applied to resources whose state can be saved and
restored easily.
They are feasible to enforce via compile-time checks.
They need no run-time computation, since the problem is solved in system design.
Disadvantages of Deadlock
Paging
Paging is a memory management scheme that eliminates the need for contiguous
allocation of physical memory. The process of retrieving pages of a process
from secondary storage into main memory is known as paging. The basic purpose
of paging is to divide each process into pages; additionally, main memory is
split into frames. This scheme permits the physical address space of a process
to be non-contiguous.
The main idea behind paging is to divide each process into pages; main
memory is likewise divided into frames.
Pages of a process are brought into main memory only when they are
required; otherwise they reside in secondary storage.
Different operating systems define different frame sizes, but the size of
every frame must be equal. Since pages are mapped to frames in paging, the
page size needs to be the same as the frame size.
What is Page Fault in Operating System?
A page fault behaves much like an error. It happens when a program tries to
access a piece of memory that is not currently present in physical memory (main
memory). The fault tells the operating system to locate the data through its
virtual memory management and move it from secondary storage, such as a hard
disk, into main memory.
Page Hit
When the CPU attempts to obtain a needed page from main memory and the page
exists in main memory (RAM), it is referred to as a "PAGE HIT".
FIFO Page Replacement
This is the simplest page replacement algorithm. In this algorithm, the
operating system keeps track of all pages in memory in a queue, with the oldest
page at the front of the queue. When a page needs to be replaced, the page at
the front of the queue is selected for removal.
-------------------------------------------------------------------------------------------
LRU ( Least recently used )
This algorithm replaces the page which has not been referred to for the longest
time. It is the opposite of the optimal page replacement algorithm: it looks at
the past instead of the future.
The primary purpose of any page replacement algorithm is to reduce the number of
page faults. When a page replacement is required, the LRU page replacement
algorithm replaces the least recently used page with a new page. This algorithm is
based on the assumption that among all pages, the least recently used page will not
be used for a long time. It is a popular and efficient page replacement technique.
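A compact way to simulate LRU in Python is to keep resident pages in an ordered dictionary, with the least recently used page at the front. The reference string is the same hypothetical example used above:

```python
from collections import OrderedDict

def lru_faults(reference_string, capacity):
    """Count page faults under LRU replacement with `capacity` frames."""
    recent = OrderedDict()     # least recently used page sits at the left
    faults = 0
    for page in reference_string:
        if page in recent:
            recent.move_to_end(page)      # refresh recency on a hit
            continue
        faults += 1
        if len(recent) == capacity:
            recent.popitem(last=False)    # evict the least recently used page
        recent[page] = True
    return faults

print(lru_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2], 3))  # 9
```

On this reference string LRU causes 9 faults where FIFO causes 10, illustrating why tracking recency usually pays off.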
-------------------------------------------------------------------------------------------
Optimal Page Replacement
This algorithm replaces the page that will not be referenced for the longest
time in the future. Although it cannot be implemented in practice, since it
requires knowing future references, it serves as a benchmark: other algorithms
are compared against it in terms of optimality.
Optimal page replacement is the best page replacement algorithm as this algorithm
results in the least number of page faults. In this algorithm, the pages are replaced
with the ones that will not be used for the longest duration of time in the future. In
simple terms, the pages that will be referred farthest in the future are replaced in
this algorithm.
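Because a simulation has the whole reference string in hand, the optimal policy is easy to replay offline. The following sketch uses the same hypothetical reference string as the earlier examples:

```python
def optimal_faults(reference_string, capacity):
    """Count page faults under the (offline) optimal replacement policy."""
    frames = []
    faults = 0
    for i, page in enumerate(reference_string):
        if page in frames:
            continue                       # page hit
        faults += 1
        if len(frames) < capacity:
            frames.append(page)
            continue
        # Evict the resident page whose next use is farthest in the future
        # (or that is never used again).
        future = reference_string[i + 1:]
        victim = max(frames,
                     key=lambda p: future.index(p) if p in future else float("inf"))
        frames[frames.index(victim)] = page
    return faults

print(optimal_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2], 3))  # 7
```

On this string the optimal policy causes 7 faults, beating both FIFO (10) and LRU (9), which matches its role as the lower-bound benchmark.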
-------------------------------------------------------------------------------------------
Basic concept of File
Sequential Access
Most operating systems access files sequentially; in other words, most files
need to be read from beginning to end. In sequential access, the OS reads the
file word by word. A pointer is maintained which initially points to the base
address of the file. When the user reads the first word, the pointer returns
that word and advances by one word, and the process continues until the end of
the file.
Modern systems also provide direct access and indexed access, but sequential
access remains the most widely used method, since most files, such as text
files, audio files, and video files, are naturally read in order.
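The advancing read pointer can be sketched with a small Python class; the class and its sample contents are illustrative, not part of any real file system API:

```python
# Rough sketch of sequential access: a read pointer advances through the file.
class SequentialFile:
    def __init__(self, words):
        self.words = words     # file contents, word by word
        self.pointer = 0       # initially points to the base of the file

    def read_next(self):
        word = self.words[self.pointer]
        self.pointer += 1      # pointer advances by one word per read
        return word

f = SequentialFile(["alpha", "beta", "gamma"])
print(f.read_next())  # alpha
print(f.read_next())  # beta
```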
Direct Access
Direct access is mostly required by database systems. In most cases we need
filtered information from the database, and sequential access would be very
slow and inefficient.
Suppose every storage block holds 4 records and we know that the record we need
is stored in the 10th block. Sequential access is a poor fit here, because it
would have to traverse all the preceding blocks in order to reach the needed
record.
Direct access retrieves the required record immediately, although the operating
system has to perform some extra work, such as computing the desired block
number. This method is generally used in database applications.
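The block-number computation mentioned above reduces to integer division. The sketch below assumes 4 records per block, matching the example, with 0-based record and block numbers:

```python
# Hypothetical fixed-size layout: 4 records per block, as in the example above.
RECORDS_PER_BLOCK = 4

def block_of(record_number):
    """Relative block that holds a given record (0-based numbering)."""
    return record_number // RECORDS_PER_BLOCK

def offset_in_block(record_number):
    """Slot of the record within its block."""
    return record_number % RECORDS_PER_BLOCK

# Record 38 lives in block 9 (the 10th block), at slot 2 within it;
# direct access can seek straight to that block.
print(block_of(38), offset_in_block(38))  # 9 2
```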
Allocation Methods
The allocation methods define how the files are stored in the disk blocks. There are
three main disk space or file allocation methods.
Contiguous Allocation
Linked Allocation
Indexed Allocation
Contiguous Allocation
In this scheme, each file occupies a contiguous set of blocks on the disk. For
example, if a file requires n blocks and is given a block b as the starting location,
then the blocks assigned to the file will be: b, b+1, b+2,……b+n-1. This means
that given the starting block address and the length of the file (in terms of blocks
required), we can determine the blocks occupied by the file.
The directory entry for a file with contiguous allocation contains the address
of the starting block and the length of the area allocated to the file.
The file ‘mail’ in the following figure starts from block 19 with length = 6
blocks. Therefore, it occupies blocks 19, 20, 21, 22, 23 and 24.
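The b, b+1, …, b+n-1 rule can be expressed directly in Python; the values below reproduce the ‘mail’ example from the figure:

```python
def contiguous_blocks(start, length):
    """Blocks occupied by a file under contiguous allocation: b .. b+n-1."""
    return list(range(start, start + length))

# The 'mail' example: starting block 19, length 6 blocks.
print(contiguous_blocks(19, 6))  # [19, 20, 21, 22, 23, 24]
```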
Linked List Allocation
In this scheme, each file is a linked list of disk blocks which need not be
contiguous. The disk blocks can be scattered anywhere on the disk.
The directory entry contains a pointer to the starting and the ending file block.
Each block contains a pointer to the next block occupied by the file.
The file ‘jeep’ in following image shows how the blocks are randomly distributed.
The last block (25) contains -1 indicating a null pointer and does not point to any
other block.
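Following the chain of pointers can be sketched with a toy disk modeled as a dictionary. The block numbers below are a made-up chain ending at block 25 with a -1 null pointer, echoing the ‘jeep’ description:

```python
# Toy disk: each block stores (data, pointer to the next block of the file);
# -1 marks the end of the chain, like the null pointer in block 25.
disk = {
    9:  ("j1", 16),
    16: ("j2", 1),
    1:  ("j3", 10),
    10: ("j4", 25),
    25: ("j5", -1),
}

def read_linked(start):
    """Follow block pointers from the starting block and list the blocks visited."""
    blocks = []
    block = start
    while block != -1:
        blocks.append(block)
        _, block = disk[block]
    return blocks

print(read_linked(9))  # [9, 16, 1, 10, 25]
```

Note that reaching the i-th block always requires walking the i blocks before it, which is why linked allocation suits sequential access better than direct access.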
Indexed Allocation
In this scheme, a special block known as the Index block contains the pointers to
all the blocks occupied by a file. Each file has its own index block. The ith entry in
the index block contains the disk address of the ith file block. The directory entry
contains the address of the index block as shown in the image:
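Indexed allocation trades the pointer chain for one lookup table. The sketch below uses a hypothetical index block at a made-up disk address; none of the numbers come from the notes:

```python
# Toy disk: block 19 is the file's index block, whose i-th entry holds the
# disk address of the i-th file block; the remaining entries are data blocks.
disk = {
    19: [9, 16, 1, 10, 25],   # hypothetical index block
    9: "b0", 16: "b1", 1: "b2", 10: "b3", 25: "b4",
}

def read_block(index_block, i):
    """Fetch the i-th block of a file via its index block: two disk lookups."""
    return disk[disk[index_block][i]]

print(read_block(19, 2))  # b2
```

Unlike linked allocation, any block is reachable in a constant number of lookups, which is why indexed allocation supports direct access well.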
End.