Fundamentals of OS
PGDCA 104
BLOCK 1:
INTRODUCTION TO
OPERATING SYSTEMS
Author
Er. Nishit Mathur
Language Editor
Prof. Jaipal Gaikwad
Acknowledgment
Every attempt has been made to trace the copyright holders of material reproduced
in this book. Should an infringement have occurred, we apologize for the same and
will be pleased to make necessary correction/amendment in future edition of this
book.
The content is developed by taking reference of online and print publications that
are mentioned in Bibliography. The content developed represents the breadth of
research excellence in this multidisciplinary academic field. Some of the
information, illustrations and examples are taken "as is" and as available in the
references mentioned in the Bibliography, for academic purposes and better
understanding by the learner.
ROLE OF SELF INSTRUCTIONAL MATERIAL IN DISTANCE LEARNING
Contents
UNIT 1
BASICS OF OS
UNIT 2
TYPES OF OPERATING SYSTEM
UNIT 3
BATCH OPERATING SYSTEM
BLOCK 1: INTRODUCTION TO
OPERATING SYSTEMS
Block Introduction
An operating system is essential software that makes a computer run. It
manages all of the computer's processes and drives the hardware. It lets you
communicate with the computer without knowing its machine language. Your
computer's operating system manages all software and hardware functions. The
main idea of an operating system is to coordinate all processes and link them
with the central processing unit (CPU), memory and storage.
In this block, we will detail the basics of operating systems and the
different types of operating system. The block will focus on the study and
concepts that lead to an explanation of operating system structure. The students
will be given an idea of the batch processing system.
In this block, the student will learn and understand the basics of an
operating system and its functions. Different types of operating system in use
will be demonstrated to the student both practically and theoretically.
Block Objective
After learning this block, you will be able to understand:
Block Structure
Unit 1: Basics of OS
UNIT 1: BASICS OF OS
Unit Structure
1.0 Learning Objectives
1.1 Introduction
1.8 Glossary
1.9 Assignment
1.10 Activities
1.1 Introduction
In the 1960s, an operating system was understood as the software that handles
the hardware. Today, we see an operating system as the set of programs that
makes the hardware work. Generally, an operating system is a set of programs
that facilitates control of a computer. There are different types of operating
system, such as UNIX, MS-DOS, MS-Windows, Windows NT, and VM.
The operating system is system software that is stored on a storage device
such as a hard disk, CD-ROM or floppy disk. When a computer is switched on, the
operating system is transferred from the storage device into main memory by the
bootstrap program held in ROM.
The work of the operating system involves:
Managing Input/output
Managing Files
This part of operating system is loaded into main memory when required. It
includes:
WINDOWS-NT
a. System software
d. All of above
2. Which among the following is not a function of an operating system?
a. recognize input from keyboard
c. loads keyboard
d. track of files
In the early days, users handed their jobs to professional operators and
returned later to collect the results once a particular job had run. This was
quite inconvenient for users, as expensive computers were tied up in this style
of job processing.
With time-sharing operating systems, many users shared the computer, which
spent only a fraction of a second on each job before moving on to the next. A
fast computer could work on many users' jobs at the same time, creating the
illusion that it was fully attentive to each of them.
In the mid-1970s, personal computers such as the Altair 8800 became
affordable to individuals. In early 1975 the Altair was sold to hobbyists in
kit form. It came without an operating system, because its only input and
output devices were toggle switches and light-emitting diodes.
After some time, people started connecting terminals and floppy disk drives
to Altairs. In 1976, Digital Research introduced the CP/M operating
system for such computers. CP/M, and later DOS, had CLIs similar
to those of time-shared operating systems, but the computer served only a single user.
With the success of the Apple Macintosh in 1984, that system pushed
the state of the hardware art, even though early Macs were restricted to small
black-and-white displays. As hardware continued to develop, colour Macs
appeared, and soon Microsoft introduced Windows as its GUI operating
system.
The Macintosh operating system was based on decades of
research on graphically oriented personal computer operating systems and
applications. Computer applications today require a single machine to perform
many operations, and the applications may compete for the resources of the
machine. This demands a high degree of coordination, which is handled by the
system software known as an operating system.
The internal part of the OS is often called the kernel, which comprises:
File Manager
Device Drivers
Memory Manager
Scheduler
Dispatcher
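To make the scheduler and dispatcher roles above concrete, here is a minimal round-robin sketch in Python. It is illustrative only: the job names, the time quantum and the `ready_queue` structure are assumptions for the example, not part of the text.

```python
from collections import deque

def round_robin(jobs, quantum):
    """Simulate a dispatcher giving each job a fixed time slice (quantum).
    jobs: dict mapping job name -> remaining CPU burst time."""
    ready_queue = deque(jobs.items())   # the scheduler's ready queue
    order = []                          # dispatch order, for illustration
    while ready_queue:
        name, remaining = ready_queue.popleft()
        order.append(name)              # dispatcher hands the CPU to this job
        if remaining > quantum:         # job not finished: requeue the rest
            ready_queue.append((name, remaining - quantum))
    return order

# Example: three jobs with burst times 3, 5 and 2, quantum 2
print(round_robin({"A": 3, "B": 5, "C": 2}, 2))  # ['A', 'B', 'C', 'A', 'B', 'B']
```

The scheduler decides *which* job runs next (here, simple queue order); the dispatcher is the step that actually hands the CPU over.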
Fig 1.3 Interfaces of OS
a. 1980
b. 1970
c. 1985
d. 1955
1.4 Operating System Structure: Monolithic and Layered
An operating system's architecture usually follows the separation-of-concerns
principle. This principle guides the structuring of the operating system
into relatively independent parts, each providing basic independent features,
keeping a complicated design in manageable condition.
Monolithic Systems
CP/M and DOS are examples of monolithic operating systems
that share a common address space with the applications. In CP/M, the
16-bit address space begins with system variables and the application area,
and ends with the three parts of the OS, which are known as:
CCP (Console Command Processor)
BDOS (Basic Disk Operating System)
BIOS (Basic Input/Output System)
a. CCP c. BIOS
b. BDOS d. DOS
hardware as well as associated maintenance costs, and reduces power and
cooling demands. Virtualization also eases management, since virtual hardware
does not fail. Administrators can take advantage of virtual environments to
simplify backups, disaster recovery, new deployments and elementary
system administration tasks.
Client server
the services of the corresponding server program. A special server known as a
daemon may be launched to await client requests. In marketing, the term
client/server was once used to distinguish distributed computing by personal
computers (PCs) from the monolithic, centralized computing model used by
mainframes. This distinction has largely disappeared, however, as mainframes
and their applications have also turned to the client/server model and become
part of network computing.
a. Windows
b. Linux
c. DOS
d. all
2. In a Client/server program:
3. Input/output
4. Execution of applications
5. Files
6. Information management
Answers: (1-b)
Answers: (1-d)
1.8 Glossary
1. Shell - It provides the interface with the user
1.9 Assignment
Write a note on the client operating system.
1.10 Activities
UNIT 2: TYPES OF OPERATING SYSTEM
Unit Structure
2.0 Learning Objectives
2.1 Introduction
2.5 Glossary
2.6 Assignment
2.7 Activities
Distributed System
2.1 Introduction
There are many operating systems that have been built to carry out the
operations demanded by the user. There are many operating systems that have
the ability to serve the requests they receive. An operating system can perform
a single operation, or multiple operations at a time. Hence there are numerous
categories of operating system, organized by their working mechanisms.
1) Serial Processing
2) Batch Processing
3) Multi-Programming
6) Multiprocessing
Hard Real-Time System: In a hard real-time system, timing is fixed and no
deadline can be changed; the CPU must process the data as soon as it is
entered.
Soft Real-Time System: In a soft real-time system, some deadlines can shift;
after the command is given to the CPU, the CPU may perform the operation a
little later.
2.2.2 Multi-user System
As we know, in a batch processing system multiple jobs are handled by the
system. The system first composes a batch and then executes all the jobs saved
in that batch. The main difficulties are, first, that if a job needs an input
or output operation it cannot be serviced at that time, and second, that time
is wasted while the batch is being composed, during which the CPU remains
idle.
If we want to take data from another computer, we use a distributed
processing system. We can also insert and remove data from one location to
another. In this system, data is shared between many users, and all the input
and output devices can be accessed by multiple users.
Check your progress 1
1. In a hard real-time system, time:
a. varies c. zero
2. Soft Real Time System is a part of Real Time O/S where each moment
changes
3. Multi programming is a programming technique where many
programs perform
2.5 Glossary
1. Real-Time System - an operating system that is used to obtain timely
responses
2. Soft Real-Time System - a type of real-time OS where some moments can
change
3. Multiprogramming - a programming technique in which many programs run
concurrently
2.6 Assignment
Write a note on the batch processing operating system.
2.7 Activities
Explain the cycle of operation of a real-time operating system.
UNIT 3: BATCH OPERATING SYSTEM
Unit Structure
3.0 Learning Objectives
3.1 Introduction
3.3.1 Jobs
3.6 Glossary
3.7 Assignment
3.8 Activities
3.1 Introduction
Batch is the term given to the work of running similar jobs continuously,
again and again, with the difference that the input data (and usually the
output file) changes for every iteration of the job.
Images
Batch processing is often used to perform various operations with digital
images such as resize, convert, watermark, or otherwise edit image files.
Conversions
Batch window
20
Numerous untimely computer systems granted sole batch processing; hence
Batch Operating
jobs could be plunge any time within a 24-hour day. With the accomplishment of System
assignment processing the online approaches might singular is required from 9:00
a.m. to 5:00 p.m., abstracting dual shifts obtainable for batch work, in this casket
the batch window would be sixteen hours. The difficulty is not normally that the
computer system is inadequate of admitting combined online along with batch
work, although that the batch systems normally constrain approach to data in an
integrated state, released from online updates until the batch processing is
complete.
21
Introduction input. A program acquires a portion of data files as input, processes the data, as
to Operating well as brings about a set of output facts files. This operating arrangement is
Systems
identified as “batch processing” since the input data are acquired into batches or
sets of records as well as each batch endures processed as a unit. The output exists
another batch that can be reused for assessment.
Batch processing has been associated with mainframe computers since the
earliest decades of electronic computing in the 1950s. There was a variety of
reasons why batch processing dominated early computing. One reason is that the
most pressing business problems, for reasons of profitability and
competitiveness, were primarily accounting problems, such as billing. Billing
can conveniently be performed as a batch-oriented business process, and
practically every business must bill, reliably and on time. Furthermore, every
computing resource was costly, so sequential submission of batch jobs on
punched cards matched the resource constraints and technology evolution of the
time. Later, interactive sessions with text-based computer terminal interfaces
or graphical user interfaces became more common. In addition, computers were
initially not even capable of having multiple programs loaded into main
memory.
integrated with grid computing solutions to spread a batch job over a large
number of processors, though there are significant programming challenges in
doing so. High-volume batch processing places particularly heavy demands on
system and application architectures as well. Architectures that feature
strong input/output performance and vertical scalability, including modern
mainframe computers, tend to provide better batch performance than
alternatives.
Batch processing is most suitable for tasks where a large amount of data has
to be processed on a regular basis.
Examples
A. payroll systems
B. examination report card systems
Advantages
Once the data are submitted for processing, the computer may be left
running without human interaction.
The computer is only used for a certain period of time for the batch job.
Jobs can be scheduled for a time when the computer is not busy.
Disadvantages
3.3.1 Jobs
In batch processing, job contains relevantly common group of processing
along with calculation actions that utilizes small or very less cooperation among
you along with the computer system. When a batch job is acknowledged, that time
the job will primarily enter in a job queue where it will functionally halt till the
system captures ready to process the next job. Here the system formed off its
processing mechanism of job when it acquires the job from the job sequence. A
batch job is put in a job queue by:
A job queue holds several jobs that are waiting for the system to process
them. Your job waits while the system processes jobs that other users
submitted before yours, or that have a higher priority. When system resources
are available, the system processes your job.
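The queueing behaviour just described (jobs wait; higher-priority jobs run first; equal priorities run in submission order) can be sketched with Python's standard `heapq` module. The class name, job names and priority values are illustrative assumptions.

```python
import heapq
import itertools

class JobQueue:
    """Jobs wait in a priority queue; a lower number means higher priority.
    The counter preserves submission order among equal priorities."""
    def __init__(self):
        self._heap = []
        self._seq = itertools.count()
    def submit(self, job, priority=5):
        heapq.heappush(self._heap, (priority, next(self._seq), job))
    def next_job(self):
        priority, _, job = heapq.heappop(self._heap)  # system takes next job
        return job

q = JobQueue()
q.submit("your_report", priority=5)
q.submit("other_user_job", priority=5)   # submitted later, same priority
q.submit("urgent_backup", priority=1)    # higher priority jumps the queue
print([q.next_job() for _ in range(3)])
# ['urgent_backup', 'your_report', 'other_user_job']
```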
Each of the Excel files from a well plate reader looks like the one shown in
Figure 3.2. Five replicate measurements of specific binding are shown in
columns C through G. For tutorial purposes, the free radioligand
concentration has been placed in column B. The macro has been written to fit
an equation to two columns of data, so for this example we will ignore the
replicates. It is easy to change the macro to include the row-wise replicate
format in the curve fit.
You can then select the appropriate region of the Excel file containing the
data to fit. This is shown in Figure 3.3 for the data in Figure 3.2.
The desired function is chosen to fit each data set, and a simple scatter
plot is used to display the results. Note that every equation in the SigmaPlot
curve fit library is listed in the dropdown box in Figure 3.4. This is easy to
do because SigmaPlot Automation allows you to search a notebook (in this case
the standard.jfl notebook containing all the curve fit equations) for fit
objects (or objects of any type) and create a list of them. If you wish, you
can substitute in this macro a different notebook with another group of fit
equations. The new equations will then appear in the dropdown list. If a
user-defined equation is added to standard.jfl then it will appear in the
list. The batch process results are then saved in a notebook. You may browse
to select the suitable file.
For the five files shown in Figure 3.5, the notebook contains five sections
each with worksheets with individual data sets, scatter plots of the data, fit results
and detailed curve fit reports.
a. instructions c. rules
b. commands d. all
27
Introduction 2. In a program, the data files acts as:
to Operating
Systems a. input c. processes
a. analog c. mainframe
b. hybrid d. all
Batch processing has been associated with mainframe computers since the
earliest decades of electronic computing.
Answers: (1- b)
3.6 Glossary
1. Batch - a term describing the work of running particular jobs
continuously.
3.7 Assignment
Write short notes on batch data.
3.8 Activities
Study the different techniques of Batch jobs.
Block Summary
In this block, we have studied the basics of operating systems and the
different types of operating system. We have an idea of the necessary
functions and advantages of using an operating system. Many operating systems
have been built to carry out the operations demanded by the user. Batch is the
term given to the work of running similar jobs continuously, again and again,
with the difference that the input data (and usually the output file) changes
for every iteration of the job.
The block detailed the concepts that explain the use and structure of
operating systems. In this block students have been given an idea of the batch
processing system. The block focuses on the practical implications of
operating systems.
Block Assignment
Short Answer Questions
1. What is an Operating System?
Enrolment No.
1. How many hours did you need for studying the units?
Unit No 1 2 3 4
Nos of Hrs
2. Please give your reactions to the following items based on your reading of
the block:
………………………………………………………………………………………………
………………………………………………………………………………………………
………………………………………………………………………………………………
………………………………………………………………………………………………
………………………………………………………………………………………………
………………………………………………………………………………………………
………………………………………………………………………………………………
………………………………………………………………………………………………
……………………………………………………………………………………………
Education is something
which ought to be
brought within
the reach of every one.
- Dr. B. R. Ambedkar
BLOCK 2:
MEMORY MANAGEMENT
AND PROCESS
SCHEDULING
Author
Er. Nishit Mathur
Language Editor
Prof. Jaipal Gaikwad
Contents
UNIT 1
MEMORY MANAGEMENT
UNIT 2
PROCESS SCHEDULING
BLOCK 2: MEMORY MANAGEMENT
AND PROCESS
SCHEDULING
Block Introduction
In this block, we will discuss in detail the basics of memory management
and process scheduling in an operating system. The block will focus on the
study and concepts of virtual memory, paging and segmentation. The students
will be given an idea of cache memory and virtual processors.
In this block, the student will learn and understand the basics of memory
management and its techniques. The concepts of the memory hierarchy, process
states and the interrupt mechanism will also be explained to the students. The
working of page replacement algorithms and their techniques will be
demonstrated practically to the student.
Block Objective
After learning this block, you will be able to understand:
Basics of the interrupt mechanism.
Block Structure
Unit 1: Memory Management
UNIT 1: MEMORY MANAGEMENT
Unit Structure
1.0 Learning Objectives
1.1 Introduction
1.11 Glossary
1.12 Assignment
1.13 Activities
Associative memory
1.1 Introduction
Memory management is a subsystem that forms an important part of an
operating system. Throughout the history of computing there has been a
continuous need for more memory in computer systems. Strategies have been
developed to overcome this limitation, and the most successful of these is
virtual memory. Virtual memory makes the system appear to have more memory
than it actually has, by sharing it between competing processes as they need
it. With earlier computing it was found that:
A program must be brought into memory and placed within a process for it to
run.
A single process is taken from the input queue and loaded into memory for
execution.
In computers, the address space starts at 00000, but the first address of
the user process need not be 00000.
During compile time and load time, the address binding schemes make logical
and physical addresses identical; they differ under the execution-time
address-binding scheme, where the MMU (Memory Management Unit) performs the
translation of such addresses.
In fig 1.1, each process in the system has its own virtual address space.
These virtual address spaces are entirely separate from one another, so a
process running one application cannot affect another. Additionally, the
hardware virtual memory mechanisms allow regions of memory to be protected
against writing. This protects code as well as data from being overwritten by
rogue applications.
information that is kept in a set of tables in order to keep the operating
system safe.
To make this simpler, both virtual and physical memory are divided into
handy-sized chunks known as pages. These chunks are all of identical size; if
they were not, the system would be very hard to administer. Linux on Alpha AXP
systems uses 8 KB pages, and on Intel x86 systems it uses 4 KB pages. Each of
these pages is given a unique number, the page frame number (PFN).
• Offset
Virtual memory gives each process its own virtual address space, but there
are times when processes need to share memory. The processor uses the virtual
page frame number as an index into the process's page table to retrieve its
page table entry. If the entry is valid, the processor takes the physical page
frame number from that entry. If the entry is not valid, the process has tried
to access a non-existent area of its virtual memory. In such conditions, the
processor cannot resolve the address and passes control to the operating
system so that it can fix things up.
Logical and physical addresses are the same in compile-time and load-time
address-binding schemes; logical (virtual) and physical addresses differ in
the execution-time address-binding scheme.
Check your progress 1
a. 8 c. 32
b. 16 d. 64
In fig 1.2, the process pages are loaded into particular memory frames.
These pages may be loaded into neighbouring frames or into non-neighbouring
memory, as highlighted in figure 1.2. External fragmentation is reduced
because the processes are placed in separate holes.
Page Allocation
1) Best-fit Policy: allocates the hole in which the process fits most
tightly, i.e. where the difference between the hole size and the process size
is the lowest.
2) First-fit Policy: allocates the first hole found that is big enough to
fit the new process.
3) Worst-fit Policy: allocates the largest hole, which leaves the maximum
amount of unused space.
Of the three strategies listed above, best-fit and first-fit are better than
worst-fit. Both best-fit and first-fit strategies are efficient in terms of
time and storage utilization. Best-fit leaves the minimum leftover space,
creating the smallest holes, which are rarely usable afterwards. First-fit
has the least overhead, because it is the simplest strategy to apply.
Worst-fit, on the other hand, leaves large holes that may still accommodate
other processes. Thus all these policies have their own merits and demerits.
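As a sketch of the three placement policies just described (the hole sizes and the request size below are made-up values for illustration):

```python
def choose_hole(holes, size, policy):
    """Return the index of the hole chosen for a request of `size`.
    holes: list of free hole sizes, in address order; None if nothing fits."""
    candidates = [(h, i) for i, h in enumerate(holes) if h >= size]
    if not candidates:
        return None
    if policy == "first-fit":          # first hole big enough
        return min(candidates, key=lambda c: c[1])[1]
    if policy == "best-fit":           # tightest fit, smallest leftover
        return min(candidates)[1]
    if policy == "worst-fit":          # largest hole, biggest leftover
        return max(candidates)[1]
    raise ValueError(policy)

holes = [30, 10, 50, 18]   # free memory holes, in address order
print(choose_hole(holes, 15, "first-fit"))  # 0 (the hole of size 30)
print(choose_hole(holes, 15, "best-fit"))   # 3 (size 18, leftover only 3)
print(choose_hole(holes, 15, "worst-fit"))  # 2 (size 50, largest leftover)
```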
Each logical address in the paging scheme is divided as follows:
From figure 1.3, the page number takes 5 bits, with a range from 0 to 31,
i.e. 2^5 - 1. Likewise, an offset of 11 bits has a range from 0 to 2047, which
is 2^11 - 1. In total, the paging scheme uses 32 pages, each having 2048
locations. The table that holds the virtual-to-physical address translations
is called the page table. Since the displacement is fixed, only the
translation of the virtual page number to the physical page is needed, as can
be seen in figure 1.4.
The page number is used as an index into the page table, which contains the
base address of each corresponding physical memory page. This arrangement
lowers the dynamic relocation effort, as shown by the paging hardware support
in figure 1.5.
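The split into a 5-bit page number and an 11-bit offset, followed by the page-table lookup, can be sketched as follows. The page-table contents here are illustrative assumptions, not from the text.

```python
OFFSET_BITS = 11              # 2**11 = 2048 locations per page
PAGE_SIZE = 1 << OFFSET_BITS

def translate(virtual_addr, page_table):
    """Split a virtual address into (page number, offset), look the page
    number up in the page table, and rebuild the physical address."""
    page = virtual_addr >> OFFSET_BITS        # top bits: virtual page number
    offset = virtual_addr & (PAGE_SIZE - 1)   # low 11 bits: unchanged offset
    frame = page_table[page]                  # index into the page table
    return (frame << OFFSET_BITS) | offset

# Page table: virtual page 0 -> frame 3, virtual page 1 -> frame 7
page_table = {0: 3, 1: 7}
print(translate(100, page_table))        # page 0, offset 100: 3*2048+100 = 6244
print(translate(2048 + 5, page_table))   # page 1, offset 5:   7*2048+5  = 14341
```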
Paging Address Translation by Direct Mapping
Consider the case of direct mapping shown in fig 1.5, where the page table
maps directly to the physical memory page. The drawback here is that the speed
of translation drops, because the page table is kept in primary storage and is
considerably large; this increases the instruction execution time and lowers
the system speed. To overcome this, extra hardware such as registers and
buffers is used.
This approach is based on the use of dedicated registers with high speed and
efficiency. These small, fast-lookup caches hold the page table entries in
content-addressable associative storage, improving the speed of lookup. They
are known as associative registers or Translation Look-aside Buffers (TLBs).
Each register consists of two entries:
In paging hardware there is also a protection mechanism. Each frame entry in
the page table has an associated protection bit. This bit shows whether the
page is read-only or read-write. Sharing of code and data takes place when two
page table entries in different processes point to the same physical page, so
that the processes share that memory. If one process writes the data, the
second process sees the changes. Such an arrangement is quite efficient for
communication. Sharing must be controlled in order to protect the data in one
process from modification or access by a second process. Such programs are
kept independent, with procedures and data separated; procedures and data that
are pure/reentrant code can be shared. Re-entrant code cannot modify itself
and must ensure that it keeps a separate copy of per-process global variables.
Modifiable data and procedures cannot be shared without the intervention of
concurrency controls. Non-modifiable procedures are sometimes called pure
procedures or reentrant code. As an example, in such a system only a single
copy of the editor or compiler code need be kept in memory, and all editor or
compiler processes execute with the help of that single copy of the code,
which improves memory utilization.
Advantages
1. The virtual address space can be greater than the main memory size, i.e.,
a program with a large logical address space can execute in a smaller physical
address space.
Disadvantages
Segmentation
In general, a user or a programmer prefers to view system memory as a
collection of variable-sized segments rather than as a linear array of words.
Segmentation is a memory management scheme that supports this view of memory.
Principles of Operation
Address Translation
Check your progress 2
1. The page sizes or frame sizes are in the range of:
a. maximum
b. lowest
c. half
d. none of these
User-written error handling routines are used only when an error occurs in
the data or computation.
Many tables are assigned a fixed amount of address space even though only
a small amount of the table is actually used.
Fewer I/O operations would be needed to load or swap each user program into
memory.
A program would no longer be constrained by the amount of physical memory
that is available.
Each user program could take less physical memory; more programs could be
run at the same time, with a corresponding increase in CPU utilization and
throughput.
When a process is to be swapped in, the pager guesses which pages will be
used before the process is swapped out again. Instead of swapping in a whole
process, the pager brings only those necessary pages into memory. It thus
avoids reading into memory pages that will not be used anyway, reducing the
swap time as well as the amount of physical memory needed.
Hardware support is essential to distinguish between pages that are in
memory and pages that are on disk, using the valid-invalid bit scheme, in
which valid and invalid pages can be identified by checking the bit. Marking a
page invalid has no effect if the process never attempts to access that page.
While the process executes and accesses pages that are memory resident,
execution proceeds normally.
a. processes c. instructions
a. Paging c. Segmentation
1.5 Page Replacement Algorithms
Page replacement algorithms are the mechanisms exercising which
Operating System determines which memory pages to exchange out, write to disk
when a page of memory expects to be assigned. Paging occurs whenever a page
fault arises also a free page cannot be facilitated for allocation approach
accounting to analysis that pages are not obtainable or the number of free pages is
shorten than necessary pages.
When the page that was selected for replacement and paged out is referenced again, it has to be read in from disk, and this requires waiting for I/O completion. This determines the quality of the page replacement algorithm: the less time spent waiting for page-ins, the better the algorithm. A page replacement algorithm looks at the limited information about page accesses provided by the hardware and tries to choose which pages should be replaced so as to minimize the total number of page faults, while balancing this against the costs in primary storage and processor time of the algorithm itself. There are many different page replacement algorithms. We evaluate an algorithm by running it on a particular string of memory references and counting the number of page faults.
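As a sketch of how such an evaluation works, the following counts page faults for the simple FIFO (first-in, first-out) policy; the reference string and frame count are illustrative examples, not taken from the text:

```python
from collections import deque

def fifo_page_faults(reference_string, num_frames):
    """Count page faults for FIFO replacement on a reference string."""
    frames = deque()          # oldest resident page at the left
    faults = 0
    for page in reference_string:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.popleft()   # evict the page resident longest
            frames.append(page)
    return faults

# Example: a short reference string with 3 frames.
print(fifo_page_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3], 3))   # 9 page faults
```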
RAND (Random)
Replace a randomly chosen page.
OPT (Optimal)
Replace the page that will not be referenced until furthest in the future, or not at all.
FIFO (First-In, First-Out)
Choose the page that has been in main memory the longest.
Problem: even though a page has been resident for a long time, it may still be in active use.
Windows NT and Windows 2000 use this algorithm as a local page replacement algorithm (explained separately), with the pool approach (described in more detail separately).
Maintain a pool of the pages that have been marked for removal.
Manage the pool in the same way as the rest of the pages.
LRU (Least Recently Used)
Select the page that was last referenced the longest time ago.
LRU can be maintained with a list called the LRU stack or the paging stack (a data structure). In the LRU stack, the first entry describes the page referenced least recently; the last entry describes the page referenced most recently.
This is too slow to be used in practice for managing the page table, but many systems use approximations to LRU.
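The LRU stack idea can be sketched as follows; Python's OrderedDict stands in for the paging stack (an illustrative simulation, not a real page-table implementation):

```python
from collections import OrderedDict

def lru_page_faults(reference_string, num_frames):
    """Exact LRU: the OrderedDict plays the role of the LRU stack --
    least recently used page at the front, most recent at the back."""
    stack = OrderedDict()
    faults = 0
    for page in reference_string:
        if page in stack:
            stack.move_to_end(page)        # page becomes most recently used
        else:
            faults += 1
            if len(stack) == num_frames:
                stack.popitem(last=False)  # evict the least recently used
            stack[page] = True
    return faults

print(lru_page_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3], 3))   # 8 page faults
```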
As an approximation to LRU, choose one of the pages that has not been used recently (as opposed to identifying exactly which one has not been used for the longest amount of time).
Keep one bit, called the "used bit" or "reference bit", where 1 => used recently and 0 => not used recently.
Most variations maintain a scan pointer and pass through the page frames one by one, in some order, looking for a page that has not been used recently.
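This scan-pointer scheme is commonly known as the clock (or second-chance) algorithm. A minimal sketch, with an illustrative frame count:

```python
def clock_page_faults(reference_string, num_frames):
    """Clock approximation to LRU: each frame has a 'used bit'; a scan
    pointer sweeps the frames, clearing used bits until it finds a
    page whose used bit is 0, which becomes the victim."""
    frames = [None] * num_frames
    used = [0] * num_frames
    hand = 0                               # the scan pointer
    faults = 0
    for page in reference_string:
        if page in frames:
            used[frames.index(page)] = 1   # hardware would set this on access
            continue
        faults += 1
        while used[hand] == 1:             # recently used pages get a second chance
            used[hand] = 0
            hand = (hand + 1) % num_frames
        frames[hand] = page
        used[hand] = 1
        hand = (hand + 1) % num_frames
    return faults

print(clock_page_faults([1, 2, 3, 4, 1, 2], 3))   # 6 page faults
```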
Working Set (WS)
Thrashing: when the computer system is overwhelmed with paging, i.e., the CPU has little to do but there is heavy disk traffic moving pages to and from memory, with little use of those pages.
Working set: the pages that a process has used in the last w time intervals.
Limit the number of processes on the ready list so that all of them can have their working set of pages in memory.
Before starting a process, make sure its working set is in main memory.
When a page fault occurs, if the last page fault for that process was recent, then increase the size of its working set (up to a maximum).
Load the initial code pages, initial data pages and initial stack pages.
If a process has not faulted recently, reduce the size of its working set, i.e., remove all pages not used recently (per the "used bit").
Additionally, in Windows NT, a process can call the process object service to alter the working-set minimum and maximum for the process, up to defined limits.
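The working set itself is easy to compute offline from a reference string. In the sketch below, the window w is measured in memory references rather than abstract time intervals (an assumption made for illustration):

```python
def working_set(reference_string, t, w):
    """Set of pages referenced in the window of the last w references
    ending at position t (0-based) in the reference string."""
    start = max(0, t - w + 1)
    return set(reference_string[start:t + 1])

refs = [1, 2, 1, 3, 4, 3]
print(working_set(refs, 5, 3))   # pages touched by the last 3 references
```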
Check your progress 4
1. Page replacement algorithms decide:
d. all
2. In the FIFO page replacement algorithm, the page to be replaced is __________.
d. none
3. Which algorithm selects the page that was not used for a long period whenever a page is replaced?
1.6 Cache Memory

The cache memory lies in the path between the processor and the main memory. The cache memory therefore has a lower access time than main memory and is faster than the main memory. A cache memory may have an access time of 100 ns, while the main memory may have an access time of 700 ns.
The cache memory is very costly and is therefore limited in capacity. Earlier cache memories were available separately, but modern microprocessors include the cache memory on the chip itself.
The need for the cache memory arises from the mismatch between the speeds of the main memory and the CPU. The CPU clock, as discussed earlier, is very fast, whereas the main memory access time is comparatively slow. Therefore, no matter how fast the processor is, the processing speed depends more on the speed of the main memory (the strength of a chain is the strength of its weakest link). It is because of this that a cache memory, with an access time closer to the processor speed, is used.
The cache memory stores the program (or part of it) currently being executed, or which may be executed within a short period of time. The cache memory also holds temporary data that the CPU may frequently require for manipulation.
It acts as a high-speed buffer between the CPU and main memory and is used to temporarily store very active data and instructions during processing. Because the cache memory is faster than main memory, the processing speed is increased by making the data and instructions needed in current processing available in the cache. The cache memory is very costly and is therefore limited in capacity.
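The benefit of the cache can be quantified as an effective (average) access time. The sketch below uses the 100 ns cache and 700 ns main-memory figures from the text; the 75% hit ratio is an assumed, illustrative value:

```python
def effective_access_time(hit_ratio, cache_ns, memory_ns):
    """Average access time when a fraction hit_ratio of accesses
    is satisfied by the cache and the rest go to main memory."""
    return hit_ratio * cache_ns + (1 - hit_ratio) * memory_ns

# 100 ns cache, 700 ns main memory, assumed 75% hit ratio.
print(effective_access_time(0.75, 100, 700))   # 250.0 ns on average
```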
1.7 Hierarchy of Memory Types

Memory hierarchy is used in computer architecture when discussing performance issues in computer architectural design, algorithm predictions, and lower-level programming constructs involving locality of reference. A "memory hierarchy" in computer storage distinguishes each level in the "hierarchy" by response time. There are physically different kinds of memory, with significant differences in the time to read or write the contents of a particular location in memory, the amount of information that is read or written in a single access, the total volume of information that can be stored, and the unit cost of storing a given amount of information. To optimize its use and to obtain greater efficiency and economy, memory is arranged in a hierarchy, with the highest-performance and in general the most expensive devices at the top, and with progressively lower-performance and less costly devices in the layers below, as shown in fig 1.11. The contents of a given layer of the memory hierarchy, and the way in which data flows between adjacent layers, may be arranged as follows.
Register
A single word is held in each register of the processor; typically a word comprises 4 bytes. This level is sometimes not considered part of the hierarchy.
Cache
These are groups of words within the cache; typically an individual group (line) in the cache will hold 64 words (say 256 bytes), and there will be, say, 1024 such groups, giving a total cache of 256 KB. Individual words flow between the cache and the registers within the processor. All transfers into and out of the cache are controlled entirely by hardware.
Main memory
Words within the main (random-access) memory. On a very high performance system, groups of words corresponding to a group within the cache are transferred between the cache and the main memory in a single cycle of main memory. On lower-performance systems the size of the group of words in the cache is greater than the width of the memory bus, and the transfer takes the form of a sequence of memory cycles. The algorithm that controls this movement is implemented entirely in hardware. Main memory sizes are very variable – from as little as 1 GB on a compact system up to many GB on a high-performance system.
Demountable storage
The restoration of a backed-up file may be automatic, or may require direct intervention by the end user. For other systems the backup facility is typically an adapted form of a video or audio cassette system, perhaps elevated into some form of computer-controlled cassette-handling robot arrangement. Smaller systems may use a cassette system or floppy disks.
Read-only library
auto associative
hetero associative
An auto associative memory retrieves a previously stored pattern that most closely resembles the current pattern. In a hetero associative memory, the retrieved pattern is, in general, different from the input pattern, not only in content but possibly also in type and format.
The page sizes (and hence frame sizes) are powers of 2, and range from 512 bytes to 8192 bytes per page.
The cache memory is the memory closest to the CPU.
1.10 Answers for Check Your Progress

Check your progress 1
Answers: (1-c)
1.11 Glossary
1. Memory hierarchy - Refers to the organization of different types of memory into levels by speed, size and cost.
1.12 Assignment
What are the four important tasks of a memory manager?
1.13 Activities
UNIT 2: PROCESS SCHEDULING

Unit Structure
2.0 Learning Objectives
2.1 Introduction
2.6 Threads
2.9 Glossary
2.10 Assignment
2.11 Activities
2.1 Introduction
Most systems have a large number of processes with short CPU bursts interleaved between I/O requests, and a small number of processes with long CPU bursts. To provide good time-sharing behaviour, we may preempt a running process to let another one run. The ready list, also known as a run queue, in the operating system keeps a record of all processes that are ready to run and not blocked on input/output or another blocking system call, such as a semaphore. The entries in this list are pointers to a process control block, which stores all information and state about a process.
When an I/O operation for a process is complete, the process moves from the waiting state to the ready state and is placed on the run queue.
A running process moves from the running to the waiting state when it issues an I/O request or some other operating system call that cannot be satisfied immediately. The current process halts.
A timer interrupt causes the scheduler to run and decide that a process has run for its allotted period of time, and that it is time to move it from the running to the ready state.
The decisions that the scheduler makes concerning the sequence and length of time that processes may run are called the scheduling algorithm (or scheduling policy). These are not easy decisions, as the scheduler has only a limited amount of information about the processes that are ready to run. A good scheduling algorithm should:
Be fair – give each process a fair share of the CPU, and allow each process to complete in a reasonable amount of time.
Minimize overhead – don't waste too many resources. Keep scheduling time and context-switch time at a minimum.
Be efficient – keep every operating system resource in use.
A process moves through a series of different process states.
more vCPUs will not automatically improve performance. This is because, as the number of vCPUs goes up, it becomes more complicated for the scheduler to arrange time slots on the real CPUs, and the wait time can degrade performance.
(2) In parallel processing environments that have more data elements than processors, a virtual processor is a simulated processor. Virtual processors run in series, not in parallel, but they allow applications that need a processor for each data element to run on a computer with fewer processors.
1. Activate any further action
2. Interrupt signal is detected
After the interrupt signal is detected, the computer either resumes running the program it was running or begins running another program.
1) Device arm,
2) NVIC enables,
3) Global enable,
4) Interrupt priority level must be higher than current level executing, and
Check your progress 3
1. The interrupt mechanism uses __________.
Decreasing-Time Algorithm
Critical-Path Algorithm
Decreasing-Time Algorithm
Decreasing-Time Algorithm (DTA) is based on a simple strategy:
Perform the longer jobs first and save the shorter jobs for last.
Basically, the DTA creates a Priority List by listing the tasks in decreasing order of processing times. Tasks with equal processing times can be listed in any order. A Priority List produced by the DTA is often called a decreasing-time list, as shown in fig 2.1.
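Building a decreasing-time priority list is just a sort. In the sketch below, the task names and processing times are hypothetical:

```python
def decreasing_time_list(tasks):
    """Build a DTA Priority List: tasks in decreasing order of
    processing time; ties may be listed in any order."""
    return sorted(tasks, key=lambda task: task[1], reverse=True)

# Hypothetical tasks as (name, processing_time) pairs.
jobs = [("A", 6), ("B", 3), ("C", 8), ("D", 3)]
print(decreasing_time_list(jobs))   # [('C', 8), ('A', 6), ('B', 3), ('D', 3)]
```

Python's sort is stable, so tasks with equal processing times (B and D here) keep their original relative order, which is consistent with the rule that ties may be listed in any order.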
Note that precedence relations always overrule the Priority List whenever there is a conflict between the two. As a result, for example, at this point task X cannot in fact be assigned first, despite being first on the Priority List, because the precedence relations insist that task Q precede task X.
Even though the strategy of scheduling the longer tasks first sounds good, it has a major defect: the DTA pays no attention to any information in the project diagram showing that one or more tasks ought to be done early rather than late. For illustration, if one or more tasks with long processing times cannot start until task X is finished, then assigning task X early will almost certainly result in a shorter finishing time, even though assigning task X early goes against the DTA.
Critical-Path Algorithm
Now that the concept of critical time is known, we can study the Critical-Path Algorithm. The Critical-Path Algorithm (CPA) is based on an approach similar to that of the Decreasing-Time Algorithm:
It performs the tasks with the longest critical times first and keeps the tasks with shorter critical times for last. The CPA produces a Priority List by listing the tasks in decreasing order of critical times. Tasks with equal critical times can be listed in any order. A Priority List created by the CPA is often called a critical-path list.
The first step in applying the CPA to a project diagram is to use the Backflow Algorithm to replace all processing times with critical times. Although the Critical-Path Algorithm is usually better than the Decreasing-Time Algorithm, neither is guaranteed to produce an optimal schedule. In fact, no efficient scheduling algorithm is presently known that always gives an optimal schedule. However, the Critical-Path Algorithm is the best general-purpose scheduling algorithm currently known.
2.6 Threads
A thread is a single sequential stream of execution within a process. Because threads have some of the properties of processes, they are sometimes called lightweight processes. Threads permit multiple streams of execution within a process. In many respects, threads are a popular way to improve application performance through parallelism. The CPU switches rapidly back and forth among the threads, giving the illusion that the threads are running in parallel. Like a traditional process (i.e., a process with one thread), a thread can be in any of several states. Each thread has its own stack, since a thread will usually call different procedures and thus have a different execution history; this is why each thread needs its own stack. In an operating system that has a thread facility, the fundamental unit of CPU utilization is a thread. A thread has (or consists of) a program counter (PC), a register set and stack space. Threads are not independent of one another the way processes are; threads share with other threads their code section, data section and OS resources (together known as a task), such as open files and signals.

A process with multiple threads makes a great server, for example a printer server.
Because threads can share common data, they do not need to use interprocess communication.
Threads only need a stack and storage for registers; as a result, threads are cheap to create.
Threads use very few resources of the operating system in which they are working; that is, threads do not require new address space, global data, program code or operating system resources.
Context switching is fast when working with threads, because we only have to save and/or restore the PC, SP and registers.
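These points can be illustrated with a short sketch: several threads in one process update the same shared variable directly, needing only a lock (no inter-process communication) to serialize access:

```python
import threading

counter_lock = threading.Lock()
counter = 0

def worker(n):
    """Each thread updates data shared with its siblings --
    no inter-process communication is needed."""
    global counter
    for _ in range(n):
        with counter_lock:       # serialize access to the shared variable
            counter += 1

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)   # 4000: all threads saw and updated the same data
```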
Architecture
This is especially true when one of the tasks may block, and it is desired to allow the other tasks to proceed without blocking.

For instance, in a word processor, a background thread may check spelling and grammar while a foreground thread processes user input, while a third thread loads images from the hard drive, and a fourth does periodic automatic backups of the file being edited.
Benefits
There are four major categories of benefits to multi-threading:
Responsiveness - One thread may provide a rapid response while other threads are blocked or slowed down doing intensive calculations.
2.7 Let Us Sum Up
In this unit we have learned:
A thread is the smallest unit of processing that can be performed in an operating system.
Answers: (1-b)
Answers: (1-c)
2.9 Glossary
1. Virtual reality - Virtual reality is an artificial environment that is created
with software and presented to the user in such a way that the user suspends
belief and accepts it as a real environment.
3. VMware Platform Services Controller (PSC) - VMware Platform Services Controller (PSC) is a new service in vSphere 6 that handles the infrastructure security functions.
2.10 Assignment
Write in detail about page replacement algorithms.
2.11 Activities
Explain Paging address Translation by direct mapping.
Block Summary
In this block, the students have learnt the basics of memory management and process scheduling in an Operating System. The block focuses on the concepts of virtual memory, paging and segmentation. Cache memory and the virtual processor, along with the associated techniques, have also been explained.

After completing this block, students will be able to learn about and work with the variety of operating systems available today. The use of operating systems with various processing techniques will allow them to gain practical knowledge of the processor and its interaction with the operating system. The authors have made every possible effort to explain the basics of memory management techniques and related concepts, with particular attention to the memory hierarchy. The different processes involved, along with the interrupt mechanism, are explained diagrammatically, and the working of page replacement algorithms and their techniques is demonstrated.
Block Assignment

Short Answer Questions
1. What is paging?
3. What is segmentation?
Enrolment No.
1. How many hours did you need for studying the units?
Unit No 1 2 3 4
Nos of Hrs
2. Please give your reactions to the following items based on your reading of
the block:
………………………………………………………………………………………………
………………………………………………………………………………………………
………………………………………………………………………………………………
………………………………………………………………………………………………
………………………………………………………………………………………………
………………………………………………………………………………………………
………………………………………………………………………………………………
………………………………………………………………………………………………
……………………………………………………………………………………………
Education is something
which ought to be
brought within
the reach of every one.
- Dr. B. R. Ambedkar
BLOCK 3:
FILE AND I/O
MANAGEMENT
Author
Er. Nishit Mathur
Language Editor
Prof. Jaipal Gaikwad
Contents
UNIT 1
FILE SYSTEM
UNIT 2
I/O MANAGEMENT
BLOCK 3: FILE AND I/O
MANAGEMENT
Block Introduction
An operating system is important software which makes the computer run. It handles all the computer's processes and drives the hardware. It lets you communicate with the computer without having command of its language. Your computer's operating system manages all software and hardware functions. The main job of the operating system is to coordinate all processes and link these processes with the central processing unit (CPU), memory and storage.
In this block, we will discuss the basics of file system management and input/output memory management. The block will focus on the study and concepts of disk space allocation, disk scheduling and input/output device drivers. The students will be given an idea of DMA-controlled input/output and basic programmed input/output.

In this block, the student will learn and understand the basics of programmed and DMA input/output management techniques. The concepts related to input/output supervisors and input/output drivers will also be explained to the students. The student will be shown practically the programmed input/output technique.
Block Objective
After learning this block, you will be able to understand:
Block Structure
Unit 1: File System
UNIT 1: FILE SYSTEM
Unit Structure
1.0 Learning Objectives
1.1 Introduction
1.7 Glossary
1.8 Assignment
1.9 Activities
Types of files
1.1 Introduction
A file system is the methods and data structures that an operating system
uses to keep track of files on a disk or partition; that is, the way the files are
organized on the disk. The word is also used to refer to a partition or disk that is
used to store the files or the type of the file system. Thus, one might say “I have
two file systems” meaning one has two partitions on which one stores files, or that
one is using the “extended file system”, meaning the type of the file system.
The difference between a disk or partition and the file system it contains is
important. A few programs (including, reasonably enough, programs that create
file systems) operate directly on the raw sectors of a disk or partition; if there is an
existing file system there it will be destroyed or seriously corrupted. Most
programs operate on a file system, and therefore won't work on a partition that
doesn't contain one (or that contains one of the wrong types).
In spite of all of the media hype about them, a hard disk is merely a medium for storing information – a replacement for the limited capacity of the floppy disk, which was the first type of disk storage medium available on small computers. As hard disks grow in capacity, becoming larger and larger every year, it is becoming increasingly difficult for operating systems, and their companion file systems, to use them in an efficient manner.
File systems define conventions for naming files, including the maximum number of characters in a name that certain systems allow and how a file name suffix can be applied. A file system also provides a way to specify the path to a file through the directory structure. It uses metadata to store and retrieve files, which may include:
Date created
Date modified
File size
An example of such a file system is OS X, used on Macintosh hardware, which provides various optimization features, including file names of up to 255 characters.
For a certain group of users, such a file system imposes constraints, as it will not provide read/write access. The best way is either to put a password on the files or to encrypt them so that other users cannot access them. When encrypting, a key is provided to encrypt the file, which can later decrypt the encrypted file text. Only a user with the correct key can access the required file.
1.2.1 Partitions
When referring to a computer hard drive, a disk partition or partition is a
segment of the hard drive that is separated from other portions of the hard drive.
Partitions help enable users to divide a computer hard drive into different drives or
into different portions for multiple operating systems to run on the same drive.
With older file allocation tables, such as FAT16, creating smaller partitions
allows a computer hard drive to run more efficiently and save more disk space.
However, with new file allocation tables, such as FAT32, this is no longer the
case.
Boot: It is a partition that contains the files required for a system start
up.
NTFS: It is used with Microsoft Windows NT 4.x, Windows 2000 and Windows XP.
Solaris X86: This is used with Sun Solaris X86 platform operating system.
In this type of directory structure as shown in fig 1.2, all files are in the same
directory.
It has certain limitations:
Since all files are in the same directory, they must have unique names.
If two users both call their data file test, then the unique-name rule is violated.
Even a single user may find it difficult to remember the names of all the files as the number of files increases.
ii. When a user job starts or a user logs in, the system's Master File Directory (MFD) is searched. The Master File Directory is indexed by user name or account number.
iii. When the user refers to a particular file, the user's own UFD (User File Directory) is searched.
2. Here special system calls are used to create and delete directories.
3. Here every process has a current directory that holds the files which are of current interest to the process.
5. Here the user can change his current directory whenever he wants.
6. If a file is not in the current directory, then the user normally either gives a path name or changes the current directory. Here the paths can be of two types:
a) Absolute Path: This starts at the root and follows a path down to the particular file.
b) If any subdirectories exist, the same procedure must be applied.
Here the UNIX rm command is used, whereas MS-DOS will not delete a directory unless it is empty.
Fig 1.5 shows another type of directory structure: a graph having no cycles.
A. Creating a link
Deletion of the link will not affect the original file; only the link is removed.
Check your progress 1
4. In which type of directory arrangement are all the files placed in a single directory?
techniques like contiguous, linked and indexed allocation. Every method has its own merits and demerits.
1. Contiguous
2. Non-contiguous
Contiguous Allocation
It may also be noted that the file directories used with contiguous allocation systems are simple to implement. Every file entry keeps the starting location of the file along with the file size. Consider again the diagram shown in fig 1.6: if a file N blocks long starts at location L, then it occupies blocks L, L+1, ..., L+N-1. The directory entry holds the address of the initial block along with the length of the file.
The only problem with contiguous allocation is finding space for a new file when free space is tracked with the bit-map method. In order to create a file n blocks long, we must first find a run of n 0-bits in the map. For a better understanding of the contiguous storage allocation problem, think of the disk space as a mixture of free and occupied segments, where each segment is a contiguous group of disk blocks.
1. First-fit
With the first-fit strategy, the search stops as soon as the first hole large enough appears, and that memory is allocated for the file. Searching starts either at the beginning of the set of holes or at the place where the previous first-fit search ended.
2. Best-fit
With the best-fit strategy, the entire list is searched for the smallest hole that is still large enough to hold the file.
Of the two strategies, neither is clearly best for all storage applications, but first-fit is normally faster than best-fit.
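The two strategies can be sketched as follows; the hole sizes are hypothetical, measured in blocks:

```python
def first_fit(holes, size):
    """Return the index of the first hole big enough, or None."""
    for i, hole in enumerate(holes):
        if hole >= size:
            return i
    return None

def best_fit(holes, size):
    """Return the index of the smallest hole that is still big enough."""
    best = None
    for i, hole in enumerate(holes):
        if hole >= size and (best is None or hole < holes[best]):
            best = i
    return best

holes = [30, 10, 25, 8]      # hypothetical free segments, in blocks
print(first_fit(holes, 9))   # 0: the 30-block hole, first one that fits
print(best_fit(holes, 9))    # 1: the 10-block hole, the tightest fit
```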
For this reason, such algorithms suffer from external fragmentation. As files are allocated and erased, the free space on the disk gets broken down into smaller pieces. External fragmentation is, in essence, scattered free blocks that are individually too small for allocation, even though collectively they may amount to a large portion of the disk.
Depending on the total disk storage and the typical file size, external fragmentation may be a minor or a major problem. A major problem with contiguous allocation is determining how much space the file will require. This problem does not arise when copying files, where the exact size is known, but for a new file the final size is hard to determine in advance.
When a file grows larger than expected, so that its extension has to be kept in a different disk area, the mechanism is called file overflow. Accessing an overflowed contiguous area is quite tedious and slow, which takes away much of the appeal of contiguous allocation.
Since users are generally unaware of file storage capacity and whether data will stay contiguous, storage allocation systems nowadays have moved to more dynamic non-contiguous storage allocation, which can be:
Linked allocation
Indexed allocation
Linked Allocation
Linked allocation is essentially a disk-based version of a linked list, where the disk blocks may be scattered anywhere on the disk. The directory contains pointers to the first and last blocks of the file, and each block contains a pointer to the next block; these pointers are not made available to the user.
It can be used effectively for sequential access only, and even there it may generate long seeks between blocks. Another issue is the extra storage space required for the pointers. There is also a reliability problem, due to possible loss or damage of any pointer. Diagram 1.7 shows linked (chained) allocation, where each block contains the information about the next block.
MS-DOS and OS/2 use another variation on the linked list, called the FAT (File Allocation Table). The beginning of each partition contains a table having one entry for each disk block, indexed by the block number.
The directory entry contains the block number of the first block of the file.
The table entry indexed by block number contains the block number of the next
block in the file.
The table entry of the last block in the file holds an EOF (end of file) value; the chain continues until this EOF table entry is encountered. In this way, we can follow the next-block pointers without accessing the disk at all. A table value of 0 indicates an unused block; hence, allocating a free block under the FAT arrangement is simply a matter of searching for the first table entry holding a 0. MS-DOS and OS/2 use this scheme. Figure 1.8 shows a file allocation table (FAT).
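Following a FAT chain can be sketched as a simple table walk. The FAT contents below are hypothetical, with -1 standing in for the EOF marker:

```python
def fat_chain(fat, first_block, eof=-1):
    """Follow a file's chain of blocks through the FAT until EOF."""
    blocks = []
    block = first_block
    while block != eof:
        blocks.append(block)
        block = fat[block]     # the table entry holds the next block number
    return blocks

# Hypothetical FAT: entry i gives the block that follows block i.
fat = {2: 5, 5: 7, 7: -1, 3: 4, 4: -1}
print(fat_chain(fat, 2))   # [2, 5, 7]
```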
Indexed Allocation:
Typically, the file indexes are not physically stored as part of the file allocation table. Rather, the index for a file is kept in a separate block, and the entry for the file in the allocation table points to that block. The allocation may be on the basis of either fixed-size blocks or variable-size portions. The indexed allocation scheme is shown in fig 1.9.
The advantage of this scheme is that it supports both sequential and random
access. Searching can take place within the index blocks themselves, and the
index blocks may be kept close together in secondary storage to minimize seek
time. Space is wasted only on the index, which is not very large, and there
is no external fragmentation.
c. both a and b
d. neither a nor b
1.4 Disk Scheduling
In a single computer there can be many operations in progress at a particular
time, so all the processes running on the system must be managed. The idea of
multiprogramming is to execute multiple programs at the same time. To control
and distribute disk access among the related processes, the operating system
uses a disk scheduling procedure.
Through this process, CPU time is distributed among the various related
processes. The scheduling policy specifies which process is to be executed
first by the CPU; in general, scheduling is concerned with the processes the
CPU is handling at a particular time.
In First Come First Serve (FCFS) scheduling, jobs or processes are arranged
in the order in which they enter the computer system. The operating system
maintains a queue, and requests are served strictly in that order: jobs are
carried out in the same sequence in which they arrived.
SSTF or Shortest Seek Time First - In this scheduling, the pending request
closest to the current head position is serviced next, minimizing the seek
time of each individual move.
C-Scan Scheduling -
once an end is reached, the head returns to the starting end and begins
servicing again, and this repeats cycle after cycle. While the CPU is working
on a process, the user may wish to enter some data; the user can do so as the
need arises, and the CPU resumes executing the process soon after the input
operation. This type of scheduling is mostly applied when the same set of
requests must be processed in a cycle.
Look Scheduling -
In this scheduling, the disk head sweeps the request list from one end of the
disk towards the other, servicing requests along the way, and then reverses
direction; unlike a full scan, the head travels only as far as the last
pending request in each direction before turning back.
Round Robin - In this scheduling, CPU time is divided into equal slices
called quanta; each process runs for at most one quantum and then returns to
the back of the queue.
Priority Scheduling -
In this scheduling, each process is assigned a priority, and the process with
the highest priority is served first. Priorities may be set by examining the
total processing time of each process along with its total number of
input/output operations.
Multilevel Queue:
This scheduling is applicable when there are several queues, each holding a
definite class of processes. Since many kinds of work must be performed by
the computer at any particular time, the processes are partitioned into
different queues, and the CPU services these queues according to some policy,
each queue being dedicated to a definite class of requests.
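The difference between the policies above can be made concrete by totalling head movement. The request queue and the starting head position below are invented for illustration; FCFS and SSTF are compared on the same workload.

```python
def fcfs_seek(start, requests):
    """Total head movement serving requests in arrival order (FCFS)."""
    total, pos = 0, start
    for r in requests:
        total += abs(r - pos)
        pos = r
    return total

def sstf_seek(start, requests):
    """Total head movement always serving the closest pending request (SSTF)."""
    total, pos, pending = 0, start, list(requests)
    while pending:
        r = min(pending, key=lambda c: abs(c - pos))
        pending.remove(r)
        total += abs(r - pos)
        pos = r
    return total

queue = [98, 183, 37, 122, 14, 124, 65, 67]   # pending cylinder requests
print(fcfs_seek(53, queue))   # 640 cylinders of head movement
print(sstf_seek(53, queue))   # 236 cylinders of head movement
```

On this workload SSTF moves the head far less than FCFS, which is why it answers "shortest seek time" in the self-check question, though it can starve requests far from the head.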
Check your progress 3
1. Which technique is used by the Operating System to search for the shortest
time?
a. FCFS c. C-Scan
b. Look d. none
MS-DOS and OS/2 use another variation on linked list called FAT.
Index allocation addresses many of the problems of contiguous and chained
allocation.
C-Scan Scheduling is a type of scheduling in which requests are serviced in a
particular circular order.
Round Robin is a type of scheduling in which CPU time is divided into equal
slices, each called a quantum.
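The quantum idea can be sketched as a simulation: each process runs for at most one quantum, then goes to the back of the queue until its burst is exhausted. The process names and burst times below are invented.

```python
from collections import deque

def round_robin(bursts, quantum):
    """Return the order in which (process, time-slice) pairs get the CPU."""
    queue = deque(bursts.items())        # (name, remaining burst time)
    schedule = []
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)    # run one quantum at most
        schedule.append((name, run))
        if remaining > run:              # unfinished: back of the queue
            queue.append((name, remaining - run))
    return schedule

print(round_robin({"P1": 5, "P2": 3, "P3": 1}, quantum=2))
# [('P1', 2), ('P2', 2), ('P3', 1), ('P1', 2), ('P2', 1), ('P1', 1)]
```

No process can hold the CPU longer than one quantum at a time, which is what gives round robin its fairness.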
1.7 Glossary
1. File - A file is a collection of records.
2. File Organisation - The way in which records are arranged and accessed on
the disk.
1.8 Assignment
Explain the Operating System file structure.
1.9 Activities
Study file organisation in Operating System.
3. Computer Science & Application by Dr. Arvind Mohan Parashar, Chandresh
Shah, Saurab Mishra.
UNIT 2: I/O MANAGEMENT
Unit Structure
2.0 Learning Objectives
2.1 Introduction
2.9 Glossary
2.10 Assignment
2.11 Activities
2.1 Introduction
Management of I/O devices is one of the most important parts of an operating
system, so important and varied that an entire I/O subsystem is devoted to
its operation. Considering devices such as mice, keyboards, disk drives,
display adapters, USB devices, network connections, audio I/O and printers,
we find that I/O subsystems work on the following principles:
Newly developed devices keep being attached, so old systems must be able to
accommodate new devices.
The latest devices may not interface easily or compatibly with the original
standards.
Further, for every hardware device there is a device driver, a module that
works with the operating system to handle that piece of hardware.
Goals for I/O
Figure 2.1 shows four types of buses that are usually found in modern
computers:
The PCI bus, which connects high-speed, high-bandwidth devices to the memory
subsystem and the processor (CPU).
The expansion bus, which connects low-bandwidth devices that normally
transmit data one character at a time, using buffering.
The SCSI bus, which connects a number of SCSI devices to a particular SCSI
controller.
The daisy-chain bus, in which a string of devices is connected to one another
like beads on a chain, with only one device directly connected to the host
computer.
The status register contains bits read by the host to determine the state of
the device: idle, ready, busy, error, and so on.
The control register contains bits written by the host to issue commands or
to alter the settings of the device.
In memory-mapped I/O, parts of the processor's address space are mapped to
the device, and communication is done by reading and writing those memory
locations directly.
This is advantageous for devices that move large quantities of data quickly.
These schemes are applied in combination with conventional registers. A
potential problem with memory-mapped I/O is that a process may be allowed to
write directly to the address space used by a memory-mapped I/O device.
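The read/write style of memory-mapped communication can be illustrated with an anonymous memory map: writing to and reading from a plain memory region stands in for the device's mapped registers. There is no real device here; the region size and offsets are invented for the sketch.

```python
import mmap

# An anonymous 4 KiB mapping stands in for a device's mapped region.
region = mmap.mmap(-1, 4096)

# "Issuing a command" is just a memory write at a chosen offset...
region.seek(0)
region.write(b"\x01")          # e.g. a command byte

# ...and "reading status" is just a memory read at the same offset.
region.seek(0)
status = region.read(1)
print(status == b"\x01")       # True
region.close()
```

The convenience, and the danger the text mentions, follow from the same fact: any code that can address the region can read or write the "device" directly.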
a. storage c. user-interface
b. communications d. all
2. Buses are set of ________.
a. rules c. information
b. protocols d. data
device numbers are used to differentiate among the various devices and their
controllers; each partition on the primary IDE disk has a different minor
device number. Thus /dev/hda2, the second partition of the primary IDE disk,
has major number 3 and minor number 2. Linux maps the device special file
passed in system calls to the device driver using the major device number and
a number of system tables, such as the character device table, chrdevs.
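On a Unix system, Python's `os` module exposes the same major/minor encoding through `os.makedev`, `os.major` and `os.minor`. The sketch below just packs and unpacks the numbers from the /dev/hda2 example; no real device file is opened.

```python
import os

# Combine the major and minor numbers from the /dev/hda2 example
# (major 3, minor 2) into a single device number, then split it back.
dev = os.makedev(3, 2)
print(os.major(dev), os.minor(dev))   # 3 2
```

The kernel does the same packing internally: one integer identifies both the driver (major) and the specific device or partition (minor).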
To handle a request, a device driver proceeds as follows:
• If the request is, say, to read block N and the driver is idle at the time,
it processes the request immediately.
• If the driver is busy, the request is placed in a queue.
a. data c. information
b. program d. all
To overcome such problems in data transfer, Direct Memory Access (DMA) is
applied. A PC's ISA DMA controller carries 8 DMA channels, of which 7 are
usable by device drivers. To begin a data transfer, the device driver sets up
the DMA channel's address and count registers together with the direction of
the transfer, read or write; the device then starts the DMA when it is ready.
DMA channels are a restricted resource: with only 7 usable channels, they are
difficult to share among device drivers. As with interrupts, the device
driver must be able to determine which DMA channel it should use, and many
devices have a fixed, standard DMA channel, just as with interrupts.
The Linux operating system tracks the use of the DMA channels with a vector
of dma_chan data structures. Each dma_chan structure carries a pointer
identifying the owner of the DMA channel and a flag indicating whether the
channel is allocated.
For devices that transfer large amounts of data, it is wasteful to use the
CPU to feed the data into the device registers.
Such work can be handed off to a DMA controller.
First, a command is sent to the DMA controller giving the location of the
data and the number of bytes to transfer.
The DMA controller is an independent component of the computer, and some
bus-mastering I/O cards have their own DMA hardware.
Handshaking between the DMA controller and a device is done through a pair of
wires: DMA-request and DMA-acknowledge.
1. A PC's ISA DMA controller carries ____ DMA channels.
a. 4
b. 8
c. 16
d. 32
2.5 Programmed I/O

The basic computer structure is shown in figure 2.4.
In UDI, programmed I/O is performed through environment service calls, coded
as function calls rather than direct memory references in the drivers.
A PIO handle is an opaque data type that carries the addressing,
data-translation and access-constraint information needed to operate on a
device or memory address in a given address space.
For a transaction operating on a device, the PIO offset indicates the offset,
within the space referenced by the PIO handle, at which the I/O operations
take place.
In such cases there is no ordering guarantee across transaction lists in
separate udi_pio_trans calls, except that calls issued from the same region,
in the same serialization domain, are processed in FIFO order.
a. registers c. files
b. data d. all
b. install d. all
I/O devices, concerned with storage, communications and the user interface,
work and interface with the computer using analog and digital signals sent
over wires or through the air.
Buses in computer architecture follow rigid protocols for the different
messages that can be sent across the bus, and procedures for resolving
conflict issues.
A driver, or device driver, is a program or routine designed for an I/O
device.
Answers: (1-b)
Answers: (1-b)
Answers: (1-a)
Answers: (1-b)
2.9 Glossary
1. I/O devices - Storage, communications or user-interface devices that
communicate with the computer through signals sent over wires or through the
air.
2. Buses - Pathways governed by rigid protocols for the different messages
that can be sent across the bus, with procedures for resolving conflict
issues.
3. Drivers or Device driver - A program or routine designed for an I/O
device.
2.10 Assignment
Write short note on directory structure of Operating System.
2.11 Activities
Collect some information on I/O devices.
Block Summary
In this block, students have learnt about the basics of file system
management and input/output memory management. The block gives an idea of the
study and concepts of disk space allocation, disk scheduling and input/output
device drivers. The concepts of DMA-controlled input/output and basic
programmed input/output have also been explained.
The block detailed the basics of programmed and DMA input/output management
techniques. Concepts related to input/output supervisors and input/output
drivers were also explained, and the students were given a practical
demonstration of the programmed input/output technique.
Block Assignment
Short Answer Questions
1. What is Disk scheduling?
Enrolment No.
1. How many hours did you need for studying the units?
Unit No 1 2 3 4
Nos of Hrs
2. Please give your reactions to the following items based on your reading of
the block:
………………………………………………………………………………………………
………………………………………………………………………………………………
………………………………………………………………………………………………
………………………………………………………………………………………………
………………………………………………………………………………………………
………………………………………………………………………………………………
………………………………………………………………………………………………
………………………………………………………………………………………………
……………………………………………………………………………………………
Education is something
which ought to be
brought within
the reach of every one.
- Dr. B. R. Ambedkar
BLOCK 4:
BASICS OF DISTRIBUTED
OPERATING SYSTEM
Author
Er. Nishit Mathur
Language Editor
Prof. Jaipal Gaikwad
Contents
UNIT 1
DISTRIBUTED OPERATING SYSTEM
UNIT 2
MORE ON OPERATING SYSTEM
BLOCK 4: BASICS OF DISTRIBUTED
OPERATING SYSTEM
Block Introduction
An operating system is important software that makes the computer run. It
handles all of the computer's processes and drives the hardware, and it lets
you communicate with the computer without knowing its language. The operating
system manages all software and hardware functions; its main job is to
coordinate all processes and link them with the central processing unit
(CPU), memory and storage.
In this block, we will detail the basics of distributed operating systems and
their modelling techniques. The block will focus on the architecture and
distribution of distributed operating systems and study their
characteristics; the layout and working characteristics of distributed
operating systems are also explained.
In this block, the student will learn and understand the basics of remote
procedure calls and their techniques. Concepts related to distributed shared
memory and the Unix operating system will also be explained, and the student
will be given a practical demonstration of the Unix architecture.
Block Objective
After learning this block, you will be able to understand:
Block Structure
Unit 1: Distributed Operating System
Unit 2: More on Operating System
UNIT 1: DISTRIBUTED OPERATING SYSTEM
Unit Structure
1.0 Learning Objectives
1.1 Introduction
1.7 Glossary
1.8 Assignment
1.9 Activities
1.1 Introduction

A distributed operating system, as shown in fig 1.1, is a model in which
distributed applications are running on multiple computers linked by
communications. Such an operating system is an advancement of the network
operating system, designed for higher levels of communication and integration
among the machines on the network. To its users it appears as a simple
centralized operating system, while it actually manages and runs multiple
independent central processing units.
These systems are referred to as loosely coupled systems where each processor
has its own local memory and processors communicate with one another through
various communication lines, such as high speed buses or telephone lines. By
loosely coupled systems, we mean that such computers possess no hardware
connections at the CPU - memory bus level, but are connected by external
interfaces that run under the control of software. The distributed OS
involves a collection of independent computer systems capable of
communicating and cooperating with each other across a LAN or WAN. A
distributed OS provides a virtual machine abstraction to its users, with wide
sharing of resources such as computational capacity, I/O, files and so on.
computers and users of a distributed system to be independent of each other
while retaining a limited ability to cooperate. An example of such a system
is a group of computers connected through a local network. Every computer has
its own memory and hard disk, and there are some shared resources such as
files and printers. If the interconnection network broke down, individual
computers could still be used, but without some features, such as printing to
a non-local printer.
b. information d. all
b. CPU d. modem
Layered architecture
Object-based architecture
Data-centered architecture
Event-based architecture
The architecture is organized into logically different components, which are
distributed over various machines.
The layered architecture is an arrangement of the client-system model, as
shown in fig 1.2.
Fig 1.3 shows an object-based style for distributed object systems, which is
less structured: each component corresponds to an object, and the connector
is an RPC or RMI mechanism.
Decoupling processes in space and also in time leads to various alternative
styles, as shown in fig 1.4.
The publish/subscribe event-based architecture, shown in fig 1.5, carries
communication styles such as:
Publish-subscribe
Broadcast
Point-to-point
Here the sender and receiver are decoupled, giving asynchronous
communication.
In the event-driven architecture (EDA), activities such as the production,
detection and consumption of events, and the reactions to them, give rise to
the various events in the system.
A data-centred architecture accesses and updates data kept in a data-centred
store: the processes communicate and exchange data or information mainly by
reading and modifying data in some shared repository. Many web-based
distributed systems communicate among themselves through shared Web-based
data services.
Further, the basic client-server model shown in fig 1.7 has certain main
features:
Clients and servers can be on different machines.
For many years there has been a high level of growth in peer-to-peer systems.
These can be:
Structured P2P: the nodes are organized in a particular distributed data
structure.
Unstructured P2P: each node links to arbitrarily selected neighbour nodes.
Hybrid P2P: some nodes are given special functions in a well-organized
manner.
In a structured overlay network, nodes are arranged in a structure such as a
logical ring, as shown in fig 1.8, and a specific node is made responsible
for a service based only on its ID.
Here the idea is to use a distributed hash table (DHT) to organize the nodes.
A conventional hash function transforms a key into a hash value that can be
used as an index into a hash table:
Keys are unique; each key corresponds to an object to be kept in the table.
The hash value is used to insert an object into the hash table and to
retrieve it at any time.
In a DHT, data objects and nodes are each assigned a key that hashes to a
random number from a very large identifier space. A mapping function assigns
objects to nodes based on the hash value, and a lookup on a hash value
returns the network address of the node that stores the requested object.
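The mapping just described can be sketched in a few lines: hash node names and object keys into one identifier space, and assign each object to the first node whose identifier follows the object's hash around the ring. The node names and the key are invented, and the truncated SHA-1 here is a stand-in for illustration, not any real DHT protocol.

```python
import hashlib

SPACE = 2 ** 16   # a small identifier space, for illustration only

def ident(name):
    """Hash a node name or object key into the identifier space."""
    digest = hashlib.sha1(name.encode()).digest()
    return int.from_bytes(digest[:2], "big") % SPACE

def lookup(key, nodes):
    """Ring rule: the object lives on the first node whose identifier
    is at or after the object's hash, wrapping around the ring."""
    k = ident(key)
    ring = sorted((ident(n), n) for n in nodes)
    for node_id, node in ring:
        if node_id >= k:
            return node
    return ring[0][1]   # wrapped past the largest identifier

nodes = ["node-a", "node-b", "node-c", "node-d"]
print(lookup("report.pdf", nodes))   # always the same node for this key
```

Every participant that knows the node list computes the same owner for a key, so a lookup needs no central directory, which is the point of the DHT.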
Several unstructured P2P systems try to maintain a random graph.
The basic idea of unstructured P2P systems is that every node is required to
contact a randomly selected other node:
each peer maintains a partial view of the network consisting of n nodes;
periodically, each node P selects a node Q from its partial view.
A hybrid architecture of client-server combined with P2P, in which
edge-server architectures are used for Content Delivery Networks, is shown in
fig 1.9.
Another hybrid of client-server with P2P is BitTorrent, where users cooperate
in file distribution, as shown in fig 1.10.
Once a node finds where to download a file, it joins a swarm of downloaders
that fetch file chunks from the source in parallel and further distribute
those chunks among themselves, as shown in fig 1.11.
They have better performance, leading to rapid response times and higher
system throughput.
They have less process synchronization.
They have weak resource management.
least restrictive
The structure of a distributed operating system comprises:
Monolithic kernel
Collective kernel
In the collective-kernel approach, the OS services run as individual
processes, which are not necessarily present on all computers. Here the
microkernel supports interaction among processes by way of messages.
Object-oriented OS
Check your progress 2
1. In an object-based style system, there exists a connector as ____.
a. RPC
b. RIM
c. RCP
d. PCR
2. Which is not an activity of event driven architecture?
a. production
b. organisation
c. consumption
d. detection
b. event-based architecture
d. none
b. Unstructured P2P
c. Hybrid P2P
d. all
Integrated Hybrid Model - Workstations used as processor pools
Workstation-server Model
Some of the workstations in offices are dedicated to a single user, whereas
others are available to the public over the course of a day. In either case,
at any given moment a workstation either has a single user logged into it or
sits idle.
In some systems the workstations have local disks and in others they do not.
Computers without a disk are called diskless workstations, whereas
workstations with a disk are often known as diskful workstations. Diskless
workstations store their files on one or more remote file servers: read and
write requests are sent to a file server, which performs the operation and
sends the data back.
Diskless workstations are also popular because their maintenance is easy.
When a new compiler release comes out, the system administrator can simply
install it on the small number of file servers rather than on every machine,
and backing up files and maintaining the hardware is easier with centrally
placed disks.
logged in. When all files are kept on the file servers rather than on local
disks, accessing your files from any workstation is as easy as from your own.
If the workstations contain private disks, such disks can be used in either
of the following ways:
The first design is based on the observation that, since it is easiest to
keep all user files on the central file servers, the local disks are needed
only for paging and temporary files. In this model the local disk is used
only for paging and for files that are temporary, unshared and discardable at
the end of the login session.
The second arrangement is a variant of the first: here the local disks also
hold executable programs such as compilers, text editors and electronic mail
handlers. When a program is invoked, it is fetched from the local disk
instead of the file server, which lowers the network load. Since such
programs rarely change, they can be installed on the local disks and kept
there for future use. When a new release of a system program appears, it is
pushed out to all machines.
The third option uses the local disks as explicit caches. Users can download
files from the file servers to their own disks, read and write them locally,
and upload the modified files at the end of the login session. The idea of
this arrangement is to keep long-term storage in one place while reducing
network load by keeping files locally while they are in use.
In the last option, every machine has its own self-contained file system,
which can mount or otherwise access the file systems of other machines. The
idea is that each machine is basically self-contained and that contact with
the outside world is limited. This organization provides a uniform and
guaranteed response time for the user and places little load on the network.
The disadvantage is that sharing becomes more difficult, and the resulting
system is much closer to a network operating system than to a true
transparent distributed operating system.
response time. Complicated graphics programs can be extremely fast, since
they have direct access to the screen. Every user has a great degree of
independence and is able to allocate his workstation's resources as he sees
fit. Local disks add to this independence, and make it possible to continue
working, to a greater or lesser degree, even in the face of file server
crashes.
Instead of giving users personal workstations, this model gives them
high-performance graphics terminals such as X terminals. It is based on the
observation that what many users really want is a high-quality graphical
interface and good performance. Conceptually it is much closer to traditional
timesharing than to the personal computer model, even though it is built with
modern technology such as low-cost microprocessors.
The inspiration for the processor pool model comes from taking the diskless
workstation idea a step further. If the file system can be centralized in a
small number of file servers to gain economies of scale, it should be
possible to do the same for compute servers. By putting all the CPUs in a big
rack in the machine room, power supply and other packaging costs can be
reduced, giving more computing power for a given amount of money. It also
permits the use of cheaper X terminals and decouples the number of users from
the number of workstations. The model additionally allows for simple
incremental growth: if the computing load increases by 10%, you can simply
buy 10% more processors and put them in the pool.
In effect, we are converting all the computing power into idle workstations
that can be accessed dynamically. Users can be allocated as many CPUs as they
need for short periods, after which the CPUs are returned to the pool so that
other users can have them. There is no concept of ownership here: all the
processors belong equally to everyone.
The strongest argument for centralizing the computing power in a processor
pool comes from queuing theory. A queuing system is a situation in which
users generate random requests for work from a server. When the server is
busy, the users queue for service and are processed in turn. Common examples
of queuing systems are:
Bakeries
Queuing systems are useful because they can be easily modelled analytically.
Let λ be the total input rate of requests per second from all the users
combined, and μ the rate at which the server can process requests. For stable
operation we must have μ > λ. If the server can handle 100 requests/sec but
the users continuously generate 110 requests/sec, the queue will grow without
bound.
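Under the standard M/M/1 single-server queue, a textbook model assumed here and not stated in this unit, the utilization and mean response time follow directly from λ and μ; the rates below are invented for illustration.

```python
# Standard M/M/1 formulas (an assumed textbook model; this unit only
# states the stability condition mu > lambda):
#   utilization      rho = lam / mu
#   mean response    T   = 1 / (mu - lam)
def mm1(lam, mu):
    assert mu > lam, "queue grows without bound unless mu > lambda"
    rho = lam / mu
    t = 1.0 / (mu - lam)
    return rho, t

rho, t = mm1(lam=90.0, mu=100.0)
print(rho)   # 0.9 -> the server is busy 90% of the time
print(t)     # 0.1 -> mean response time of 0.1 seconds
```

The formula also shows why pooling helps: as λ approaches μ, the response time 1/(μ - λ) blows up, and one fast shared server degrades more gracefully than many small saturated ones.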
Integrated Hybrid Model
either a pure workstation model or a pure processor pool model, having the
advantages of both.
Interactive work can be done on workstations, giving guaranteed response.
Idle workstations, however, are not exploited, which makes for a simpler
system design; they are simply left idle. Instead, all non-interactive
processes run on the processor pool, which does all the serious computing in
general. This model provides fast interactive response, efficient use of
resources and a straightforward design.
1.6 Answers for Check Your Progress

Check your progress 1
1.7 Glossary
1. Structured P2P - The nodes are organized in a particular distributed data
structure.
2. Unstructured P2P - Each node links to arbitrarily selected neighbour
nodes.
1.8 Assignment
Design a Processor-pool Model in your institute.
1.9 Activities
Create an activity on Unstructured P2P.
1.10 Case Study

Does your institute use the Workstation-server Model?
UNIT 2: MORE ON OPERATING SYSTEM

Unit Structure
2.0 Learning Objectives
2.1 Introduction
2.7 Glossary
2.8 Assignment
2.9 Activities
2.1 Introduction
In that context, by allowing the programmer to access and share “memory
objects” without being in charge of their management, virtually shared memory
systems aim to offer a trade-off between the easy programming of
shared-memory machines and the efficiency and scalability of
distributed-memory systems. We can say that a procedure is a closed sequence
of instructions that is entered from, and returns control to, an external
source; along this flow of control, data travel in both directions. Finally,
a procedure call is an invocation of a procedure.
program number
version number
procedure number
The program number identifies a group of related remote procedures, each of
which has a unique procedure number. A program may consist of one or more
versions; every version comprises a collection of procedures that are
available to be called remotely. Version numbers enable multiple versions of
an RPC protocol to be available at the same time. Each procedure in a version
has its own procedure number.
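The (program, version, procedure) numbering can be sketched as a dispatch table keyed by the three numbers. The numbers and procedures below are invented for illustration and do not come from any real RPC protocol.

```python
# A dispatch table keyed by (program number, version number,
# procedure number). All numbers and procedures here are invented.
def add(a, b):
    return a + b

def sub(a, b):
    return a - b

registry = {
    (100_000, 1, 0): add,   # program 100000, version 1, procedure 0
    (100_000, 1, 1): sub,   # same program and version, procedure 1
}

def call_remote(prog, vers, proc, *args):
    """Look up a procedure by its three numbers and invoke it."""
    fn = registry[(prog, vers, proc)]
    return fn(*args)

print(call_remote(100_000, 1, 0, 2, 3))   # 5
```

A new protocol version would simply register its procedures under version 2, leaving version 1 callable at the same time, which is what the version numbers are for.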
Every RPC occurs in the context of a thread, a single sequential flow of
control with one point of execution at any instant. A thread that is created
and managed by application code is an application thread.
RPC uses application threads to issue both RPCs and RPC run-time calls: an
RPC client contains one or more client application threads that perform RPCs.
While executing remote procedures, an RPC server uses one or more call
threads provided by the RPC run-time system. At initialization, the server
application thread specifies how many concurrent calls may be handled; even a
single-threaded application has at least one call thread. The run-time system
creates the call threads in the server execution context, as shown in Fig
2.2.
In the figure, the RPC extends across the client and server executions. When
a client application thread calls a remote procedure, it becomes part of a
logical thread of execution known as the RPC thread. An RPC thread is a
logical construct encompassing the portions of an RPC as it extends across
the various physical threads of execution in a network. While a call is in
progress, the RPC thread covers:
In the absence of RPC, the cancellation of a thread and the work it is doing
are local and belong to the same context. In the presence of RPC, a cancelled
thread may be executing a remote procedure, so both the local part and the
remote part of the cancelled thread's work must be handled.
a. data c. message
b. information d. function
a. call c. message
b. information d. function
Object DSM
In this class, the shared data are objects, that is, variables with access
functions. For this purpose, the user only has to declare which data (objects)
are shared. The complete management of the shared objects (creation, access,
modification) is handled by the DSM system. In contrast to SVM systems, which
work at the operating-system layer, object DSM systems in fact offer a
programming-model alternative to classical message passing.
Methods of Achieving DSM
Hardware - It uses special network interfaces and cache-coherence circuits.
Advantages of DSM
It is scalable.
It hides message passing and does not expose explicit message sending between
processes.
It can use simple extensions of sequential programming.
It can handle large and complex databases without replicating the data or
sending it to processes.
Disadvantages of DSM
Consistency Models used on DSM Systems
Release Consistency
An extension of weak consistency, shown in Fig 2.5, in which the
synchronization operations have been specified:
Check your progress 2
1. Execution of DSM system involves _______.
b. data access
d. all
2. Which is an advantage of DSM?
d. It is system scalable.
Accept command
Read command
Process command
Execute command
Programs can be run in the background by suffixing the command-line entry
with an ampersand (&). As a result, the parent does not wait for the child
process to complete.
In UNIX, three standard files exist for every process: standard input
(stdin), standard output (stdout) and standard error (stderr).
The Kernel
This is the central portion of an OS, which provides system services to
application programs and the shell.
In Unix, the text region of a process is shared, and changes to the process
environment are made through system calls.
In Unix, directories cannot be modified directly; they are changed only
through the operating system.
The file system contains a super block, an array of i-nodes, the actual file
data blocks, and free blocks.
The i-node contains
The file owner's user-id and group-id.
Protection bits for owner, group, and world.
File size.
Accounting information.
File type.
The Block Locator
Consists of 13 fields
The passwd command is owned by root and can be executed by users who have
permission to access it. Setuid is a bit which, when applied to an executable
file, gives the user who runs it the same privileges as the file's owner.
Process Management
Description of Process Management in SunOS.
Scheduling
Allows for ageing, but also increases or decreases process priority based on
past behavior.
Signals
Inter-process Communication
The first kind of signal is handled normally; the second is handled while the
process is executing process code; the last is handled when the process
leaves process code.
Memory Management
Direct Mapping is used, with the page map held in a high-speed RAM
cache.
Each page map entry contains a modified bit, an accessed bit, a valid bit
(set if the page is resident in physical memory) and protection bits.
The system maintains 8 page maps - 1 for the kernel and 7 for processes.
2 context registers are used - one points to the running process's page map
and the other to the kernel's page map.
The replacement strategy replaces the page that has been inactive for the
longest time (LRU).
Paging
The system maintains an ordered list of all allocated page frames (except the
kernel's).
When a page is swapped out (not necessarily replaced), the system judges
whether the page is likely to be used again.
If the page contains a text region, the page is added to the bottom of the
free list; otherwise it is added to the top.
When a page fault occurs, if the page is still in the free list it is
reclaimed.
I/O Data
UNIX does not impose any structure on data, but applications do.
Devices
Generic Unix Commands

Command          Function
cal year         Displays the calendar for all months of the specified year.
cal month year   Displays the calendar for the specified month of the year.
who              Displays login details of all users, such as their IP,
                 terminal number and user name.
lp file          Allows the user to spool a job along with others in a print
                 queue.
history          Displays the commands used by the user since log-on.
exit             Exits from a process; if the shell is the only process, then
                 logs out.
a. STDIN c. STDERR
b. STDOUT d. all
a. kernel c. compilers
b. shell d. database
b. group-id d. all
A thread formed and managed by application code is an application thread.
Distributed shared memory is a method that permits end-user processes to
access shared data without inter-process communication.
2.7 Glossary
1. Kernel - It is the central part of the OS which provides system services
to application programs and the shell.
2.8 Assignment
Explain the Consistency Models used on DSM Systems.
2.9 Activities
Write a DSM system in C++ using MPI for the underlying message-passing
and process communication.
2.10 Case Study
Compile and run the remote directory example rls.c, and run both client and
server on the network.
Block Summary
In this block, the student will understand the basics of distributed
operating systems and their modelling techniques. The block gives an idea of
the architecture and distribution of distributed operating systems, with a
study of their characteristics. Examples related to the layout and working
characteristics of distributed operating systems are also discussed.
The student will also understand the basics of remote procedure calls and
their techniques. Concepts related to distributed shared memory and the Unix
operating system are detailed as well, and the Unix architecture is
demonstrated practically.
Block Assignment
Short Answer Questions
1. What is the Workstation Model?
3. What are the advantages and drawbacks of the Distributed Operating System Model?
Enrolment No.
1. How many hours did you need for studying the units?
Unit No      1     2     3     4
No. of Hrs
2. Please give your reactions to the following items based on your reading of
the block:
………………………………………………………………………………………………
………………………………………………………………………………………………
………………………………………………………………………………………………
………………………………………………………………………………………………
………………………………………………………………………………………………
………………………………………………………………………………………………
………………………………………………………………………………………………
………………………………………………………………………………………………
……………………………………………………………………………………………
Education is something
which ought to be
brought within
the reach of every one.
- Dr. B. R. Ambedkar