
Unit - V - Principles of Operating Systems - Ece - Iii - Ii


UNIT V INPUT/ OUTPUT AND FILES

Overview of Mass Storage Structure - Disk Structure - Disk Scheduling and Management-

File System Interface: File Concept - Access Methods -Directory and Disk Structure

Directory Implementation - Allocation Methods

I/O Systems: I/O Hardware- Application I/O Interface - Kernel I/O Subsystem.

--------------------------------

PART 1
TOPIC - 5.1 OVERVIEW OF MASS-STORAGE STRUCTURE
5.1.1 Magnetic Disks

• Traditional magnetic disks have the following basic structure:

o One or more platters in the form of disks covered with magnetic media. Hard
disk platters are made of rigid metal, while "floppy" disks are made of more flexible
plastic.

o Each platter has two working surfaces. Older hard disk drives would sometimes not
use the very top or bottom surface of a stack of platters, as these surfaces were
more susceptible to potential damage.

o Each working surface is divided into a number of concentric rings called tracks. The
collection of all tracks that are the same distance from the edge of the platter, ( i.e.
all tracks immediately above one another in the following diagram ) is called
a cylinder.

o Each track is further divided into sectors, traditionally containing 512 bytes of data
each, although some modern disks occasionally use larger sector sizes. ( Sectors also
include a header and a trailer, including checksum information among other things.
Larger sector sizes reduce the fraction of the disk consumed by headers and trailers,
but increase internal fragmentation and the amount of disk that must be marked
bad in the case of errors. )

o The data on a hard drive is read by read-write heads. The standard configuration (
shown below ) uses one head per surface, each on a separate arm, and controlled by
a common arm assembly which moves all heads simultaneously from one cylinder to
another. ( Other configurations, including independent read-write heads, may speed
up disk access, but involve serious technical difficulties. )

o The storage capacity of a traditional disk drive is equal to the number of heads ( i.e.
the number of working surfaces ), times the number of tracks per surface, times the
number of sectors per track, times the number of bytes per sector. A particular
physical block of data is specified by providing the head-sector-cylinder number at
which it is located.
Figure 5.1 - Moving-head disk mechanism.
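The capacity formula in the bullet above can be checked in a few lines of Python; the geometry numbers below are invented purely for illustration:

```python
# Disk capacity = heads x cylinders x sectors/track x bytes/sector.
# Example geometry (hypothetical, chosen only to make the arithmetic concrete):
heads = 16                # number of working surfaces
cylinders = 1024          # tracks per surface
sectors_per_track = 63
bytes_per_sector = 512

capacity = heads * cylinders * sectors_per_track * bytes_per_sector
print(capacity)           # 528482304 bytes, i.e. exactly 504 MB
```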

• In operation the disk rotates at high speed, such as 7200 rpm ( 120 revolutions per second. )
The rate at which data can be transferred from the disk to the computer is composed of
several steps:

o The positioning time, a.k.a. the seek time or random access time is the time
required to move the heads from one cylinder to another, and for the heads to settle
down after the move. This is typically the slowest step in the process and the
predominant bottleneck to overall transfer rates.

o The rotational latency is the amount of time required for the desired sector to
rotate around and come under the read-write head. This can range anywhere from
zero to one full revolution, and on the average will equal one-half revolution. This is
another physical step and is usually the second slowest step behind seek time. ( For a
disk rotating at 7200 rpm, the average rotational latency would be 1/2 revolution /
120 revolutions per second, or just over 4 milliseconds, a long time by computer
standards. )
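The average-latency arithmetic above can be reproduced directly (values taken from the text):

```python
# Average rotational latency at 7200 rpm = half a revolution.
rpm = 7200
revolutions_per_second = rpm / 60            # 120
avg_latency_s = 0.5 / revolutions_per_second # half a revolution, in seconds
print(avg_latency_s * 1000)                  # ~4.17 milliseconds
```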

o The transfer rate, which is the time required to move the data electronically from
the disk to the computer. ( Some authors may also use the term transfer rate to refer
to the overall transfer rate, including seek time and rotational latency as well as the
electronic data transfer rate. )

• Disk heads "fly" over the surface on a very thin cushion of air. If they should accidentally
contact the disk, then a head crash occurs, which may or may not permanently damage the
disk or even destroy it completely. For this reason it is normal to park the disk heads when
turning a computer off, which means to move the heads off the disk or to an area of the disk
where there is no data stored.

• Floppy disks are normally removable. Hard drives can also be removable, and some are
even hot-swappable, meaning they can be removed while the computer is running, and a
new hard drive inserted in their place.

• Disk drives are connected to the computer via a cable known as the I/O Bus. Some of the
common interface formats include Enhanced Integrated Drive Electronics, EIDE; Advanced
Technology Attachment, ATA; Serial ATA, SATA; Universal Serial Bus, USB; Fibre Channel, FC;
and Small Computer Systems Interface, SCSI.

• The host controller is at the computer end of the I/O bus, and the disk controller is built into
the disk itself. The CPU issues commands to the host controller via I/O ports. Data is
transferred between the magnetic surface and onboard cache by the disk controller, and
then the data is transferred from that cache to the host controller and the motherboard
memory at electronic speeds.

5.1.2 Solid-State Disks

• As technologies improve and economics change, old technologies are often used in different
ways. One example of this is the increasing use of solid-state disks, or SSDs.

• SSDs use memory technology as a small fast hard disk. Specific implementations may use
either flash memory or DRAM chips protected by a battery to sustain the information
through power cycles.

• Because SSDs have no moving parts they are much faster than traditional hard drives, and
certain problems such as the scheduling of disk accesses simply do not apply.

• However SSDs also have their weaknesses: They are more expensive than hard drives,
generally not as large, and may have shorter life spans.

• SSDs are especially useful as a high-speed cache of hard-disk information that must be
accessed quickly. One example is to store filesystem meta-data, e.g. directory and inode
information, that must be accessed quickly and often. Another variation is a boot disk
containing the OS and some application executables, but no vital user data. SSDs are also
used in laptops to make them smaller, faster, and lighter.

• Because SSDs are so much faster than traditional hard disks, the throughput of the bus can
become a limiting factor, causing some SSDs to be connected directly to the system PCI bus
for example.

5.1.3 Magnetic Tapes

• Magnetic tapes were once used for common secondary storage before the days of hard disk
drives, but today are used primarily for backups.

• Accessing a particular spot on a magnetic tape can be slow, but once reading or writing
commences, access speeds are comparable to disk drives.

• Capacities of tape drives can range from 20 to 200 GB, and compression can double that
capacity.
TOPIC - 5.2 DISK STRUCTURE
• The traditional head-sector-cylinder, HSC numbers are mapped to linear block addresses by
numbering the first sector on the first head on the outermost track as sector 0. Numbering
proceeds with the rest of the sectors on that same track, and then the rest of the tracks on
the same cylinder before proceeding through the rest of the cylinders to the center of the
disk. In modern practice these linear block addresses are used in place of the HSC numbers
for a variety of reasons:

1. The linear length of tracks near the outer edge of the disk is much longer than for
those tracks located near the center, and therefore it is possible to squeeze many
more sectors onto outer tracks than onto inner ones.

2. All disks have some bad sectors, and therefore disks maintain a few spare sectors
that can be used in place of the bad ones. The mapping of spare sectors to bad
sectors is managed internally by the disk controller.

3. Modern hard drives can have thousands of cylinders, and hundreds of sectors per
track on their outermost tracks. These numbers exceed the range of HSC numbers
for many ( older ) operating systems, and therefore disks can be configured for any
convenient combination of HSC values that falls within the total number of sectors
physically on the drive.
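The HSC-to-linear mapping described above (sectors along a track, then heads within a cylinder, then cylinders toward the center) can be sketched as follows; the geometry values are hypothetical:

```python
def hsc_to_lba(cylinder, head, sector, heads, sectors_per_track):
    """Map a (cylinder, head, sector) triple to a linear block address:
    number along the track first, then the remaining heads in the same
    cylinder, then the remaining cylinders toward the center."""
    return (cylinder * heads + head) * sectors_per_track + sector

# Sector 0 is the first sector on the first head of the outermost cylinder:
print(hsc_to_lba(0, 0, 0, heads=16, sectors_per_track=63))   # 0
print(hsc_to_lba(1, 0, 0, heads=16, sectors_per_track=63))   # 1008 = 16 * 63
```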

• There is a limit to how closely packed individual bits can be placed on a physical medium, but
that limit keeps rising as technological advances are made.

• Modern disks pack many more sectors into outer cylinders than inner ones, using one of two
approaches:

o With Constant Linear Velocity, CLV, the density of bits is uniform from cylinder to
cylinder. Because there are more sectors in outer cylinders, the disk spins slower
when reading those cylinders, causing the rate of bits passing under the read-write
head to remain constant. This is the approach used by modern CDs and DVDs.

o With Constant Angular Velocity, CAV, the disk rotates at a constant angular speed,
with the bit density decreasing on outer cylinders. ( These disks would have a
constant number of sectors per track on all cylinders. )

TOPIC -5.3 DISK SCHEDULING

• As mentioned earlier, disk transfer speeds are limited primarily by seek
times and rotational latency. When multiple requests are to be processed
there is also some inherent delay in waiting for other requests to be
processed.
• Bandwidth is measured by the amount of data transferred divided by the
total amount of time from the first request being made to the last transfer
being completed, ( for a series of disk requests. )
• Both bandwidth and access time can be improved by processing requests in
a good order.
• Disk requests include the disk address, memory address, number of sectors
to transfer, and whether the request is for reading or writing.

5.3.1 FCFS Scheduling

• First-Come First-Serve is simple and intrinsically fair, but not very
efficient. Consider in the following sequence the wild swing from
cylinder 122 to 14 and then back to 124:

Figure 5.3 - FCFS disk scheduling.

5.3.2 SSTF Scheduling

• Shortest Seek Time First scheduling is more efficient, but may lead
to starvation if a constant stream of requests arrives for the same
general area of the disk.
• SSTF reduces the total head movement to 236 cylinders, down from
640 required for the same set of requests under FCFS. Note, however
that the distance could be reduced still further to 208 by starting with
37 and then 14 first before processing the rest of the requests.
Figure 5.5 - SSTF disk scheduling.
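The 640- and 236-cylinder totals quoted above can be reproduced with a short simulation. The request queue assumed here (98, 183, 37, 122, 14, 124, 65, 67, with the head starting at cylinder 53) is the one conventionally used with these figures:

```python
def fcfs(head, requests):
    """First-Come First-Serve: service requests in arrival order."""
    total = 0
    for r in requests:
        total += abs(head - r)
        head = r
    return total

def sstf(head, requests):
    """Shortest Seek Time First: always service the closest pending request."""
    pending, total = list(requests), 0
    while pending:
        nearest = min(pending, key=lambda r: abs(head - r))
        total += abs(head - nearest)
        head = nearest
        pending.remove(nearest)
    return total

queue = [98, 183, 37, 122, 14, 124, 65, 67]
print(fcfs(53, queue))   # 640 cylinders of total head movement
print(sstf(53, queue))   # 236 cylinders
```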

5.3.3 SCAN Scheduling

• The SCAN algorithm, a.k.a. the elevator algorithm moves back and
forth from one end of the disk to the other, similarly to an elevator
processing requests in a tall building.

Figure 5.6 - SCAN disk scheduling.


• Under the SCAN algorithm, if a request arrives just ahead of the
moving head then it will be processed right away, but if it arrives just
after the head has passed, then it will have to wait for the head to pass
going the other way on the return trip. This leads to a fairly wide
variation in access times which can be improved upon.
• Consider, for example, when the head reaches the high end of the
disk: Requests with high cylinder numbers just missed the passing
head, which means they are all fairly recent requests, whereas
requests with low numbers may have been waiting for a much longer
time. Making the return scan from high to low then ends up accessing
recent requests first and making older requests wait that much longer.
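A minimal sketch of SCAN's total head movement, assuming a 200-cylinder disk (cylinders 0-199) and the same request queue as in the earlier examples:

```python
def scan(head, requests, direction="down", low=0, high=199):
    """SCAN (elevator): sweep to one end of the disk, servicing requests
    on the way, then reverse toward the farthest request on the other side.
    Returns total head movement in cylinders."""
    below = [r for r in requests if r <= head]   # served on the downward leg
    above = [r for r in requests if r > head]    # served on the upward leg
    if direction == "down":
        # go all the way to cylinder `low`, then back up to the highest request
        return (head - low) + ((max(above) - low) if above else 0)
    else:
        return (high - head) + ((high - min(below)) if below else 0)

queue = [98, 183, 37, 122, 14, 124, 65, 67]
print(scan(53, queue))   # 236: 53 down to cylinder 0, then up to 183
```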

5.3.4 C-SCAN Scheduling

• The Circular-SCAN algorithm improves upon SCAN by treating all
requests in a circular queue fashion - Once the head reaches the end
of the disk, it returns to the other end without processing any
requests, and then starts again from the beginning of the disk:

Figure 5.7 - C-SCAN disk scheduling.

5.3.5 LOOK Scheduling

• LOOK scheduling improves upon SCAN by looking ahead at the
queue of pending requests, and not moving the heads any farther
towards the end of the disk than is necessary. The following diagram
illustrates the circular form of LOOK:
Figure 5.8 - C-LOOK disk scheduling.

5.3.6 Selection of a Disk-Scheduling Algorithm

• With very low loads all algorithms are equal, since there will
normally only be one request to process at a time.
• For slightly larger loads, SSTF offers better performance than FCFS,
but may lead to starvation when loads become heavy enough.
• For busier systems, SCAN and LOOK algorithms eliminate starvation
problems.
• The actual optimal algorithm may be something even more complex
than those discussed here, but the incremental improvements are
generally not worth the additional overhead.
• Some improvement to overall filesystem access times can be made by
intelligent placement of directory and/or inode information. If those
structures are placed in the middle of the disk instead of at the
beginning of the disk, then the maximum distance from those
structures to data blocks is reduced to only one-half of the disk size.
If those structures can be further distributed and furthermore have
their data blocks stored as close as possible to the corresponding
directory structures, then that reduces still further the overall time to
find the disk block numbers and then access the corresponding data
blocks.
• On modern disks the rotational latency can be almost as significant as
the seek time. However, it is not within the OS's control to account for
that, because modern disks do not reveal their internal sector-mapping
schemes, ( particularly when bad blocks have been remapped to spare
sectors. )
o Some disk manufacturers provide for disk scheduling
algorithms directly on their disk controllers, ( which do know
the actual geometry of the disk as well as any remapping ), so
that if a series of requests are sent from the computer to the
controller then those requests can be processed in an optimal
order.
o Unfortunately there are some considerations that the OS must
take into account that are beyond the abilities of the on-board
disk-scheduling algorithms, such as priorities of some requests
over others, or the need to process certain requests in a
particular order. For this reason OSes may elect to spoon-feed
requests to the disk controller one at a time in certain
situations.

TOPIC 5.4 DISK MANAGEMENT

5.4.1 Disk Formatting

• Before a disk can be used, it has to be low-level formatted, which
means laying down all of the headers and trailers marking the
beginning and ends of each sector. Included in the header and trailer
are the linear sector numbers, and error-correcting codes,
ECC, which allow damaged sectors to not only be detected, but in
many cases for the damaged data to be recovered ( depending on the
extent of the damage. ) Sector sizes are traditionally 512 bytes, but
may be larger, particularly in larger drives.
• ECC calculation is performed with every disk read or write, and if
damage is detected but the data is recoverable, then a soft error has
occurred. Soft errors are generally handled by the on-board disk
controller, and never seen by the OS. ( See below. )
• Once the disk is low-level formatted, the next step is to partition the
drive into one or more separate partitions. This step must be
completed even if the disk is to be used as a single large partition, so
that the partition table can be written to the beginning of the disk.
• After partitioning, then the filesystems must be logically
formatted, which involves laying down the master directory
information ( FAT table or inode structure ), initializing free lists, and
creating at least the root directory of the filesystem. ( Disk partitions
which are to be used as raw devices are not logically formatted. This
saves the overhead and disk space of the filesystem structure, but
requires that the application program manage its own disk storage
requirements. )
5.4.2 Boot Block

• Computer ROM contains a bootstrap program ( OS independent )
with just enough code to find the first sector on the first hard drive on
the first controller, load that sector into memory, and transfer control
over to it. ( The ROM bootstrap program may look in floppy and/or
CD drives before accessing the hard drive, and is smart enough to
recognize whether it has found valid boot code or not. )
• The first sector on the hard drive is known as the Master Boot
Record, MBR, and contains a very small amount of code in addition
to the partition table. The partition table documents how the disk is
partitioned into logical disks, and indicates specifically which
partition is the active or boot partition.
• The boot program then looks to the active partition to find an
operating system, possibly loading up a slightly larger / more
advanced boot program along the way.
• In a dual-boot ( or larger multi-boot ) system, the user may be given a
choice of which operating system to boot, with a default action to be
taken in the event of no response within some time frame.
• Once the kernel is found by the boot program, it is loaded into
memory and then control is transferred over to the OS. The kernel
will normally continue the boot process by initializing all important
kernel data structures, launching important system services ( e.g.
network daemons, sched, init, etc. ), and finally providing one or
more login prompts. Boot options at this stage may include single-
user a.k.a. maintenance or safe modes, in which very few system
services are started - These modes are designed for system
administrators to repair problems or otherwise maintain the system.
Figure 5.9 - Booting from disk in Windows 2000.

5.4.3 Bad Blocks

• No disk can be manufactured to 100% perfection, and all physical
objects wear out over time. For these reasons all disks are shipped
with a few bad blocks, and additional blocks can be expected to go
bad slowly over time. If a large number of blocks go bad then the
entire disk will need to be replaced, but a few here and there can be
handled through other means.
• In the old days, bad blocks had to be checked for manually.
Formatting of the disk or running certain disk-analysis tools would
identify bad blocks, and attempt to read the data off of them one last
time through repeated tries. Then the bad blocks would be mapped
out and taken out of future service. Sometimes the data could be
recovered, and sometimes it was lost forever. ( Disk analysis tools
could be either destructive or non-destructive. )
• Modern disk controllers make much better use of the error-correcting
codes, so that bad blocks can be detected earlier and the data usually
recovered. ( Recall that blocks are tested with every write as well as
with every read, so often errors can be detected before the write
operation is complete, and the data simply written to a different sector
instead. )
• Note that re-mapping of sectors from their normal linear progression
can throw off the disk scheduling optimization of the OS, especially
if the replacement sector is physically far away from the sector it is
replacing. For this reason most disks normally keep a few spare
sectors on each cylinder, as well as at least one spare cylinder.
Whenever possible a bad sector will be mapped to another sector on
the same cylinder, or at least a cylinder as close as possible. Sector
slipping may also be performed, in which all sectors between the bad
sector and the replacement sector are moved down by one, so that the
linear progression of sector numbers can be maintained.
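Sector slipping can be pictured as a one-slot shift of every logical sector between the bad sector and the spare. A toy model (the slot numbers and mapping representation are invented for illustration):

```python
def slip(phys_of, bad, spare):
    """Toy model of sector slipping. `phys_of[i]` is the physical slot
    holding logical sector i. After slipping, every logical sector from
    `bad` through `spare - 1` moves down one physical slot, so the linear
    progression of logical sector numbers is preserved."""
    for logical in range(bad, spare):
        phys_of[logical] = logical + 1
    return phys_of

# 7 logical sectors in slots 0..6; slot 7 is the spare; slot 2 goes bad:
phys = slip(list(range(7)), bad=2, spare=7)
print(phys)   # [0, 1, 3, 4, 5, 6, 7] -- the bad slot 2 is skipped over
```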
• If the data on a bad block cannot be recovered, then a hard error has
occurred, which requires replacing the file(s) from backups, or
rebuilding them from scratch.

----------------------------------------------------------------------------------------------------------

PART 2
File System Interface: File Concept - Access Methods -Directory and Disk Structure

TOPIC - 5.1 FILE CONCEPT

5.1.1 File Attributes

• Different OSes keep track of different file attributes, including:


o Name - Some systems give special significance to names, and
particularly extensions ( .exe, .txt, etc. ), and some do not.
Some extensions may be of significance to the OS ( .exe ), and
others only to certain applications ( .jpg )
o Identifier ( e.g. inode number )
o Type - Text, executable, other binary, etc.
o Location - on the hard drive.
o Size
o Protection
o Time & Date
o User ID
5.1.2 File Operations

• The file ADT supports many common operations:


o Creating a file
o Writing a file
o Reading a file
o Repositioning within a file
o Deleting a file
o Truncating a file.
• Most OSes require that files be opened before access and closed after
all access is complete. Normally the programmer must open and close
files explicitly, but some rare systems open the file automatically at
first access. Information about currently open files is stored in
an open file table, containing for example:
o File pointer - records the current position in the file, for the
next read or write access.
o File-open count - How many times has the current file been
opened ( simultaneously by different processes ) and not yet
closed? When this counter reaches zero the file can be
removed from the table.
o Disk location of the file.
o Access rights
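A minimal sketch of such an open-file table, with the file-open count driving entry removal (the names and fields are illustrative, not any real OS's structures):

```python
class OpenFileTable:
    """Sketch of a system-wide open-file table: each entry tracks a file
    pointer, an open count, and the file's disk location."""
    def __init__(self):
        self.entries = {}                        # name -> entry dict

    def open(self, name, disk_location=None):
        entry = self.entries.get(name)
        if entry is None:                        # first open creates the entry
            entry = {"pointer": 0, "open_count": 0, "location": disk_location}
            self.entries[name] = entry
        entry["open_count"] += 1                 # another simultaneous open
        return entry

    def close(self, name):
        entry = self.entries[name]
        entry["open_count"] -= 1
        if entry["open_count"] == 0:             # last close: drop the entry
            del self.entries[name]

t = OpenFileTable()
t.open("a.txt"); t.open("a.txt")                 # opened by two processes
t.close("a.txt")
print("a.txt" in t.entries)                      # True -- still open once
t.close("a.txt")
print("a.txt" in t.entries)                      # False -- count reached zero
```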
• Some systems provide support for file locking.
o A shared lock is for reading only.
o An exclusive lock is for writing as well as reading.
o An advisory lock is informational only, and not enforced. ( A
"Keep Out" sign, which may be ignored. )
o A mandatory lock is enforced. ( A truly locked door. )
o UNIX uses advisory locks, and Windows uses mandatory
locks.
Figure 5.2 - File-locking example in Java.
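The shared/exclusive distinction amounts to a small compatibility table. A sketch of the grant decision under mandatory locking (the names and table layout are illustrative):

```python
# Shared locks coexist with other shared locks; an exclusive lock
# is compatible with nothing.
COMPATIBLE = {
    ("shared", "shared"): True,
    ("shared", "exclusive"): False,
    ("exclusive", "shared"): False,
    ("exclusive", "exclusive"): False,
}

def can_grant(requested, held_locks):
    """Grant `requested` only if it is compatible with every lock
    currently held on the file."""
    return all(COMPATIBLE[(held, requested)] for held in held_locks)

print(can_grant("shared", ["shared", "shared"]))   # True: many readers
print(can_grant("exclusive", ["shared"]))          # False: a reader is active
```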

5.1.3 File Types

• Windows ( and some other systems ) use special file
extensions to indicate the type of each file:

Figure 5.3 - Common file types.

• Macintosh stores a creator attribute for each file, according to the
program that first created it with the create( ) system call.
• UNIX stores magic numbers at the beginning of certain files. (
Experiment with the "file" command, especially in directories such as
/bin and /dev )
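A toy version of that magic-number check (the real file(1) consults a much larger magic database; the table below holds just three well-known signatures):

```python
# A few well-known magic numbers -- the leading bytes of a file:
MAGIC = {
    b"\x7fELF": "ELF executable",
    b"\x89PNG": "PNG image",
    b"%PDF": "PDF document",
}

def sniff(first_bytes):
    """Guess a file type from its leading bytes, in the spirit of
    UNIX's file(1) command."""
    for magic, kind in MAGIC.items():
        if first_bytes.startswith(magic):
            return kind
    return "data"                       # file(1)'s catch-all answer

print(sniff(b"\x7fELF\x02\x01\x01"))    # ELF executable
print(sniff(b"hello"))                  # data
```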
5.1.4 File Structure

• Some files contain an internal structure, which may or may not be
known to the OS.
• For the OS to support particular file formats increases the size and
complexity of the OS.
• UNIX treats all files as sequences of bytes, with no further
consideration of the internal structure. ( With the exception of
executable binary programs, which it must know how to load and find
the first executable statement, etc. )
• Macintosh files have two forks - a resource fork, and a data fork.
The resource fork contains information relating to the UI, such as
icons and button images, and can be modified independently of the
data fork, which contains the code or data as appropriate.

5.1.5 Internal File Structure

• Disk files are accessed in units of physical blocks, typically 512 bytes
or some power-of-two multiple thereof. ( Larger physical disks use
larger block sizes, to keep the range of block numbers within the
range of a 32-bit integer. )
• Internally, files are organized into logical units, which may be
as small as a single byte, or may be a larger size corresponding to
some data record or structure size.
• The number of logical units which fit into one physical block
determines its packing, and has an impact on the amount of internal
fragmentation ( wasted space ) that occurs.
• As a general rule, half a physical block is wasted for each file, and the
larger the block sizes the more space is lost to internal fragmentation.
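The wasted space for a given file can be computed directly; a small sketch assuming 512-byte blocks:

```python
def wasted_bytes(file_size, block_size=512):
    """Internal fragmentation: the unused tail of the file's last block.
    Averaged over many files this comes to about half a block each."""
    remainder = file_size % block_size
    return 0 if remainder == 0 else block_size - remainder

print(wasted_bytes(1000))   # 24 -- the last 24 bytes of block 2 go unused
print(wasted_bytes(1024))   # 0  -- the file exactly fills two blocks
```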

TOPIC - 5.2 ACCESS METHODS

5.2.1 Sequential Access

• A sequential access file emulates magnetic tape operation, and
generally supports a few operations:
o read next - read a record and advance the tape to the next
position.
o write next - write a record and advance the tape to the next
position.
o rewind
o skip n records - May or may not be supported. N may be
limited to positive numbers, or may be limited to +/- 1.
Figure 5.4 - Sequential-access file.

5.2.2 Direct Access

• Jump to any record and read that record. Operations supported
include:
o read n - read record number n. ( Note an argument is now
required. )
o write n - write record number n. ( Note an argument is now
required. )
o jump to record n - could be 0 or the end of file.
o Query current record - used to return back to this record later.
o Sequential access can be easily emulated using direct access.
The inverse is complicated and inefficient.

Figure 5.5 - Simulation of sequential access on a direct-access file.
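The simulation in Figure 5.5 amounts to keeping a current-position counter on top of the direct-access read. A sketch (the in-memory record list stands in for a direct-access file):

```python
class SequentialOnDirect:
    """Emulate sequential access on a direct-access file: keep a
    current position cp, and implement read_next as read(cp)."""
    def __init__(self, records):
        self.records = records        # stand-in for a direct-access file
        self.cp = 0                   # current position

    def read(self, n):                # direct access: read record n
        return self.records[n]

    def read_next(self):              # sequential access built on top
        rec = self.read(self.cp)
        self.cp += 1
        return rec

    def rewind(self):
        self.cp = 0

f = SequentialOnDirect(["r0", "r1", "r2"])
print(f.read_next(), f.read_next())   # r0 r1
f.rewind()
print(f.read_next())                  # r0
```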

5.2.3 Other Access Methods

• An indexed access scheme can be easily built on top of a direct access
system. Very large files may require a multi-tiered indexing scheme,
i.e. indexes of indexes.
Figure 5.6 - Example of index and relative files.

5.3 Directory Structure

5.3.1 Storage Structure

• A disk can be used in its entirety for a file system.


• Alternatively a physical disk can be broken up into
multiple partitions, slices, or mini-disks, each of which becomes a
virtual disk and can have its own filesystem. ( or be used for raw
storage, swap space, etc. )
• Or, multiple physical disks can be combined into one volume, i.e. a
larger virtual disk, with its own filesystem spanning the physical
disks.
Figure 5.7 - A typical file-system organization.

Figure 5.8 – Solaris file systems


5.3.2 Directory Overview

• Directory operations to be supported include:


o Search for a file
o Create a file - add to the directory
o Delete a file - erase from the directory
o List a directory - possibly ordered in different ways.
o Rename a file - may change sorting order
o Traverse the file system.

5.3.3 Single-Level Directory

• Simple to implement, but each file must have a unique name.

Figure 5.9 - Single-level directory.

5.3.4 Two-Level Directory

• Each user gets their own directory space.


• File names only need to be unique within a given user's directory.
• A master file directory is used to keep track of each user's directory,
and must be maintained when users are added to or removed from the
system.
• A separate directory is generally needed for system ( executable )
files.
• Systems may or may not allow users to access other directories
besides their own.
o If access to other directories is allowed, then provision must be
made to specify the directory being accessed.
o If access is denied, then special consideration must be made for
users to run programs located in system directories. A search
path is the list of directories in which to search for executable
programs, and can be set uniquely for each user.
Figure 5.10 - Two-level directory structure.
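Search-path resolution is a simple first-hit scan over the listed directories. A sketch, using an invented in-memory "filesystem" mapping in place of real disk lookups:

```python
def find_program(name, search_path, filesystem):
    """Resolve an executable name against a search path: try each
    directory in order and return the first hit, or None."""
    for directory in search_path:
        if name in filesystem.get(directory, set()):
            return directory + "/" + name
    return None

# Hypothetical directories and contents:
fs = {"/bin": {"ls", "cat"}, "/home/alice/bin": {"mytool"}}
path = ["/home/alice/bin", "/bin"]        # searched in this order

print(find_program("ls", path, fs))       # /bin/ls
print(find_program("mytool", path, fs))   # /home/alice/bin/mytool
print(find_program("gone", path, fs))     # None
```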

5.3.5 Tree-Structured Directories

• An obvious extension to the two-tiered directory structure, and the
one with which we are all most familiar.
• Each user / process has the concept of a current directory from which
all ( relative ) searches take place.
• Files may be accessed using either absolute pathnames ( relative to
the root of the tree ) or relative pathnames ( relative to the current
directory. )
• Directories are stored the same as any other file in the system, except
there is a bit that identifies them as directories, and they have some
special structure that the OS understands.
• One question for consideration is whether or not to allow the removal
of directories that are not empty - Windows requires that directories
be emptied first, and UNIX provides an option for deleting entire sub-
trees.
Figure 5.11 - Tree-structured directory structure.

5.3.6 Acyclic-Graph Directories

• When the same files need to be accessed in more than one place in
the directory structure ( e.g. because they are being shared by more
than one user / process ), it can be useful to provide an acyclic-graph
structure. ( Note the directed arcs from parent to child. )
• UNIX provides two types of links for implementing the acyclic-graph
structure. ( See "man ln" for more details. )
o A hard link ( usually just called a link ) involves multiple
directory entries that both refer to the same file. Hard links are
only valid for ordinary files in the same filesystem.
o A symbolic link involves a special file, containing
information about where to find the linked file. Symbolic links
may be used to link directories and/or files in other filesystems,
as well as ordinary files in the current filesystem.
• Windows only supports symbolic links, termed shortcuts.
• Hard links require a reference count, or link count for each file,
keeping track of how many directory entries are currently referring to
this file. Whenever one of the references is removed the link count is
reduced, and when it reaches zero, the disk space can be reclaimed.
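The link-count bookkeeping described above can be modeled in a few lines (a toy in-memory model, not real filesystem calls):

```python
class Inode:
    """Sketch of hard-link reference counting: the file's disk space can
    be reclaimed only when its link count drops to zero."""
    def __init__(self):
        self.link_count = 0

directory = {}

def link(name, node):
    directory[name] = node            # another directory entry -> same inode
    node.link_count += 1

def unlink(name):
    node = directory.pop(name)
    node.link_count -= 1
    return node.link_count == 0       # True => disk space can be reclaimed

inode = Inode()
link("report", inode)
link("report-alias", inode)           # a hard link to the same inode
print(unlink("report"))               # False -- one reference remains
print(unlink("report-alias"))         # True  -- count reached zero
```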
• For symbolic links there is some question as to what to do with the
symbolic links when the original file is moved or deleted:
o One option is to find all the symbolic links and adjust them
also.
o Another is to leave the symbolic links dangling, and discover
that they are no longer valid the next time they are used.
o What if the original file is removed, and replaced with another
file having the same name before the symbolic link is next
used?

Figure 5.12 - Acyclic-graph directory structure.

5.3.7 General Graph Directory

• If cycles are allowed in the graphs, then several problems can arise:
o Search algorithms can go into infinite loops. One solution is to
not follow links in search algorithms. ( Or not to follow
symbolic links, and to only allow symbolic links to refer to
directories. )
o Sub-trees can become disconnected from the rest of the tree
and still not have their reference counts reduced to zero.
Periodic garbage collection is required to detect and resolve
this problem. ( chkdsk in DOS and fsck in UNIX search for
these problems, among others, even though cycles are not
supposed to be allowed in either system. Disconnected disk
blocks that are not marked as free are added back to the file
systems with made-up file names, and can usually be safely
deleted. )

Figure 5.13 - General graph directory.

--------------------------------------------------------------------------------------------------------------------------------------

PART 3
Directory Implementation - Allocation Methods

TOPIC - 5.3 DIRECTORY IMPLEMENTATION

• Directories need to be fast to search, insert, and delete, with a minimum of
wasted disk space.

5.3.1 Linear List

• A linear list is the simplest and easiest directory structure to set up,
but it does have some drawbacks.
• Finding a file ( or verifying one does not already exist upon creation )
requires a linear search.
• Deletions can be done by moving all entries, flagging an entry as
deleted, or by moving the last entry into the newly vacant position.
• Sorting the list makes searches faster, at the expense of more complex
insertions and deletions.
• A linked list makes insertions and deletions into a sorted list easier,
with overhead for the links.
• More complex data structures, such as B-trees, could also be
considered.

5.3.2 Hash Table

• A hash table can also be used to speed up searches.


• Hash tables are generally implemented in addition to a linear or other
structure.
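The trade-off between the two structures shows up directly in the lookup code: the linear list scans every entry, while the hash table jumps straight to it. A sketch with invented entries:

```python
# Directory entries as (name, inode-number) pairs -- numbers invented.
linear_dir = [("notes.txt", 101), ("a.out", 102), ("data.csv", 103)]

def linear_lookup(name):
    """Linear-list directory search: O(n), one comparison per entry."""
    for entry_name, inode in linear_dir:
        if entry_name == name:
            return inode
    return None

hashed_dir = dict(linear_dir)         # hash-table directory: O(1) average

print(linear_lookup("data.csv"))      # 103, after scanning three entries
print(hashed_dir.get("data.csv"))     # 103, via one hash probe
```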

TOPIC - 5.4 ALLOCATION METHODS

• There are three major methods of storing files on disks: contiguous, linked,
and indexed.

5.4.1 Contiguous Allocation

• Contiguous Allocation requires that all blocks of a file be kept
together contiguously.
• Performance is very fast, because reading successive blocks of the
same file generally requires no movement of the disk heads, or at
most one small step to the next adjacent cylinder.
• Storage allocation involves the same issues discussed earlier for the
allocation of contiguous blocks of memory ( first fit, best fit,
fragmentation problems, etc. ) The distinction is that the high time
penalty required for moving the disk heads from spot to spot may
now justify the benefits of keeping files contiguously when possible.
• ( Even file systems that do not by default store files contiguously can
benefit from certain utilities that compact the disk and make all files
contiguous in the process. )
• Problems can arise when files grow, or if the exact size of a file is
unknown at creation time:
o Over-estimation of the file's final size increases external
fragmentation and wastes disk space.
o Under-estimation may require that a file be moved or a process
aborted if the file grows beyond its originally allocated space.
o If a file grows slowly over a long time period and the total final
space must be allocated initially, then a lot of space becomes
unusable before the file fills the space.
• A variation is to allocate file space in large contiguous chunks,
called extents. When a file outgrows its original extent, then an
additional one is allocated. ( For example an extent may be the size of
a complete track or even cylinder, aligned on an appropriate track or
cylinder boundary. ) The high-performance file system Veritas uses
extents to optimize performance.

Figure 5.1 - Contiguous allocation of disk space.
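The storage-allocation issue above can be sketched as a first-fit search over a list of free holes, just as with contiguous memory allocation. This is an illustrative fragment under simplified assumptions, not any particular file system's code:

```python
# First-fit contiguous allocation over a free-hole list (illustrative sketch).
# holes is a list of (start_block, length) pairs kept in address order.

def first_fit(holes, nblocks):
    """Allocate nblocks contiguously; return the start block, or None."""
    for i, (start, length) in enumerate(holes):
        if length >= nblocks:
            if length == nblocks:
                holes.pop(i)                      # hole consumed exactly
            else:
                holes[i] = (start + nblocks, length - nblocks)
            return start
    return None        # external fragmentation may leave no single
                       # hole large enough, even if total free space suffices

holes = [(0, 5), (20, 30)]
print(first_fit(holes, 10))   # -> 20 : first hole is too small, second fits
print(holes)                  # -> [(0, 5), (30, 20)]
```

Best fit would instead scan all holes for the smallest one that suffices; either way, the file must be moved (an expensive disk operation) if it later outgrows its allocation.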

5.4.2 Linked Allocation

• Disk files can be stored as linked lists, with the expense of the storage
space consumed by each link. ( E.g. a 512-byte block may hold only 508
bytes of file data, with the remaining 4 bytes devoted to the pointer. )
• Linked allocation involves no external fragmentation, does not
require pre-known file sizes, and allows files to grow dynamically at
any time.
• Unfortunately linked allocation is only efficient for sequential access
files, as random access requires starting at the beginning of the list for
each new location access.
• Allocating clusters of blocks reduces the space wasted by pointers, at
the cost of internal fragmentation.
• Another big problem with linked allocation is reliability if a pointer is
lost or damaged. Doubly linked lists provide some protection, at the
cost of additional overhead and wasted space.
Figure 5.2 - Linked allocation of disk space.

• The File Allocation Table, FAT, used by DOS is a variation of
linked allocation, where all the links are stored in a separate table at
the beginning of the disk. The benefit of this approach is that the FAT
table can be cached in memory, greatly improving random access
speeds.
Figure 5.3 - File-allocation table.
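A minimal sketch of the FAT idea: each file's chain of blocks is followed entirely inside an in-memory table, so locating any block requires table lookups rather than disk reads. The table size and end-of-file sentinel below are simplified assumptions for illustration:

```python
# Sketch of FAT-style linked allocation: all links live in one table.
# fat[b] holds the number of the block that follows b in its file,
# with a sentinel value (here -1) marking end-of-file.

EOF = -1

def file_blocks(fat, start):
    """Follow the chain from a file's starting block through the FAT."""
    blocks = []
    b = start
    while b != EOF:
        blocks.append(b)
        b = fat[b]      # an in-memory table lookup, not a disk seek,
                        # which is why caching the FAT speeds random access
    return blocks

# A tiny 8-entry FAT describing one file occupying blocks 2 -> 5 -> 3.
fat = [EOF] * 8
fat[2], fat[5], fat[3] = 5, 3, EOF
print(file_blocks(fat, 2))   # -> [2, 5, 3]
```

With plain linked allocation the same traversal would require reading each data block from disk just to find its successor; with the FAT cached, random access to the Nth block costs N table lookups but only one disk read.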

5.4.3 Indexed Allocation

• Indexed Allocation combines all of the indexes for accessing each
file into a common block ( for that file ), as opposed to spreading
them all over the disk or storing them in a FAT table.
Figure 5.4 - Indexed allocation of disk space.

• Some disk space is wasted ( relative to linked lists or FAT tables )
because an entire index block must be allocated for each file,
regardless of how many data blocks the file contains. This leads to
questions of how big the index block should be, and how it should be
implemented. There are several approaches:
o Linked Scheme - An index block is one disk block, which can
be read and written in a single disk operation. The first index
block contains some header information, the first N block
addresses, and if necessary a pointer to additional linked index
blocks.
o Multi-Level Index - The first index block contains a set of
pointers to secondary index blocks, which in turn contain
pointers to the actual data blocks.
o Combined Scheme - This is the scheme used in UNIX inodes,
in which the first 12 or so data block pointers are stored
directly in the inode, and then singly, doubly, and triply
indirect pointers provide access to more data blocks as needed.
( See below. ) The advantage of this scheme is that for small
files ( which many are ), the data blocks are readily accessible (
up to 48K with 4K block sizes ); files up to about 4144K (
using 4K blocks ) are accessible with only a single indirect
block ( which can be cached ), and huge files are still
accessible using a relatively small number of disk accesses (
larger in theory than can be addressed by a 32-bit address,
which is why some systems have moved to 64-bit file pointers.
)

Figure 5.5 - The UNIX inode.
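The size limits quoted above follow from simple arithmetic. Assuming 4K blocks, 4-byte block pointers (so 1024 pointers per indirect block), and 12 direct pointers in the inode, a sketch of which pointer level serves a given logical file block (function name is illustrative):

```python
# Rough arithmetic behind the UNIX combined scheme. Assumed parameters:
# 4K blocks, 4-byte block pointers, 12 direct pointers in the inode.

BLOCK = 4096
PTRS = BLOCK // 4        # 1024 pointers fit in one indirect block
DIRECT = 12

def lookup_path(file_block):
    """Classify which pointer level serves a given logical block number."""
    if file_block < DIRECT:
        return "direct"
    file_block -= DIRECT
    if file_block < PTRS:
        return "single indirect"
    file_block -= PTRS
    if file_block < PTRS * PTRS:
        return "double indirect"
    return "triple indirect"

# Up to 12 * 4K = 48K is reachable through direct pointers alone ...
print(lookup_path(11))            # -> direct
# ... and 48K + 1024 * 4K = 4144K needs at most one indirect block.
print(lookup_path(12 + 1023))     # -> single indirect
print(lookup_path(12 + 1024))     # -> double indirect
```

The double-indirect level alone adds 1024 * 1024 * 4K = 4G of reachable data, which is why 32-bit file offsets run out before the inode scheme does.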

5.4.4 Performance

• The optimal allocation method is different for sequential access files
than for random access files, and is also different for small files than
for large files.
• Some systems support more than one allocation method, which may
require specifying how the file is to be used ( sequential or random
access ) at the time it is allocated. Such systems also provide
conversion utilities.
• Some systems have been known to use contiguous access for small
files, and automatically switch to an indexed scheme when file sizes
surpass a certain threshold.
• And of course some systems adjust their allocation schemes ( e.g.
block sizes ) to best match the characteristics of the hardware for
optimum performance.
PART 4
I/O Systems: I/O Hardware- Application I/O Interface - Kernel I/O Subsystem.

5.1 Overview

• Management of I/O devices is a very important part of the operating system
- so important and so varied that entire I/O subsystems are devoted to its
operation. ( Consider the range of devices on a modern computer, from
mice, keyboards, disk drives, display adapters, USB devices, network
connections, audio I/O, printers, special devices for the handicapped, and
many special-purpose peripherals. )
• I/O Subsystems must contend with two ( conflicting? ) trends: (1) The
gravitation towards standard interfaces for a wide range of devices, making
it easier to add newly developed devices to existing systems, and (2) the
development of entirely new types of devices, for which the existing
standard interfaces are not always easy to apply.
• Device drivers are modules that can be plugged into an OS to handle a
particular device or category of similar devices.

TOPIC - 5.2 I/O HARDWARE

• I/O devices can be roughly categorized as storage, communications,
user-interface, and other.
• Devices communicate with the computer via signals sent over wires or
through the air.
• Devices connect with the computer via ports, e.g. a serial or parallel port.
• A common set of wires connecting multiple devices is termed a bus.
o Buses include rigid protocols for the types of messages that can be
sent across the bus and the procedures for resolving contention issues.
o Figure 5.1 below illustrates three of the four bus types commonly
found in a modern PC:
1. The PCI bus connects high-speed high-bandwidth devices to
the memory subsystem ( and the CPU. )
2. The expansion bus connects slower low-bandwidth devices,
which typically deliver data one character at a time ( with
buffering. )
3. The SCSI bus connects a number of SCSI devices to a
common SCSI controller.
4. A daisy-chain bus, ( not shown) is when a string of devices is
connected to each other like beads on a chain, and only one of
the devices is directly connected to the host.
Figure 5.1 - A typical PC bus structure.

• One way of communicating with devices is through registers associated
with each port. Registers may be one to four bytes in size, and may typically
include ( a subset of ) the following four:
1. The data-in register is read by the host to get input from the device.
2. The data-out register is written by the host to send output.
3. The status register has bits read by the host to ascertain the status of
the device, such as idle, ready for input, busy, error, transaction
complete, etc.
4. The control register has bits written by the host to issue commands or
to change settings of the device such as parity checking, word length,
or full- versus half-duplex operation.
• Figure 5.2 shows some of the most common I/O port address ranges.
Figure 5.2 - Device I/O port locations on PCs ( partial ).

• Another technique for communicating with devices is memory-mapped I/O.
o In this case a certain portion of the processor's address space is
mapped to the device, and communications occur by reading and
writing directly to/from those memory areas.
o Memory-mapped I/O is suitable for devices which must move large
quantities of data quickly, such as graphics cards.
o Memory-mapped I/O can be used either instead of or more often in
combination with traditional registers. For example, graphics cards
still use registers for control information such as setting the video
mode.
o A potential problem exists with memory-mapped I/O, if a process is
allowed to write directly to the address space used by a memory-
mapped I/O device.
o ( Note: Memory-mapped I/O is not the same thing as direct memory
access, DMA. See section 5.2.3 below. )

5.2.1 Polling

• One simple means of device handshaking involves polling:
1. The host repeatedly checks the busy bit on the device until it
becomes clear.
2. The host writes a byte of data into the data-out register, and
sets the write bit in the command register ( in either order. )
3. The host sets the command ready bit in the command register
to notify the device of the pending command.
4. When the device controller sees the command-ready bit set, it
first sets the busy bit.
5. Then the device controller reads the command register, sees
the write bit set, reads the byte of data from the data-out
register, and outputs the byte of data.
6. The device controller then clears the error bit in the status
register, the command-ready bit, and finally clears the busy bit,
signaling the completion of the operation.
• Polling can be very fast and efficient, if both the device and the
controller are fast and if there is significant data to transfer. It
becomes inefficient, however, if the host must wait a long time in the
busy loop waiting for the device, or if frequent checks need to be
made for data that is infrequently there.
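The six-step handshake above can be simulated with a toy controller whose "registers" are plain attributes. Everything here is illustrative; in particular, the device side runs synchronously inside service( ), whereas a real controller operates concurrently with the host:

```python
# Simulation of the polled write handshake (purely illustrative).

class ToyController:
    def __init__(self):
        self.busy = False
        self.command_ready = False
        self.write = False
        self.data_out = 0
        self.output = []            # bytes the "device" has emitted

    def service(self):
        # What the controller does once it sees command-ready set.
        if self.command_ready:
            self.busy = True                      # step 4: set busy bit
            if self.write:                        # step 5: read command,
                self.output.append(self.data_out) #         output the byte
            self.command_ready = False            # step 6: clear bits
            self.busy = False                     #         (error bit too,
                                                  #          on real hardware)

def polled_write(dev, byte):
    while dev.busy:                               # step 1: spin on busy bit
        pass
    dev.data_out = byte                           # step 2: write data-out,
    dev.write = True                              #         set write bit
    dev.command_ready = True                      # step 3: notify device
    dev.service()                                 # device side runs here

dev = ToyController()
for b in b"hi":
    polled_write(dev, b)
print(bytes(dev.output))   # -> b'hi'
```

The busy-wait loop in step 1 is exactly where polling wastes CPU cycles if the device is slow, which motivates the interrupt mechanism described next.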

5.2.2 Interrupts

• Interrupts allow devices to notify the CPU when they have data to
transfer or when an operation is complete, allowing the CPU to
perform other duties when no I/O transfers need its immediate
attention.
• The CPU has an interrupt-request line that is sensed after every
instruction.
o A device's controller raises an interrupt by asserting a signal
on the interrupt request line.
o The CPU then performs a state save, and transfers control to
the interrupt handler routine at a fixed address in memory.
( The CPU catches the interrupt and dispatches the interrupt
handler. )
o The interrupt handler determines the cause of the interrupt,
performs the necessary processing, performs a state restore,
and executes a return from interrupt instruction to return
control to the CPU. ( The interrupt handler clears the interrupt
by servicing the device. )
▪ ( Note that the state restored does not need to be the
same state as the one that was saved when the interrupt
went off. See below for an example involving time-
slicing. )
• Figure 5.3 illustrates the interrupt-driven I/O procedure:
Figure 5.3 - Interrupt-driven I/O cycle.

• The above description is adequate for simple interrupt-driven I/O, but
there are three needs in modern computing which complicate the
picture:
1. The need to defer interrupt handling during critical processing,
2. The need to determine which interrupt handler to invoke,
without having to poll all devices to see which one needs
attention, and
3. The need for multi-level interrupts, so the system can
differentiate between high- and low-priority interrupts for
proper response.
• These issues are handled in modern computer architectures
with interrupt-controller hardware.
o Most CPUs now have two interrupt-request lines: One that
is non-maskable for critical error conditions and one that
is maskable, that the CPU can temporarily ignore during
critical processing.
o The interrupt mechanism accepts an address, which is usually
one of a small set of numbers for an offset into a table called
the interrupt vector. This table ( usually located at physical
address zero ) holds the addresses of routines prepared to
process specific interrupts.
o The number of possible interrupt handlers still exceeds the
range of defined interrupt numbers, so multiple handlers can
be interrupt chained. Effectively the addresses held in the
interrupt vectors are the head pointers for linked-lists of
interrupt handlers.
o Figure 5.4 shows the Intel Pentium interrupt vector. Interrupts
0 to 31 are non-maskable and reserved for serious hardware
and other errors. Maskable interrupts, including normal device
I/O interrupts begin at interrupt 32.
o Modern interrupt hardware also supports interrupt priority
levels, allowing systems to mask off only lower-priority
interrupts while servicing a high-priority interrupt, or
conversely to allow a high-priority signal to interrupt the
processing of a low-priority one.

Figure 5.4 - Intel Pentium processor event-vector table.
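Interrupt chaining can be sketched as a dispatch table whose slots head lists of handlers, where each handler reports whether its own device raised the interrupt. The vector numbers and handler shapes below are illustrative only:

```python
# Sketch of interrupt dispatch through a vector of handler chains.
# Each vector slot heads a list of handlers sharing that interrupt number.

vector = {32: [], 33: []}        # maskable device interrupts start at 32

def register(irq, handler):
    vector[irq].append(handler)  # chaining: several handlers share a slot

def dispatch(irq):
    for handler in vector[irq]:
        if handler(irq):         # walk the chain until one claims it
            return True
    return False                 # no handler claimed it: spurious interrupt

log = []
register(33, lambda irq: False)                       # not this device
register(33, lambda irq: log.append("disk") or True)  # claims the interrupt
dispatch(33)
print(log)   # -> ['disk']
```

This is cheaper than polling every device on every interrupt, since only the handlers chained on the raised vector number are consulted.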


• At boot time the system determines which devices are present, and
loads the appropriate handler addresses into the interrupt table.
• During operation, devices signal errors or the completion of
commands via interrupts.
• Exceptions, such as dividing by zero, invalid memory accesses, or
attempts to access kernel mode instructions can be signaled via
interrupts.
• Time slicing and context switches can also be implemented using the
interrupt mechanism.
o The scheduler sets a hardware timer before transferring control
over to a user process.
o When the timer raises the interrupt request line, the CPU
performs a state-save, and transfers control over to the proper
interrupt handler, which in turn runs the scheduler.
o The scheduler does a state-restore of a different process before
resetting the timer and issuing the return-from-interrupt
instruction.
• A similar example involves the paging system for virtual memory - A
page fault causes an interrupt, which in turn issues an I/O request and
a context switch as described above, moving the interrupted process
into the wait queue and selecting a different process to run. When the
I/O request has completed ( i.e. when the requested page has been
loaded up into physical memory ), then the device interrupts, and the
interrupt handler moves the process from the wait queue into the
ready queue, ( or depending on scheduling algorithms and policies,
may go ahead and context switch it back onto the CPU. )
• System calls are implemented via software
interrupts, a.k.a. traps. When a ( library ) program needs work
performed in kernel mode, it sets command information and possibly
data addresses in certain registers, and then raises a software
interrupt. ( E.g. 21 hex in DOS. ) The system does a state save and
then calls on the proper interrupt handler to process the request in
kernel mode. Software interrupts generally have low priority, as they
are not as urgent as devices with limited buffering space.
• Interrupts are also used to control kernel operations, and to schedule
activities for optimal performance. For example, the completion of a
disk read operation involves two interrupts:
o A high-priority interrupt acknowledges the device completion,
and issues the next disk request so that the hardware does not
sit idle.
o A lower-priority interrupt transfers the data from the kernel
memory space to the user space, and then transfers the process
from the waiting queue to the ready queue.
• The Solaris OS uses a multi-threaded kernel and priority threads to
assign different threads to different interrupt handlers. This allows for
the "simultaneous" handling of multiple interrupts, and the assurance
that high-priority interrupts will take precedence over low-priority
ones and over user processes.

5.2.3 Direct Memory Access

• For devices that transfer large quantities of data ( such as disk
controllers ), it is wasteful to tie up the CPU transferring data in and
out of registers one byte at a time.
• Instead this work can be off-loaded to a special processor, known as
the Direct Memory Access, DMA, Controller.
• The host issues a command to the DMA controller, indicating the
location where the data is located, the location where the data is to be
transferred to, and the number of bytes of data to transfer. The DMA
controller handles the data transfer, and then interrupts the CPU when
the transfer is complete.
• A simple DMA controller is a standard component in modern PCs,
and many bus-mastering I/O cards contain their own DMA hardware.
• Handshaking between DMA controllers and their devices is
accomplished through two wires called the DMA-request and DMA-
acknowledge wires.
• While the DMA transfer is going on the CPU does not have access to
the PCI bus ( including main memory ), but it does have access to its
internal registers and primary and secondary caches.
• DMA can be done in terms of either physical addresses or virtual
addresses that are mapped to physical addresses. The latter approach
is known as Direct Virtual Memory Access, DVMA, and allows
direct data transfer from one memory-mapped device to another
without using the main memory chips.
• Direct DMA access by user processes can speed up operations, but is
generally forbidden by modern systems for security and protection
reasons. ( I.e. DMA is a kernel-mode operation. )
• Figure 5.5 below illustrates the DMA process.
Figure 5.5 - Steps in a DMA transfer.

TOPIC - 5.3 APPLICATION I/O INTERFACE

• User application access to a wide variety of different devices is
accomplished through layering, and through encapsulating all of the
device-specific code into device drivers, while application layers are presented
with a common interface for all ( or at least large general categories of ) devices.
Figure 5.6 - A kernel I/O structure.

• Devices differ on many different dimensions, as outlined in Figure 5.7:
Figure 5.7 - Characteristics of I/O devices.

• Most devices can be characterized as either block I/O, character I/O,
memory-mapped file access, or network sockets. A few devices are special,
such as the time-of-day clock and the system timer.
• Most OSes also have an escape, or back door, which allows applications to
send commands directly to device drivers if needed. In UNIX this is
the ioctl( ) system call ( I/O Control ). Ioctl( ) takes three arguments - The
file descriptor for the device driver being accessed, an integer indicating the
desired function to be performed, and an address used for communicating or
transferring additional information.

5.3.1 Block and Character Devices

• Block devices are accessed a block at a time, and are indicated by a
"b" as the first character in a long listing on UNIX systems.
Operations supported include read( ), write( ), and seek( ).
o Accessing blocks on a hard drive directly ( without going
through the filesystem structure ) is called raw I/O, and can
speed up certain operations by bypassing the buffering and
locking normally conducted by the OS. ( It then becomes the
application's responsibility to manage those issues. )
o A new alternative is direct I/O, which uses the normal
filesystem access, but which disables buffering and locking
operations.
• Memory-mapped file I/O can be layered on top of block-device
drivers.
o Rather than reading in the entire file, it is mapped to a range of
memory addresses, and then paged into memory as needed
using the virtual memory system.
o Access to the file is then accomplished through normal
memory accesses, rather than through read( ) and write( )
system calls. This approach is commonly used for executable
program code.
• Character devices are accessed one byte at a time, and are indicated
by a "c" in UNIX long listings. Supported operations include get( )
and put( ), with more advanced functionality such as reading an entire
line supported by higher-level library routines.

5.3.2 Network Devices

• Because network access is inherently different from local disk access,
most systems provide a separate interface for network devices.
• One common and popular interface is the socket interface, which acts
like a cable or pipeline connecting two networked entities. Data can
be put into the socket at one end, and read out sequentially at the
other end. Sockets are normally full-duplex, allowing for bi-
directional data transfer.
• The select( ) system call allows servers ( or other applications ) to
identify sockets which have data waiting, without having to poll all
available sockets.
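A minimal demonstration of select( ), using a local socket pair so no network setup is needed: only the endpoint with data pending is reported ready, without blocking on the others.

```python
# select() reports which sockets have data waiting, without blocking
# indefinitely on any single one of them.

import select
import socket

a, b = socket.socketpair()       # two connected local endpoints
c, d = socket.socketpair()

b.send(b"ping")                  # only the first pair now has data pending

# Wait up to one second; select returns as soon as any socket is readable.
ready, _, _ = select.select([a, c], [], [], 1.0)
print(a in ready, c in ready)    # -> True False

for s in (a, b, c, d):
    s.close()
```

A server multiplexing many clients would pass all of its client sockets in the first list and then service just the ones returned.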

5.3.3 Clocks and Timers

• Three types of time services are commonly needed in modern
systems:
o Get the current time of day.
o Get the elapsed time ( system or wall clock ) since a previous
event.
o Set a timer to trigger event X at time T.
• Unfortunately time operations are not standard across all systems.
• A programmable interrupt timer, PIT can be used to trigger
operations and to measure elapsed time. It can be set to trigger an
interrupt at a specific future time, or to trigger interrupts periodically
on a regular basis.
o The scheduler uses a PIT to trigger interrupts for ending time
slices.
o The disk system may use a PIT to schedule periodic
maintenance cleanup, such as flushing buffers to disk.
o Networks use a PIT to abort or repeat operations that are taking
too long to complete, i.e. resending packets if an
acknowledgement is not received before the timer goes off.
o More timers than actually exist can be simulated by
maintaining an ordered list of timer events, and setting the
physical timer to go off when the next scheduled event should
occur.
• On most systems the system clock is implemented by counting
interrupts generated by the PIT. Unfortunately this is limited in its
resolution to the interrupt frequency of the PIT, and may be subject to
some drift over time. An alternate approach is to provide direct access
to a high frequency hardware counter, which provides much higher
resolution and accuracy, but which does not support interrupts.
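Simulating more timers than the hardware provides, as described above, can be sketched with a min-heap of pending events: the single physical timer is always armed for the earliest expiry, and each tick fires every event that has come due. The class and method names are illustrative:

```python
# Many logical timers multiplexed onto one physical timer (sketch).
# Pending events live in a min-heap ordered by expiry time.

import heapq

class TimerWheel:
    def __init__(self):
        self.events = []                 # heap of (expiry_time, label)

    def set_timer(self, expiry, label):
        heapq.heappush(self.events, (expiry, label))

    def next_deadline(self):
        # The physical timer is programmed for the earliest expiry.
        return self.events[0][0] if self.events else None

    def tick(self, now):
        # Called when the physical timer goes off: run everything due.
        fired = []
        while self.events and self.events[0][0] <= now:
            fired.append(heapq.heappop(self.events)[1])
        return fired

t = TimerWheel()
t.set_timer(30, "flush buffers")
t.set_timer(10, "end time slice")
print(t.next_deadline())     # -> 10
print(t.tick(10))            # -> ['end time slice']
print(t.next_deadline())     # -> 30
```

After each tick the physical timer is simply re-armed for the new earliest deadline, so one hardware timer serves any number of logical ones.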

5.3.4 Blocking and Non-blocking I/O

• With blocking I/O a process is moved to the wait queue when an I/O request
is made, and moved back to the ready queue when the request completes,
allowing other processes to run in the meantime.
• With non-blocking I/O the I/O request returns immediately, whether the
requested I/O operation has ( completely ) occurred or not. This allows the
process to check for available data without getting hung completely if it is
not there.
• One approach for programmers to implement non-blocking I/O is to have a
multi-threaded application, in which one thread makes blocking I/O calls (
say to read a keyboard or mouse ), while other threads continue to update
the screen or perform other tasks.
• A subtle variation of the non-blocking I/O is the asynchronous I/O, in
which the I/O request returns immediately allowing the process to continue
on with other tasks, and then the process is notified ( via changing a process
variable, or a software interrupt, or a callback function ) when the I/O
operation has completed and the data is available for use. ( The regular non-
blocking I/O returns immediately with whatever results are available, but
does not complete the operation and notify the process later. )
Figure 5.8 - Two I/O methods: (a) synchronous and (b) asynchronous.

TOPIC - 5.4 KERNEL I/O SUBSYSTEM

5.4.1 I/O Scheduling

• Scheduling I/O requests can greatly improve overall efficiency.


Priorities can also play a part in request scheduling.
• The classic example is the scheduling of disk accesses, as discussed
in detail earlier in this unit.
• Buffering and caching can also help, and can allow for more flexible
scheduling options.
• On systems with many devices, separate request queues are often kept
for each device:
Figure 5.9 - Device-status table.

5.4.2 Buffering

• Buffering of I/O is performed for ( at least ) three major reasons:
1. Speed differences between two devices. ( See Figure 5.10
below. ) A slow device may write data into a buffer, and when
the buffer is full, the entire buffer is sent to the fast device all
at once. So that the slow device still has somewhere to write
while this is going on, a second buffer is used, and the two
buffers alternate as each becomes full. This is known as double
buffering. ( Double buffering is often used in ( animated )
graphics, so that one screen image can be generated in a buffer
while the other ( completed ) buffer is displayed on the screen.
This prevents the user from ever seeing any half-finished
screen images. )
2. Data transfer size differences. Buffers are used in particular in
networking systems to break messages up into smaller packets
for transfer, and then for re-assembly at the receiving side.
3. To support copy semantics. For example, when an application
makes a request for a disk write, the data is copied from the
user's memory area into a kernel buffer. Now the application
can change their copy of the data, but the data which
eventually gets written out to disk is the version of the data at
the time the write request was made.
Figure 5.10 - Sun Enterprise 6000 device-transfer rates ( logarithmic ).
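Double buffering, as described in reason 1, can be sketched as two buffers that swap roles: the slow producer fills one while the previously filled one is handed to the fast consumer whole. This is an illustrative fragment with invented names:

```python
# Double buffering sketch: the slow side fills one buffer while the
# other (already full) buffer is delivered to the fast side all at once.

class DoubleBuffer:
    def __init__(self, size):
        self.size = size
        self.filling, self.ready = [], None   # the two buffers swap roles

    def put(self, item):
        """Slow device writes one item; returns a full buffer on each flip."""
        self.filling.append(item)
        if len(self.filling) == self.size:
            # Flip: the full buffer becomes "ready" for the fast device,
            # and a fresh buffer immediately accepts new writes.
            self.ready, self.filling = self.filling, []
            return self.ready
        return None

db = DoubleBuffer(3)
sent = []
for x in range(7):
    full = db.put(x)
    if full:
        sent.append(full)
print(sent)            # -> [[0, 1, 2], [3, 4, 5]]
print(db.filling)      # -> [6]  still accumulating in the other buffer
```

The same flip is what animated graphics use: one frame is drawn into the back buffer while the completed front buffer is on screen.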

5.4.3 Caching

• Caching involves keeping a copy of data in a faster-access location
than where the data is normally stored.
• Buffering and caching are very similar, except that a buffer may hold
the only copy of a given data item, whereas a cache is just a duplicate
copy of some other data stored elsewhere.
• Buffering and caching go hand-in-hand, and often the same storage
space may be used for both purposes. For example, after a buffer is
written to disk, then the copy in memory can be used as a cached
copy, ( until that buffer is needed for other purposes. )

5.4.4 Spooling and Device Reservation

• A spool ( Simultaneous Peripheral Operations On-Line ) buffers
data for ( peripheral ) devices such as printers that cannot support
interleaved data streams.
• If multiple processes want to print at the same time, they each send
their print data to files stored in the spool directory. When each file is
closed, then the application sees that print job as complete, and the
print scheduler sends each file to the appropriate printer one at a time.
• Support is provided for viewing the spool queues, removing jobs
from the queues, moving jobs from one queue to another queue, and
in some cases changing the priorities of jobs in the queues.
• Spool queues can be general ( any laser printer ) or specific ( printer
number 42. )
• OSes can also provide support for processes to request / get exclusive
access to a particular device, and/or to wait until a device becomes
available.

5.4.5 Error Handling

• I/O requests can fail for many reasons, either transient ( buffers
overflow ) or permanent ( disk crash ).
• I/O requests usually return an error bit ( or more ) indicating the
problem. UNIX systems also set the global variable errno to one of a
hundred or so well-defined values to indicate the specific error that
has occurred. ( See errno.h for a complete listing, or man errno. )
• Some devices, such as SCSI devices, are capable of providing much
more detailed information about errors, and even keep an on-board
error log that can be requested by the host.
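The errno mechanism is visible from user code. In Python, for example, a failed system call surfaces as an OSError carrying the errno value; the path below is deliberately nonexistent so the call fails with ENOENT:

```python
# A failed system call sets errno; Python exposes it on the OSError.

import errno
import os

try:
    os.stat("/no/such/path/anywhere")      # fails: path does not exist
except OSError as e:
    print(e.errno == errno.ENOENT)         # -> True
    print(os.strerror(e.errno))            # e.g. 'No such file or directory'
```

C programs read the same information from the global errno variable and translate it with strerror( ) or perror( ).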

5.4.6 I/O Protection

• The I/O system must protect against either accidental or deliberate
erroneous I/O.
• User applications are not allowed to perform I/O in user mode - All
I/O requests are handled through system calls that must be performed
in kernel mode.
• Memory mapped areas and I/O ports must be protected by the
memory management system, but access to these areas cannot be
totally denied to user programs. ( Video games and some other
applications need to be able to write directly to video memory for
optimal performance for example. ) Instead the memory protection
system restricts access so that only one process at a time can access
particular parts of memory, such as the portion of the screen memory
corresponding to a particular window.
Figure 5.11 - Use of a system call to perform I/O.

5.4.7 Kernel Data Structures

• The kernel maintains a number of important data structures pertaining
to the I/O system, such as the open file table.
• These structures are object-oriented, and flexible to allow access to a
wide variety of I/O devices through a common interface. ( See Figure
5.12 below. )
• Windows NT carries the object-orientation one step further,
implementing I/O as a message-passing system from the source
through various intermediaries to the device.
Figure 5.12 - UNIX I/O kernel structure.

1. What is a file?

A file is an abstract data type defined and implemented by the operating system. It is a sequence

of logical records.

2. What is a single-level directory?

A single-level directory in a multiuser system causes naming problems, since each file must have

a unique name.

3 What is a tree-structured directory?

A tree-structured directory allows a user to create subdirectories to organize files.

4. What are the different allocation methods.

The direct access nature of disks allows flexibility in the implementation of files. The main

problem here is how to allocate space to these files so that disk space is utilized effectively and

files can be accessed quickly. Three major methods of allocating disk space are:

Contiguous
Linked

Indexed

5. Difference between primary storage and secondary storage.

Primary memory is the main memory (RAM), where the operating system and running programs

reside; the CPU can access it directly. Secondary storage consists of external devices such as hard

disks, CDs, and floppy/magnetic discs; it cannot be directly accessed by the CPU.

6. List different file attributes.

Name

Type

Size

Protection

7. List out the access mechanisms.

Protection mechanisms provide controlled access by limiting the types of file access that can

be made. Access is permitted or denied depending on many factors. Several different types

of operations may be controlled:

i. Read ii. Write iii. Execute iv. Append v. Delete vi. List

8. When designing the file structure for an operating system, what attributes are

considered?

The file system provides the mechanism for on-line storage of and access to file contents, including

data and programs. The file system resides permanently on secondary storage, which is designed

to hold a large amount of data permanently.

9. What is Free Space Management?

Since disk space is limited, we should reuse the space from deleted files for new files. To keep

track of free disk space, the system maintains a free-space list. The free-space list records all free

disk blocks.
1) List the attributes of the file and discuss it

2) How operations are performed on the files? Explain each operation in detail

3) a) Identify the importance of the extension associated with the files.

b) List the common file extensions associated with the same group of files

4) Explain file type and their function in detail

5) Compare and contrast of different file access methods

6) What is a directory? List the various operations performed on the directories?

7) Outline the advantages and disadvantages of single level directory

8) Compare and contrast two level directory and tree structured directory

9) What issues are associated with file systems? Explain in detail

1. Explain various disk scheduling algorithms?

2. Explain file system mounting operation in detail

3. Explain the method used for implementing directories

4. Explain in detail the free space management with neat diagram

5. Explain the disk structure and disk attachment.

6. Give a brief note on WORM disks and tapes.

7. What data are needed to manage open files? Explain

8. List and explain different directory implementations

9. Describe the services provided by the Kernel I/O subsystem in detail

10. What is the purpose of I/O system calls and device-driver? How do the devices vary?

11. Explain directory allocation methods
