chp4 IO Management
Prepared By:
Amit K. Shrivastava
Asst. Professor
Nepal College Of Information Technology
4.1 I/O Sub- Systems
4.1.1 Concepts
▪ Management of I/O devices is a very important part of the operating
system - so important and so varied that entire I/O subsystems are devoted
to its operation. ( Consider the range of devices on a modern computer,
from mice, keyboards, disk drives, display adapters, USB devices, network
connections, audio I/O, printers, special devices for the handicapped, and
many special-purpose peripherals. )
▪I/O Subsystems must contend with two trends: (1) The gravitation towards
standard interfaces for a wide range of devices, making it easier to add
newly developed devices to existing systems, and (2) the development of
entirely new types of devices, for which the existing standard interfaces are
not always easy to apply.
▪Device drivers are modules that can be plugged into an OS to handle a
particular device or category of similar devices.
4.1.2 Application I/O Interface
▪ User application access to a wide variety of different devices is
accomplished through layering, and through encapsulating all of the
device-specific code into device drivers, while application layers are
presented with a common interface for all ( or at least large general
categories of ) devices.
Application I/O Interface(contd…)
▪ Device-driver layer hides differences among I/O controllers from kernel. New
devices talking already-implemented protocols need no extra work. Each OS
has its own I/O subsystem structures and device driver frameworks.
▪Devices vary in many dimensions
• Character-stream or block: A character-stream device transfers bytes one by one,
whereas a block device transfers a block of bytes as a unit.
• Sequential or random-access: A sequential device transfers data in a fixed order
that is determined by the device, whereas the user of random-access device can
instruct the device to seek to any of the available data storage locations.
• Synchronous or asynchronous: A synchronous device performs data transfers
with predictable response times; an asynchronous device exhibits irregular
response times.
• Sharable or dedicated: A shareable device can be used concurrently by several
processes or threads; a dedicated device cannot be.
• Speed of operation: Device speeds range from a few bytes per second to a few
gigabytes per second.
• Read-write, read only, or write only: Some devices perform both input and output,
whereas others support only one data direction.
Fig: A kernel I/O structure
Programmed I/O:
The simplest form of I/O is to have the CPU do all the work. This method is
called programmed I/O. The actions taken by the operating system can be
summarized as follows. First, the data are copied to the kernel. Then the
operating system enters a tight loop, outputting the characters one at a time. The
essential aspect of programmed I/O is that after outputting a character, the CPU
continuously polls the device to see if it is ready to accept another one. This
behavior is often called polling or busy waiting.
Interrupt - Driven I/O
• The basic interrupt mechanism works as follows. The
CPU hardware has a wire called the interrupt-request line
that the CPU senses after executing every instruction.
When the CPU detects that a controller has asserted a
signal on the interrupt request line, the CPU saves a
small amount of state, such as the current value of the
instruction pointer, and jumps to the interrupt-handler
routine at a fixed address in memory. The interrupt
handler determines the cause of the interrupt, performs
the necessary processing, and executes a return from
interrupt instruction to return the CPU to the execution
state prior to the interrupt. We say that the device
controller raises an interrupt by asserting a signal on the
interrupt request line, the CPU catches the interrupt and
dispatches to the interrupt handler, and the handler
clears the interrupt by servicing the device.
Interrupt-Driven I/O Cycle
DMA(Direct Memory Access)
• Many computers avoid burdening the main CPU with PIO by
offloading some of this work to a special-purpose processor
called a direct-memory-access (DMA) controller. To initiate a
DMA transfer, the host writes a DMA command block into
memory. This block contains a pointer to the source of a
transfer, a pointer to the destination of the transfer, and a
count of the number of bytes to be transferred.
• The CPU writes the address of this command block to the
DMA controller, then goes on with other work. The DMA
controller proceeds to operate the memory bus directly,
placing addresses on the bus to perform transfers without the
help of the main CPU. A simple DMA controller is a standard
component in PCs, and bus-mastering I/O boards for the PC
usually contain their own high-speed DMA hardware.
Six Step Process to Perform DMA Transfer
4.1.3 Kernel I/O Subsystem
▪ Kernel provide many services related to I/O. The services that we describe are
I/O scheduling, buffering, caching, spooling, device reservation, and error
handling.
▪ Scheduling
• Some I/O request ordering via per-device queue
• Some OSs try fairness
• Some implement Quality of Service (e.g., IPQoS)
▪ Buffering - store data in memory while transferring between devices
• To cope with device speed mismatch
• To cope with device transfer size mismatch
• To maintain “copy semantics”
• Double buffering – two copies of the data
➢ Kernel and user
➢ Varying sizes
➢ Full / being processed and not-full / being used
➢ Copy-on-write can be used for efficiency in some cases
▪ Caching - faster device holding copy of data
• Always just a copy
• Key to performance
• Sometimes combined with buffering
▪ Spooling - hold output for a device
• If device can serve only one request at a time
• e.g., printing
Kernel I/O Subsystem(contd..)
▪ Device reservation - provides exclusive access to a device
• System calls for allocation and de-allocation
• Watch out for deadlock
▪ Error Handling - OS can recover from disk read errors, unavailable devices,
and transient write failures
• Retry a read or write, for example
• Some systems more advanced – Solaris FMA, AIX
➢Track error frequencies, stop using device with increasing frequency of
retry-able errors
• Most return an error number or code when I/O request fails
• System error logs hold problem reports
I/O Protection
User process may accidentally or purposefully attempt to disrupt normal
operation via illegal I/O instructions
• All I/O instructions defined to be privileged
• I/O must be performed via system calls
➢Memory-mapped and I/O port memory locations must be protected too
4.1.4 I/O Requests Handling
▪ Users request data using file names, which must ultimately be mapped to
specific blocks of data from a specific device managed by a specific device
driver.
▪Consider reading a file from disk for a process
• Determine device holding file
• Translate name to device representation
• Physically read data from disk into buffer
• Make data available to requesting process
• Return control to process
4.1.5 Performance
▪ The I/O system is a major factor in overall system performance, and can
place heavy loads on other major components of the system ( interrupt
handling, process switching, memory access, bus contention, and CPU load
for device drivers just to name a few. )
▪ Interrupt handling can be relatively expensive ( slow ), which causes
programmed I/O to be faster than interrupt-driven I/O when the time spent
busy waiting is not excessive.
▪ Network traffic can also put a heavy load on the system. Consider for
example the sequence of events that occur when a single character is typed in
a telnet session, as shown in figure 1.
▪ Several principles can be employed to increase the overall efficiency of I/O
processing:
• Reduce the number of context switches.
• Reduce the number of times data must be copied.
• Reduce interrupt frequency, using large transfers, buffering, and polling
where appropriate.
• Increase concurrency using DMA.
• Move processing primitives into hardware, allowing their operation to be
concurrent with CPU and bus operations.
• Balance CPU, memory, bus, and I/O operations, so a bottleneck in one
does not idle all the others.
Figure 1: Intercomputer Communication
4.2 Mass-Storage Device
▪ A mass storage device (MSD) is any storage device that makes it possible to
store and port large amounts of data across computers, servers and within an
IT environment. MSDs are portable storage media that provide a storage
interface that can be both internal and external to the computer.
4.2.1 Disk Structure :
▪Disk drives are addressed as large 1-dimensional arrays of
logical blocks, where the logical block is the smallest unit of
transfer.
▪The 1-dimensional array of logical blocks is mapped onto the
sectors of the disk sequentially:
• Sector 0 is the first sector of the first track on the outermost
cylinder.
• Mapping proceeds in order through that track, then through the rest
of the tracks in that cylinder, and then through the rest of the
cylinders from outermost to innermost.
• Logical-to-physical address translation should be easy
- Except for bad sectors
- Except for a non-constant number of sectors per track (zoned
recording at constant angular velocity)
4.2.2 Disk Scheduling
• The operating system is responsible for using hardware
efficiently — for the disk drives, this means having a fast access
time and disk bandwidth.
• Access time has two major components
➢ Seek time is the time for the disk arm to move the heads to
the cylinder containing the desired sector.
➢ Rotational latency is the additional time waiting for the disk
to rotate the desired sector to the disk head.
• Minimize seek time
• Seek time ≈ seek distance
• Disk bandwidth is the total number of bytes transferred, divided
by the total time between the first request for service and the
completion of the last transfer.
Disk Arm Scheduling Algorithm