
Operating Systems

Chapter 5: Input/Output
5.1: Principles of I/O Hardware
5.1.1: I/O Devices
• Not much to say. Devices are varied.
• Block versus character devices:
• Devices, such as disks and CD-ROMs, with addressable chunks (sectors in this case) are called block devices.
These devices support seeking.
• Devices, such as Ethernet and modem connections, that are a stream of characters are called character devices.
These devices do not support seeking.
• Some cases, like tapes, are not so clear.

5.1.2: Device Controllers


These are the ``real devices'' as far as the OS is concerned. That is, the OS code is written with the controller spec in hand, not with the device
spec.
The figure in the book is so oversimplified as to be borderline false. The following picture is closer to the truth (but really there are several
I/O buses of different speeds).
• The controller abstracts away some of the low level features of the device.
• For disks, the controller does error checking, buffering and handles interleaving of sectors. (Sectors are interleaved if the controller or
CPU cannot handle the data rate and would otherwise have to wait a full revolution. This is not a concern with modern systems since
the electronics have increased in speed faster than the devices.)
• For analog monitors (CRTs) the controller does a great deal. Analog video is far from a bunch of ones and zeros.
• Controllers are also called adaptors.

Using a controller
Think of a disk controller and a read request. The goal is to copy data from the disk to some portion of the central memory. How do we do
this?
• The controller contains a microprocessor and memory and is connected to the disk (by a cable).
• When the controller asks the disk to read a sector, the contents come to the controller via the cable and are stored by the controller in
its memory.
• The question is: how does the OS, which is running on another processor, let the controller know that a disk read is desired, and how is
the data eventually moved from the controller's memory to the general system memory?
• Typically the interface the OS sees consists of some device registers located on the controller.
• These are memory locations into which the OS writes information such as sector to access, read vs. write, length, where in
system memory to put the data (for a read) or from where to take the data (for a write).
• There is also typically a device register that acts as a ``go button''.
• There are also device registers that the OS reads, such as status of the controller, errors found, etc.

• So now the question is how the OS reads and writes the device registers.
• With memory-mapped I/O the device registers appear as normal memory. All that is needed is to know at which address
each device register appears. Then the OS uses normal load and store instructions to access the registers.
• Some systems instead have a special ``I/O space'' into which the registers are mapped and require the use of special I/O
instructions to accomplish the load and store. From a conceptual point of view there is no difference between the two models.
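
To make this concrete, here is a minimal sketch in C of what driving a memory-mapped controller might look like. The register layout, base address, and command encoding are invented for the example; a real controller's spec sheet would define them. The volatile qualifier keeps the compiler from optimizing away or reordering the stores to the registers.

    /* Hypothetical memory-mapped disk controller registers. */
    #include <stdint.h>

    struct disk_regs {
        volatile uint32_t sector;    /* which sector to access              */
        volatile uint32_t count;     /* how many sectors                    */
        volatile uint32_t mem_addr;  /* where in system memory (for a read) */
        volatile uint32_t command;   /* the ``go button''                   */
        volatile uint32_t status;    /* read by the OS: busy, errors, etc.  */
    };

    #define DISK_BASE ((struct disk_regs *)0xFFFF1000)  /* assumed address  */
    #define CMD_READ  1                                 /* assumed encoding */

    void start_read(uint32_t sector, uint32_t count, uint32_t mem_addr)
    {
        struct disk_regs *r = DISK_BASE;
        r->sector   = sector;     /* ordinary store instructions ... */
        r->count    = count;
        r->mem_addr = mem_addr;
        r->command  = CMD_READ;   /* ... ending with the go button   */
    }

On a system with a separate I/O space the stores would become special I/O instructions, but, as noted above, the structure of the code would be the same.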
Homework: 2
5.1.3: Direct Memory Access (DMA)
• With or without DMA, the disk controller pulls the desired data from the
disk to its buffer (and pushes data from the buffer to the disk).
• Without DMA, i.e., with programmed I/O (PIO), the CPU then does loads
and stores (or I/O instructions) to copy the data from the buffer to the
desired memory location.
• With a DMA controller, the controller writes the memory without
intervention of the CPU.
• Clearly DMA saves CPU work. But this might not be important if the
CPU is limited by the memory or by system buses.
• Very important is that there is less data movement so the buses are used
less and the entire operation takes less time.
• Since PIO is pure software it is easier to change, which is an advantage.
• DMA does need a number of bus transfers from the CPU to the controller
to specify the DMA. So DMA is most effective for large transfers where
the setup is amortized.
• Why have the buffer? Why not just go from the disk straight to the
memory?
Answer: Speed matching. The disk supplies data at a fixed rate, which
might exceed the rate at which the memory can accept it. In particular the memory
might be busy servicing a request from the processor or from another
DMA controller.
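
To see what the CPU is doing in the PIO case, here is a minimal sketch (the buffer register is hypothetical, in the style of the register sketch above). The CPU sits in this loop for the whole transfer; with DMA the loop disappears and the controller does the copying after the OS writes the target address and count into its registers.

    #include <stdint.h>

    /* Programmed I/O: the CPU itself copies the controller's buffer to
       memory, one bus transfer per word, staying busy throughout. */
    void pio_copy(volatile uint32_t *buffer_reg, uint32_t *dest, int nwords)
    {
        for (int i = 0; i < nwords; i++)
            dest[i] = *buffer_reg;   /* load from controller, store to memory */
    }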
Homework: 5

================ Start Lecture #12 ================

Note: I improved the presentation of disk controllers given last time (unfortunately, this was very easy to do). In particular, I suggest you
read the new small section entitled ``Using a controller''. Both the giant page and the page for lecture-11 now include this new section.
5.2: Principles of I/O Software
As with any large software system, good design and layering is important.

5.2.1: Goals of the I/O Software

Device independence
We want to have most of the OS unaware of the characteristics of the specific devices attached to the system. Indeed we also want the OS to
be largely unaware of the CPU type itself.
Due to this device independence, programs are written to read and write generic devices and then at run time specific devices are assigned.
Writing to a disk has differences from writing to a terminal, but Unix cp and DOS copy do not see these differences. Indeed, most of the OS,
including the file system code, is unaware of whether the device is a floppy or hard disk.

Uniform naming
Recall that we discussed the value of the name space implemented by file systems. There is no dependence between the name of the file and
the device on which it is stored. So a file called IAmStoredOnAHardDisk might well be stored on a floppy disk.

Error handling
There are several aspects to error handling including: detection, correction (if possible) and reporting.
1. Detection should be done as close to where the error occurred as possible before more damage is done (fault containment). This is not
trivial.
2. Correction is sometimes easy, for example ECC memory does this automatically (but the OS wants to know about the error so that it
can schedule replacement of the faulty chips before unrecoverable double errors occur).
Other easy cases include successful retries for failed Ethernet transmissions. In this example, while logging is appropriate, it is quite
reasonable for no action to be taken.
3. Error reporting tends to be awful. The trouble is that the error occurs at a low level but by the time it is reported the context is lost.
Unix/Linux in particular is horrible in this area.

Creating the illusion of synchronous I/O


• I/O must be asynchronous for good performance. That is, the OS cannot simply wait for an I/O to complete. Instead, it proceeds with
other activities and responds to the notification when the I/O has finished.
• Users (mostly) want no part of this. The code sequence
Read X
Y <-- X+1
Print Y

should print a value one greater than that read. But if the assignment is
performed before the read completes, the wrong value is assigned.
• Performance junkies sometimes do want the asynchrony so that they can have
another portion of their program executed while the I/O is underway. That is
they implement a mini-scheduler in their application code.

Sharable vs dedicated devices


For devices like printers and tape drives, only one user at a time is permitted. These
are called serially reusable devices, and are studied in the next chapter. Devices like disks
and Ethernet ports can be shared by processes running concurrently.

Layering
Layers of abstraction as usual prove to be effective. Most systems are believed to use
the following layers (but for many systems, the OS code is not available for
inspection).
1. User level I/O routines.
2. Device independent I/O software.
3. Device drivers.
4. Interrupt handlers.
We give a bottom up explanation.

5.2.2: Interrupt Handlers


We discussed an interrupt handler before when studying page faults. Then it was
called ``assembly language code''.
In the present case, we have a process blocked on I/O and the I/O event has just
completed. So the goal is to make the process ready. Possible methods are:
• Releasing a semaphore on which the process is waiting.
• Sending a message to the process.
• Inserting the process table entry onto the ready list.
Once the process is ready, it is up to the scheduler to decide when it should run.
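
A minimal sketch combining the first and third methods: the interrupt handler Vs a semaphore, and the V implementation inserts the blocked process's table entry onto the ready list. All the types and names are invented; a real kernel would also disable interrupts around the list manipulation.

    #include <stddef.h>

    struct process   { struct process *next; /* ... rest of the PTE ... */ };
    struct semaphore { int count; struct process *queue; };

    struct process *ready_list;   /* the scheduler's ready list */

    /* V (up): if some process is blocked on the semaphore, make it ready. */
    void V(struct semaphore *s)
    {
        if (s->queue != NULL) {
            struct process *p = s->queue;   /* unblock the first waiter      */
            s->queue = p->next;
            p->next = ready_list;           /* insert its PTE on ready list  */
            ready_list = p;
        } else {
            s->count++;
        }
    }

    /* The handler's whole job: record completion and make the process
       ready.  The scheduler decides later when it actually runs. */
    void disk_interrupt_handler(struct semaphore *io_done)
    {
        V(io_done);
    }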

5.2.3: Device Drivers


The portion of the OS that ``knows'' the characteristics of the controller.
The driver has two ``parts'' corresponding to its two access points. Recall the following figure from the beginning of the course.

1. Accessed by the main line OS via the envelope in response to an I/O system call. The portion of the driver accessed in this
way is sometimes called the ``top'' part.
2. Accessed by the interrupt handler when the I/O completes (this completion is signaled by an interrupt). The portion of
the driver accessed in this way is sometimes called the ``bottom'' part.
Tanenbaum describes the actions of the driver assuming it is
implemented as a process (which he recommends). I give both that
viewpoint and the self-service paradigm in which the driver is
invoked by the OS acting on behalf of a user process (more precisely,
the process shifts into kernel mode).

Driver in a self-service paradigm


1. The user (A) issues an I/O system call.
2. The main line, machine independent, OS prepares a generic
request for the driver and calls (the top part of) the driver.
A. If the driver was idle (i.e., the controller was idle), the
driver writes device registers on the controller ending
with a command for the controller to begin the actual
I/O.
B. If the controller was busy (doing work the driver gave it previously), the driver simply queues the current request (the driver
dequeues this request below).

3. The driver jumps to the scheduler indicating that the current process should be blocked.
4. The scheduler blocks A and runs (say) B.
5. B starts running.
6. An interrupt arrives (i.e., an I/O has been completed).
7. The interrupt handler invokes (the bottom part of) the driver.
A. The driver informs the main line perhaps passing data and surely passing status (error, OK).
B. The top part is called to start another I/O if the queue is nonempty. We know the controller is free. Why?
Answer: We just received an interrupt saying so.

8. The driver jumps to the scheduler indicating that process A should be made ready.
9. The scheduler picks a ready process to run. Assume it picks A.
10. A resumes in the driver, which returns to the main line, which returns to the user code.
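
A sketch of just the queueing logic of the two parts, under the self-service paradigm described above. All names are invented; start_io stands for the device-register writes of section 5.1.2, and the jumps to the scheduler are elided.

    #include <stddef.h>

    struct request { struct request *next; int sector; /* ... */ };

    static struct request *queue_head, *queue_tail;   /* pending requests */
    static int controller_busy;

    static void start_io(struct request *r)
    {
        /* write the device registers, ending with the go button (5.1.2) */
    }

    /* Top part: called from the main line OS on an I/O system call.
       The caller then jumps to the scheduler and blocks (steps 3-4). */
    void driver_top(struct request *r)
    {
        if (!controller_busy) {          /* step 2A: controller idle   */
            controller_busy = 1;
            start_io(r);
        } else {                         /* step 2B: queue the request */
            r->next = NULL;
            if (queue_tail) queue_tail->next = r; else queue_head = r;
            queue_tail = r;
        }
    }

    /* Bottom part: called from the interrupt handler on completion
       (step 7).  The controller is known to be free: we just received
       an interrupt saying so. */
    void driver_bottom(void)
    {
        /* ... pass status (and perhaps data) to the main line (7A) ... */
        if (queue_head != NULL) {        /* step 7B: start another I/O  */
            struct request *r = queue_head;
            queue_head = r->next;
            if (queue_head == NULL) queue_tail = NULL;
            start_io(r);
        } else {
            controller_busy = 0;
        }
    }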

Driver as a process (Tanenbaum) (less detailed than above)


A. The user issues an I/O request. The main line OS prepares a generic request (e.g. read, not read using Buslogic BT-958 SCSI
controller) for the driver and the driver is awakened (perhaps a message is sent to the driver to do both jobs).
1. The driver wakes up.
A. If the driver was idle (i.e., the controller is idle), the driver writes device registers on the controller ending with
a command for the controller to begin the actual I/O.
B. If the controller is busy (doing work the driver gave it), the driver simply queues the current request (the driver
dequeues this below).
2. The driver blocks waiting for an interrupt or for more requests.
B. An interrupt arrives (i.e., an I/O has been completed).
1. The driver wakes up.
2. The driver informs the main line perhaps passing data and surely passing status (error, OK).
3. The driver finds the next work item or blocks.
A. If the queue of requests is non-empty, dequeue one and proceed as if it had just arrived from the main line.
B. If queue is empty, the driver blocks waiting for an interrupt or a request from the main line.
5.2.4: Device-Independent I/O Software
The device-independent code does most of the functionality, but not necessarily most of the code since there can be many drivers all
doing essentially the same thing in slightly different ways due to slightly different controllers.
• Naming. Again an important OS functionality. Must offer a consistent interface to the device drivers. (A sketch of the
dispatch appears after this list.)
1. In Unix this is done by associating each device with a (special) file in the /dev directory.
2. The inodes for these files contain an indication that these are special files and also contain so called major and minor
device numbers.
3. The major device number gives the number of the driver. (These numbers are rather ad hoc; they correspond to the
position of the function pointer to the driver in a table of function pointers.)
4. The minor number indicates for which device (e.g., which SCSI CD-ROM drive) the request is intended.

• Protection. A wide range of possibilities is found in real systems, including both extremes: everything is
permitted and nothing is permitted (directly).
1. In MS-DOS any process can write to any file. Presumably, our offensive nuclear missile launchers do not run DOS.
2. In IBM and other mainframe OS's, normal processors do not access devices. Indeed the main CPU doesn't issue the
I/O requests. Instead an I/O channel is used and the mainline constructs a channel program and tells the channel to
invoke it.
3. Unix uses normal rwx bits on files in /dev (I don't believe x is used).

• Buffering is necessary since requests come in a size specified by the user and data is delivered in a size specified by the
device.
• Enforce exclusive access for non-shared devices like tapes.
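
Here is a minimal sketch of the dispatch mentioned under Naming: the major number indexes a table of driver entry points and the minor number is passed through to the chosen driver. Everything here (names, signatures, the stub driver) is invented for illustration.

    #include <stdio.h>

    struct driver_ops {
        int (*read)(int minor, char *buf, int n);
        /* write, open, etc. would go here too */
    };

    /* A stub standing in for a real disk driver. */
    static int disk_read(int minor, char *buf, int n)
    {
        printf("disk driver: read %d bytes from minor device %d\n", n, minor);
        return n;
    }

    static struct driver_ops disk_ops = { disk_read };

    /* The table of function pointers: indexed by major device number. */
    static struct driver_ops *dev_switch[] = { &disk_ops /* major 0 */ };

    /* Device-independent read: dispatch on major, pass minor through. */
    int dev_read(int major, int minor, char *buf, int n)
    {
        return dev_switch[major]->read(minor, buf, n);
    }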

5.2.5: User-Space Software


A good deal of I/O code is actually executed in user space. Some is in library routines linked into user programs and some is in
daemon processes.
• Some library routines are trivial and just move their arguments into the correct place (e.g., a specific register) and then issue a
trap to the correct system call to do the real work.
• Some, notably standard I/O (stdio) in Unix, are definitely not trivial. For example consider the formatting of floating point
numbers done in printf and the reverse operation done in scanf.
• Printing to a local printer is often performed in part by a regular program (lpr in Unix) and part by a daemon (lpd in Unix).
The daemon might be started when the system boots or might be started on demand. I guess it is called a daemon because it is
not under the control of any user. Does anyone know the reason?
• Printing uses spooling, i.e., the file to be printed is copied somewhere by lpr and then the daemon works with this copy. Mail
uses a similar technique (but generally it is called queuing, not spooling).

Homework: 6, 7, 8.

5.3: Disks
The ideal storage device is
• Fast
• Big (in capacity)
• Cheap
• Impossible
Disks are big and cheap, but slow.

5.3.1: Disk Hardware


Show a real disk opened up and illustrate the components
• Platter
• Surface
• Head
• Track
• Sector
• Cylinder
• Seek time
• Rotational latency
• Transfer time
Overlapping I/O operations is important. Many controllers can do overlapped seeks, i.e. issue a seek to one disk while another is
already seeking.
Despite what Tanenbaum says, modern disks cheat and do not have the same number of sectors on outer cylinders as on inner ones.
Often the controller ``covers for them'' and protects the lie.
Again contrary to Tanenbaum, it is not true that when one head is reading from cylinder C, all the heads can read from cylinder C
with no penalty.

Choice of block size


• We discussed this before when studying page size.
• Current commodity disk characteristics (not for laptops) result in about 15ms to transfer the first byte and 10K bytes per ms
for subsequent bytes (if contiguous).
1. Rotation rate is 5400, 7200, or 10,000 RPM (15K just now available).
2. Recall that 6000 RPM is 100 rev/sec or one rev per 10ms. So half a rev (the average time to rotate to a given point)
is 5ms.
3. Transfer rates are around 10MB/sec = 10KB/ms.
4. Seek time is around 10ms.
• This favors large blocks, 100KB or more.
• But the internal fragmentation would be severe since many files are small.
• Multiple block sizes have been tried, as have techniques to try to have consecutive blocks of a given file near each other.
• Typical block sizes are 4KB-8KB.
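
A quick sanity check with the numbers above (roughly 15ms to the first byte, 10KB/ms thereafter):

    time(1KB block)   ≈ 15ms + 0.1ms ≈ 15.1ms  →  effective rate ≈ 66KB/sec
    time(100KB block) ≈ 15ms + 10ms  = 25ms    →  effective rate = 4MB/sec

So a hundred-fold increase in block size buys roughly a sixty-fold increase in effective transfer rate, which is why large blocks look attractive until internal fragmentation is considered.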

5.3.2: Disk Arm Scheduling Algorithms


These algorithms are relevant only if there are several I/O requests pending. For many PCs this is not the case. For most commercial
applications, I/O is crucial.
• FCFS (First Come First Served): Simple but has long delays.
• Pick: Same as FCFS but pick up requests for cylinders that are passed on the way to the next FCFS request.
• SSTF (Shortest Seek Time First): Greedy algorithm. Can starve requests for outer cylinders and almost always favors middle
requests.
• Scan (Look, Elevator): The method used by an old fashioned jukebox (remember ``Happy Days'') and by elevators. The disk
arm proceeds in one direction picking up all requests until there are no more requests in this direction, at which point it goes
back the other direction. This favors requests in the middle, but can't starve any requests.
• C-Scan (C-look, Circular Scan/Look): Similar to Scan but only service requests when moving in one direction. When going
in the other direction, go directly to the furthest away request. This doesn't favor any spot on the disk. Indeed, it treats the
cylinders as though they were a clock, i.e. after the highest numbered cylinder comes cylinder 0.
• N-step Scan: This is what the natural implementation of Scan gives.
1. While the disk is servicing a Scan direction, the controller gathers up new requests and sorts them.
2. At the end of the current sweep, the new list becomes the next sweep.
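
A sketch of Scan/Look in C, assuming pending requests are just cylinder numbers kept in an array (all names invented). The arm services the closest request on its current side and reverses direction only when that side is empty.

    #include <stdbool.h>
    #include <stdlib.h>   /* abs */

    #define MAXREQ 100
    static int  pending[MAXREQ];   /* cylinders with pending requests */
    static int  npending;
    static bool moving_up = true;

    /* Pick the index of the next request to service, or -1 if none. */
    int next_request(int head_pos)
    {
        for (int pass = 0; pass < 2; pass++) {  /* current side, then turn */
            int best = -1;
            for (int i = 0; i < npending; i++) {
                bool on_side = moving_up ? pending[i] >= head_pos
                                         : pending[i] <= head_pos;
                if (on_side && (best < 0 ||
                        abs(pending[i] - head_pos) <
                        abs(pending[best] - head_pos)))
                    best = i;
            }
            if (best >= 0) return best;
            moving_up = !moving_up;             /* nothing this way: turn */
        }
        return -1;                              /* no requests at all */
    }

For C-Scan the second pass would instead jump to the furthest request on the other side and continue in the original direction.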

Minimizing Rotational Latency


Use Scan, which is the same as C-Scan. Why?
Because the disk only rotates in one direction.
Homework: 9, 10.

================ Start Lecture #13 ================

RAID (Redundant Array of Inexpensive Disks)


• Tanenbaum's treatment is not very good.
• The name RAID is from Berkeley.
• IBM changed the name to Redundant Array of Independent Disks. I wonder why?
• A simple form is mirroring, where two disks contain the same data.
• Another simple form is striping (interleaving), where consecutive blocks are spread across multiple disks. This helps
bandwidth, but is not redundant. Thus it shouldn't be called RAID, but it sometimes is.
• One of the normal RAID methods is to have N (say 4) data disks and one parity disk. Data is striped across the data disks
and the bitwise parity of these sectors is written in the corresponding sector of the parity disk.
• On a read, if the block is bad (e.g., if the entire disk is bad or even missing), the system automatically reads the other blocks in
the stripe and the parity block in the stripe. Then the missing block is just the bitwise exclusive or of all these blocks.
• For reads this is very good. The failure free case has no penalty (beyond the space overhead of the parity disk). The error
case requires N+1 (say 5) reads.
• A serious concern is the small write problem. Writing a sector requires 4 I/Os: read the old data sector, compute the change,
read the old parity, compute the new parity, write the new parity and the new data sector. Hence one sector I/O became 4, which is
a 300% penalty.
• Writing a full stripe is not bad. Compute the parity of the N (say 4) data sectors to be written and then write the data sectors
and the parity sector. Thus 4 sector I/Os become 5, which is only a 25% penalty and is smaller for larger N, i.e., larger stripes.
• A variation is to rotate the parity. That is, for some stripes disk 1 has the parity, for others disk 2, etc. The purpose is to not
have a single parity disk, since that disk is needed for all small writes and could become a point of contention.
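
Reconstruction is literally one XOR loop, which is worth seeing once. A sketch, assuming the failed disk is known and the surviving blocks of the stripe (including the parity block) have already been read in:

    #include <stddef.h>
    #include <stdint.h>

    /* The lost block is the bitwise XOR of the surviving blocks of the
       stripe (the parity block counts as a survivor), because the parity
       was computed as b0 ^ b1 ^ ... ^ b(N-1). */
    void reconstruct(uint8_t *missing, uint8_t *survivors[], int nsurvivors,
                     size_t blocksize)
    {
        for (size_t i = 0; i < blocksize; i++) {
            uint8_t x = 0;
            for (int d = 0; d < nsurvivors; d++)
                x ^= survivors[d][i];
            missing[i] = x;
        }
    }

The same identity explains the small write: new parity = old parity ^ old data ^ new data, which is exactly why the two reads and two writes are needed.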

5.3.3: Error Handling


Disk error rates have dropped in recent years. Moreover, bad block forwarding is done by the controller (or disk electronics) so this
topic is no longer as important for the OS.

5.3.4: Track Caching


Often the disk/controller caches a track, since the seek penalty has already been paid. In fact modern disks have megabyte caches that
hold recently read blocks. Since modern disks cheat and don't have the same number of blocks on each track, it is better for the disk
electronics (and not the OS or controller) to do the caching since it is the only part of the system to know the true geometry.

5.3.5: RAM Disks


• Fairly clear. Organize a region of memory as a set of blocks and pretend it is a disk.
• A problem is that memory is volatile.
• Often used during OS installation, before disk drivers are available (there are many types of disk but all memory looks the
same, so only one RAM disk driver is needed).
5.4: Clocks
Also called timers.

5.4.1: Clock Hardware


• Generates an interrupt when the timer goes to zero.
• Counter reload can be automatic or under software (OS) control.
• If done automatically, the interrupt occurs periodically and thus is perfect
for generating a clock interrupt at a fixed period.

5.4.2: Clock Software


• TOD: Bump a counter each tick (clock interrupt). If the counter is only 32 bits, must worry about overflow, so keep two counters:
low order and high order.
• Time quantum for RR: Decrement a counter at each tick. The quantum expires when the counter is zero. Load this counter when
the scheduler runs a process.
• Accounting: At each tick, bump a counter in the process table entry for the currently running process.
• Alarm system call and system alarms (a sketch appears after this list):
1. Users can request an alarm at some future time and the
system also needs to do things at specific future times
(e.g. turn off the floppy motor).
2. The conceptually simplest solution is to have one timer
for each event. Instead, we simulate many timers with
just one.
3. The data structure on the right works well.
4. The time in each list entry is the time after the
preceding entry that this entry's alarm is to ring. For
example, if the time is zero, this event occurs at the
same time as the previous event. The other entry is a
pointer to the action to perform.
5. At each tick, decrement next-signal.
6. When next-signal goes to zero, process the first entry on the list and any others following immediately after with a
time of zero (which means they are to be simultaneous with this alarm). Then set next-signal to the value in the next
alarm.

• Profiling
1. Want a histogram giving how much time was spent in each 1KB (say) block of code.
2. At each tick check the PC and bump the appropriate counter.
3. At the end of the run can assign the 1K blocks to software modules.
4. If one uses fine granularity (say 10B instead of 1KB), one gets higher accuracy but more memory overhead.
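
Here is the promised sketch of the delta list: only the head's count (``next-signal'') is decremented each tick, and everything chained behind it with a delta of zero rings at the same time. Insertion (which must split deltas) is omitted; all names are invented.

    #include <stdlib.h>

    struct alarm {
        int ticks_after_prev;     /* delta from the preceding entry   */
        void (*action)(void);     /* what to do when this alarm rings */
        struct alarm *next;
    };

    static struct alarm *alarms;  /* head's delta is ``next-signal''  */

    void clock_tick(void)
    {
        if (alarms == NULL) return;
        if (--alarms->ticks_after_prev > 0) return;
        /* ring the head, and any entries behind it with delta zero */
        while (alarms != NULL && alarms->ticks_after_prev <= 0) {
            struct alarm *a = alarms;
            alarms = a->next;
            a->action();
            free(a);
        }
    }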
Homework: 12

5.5: Terminals
5.5.1: Terminal Hardware
Quite dated. It is true that modern systems can communicate with a hardwired ASCII terminal, but most don't. Serial ports are used, but
they are normally connected to modems and then some protocol (SLIP, PPP) is used, not just a stream of ASCII characters.

5.5.2: Memory-Mapped Terminals


• Less dated. But it still discusses the character interface, not the graphics interface.
• Today, the idea is to have the software write into video memory the bits to be put on the screen and then the graphics
controller converts these bits to analog signals for the monitor (actually laptop displays and very modern monitors are digital).
• But it is much more complicated than this. The graphics controllers can do a great deal of video themselves (like filling).
• This is a subject that would take many lectures to do well.

Keyboards
Tanenbaum's description of keyboards is correct.
• At each key press and key release a code is written into the keyboard controller and the computer is interrupted.
• By remembering which keys have been depressed and not released, the software can determine Cntl-A, Shift-B, etc.

5.5.3: Input Software


• We are just looking at keyboard input. Once again graphics is too involved to be treated here.
• There are two fundamental modes of input, sometimes called raw and cooked.
• In raw mode the application sees every ``character'' the user types. Indeed, raw mode is character oriented.
1. All the OS does is convert the keyboard ``scan codes'' to ``characters'' and pass these characters to the application.
2. Some examples
A. down-cntl down-x up-x up-cntl is converted to cntl-x
B. down-cntl up-cntl down-x up-x is converted to x
C. down-cntl down-x up-cntl up-x is converted to cntl-x (I just tried it to be sure).
D. down-x down-cntl up-x up-cntl is converted to x
3. Full screen editors use this mode.
• Cooked mode is line oriented. The OS delivers lines to the application program.
1. Special characters are interpreted as editing characters (erase-previous-character, erase-previous-word, kill-line, etc).
2. Erased characters are not seen by the application but are erased by the keyboard driver.
3. Need an escape character so that the editing characters can be passed to the application if desired.
4. The cooked characters must be echoed (what should one do if the application is also generating output at this time?)
• The (possibly cooked) characters must be buffered until the application issues a read (and an end-of-line EOL has been
received for cooked mode).
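
A toy raw-mode sketch of the press/release bookkeeping, with invented scan codes and a one-key ``keymap'': the driver remembers the modifier state and applies it only when an ordinary key is pressed, which reproduces the four example sequences above.

    #include <stdbool.h>

    enum { KEY_CTRL, KEY_SHIFT, KEY_X };   /* toy scan codes */

    static bool ctrl_down, shift_down;

    /* Convert one key event to a character, or -1 if the event produces
       no character (releases, and the modifier keys themselves). */
    int scan_event(int key, bool press)
    {
        if (key == KEY_CTRL)  { ctrl_down  = press; return -1; }
        if (key == KEY_SHIFT) { shift_down = press; return -1; }
        if (!press)
            return -1;                  /* an up event by itself: nothing */
        int c = 'x';                    /* toy keymap: one letter key     */
        if (shift_down) c = c - 'a' + 'A';
        if (ctrl_down)  c &= 0x1f;      /* cntl-x: clear the high bits    */
        return c;
    }

For instance, down-cntl down-x up-x up-cntl yields cntl-x because cntl is still down when x is pressed, while down-cntl up-cntl down-x up-x yields plain x.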

5.5.4: Output Software


Again too dated and the truth is too complicated to deal with in a few minutes.
Homework: 16.
Chapter 6: Deadlocks
A deadlock occurs when every member of a set of processes is waiting
for an event that can only be caused by a member of the set.
Often the event waited for is the release of a resource.
In the automotive world deadlocks are called gridlocks.
• The processes are the cars.
• The resources are the spaces occupied by the cars.
Reward: One point extra credit on the final exam for anyone who brings a
real (e.g., newspaper) picture of an automotive deadlock. You must bring
the clipping to the final and it must be in good condition. Hand it in with
your exam paper.
For a computer science example consider two processes A and B that each
want to print a file currently on tape.
1. A has obtained ownership of the printer and will release it after
printing one file.
2. B has obtained ownership of the tape drive and will release it after
reading one file.
3. A tries to get ownership of the tape drive, but is told to wait for B
to release it.
4. B tries to get ownership of the printer, but is told to wait for A to
release the printer.
Bingo: deadlock!

6.1: Resources
The resource is the object granted to a process.
• Resources come in two types
1. Preemptable, meaning that the resource can be taken away from its current owner (and given back later). An example is
memory.
2. Non-preemptable, meaning that the resource cannot be taken away. An example is a printer.
• The interesting issues arise with non-preemptable resources so those are the ones we study.
• Life history of a resource is a sequence of
1. Request
2. Allocate
3. Use
4. Release
• Processes make requests, use the resource, and release the resource. The allocate decisions are made by the system and we will study
the policies used to make these decisions.

6.2: Deadlocks
To repeat: A deadlock occurs when every member of a set of processes is waiting for an event that can only be caused by a member of the
set.
Often the event waited for is the release of a resource.

6.2.1: (Necessary) Conditions for Deadlock


The following four conditions (Coffman; Havender) are necessary but not sufficient for deadlock. Repeat: They are not sufficient.
1. Mutual exclusion: A resource can be assigned to at most one process at a time (no sharing).
2. Hold and wait: A process holding a resource is permitted to request another.
3. No preemption: A process must release its resources; they cannot be taken away.
4. Circular wait: There must be a chain of processes such that each member of the chain is waiting for a resource held by the next
member of the chain.

6.2.2: Deadlock Modeling


On the right is the Resource Allocation Graph, also called the Reusable Resource
Graph.
• The processes are circles.
• The resources are squares.
• An arc (directed line) from a process P to a resource R signifies that process P
has requested (but not yet been allocated) resource R.
• An arc from a resource R to a process P indicates that process P has been
allocated resource R.
Homework: 1.
Consider two concurrent processes P1 and P2 whose programs are:

    P1: request R1        P2: request R2
        request R2            request R1
        release R2            release R1
        release R1            release R2

On the board draw the resource allocation graph for various possible executions of the processes, indicating when deadlock occurs and when
deadlock is no longer avoidable.
There are four strategies used for dealing with deadlocks.
1. Ignore the problem
2. Detect deadlocks and recover from them
3. Avoid deadlocks by carefully deciding when to allocate resources.
4. Prevent deadlocks by violating one of the 4 necessary conditions.

6.3: Ignoring the problem--The Ostrich Algorithm


The ``put your head in the sand'' approach.
• If the likelihood of a deadlock is sufficiently small and the cost of avoiding a deadlock is sufficiently high it might be better to ignore
the problem. For example if each PC deadlocks once per 100 years, the one reboot may be less painful than the restrictions needed to
prevent it.
• Clearly not a good philosophy for nuclear missile launchers.
• For embedded systems (e.g., missile launchers) the programs run are fixed in advance so many of the questions Tanenbaum raises
(such as many processes wanting to fork at the same time) don't occur.

6.4: Detecting Deadlocks and Recovering from them


6.4.1: Detecting Deadlocks with single unit resources
Consider the case in which there is only one instance of each resource.
• So a request can be satisfied by only one specific resource.
• In this case the 4 necessary conditions for deadlock are also sufficient.
• Remember we are making an assumption (single unit resources) that is often invalid. For example, many systems have several
printers and a request is given for ``a printer'' not a specific printer. Similarly, one can have many tape drives.
• So the problem comes down to finding a directed cycle in the resource allocation graph. Why?
Answer: Because the other three conditions are either satisfied by the system we are studying or are not, in which case deadlock is not
a question. That is, conditions 1, 2, 3 are conditions on the system in general, not on what is happening right now.
To find a directed cycle in a directed graph is not hard. The algorithm is in the book. The idea is simple (a sketch follows the list).
1. For each node in the graph do a depth first traversal (hoping the graph is a DAG, a directed acyclic graph), building a list as you go
down the DAG.
2. If you ever find the same node twice on your list, you have found a directed cycle and the graph is not a DAG and deadlock exists
among the processes in your current list.
3. If you never find the same node twice, the graph is a DAG and no deadlock occurs.
4. The searches are finite since the list size is bounded by the number of nodes.
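
A sketch of the idea in C, for a graph on nodes 0..n-1 held in an adjacency matrix. on_path plays the role of the list: finding a node already on the current path is exactly finding it twice. (The naive version below, like the description above, restarts from every node; a production version would also remember nodes already proven cycle-free.)

    #include <stdbool.h>

    #define MAXN 64
    static bool edge[MAXN][MAXN];   /* edge[u][v]: an arc from u to v   */
    static bool on_path[MAXN];      /* is the node on the current list? */
    static int  n;                  /* number of nodes in the graph     */

    static bool dfs(int u)
    {
        if (on_path[u]) return true;        /* same node twice: a cycle */
        on_path[u] = true;                  /* push onto the list       */
        for (int v = 0; v < n; v++)
            if (edge[u][v] && dfs(v)) return true;
        on_path[u] = false;                 /* pop: this path was clean */
        return false;
    }

    bool has_cycle(void)
    {
        for (int u = 0; u < n; u++)         /* try every starting node   */
            if (dfs(u)) return true;        /* cycle: deadlock exists    */
        return false;                       /* the graph is a DAG        */
    }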

6.4.2: Detecting Deadlocks with multiple unit resources


This is more difficult.
• The figure on the right shows a resource allocation graph with multiple unit resources.
• Each unit is represented by a dot in the box.
• Request edges are drawn to the box since they represent a request for any dot in the box.
• Allocation edges are drawn from the dot to represent that this unit of the resource has been assigned (but
all units of a resource are equivalent and the choice of which one to assign is arbitrary).
• Note that there is a directed cycle in black, but there is no deadlock. Indeed the middle process might
finish, erasing the magenta arc and permitting the blue dot to satisfy the rightmost process.
• The book gives an algorithm for detecting deadlocks in this more general setting (a sketch appears after this list). The idea is as follows.
1. Look for a process that might be able to terminate (i.e., all its request arcs can be satisfied).
2. If one is found, pretend that it does terminate (erase all its arcs), and repeat step 1.
3. If any processes remain, they are deadlocked.
• We will soon do in detail an algorithm (the Banker's algorithm) that has some of this flavor.
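
A sketch of the three steps above in C, with the graph summarized as the usual request and allocation matrices (invented names; sizes are arbitrary). Any process still unfinished when no further progress is possible is deadlocked.

    #include <stdbool.h>

    #define NPROC 10   /* processes      */
    #define NRES   5   /* resource types */

    int request[NPROC][NRES];   /* outstanding requests      */
    int alloc[NPROC][NRES];     /* units currently allocated */
    int avail[NRES];            /* units currently free      */

    bool deadlock_exists(void)
    {
        bool done[NPROC] = { false };
        for (;;) {
            int p;
            for (p = 0; p < NPROC; p++) {   /* step 1: find a process all */
                if (done[p]) continue;      /* of whose requests fit      */
                bool ok = true;
                for (int r = 0; r < NRES; r++)
                    if (request[p][r] > avail[r]) ok = false;
                if (ok) break;
            }
            if (p == NPROC) break;          /* nobody can proceed: stop   */
            for (int r = 0; r < NRES; r++)  /* step 2: pretend p finishes */
                avail[r] += alloc[p][r];    /* and returns its resources  */
            done[p] = true;
        }
        for (int p = 0; p < NPROC; p++)     /* step 3: leftovers are      */
            if (!done[p]) return true;      /* deadlocked                 */
        return false;
    }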

6.4.3: Recovery from deadlock

Preemption
Perhaps you can temporarily preempt a resource from a process. Not likely.
Rollback
Database (and other) systems take periodic checkpoints. If the system does take checkpoints, one can roll back to a checkpoint whenever a
deadlock is detected. Somehow must guarantee forward progress.

Kill processes
Can always be done but might be painful. For example some processes have had effects that can't be simply undone. Print, launch a missile,
etc.

================ Start Lecture #14 ================

Remark: We are doing 6.6 before 6.5 since 6.6 is easier.

6.6: Deadlock Prevention


Attack one of the Coffman/Havender conditions.

6.6.1: Attacking Mutual Exclusion


The idea is to use spooling instead of mutual exclusion. Not possible for many kinds of resources.

6.6.2: Attacking Hold and Wait


Require each process to request all resources at the beginning of the run. This is often called One Shot.

6.6.3: Attacking No Preempt


Normally not possible.

6.6.4: Attacking Circular Wait


Establish a fixed ordering of the resources and require that they be requested in this order. So if a process holds resources #34 and #54, it can
request only resources #55 and higher.
It is easy to see that a cycle is no longer possible.
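
A tiny illustration with two pthread mutexes standing in for resources #34 and #54 (names invented). Because every thread takes the lower-numbered lock first, the cycle needed for deadlock (each holding one and waiting for the other) cannot form.

    #include <pthread.h>

    pthread_mutex_t r34 = PTHREAD_MUTEX_INITIALIZER;  /* resource #34 */
    pthread_mutex_t r54 = PTHREAD_MUTEX_INITIALIZER;  /* resource #54 */

    void *worker(void *arg)
    {
        pthread_mutex_lock(&r34);    /* always lower-numbered first */
        pthread_mutex_lock(&r54);
        /* ... use both resources ... */
        pthread_mutex_unlock(&r54);
        pthread_mutex_unlock(&r34);
        return arg;
    }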
6.5: Deadlock Avoidance
Let's see if we can tiptoe through the tulips and avoid deadlock states even though our system does permit all four of the necessary
conditions for deadlock.
An optimistic resource manager is one that grants every request as soon as it can. To avoid deadlocks with all four conditions present, the
manager must be smart not optimistic.

6.5.1 Resource Trajectories


• We have two processes H (horizontal) and V (vertical).
• The origin represents them both starting.
• Their combined state is a point on the graph.
• The parts where the printer and plotter are needed by each process are
indicated.
• The dark green is where both processes have the plotter and hence
execution cannot reach this point.
• Light green represents both having the printer; also impossible.
• Pink is both having both printer and plotter; impossible.
• Gold is possible (H has plotter, V has printer), but you can't get there.
• The upper right corner is the goal: both processes finished.
• The red dot is ... (cymbals) deadlock. We don't want to go there.
• The cyan is safe. From anywhere in the cyan we have horizontal and
vertical moves to the finish line without hitting any impossible area.
• The magenta interior is very interesting. It is
• Possible: each process has a different resource
• Not deadlocked: each process can move within the magenta
• Deadly: deadlock is unavoidable. You will hit a magenta-green
boundary and then will have no choice but to turn and go to the red
dot.
• The cyan-magenta border is the danger zone
• The dashed line represents a possible execution pattern.
• With a uniprocessor no diagonals are possible. We either move to the right meaning H is executing or move up indicating V.
• The trajectory shown represents:
• H executing a little.
• V executing a little.
• H executes; requests the printer; gets it; executes some more.
• V executes; requests the plotter.
• The crisis is at hand!
• If the resource manager gives V the plotter, the magenta has been entered and all is lost. ``Abandon all hope ye who enter here''
--Dante.
• The right thing to do is to deny the request, let H execute moving horizontally under the magenta and dark green. At the end of the
dark green, no danger remains, both processes will complete successfully. Victory!
• This procedure is not practical for a general purpose OS since it requires knowing the programs in advance. That is, the resource
manager knows in advance what requests each process will make and in what order.

6.5.2: Safe States


Avoiding deadlocks given some extra knowledge.
• Not surprisingly, the resource manager knows how many units of each resource it had to begin with.
• Also it knows how many units of each resource it has given to each process.
• It would be great to see all the programs in advance and thus know all future requests, but that is asking for too much.
• Instead, each process when it starts gives its maximum usage. That is, each process at startup states, for each resource, the maximum
number of units it can possibly ask for. This is called the claim of the process.
• If during the run the process asks for more than its claim, abort it.
• If it claims more than it needs, the result is that the resource manager will be more conservative than need be and there will be
more waiting.
Definition: A state is safe if one can find an ordering of the processes such that: if the processes are run in this order, they will all
terminate (assuming none exceeds its claim).
Give an example of all four possibilities. A state that is
1. Safe and deadlocked
2. Safe and not deadlocked
3. Not safe and deadlocked
4. Not safe and not deadlocked
A manager can determine if a state is safe.
• Since the manager knows all the claims, it can determine the maximum amount of additional resources each process can request.
• The manager knows how many units of each resource it has left.
The manager then follows the following procedure, which is part of the Banker's Algorithm discovered by Dijkstra, to determine if the state is
safe.
1. If there are no processes remaining, the state is safe.
2. Seek a process P whose max additional requests can be satisfied by what remains (for each resource).
• If no such process can be found, then the state is not safe.
• The banker (manager) knows that if it refuses all requests except those from P, then it will be able to satisfy all of P's
requests. Why?
Look at how P was chosen.

3. The banker now pretends that P has terminated (since the banker knows that it can guarantee this will happen). Hence the banker
pretends that all of P's currently held resources are returned. This makes the banker richer and hence perhaps a process that was not
eligible to be chosen as P previously, can now be chosen.
4. Repeat these steps.
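
A sketch of the procedure for the single-resource case of section 6.5.3, initialized with the numbers of Example 1 below (all names invented). It is essentially the detection loop of 6.4.2, but run against the worst case, claim - current, rather than the actual outstanding requests.

    #include <stdbool.h>

    #define NPROC 3   /* X, Y, Z as in Example 1 below */

    int claim[NPROC]   = { 3, 11, 19 };
    int current[NPROC] = { 1,  5, 10 };
    int available      = 6;

    bool state_is_safe(void)
    {
        bool done[NPROC] = { false };
        int  free_units  = available;
        for (int round = 0; round < NPROC; round++) {
            int p;                          /* step 2: seek a process P   */
            for (p = 0; p < NPROC; p++)     /* whose max additional need  */
                if (!done[p] && claim[p] - current[p] <= free_units)
                    break;                  /* fits in what remains       */
            if (p == NPROC) return false;   /* no such P: not safe        */
            free_units += current[p];       /* step 3: pretend P finishes */
            done[p] = true;
        }
        return true;                        /* step 1: all done: safe     */
    }

With the values above the check succeeds in the order X, Y, Z; changing current to { 1, 5, 12 } and available to 4 (Example 2 below) makes it fail after X.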

Example 1
• One resource type R with 22 units.
• Three processes X, Y, and Z with claims 3, 11, and 19 respectively.
• Currently the processes have 1, 5, and 10 units respectively.

    process   claim   current
    X           3        1
    Y          11        5
    Z          19       10
    total               16

• Hence the manager currently has 6 units left.
• Also note that the max additional needs for the processes are 2, 6, and 9.
• So the manager cannot assure (with its current remaining supply of 6 units) that Z can terminate. But that is
not the question.
• This state is safe
1. Use 2 units to satisfy X; now the manager has 7 units.
2. Use 6 units to satisfy Y; now the manager has 12 units.
3. Use 9 units to satisfy Z; done!

Example 2
Assume that Z now requests 2 units and we grant them.
• Currently the processes have 1, 5, and 12 units respectively.

    process   claim   current
    X           3        1
    Y          11        5
    Z          19       12
    total               18

• The manager has 4 units.
• The max additional needs are 2, 6, and 7.
• This state is unsafe
1. Use 2 units to satisfy X; now the manager has 5 units.
2. Y needs 6 and Z needs 7, so we can't guarantee satisfying either.
• Note that we were able to find a process that can terminate (X) but then we were stuck. So it is not enough to find one process. Must
find a sequence of all the processes.
Remark: An unsafe state is not necessarily a deadlocked state. Indeed, if one gets lucky all processes may terminate successfully. A safe
state means that the manager can guarantee that no deadlock will occur.

6.5.3: The Banker's Algorithm (Dijkstra) for a Single Resource


The algorithm is simple: Stay in safe states.
• Check before any process starts that the state is safe (this means that no process claims more than the manager has). If not, this
process is trying to claim more than the system has, so it cannot be run.
• When the manager receives a request, it pretends to grant it and checks if the resulting state is safe. If it is safe, the request is granted;
if not, the process is blocked.
• When a resource is returned, the manager checks to see if any of the pending requests can be granted (i.e., if the result would now be
safe). If so the request is granted and the manager checks to see if another can be granted.

6.5.4: The Banker's Algorithm for Multiple Resources


At a high level the algorithm is identical: Stay in safe states.
• What is a safe state?
• The same definition (if processes are run in a certain order they will all terminate).
• Checking for safety is the same idea as above. The difference is that to tell if there are enough free resources for a process to terminate,
the manager must check that for all resources, the number of free units is at least equal to the max additional need of the process.

Limitations of the banker's algorithm


• Often users don't know the maximum requests a process will make. They can estimate conservatively (i.e., use big numbers for the
claim) but then the manager becomes very conservative.
• New processes arriving cause a problem (but not so bad as Tanenbaum suggests).
• The process's claim must be less than the total number of units of the resource in the system. If not, the process is not accepted
by the manager.
• Since the state without the new process is safe, so is the state with the new process! Just use the order you had originally and
put the new process at the end.
• Ensuring fairness (starvation freedom) needs a little more work, but isn't too hard either (once an hour stop taking new
processes until all current processes finish).
• A resource becoming unavailable (e.g., a tape drive breaking), can result in an unsafe state.
Homework: 11, 14 (do not hand in).

6.7: Other Issues


6.7.1: Two-phase locking
This is covered (MUCH better) in a database text. We will skip it.

6.7.2: Non-resource deadlocks


You can get deadlock from semaphores as well as resources. This is trivial. Semaphores can be considered resources. P(S) is request S and
V(S) is release S. The manager is the module implementing P and V. When the manager returns from P(S), it has granted the resource S.

6.7.3: Starvation
As usual FCFS is a good cure. Often this is done by priority aging and picking the highest priority process to get the resource. Also one can
periodically stop accepting new processes until all old ones get their resources.

The End: Good luck on the final!
