
Operating Systems

Chapter 3: Memory Management


Also called storage management or space management.
Memory management must deal with the storage hierarchy present in modern machines.
• Registers, cache, central memory, disk, tape (backup)
• Move data from level to level of the hierarchy.
• How should we decide when to move data up to a higher level?
• Fetch on demand (e.g. demand paging, which is dominant now).
• Prefetch
• Read-ahead for file I/O.
• Large cache lines and pages.
• Extreme example. Entire job present whenever running.
We will see in the next few lectures that there are three independent decisions:
1. Segmentation (or no segmentation)
2. Paging (or no paging)
3. Fetch on demand (or no fetching on demand)
Memory management implements address translation.
• Convert virtual addresses to physical addresses
• Also called logical to real address translation.
• A virtual address is the address expressed in the program.
• A physical address is the address understood by the computer hardware.
• The translation from virtual to physical addresses is performed by the Memory Management Unit (MMU).
• Another example of address translation is the conversion of relative addresses to absolute addresses by the linker.
• The translation might be trivial (e.g., the identity) but not in a modern general purpose OS.
• The translation might be difficult (i.e., slow).
• Often includes additions/shifts/masks--not too bad.
• Often includes memory references.
• VERY serious.
• Solution is to cache translations in a Translation Lookaside Buffer (TLB). Sometimes called a translation buffer
(TB).
Homework: 7.

When is address translation performed?


1. At compile time
• Compiler generates physical addresses.
• Requires knowledge of where the compilation unit will be loaded.
• No linker.
• Loader is trivial.
• Primitive.
• Rarely used (MS-DOS .COM files).

2. At link-edit time (the ``linker lab'')


• Compiler
• Generates relocatable addresses for each compilation unit.
• References external addresses.
• Linkage editor
• Converts the relocatable addresses to absolute.
• Resolves external references.
• Misnamed ld by unix.
• Also converts virtual to physical addresses by knowing where the linked program will be loaded. Linker lab ``does''
this, but it is trivial since we assume the linked program will be loaded at 0.
• Loader is still trivial.
• Hardware requirements are small.
• A program can be loaded only where specified and cannot move once loaded.
• Not used much any more.

3. At load time
• Similar to at link-edit time, but do not fix the starting address.
• Program can be loaded anywhere.
• Program can move but cannot be split.
• Need modest hardware: base/limit registers.
• Loader sets the base/limit registers.

4. At execution time
• Addresses translated dynamically during execution.
• Hardware needed to perform the virtual to physical address translation quickly.
• Currently dominates.
• Much more information later.

Extensions
• Dynamic Loading
• When executing a call, check if module is loaded.
• If not loaded, call linking loader to load it and update tables.
• Slows down calls (indirection) unless you rewrite code dynamically.
• Not used much.
• Dynamic Linking
• The traditional linking described above is today often called static linking.
• With dynamic linking, frequently used routines are not linked into the program. Instead, just a stub is linked.
• When the routine is called, the stub checks to see if the real routine is loaded (it may have been loaded by another program).
• If not loaded, load it.
• If already loaded, share it. This needs some OS help so that different jobs sharing the library don't overwrite each
other's private memory.
• Advantages of dynamic linking.
• Saves space: Routine only in memory once even when used many times.
• Bug fix to dynamically linked library fixes all applications that use that library, without having to relink the
application.
• Disadvantages of dynamic linking.
• New bugs in dynamically linked library infect all applications.
• Applications ``change'' even when they haven't changed.
Note: I will place ** before each memory management scheme.

3.1: Memory management without swapping or paging


Entire process remains in memory from start to finish and does not move.
The sum of the memory requirements of all jobs in the system cannot exceed the size of physical memory.
** 3.1.1: Monoprogramming without swapping or paging (Single User)
The ``good old days'' when everything was easy.
• No address translation done by the OS (i.e., address translation is
not performed dynamically during execution).
• Either reload the OS for each job (or don't have an OS, which is
almost the same), or protect the OS from the job.
• One way to protect (part of) the OS is to have it in ROM.
• Of course, must have the OS (read-write) data in RAM.
• Can have a separate OS address space only accessible in
supervisor mode.
• Might just put some drivers in ROM (BIOS).
• The user employs overlays if the memory needed by a job exceeds
the size of physical memory.
• Programmer breaks program into pieces.
• A ``root'' piece is always memory resident.
• The root contains calls to load and unload various pieces.
• Programmer's responsibility to ensure that a piece is already
loaded when it is called.
• No longer used, but we couldn't have gotten to the moon in
the 60s without it (I think).
• Overlays have been replaced by dynamic address translation
and other features (e.g., demand paging) that have the
system support logical address sizes greater than physical address sizes.
• Fred Brooks (leader of IBM's OS/360 project and author of ``The mythical man month'') remarked that the OS/360 linkage
editor was terrific, especially in its support for overlays, but by the time it came out, overlays were no longer used.

3.1.2: Multiprogramming and Memory Usage


Goal is to improve CPU utilization by overlapping CPU and I/O.
• Consider a job that is unable to compute (i.e., it is waiting for I/O) a fraction p of the time.
• Then, with monoprogramming, the CPU utilization is 1-p.
• Note that p is often > .5 so CPU utilization is poor.
• But, if the probability that a job is waiting for I/O is p and n jobs are in memory, then the probability that all n are waiting for I/O is
approximately p^n.
• So, with a multiprogramming level (MPL) of n, the CPU utilization is approximately 1-p^n.
• If p=.5 and n=4, then 1-p^n = 15/16, which is much better than 1/2, which would occur for monoprogramming (n=1).
• This is a crude model, but it is correct that increasing MPL does increase CPU utilization up to a point.
• The limitation is memory, which is why we discuss it here instead of process management. That is, we must have many jobs loaded at
once, which means we must have enough memory for them. There are other issues as well and we will discuss them.
• Some of the CPU utilization is time spent in the OS executing context switches so the gains are not as great as the crude model
predicts.
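The crude model is easy to check numerically. Here is a tiny C program (my addition, not from Tanenbaum) that tabulates the predicted utilization 1-p^n:

#include <stdio.h>
#include <math.h>

int main(void)
{
    double p = 0.5;                 /* fraction of time a job waits for I/O */
    for (int n = 1; n <= 6; n++)    /* n = multiprogramming level (MPL) */
        printf("MPL %d: utilization = %.4f\n", n, 1.0 - pow(p, n));
    return 0;
}

For p=.5 it prints .5, .75, .875, .9375 (the 15/16 above), and so on.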
Homework: 1, 3.

3.1.3: Multiprogramming with fixed partitions


• This was used by IBM for system 360 OS/MFT (multiprogramming with a
fixed number of tasks).
• Can have a single input queue instead of one for each partition.
• So that if there are no big jobs, the big partition can be used for little jobs.
• But I don't think IBM did this.
• Can think of the input queue(s) as the ready list(s) with a scheduling
policy of FCFS in each partition.
• The partition boundaries are not movable (must reboot to move a job).
• MFT can have large internal fragmentation, i.e., wasted space
inside a region
• Each process has a single ``segment'' (we will discuss segments later)
• No sharing between processes.
• No dynamic address translation.
• At load time must ``establish addressability''.
• i.e. must set a base register to the location at which the process
was loaded (the bottom of the partition).
• The base register is part of the programmer visible register set.
• This is an example of address translation during load time.
• Also called relocation.
• Storage keys are adequate for protection (IBM method).
• Alternative protection method is base/limit registers.
• An advantage of base/limit is that it is easier to move a job.
• But MFT didn't move jobs so this disadvantage of storage keys is moot.
• Tanenbaum says jobs were ``run to completion''. This must be wrong as that
would mean monoprogramming.
• He probably means that jobs not swapped out and each queue is FCFS without preemption.

================ Start Lecture #10 ================

Note: Typo on lab 2. The last input set has a process with a zero value for IO, which is an error. Specifically, you should replace
5 (0 3 200 3) (0 9 500 3) (0 20 500 3) (100 1 100 3) (100 100 500 3)
with
5 (0 3 200 3) (0 9 500 3) (0 20 500 3) (100 1 100 3) (100 100 500 3)

End of Note.

3.2: Swapping
Moving entire processes between disk and memory is called swapping.

3.2.1: Multiprogramming with variable partitions


• Both the number and size of the partitions change with time.
• IBM OS/MVT (multiprogramming with a varying number of tasks).
• Also early PDP-10 OS.
• Job still has only one segment (as with MFT) but now can be of any size up to the size of the machine and can change with time.
• A single ready list.
• Job can move (might be swapped back in a different place).
• This is dynamic address translation (during run time).
• Must perform an addition on every memory reference (i.e. on every address translation) to add the start address of the partition.
• Called a DAT (dynamic address translation) box by IBM.
• Eliminates internal fragmentation.
• Find a region the exact right size (leave a hole for the remainder).
• Not quite true, can't get a piece with 10,755 bytes. Would get say 10,760. But internal fragmentation is much reduced
compared to MFT. Indeed, we say that internal fragmentation has been eliminated.
• Introduces external fragmentation, i.e., holes outside any region.
• What do you do if no hole is big enough for the request?
• Can compactify
• Transition from bar 3 to bar 4 in diagram below.
• This is expensive.
• Not suitable for real time (MIT ping pong).
• Can swap out one process to bring in another
• Bars 5-6 and 6-7 in diagram

Homework: 4
• There are more processes than holes. Why?
• Because next to a process there might be a process or a hole but next to a hole there must be a process
• So can have ``runs'' of processes but not of holes
• If, after a process, it is equally likely to have a process or a hole, you get about twice as many processes as holes.
• Base and limit registers are used.
• Storage keys not good since compactifying would require changing many keys.
• Storage keys might need a fine granularity to permit the boundaries to move by small amounts. Hence many keys would need to
be changed.

MVT Introduces the ``Placement Question'', which hole (partition) to choose


• Best fit, worst fit, first fit, circular first fit, quick fit, Buddy
• Best fit doesn't waste big holes, but does leave slivers and is expensive to run.
• Worst fit avoids slivers, but eliminates all big holes so a big job will require compaction. Even more expensive than best fit
(best fit stops if it finds a perfect fit).
• Quick fit keeps lists of some common sizes (but has other problems, see Tanenbaum).
• Buddy system
• Round request to next highest power of two (causes internal fragmentation).
• Look in list of blocks this size (as with quick fit).
• If list empty, go higher and split into buddies.
• When returning coalesce with buddy.
• Do splitting and coalescing recursively, i.e. keep coalescing until can't and keep splitting until successful.
• See Tanenbaum for more details (or an algorithms book).
• A current favorite is circular first fit (also known as next fit)
• Use the first hole that is big enough (first fit) but start looking where you left off last time.
• Doesn't waste time constantly trying to use small holes that have failed before, but does tend to use many of the big holes,
which can be a problem.
• Buddy comes with its own implementation. How about the others?
• Bit map
• The only question is how much memory one bit represents.
• Big: Serious internal fragmentation
• Small: Many bits to store and process
• Linked list
• Each item on the list says whether it is a hole or a process, and gives the length and starting location
• The items on the list are not taken from the memory to be used by processes
• Keep in order of starting address
• Double linked
• Boundary tag
• Knuth
• Use the same memory for list items as for processes
• Don't need an entry in the linked list for blocks in use, just the available blocks are linked
• For the blocks currently in use, just need a hole/process bit at each end and the length. Keep this in the block itself.
• See Knuth, The Art of Computer Programming, Vol. 1.
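To make the list-based schemes concrete, here is a minimal first-fit sketch in C (mine, not from the book; the type hole_t and its fields are invented for illustration). Coalescing, boundary tags, and removal of empty holes are omitted.

#include <stddef.h>

typedef struct hole {              /* one free region of memory */
    size_t start, length;
    struct hole *next;             /* kept in order of starting address */
} hole_t;

/* First fit: scan the hole list and carve the request out of the first
   hole that is big enough. Returns the start address, or (size_t)-1. */
size_t first_fit(hole_t *list, size_t request)
{
    for (hole_t *h = list; h != NULL; h = h->next) {
        if (h->length >= request) {
            size_t start = h->start;
            h->start  += request;  /* leave a (possibly empty) hole */
            h->length -= request;
            return start;
        }
    }
    return (size_t)-1;             /* no hole big enough: compactify or swap */
}

Circular first fit (next fit) is the same loop but beginning where the previous search left off rather than at the head of the list.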
Homework: 2, 5.

MVT Also introduces the ``Replacement Question'', which victim to swap out
We will study this question more when we discuss demand paging.
Considerations in choosing a victim
• Cannot replace a job that is pinned, i.e. whose memory is tied down. For example, if Direct Memory Access (DMA) I/O is scheduled
for this process, the job is pinned until the DMA is complete.
• Victim selection is a medium term scheduling decision
• Job that has been in a wait state for a long time is a good candidate.
• Often choose as a victim a job that has been in memory for a long time.
• Another point is how long should it stay swapped out.
• For demand paging, where swapping out a page is not as drastic as swapping out a job, choosing the victim is an important memory
management decision and we shall study several policies.
NOTEs:
1. So far the schemes presented have had two properties:
i. Each job is stored contiguously in memory. That is, the job is contiguous in physical addresses.
ii. Each job cannot use more memory than exists in the system. That is, the virtual address space cannot exceed the physical
address space.

2. Tanenbaum now attacks the second item. I wish to do both and start with the first.
3. Tanenbaum (and most of the world) uses the term ``paging'' to mean what I call demand paging. This is unfortunate as it mixes
together two concepts.
i. Paging (dicing the address space) to solve the placement problem and essentially eliminate external fragmentation.
ii. Demand fetching, to permit the total memory requirements of all loaded jobs to exceed the size of physical memory.

4. Tanenbaum (and most of the world) uses the term virtual memory as a synonym for demand paging. Again I consider this
unfortunate.
i. Demand paging is a fine term and is quite descriptive
ii. Virtual memory ``should'' be used in contrast with physical memory to describe any virtual to physical address translation.

** (non-demand) Paging
Simplest scheme to remove the requirement of contiguous physical memory.
• Chop the program into fixed size pieces called pages (invisible to the programmer).
• Chop the real memory into fixed size pieces called page frames or simply frames.
• Size of a page (the page size) = size of a frame (the frame size).
• Sprinkle the pages into the frames.
• Keep a table (called the page table) having an entry for each page. The page table entry or PTE for page p contains the number of
the frame f that contains page p.
================ Start Lecture #11 ================

Example: Assume a decimal machine with page size = frame size = 1000.
Assume PTE 3 contains 459.
Then virtual address 3372 corresponds to physical address 459372.
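In code the translation is just a divide, a remainder, and a table lookup. A sketch for the decimal example above (assuming the page table is a simple in-memory array):

/* Non-demand paging translation: page size = frame size = 1000. */
unsigned long translate(const unsigned long *page_table, unsigned long va)
{
    unsigned long page   = va / 1000;      /* page number */
    unsigned long offset = va % 1000;      /* offset within the page */
    return page_table[page] * 1000 + offset;
}

/* With page_table[3] == 459, translate(page_table, 3372) == 459372. */

On a binary machine the divide and remainder are of course a shift and a mask.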
Properties of (non-demand) paging.
• Entire job must be memory resident to run.
• No holes, i.e. no external fragmentation.
• If there are 50 frames available and the page size is 4KB then a job requiring <= 200KB will fit, even if the available frames are
scattered over memory.
• Hence (non-demand) paging is useful.
• Introduces internal fragmentation approximately equal to 1/2 the page size for every process (really every segment).
• Can have a job unable to run due to insufficient memory and have some (but not enough) memory available. This is not called
external fragmentation since it is not due to memory being fragmented.
• Eliminates the placement question. All pages are equally good since don't have external fragmentation.
• Replacement question remains.
• Since page boundaries occur at ``random'' points and can change from run to run (the page size can change with no effect on the
program--other than performance), pages are not appropriate units of memory to use for protection and sharing. This is discussed
further when we introduce segmentation.
Homework: 13
Address translation
• Each memory reference turns into 2 memory references
1. Reference the page table
2. Reference central memory
• This would be a disaster!
• Hence the MMU caches page#-->frame# translations. This cache is kept near the processor and can be accessed rapidly.
• This cache is called a translation lookaside buffer (TLB) or translation buffer (TB).
• For the above example, after referencing virtual address 3372, entry 3 in the TLB would contain 459.
• Hence a subsequent access to virtual address 3881 would be translated to physical address 459881 without a memory reference.
Choice of page size is discussed below.
Homework: 8.

3.3: Virtual Memory (meaning fetch on demand)


Idea is that a program can execute even if only the active portion of its address space is memory resident. That is, swap in and swap out
portions of a program. In a crude sense this can be called ``automatic overlays''.
Advantages
• Can run a program larger than the total physical memory.
• Can increase the multiprogramming level since the total size of the active, i.e. loaded, programs (running + ready + blocked) can
exceed the size of the physical memory.
• Since some portions of a program are rarely if ever used, it is an inefficient use of memory to have them loaded all the time. Fetch on
demand will not load them if not used and will unload them during replacement if they are not used for a long time (hopefully).

3.3.1: Paging (meaning demand paging)


Fetch pages from disk to memory when they are referenced, with a hope of
getting the most actively used pages in memory.
• Very common: dominates modern operating systems
• Started by the Atlas system at Manchester University in the 60s
(Fotheringham).
• Each PTE continues to have the frame number if the page is loaded.
• But what if the page is not loaded (exists only on disk)?
• The PTE has a flag indicating if it is loaded (can think of the
X in the diagram on the right as indicating that this flag is not
set).
• If not loaded, the location on disk could be kept in the PTE
(not shown in the diagram). But normally it is not (discussed
below).
• When a reference is made to a non-loaded page (sometimes
called a non-existent page, but that is a bad name), the system
has a lot of work to do. We give more details below.
1. Choose a free frame if one exists
2. If not
a. Choose a victim frame.
• More later on how to choose a victim.
• Called the replacement question
b. Write victim back to disk if dirty,
c. Update the victim PTE to show that it is not
loaded and where on disk it has been put
(perhaps the disk location is already there).
3. Copy the referenced page from disk to the free frame.
4. Update the PTE of the referenced page to show that it is loaded and give the frame number.
5. Do the standard paging address translation (p#,off)-->(f#,off).
• Really not done quite this way
• There is ``always'' a free frame because ...
• There is a daemon active that checks the number of free frames and if this is too low, chooses victims and ``pages them out''
(writing them back to disk if dirty).
• Choice of page size is discussed below.
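The steps above in skeletal C (a sketch only: every helper name here is invented to stand for an action described above, and the disk-location bookkeeping of step 2c is omitted):

typedef struct { int frame; int valid; int dirty; } pte_t;
extern pte_t pte[];                    /* the page table */
extern int  get_free_frame(void);      /* hypothetical helpers */
extern int  choose_victim(void);
extern int  victim_page(int frame);
extern void write_page_to_disk(int frame);
extern void read_page_from_disk(int page, int frame);

int handle_page_fault(int p)           /* p = faulting page number */
{
    int f = get_free_frame();          /* step 1: use a free frame if any */
    if (f < 0) {                       /* step 2: none free */
        f = choose_victim();           /* 2a: the replacement question */
        int v = victim_page(f);
        if (pte[v].dirty)
            write_page_to_disk(f);     /* 2b: write victim back if dirty */
        pte[v].valid = 0;              /* 2c: mark victim not loaded */
    }
    read_page_from_disk(p, f);         /* step 3: fetch the referenced page */
    pte[p].frame = f;                  /* step 4: update the PTE */
    pte[p].valid = 1;
    return f;                          /* step 5: (p#,off)-->(f#,off) */
}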
Homework: 11.

3.3.2: Page tables


A discussion of page tables is also appropriate for (non-demand) paging, but the issues are more acute with demand paging since the tables
can be much larger. Why?
1. The total size of the active processes is no longer limited to the size of physical
memory. Since the total size of the processes is greater, the total size of the page
tables is greater and hence concerns over the size of the page table are more
acute.
2. With demand paging an important question is the choice of a victim page to page
out. Data in the page table can be useful in this choice.
We must be able to access the page table very quickly since it is needed for every
memory access.
Unfortunate laws of hardware.
• Big and fast are essentially incompatible.
• Big and fast and low cost is hopeless.
So we can't just say, put the page table in fast processor registers, and let it be huge, and
sell the system for $1500.

For now, put the (one-level) page table in main memory.


• Seems too slow since all memory references require two references.
• TLB very helpful to reduce the average delay as mentioned above (discussed
later in more detail).
• It might be too big.
• Currently we are considering contiguous virtual address ranges (i.e. the virtual addresses have no holes).
• Typically put the stack at one end of the virtual address space and the global (or static) data at the other end and let them grow towards
each other
• The memory in between is unused.
• This unused memory can be huge (in address range) and hence the page table will mostly contain unneeded PTEs
• Works fine if the maximum virtual address size is small, which was once true (e.g., the PDP-11 as discussed by Tanenbaum) but is no
longer the case.

Contents of a PTE
Each page has a corresponding page table entry (PTE). The information in a PTE is for use by the hardware. Information set by and used by
the OS is normally kept in other OS tables. The page table format is determined by the hardware so access routines are not portable. The
following fields are often present.
1. The valid bit. This tells if the page is currently loaded (i.e., is in a frame). If set, the frame pointer is valid. It is also called the
presence or presence/absence bit. If a page is accessed with the valid bit zero, a page fault is generated by the hardware.
2. The frame number. This is the main reason for the table. It is needed for virtual to physical address translation.
3. The modified bit. Indicates that some part of the page has been written since it was loaded. This is needed when the page is evicted so
the OS can know that the page must be written back to disk.
4. The referenced bit. Indicates that some word in the page has been referenced. Used to select a victim: unreferenced pages make good
victims by the locality property.
5. Protection bits. For example one can mark text pages as execute only. This requires that boundaries between regions with different
protection are on page boundaries. Normally many consecutive (in logical address) pages have the same protection so many page
protection bits are redundant. Protection is more naturally done with segmentation.

Multilevel page tables (not on 202 exams)


Recall the previous diagram. Most of the virtual memory is the unused space between the data and stack regions. However, with demand
paging this space does not waste real memory. But the single large page table does waste real memory.
The idea of multi-level page tables (a similar idea is used in
Unix inode-based file systems) is to add a level of indirection
and have a page table containing pointers to page tables.
• Imagine one big page table.
• Call it the second level page table and cut it into pieces
each the size of a page.
Note that you can get many PTEs in one page so you
will have far fewer of these pages than PTEs
• Now construct a first level page table containing PTEs
that point to these pages.
• This first level PT is small enough to store in memory.
• But since we still have the 2nd level PT, we have made
the world bigger not smaller!
• Don't store in memory those 2nd level page tables all of
whose PTEs refer to unused memory. That is use
demand paging on the (second level) page table
• For a two level page table the virtual address is divided
into three pieces
+-----+-----+-------+
| P#1 | P#2 | Offset|
+-----+-----+-------+

• P#1 gives the index into the first level page table.
• Follow the pointer in the corresponding PTE to reach
the frame containing the relevant 2nd level page table.
• P#2 gives the index into this 2nd level page table
• Follow the pointer in the corresponding PTE to reach the frame containing the (originally) requested page.
• Offset gives the offset in this frame where the requested
word is located.
Do an example on the board
The VAX used a 2-level page table structure, but with some
wrinkles (see Tanenbaum for details).
Naturally, there is no need to stop at 2 levels. In fact the SPARC
has 3 levels and the Motorola 68030 has 4 (and the number of bits of Virtual Address used for P#1, P#2, P#3, and P#4 can be varied).
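A sketch of the two-level lookup in C (my illustration; the 10/10/12 bit split, i.e. 4KB pages on a 32-bit machine, is just one plausible choice, and valid bits plus the demand paging of 2nd level tables are omitted):

#include <stdint.h>

/* level1[p1] points to the 2nd level page table; in a real MMU the
   first level PTE would hold a frame number, not a C pointer. */
uint32_t translate2(uint32_t **level1, uint32_t va)
{
    uint32_t p1     = (va >> 22) & 0x3ff;  /* P#1: index into 1st level PT */
    uint32_t p2     = (va >> 12) & 0x3ff;  /* P#2: index into 2nd level PT */
    uint32_t offset =  va        & 0xfff;  /* offset within the page */
    uint32_t *level2 = level1[p1];
    uint32_t frame   = level2[p2];
    return (frame << 12) | offset;
}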

3.3.4: Associative memory (TLBs)


Note: Tanenbaum suggests that ``associative memory'' and ``translation lookaside buffer'' are synonyms. This is wrong. Associative memory
is a general structure and translation lookaside buffer is a special case.
An associative memory is a content addressable memory. That is you access the memory by giving the value of some field and the
hardware searches all the records and returns the record whose field contains the requested value.
For example
Name | Animal | Mood | Color
======+========+==========+======
Moris | Cat | Finicky | Grey
Fido | Dog | Friendly | Black
Izzy | Iguana | Quiet | Brown
Bud | Frog | Smashed | Green

If the index field is Animal and Iguana is given, the associative memory returns
Izzy | Iguana | Quiet | Brown

A Translation Lookaside Buffer or TLB is an associative memory where the index field is the page number. The other fields include the
frame number, dirty bit, valid bit, and others.
• A TLB is small and expensive but at least it is fast. When the page number is in the TLB, the frame number is returned very quickly.
• On a miss, the page number is looked up in the page table. The record found is placed in the TLB and a victim is discarded. There is
no placement question since all entries are accessed at the same time. But there is a replacement question.
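In software a TLB probe would look like the sketch below (the fields and TLB_SIZE are illustrative). Remember that real hardware compares all entries in parallel; the loop only models the semantics.

#define TLB_SIZE 64
typedef struct { int valid; unsigned page; unsigned frame; int dirty; } tlb_entry_t;

/* Returns 1 on a hit and fills in *frame; 0 means consult the page table. */
int tlb_lookup(const tlb_entry_t tlb[TLB_SIZE], unsigned page, unsigned *frame)
{
    for (int i = 0; i < TLB_SIZE; i++)
        if (tlb[i].valid && tlb[i].page == page) {
            *frame = tlb[i].frame;    /* hit: no memory reference needed */
            return 1;
        }
    return 0;                         /* miss: walk the page table */
}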
Homework: 15.

3.3.5: Inverted page tables


Keep a table indexed by frame number with the entry f containing the number of the page currently loaded in frame f.
• Since modern machines have a smaller physical address space than virtual address space, the table is smaller
• But on a TLB miss, must search the inverted page table.
• Would be hopelessly slow except that some tricks are employed.
• The book mentions some but not all of the tricks; we are skipping this topic.
3.4: Page Replacement Algorithms
These are solutions to the replacement question.
Good solutions take advantage of locality.
• Temporal locality: If a word is referenced now, it is likely to be referenced in the near future.
• This argues for caching referenced words, i.e. keeping the referenced word near the processor for a while.
• Spatial locality: If a word is referenced now, nearby words are likely to be referenced in the near future.
• This argues for prefetching words around the currently referenced word.
• These are lumped together into locality: If any word in a page is referenced, each word in the page is ``likely'' to be referenced.
• So it is good to bring in the entire page on a miss and to keep the page in memory for a while.
• When programs begin there is no history so nothing to base locality on. At this point the paging system is said to be undergoing a
``cold start''.
• Programs exhibit ``phase changes'', when the set of pages referenced changes abruptly (similar to a cold start). At the point of a phase
change, many page faults occur because locality is poor.
Pages belonging to processes that have terminated are of course perfect choices for victims.
Pages belonging to processes that have been blocked for a long time are good choices as well.

Random
A lower bound on performance. Any decent scheme should do better.

3.4.1: The optimal page replacement algorithm (opt PRA) (aka Belady's min PRA)
Replace the page whose next reference will be furthest in the future.
• Also called Belady's min algorithm.
• Provably optimal. That is, generates the fewest number of page faults.
• Unimplementable: Requires predicting the future.
• Good upper bound on performance.

================ Start Lecture #12 ================


3.4.2: The not recently used (NRU) PRA
Divide the frames into four classes and make a random selection from the lowest nonempty class.
0. Not referenced, not modified
1. Not referenced, modified
2. Referenced, not modified
3. Referenced, modified
Assumes that in each PTE there are two extra flags R (sometimes called U, for used) and M (often called D, for dirty).
Also assumes that a page in a lower priority class is cheaper to evict.
• If not referenced, probably not referenced again soon so not so important.
• If not modified, do not have to write it out so the cost of the eviction is lower.
• When a page is brought in, OS resets R and M (i.e. R=M=0)
• On a read, hardware sets R.
• On a write, hardware sets R and M.
We again have the prisoner problem: we do a good job of making little ones out of big ones, but not the reverse. Need more resets.
Every k clock ticks, reset all R bits
• Why not reset M?
Answer: Must have M accurate to know if victim needs to be written back
• Could have two M bits one accurate and one reset, but I don't know of any system (or proposal) that does so.
What if the hardware doesn't set these bits?
• OS can use tricks
• When the bits are reset, make the PTE indicate the page is not resident (i.e. lie). On the page fault, set the appropriate bit(s).
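A sketch of NRU victim selection in C (mine; note that class = 2R+M reproduces the numbering above). For brevity it takes the first page in the lowest class; a faithful NRU would choose randomly within that class.

typedef struct { int R, M, valid; } nru_pte_t;
extern nru_pte_t pt[];            /* hypothetical page table */
extern int npages;

int nru_victim(void)
{
    int victim = -1, best = 4;    /* classes are 0..3 */
    for (int p = 0; p < npages; p++) {
        if (!pt[p].valid) continue;
        int cls = 2 * pt[p].R + pt[p].M;
        if (cls < best) { best = cls; victim = p; }
    }
    return victim;
}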

3.4.3: FIFO PRA


Simple but poor since usage of the page is ignored.
Belady's Anomaly: Can have more frames yet generate more faults. Example given later.

3.4.4: Second chance PRA


Similar to the FIFO PRA, but when choosing a victim, if the page at the head of the queue has been referenced (R bit set), don't evict it.
Instead reset R and move the page to the rear of the queue (so it looks new). The page is being given a second chance.
What if all frames have been referenced?
Becomes the same as FIFO (but takes longer).
Might want to turn off the R bit more often (say every k clock ticks).

3.4.5: Clock PRA


Same algorithm as 2nd chance, but a better (and I would say obvious) implementation: Use a circular list.
Do an example.

LIFO PRA
This is terrible! Why?
Ans: All but the last frame are frozen once loaded so you can replace only one frame. This is especially bad after a phase shift in the program
when it is using all new pages.

3.4.6: Least Recently Used (LRU) PRA


When a page fault occurs, choose as victim that page that has been unused for the longest time, i.e. that has been least recently used.
LRU is definitely
• Implementable: The past is knowable.
• Good: Simulation studies have shown this.
• Difficult. Essentially need to either:
1. Keep a time stamp in each PTE, updated on each reference and scan all the PTEs when choosing a victim to find the PTE with
the oldest timestamp.
2. Keep the PTEs in a linked list in usage order, which means on each reference moving the PTE to the end of the list
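A sketch of option 1 in C (names are mine). The linear scan over all PTEs on every fault is exactly the expense that makes true LRU unattractive.

typedef struct { unsigned long last_used; int valid; } lru_pte_t;
extern lru_pte_t pt[];
extern int npages;
extern unsigned long now;              /* incremented on every reference */

void on_reference(int p) { pt[p].last_used = ++now; }

int lru_victim(void)                   /* find the oldest timestamp */
{
    int victim = -1;
    unsigned long oldest = (unsigned long)-1;
    for (int p = 0; p < npages; p++)
        if (pt[p].valid && pt[p].last_used < oldest) {
            oldest = pt[p].last_used;
            victim = p;
        }
    return victim;
}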
Homework: 19, 20

A hardware cutsie in Tanenbaum


• For n pages, keep an nxn bit matrix.
• On a reference to page i, set row i to all 1s and col i to all 0s
• At any time the 1 bits in the rows are ordered by inclusion. I.e. one row's 1s are a subset of another row's 1s, which is a subset of a
third. (Tanenbaum forgets to mention this.)
• So the row with the fewest 1s is a subset of all the others and is hence least recently used
• Cute, but still impractical.

3.4.7: Approximating LRU in Software

The Not Frequently Used (NFU) PRA


• Include a counter in each PTE (and have R in each PTE).
• Set counter to zero when page is brought into memory.
• For each PTE, every k clock ticks.
1. Add R to counter.
2. Clear R.
• Choose as victim the PTE with lowest count.

The Aging PRA


NFU doesn't distinguish between old references and recent ones. The following modification does distinguish.
• Include a counter in each PTE (and have R in each PTE).
• Set counter to zero when page is brought into memory.
• For each PTE, every k clock ticks.
1. Shift counter right one bit.
2. Insert R as new high order bit (HOB).
3. Clear R.
• Choose as victim the PTE with lowest count.
R counter
1 10000000
0 01000000
1 10100000
1 11010000
0 01101000
0 00110100
1 10011010
1 11001101
0 01100110
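The aging step itself is just two bit operations. A sketch assuming an 8-bit counter, as in the table above:

/* One aging step for one PTE, performed every k clock ticks. */
unsigned char age_step(unsigned char counter, int R)
{
    counter >>= 1;                        /* 1. shift counter right one bit */
    counter |= (unsigned char)(R << 7);   /* 2. insert R as the new HOB */
    return counter;                       /* 3. the caller then clears R */
}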
Homework: 21, 25

3.5: Modeling Paging Algorithms


3.5.1: Belady's anomaly
Consider a system that has no pages loaded and that uses the FIFO PRA.
Consider the following ``reference string'' (sequence of pages referenced).
0 1 2 3 0 1 4 0 1 2 3 4

If we have 3 frames this generates 9 page faults (do it).


If we have 4 frames this generates 10 page faults (do it).
Theory has been developed and certain PRAs (so-called ``stack algorithms'') cannot suffer this anomaly for any reference string. FIFO is
clearly not a stack algorithm. LRU is.
Repeat the above calculations for LRU.
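The counting is easy to automate. A small C program (my addition) that reproduces the 9 and 10 faults for FIFO; changing the eviction rule to LRU is a good exercise.

#include <stdio.h>

/* FIFO page-fault count for the reference string above. */
int fifo_faults(const int *refs, int n, int nframes)
{
    int frames[16], next = 0, faults = 0;   /* assumes nframes <= 16 */
    for (int i = 0; i < nframes; i++) frames[i] = -1;
    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < nframes; j++)
            if (frames[j] == refs[i]) hit = 1;
        if (!hit) {
            frames[next] = refs[i];          /* evict the oldest page */
            next = (next + 1) % nframes;
            faults++;
        }
    }
    return faults;
}

int main(void)
{
    int refs[] = {0,1,2,3,0,1,4,0,1,2,3,4};
    printf("3 frames: %d faults\n", fifo_faults(refs, 12, 3));  /* 9 */
    printf("4 frames: %d faults\n", fifo_faults(refs, 12, 4));  /* 10 */
    return 0;
}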

3.6: Design issues for (demand) Paging


3.6.1 & 3.6.2: The Working Set Model and Local vs Global Policies
I will do these in the reverse order (which makes more sense). Also Tanenbaum doesn't actually define the working set model, but I shall.
A local PRA is one in which a victim page is chosen among the pages of the same process that requires a new page. That is, the number of
pages for each process is fixed. So LRU means the page least recently used by this process.
• Of course we can't have a purely local policy, why?
Answer: A new process has no pages and even if we didn't apply this for the first page loaded, the process would remain with only
one page.
• Perhaps wait until a process has been running a while.
• A global policy is one in which the choice of victim is made among all pages of all processes.
If we apply global LRU indiscriminately with some sort of RR processor scheduling policy, and memory is somewhat over-committed, then
by the time we get around to a process, all the others have run and have probably paged out this process.
If this happens each process will need to page fault at a high rate; this is called thrashing. It is therefore important to get a good idea of how
many pages a process needs, so that we can balance the local and global desires.

The working set policy (Peter Denning)


The goal is to specify which pages a given process needs to have memory resident in order for the given process to run without too many page
faults.
• But this is impossible since it requires predicting the future.
• So we make the assumption that the immediate future is well approximated by the immediate past.
• Measure time in units of memory references, so t=1045 means the time when the 1045th memory reference is issued.
• In fact we measure time separately for each process, so t=1045 really means the time when this process made its 1045th memory
reference.
• W(t,ω) is the set of pages referenced (by the given process) from time t-ω to time t.
• That is, W(t,ω) is the set pages referenced during the window of size ω ending at time t.
• That is, W(t,ω) is the set of pages referenced by the last ω memory references ending at reference t.
• W(t,ω) is called the working set at time t (with window ω).
• w(t,ω) is the size of the set W(t,ω), i.e. is the number of pages referenced in the window.
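By definition, computing w(t,ω) means counting the distinct pages in a window of the reference string. A sketch in C (mine; MAX_PAGES bounds the page numbers and refs[1], refs[2], ... is the reference string of one process). As discussed below, no real system computes this exactly.

#define MAX_PAGES 1024                 /* assumed bound on page numbers */

int ws_size(const int refs[], int t, int omega)
{
    int seen[MAX_PAGES] = {0};
    int count = 0;
    int start = (t - omega + 1 > 1) ? t - omega + 1 : 1;
    for (int i = start; i <= t; i++)   /* window of size omega ending at t */
        if (!seen[refs[i]]) { seen[refs[i]] = 1; count++; }
    return count;
}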
The idea of the working set policy is to ensure that each process keeps its working set in memory.
• One possibility is to allocate w(t,ω) frames to each process (this number differs for each process and changes with time) and then use
a local policy.
• What if there aren't enough frames to do this?
• Ans: Reduce the multiprogramming level (MPL)! That is, we have a connection between memory management and process
management. This is the suspend/resume arcs we saw way back when.
Interesting questions include:
• What value should be used for ω?
Experiments have been done and ω is surprisingly robust (i.e., for a given system a fixed value works reasonably for a wide variety of
job mixes)
• How should we calculate W(t,ω)?
Hard to do exactly, so various approximations to the working set have been devised.
1. Wsclock
• Use the aging algorithm above to maintain a counter for each PTE and declare a page whose counter is above a certain
threshold to be part of the working set.
• Apply the clock algorithm globally (i.e. to all pages) but refuse to page out any page in a working set; the resulting algorithm
is called wsclock.
• What if we find there are no pages we can page out?
Answer: Reduce the MPL.
2. Page Fault Frequency (PFF)
• For each process keep track of the page fault frequency, which is the number of faults divided by the number of references.
• Actually, must use a window or a weighted calculation since you are really interested in the recent page fault frequency.
• If the PFF is too high, allocate more frames to this process. Either
1. Raise its number of frames and use local policy; or
2. Bar its frames from eviction (for a while) and use a global policy.
• What if there are not enough frames?
Answer: Reduce the MPL.

================ Start Lecture #13 ================

3.6.3: Page size


• Page size ``must'' be a multiple of the disk block size. Why?
Answer: When copying out a page if you have a partial disk block, you must do a read/modify/write (i.e., 2 I/Os).
• Important property of I/O that we will learn later this term is that eight I/Os each 1KB takes considerably longer than one 8KB I/O
• Characteristics of a large page size.
• Good for user I/O.
• If I/O done using physical addresses, then I/O crossing a page boundary is not contiguous and hence requires multiple
I/Os
• If I/O uses virtual addresses, then page size doesn't affect this aspect of I/O. That is the addresses are contiguous in
virtual address and hence one I/O is done.
• Good for demand paging I/O.
• Better to swap in/out one big page than several small pages.
• But if page is too big you will be swapping in data that is really not local and hence might well not be used.
• Large internal fragmentation (1/2 page size).
• Small page table.
• A very large page size leads to very few pages. Process will have many faults if using demand paging and the process
frequently references more regions than frames.
• A small page size has the opposite characteristics.

3.6.4: Implementation Issues


Don't worry about instruction backup. Very machine dependent and modern implementations tend to get it right.

Locking (pinning) pages


We discussed pinning jobs already. The same (mostly I/O) considerations apply to pages.

Shared pages
Really should share segments.
• Must keep reference counts or something so that when a process terminates, pages (even dirty pages) it shares with another process
are not automatically discarded.
• Similarly, a reference count would make a widely shared page (correctly) look like a poor choice for a victim.
• A good place to store the reference count would be in a structure pointed to by both PTEs. If stored in the PTEs, must keep them
consistent between processes.

Backing Store
The issue is where on disk we put pages.
• For program text, which is presumably read only, a good choice is the file itself.
• What if we decide to keep the data and stack each contiguous on the backing store? Since data and stack grow, we must be prepared to grow
the space on disk, which leads to the same issues and problems as we saw with MVT.
• If those issues/problems are painful, we can scatter the pages on the disk.
• That is we employ paging!
• This is NOT demand paging.
• Need a table to say where the backing space for each page is located.
• This corresponds to the page table used to tell where in real memory a page is located.
• The format of the ``memory page table'' is determined by the hardware since the hardware modifies/accesses it.
• The format of the ``disk page table'' is decided by the OS designers and is machine independent.
• If the format of the memory page table was flexible, then we might well keep the disk information in it as well.

Paging Daemons
Done earlier

Page Fault Handling (Not on the 202 exams)


What happens when a process, say process A, gets a page fault?
1. The hardware detects the fault and traps to the kernel (switches to supervisor mode and saves state).
2. Some assembly language code saves more state, establishes the C-language (or another programming language) environment, and
``calls'' the OS.
3. The OS determines that a page fault occurred and which page was referenced.
4. If the virtual address is invalid, process A is killed. If the virtual address is valid, the OS must find a free frame. If there are no free
frames, the OS selects a victim frame. Call the process owning the victim frame, process B. (If the page replacement algorithm is
local process B is process A.)
5. If the victim frame is dirty, the OS schedules an I/O write to copy the frame to disk. Thus, if the victim frame is dirty, process B is
blocked (it might already be blocked for some other reason). Process A is also blocked since it needs to wait for this frame to be free.
The process scheduler is invoked to perform a context switch.
• Tanenbaum ``forgot'' some here.
• The process selected by the scheduler (say process C) runs.
• Perhaps C is preempted for D or perhaps C blocks and D runs and then perhaps D is blocked and E runs, etc.
• When the I/O to write the victim frame completes, a disk interrupt occurs. Assume process C is running at the time.
• Hardware trap / assembly code / OS determines I/O done.
• Process B is moved from blocked to ready (unless B is also blocked for some other reason).
• The scheduler picks a process to run, maybe A, maybe B, maybe C, maybe another process.
• At some point the scheduler does pick process A to run. Recall that at this point A is still executing OS code.

6. Now the O/S has a clean frame (this may be much later in wall clock time if a victim frame had to be written). The O/S schedules an
I/O to read the desired page into this clean frame. Process A is blocked (perhaps for the second time) and hence the process scheduler
is invoked to perform a context switch.
7. A Disk interrupt occurs when the I/O completes (trap / asm / OS determines I/O done). The PTE is updated.
8. The O/S may need to fix up process A (e.g. reset the program counter to re-execute the instruction that caused the page fault).
9. Process A is placed on the ready list and eventually is chosen by the scheduler to run. Recall that process A is executing O/S code.
10. The OS returns to the first assembly language routine.
11. The assembly language routine restores registers, etc. and ``returns'' to user mode.
Process A is unaware that all this happened.

3.7: Segmentation
Up to now, the virtual address space has been contiguous.
• Among other issues this makes memory management difficult when there are more than two dynamically growing regions.
• With two regions you start them on opposite sides of the virtual space as we did before.
• Better is to have many virtual address spaces each starting at zero.
• This split up is user visible.
• Without segmentation (equivalently said with just one segment) all procedures are packed together so if
one changes in size all the virtual addresses following are changed and the program must be re-linked.
• Eases flexible protection and sharing (share a segment). For example, can have a shared library.
Homework: 29.

** Two Segments
Late PDP-10s and TOPS-10
• One shared text segment, that can also contain shared (normally read only) data.
• One (private) writable data segment.
• Permission bits on each segment.
• Which kind of segment is better to evict?
• Swapping out the shared segment hurts many tasks.
• The shared segment is read only (probably) so no writeback is needed.
• ``One segment'' is OS/MVT done above.
** Three Segments
Traditional Unix shown at right.
1. Shared text marked execute only.
2. Data segment (global and static variables).
3. Stack segment (automatic variables).

** Four Segments
Just kidding.

** General (not necessarily demand) Segmentation


• Permits fine grained sharing and protection.
• Visible division of program.
• Variable size segments.
• Address = (seg#, offset).
• Does not mandate how stored in memory.
• One possibility is that the entire program must be in memory in order to run it. Use whole process swapping. Very early
versions of Unix did this.
• Can also implement demand segmentation.
• Can combine with demand paging (done below).
• Requires a segment table with a base and limit value for each segment. Similar to a page table.
• Entries are called STEs, Segment Table Entries.
• (seg#, offset) --> if (offset<limit) base+offset else error. (A sketch in C follows this list.)
• Segmentation exhibits external fragmentation, just as whole program swapping. Since segments are smaller than programs (several
segments make up one program), the external fragmentation is not as bad.
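The promised sketch of the translation rule, in C (mine; the struct name and the error convention are invented):

typedef struct { unsigned long base, limit; } ste_t;  /* Segment Table Entry */

/* (seg#, offset) --> physical address, or (unsigned long)-1 on a limit fault. */
unsigned long seg_translate(const ste_t *seg_table, unsigned seg, unsigned long offset)
{
    if (offset >= seg_table[seg].limit)
        return (unsigned long)-1;      /* error: reference beyond the segment */
    return seg_table[seg].base + offset;
}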

** Demand Segmentation
Same idea as demand paging applied to segments.
• If a segment is loaded, base and limit are stored in the STE and the valid bit is set in the STE.
• The STE is accessed for each memory reference (not really, TLB).
• If the segment is not loaded, the valid bit is unset. The base and limit as well as the disk address of the segment are stored in an OS
table.
• A reference to a non-loaded segment generates a segment fault (analogous to a page fault).
• To load a segment, we must solve both the placement question and the replacement question (for demand paging, there is no
placement question).
• I believe demand segmentation was once implemented but am not sure. It is not used in modern systems.
The following table mostly from Tanenbaum compares demand paging with demand segmentation.
Consideration                              Demand Paging        Demand Segmentation
-----------------------------------------  -------------------  -----------------------
Programmer aware                           No                   Yes
How many addr spaces                       1                    Many
VA size > PA size                          Yes                  Yes
Protect individual procedures separately   No                   Yes
Accommodate elements with changing sizes   No                   Yes
Ease user sharing                          No                   Yes
Why invented                               let the VA size      Sharing, Protection,
                                           exceed the PA size   independent addr spaces
Internal fragmentation                     Yes                  No, in principle
External fragmentation                     No                   Yes
Placement question                         No                   Yes
Replacement question                       Yes                  Yes

** 3.7.2: Segmentation with paging


Combines both segmentation and paging to get advantages of both at a cost in complexity. This is very common now.
• A virtual address becomes a triple: (seg#, page#, offset). (A sketch in C follows this list.)
• Each segment table entry (STE) points to the page table for that segment. Compare this with a multilevel page table.
• The size of each segment is a multiple of the page size (since the segment consists of pages). Perhaps not. Can keep the exact size in
the STE (limit value) and shoot the process if it references beyond the limit. In this case the last page of each segment is partially
valid.
• The page# field in the address gives the entry in the chosen page table and the offset gives the offset in the page.
• From the limit field, one can easily compute the size of the segment in pages (which equals the size of the corresponding page table
in PTEs). Implementations may require the size of a segment to be a multiple of the page size in which case the STE would store the
number of pages in the segment.
• A straightforward implementation of segmentation with paging would require 3 memory references (STE, PTE, referenced word) so
a TLB is crucial.
• Some books carelessly say that segments are of fixed size. This is wrong. They are of variable size with a fixed maximum and
possibly with the requirement that the size of a segment is a multiple of the page size.
• The first example of segmentation with paging was Multics.
• Keep protection and sharing information on segments. This works well for a number of reasons.
1. A segment is variable size.
2. Segments and their boundaries are user (i.e., linker) visible.
3. Segments are shared by sharing their page tables. This eliminates the problem mentioned above with shared pages.
• Do replacement on pages so there is no placement question and no external fragmentation.
• Do fetch-on-demand with pages (i.e., do demand paging).
• In general, segmentation with demand paging works well and is widely used. The only problems are the complexity and the resulting
3 memory references for each user memory reference. The complexity is real, but can be managed. The three memory references
would be fatal were it not for TLBs, which considerably ameliorate the problem. TLBs have high hit rates and for a TLB hit there is
essentially no penalty.
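The promised sketch of the triple translation, in C (mine; 4KB pages are assumed for illustration, and the valid bits, TLB, and demand paging are omitted):

#include <stdint.h>

typedef struct { uint32_t *page_table; uint32_t limit; } seg_entry_t;  /* STE */

uint32_t segpage_translate(const seg_entry_t *seg_table,
                           uint32_t seg, uint32_t page, uint32_t offset)
{
    const seg_entry_t *ste = &seg_table[seg];          /* reference 1: STE */
    if (((page << 12) | offset) >= ste->limit)
        return (uint32_t)-1;                           /* beyond segment limit */
    uint32_t frame = ste->page_table[page];            /* reference 2: PTE */
    return (frame << 12) | offset;                     /* reference 3 is the word */
}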
Homework: 30.
Some last words on memory management.
• Segmentation / Paging / Demand Loading (fetch-on-demand)
• Each is a yes or no alternative.
• Gives 8 possibilities.
• Placement and Replacement.
• Internal and External Fragmentation.
• Page Size and locality of reference.
• Multiprogramming level and medium term scheduling.

================ Start Lecture #14 ================

Operating Systems
Chapter 4: File Systems
Requirements
1. Size: Store very large amounts of data.
2. Persistence: Data survives the creating process.
3. Access: Multiple processes can access the data concurrently.
Solution: Store data in files that together form a file system.

4.1: Files
4.1.1: File Naming
Very important. A major function of the file system.
• Does each file have a unique name?
Answer: Often no. We will discuss this below when we study links.
• Extensions, e.g. the ``html'' in ``class-notes.html''.
1. Conventions just for humans: letter.teq (my convention).
2. Conventions giving default behavior for some programs.
• The emacs editor thinks .html files should be edited in html mode but
can edit them in any mode and can edit any file in html mode.
• Netscape thinks .html means an html file, but
<html> ... </html> works as well
• Gzip thinks .gz means a compressed file but accepts a --suffix flag

3. Required extensions for programs


• The gnu C compiler (and probably others) requires C programs be named *.c and assembler programs be named *.s

4. Required extensions by operating systems


• MS-DOS treats .com files specially
• Windows 95 requires (as far as I can tell) shortcuts to end in .lnk.

• Case sensitive?
Unix: yes. Windows: no.

4.1.2: File structure


A file is a
1. Byte stream
• Unix, dos, windows (I think).
• Maximum flexibility.
• Minimum structure.

2. (fixed size) Record stream: Out of date


• 80-character records for card images.
• 133-character records for line printer files. Column 1 was for control (e.g., new page). The remaining 132 characters were printed.

3. Varied and complicated beast.


• Indexed sequential.
• B-trees.
• Supports rapidly finding a record with a specific key.
• Supports retrieving (varying size) records in key order.
• Treated in depth in database courses.

================ Start Lecture #9 ================

4.1.3: File types


Examples
1. (Regular) files.
2. Directories: studied below.
3. Special files (for devices). Uses the naming power of files to unify many actions.
dir # prints on screen
dir > file # result put in a file
dir > /dev/tape # results written to tape

4. ``Symbolic'' Links (similar to ``shortcuts''): Also studied below.


``Magic number'': Identifies an executable file.
• There can be several different magic numbers for different types of executables.
• unix: #!/usr/bin/perl
Strongly typed files:
• The type of the file determines what you can do with the file.
• This makes the easy and (hopefully) common case easier and, more importantly, safer.
• It tends to make the unusual case harder. For example, you have a program that turns out data (.dat) files, but you want to use it to
turn out a java file; the type of the output is data and cannot be easily converted to type java.

4.1.4: File access


There are basically two possibilities, sequential access and random access (a.k.a. direct access). Previously, files were declared to be
sequential or random. Modern systems do not do this. Instead all files are random and optimizations are applied when the system
dynamically determines that a file is (probably) being accessed sequentially.
1. With Sequential access the bytes (or records) are accessed in order (i.e., n-1, n, n+1, ...). Sequential access is the most common and
gives the highest performance. For some devices (e.g. tapes) access ``must'' be sequential.
2. With random access, the bytes are accessed in any order. Thus each access must specify which bytes are desired.

4.1.5: File attributes


A laundry list of properties that can be specified for a file. For example:
• hidden
• do not dump
• owner
• key length (for keyed files)
4.1.6: File operations
• Create: Essential if a system is to add files. Need not be a separate system call (can be merged with open).
• Delete: Essential if a system is to delete files.
• Open: Not essential. An optimization in which the translation from file name to disk locations is performed only once per file
rather than once per access.
• Close: Not essential. Free resources.
• Read: Essential. Must specify filename, file location, number of bytes, and a buffer into which the data is to be placed. Several
of these parameters can be set by other system calls and in many OS's they are.
• Write: Essential if updates are to be supported. See read for parameters.
• Seek: Not essential (could be in read/write). Specify the offset of the next (read or write) access to this file.
• Get attributes: Essential if attributes are to be used.
• Set attributes: Essential if attributes are to be user settable.
• Rename: Tanenbaum has strange words. Copy and delete is not acceptable for big files. Moreover copy-delete is not atomic.
Indeed link-delete is not atomic so even if link (discussed below) is provided, renaming a file adds functionality.
Homework: 2, 3, 4.
Read and understand ``copyfile'' on page 155.
Notes on copyfile
• Normally in unix one wouldn't call read and write directly.
• Indeed, for copyfile, getchar() and putchar() would be nice since they take care of the buffering (standard I/O, stdio).
• Tanenbaum is correct that the error reporting is atrocious.
The worst is exiting the loop on error and thus generating an exit(0) as if nothing happened.

4.1.7: Memory mapped files


Conceptually simple and elegant. Associate a segment with each file and then normal memory operations take the place of I/O.
Thus copyfile does not have fgetc/fputc (or read/write). Instead it is just like memcopy
while ( *dest++ = *src++ );

The implementation is via segmentation with demand paging but the backing store for the pages is the file itself. This all sounds great
but ...
• How do you tell the length of a newly created file? You know which pages were written but not what words in those pages. So
a file with one byte or ten looks like a page.
• What if same file is accessed by both I/O and memory mapping.
• What if the file is bigger than the size of virtual memory (will not be a problem for systems built 3 years from now as all will
have enormous virtual memory sizes).
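For concreteness, here is copyfile with POSIX mmap (a sketch: all error checks are omitted, and a zero-length source file would need special casing):

#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int copyfile(const char *src, const char *dst)
{
    int in = open(src, O_RDONLY);
    struct stat st;
    fstat(in, &st);                      /* learn the source length */

    int out = open(dst, O_RDWR | O_CREAT | O_TRUNC, 0644);
    ftruncate(out, st.st_size);          /* destination must have the size */

    char *s = mmap(NULL, st.st_size, PROT_READ,  MAP_PRIVATE, in,  0);
    char *d = mmap(NULL, st.st_size, PROT_WRITE, MAP_SHARED,  out, 0);

    memcpy(d, s, st.st_size);            /* the ``I/O'' is a memory operation */

    munmap(s, st.st_size); munmap(d, st.st_size);
    close(in); close(out);
    return 0;
}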

4.2: Directories
Unit of organization.

4.2.1: Hierarchical directory systems


Possibilities
• One directory in the system
• One per user
• One tree
• One tree per user
• One forest
• One forest per user
These are not as wildly different as they sound.
• If the system only has one directory but allows the character / in a file name, then one could fake a tree by having a file
named
/allan/gottlieb/courses/arch/class-notes.html
rather than a directory allan, a subdirectory gottlieb, ..., a file class-notes.html.
• Dos (windows) is a forest, unix a tree. In dos there is no common parent of a:\ and c:\.
• But windows explorer makes the dos forest look quite a bit like a tree. Indeed, the gnome file manager for linux, looks A LOT
like windows explorer.
• You can get an effect similar to (but not the same as) one X per user by having just one X in the system and having
permissions that permits each user to visit only a subset. Of course if the system doesn't have permissions, this is not possible.
• Today's systems have a tree per system or a forest per system.

4.2.2: Path Names


You can specify the location of a file in the file hierarchy by using either an absolute or a relative path to the file.
• An absolute path starts at the (or a if we have a forest) root.
• A relative path starts at the current (a.k.a working) directory.
• The special directories . and .. represent the current directory and the parent of the current directory respectively.
Homework: 1, 8.

4.2.3: Directory operations


• Create: Produces an ``empty'' directory. Normally the directory created actually contains . and .., so is not really empty
• Delete: Requires the directory to be empty (i.e., to just contain . and ..). Commands are normally written that will first empty
the directory (except for . and ..) and then delete it. These commands make use of file and directory delete system calls.
• Opendir: Same as for files (creates a ``handle'')
• Closedir: Same as for files
• Readdir: In the old days (of unix) one could read directories as files, so there was no special readdir (or opendir/closedir). It
was believed that the uniform treatment would make programming (or at least system understanding) easier as there was less
to learn.
However, experience has taught that this was not a good idea since the structure of directories then becomes exposed. Early
unix had a simple structure (and there was only one). Modern systems have more sophisticated structures and, more
importantly, they are not fixed across implementations. (A usage sketch follows this list.)
• Rename: As with files
• Link: Add a second name for a file; discussed below.
• Unlink: Remove a directory entry. This is how a file is deleted. But if there are many links and just one is unlinked, the file
remains. Discussed in more detail below.
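
Here is the promised sketch of the opendir/readdir/closedir interface as it appears on POSIX systems (the directory name is just an example). Note that the program never sees the on-disk directory format, which is the point of the readdir discussion above.

#include <dirent.h>
#include <stdio.h>

int main(void)
{
    DIR *d = opendir("/tmp");           /* the ``handle'' */
    if (d == NULL) { perror("opendir"); return 1; }

    struct dirent *e;
    while ((e = readdir(d)) != NULL)    /* . and .. are returned as well */
        printf("%s\n", e->d_name);

    closedir(d);
    return 0;
}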

4.3: File System Implementation


4.3.1: Implementing Files
• A disk cannot read or write a single word. Instead it can read or write a sector, which is often 512 bytes.
• Disks are written in blocks whose size is a multiple of the sector size.
• When we study I/O in the next chapter I will bring in some physically large (and hence old) disks so that we can see what they
look like and understand better sectors (and tracks, and cylinders, and heads, etc.).

Contiguous allocation
• This is like OS/MVT.
• The entire file is stored as one piece.
• Simple and fast for access, but ...
• Problem with growing files
• Must either evict the file itself or the file it is bumping into.
• Same problem with an OS/MVT kind of system if jobs grow.
• Problem with external fragmentation.
• Not used for general purpose systems. Ideal for systems where files do not change size.
Homework: 7.

Linked allocation
• The directory entry contains a pointer to the first block of the file.
• Each block contains a pointer to the next.
• Horrible for random access.
• Not used.

FAT (file allocation table)


• Used by dos and windows (but not windows/NT).
• Directory entry points to first block (i.e. specifies the block number).
• A FAT is maintained in memory having one (word) entry for each disk block. The entry for block N contains the block
number of the next block in the same file as N.
• This is linked allocation, but the links are stored separately.
• Time to access a random block is still linear in the size of the file, but now all the references are to this one table, which is in
memory. So random access is bad but not horrible (see the sketch after this list).
• Size of table is one word per disk block. If one writes all blocks of size 4K and uses 4-byte words, the table is one megabyte
for each disk gigabyte. Large but not prohibitive.
• If write blocks of size 512 bytes (the sector size of most disks) then the table is 8 megs per gig, which might be prohibitive.
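
The promised sketch of chasing the chain: to find block n of a file you follow n links, but every reference is to the in-memory table. The fat[] array and FAT_EOF marker here are illustrative, not any real on-disk layout.

#define FAT_EOF (-1)
#define NBLOCKS 1024                 /* illustrative number of disk blocks */

int fat[NBLOCKS];                    /* fat[b] = block after b in its file, or FAT_EOF */

int fat_nth_block(int first_block, int n)
{
    int b = first_block;
    while (n-- > 0 && b != FAT_EOF)
        b = fat[b];                  /* each step is a memory reference, not an I/O */
    return b;                        /* block number of block n, or FAT_EOF if past the end */
}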

Inodes
• Used by unix.
• Directory entry points to inode (index-node).
• Inode points to first few data blocks, often called direct blocks.
• Inode also points to an indirect block, which points to disk blocks.
• Inode also points to a double indirect block, which points to indirect blocks ...
• For some implementations there are triple indirect blocks as well.
• The inode is in memory for open files. So references to direct blocks take just one I/O.
• For big files most references require two I/Os (indirect + data).
• For huge files most references require three I/Os (double indirect, indirect, and data).
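A sketch of the I/O-count arithmetic just described. The constants (10 direct pointers, 1K block numbers per indirect block) are illustrative; real inodes differ.

#define NDIRECT   10                 /* direct pointers in the inode (assumed) */
#define NINDIRECT 1024               /* block numbers per indirect block: 4KB / 4B (assumed) */

int ios_for_block(long n)            /* I/Os to reach block n, inode already in memory */
{
    if (n < NDIRECT)
        return 1;                    /* the data block itself */
    n -= NDIRECT;
    if (n < NINDIRECT)
        return 2;                    /* indirect + data */
    n -= NINDIRECT;
    if (n < (long)NINDIRECT * NINDIRECT)
        return 3;                    /* double indirect + indirect + data */
    return 4;                        /* triple indirect path */
}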
4.3.2: Implementing Directories
Recall that a directory is a mapping that converts file (or subdirectory) names to the files (or subdirectories) themselves.

Trivial File System (CP/M)


• Only one directory in the system.
• Directory entry contains pointers to disk blocks.
• If need more blocks, get another directory entry.

MS-DOS and Windows (FAT)


• Subdirectories supported.
• Directory entry contains metadata such as date and size as well as a pointer to the first block.

Unix
• Each entry contains a name and a pointer to the corresponding inode.
• Metadata is in the inode.
• Early unix had limit of 14 character names.
• Name field now is varying length.
• To go down a level in directory takes two steps: get inode, get file (or subdirectory).
• Do on the blackboard the steps for /allan/gottlieb/courses/os/class-notes.html
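For reference, the steps are roughly as follows (assuming, as in unix, that the inode of the root directory is at a known disk location):
1. Get the inode of /; read its data blocks to find the entry allan and hence allan's inode number.
2. Get the inode of /allan; read its data to find the entry gottlieb.
3. Repeat for courses and then for os.
4. Get the inode of /allan/gottlieb/courses/os; read its data to find class-notes.html and its inode number.
5. Get the inode of class-notes.html; its pointers lead to the data blocks of the file itself.
So the lookup costs two disk accesses per level (fewer when the inodes and directory blocks are cached).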
Homework: 11

================ Start Lecture #10 ================

4.3.3: Shared files (links)


• ``Shared'' files is Tanenbaum's terminology.
• More descriptive would be ``multinamed files''.
• If a file exists, one can create another name for it (quite possibly in another directory).
• This is often called creating a (or another) link to the file.
• Unix has two flavors of links, hard links and symbolic links or symlinks.
• Dos/windows has symlinks, but I don't believe it has hard links.
• These links often cause confusion, but I really believe that the diagrams I created make it all clear.

Hard Links
• Symmetric multinamed files.
• When a hard link is created another name is created for the same file.
• The two names have equal status.
• It is not, I repeat NOT true that one name is the ``real name'' and the other is ``just
a link''.
Start with an empty file system (i.e., just the root directory) and then execute:
cd /
mkdir /A; mkdir /B
touch /A/X; touch /B/Y

We have the situation shown on the right.


Note that names are on edges not nodes. When there are no multinamed files, it doesn't much matter.
Now execute
ln /B/Y /A/New

This gives the new diagram to the right.


At this point there are two equally valid names for the right hand yellow file, /B/Y and
/A/New. The fact that /B/Y was created first is NOT detectable.
• Both point to the same inode.
• Only one owner (the one who created the file initially).
• One date, one set of permissions, one ... .

Assume Bob created /B and /B/Y and Alice created /A, /A/X, and /A/New. Later Bob tires
of /B/Y and removes it by executing
rm /B/Y

The file /A/New is still fine (see third diagram on the right). But it is owned by Bob, who
can't find it! If the system enforces quotas, Bob will likely be charged (as the owner), but he
can neither find nor delete the file (since Bob cannot unlink, i.e. remove, files from /A).
Since hard links are only permitted to files (not directories), the resulting file system is a dag (directed acyclic graph). That is, there are
no directed cycles. We will now proceed to give away this useful property by studying symlinks, which can point to directories.

Symlinks
• Asymmetric multinamed files.
• When a symlink is created another file is created, one that points to the original file.
Again start with an empty file system and this time execute
cd /
mkdir /A; mkdir /B
touch /A/X; touch /B/Y
ln -s /B/Y /A/New

We now have an additional file /A/New, which is a symlink to /B/Y.


• The file named /A/New has the name /B/Y as its data (not metadata).
• The system notices that /A/New is a symlink (drawn as a diamond in the diagrams), so reading /A/New will
return the contents of /B/Y (assuming the reader has read permission for /B/Y).
• If /B/Y is removed, /A/New becomes invalid.
• If a new /B/Y is created, /A/New is once again valid.
• Removing /A/New has no effect on /B/Y.
• If a user has write permission for /B/Y, then writing /A/New is possible and writes /B/Y.
The bottom line is that, with a hard link, a new name is created that has equal status to the original name. This can cause some
surprises (e.g., you create a link but I own the file). With a symbolic link a new file is created (owned by the creator naturally) that
points to the original file.
Question: Consider the hard link setup above. If Bob removes /B/Y and then creates another /B/Y, what happens to /A/New?
Answer: Nothing. /A/New is still a file with the same contents as the original /B/Y.
Question: What about with a symlink?
Answer: /A/New becomes invalid and then valid again, this time pointing to the new /B/Y. (It
can't point to the old /B/Y as that is completely gone.)
What about symlinking a directory?
cd /
mkdir /A; mkdir /B
touch /A/X; touch /B/Y
ln -s /B /A/New

Is there a file named /A/New/Y ?


Yes.
What happens if you execute cd /A/New/.. ?
• Answer: Not clear!
• Clearly you are changing directory to the parent directory of /A/New. But is that /A or /?
• The command interpreter I use offers both possibilities.
• cd -L /A/New/.. takes you to /A (L for logical).
• cd -P /A/New/.. takes you to / (P for physical).
• cd /A/New/.. takes you to /A (logical is the default).
What did I mean when I said the pictures made it all clear?
Answer: From the file system perspective it is clear. Not always so clear what programs will do.

4.3.4: Disk space management


All general purpose systems use a (non-demand) paging algorithm for file storage. Files are broken into fixed size pieces, called
blocks, that can be scattered over the disk. Note that although this is paging, it is never called paging.
The file is completely stored on the disk, i.e., it is not demand paging.
• Actually it is more complicated as various optimizations are performed to try to have consecutive blocks of a single file stored
consecutively on the disk.
• One can imagine systems that store only parts of the file on disk with the rest on tertiary storage (some kind of tape).
• This would be just like demand paging.
• Perhaps NASA does this with their huge datasets.
• Caching (as done for example in microprocessors) is also the same as demand paging.
• We unify these concepts in the computer architecture course.

Choice of block size


• We discussed this before when studying page size.
• Current commodity disk characteristics (not for laptops) result in about 15ms to transfer the first byte and 10K bytes per ms
for subsequent bytes (if contiguous).
• We will explain the following terms in the I/O chapter.
• Rotation rate is 5400, 7200, or 10,000 RPM (15K just now available).
• Recall that 6000 RPM is 100 rev/sec or one rev per 10ms. So half a rev (the average time to rotate to a given point)
is 5ms.
• Transfer rates around 10MB/sec = 10KB/ms.
• Seek time around 10ms.
• This favors large blocks, 100KB or more.
• But the internal fragmentation would be severe since many files are small.
• Multiple block sizes have been tried as have techniques to try to have consecutive blocks of a given file near each other.
• Typical block sizes are 4KB and 8KB.
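A quick calculation with the numbers above shows why. Reading a 4KB block costs roughly 15ms + 0.4ms, an effective rate of about
0.26MB/sec; reading a 100KB block costs roughly 15ms + 10ms, about 4MB/sec; a 1MB block gets about 8.7MB/sec. The fixed 15ms
start-up cost dominates small blocks, which is why large blocks win on transfer rate, with internal fragmentation the countervailing cost.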

Storing free blocks


There are several possibilities.
• An in-memory bit map.
• One bit per block
• If blocksize=4K, 1 bit per 32K bits
• So 32GB disk (potentially all free) needs 1MB ram
• Bit map paged in.
• Linked list with each free block pointing to next: Extra disk access per block.
• Linked list with links stored contiguously, i.e. an array of pointers to free blocks. Store this in free blocks and keep one in
memory.
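
A sketch of the bit map alternative in code; the sizes follow the 32GB/4KB example above, and the linear scan is for clarity (a real system would remember where it last looked).

#include <stdint.h>

#define NBLOCKS (8 * 1024 * 1024)    /* 32GB disk / 4KB blocks */

static uint8_t bitmap[NBLOCKS / 8];  /* the 1MB of ram; bit set means block is free */

int alloc_block(void)                /* returns a free block number, or -1 */
{
    for (int i = 0; i < NBLOCKS; i++)
        if (bitmap[i / 8] & (1 << (i % 8))) {
            bitmap[i / 8] &= ~(1 << (i % 8));   /* mark it in use */
            return i;
        }
    return -1;                       /* disk is full */
}

void free_block(int i)
{
    bitmap[i / 8] |= 1 << (i % 8);   /* mark it free again */
}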

4.3.5: File System reliability

Bad blocks on disks


Not so much of a problem now. Disks are more reliable and, more importantly, disks take care of the bad blocks themselves. That is,
there is no OS support needed to map out bad blocks. But if a block does go bad, the data on it is usually lost.

Backups
All modern systems support full and incremental dumps.
• A level 0 dump is called a full dump (i.e., it dumps everything).
• A level n dump (n>0) is called an incremental dump and the standard unix utility dumps all files that have changed since the
previous level n-1 dump.
• Other dump utilities dump all files that have changed since the last level n dump.
• Keep on the disk the dates of the most recent level i dumps for all i. In Unix this is traditionally in /etc/dumpdates.
• What about the nodump attribute?
• Default policy (for Linux at least) is to dump such files anyway when doing a full dump, but not dump them for
incremental dumps.
• Another way to say this is that the nodump attribute is honored for level n dumps if n>0.
• The dump command has an option to override the default policy (can specify k so that nodump is honored for level n
dumps if n>k).
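
For example (one common scheme, not the only possibility): do a level 0 dump monthly and a level 1 dump nightly. Each nightly dump
then contains everything changed since the monthly full dump, and a full restore needs just two dumps, the level 0 plus the most recent
level 1. Adding a weekly level 1 and making the nightly dumps level 2 shrinks the nightly dumps at the cost of a three-dump restore.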

Consistency
• Fsck (file system check) and chkdsk (check disk)
• If the system crashed, it is possible that not all metadata was written to disk. As a result the file system may be
inconsistent. These programs check, and often correct, inconsistencies.
• Scan all inodes (or fat) to check that each block is in exactly one file, or on the free list, but not both.
• Also check that the number of links to each file (part of the metadata in the file's inode) is correct (by looking at all
directories).
• Other checks as well.
• Offers to ``fix'' the errors found (for most errors).
• ``Journaling'' file systems
• An idea from database theory (transaction logs).
• Eliminates the need for fsck.
• NTFS has had journaling from day 1.
• Many Unix systems have it. IBM's AIX converted to journaling in the early 90s.
• Linux does not yet have journaling, a serious shortcoming. It is under very active development.
• FAT does not have journaling.

4.3.6: File System Performance

Buffer cache or block cache


An in-memory cache of disk blocks
• Demand paging again!
• Clearly good for reads as it is much faster to read memory than to read a disk.
• What about writes?
• Must update the buffer cache (otherwise subsequent reads will return the old value).
• The major question is whether the system should also update the disk block; a sketch of the two policies follows this list.
• The simplest alternative is write through, in which each write is performed at the disk before it is declared complete.
• Since floppy disk drivers adopt a write through policy, one can remove a floppy as soon as an operation is
complete.
• Write through results in heavy I/O write traffic.
• If a block is written many times, all the writes are sent to the disk. Only the last one was ``needed''.
• If a temporary file is created, written, read, and deleted, all the disk writes were wasted.
• DOS uses write through.
• The other alternative is write back in which the disk is not updated until the in-memory copy is evicted (i.e., the
replacement question).
• Much less write traffic than write through.
• Trouble if a crash occurs.
• Used by Unix and others for hard disks.
• Can write dirty blocks periodically, say every minute. This limits the possible damage, but also the possible
gain.
• Ordered writes. Do not write a block containing pointers until the block pointed to has been written. This matters especially
if the block pointed to itself contains pointers, since the version of those pointers on disk may be wrong and you would be
giving a file pointers to some random blocks.
• Research in ``log-structured'' file systems tries to make all writes sequential (i.e., writes are treated as if going to a log file).
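The promised sketch of the write path under the two policies; every name here is invented for illustration, and disk_write is a stand-in for the real driver.

#include <string.h>

#define BLOCKSIZE 4096

struct buf {
    int  blockno;
    int  dirty;                      /* meaningful only under write back */
    char data[BLOCKSIZE];
};

enum policy { WRITE_THROUGH, WRITE_BACK };

static void disk_write(int blockno, const char *data)
{
    (void)blockno; (void)data;       /* a real system calls the disk driver here */
}

void cache_write(struct buf *b, const char *data, enum policy p)
{
    memcpy(b->data, data, BLOCKSIZE);     /* always update the cached copy, or
                                             subsequent reads return the old value */
    if (p == WRITE_THROUGH)
        disk_write(b->blockno, b->data);  /* disk updated on every write */
    else
        b->dirty = 1;                     /* disk updated at eviction or by the
                                             periodic flush */
}

void evict(struct buf *b)
{
    if (b->dirty)
        disk_write(b->blockno, b->data);  /* the delayed ``write back'' */
    b->dirty = 0;
}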
Homework: 12.

4.4: Security
Very serious subject. Could easily be a course in itself. My treatment is very brief.

4.4.1: Security environment


• Accidental data loss
• Fires, floods, etc
• System errors
• Human errors

• Intruders
• Sadly an enormous problem.
• The NYU ``greeting'' no longer includes the word ``welcome'' since that was somehow interpreted as some sort of
license to break in.
• Indeed, the greeting is not friendly.
• It once was.
• Below I have a nasty version from a few years ago.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
WARNING: UNAUTHORIZED PERSONS ........ DO NOT PROCEED
~~~~~~~ ~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~
This computer system is operated by New York University (NYU) and may be
accessed only by authorized users. Authorized users are granted specific,
limited privileges in their use of the system. The data and programs
in this system may not be accessed, copied, modified, or disclosed without
prior approval of NYU. Access and use, or causing access and use, of this
computer system by anyone other than as permitted by NYU are strictly pro-
hibited by NYU and by law and may subject an unauthorized user, including
unauthorized employees, to criminal and civil penalties as well as NYU-
initiated disciplinary proceedings. The use of this system is routinely
monitored and recorded, and anyone accessing this system consents to such
monitoring and recording. Questions regarding this access policy or other
topics should be directed (by e-mail) to comment@nyu.edu or (by phone) to
212-998-3333.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

================ Start Lecture #11 ================

• Privacy
• An enormously serious (societal) subject.

4.4.2: Famous flaws


• Good bathroom reading.
• Trojan horse attack: Planting a program of your choosing in place of a well known program and having an unsuspecting user
execute it.
• Some trivial examples:
• Install a new version of login that does everything normal, but then mails the username and plaintext password to
gottlieb@nyu.edu.
• Put a new version of ls in your home directory and ask the sysadmin for help. ``Hopefully he types ls while in your
directory and has . early in his path''.

4.4.3: The internet worm


• A worm divides itself and sends one portion to another machine.
• Different from a virus (see below).
• The famous internet (Morris) worm exploited silly bugs in unix to crack systems automatically.
• Specifically, it exploited careless use of gets(), which does not check the length of its argument.
• Attacked Sun and Vax unix systems.
• NYU was hit hard; but not our lab, which had IBM RTs.

4.4.4: Generic Security attacks


More bathroom reading

Viruses
• A virus attaches itself to (``infects'') a part of the system so that it remains until explicitly removed. In particular, rebooting the
system does not remove it.
• Attach to an existing program or to a portion of the disk that is used for booting.
• When the virus is run it tries to attach itself to other files.
• Often implemented the same way as a binary patch: Change the first instruction to jump to somewhere where you put the
original first instruction, then your patch, then a jump back to the second instruction.

4.4.5: Design principles for security


More bathroom reading

4.4.6: User authentication

Passwords
• Software to crack passwords is publicly available.
• Use this software for prevention.
• One way to prevent password cracking is to use one time passwords instead: e.g. SecurId.
• Current practice here and elsewhere is that when you telnet to a remote machine, your password is sent in the clear along the
ethernet.
So maybe .rhosts aren't that bad after all.

Physical identification
Opens up a bunch of privacy questions. For example, should we require fingerprinting for entering the subway?
Homework: 15, 16, 19, 24.

4.5: Protection mechanisms


4.5.1: Protection domains
• We distinguish between objects, which are passive, and subjects, which are active.
• For example, processes (subjects) examine files (objects).
• Protection domain: A collection of (object, rights) pairs.
• At any given time a subject is given a protection domain that specifies its rights.
• In Unix a subject's domain is determined by its (uid, gid) (and whether it is in kernel mode).
• Generates a matrix called the protection or permission matrix.
• Each row corresponds to a domain (i.e. a subject at some time).
• Each column corresponds to an object (e.g., a file or device).
• Each entry gives the rights the domain/subject has on this object.
• Can model Unix suid/sgid by permitting columns whose headings are domains; the only right possible in the
corresponding entries is enter. If this right is present, the subject corresponding to the row can s[ug]id to the new domain,
which corresponds to the column.

4.5.2: Access Control Lists (ACLs)


Keep the columns of the matrix separate and drop the null entries.

4.5.3: Capabilities
Keep the rows of the matrix separate and drop the null entries.
4.5.4: Protection models
Give objects and subjects security levels and enforce:
• A subject may read only those objects whose level is at or below her own.
• A subject may write only those objects whose level is at or above her own.
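These two rules (essentially the Bell-LaPadula model) are easy to state in code. Levels here are just integers, with higher meaning more secret.

int may_read(int subject_level, int object_level)
{
    return object_level <= subject_level;   /* read at or below your own level */
}

int may_write(int subject_level, int object_level)
{
    return object_level >= subject_level;   /* write at or above your own level */
}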

4.5.5: Covert channels


The bad guys are getting smart and use other means of getting out information. For example, give good service to signal a zero and
bad service to signal a one. The figure of merit is the rate at which bits can be sent, i.e. the bandwidth of the covert channel.
Homework: 20.
