Operating System Ch10
• Background
• Demand Paging
• Performance of Demand Paging
• Page Replacement
• Page-Replacement Algorithms
Background
• In Chapter 8, we discussed various memory-management
strategies.
• All these strategies have the same goal: to keep many processes in
memory simultaneously to allow multiprogramming.
• Virtual memory is a technique that allows the execution of
processes that may not be completely in memory.
• An advantage of virtual memory is that programs can be larger than
physical memory.
• In many cases, the entire program does not need to be in physical
memory (examples):
– Programs have code to handle unusual error conditions.
– Arrays, lists, and tables are often allocated more memory than
they actually need.
– Certain options and features of a program may be used rarely.
Background (Cont.)
• The ability to execute a program that is only partially in
memory would have many benefits:
– A program would no longer be constrained by the
amount of physical memory that is available.
– Because each user program could take less physical
memory, more programs could be run at the same
time, with an increase in CPU utilization and
throughput, but with no increase in response time or
turnaround time.
– Less I/O would be needed to load or swap each user
program into memory, so each user program would
run faster.
Background (Cont.)
• Virtual memory is the separation of user logical memory
from physical memory. This separation allows an
extremely large virtual memory to be provided for
programmers when only a smaller physical memory is
available.
– Only part of the program needs to be in memory for
execution.
– Logical address space can therefore be much larger
than physical address space.
– Need to allow pages to be swapped in and out.
• Virtual memory can be implemented via:
– Demand paging
– Demand segmentation
Diagram: virtual memory larger than physical memory
[Figure: virtual pages (page 0 through page n) are mapped through a
memory map into physical memory; the remaining pages are kept in
disk space]
Operating System Concepts 10.5
Demand Paging
• A demand-paging system is similar to a paging system with
swapping.
• We use a lazy swapper: a lazy swapper never swaps a page
into memory unless that page will be needed.
• Since we swap pages rather than entire processes, we call it a
pager instead of a swapper.
• Since the pager swaps in only the pages that need to be in
physical memory (not the entire process), this yields:
– Less swap time
– Less I/O needed
– Less physical memory needed
– More users
– Faster response time
Transfer of a Paged Memory to Contiguous Disk Space
[Figure: pages of program A are swapped out of main memory to
numbered blocks (0 to 8) of contiguous disk space, and pages of
program B are swapped in]
Valid-Invalid Bit
• We need hardware support to distinguish between the pages that
are in memory and the pages that are on the disk.
• The valid-invalid bit scheme can be used:
– Valid: indicates that the associated page is both legal
and in memory.
– Invalid: indicates that the page either is not valid (not in the
logical address space) or is valid but is currently on the
disk.
• What happens if the process tries to use a page that was
not brought into memory?
• Access to a page marked invalid causes a page-fault trap.
Valid-Invalid Bit (Cont.)
• With each page-table entry a valid–invalid bit is associated
(1 ⇒ in-memory, 0 ⇒ not-in-memory).
• Initially, the valid–invalid bit is set to 0 on all entries.
• Example of a page-table snapshot:
[Figure: a page table whose first four entries are valid (bit 1, with
frame numbers) and whose remaining entries are invalid (bit 0)]
• During address translation, if valid–invalid bit in page table
entry is 0 ⇒ page fault.
Valid-Invalid Bit (Example)
[Figure: a logical memory of eight pages A to H. The page table marks
pages 0, 2, and 5 valid (v), mapped to frames 4, 6, and 9 of physical
memory, which hold A, C, and F; the other entries are invalid (i).
Copies of all pages reside in disk space]
Steps in Handling a Page Fault
(1) The process references a page (e.g., load M) whose page-table
entry is marked invalid (i).
(2) The reference traps to the operating system.
(3) The OS finds the desired page on disk and locates a free frame.
(4) The missing page is brought into the free frame.
(5) The page table is reset to point to the new frame, and the
entry is marked valid.
(6) The instruction interrupted by the trap is restarted.
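These steps can be sketched in a few lines of Python. The data structures here (a page table mapping page number to a (frame, valid) pair, a `disk` dict standing in for backing store, and a free-frame list) are illustrative assumptions, not from the slides:

```python
# Illustrative sketch of the page-fault steps; page_table maps a page
# number to a (frame, valid) pair, and `disk` stands in for backing store.

def access(page, page_table, memory, disk, free_frames):
    frame, valid = page_table[page]
    if valid:
        return memory[frame]              # (1) valid reference: no fault
    # (2) the invalid bit traps to the OS: a page fault
    frame = free_frames.pop()             # (3) locate a free frame
    memory[frame] = disk[page]            # (4) bring in the missing page
    page_table[page] = (frame, True)      # (5) reset the page table
    return memory[frame]                  # (6) restart the reference

page_table = {0: (None, False)}
memory, disk, free_frames = {}, {0: "page-0 contents"}, [0]
print(access(0, page_table, memory, disk, free_frames))  # faults, then loads
print(page_table)  # {0: (0, True)}
```

A second access to page 0 would now find the valid bit set and return directly from memory, with no fault.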
What happens if there is no free frame?
• Page replacement: find some page in memory that is not really in
use and swap it out; this is the subject of the following sections.
Performance of Demand Paging (Cont.)
• We are faced with three major components of the page-
fault service time:
1. Service the page-fault interrupt.
2. Read in the page.
3. Restart the process.
• A typical hard disk has:
– An average latency of 8 milliseconds.
– A seek time of 15 milliseconds.
– A transfer time of 1 millisecond.
– Total paging time = (8 + 15 + 1) = 24 milliseconds,
including hardware and software time, but no
queuing (wait) time.
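These numbers can be plugged into the standard effective-access-time formula, EAT = (1 − p) · memory-access time + p · page-fault time. Note the formula comes from the textbook's performance discussion rather than this slide, and the 100 ns memory-access time and fault rate below are assumed values:

```python
# Estimate demand-paging service time and effective access time (EAT).
# The 8/15/1 ms disk figures come from the slide; the EAT formula and
# the 100 ns / 0.001 numbers below are assumptions for illustration.

def effective_access_time(p, memory_access_ns, fault_time_ns):
    """p is the page-fault rate (probability that a reference faults)."""
    return (1 - p) * memory_access_ns + p * fault_time_ns

latency_ms, seek_ms, transfer_ms = 8, 15, 1
fault_time_ms = latency_ms + seek_ms + transfer_ms   # 24 ms, as on the slide
fault_time_ns = fault_time_ms * 1_000_000

# With a 100 ns memory access and one fault per 1,000 references:
eat = effective_access_time(0.001, 100, fault_time_ns)
print(f"fault service time: {fault_time_ms} ms, EAT: {eat:.1f} ns")
```

Even at one fault per thousand references, the 24 ms fault time dominates: the effective access time is about 24,100 ns, roughly 240 times slower than a fault-free access.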
Page Replacement
• Example: Assume each process contains 10 pages and
uses only 5 pages.
– If we had 40 frames in physical memory then we can
run 8 processes instead of 4 processes. Increasing
the degree of multiprogramming.
– If we run 6 processes, each of which is 10 pages in size but
actually uses only 5 pages, we get higher CPU utilization and
throughput, with 10 frames to spare (6 × 5 = 30 frames
needed out of 40 frames).
– It is possible, however, that each process will try to use all
10 of its pages, resulting in a need for 60 frames when only
40 are available.
The Operating System has Several Options
Page Replacement
(1) Swap the victim page out of physical memory.
(2) Change the victim's page-table entry to invalid (i).
(3) Swap the desired page in.
(4) Reset the page table for the new page (mark its entry valid, v).
Page-Replacement Algorithms
• We want the page-replacement algorithm with the lowest
page-fault rate.
• We evaluate an algorithm by running it on a particular string of
memory references, called a reference string, and computing
the number of page faults on that string.
• We can generate reference strings by tracing a given system and
recording the address of each memory reference.
• In our examples, the reference string is
1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5.
Graph of Page Faults vs. Number of Frames
[Graph: number of page faults (y-axis) versus number of frames
(x-axis, 1 to 5); the fault count generally decreases as frames are
added]
[Figure: a FIFO queue of pages; new pages enter at the tail and
victims are removed from the front]
Example: FIFO Algorithm
• Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
• Assume 3 frames, initially empty (3 pages can be in memory at a
time per process).
• The first 3 references (1, 2, 3) cause page faults and are brought
into these empty frames.
• 3 frames (contents of each frame over time):
frame 1: 1 4 5
frame 2: 2 1 3        9 page faults
frame 3: 3 2 4
• 4 frames:
frame 1: 1 5 4
frame 2: 2 1 5        10 page faults
frame 3: 3 2
frame 4: 4 3
• More frames, yet more faults: this is Belady's anomaly.
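The two cases above can be checked with a short simulation (the helper name `fifo_faults` is illustrative, not from the slides):

```python
from collections import deque

def fifo_faults(refs, nframes):
    """Count page faults for FIFO replacement on a reference string."""
    frames = deque()          # front = oldest page (next victim)
    faults = 0
    for page in refs:
        if page in frames:
            continue          # hit: FIFO order is not updated
        faults += 1
        if len(frames) == nframes:
            frames.popleft()  # evict the page that arrived first
        frames.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3), fifo_faults(refs, 4))  # 9 10: Belady's anomaly
```

The simulation confirms the slide's counts: 9 faults with 3 frames and 10 faults with 4 frames.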
Optimal (OPT) Algorithm
• An optimal algorithm has the lowest page-fault rate of
all algorithms.
• An optimal algorithm will never suffer from Belady’s
anomaly.
• Replace the page that will not be used for the longest
period of time.
• This algorithm guarantees the lowest possible page-
fault rate for a fixed number of frames.
• The optimal algorithm is difficult to implement, because
it requires future knowledge of the reference string.
• Similar situation with Shortest-Job-First in CPU
scheduling.
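OPT can still be simulated here because our example reference string is fully known in advance. A sketch (the helper name `opt_faults` is illustrative):

```python
def opt_faults(refs, nframes):
    """Count faults for the optimal algorithm: evict the resident page
    whose next use lies farthest in the future (or never occurs)."""
    frames = set()
    faults = 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) == nframes:
            future = refs[i + 1:]
            victim = max(
                frames,
                key=lambda p: future.index(p) if p in future else len(future),
            )
            frames.remove(victim)
        frames.add(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(opt_faults(refs, 3), opt_faults(refs, 4))  # 7 6
```

On our reference string, OPT incurs only 7 faults with 3 frames and 6 with 4, which is the lower bound any real algorithm can be measured against.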
Least Recently Used (LRU) Algorithm
• LRU replaces the page that has not been used for the longest
period of time.
• Example (4 frames, reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5):
frame 1: 1 5
frame 2: 2            8 page faults
frame 3: 3 5 4
frame 4: 4 3
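The count can be verified with a small simulation (the helper name `lru_faults` is illustrative):

```python
def lru_faults(refs, nframes):
    """Count faults for LRU: evict the least recently used resident page."""
    stack = []                 # front = LRU page, back = most recently used
    faults = 0
    for page in refs:
        if page in stack:
            stack.remove(page) # hit: move the page to the MRU end
        else:
            faults += 1
            if len(stack) == nframes:
                stack.pop(0)   # evict the LRU page
        stack.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(lru_faults(refs, 4))  # 8, matching the example
```

With 4 frames LRU gives 8 faults, better than FIFO's 10 on the same string, and unlike FIFO it never suffers from Belady's anomaly.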
LRU Algorithm Implementations
• LRU algorithm may require hardware assistance; two
implementations:
1. Counter implementation:
– Associate with each page-table entry a time-of-use field, and
add to the CPU a logical clock or counter.
– The clock is incremented for every memory reference.
– Whenever a reference to a page is made, the content of the
clock register is copied to the time-of-use field in the page table
for that page.
– We replace the page with the smallest time value.
– Requires a search of the page table to find LRU page and a
write to memory for each memory access.
– In summary, every page entry has a counter; every time page is
referenced through this entry, copy the clock into the counter.
– When a page needs to be replaced, look at the counters to
determine which page to replace.
Example: LRU Implementation Using a Stack
• Reference string: 4 7 0 7 1 0 1 2 1 2 7 1 2
• Keep a stack of page numbers: whenever a page is referenced, it is
removed from the stack and put on top. The least recently used
page is always at the bottom (the tail pointer).
[Figure: (a) the stack after the first ten references, top to bottom:
2 1 0 7 4; (b) the stack after the next reference, to page 7, top to
bottom: 7 2 1 0 4. The tail pointer marks the LRU page, 4]
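The stack behavior in the figure can be sketched with an ordered mapping (the class name `LRUStack` is illustrative; a doubly linked list is the classic hardware-assisted structure):

```python
from collections import OrderedDict

class LRUStack:
    """Stack of page numbers: most recently used on top (end of the
    ordering), least recently used at the bottom (start)."""

    def __init__(self):
        self.stack = OrderedDict()

    def reference(self, page):
        # Remove the page from the middle, if present, and push it on top.
        self.stack.pop(page, None)
        self.stack[page] = True

    def lru_page(self):
        # The figure's tail pointer: the oldest (first) entry.
        return next(iter(self.stack))

s = LRUStack()
for p in [4, 7, 0, 7, 1, 0, 1, 2, 1, 2]:
    s.reference(p)
print(list(s.stack))  # bottom to top: [4, 7, 0, 1, 2], as in (a)
s.reference(7)
print(list(s.stack))  # bottom to top: [4, 0, 1, 2, 7], as in (b)
```

Note that with this organization the victim is always at a known position, so no search is needed at replacement time, but each reference pays the cost of updating the stack.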
• LRU-approximation schemes (covered next):
– Reference bit.
– Additional reference bits (8 bits).
– Second-chance algorithm.
– Second-chance algorithm's implementation.
– Enhanced second-chance algorithm (2 bits).
Reference Bit
• With each page, associate a reference bit, initially set to 0.
• The hardware sets the bit to 1 whenever the page is referenced.
• The bits tell us which pages have been used, but not the order
of use.
Second-Chance Algorithm (one bit)
• The basic algorithm of second chance is FIFO.
• When a page has been selected, inspect its reference
bit:
1. If the value is 0, replace the page.
2. If the value is 1, give that page a second
chance and move on to select the next FIFO
page.
• When a page gets a second chance:
1. Its reference bit is cleared.
2. Its arrival time is reset to the current time.
3. It will not be replaced until all other pages have been
replaced or given a second chance.
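A sketch of second chance as a circular (clock) scan over the frames. This variant sets the reference bit on load as well as on each hit, which is an assumption; real implementations vary in such details:

```python
def second_chance_faults(refs, nframes):
    """Count faults for second-chance (clock) replacement."""
    frames = [None] * nframes
    ref_bit = [0] * nframes
    hand = 0
    faults = 0
    for page in refs:
        if page in frames:
            ref_bit[frames.index(page)] = 1   # hit: set the reference bit
            continue
        faults += 1
        while ref_bit[hand]:                  # bit is 1: clear it and move on
            ref_bit[hand] = 0
            hand = (hand + 1) % nframes
        frames[hand] = page                   # bit is 0: replace this page
        ref_bit[hand] = 1
        hand = (hand + 1) % nframes
    return faults
```

If every reference bit is set, the hand sweeps the whole circle clearing bits and the algorithm degenerates to plain FIFO.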
Example: Second-Chance Algorithm's Implementation
[Figure: a circular queue of pages with reference bits. As the hand
advances, pages with bit 1 have the bit cleared (a second chance);
the first page found with bit 0 becomes the next victim]
Counting Algorithms
• Keep a counter of the number of references that have been
made to each page, and develop the following two
schemes:
1. LFU (Least Frequently Used) Algorithm:
– Replaces page with smallest count.
– Suffers from the situation in which a page is used
heavily during the initial phase of a process, but then is
never used again.
2. MFU (Most Frequently Used) Algorithm:
– Based on the argument that the page with the smallest
count was probably just brought in and has yet to be
used.
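The two schemes differ only in whether the smallest or largest count is evicted, so one sketch covers both (`counting_faults` is an illustrative name; ties are broken arbitrarily here, and real implementations vary):

```python
from collections import Counter

def counting_faults(refs, nframes, most=False):
    """Count faults for LFU (most=False) or MFU (most=True) replacement."""
    frames = set()
    counts = Counter()               # reference count per page
    faults = 0
    for page in refs:
        counts[page] += 1
        if page in frames:
            continue
        faults += 1
        if len(frames) == nframes:
            pick = max if most else min
            victim = pick(frames, key=lambda p: counts[p])
            frames.remove(victim)
        frames.add(page)
    return faults
```

For example, on the string 1, 1, 2, 3, 2 with 2 frames, LFU keeps the heavily used page 1 and faults 4 times, while MFU evicts it and faults only 3 times; neither scheme is common in practice, since both are expensive and approximate OPT poorly.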