
CHAPTER 1
Introduction

Practice Exercises

1.1

The three main purposes are:

a. To provide an environment for a computer user to execute programs on computer hardware in a convenient and efficient manner.

b. To allocate the separate resources of the computer as needed to solve the problem given. The allocation process should be as fair and efficient as possible.

c. As a control program, it serves two major functions: (1) supervision of the execution of user programs to prevent errors and improper use of the computer, and (2) management of the operation and control of I/O devices.

1.2

Generally, operating systems for batch systems have simpler requirements than those for personal computers. Batch systems do not have to be concerned with interacting with a user as much as a personal computer. As a result, an operating system for a PC must be concerned with response time for an interactive user; batch systems have no such requirement. A pure batch system also may not have to handle time sharing, whereas a PC operating system must switch rapidly between different jobs.

1.3

The four steps are:

a. Reserve machine time.

b. Manually load program into memory.

c. Load starting address and begin execution.

d. Monitor and control execution of program from console.

1.4

Single-user systems should maximize use of the system for the user. A GUI might waste CPU cycles, but it optimizes the user's interaction with the system.

1.5

The main difficulty is keeping the operating system within the fixed time constraints of a real-time system. If the system does not complete a task in a certain time frame, it may cause a breakdown of the entire system it is running. Therefore, when writing an operating system for a real-time system, the writer must be sure that the scheduling schemes do not allow response time to exceed the time constraint.

1.6

Point. Applications such as web browsers and email tools play an increasingly important role in modern desktop computer systems. To fulfill this role, they should be incorporated as part of the operating system. By doing so, they can provide better performance and better integration with the rest of the system. In addition, these important applications can have the same look-and-feel as the operating system software.

Counterpoint. The fundamental role of the operating system is to manage system resources such as the CPU, memory, and I/O devices. In addition, its role is to run software applications such as web browsers and email applications. By incorporating such applications into the operating system, we burden the operating system with additional functionality. Such a burden may result in the operating system performing a less-than-satisfactory job at managing system resources. In addition, we increase the size of the operating system, thereby increasing the likelihood of system crashes and security violations.

1.7

The distinction between kernel mode and user mode provides a rudimentary form of protection in the following manner. Certain instructions
could be executed only when the CPU is in kernel mode. Similarly, hardware devices could be accessed only when the program is executing in
kernel mode. Control over when interrupts could be enabled or disabled
is also possible only when the CPU is in kernel mode. Consequently, the
CPU has very limited capability when executing in user mode, thereby
enforcing protection of critical resources.

1.8

The following operations need to be privileged: set value of timer, clear memory, turn off interrupts, modify entries in device-status table, access I/O device. The rest can be performed in user mode.

1.9

The data required by the operating system (passwords, access controls, accounting information, and so on) would have to be stored in or passed through unprotected memory and thus be accessible to unauthorized users.

1.10

Although most systems only distinguish between user and kernel modes,
some CPUs have supported multiple modes. Multiple modes could be
used to provide a finer-grained security policy. For example, rather than
distinguishing between just user and kernel mode, you could distinguish between different types of user mode. Perhaps users belonging to
the same group could execute each other's code. The machine would go
into a specified mode when one of these users was running code. When
the machine was in this mode, a member of the group could run code
belonging to anyone else in the group.

Another possibility would be to provide different distinctions within kernel code. For example, a specific mode could allow USB device drivers to run. This would mean that USB devices could be serviced without having to switch to kernel mode, thereby essentially allowing USB device drivers to run in a quasi-user/kernel mode.

1.11

A program could use the following approach to compute the current time using timer interrupts. The program could set a timer for some time in the future and go to sleep. When it is awakened by the interrupt, it could update its local state, which it is using to keep track of the number of interrupts it has received thus far. It could then repeat this process of continually setting timer interrupts and updating its local state when the interrupts are actually raised.
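
A minimal user-level sketch of this approach, assuming a POSIX system with setitimer() and SIGALRM; the 10 ms tick period and the running printout are illustrative choices, not part of the exercise:

    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/time.h>
    #include <unistd.h>

    static volatile sig_atomic_t ticks = 0;   /* local state: timer interrupts seen so far */

    static void on_tick(int sig)
    {
        (void)sig;
        ticks++;                              /* update local state on each interrupt */
    }

    int main(void)
    {
        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_handler = on_tick;
        sigaction(SIGALRM, &sa, NULL);

        /* Request a periodic timer interrupt every 10 ms. */
        struct itimerval tv = { { 0, 10000 }, { 0, 10000 } };
        setitimer(ITIMER_REAL, &tv, NULL);

        for (;;) {
            pause();                          /* sleep until awakened by the interrupt */
            /* Time since program start, reconstructed from the tick count. */
            printf("elapsed: about %ld ms\n", (long)ticks * 10);
        }
    }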

1.12

The Internet is a WAN as the various computers are located at geographically different places and are connected by long-distance network links.

CHAPTER 2
Operating-System Structures

Practice Exercises

2.1

System calls allow user-level processes to request services of the operating system.
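
For instance, a user process might request the kernel's file-output service through the POSIX write() system call. A minimal sketch, assuming a Unix-like system (the message text is only illustrative):

    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        const char *msg = "hello via a system call\n";
        /* write() traps into the kernel, which performs the I/O on the process's behalf. */
        write(STDOUT_FILENO, msg, strlen(msg));
        return 0;
    }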

2.2

The five major activities are:

a. The creation and deletion of both user and system processes

b. The suspension and resumption of processes

c. The provision of mechanisms for process synchronization

d. The provision of mechanisms for process communication

e. The provision of mechanisms for deadlock handling

2.3

The three major activities are:

a. Keep track of which parts of memory are currently being used and by whom.

b. Decide which processes are to be loaded into memory when memory space becomes available.

c. Allocate and deallocate memory space as needed.

2.4

The three major activities are:

Free-space management.
Storage allocation.
Disk scheduling.

2.5

It reads commands from the user or from a file of commands and executes them, usually by turning them into one or more system calls. It is usually not part of the kernel, since the command interpreter is subject to change.

2.6

In Unix systems, a fork system call followed by an exec system call needs to be performed to start a new process. The fork call clones the currently executing process, while the exec call overlays a new process based on a different executable over the calling process.
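
A minimal sketch of this pattern, assuming a POSIX system (the program being launched, ls, is just an example):

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();                     /* clone the current process */

        if (pid < 0) {
            perror("fork");
            return 1;
        } else if (pid == 0) {
            /* Child: replace this process image with a different executable. */
            execlp("ls", "ls", "-l", (char *)NULL);
            perror("execlp");                   /* reached only if exec fails */
            _exit(1);
        } else {
            /* Parent: wait for the child to terminate. */
            int status;
            waitpid(pid, &status, 0);
        }
        return 0;
    }
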
2.7

System programs can be thought of as bundles of useful system calls. They provide basic functionality to users so that users do not need to write their own programs to solve common problems.

2.8

As in all cases of modular design, designing an operating system in a modular way has several advantages. The system is easier to debug and modify because changes affect only limited sections of the system rather than touching all sections of the operating system. Information is kept only where it is needed and is accessible only within a defined and restricted area, so any bugs affecting that data must be limited to a specific module or layer.

2.9

The five services are:

a. Program execution. The operating system loads the contents (or sections) of a file into memory and begins its execution. A user-level program could not be trusted to properly allocate CPU time.

b. I/O operations. Disks, tapes, serial lines, and other devices must be communicated with at a very low level. The user need only specify the device and the operation to perform on it, while the system converts that request into device- or controller-specific commands. User-level programs cannot be trusted to access only devices they should have access to and to access them only when they are otherwise unused.

c. File-system manipulation. There are many details in file creation, deletion, allocation, and naming that users should not have to perform. Blocks of disk space are used by files and must be tracked. Deleting a file requires removing the name and file information and freeing the allocated blocks. Protections must also be checked to assure proper file access. User programs could neither ensure adherence to protection methods nor be trusted to allocate only free blocks and deallocate blocks on file deletion.

d. Communications. Message passing between systems requires messages to be turned into packets of information, sent to the network controller, transmitted across a communications medium, and reassembled by the destination system. Packet ordering and data correction must take place. Again, user programs might not coordinate access to the network device, or they might receive packets destined for other processes.

e. Error detection. Error detection occurs at both the hardware and software levels. At the hardware level, all data transfers must be inspected to ensure that data have not been corrupted in transit. All data on media must be checked to be sure they have not changed since they were written to the media. At the software level, media must be checked for data consistency; for instance, whether the number of allocated and unallocated blocks of storage matches the total number on the device. Such errors are frequently process-independent (for instance, the corruption of data on a disk), so there must be a global program (the operating system) that handles all types of errors. Also, by having errors processed by the operating system, processes need not contain code to catch and correct all the errors possible on a system.

2.10

For certain devices, such as handheld PDAs and cellular telephones, a disk with a file system may not be available for the device. In this situation, the operating system must be stored in firmware.

2.11

Consider a system that would like to run both Windows XP and three different distributions of Linux (e.g., RedHat, Debian, and Mandrake). Each operating system will be stored on disk. During system boot-up, a special program (which we will call the boot manager) will determine which operating system to boot into. This means that rather than initially booting to an operating system, the boot manager will first run during system startup. It is this boot manager that is responsible for determining which system to boot into. Typically, boot managers must be stored at certain locations of the hard disk to be recognized during system startup. Boot managers often provide the user with a selection of systems to boot into; boot managers are also typically designed to boot into a default operating system if no choice is selected by the user.

CHAPTER 3
Processes

Practice Exercises

3.1

a. A method of time sharing must be implemented to allow each of several processes to have access to the system. This method involves the preemption of processes that do not voluntarily give up the CPU (by using a system call, for instance) and the kernel being reentrant (so more than one process may be executing kernel code concurrently).

b. Processes and system resources must have protections and must be protected from each other. Any given process must be limited in the amount of memory it can use and the operations it can perform on devices like disks.

c. Care must be taken in the kernel to prevent deadlocks between processes, so processes aren't waiting for each other's allocated resources.

3.2

The CPU current-register-set pointer is changed to point to the set containing the new context, which takes very little time. If the context is
in memory, one of the contexts in a register set must be chosen and be
moved to memory, and the new context must be loaded from memory
into the set. This process takes a little more time than on systems with
one set of registers, depending on how a replacement victim is selected.

3.3

Only the shared memory segments are shared between the parent process and the newly forked child process. Copies of the stack and the
heap are made for the newly created process.
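
A minimal sketch illustrating this on a POSIX system that supports anonymous shared mappings (the variable names and values are only illustrative):

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        /* Shared segment: visible to both parent and child after fork(). */
        int *shared = mmap(NULL, sizeof(int), PROT_READ | PROT_WRITE,
                           MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        int private_copy = 0;    /* ordinary data: the child gets its own copy */

        *shared = 0;
        if (fork() == 0) {
            *shared = 42;        /* parent will see this change */
            private_copy = 42;   /* parent will NOT see this change */
            _exit(0);
        }
        wait(NULL);
        printf("shared = %d, private_copy = %d\n", *shared, private_copy);
        /* Expected output: shared = 42, private_copy = 0 */
        return 0;
    }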

3.4

The "exactly once" semantics ensure that a remote procedure will be executed exactly once and only once. The general algorithm for ensuring this combines an acknowledgment (ACK) scheme with timestamps (or some other incremental counter that allows the server to distinguish between duplicate messages).

The general strategy is for the client to send the RPC to the server along with a timestamp. The client will also start a timeout clock. The client will then wait for one of two occurrences: (1) it will receive an ACK from the server indicating that the remote procedure was performed, or (2) it will time out. If the client times out, it assumes the server was unable to perform the remote procedure, so the client invokes the RPC a second time, sending a later timestamp. The client may not receive the ACK for one of two reasons: (1) the original RPC was never received by the server, or (2) the RPC was correctly received and performed by the server but the ACK was lost. In situation (1), the use of ACKs allows the server ultimately to receive and perform the RPC. In situation (2), the server will receive a duplicate RPC and it will use the timestamp to identify it as a duplicate so as not to perform the RPC a second time. It is important to note that the server must send a second ACK back to the client to inform the client that the RPC has been performed.
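
A small sketch of the server-side duplicate detection this describes, with no real networking; the request timestamps, the table size, and the do_work() helper are hypothetical, invented only for illustration:

    #include <stdbool.h>
    #include <stdio.h>

    #define MAX_SEEN 128              /* illustrative table size */

    static long seen[MAX_SEEN];       /* timestamps of RPCs already performed */
    static int  seen_count = 0;

    static bool already_performed(long ts)
    {
        for (int i = 0; i < seen_count; i++)
            if (seen[i] == ts)
                return true;
        return false;
    }

    static void do_work(long ts)      /* hypothetical remote procedure body */
    {
        printf("performing RPC with timestamp %ld\n", ts);
    }

    /* Called for every incoming RPC message. Always sends an ACK, but runs
       the procedure only the first time a given timestamp is seen. */
    static void handle_rpc(long ts)
    {
        if (!already_performed(ts)) {
            do_work(ts);
            if (seen_count < MAX_SEEN)
                seen[seen_count++] = ts;
        }
        printf("ACK %ld\n", ts);      /* re-sent even for duplicates, in case the first ACK was lost */
    }

    int main(void)
    {
        handle_rpc(1001);
        handle_rpc(1001);             /* duplicate: ACKed again but not re-executed */
        handle_rpc(1002);
        return 0;
    }
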
3.5

The server should keep track in stable storage (such as a disk log) of information regarding what RPC operations were received, whether they were successfully performed, and the results associated with the operations. When a server crash takes place and an RPC message is received, the server can check whether the RPC had been previously performed and therefore guarantee "exactly once" semantics for the execution of RPCs.

CHAPTER 4
Threads

Practice Exercises

4.1

(1) A Web server that services each request in a separate thread. (2) A parallelized application such as matrix multiplication, where different parts of the matrix may be worked on in parallel. (3) An interactive GUI program such as a debugger, where a thread is used to monitor user input, another thread represents the running application, and a third thread monitors performance.

4.2

(1) User-level threads are unknown by the kernel, whereas the kernel
is aware of kernel threads. (2) On systems using either M:1 or M:N
mapping, user threads are scheduled by the thread library and the kernel
schedules kernel threads. (3) Kernel threads need not be associated with
a process whereas every user thread belongs to a process. Kernel threads
are generally more expensive to maintain than user threads as they must
be represented with a kernel data structure.

4.3

Context switching between kernel threads typically requires saving the value of the CPU registers from the thread being switched out and restoring the CPU registers of the new thread being scheduled.

4.4

Because a thread is smaller than a process, thread creation typically uses fewer resources than process creation. Creating a process requires allocating a process control block (PCB), a rather large data structure. The PCB includes a memory map, list of open files, and environment variables. Allocating and managing the memory map is typically the most time-consuming activity. Creating either a user or kernel thread involves allocating a small data structure to hold a register set, stack, and priority.
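
A minimal sketch of the lighter-weight thread-creation path, assuming POSIX threads; the worker function and its argument are placeholders. Compile with the platform's threads flag (typically -pthread):

    #include <pthread.h>
    #include <stdio.h>

    static void *worker(void *arg)
    {
        printf("hello from thread %ld\n", (long)arg);
        return NULL;
    }

    int main(void)
    {
        pthread_t tid;
        /* Creates a new thread sharing this process's address space:
           only a small per-thread structure (registers, stack, priority)
           needs to be allocated, not a whole new PCB and memory map. */
        pthread_create(&tid, NULL, worker, (void *)1L);
        pthread_join(tid, NULL);
        return 0;
    }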

4.5

Yes. Timing is crucial to real-time applications. If a thread is marked as real-time but is not bound to an LWP, the thread may have to wait to be attached to an LWP before running. Consider if a real-time thread is running (is attached to an LWP) and then proceeds to block (i.e., must perform I/O, has been preempted by a higher-priority real-time thread, is waiting for a mutual exclusion lock, etc.). While the real-time thread is blocked, the LWP it was attached to has been assigned to another thread. When the real-time thread has been scheduled to run again, it must first wait to be attached to an LWP. By binding an LWP to a real-time thread, you are ensuring the thread will be able to run with minimal delay once it is scheduled.

4.6

Please refer to the supporting Web site for source code solution.

CHAPTER 5
CPU Scheduling

Practice Exercises

5.1

n! (n factorial; that is, n × (n−1) × (n−2) × ... × 2 × 1).

5.2

Preemptive scheduling allows a process to be interrupted in the midst of its execution, taking the CPU away and allocating it to another process. Nonpreemptive scheduling ensures that a process relinquishes control of the CPU only when it finishes with its current CPU burst.

5.3

a. 10.53

b. 9.53

c. 6.86

Remember that turnaround time is finishing time minus arrival time, so you have to subtract the arrival times to compute the turnaround times. FCFS is 11 if you forget to subtract arrival time.

5.4

Processes that need more frequent servicing, for instance, interactive processes such as editors, can be in a queue with a small time quantum. Processes with no need for frequent servicing can be in a queue with a larger quantum, requiring fewer context switches to complete the processing, and thus making more efficient use of the computer.

5.5

a. The shortest job has the highest priority.

b. The lowest level of the MLFQ is FCFS.

c. FCFS gives the highest priority to the job that has been in existence the longest.

d. None.

5.6

It will favor the I/O-bound programs because of the relatively short CPU bursts requested by them; however, the CPU-bound programs will not starve, because the I/O-bound programs will relinquish the CPU relatively often to do their I/O.


5.7

PCS scheduling is done local to the process. It is how the thread library schedules threads onto available LWPs. SCS scheduling is the situation where the operating system schedules kernel threads. On systems using either many-to-one or many-to-many, the two scheduling models are fundamentally different. On systems using one-to-one, PCS and SCS are the same.
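
With POSIX threads, the contention scope can be requested per thread. A minimal sketch; whether PTHREAD_SCOPE_PROCESS is actually supported depends on the system (Linux, for example, only supports system scope):

    #include <pthread.h>
    #include <stdio.h>

    static void *worker(void *arg)
    {
        (void)arg;
        return NULL;
    }

    int main(void)
    {
        pthread_attr_t attr;
        pthread_t tid;

        pthread_attr_init(&attr);

        /* PCS: compete with threads of the same process (many-to-one / many-to-many). */
        if (pthread_attr_setscope(&attr, PTHREAD_SCOPE_PROCESS) != 0)
            printf("process contention scope not supported here\n");

        /* SCS: compete with all threads in the system (one-to-one). */
        pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);

        pthread_create(&tid, &attr, worker, NULL);
        pthread_join(tid, NULL);
        pthread_attr_destroy(&attr);
        return 0;
    }
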
5.8

Yes, otherwise a user thread may have to compete for an available LWP
prior to being actually scheduled. By binding the user thread to an LWP,
there is no latency while waiting for an available LWP; the real-time user
thread can be scheduled immediately.

CHAPTER 6
Process Synchronization

Practice Exercises

6.1

The system clock is updated at every clock interrupt. If interrupts were disabled, particularly for a long period of time, it is possible that the system clock could easily lose the correct time. The system clock is also used for scheduling purposes. For example, the time quantum for a process is expressed as a number of clock ticks. At every clock interrupt, the scheduler determines if the time quantum for the currently running process has expired. If clock interrupts were disabled, the scheduler could not accurately assign time quanta. This effect can be minimized by disabling clock interrupts for only very short periods.

6.2

Please refer to the supporting Web site for source code solution.

6.3

These operating systems provide different locking mechanisms depending on the application developers' needs. Spinlocks are useful for multiprocessor systems where a thread can run in a busy loop (for a short period of time) rather than incurring the overhead of being put in a sleep queue. Mutexes are useful for locking resources. Solaris 2 uses adaptive mutexes, meaning that the mutex is implemented with a spin lock on multiprocessor machines. Semaphores and condition variables are more appropriate tools for synchronization when a resource must be held for a long period of time, since spinning is inefficient for a long duration.
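
A small sketch contrasting the two POSIX primitives; the protected counter is just an example, and pthread_spin_* availability depends on the platform's POSIX spin-lock support:

    #include <pthread.h>
    #include <stdio.h>

    static pthread_spinlock_t spin;                          /* busy-waits: suits very short critical sections */
    static pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;  /* may sleep: suits longer-held resources */
    static long counter = 0;

    static void bump_with_spinlock(void)
    {
        pthread_spin_lock(&spin);     /* contending threads spin briefly instead of sleeping */
        counter++;
        pthread_spin_unlock(&spin);
    }

    static void bump_with_mutex(void)
    {
        pthread_mutex_lock(&mtx);     /* contending threads are put to sleep */
        counter++;
        pthread_mutex_unlock(&mtx);
    }

    int main(void)
    {
        pthread_spin_init(&spin, PTHREAD_PROCESS_PRIVATE);
        bump_with_spinlock();
        bump_with_mutex();
        printf("counter = %ld\n", counter);
        pthread_spin_destroy(&spin);
        return 0;
    }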

6.4

Volatile storage refers to main and cache memory and is very fast. However, volatile storage cannot survive system crashes or powering down the system. Nonvolatile storage survives system crashes and powered-down systems. Disks and tapes are examples of nonvolatile storage. Recently, USB devices using erasable programmable read-only memory (EPROM) have appeared, providing nonvolatile storage. Stable storage refers to storage that technically can never be lost, as there are redundant backup copies of the data (usually on disk).

6.5

A checkpoint log record indicates that a log record and its modified data have been written to stable storage and that the transaction need not be redone in case of a system crash. Obviously, the more often checkpoints are performed, the less likely it is that redundant updates will have to be performed during the recovery process.

System performance when no failure occurs: If no failures occur, the system must incur the cost of performing checkpoints that are essentially unnecessary. In this situation, performing checkpoints less often will lead to better system performance.

The time it takes to recover from a system crash: The existence of a checkpoint record means that an operation will not have to be redone during system recovery. In this situation, the more often checkpoints were performed, the faster the recovery time is from a system crash.

The time it takes to recover from a disk crash: The existence of a checkpoint record means that an operation will not have to be redone during system recovery. In this situation, the more often checkpoints were performed, the faster the recovery time is from a disk crash.

6.6

A transaction is a series of read and write operations upon some data followed by a commit operation. If the series of operations in a transaction
cannot be completed, the transaction must be aborted and the operations that did take place must be rolled back. It is important that the
series of operations in a transaction appear as one indivisible operation
to ensure the integrity of the data being updated. Otherwise, data could
be compromised if operations from two (or more) different transactions
were intermixed.

6.7

A schedule that is allowed in the two-phase locking protocol but not in the timestamp protocol is:

step    T0            T1            Precedence
1       lock-S(A)
2       read(A)
3                     lock-X(B)
4                     write(B)
5                     unlock(B)
6       lock-S(B)
7       read(B)                     T1 -> T0
8       unlock(A)
9       unlock(B)

This schedule is not allowed in the timestamp protocol because at step 7, the W-timestamp of B is 1.

A schedule that is allowed in the timestamp protocol but not in the two-phase locking protocol is:

step    T0            T1            T2
1       write(A)
2                     write(A)
3                                   write(A)
4                     write(B)
5                                   write(B)

This schedule cannot have lock instructions added to make it legal under the two-phase locking protocol because T1 must unlock(A) between steps 2 and 3, and must lock(B) between steps 4 and 5.
