Operating System 5,6


Slide-05

Operating-System Design and Implementation


There are, of course, no complete solutions to the problems of designing and implementing an
operating system, but there are approaches that have proved successful.
Design goals
The first problem in designing a system is to define goals and specifications. At the highest level, the
design of the system will be affected by the choice of hardware and the type of system: traditional
desktop/laptop, mobile, distributed, or real time.

Beyond this highest design level, the requirements may be much harder to specify. The requirements
can, however, be divided into two basic groups:
1) User goals: Users want certain obvious properties in a system. The system should be
convenient to use, easy to learn, reliable, safe, and fast. Of course, these specifications are not
particularly useful in the system design, since there is no general agreement on how to achieve them.
2) System goals: A similar set of requirements can be defined by the developers who must
design, create, maintain, and operate the system. The system should be easy to design, implement, and
maintain; and it should be flexible, reliable, error free, and efficient. Again, these requirements are vague
and may be interpreted in various ways.
There is, in short, no unique solution to the problem of defining the requirements for an operating
system. The wide range of systems in existence shows that different requirements can result in a
large variety of solutions for different environments. For example, the requirements for Wind
River VxWorks, a real-time operating system for embedded systems, must have been
substantially different from those for Windows Server, a large multiaccess operating system
designed for enterprise applications.

Mechanisms and Policies


One important principle is the separation of policy from mechanism.
--Mechanisms determine how to do something.
--Policies determine what will be done.
For example, the timer construct is a mechanism for ensuring CPU protection, but deciding how
long the timer is to be set for a particular user is a policy decision.
--Difference between mechanisms and policies: The separation of policy and mechanism is
important for flexibility. Policies are likely to change across places or over time. In the worst case,
each change in policy would require a change in the underlying mechanism. A general
mechanism flexible enough to work across a range of policies is preferable. A change in policy
would then require redefinition of only certain parameters of the system. For instance, consider a
mechanism for giving priority to certain types of programs over others. If the mechanism is
properly separated from policy, it can be used either to support a policy decision that I/O-intensive
programs should have priority over CPU-intensive ones or to support the opposite policy. (A code
sketch of this separation follows this list.)
--Different operating systems, commercial and open source alike, use different strategies for
policies and mechanisms. The “standard” Linux kernel has a specific CPU scheduling
algorithm, which is a mechanism that supports a certain policy. However, anyone is
free to modify or replace the scheduler to support a different policy.
--Policy decisions are important for all resource allocation. Whenever it is necessary to decide
whether or not to allocate a resource, a policy decision must be made. Whenever the question is
how rather than what, it is a mechanism that must be determined.
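
To make this separation concrete, here is a minimal C sketch (all names are hypothetical, not taken
from any real kernel): the pick_next() function is the mechanism, knowing only how to select the
next task from a ready list; what "should run first" means is supplied by an interchangeable policy
function.

    #include <stddef.h>
    #include <stdio.h>

    struct task {
        int id;
        int cpu_bound;   /* 1 if CPU-intensive, 0 if I/O-intensive */
    };

    /* Policy: returns nonzero if task a should run before task b. */
    typedef int (*sched_policy)(const struct task *a, const struct task *b);

    /* Mechanism: scans the ready list and returns the preferred task.
     * It works unchanged for any policy passed in. */
    static struct task *pick_next(struct task *ready, size_t n, sched_policy before)
    {
        struct task *best = &ready[0];
        for (size_t i = 1; i < n; i++)
            if (before(&ready[i], best))
                best = &ready[i];
        return best;
    }

    /* Two interchangeable policies. */
    static int io_first(const struct task *a, const struct task *b)
    {
        return a->cpu_bound < b->cpu_bound;   /* favor I/O-intensive tasks */
    }
    static int cpu_first(const struct task *a, const struct task *b)
    {
        return a->cpu_bound > b->cpu_bound;   /* the opposite policy */
    }

    int main(void)
    {
        struct task ready[] = { {1, 1}, {2, 0}, {3, 1} };
        struct task *t = pick_next(ready, 3, io_first);   /* picks task 2 */
        printf("next: task %d\n", t->id);
        t = pick_next(ready, 3, cpu_first);               /* picks task 1 */
        printf("next: task %d\n", t->id);
        return 0;
    }

Switching from the policy that favors I/O-intensive programs to the opposite policy means passing
cpu_first instead of io_first; the mechanism itself is untouched.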

Implementation
Once an operating system is designed, it must be implemented. Because operating systems are
collections of many programs, written by many people over a long period of time, it is difficult to
make general statements about how they are implemented.
--Early operating systems were written in assembly language.
--Now, most are written in higher-level languages such as C or C++, with small amounts of
assembly language. In fact, more than one higher-level language is often used. The lowest levels
of the kernel might be written in assembly language and C. Higher-level routines might be written
in C and C++, and system libraries might be written in C++ or even higher-level languages.
--Example: Android’s kernel is written mostly in C with some assembly language. Most Android
system libraries are written in C or C++, and its application frameworks—which provide the
developer interface to the system—are written mostly in Java.
The advantages of using a higher-level language in implementing an operating system:
The advantages of using a higher-level language, or at least a systems implementation language,
for implementing operating systems are the same as those gained when the language is used for
application programs: the code can be written faster, is more compact, and is easier to
understand and debug.
In addition, improvements in compiler technology will improve the generated code for the entire
operating system by simple recompilation. Finally, an operating system is far easier to port to
other hardware if it is written in a higher-level language. This is particularly important for
operating systems that are intended to run on several different hardware systems, such as small
embedded devices, Intel x86 systems, and ARM chips running on phones and tablets.

The disadvantages of using a higher-level language in implementing an operating system:


The only possible disadvantages of implementing an operating system in a higher-level language
are reduced speed and increased storage requirements. This, however, is not a major issue in
today’s systems. Although an expert assembly-language programmer can produce efficient
small routines, for large programs a modern compiler can perform complex analysis and apply
sophisticated optimizations that produce excellent code. Modern processors have deep
pipelining and multiple functional units that can handle the details of complex dependencies
much more easily than can the human mind.
Operating-System Debugging
Broadly, debugging is the activity of finding and fixing errors in a system, both in hardware and in
software. Performance problems are considered bugs, so debugging can also include
performance tuning, which seeks to improve performance (in terms of time, memory, etc.) by
removing processing bottlenecks.

Failure Analysis
If a process fails, most operating systems write the error information to a log file to alert system
administrators or users that the problem occurred. The operating system can also take a core
dump—a capture of the memory of the process—and store it in a file for later analysis. (Memory
was referred to as the “core” in the early days of computing.) Running programs and core dumps
can be probed by a debugger, which allows a programmer to explore the code and
memory of a process at the time of failure.
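
As a concrete illustration (a minimal sketch, not tied to any particular system), the following C
program deliberately crashes. On Linux, if core dumps are enabled in the shell with
`ulimit -c unlimited`, the kernel captures the process memory when the fault occurs, and a debugger
such as gdb can then be pointed at the binary and the core file (for example, `gdb ./crash core`) to
inspect the code and memory at the moment of failure.

    #include <stdio.h>

    int main(void)
    {
        int *p = NULL;               /* invalid pointer */
        printf("about to crash\n");
        *p = 42;                     /* SIGSEGV: the kernel can take a core dump here */
        return 0;
    }

Compiling with debugging symbols (`gcc -g crash.c -o crash`) lets the debugger map the faulting
address back to this source line.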
--Debugging user-level process code is a challenge. Operating-system kernel debugging is even
more complex because of the size and complexity of the kernel, its control of the hardware, and
the lack of user-level debugging tools. A failure in the kernel is called a crash. When a crash
occurs, error information is saved to a log file, and the memory state is saved to a crash dump.
Tools and techniques of operating system debugging:
Operating-system debugging and process debugging frequently use different tools and
techniques due to the very different nature of these two tasks. Consider that a kernel failure in
the file-system code would make it risky for the kernel to try to save its state to a file on the file
system before rebooting. A common technique is to save the kernel’s memory state to a section
of disk set aside for this purpose that contains no file system. If the kernel detects an
unrecoverable error, it writes the entire contents of memory, or at least the kernel-owned parts of
the system memory, to the disk area. When the system reboots, a process runs to gather the data
from that area and write it to a crash dump file within a file system for analysis. Obviously, such
strategies would be unnecessary for debugging ordinary user-level processes.

Performance Monitoring and Tuning


To identify bottlenecks, we must be able to monitor system performance. Thus, the operating
system must have some means of computing and displaying measures of system behavior. Tools
may be characterized as providing either per-process or system-wide observations. To make
these observations, tools may use one of two approaches—
1) counters
2) tracing
1) Counters: Operating systems keep track of system activity through a series of counters, such
as the number of system calls made or the number of operations performed to a network device
or disk. The following are examples of Linux tools that use counters (a sketch of reading such a
counter directly follows this list):
Per-Process
• ps—reports information for a single process or selection of processes
• top—reports real-time statistics for current processes
System-Wide
• vmstat—reports memory-usage statistics
• netstat—reports statistics for network interfaces
• iostat—reports I/O usage for disks
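
As a minimal sketch of how such counter-based tools work, the C program below reads one set of
kernel-maintained counters directly: the aggregate CPU-time line of Linux's /proc/stat, the same
kind of source that tools like vmstat draw from.

    #include <stdio.h>

    int main(void)
    {
        unsigned long long user, nice, system, idle;
        FILE *f = fopen("/proc/stat", "r");
        if (f == NULL) {
            perror("fopen");
            return 1;
        }
        /* First line looks like: "cpu  user nice system idle ..." (in clock ticks) */
        if (fscanf(f, "cpu %llu %llu %llu %llu", &user, &nice, &system, &idle) == 4)
            printf("user=%llu nice=%llu system=%llu idle=%llu\n",
                   user, nice, system, idle);
        fclose(f);
        return 0;
    }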

2) Tracing:
Whereas counter-based tools simply inquire on the current value of certain statistics that are
maintained by the kernel, tracing tools collect data for a specific event—such as the steps
involved in a system-call invocation. The following are examples of Linux tools that trace events
(a sketch of the underlying mechanism follows this list):
Per-Process
• strace—traces system calls invoked by a process
• gdb—a source-level debugger
System-Wide
• perf—a collection of Linux performance tools
• tcpdump—collects network packets
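
To show what system-call tracing looks like underneath, here is a minimal sketch of the mechanism
a tool like strace relies on, using Linux's ptrace interface (x86-64 is assumed, since the syscall
number is read from orig_rax). The parent stops the child at every system-call boundary and prints
the raw syscall number; a real tracer would also decode names and arguments.

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/ptrace.h>
    #include <sys/user.h>
    #include <sys/wait.h>

    int main(void)
    {
        pid_t child = fork();
        if (child == 0) {
            ptrace(PTRACE_TRACEME, 0, NULL, NULL);   /* let the parent trace us */
            execlp("true", "true", (char *)NULL);    /* the program being traced */
            _exit(1);                                /* reached only if exec fails */
        }
        int status;
        waitpid(child, &status, 0);                  /* child stops at exec */
        while (!WIFEXITED(status)) {
            ptrace(PTRACE_SYSCALL, child, NULL, NULL);   /* run to next syscall stop */
            waitpid(child, &status, 0);
            if (WIFEXITED(status))
                break;
            struct user_regs_struct regs;
            ptrace(PTRACE_GETREGS, child, NULL, &regs);
            printf("syscall %lld\n", (long long)regs.orig_rax);
        }
        return 0;
    }

Note that each system call produces two stops (entry and exit), so each number prints twice in this
sketch.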
BCC
--Debugging the interactions between user-level and kernel code is nearly impossible without a
toolset that understands both sets of code and can instrument their interactions.
--For that toolset to be truly useful, it must be able to debug any area of a system, including
areas that were not written with debugging in mind, and do so without affecting system reliability.
--This toolset must also have a minimal performance impact—ideally it should have no impact
when not in use and a proportional impact during use.

The BCC toolkit meets these requirements and provides a dynamic, secure, low-impact
debugging environment. BCC (BPF Compiler Collection) is a rich toolkit that provides tracing
features for Linux systems. BCC is a front-end interface to the eBPF (extended Berkeley Packet
Filter) tool.
--eBPF programs are written in a subset of C and are compiled into eBPF instructions, which can
be dynamically inserted into a running Linux system.
--The eBPF instructions can be used to capture specific events (such as a certain system call
being invoked) or to monitor system performance (such as the time required to perform disk I/O).
--To ensure that eBPF instructions are well behaved, they are passed through a verifier before
being inserted into the running Linux kernel. The verifier checks to make sure that the instructions
do not affect system performance or security.
--Although eBPF provides a rich set of features for tracing within the Linux kernel, it traditionally
has been very difficult to develop programs using its C interface. BCC was developed to make it
easier to write tools using eBPF by providing a front-end interface in Python. A BCC tool is written
in Python, and it embeds C code that interfaces with the eBPF instrumentation, which in turn
interfaces with the kernel. The BCC tool also compiles the C program into eBPF instructions and
inserts them into the kernel using either probes or tracepoints, two techniques that allow tracing
events in the Linux kernel.
Slide-06
Threads
A thread is a basic unit of CPU utilization; it comprises a thread ID, a program
counter (PC), a register set, and a stack. It shares with other threads belonging
to the same process its code section, data section, and other operating-system
resources, such as open files and signals. A traditional process has a single
thread of control. If a process has multiple threads of control, it can perform
more than one task at a time. Below we highlight a few examples of
multithreaded applications, followed by a short code sketch:
• An application that creates photo thumbnails from a collection of images
may use a separate thread to generate a thumbnail from each separate
image.
• A web browser might have one thread display images or text while another
thread retrieves data from the network.
• A word processor may have a thread for displaying graphics, another
thread for responding to keystrokes from the user, and a third thread for
performing spelling and grammar checking in the background.
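As a minimal POSIX-threads sketch of the browser example above (the thread bodies are
illustrative stand-ins only), the program below runs two tasks concurrently within one process.
Compile with `gcc browser.c -pthread`.

    #include <pthread.h>
    #include <stdio.h>

    static void *render(void *arg)
    {
        (void)arg;
        printf("displaying images and text\n");        /* stand-in for rendering work */
        return NULL;
    }

    static void *download(void *arg)
    {
        (void)arg;
        printf("retrieving data from the network\n");  /* stand-in for network I/O */
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, render, NULL);    /* both threads share this      */
        pthread_create(&t2, NULL, download, NULL);  /* process's code and data       */
        pthread_join(t1, NULL);                     /* wait for both to finish       */
        pthread_join(t2, NULL);
        return 0;
    }

Because both threads belong to the same process, they share its code section, data section, and
open files, exactly as described above.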
Benefits of multithreading
The benefits of multithreaded programming can be broken down into four
major categories:
1. Responsiveness. Multithreading an interactive application may allow a
program to continue running even if part of it is blocked or is performing
a lengthy operation, thereby increasing responsiveness to the user.
This quality is especially useful in designing user interfaces. For instance,
consider what happens when a user clicks a button that results in the
performance of a time-consuming operation. A single-threaded application
would be unresponsive to the user until the operation had been
completed. In contrast, if the time-consuming operation is performed in
a separate, asynchronous thread, the application remains responsive to
the user.
2. Resource sharing. Processes can share resources only through techniques
such as shared memory and message passing. Such techniques must
be explicitly arranged by the programmer. However, threads share the
memory and the resources of the process to which they belong by default.
The benefit of sharing code and data is that it allows an application to
have several different threads of activity within the same address space.
3. Economy. Allocating memory and resources for process creation is
costly. Because threads share the resources of the process to which they
belong, it is more economical to create and context-switch threads.
Empirically gauging the difference in overhead can be difficult, but in
general thread creation consumes less time and memory than process
creation. Additionally, context switching is typically faster between
threads than between processes.
4. Scalability. The benefits of multithreading can be even greater in a multiprocessor
architecture, where threads may be running in parallel on
different processing cores. A single-threaded process can run on only
one processor, regardless of how many are available. We explore this issue
further in the following section.
Multicore programming
Earlier in the history of computer design, in response to the need for more computing
performance, single-CPU systems evolved into multi-CPU systems. A later, yet similar,
trend in system design is to place multiple computing cores on a single processing chip
where each core appears as a separate CPU to the operating system. We refer to such
systems as multicore,
and multithreaded programming provides a mechanism for more efficient use of these
multiple computing cores and improved concurrency. Consider an application with four
threads. On a system with a single computing core, concurrency merely means that the
execution of the threads will be interleaved over time (Figure below), because the
processing core is capable of executing only one thread at a time.

On a system with multiple cores, however, concurrency means that some threads can run
in parallel, because the system can assign a separate thread to each core (Figure below).
Notice the distinction between concurrency and parallelism in this discussion. A concurrent system
supports more than one task by allowing all the tasks to make progress. In contrast, a parallel system can
perform more than one task simultaneously. Thus, it is possible to have concurrency without parallelism.
Before the advent of multiprocessor and multicore architectures, most computer systems had only a
single processor, and CPU schedulers were designed
to provide the illusion of parallelism by rapidly switching between processes, thereby allowing each
process to make progress. Such processes were running concurrently, but not in parallel.
Programming challenges:
The trend toward multicore systems continues to place pressure on system designers and application
programmers to make better use of the multiple computing cores. Designers of operating systems must
write scheduling algorithms that use multiple processing cores to allow the parallel execution shown in
Figure below.

For application programmers, the challenge is to modify existing programs as well as design new programs
that are multithreaded. In general, five areas present challenges in programming for multicore systems:
1. Identifying tasks. This involves examining applications to find areas that can be divided into separate,
concurrent tasks. Ideally, tasks are independent of one another and thus can run in parallel on individual
cores.
2. Balance. While identifying tasks that can run in parallel, programmers must also ensure that the tasks
perform equal work of equal value. In some instances, a certain task may not contribute as much value to
the overall process as other tasks. Using a separate execution core to run that
task may not be worth the cost.
3. Data splitting. Just as applications are divided into separate tasks, the data accessed and
manipulated by the tasks must be divided to run on separate cores.
4. Data dependency. The data accessed by the tasks must be examined for dependencies between
two or more tasks. When one task depends on data from another, programmers must ensure that the
execution of the tasks is synchronized to accommodate the data dependency.
5. Testing and debugging. When a program is running in parallel on multiple cores, many different
execution paths are possible. Testing and debugging such concurrent programs is inherently more
difficult than testing and debugging single-threaded applications.

Because of these challenges, many software developers argue that the advent of multicore
systems will require an entirely new approach to designing software systems in the future. (Similarly,
many computer science educators believe that software development must be taught with increased
emphasis on parallel programming.)
Types of Parallelism:
In general, there are two types of parallelism: data parallelism and task parallelism. Data parallelism
focuses on distributing subsets of the same data across multiple computing cores and performing the
same operation on each core. Consider, for example, summing the contents of an array of size N. On a
single-core system, one thread would simply sum the elements [0] ... [N − 1].
On a dual-core system, however, thread A, running on core 0, could sum the elements [0] ... [N/2 − 1]
while thread B, running on core 1, could sum the elements [N/2] ... [N − 1]. The two threads would be
running in parallel on separate computing cores.
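
A minimal POSIX-threads sketch of this array-summing example follows (compile with
`gcc sum.c -pthread`); on a dual-core system the two worker threads can run in parallel, each on
its own half of the data.

    #include <pthread.h>
    #include <stdio.h>

    #define N 8

    static int data[N] = {1, 2, 3, 4, 5, 6, 7, 8};

    struct range {
        int lo, hi;     /* half-open interval [lo, hi) */
        long sum;       /* partial result for this thread */
    };

    static void *sum_range(void *arg)
    {
        struct range *r = arg;
        r->sum = 0;
        for (int i = r->lo; i < r->hi; i++)
            r->sum += data[i];      /* each thread touches only its own half */
        return NULL;
    }

    int main(void)
    {
        struct range a = {0, N / 2, 0};     /* thread A: elements [0]..[N/2-1] */
        struct range b = {N / 2, N, 0};     /* thread B: elements [N/2]..[N-1] */
        pthread_t ta, tb;
        pthread_create(&ta, NULL, sum_range, &a);
        pthread_create(&tb, NULL, sum_range, &b);
        pthread_join(ta, NULL);
        pthread_join(tb, NULL);
        printf("total = %ld\n", a.sum + b.sum);     /* prints 36 */
        return 0;
    }
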
Task parallelism involves distributing not data but tasks (threads) across multiple computing cores. Each
thread is performing a unique operation. Different threads may be operating on the same data, or
they may be operating on different data. Consider again our example above. In contrast to that situation,
an example of task parallelism might involve two threads, each performing a unique statistical operation
on the array of elements. The threads again are
operating in parallel on separate computing cores, but each is performing a unique operation.
Fundamentally, then, data parallelism involves the distribution of data across multiple cores, and
task parallelism involves the distribution of tasks across multiple cores, as shown in Figure below.
However, data and task parallelism are not mutually exclusive, and an application may in fact use
a hybrid of these two strategies.
Multithreading models
Support for threads may be provided either at the user level, for user threads, or by the
kernel, for kernel threads. User threads are supported above the kernel and
are managed without kernel support, whereas kernel threads are supported
and managed directly by the operating system. Virtually all contemporary
operating systems—including Windows, Linux, and macOS—support kernel
threads.
Ultimately, a relationship must exist between user threads and kernel
threads, as illustrated in Figure below:
1. Many-to-one model:
The many-to-one model (Figure below) maps many user-level threads to one kernel thread.
Thread management is done by the thread library in user space, so it is efficient. However, the
entire process will block if a thread makes a blocking system call.

Also, because only one thread can access the kernel at a time, multiple threads are unable to run
in parallel on multicore systems. Green threads—a thread library available
for Solaris systems and adopted in early versions of Java—used the many-to-one
model. However, very few systems continue to use the model because of
its inability to take advantage of multiple processing cores, which have now
become standard on most computer systems.
2. One-to-one model:
The one-to-one model (Figure below) maps each user thread to a kernel thread. It
provides more concurrency than the many-to-one model by allowing another
thread to run when a thread makes a blocking system call. It also allows multiple
threads to run in parallel on multiprocessors. The only drawback to this
model is that creating a user thread requires creating the corresponding kernel
thread, and a large number of kernel threads may burden the performance of
a system. Linux, along with the family of Windows operating systems, implements
the one-to-one model.
3. Many-to-many model:
The many-to-many model (Figure below) multiplexes many user-level threads to
a smaller or equal number of kernel threads. The number of kernel threads
may be specific to either a particular application or a particular machine (an
application may be allocated more kernel threads on a system with eight
processing cores than a system with four cores).
Let’s consider the effect of this design on concurrency. Whereas the many-to-one model allows
the developer to create as many user threads as she wishes, it does not result in parallelism,
because the kernel can schedule only one kernel thread at a time. The one-to-one model allows
greater concurrency, but the developer has to be careful not to create too many threads within
an application. (In fact, on some systems, she may be limited in the number of threads she can
create.) The many-to-many model suffers from neither of these shortcomings: developers can
create as many user threads as necessary, and the corresponding kernel threads can run in
parallel on a multiprocessor. Also, when a thread performs a blocking system call, the kernel can
schedule another thread for execution. One variation on the many-to-many model still
multiplexes many user-level threads to a smaller or equal number of kernel threads but also
allows a user-level thread to be bound to a kernel thread. This variation is sometimes referred to
as the two-level model (Figure below). Although the many-to-many model appears to be the
most flexible of the models discussed, in practice it is difficult to implement. In addition, with an
increasing number of processing cores appearing on most systems, limiting the number of kernel
threads has become less important. As a result, most operating systems now use the one-to-one
model.
