Tutorial 4 Solution - OS
Faculty of Engineering
Computer and Systems Engineering Department
SHEET 4
1- What is a thread? Explain the difference between a heavyweight process and a multithreaded
process.
A thread is a basic unit of CPU utilization; it comprises a thread ID, a program counter, a register
set, and a stack. It shares with other threads belonging to the same process its code section, data
section, and other operating-system resources, such as open files and signals. A traditional (or
heavyweight) process has a single thread of control. A multithreaded process has multiple threads
of control and can therefore perform more than one task at a time.
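To make this definition concrete, the following minimal C/Pthreads sketch (not part of the original answer; names are illustrative) creates two threads in one process: the global counter lives in the shared data section, while each thread has its own thread ID and its own stack-local variable.

#include <pthread.h>
#include <stdio.h>

int shared_counter = 0;                       /* data section: shared by all threads  */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg) {
    int local = *(int *)arg;                  /* stack: private to this thread         */
    pthread_mutex_lock(&lock);
    shared_counter += local;                  /* both threads update the same variable */
    pthread_mutex_unlock(&lock);
    printf("thread %lu added %d\n", (unsigned long)pthread_self(), local);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    int a = 1, b = 2;
    pthread_create(&t1, NULL, worker, &a);
    pthread_create(&t2, NULL, worker, &b);
    pthread_join(t1, NULL);                   /* wait for both threads to finish       */
    pthread_join(t2, NULL);
    printf("shared_counter = %d\n", shared_counter);   /* prints 3 */
    return 0;
}

Compiled with gcc -pthread, both threads observe and update the same shared_counter, while each keeps its own copy of local on its own stack.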
2- Give two examples of a multithreaded process.
1- A word processor may have a thread for displaying graphics, another thread for
responding to keystrokes from the user, and a third thread for performing spelling and
grammar checking in the background.
2- A web browser might have one thread display images or text while another thread
retrieves data from the network.
4- Explain the difference between concurrency in a system of a single computing core and concurrency
in a multicore system.
Consider an application with four threads. On a system with a single computing core, concurrency
merely means that the execution of the threads will be interleaved over time (See the figure
below), because the processing core is capable of executing only one thread at a time.
On a system with multiple cores, however, concurrency means that the threads can run in parallel,
because the system can assign a separate thread to each core (See the figure below).
5- Can we have concurrency without parallelism? Explain your answer with an example.
It is clear from the above answer of Q.4 that there is a distinction between parallelism and
concurrency. A system is parallel if it can perform more than one task simultaneously. In
contrast, a concurrent system supports more than one task by allowing all the tasks to make
progress. Thus, it is possible to have concurrency without parallelism. In older systems, most
computers had only a single processor. CPU schedulers were designed to provide the illusion of
parallelism by rapidly switching between processes in the system, thereby allowing each
process to make progress. Such processes were running concurrently, but not in parallel.
6- Describe the challenges that programming for multicore systems presents to programmers.
1. Identifying tasks: This involves examining applications to find areas that can be divided into
separate, concurrent tasks. Ideally, tasks are independent of one another and thus can run in
parallel on individual cores.
2. Balance: While identifying tasks that can run in parallel, programmers must also ensure that the
tasks perform equal work of equal value. In some instances, a certain task may not contribute as
much value to the overall process as other tasks. Using a separate execution core to run that task
may not be worth the cost.
3. Data splitting: Just as applications are divided into separate tasks, the data accessed and
manipulated by the tasks must be divided to run on separate cores (see the short sketch after
this list).
4. Data dependency: The data accessed by the tasks must be examined for dependencies between
two or more tasks. When one task depends on data from another, programmers must ensure that
the execution of the tasks is synchronized to accommodate the data dependency.
5. Testing and debugging: When a program is running in parallel on multiple cores, many different
execution paths are possible. Testing and debugging such concurrent programs is inherently more
difficult than testing and debugging single-threaded applications.
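The data-splitting and data-dependency points can be illustrated with a small C/Pthreads sketch (not from the sheet; all names are illustrative). The array is split so that each thread sums only its own half, and pthread_join provides the synchronization required before the final, dependent addition.

#include <pthread.h>
#include <stdio.h>

#define N 8
int data[N] = {1, 2, 3, 4, 5, 6, 7, 8};

struct range { int start, end; long sum; };

void *partial_sum(void *arg) {
    struct range *r = arg;
    r->sum = 0;
    for (int i = r->start; i < r->end; i++)
        r->sum += data[i];           /* each thread touches only its own slice */
    return NULL;
}

int main(void) {
    struct range lo = {0, N / 2, 0}, hi = {N / 2, N, 0};
    pthread_t t1, t2;
    pthread_create(&t1, NULL, partial_sum, &lo);
    pthread_create(&t2, NULL, partial_sum, &hi);
    pthread_join(t1, NULL);          /* the final total depends on both partial */
    pthread_join(t2, NULL);          /* results, so we must synchronize first   */
    printf("total = %ld\n", lo.sum + hi.sum);
    return 0;
}

Here the data splitting is the division of the array into two disjoint slices, and the data dependency is the final addition, which is safe only after both worker threads have been joined.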
7- Explain the differences between user and kernel threads.
User threads are supported above the kernel and are managed without kernel support, whereas
kernel threads are supported and managed directly by the operating system.
8- A relationship must exist between user threads and kernel threads. Explain three common ways of
establishing such a relationship.
1. Many-to-One Model:
The many-to-one model maps many user-level threads to one kernel thread. Thread
management is done entirely by the thread library in user space, so it is
efficient. However, the entire process will block if a thread makes a
blocking system call. Also, because only one thread can access the kernel at a time,
multiple threads are unable to run in parallel on multicore systems. Very few systems
continue to use the model because of its inability to take advantage of multiple
processing cores. Green threads—a thread library available for Solaris systems and
adopted in early versions of Java—used the many-to-one model.
2. One-to-One Model:
The one-to-one model maps each user thread to a kernel thread. It provides more
concurrency than the many-to-one model by allowing another thread to run when a
thread makes a blocking system call. It also allows multiple threads to run in parallel
on multiprocessors. The only drawback to this model is that creating a user thread
requires creating the corresponding kernel thread. Because the overhead of creating
kernel threads can burden the performance of an application, most implementations
of this model restrict the number of threads supported by the system. Linux, along
with the family of Windows operating systems, implements the one-to-one model (a
small Linux sketch follows this list).
3. Many-to-Many Model:
The many-to-many model multiplexes many user-level threads to a smaller or equal
number of kernel threads. The number of kernel threads may be specific to either a
particular application or a particular machine (an application may be allocated more
kernel threads on a multiprocessor than on a single processor).
One variation on the many-to-many model still multiplexes many user-level threads to
a smaller or equal number of kernel threads but also allows a user-level thread to be
bound to a kernel thread. This variation is sometimes referred to as the two-level
model.
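The one-to-one mapping on Linux can be observed with the following Linux-specific sketch (not part of the original answer). Each pthread_create produces a distinct kernel thread, so every thread reports a different kernel thread ID from the gettid system call.

#define _GNU_SOURCE
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>

void *report(void *arg) {
    (void)arg;
    /* SYS_gettid returns the kernel's ID for the calling thread */
    printf("user thread %lu -> kernel thread id %ld\n",
           (unsigned long)pthread_self(), (long)syscall(SYS_gettid));
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, report, NULL);
    pthread_create(&t2, NULL, report, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* main is also backed by its own kernel thread (its id equals the pid) */
    printf("main thread -> kernel thread id %ld (pid %ld)\n",
           (long)syscall(SYS_gettid), (long)getpid());
    return 0;
}

Running it shows three different kernel thread IDs within one process, which is exactly the one-to-one relationship described above.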
9- Differentiate between the three different models you have mentioned in your answer in Q.8, in
terms of concurrency.
Whereas the many-to-one model allows the developer to create as many user threads as he/she
wishes, it does not result in true concurrency, because the kernel can schedule only one thread at
a time. The one-to-one model allows greater concurrency, but the developer has to be careful not
to create too many threads within an application (and in some instances may be limited in the
number of threads he/she can create). The many-to-many model suffers from neither of these
shortcomings: developers can create as many user threads as necessary, and the corresponding
kernel threads can run in parallel on a multiprocessor. Also, when a thread performs a blocking
system call, the kernel can schedule another thread for execution.