
ASSIGNMENT OS

SAQIB JAVED 11912

----------------------------------------------------------------------------------------------------
TO SIR UMER SARWAR

31-12-2020
ASSIGNMENT NO. 3
ANSWERS OF THE QUESTIONS
Question # 1: Provide two programming examples in which
multithreading provides better performance than a single-
threaded solution.
Ans: The process of executing multiple threads simultaneously is known as
multithreading. Some examples where multithreading improves performance include:

Matrix multiplication:
Individual rows and columns of the matrices can be multiplied in separate
threads, so several multiply-and-add operations proceed in parallel rather than the
processor working through them one at a time.
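This decomposition can be sketched as follows. It is an illustrative sketch only (the helper names matmul_row and threaded_matmul are ours), and note that in CPython the global interpreter lock serializes pure-Python bytecode, so a real speedup needs native threads (e.g. C/pthreads) or a library such as NumPy; the sketch shows the thread-per-row structure.

```python
# Sketch: one worker thread per row of the result matrix.
# Hypothetical helper names; structural illustration only.
import threading

def matmul_row(a, b, out, i):
    # Compute row i of the product a x b into out[i].
    cols = len(b[0])
    inner = len(b)
    out[i] = [sum(a[i][k] * b[k][j] for k in range(inner)) for j in range(cols)]

def threaded_matmul(a, b):
    out = [None] * len(a)
    threads = [threading.Thread(target=matmul_row, args=(a, b, out, i))
               for i in range(len(a))]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return out

if __name__ == "__main__":
    a = [[1, 2], [3, 4]]
    b = [[5, 6], [7, 8]]
    print(threaded_matmul(a, b))  # [[19, 22], [43, 50]]
```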

UI updates:
We can render some UI elements, such as views or images, on a background
thread so that the main thread is not blocked, avoiding performance issues and
noticeable lag.

Multithreading does not help a strictly sequential program. For
example, a program that calculates an individual tax return gains nothing from
extra threads. Another example where multithreading does not work well is a shell
program such as the Korn shell.

A Web server that services each request in a separate thread. A parallelized
application such as matrix multiplication, where different parts of the matrix may be
worked on in parallel.
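The thread-per-request idea can be sketched with a thread pool handling simulated requests. This is an illustrative sketch (handle_request and serve_all are our own names): each request does blocking I/O, modelled here with time.sleep, so overlapping requests in threads cuts total wall time even in CPython.

```python
# Sketch of thread-per-request servicing: blocking I/O overlaps across threads.
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(req_id):
    time.sleep(0.05)          # stand-in for blocking network/disk I/O
    return f"response-{req_id}"

def serve_all(requests):
    # One worker thread per outstanding request; map preserves input order.
    requests = list(requests)
    with ThreadPoolExecutor(max_workers=len(requests)) as pool:
        return list(pool.map(handle_request, requests))

if __name__ == "__main__":
    start = time.time()
    replies = serve_all(range(8))
    print(replies[0], f"{time.time() - start:.2f}s")  # far below 8 * 0.05s sequential
```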

An interactive GUI program such as a debugger, where one thread is used to
monitor user input, another thread represents the running application, and a third thread
monitors performance.

Question # 2: What are two differences between user-level
threads and kernel-level threads? Under what circumstances
is one type better than the other?
ANS:
A thread is a flow of execution through the process code, with its own program counter
that keeps track of which instruction to execute next, system registers which hold its
current working variables, and a stack which contains the execution history.
A thread is also called a lightweight process. Threads provide a way to improve
application performance through parallelism. Threads represent a software approach to
improving operating system performance by reducing overhead; in most other
respects, a thread is equivalent to a classical process.
Each thread belongs to exactly one process and no thread can exist outside a process.
Each thread represents a separate flow of control. Threads have been successfully
used in implementing network servers and web servers. They also provide a suitable
foundation for parallel execution of applications on shared-memory multiprocessors.
The following figure shows the working of a single-threaded and a multithreaded
process.
Threads are implemented in the following two ways −
User Level Threads − user-managed threads.
Kernel Level Threads − threads managed by the operating system, acting in the
kernel, the operating system core.
User Level Threads
In this case, the thread management kernel is not aware of the existence of threads.
The thread library contains code for creating and destroying threads, for passing
messages and data between threads, for scheduling thread execution, and for saving
and restoring thread contexts. The application starts with a single thread.

Advantages
Thread switching does not require Kernel mode privileges.
User level thread can run on any operating system.
Scheduling can be application specific in the user level thread.
User level threads are fast to create and manage.

Disadvantages
In a typical operating system, most system calls are blocking, so when one
user-level thread blocks, the kernel blocks the entire process.
A multithreaded application cannot take advantage of multiprocessing, since the
kernel schedules the process as a single unit.

Kernel Level Threads


In this case, thread management is done by the Kernel. There is no thread
management code in the application area. Kernel threads are supported directly by the
operating system. Any application can be programmed to be multithreaded. All of the
threads within an application are supported within a single process.
The Kernel maintains context information for the process as a whole and for individual
threads within the process. Scheduling by the Kernel is done on a thread basis. The
Kernel performs thread creation, scheduling and management in Kernel space. Kernel
threads are generally slower to create and manage than user threads.

Advantages
The Kernel can simultaneously schedule multiple threads from the same process on
multiple processors.
If one thread in a process is blocked, the Kernel can schedule another thread of the
same process.
Kernel routines themselves can be multithreaded.

Disadvantages
Kernel threads are generally slower to create and manage than the user threads.
Transfer of control from one thread to another within the same process requires a mode
switch to the Kernel.

Sr.No  User level thread                           Kernel level thread
1      User-level threads are faster to create     Kernel-level threads are slower to
       and manage.                                 create and manage.
2      Implementation is by a thread library       Operating system supports creation
       at the user level.                          of Kernel threads.
3      User-level thread is generic and can        Kernel-level thread is specific to
       run on any operating system.                the operating system.
4      Multi-threaded applications cannot take     Kernel routines themselves can be
       advantage of multiprocessing.               multithreaded.

Question # 3: Describe the actions taken by a kernel
to context-switch between kernel level threads.
ANS:
Context switching between kernel threads typically requires saving the value of
the CPU registers from the thread being switched out and restoring the CPU registers of
the new thread being scheduled.

Context switching involves storing the context or state of a process or thread so
that it can be reloaded when required and execution can be resumed from the same
point as earlier. This is a feature of a multitasking operating system and allows a single
CPU to be shared by multiple processes.
In response to a clock interrupt, the OS saves the PC and user stack pointer of the
currently executing thread and transfers control to the kernel clock interrupt handler.
The clock interrupt handler then saves the rest of the registers, as well as other machine
state, such as the state of the floating-point registers, in the process PCB.
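The save-and-restore cycle can be illustrated with a toy cooperative scheduler. This is only a model: each "thread" is a Python generator, and the scheduler's ready queue of suspended generators plays the role of the PCBs holding saved contexts; a real kernel saves hardware registers, not generator state.

```python
# Toy model of context switching: suspend one flow of control, resume another.
def thread_body(name, steps):
    for i in range(steps):
        yield f"{name}:{i}"   # yield = the point where context is saved

def round_robin(threads):
    trace = []
    ready = list(threads)     # ready queue of suspended "threads"
    while ready:
        t = ready.pop(0)      # pick next thread, "restore" its context
        try:
            trace.append(next(t))  # run one step
            ready.append(t)        # "save" context, requeue at the back
        except StopIteration:
            pass                   # thread finished; drop it
    return trace

print(round_robin([thread_body("A", 2), thread_body("B", 2)]))
# ['A:0', 'B:0', 'A:1', 'B:1']
```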

Context switching between user threads is quite similar to switching between
kernel threads, although it depends on the threads library and how it maps user
threads to kernel threads. In general, context switching between user threads involves
taking a user thread off its LWP and replacing it with another thread. This act typically
involves saving and restoring the state of the registers.

Question # 4: What resources are used when a thread is
created? How do they differ from those used when a process
is created?
ANS:
When a thread is created, it does not require many new resources: the thread
shares the resources, such as memory, of the process to which it belongs. The
benefit of this sharing is that it allows an application to have several different
threads of activity all within the same address space.

In contrast, creating a new process is very heavyweight, because it always
requires a new address space to be created; and even if processes share memory,
inter-process communication is expensive compared to communication
between threads.

Because a thread is smaller than a process, thread creation typically uses
fewer resources than process creation. Creating a process requires allocating a process
control block (PCB), a rather large data structure.
The PCB includes a memory map, a list of open files, and environment
variables. Allocating and managing the memory map is typically the most time-consuming
activity. Creating either a user or kernel thread involves allocating a small
data structure to hold a register set, stack, and priority.
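The point that threads share their process's resources can be shown directly. In this sketch (the names shared and worker are ours), several threads append to one list: no copying and no inter-process communication is needed, because they all live in the same address space.

```python
# Threads share the parent process's memory: all workers see one list.
import threading

shared = []
lock = threading.Lock()

def worker(n):
    with lock:                 # serialize appends to the shared list
        shared.append(n)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(shared))  # [0, 1, 2, 3] -- one list, visible to every thread
```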

How Processes Work:


An executing program needs more than just the binary code that tells the
computer what to do. The program needs memory and various operating system
resources in order to run. A “process” is what we call a program that has been loaded
into memory along with all the resources it needs to operate. The “operating system” is
the brains behind allocating all these resources, and comes in different flavors such as
macOS, iOS, Microsoft Windows, Linux, and Android. The OS handles the task of
managing the resources needed to turn your program into a running process.

How Threads Work

A thread is the unit of execution within a process. A process can have
anywhere from just one thread to many threads. When a process starts, it is assigned
memory and resources. Each thread in the process shares that memory and those
resources. In single-threaded processes, the process contains one thread. The process
and the thread are one and the same, and there is only one thing happening.

 The program starts out as a text file of programming code.
 The program is compiled or interpreted into binary form.
 The program is loaded into memory.
 The program becomes one or more running processes.
 Processes are typically independent of each other,
 while threads exist as a subset of a process.
 Threads can communicate with each other more easily than processes can,
but threads are more vulnerable to problems caused by other threads in the same
process.
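The easy-communication point can be sketched with a producer and a consumer in one process exchanging items through a thread-safe queue.Queue; no serialization or kernel IPC is involved. (The sentinel-based shutdown shown here is one common convention, not the only one.)

```python
# Two threads in one process communicating through a shared queue.
import queue
import threading

q = queue.Queue()
results = []

def producer():
    for i in range(5):
        q.put(i * i)
    q.put(None)                # sentinel: no more items

def consumer():
    while True:
        item = q.get()
        if item is None:
            break
        results.append(item)

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
print(results)  # [0, 1, 4, 9, 16]
```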
Question # 5: Provide two programming examples in which
multithreading does not provide better performance than a
single-threaded solution.
ANS:
If we run a trivial program (constant time complexity) in separate
threads, the overhead of creating the threads exceeds the work performed by them,
decreasing performance compared to a single-threaded alternative. Some
examples include:

Trivial operations on a list of numbers:
Multithreading will not speed up the operations, since each operation takes
near-constant time and the cost of creating and coordinating threads outweighs
the work done on each element.
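This overhead can be measured directly. The sketch below (sequential and threaded are our own names) squares a small list once in a plain loop and once with a thread per element; both paths are checked for the same answer, and the timings will favour the single-threaded loop, since spawning a thread costs far more than one multiplication.

```python
# Demonstrating thread-creation overhead on trivial per-element work.
import threading
import time

data = list(range(100))

def sequential():
    return [x * x for x in data]

def threaded():
    out = [0] * len(data)
    def square(i):
        out[i] = data[i] * data[i]
    threads = [threading.Thread(target=square, args=(i,)) for i in range(len(data))]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return out

if __name__ == "__main__":
    t0 = time.perf_counter(); seq = sequential(); t_seq = time.perf_counter() - t0
    t0 = time.perf_counter(); thr = threaded(); t_thr = time.perf_counter() - t0
    assert seq == thr          # same answer either way
    print(f"sequential {t_seq:.6f}s vs threaded {t_thr:.6f}s")
```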

Allocating memory to a set of data variables:
Allocating memory is a very fast task, and the overhead of creating multiple
threads to process separate blocks of variables exceeds the performance gained by
multithreading.

There are a variety of reasons why multithreading can be slower than single-
threading. In many ways, the question is: why would you expect multithreading to be
faster? Using multiple threads introduces extra work. Swapping contexts, to switch back
and forth between threads, is extra work. Doing synchronization, to ensure that multiple
threads communicate safely and wait for each other to complete, is extra work. Multiple
threads typically increase your working set of memory addresses, thereby requiring
more cache and, as a result, causing more cache evictions (which is, you guessed it,
extra work).

Suppose you have an application (or part of an application) which is all one big
critical section protected by a lock. Only one thread at a time can hold the lock. The
throughput of the critical section can never be higher with multiple threads than it is with
one thread. In the early days of multiprocessor support in the Linux kernel, there was
such a thing, called the “big kernel lock”. It limited the performance of the kernel so that
the OS really couldn’t run any faster than it would on a single processor. (User mode
programs could run in parallel, so it wasn’t a complete bust!).
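The big-lock situation can be sketched as follows: every worker spends its whole run inside one critical section, so adding threads cannot raise throughput, because the lock serializes them exactly as a single thread would. (big_lock and worker are illustrative names; this models the effect, not the Linux kernel itself.)

```python
# One big lock: the threads produce a correct result but run one at a time.
import threading

big_lock = threading.Lock()
counter = 0

def worker(iterations):
    global counter
    with big_lock:             # the entire body is one critical section
        for _ in range(iterations):
            counter += 1

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 4000 -- correct, but with zero parallelism inside the lock
```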
Another case is when the performance of the program is limited by, say,
memory bandwidth or disk bandwidth. More threads don’t help, and probably lose a little
because of switching costs.

----------------------------------------------------------------------------------------------------
