Operating System

Lecture #12
Thread and Thread Scheduling
Objectives
• Threads
• Types of threads
• Process vs thread
• User vs Kernel Thread
• Thread Scheduling
Threads
• A thread is a path of execution within a process. A process can contain
multiple threads.
• A thread is also known as a lightweight process.
• The idea is to achieve parallelism by dividing a process into multiple
threads.
• For example, in a browser, multiple tabs can be different threads. MS
Word uses multiple threads: one thread to format the text, another thread
to process inputs, etc.
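For a concrete picture of one process containing multiple threads, here is a minimal sketch using the POSIX Pthreads library (which these slides mention later); the function name worker is only an illustrative placeholder. Compile with -pthread.

/* Two threads of execution inside one process. Each thread gets its own
   program counter and stack, but both live in the same address space. */
#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg)
{
    /* Runs concurrently with the other thread and with main(). */
    printf("thread %s running\n", (const char *)arg);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;

    /* Create two threads within the current process. */
    pthread_create(&t1, NULL, worker, (void *)"A");
    pthread_create(&t2, NULL, worker, (void *)"B");

    /* Wait for both threads to finish before the process exits. */
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}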
Threads
• The primary difference is that threads within the same process run in a
shared memory space, while processes run in separate memory spaces.
• Threads are not independent of one another the way processes are: threads share their code section, data section, and OS resources (such as open files and signals) with the other threads of the same process. Like a process, however, each thread has its own program counter (PC), register set, and stack space.
Need of Thread
• It takes far less time to create a new thread in an existing process than to
create a new process.
• Because threads share common data, they do not need to use inter-process communication (IPC).
• Context switching is faster when working with threads.
• It takes less time to terminate a thread than a process.
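As a small sketch of the shared-data point above (POSIX Pthreads assumed again; the counter and function names are illustrative), two threads update the same global variable directly, with only a mutex for mutual exclusion and no inter-process communication at all.

/* Both threads increment the same 'counter' in the shared data section
   of the process; a mutex prevents lost updates. Compile with -pthread. */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *add_many(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);
        counter++;                      /* shared, not copied */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, add_many, NULL);
    pthread_create(&t2, NULL, add_many, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter); /* 200000: both threads saw the same variable */
    return 0;
}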
Types of Threads
• In the operating system, there are two types of threads.
• Kernel level thread.
• User-level thread.
User-level thread
• The operating system does not recognize user-level threads.
• User-level threads are easy to implement; they are implemented entirely in user space by the thread library (i.e., by the user).
• If a user-level thread performs a blocking operation, the whole process is blocked.
• The kernel knows nothing about user-level threads.
• The kernel manages a process containing user-level threads as if it were a single-threaded process.
Kernel level thread
• The operating system recognizes kernel-level threads: the system maintains a thread control block for each thread in addition to the process control block for each process.
• Kernel-level threads are implemented by the operating system. The kernel knows about all the threads and manages them.
• The kernel provides system calls to create and manage threads from user space. The implementation of kernel threads is more difficult than that of user threads.
• Context-switch time is longer for kernel-level threads.
Process vs Thread
Task
• Explore the differences between processes and threads.
• List the advantages and disadvantages of user-level and kernel-level threads.
Process vs Thread
• Resources: processes use more resources and hence are termed heavyweight processes; threads share resources and hence are termed lightweight processes.
• Creation and termination: slower for processes, faster for threads.
• Code and data: processes have their own code and data/files; threads share the code and data/files of their process.
• Communication: communication between processes is slower; communication between threads is faster.
• Context switching: slower in processes, faster in threads.
• Independence: processes are independent of each other; threads are interdependent (they can read, write, or change another thread's data).
• Example: opening two different browsers (processes) vs. opening two tabs in the same browser (threads).
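The sketch below illustrates the memory rows of this comparison (POSIX APIs assumed; the variable names are illustrative): a child process created with fork() changes only its private copy of a variable, while a second thread changes the very same variable seen by main().

/* Contrast: separate memory after fork() vs. shared memory between threads. */
#include <pthread.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

static int value = 0;

static void *thread_body(void *arg)
{
    value = 42;                       /* same variable as in main() */
    return NULL;
}

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {                   /* child process */
        value = 7;                    /* modifies its own copy only */
        return 0;
    }
    wait(NULL);
    printf("after fork:   value = %d\n", value);   /* still 0 */

    pthread_t t;
    pthread_create(&t, NULL, thread_body, NULL);
    pthread_join(t, NULL);
    printf("after thread: value = %d\n", value);   /* now 42 */
    return 0;
}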
User Level Thread
• Advantages of ULT –
• Can be implemented on an OS that doesn’t support multithreading.
• Simple representation: each thread needs only a program counter, register set, and stack space.
• Simple to create, since no kernel intervention is required.
• Thread switching is fast, since no OS calls need to be made.
• Limitations of ULT –
• Little or no coordination between the threads and the kernel.
• If one thread causes a page fault, the entire process blocks.
Kernel Level Thread
• Advantages of KLT –
• Since the kernel has full knowledge of the threads in the system, the scheduler may decide to give more CPU time to processes with a large number of threads.
• Good for applications that frequently block.
• Limitations of KLT –
• Slower and less efficient than user-level threads, since thread operations require kernel involvement.
• Each thread requires a thread control block, which adds overhead.
Single Threaded vs Multi Threaded
Many to Many Model
• Allows many user-level threads to be mapped to many kernel threads
• The many-to-many model multiplexes any number of user threads onto an equal
or smaller number of kernel threads.
• Allows the operating system to create a sufficient number of kernel threads. The
operating system creates a pool of kernel-level threads called the thread-pool. As
long as a kernel-level thread is available in the thread-pool it is allocated to a
user-level thread.
• This model provides the best degree of concurrency: when a thread performs a blocking system call, the kernel can schedule another thread for execution.
Many to One Model
• Many user-level threads mapped to a single kernel thread
• Thread management is done in user space by the thread library.
• Drawback: the entire process blocks if one thread makes a blocking system call.
• Only one thread can access the kernel at a time, so multiple threads are unable to run in parallel on multiprocessors.
• If the user-level thread library is implemented on an operating system whose kernel does not support threads, the library must use the many-to-one model.
One to One Model
• Each user-level thread maps to a kernel thread.
• There is a one-to-one relationship between user-level threads and kernel-level threads. This model provides more concurrency than the many-to-one model.
• It also allows another thread to run when a thread makes a blocking system call.
• It allows multiple threads to execute in parallel on multiprocessors.
• Drawback: creating a user thread requires creating a corresponding kernel thread. An application could create an unlimited number of threads, forcing the system to create just as many kernel-level threads; as a result, the system might slow down.
Thread Scheduling
• Scheduling of threads involves two levels of (boundary) scheduling:
• Scheduling of user level threads (ULT) to kernel level threads (KLT) via
lightweight process (LWP) by the application developer.
• Scheduling of kernel level threads by the system scheduler to perform
different unique OS functions.
Lightweight Process (LWP)
• Lightweight processes are threads in user space that act as an interface for the ULTs to access the physical CPU resources.
• The thread library schedules which thread of a process runs on which LWP, and for how long.
• The number of LWP created by the thread library depends on the type of
application.
Lightweight Process (LWP)
• In the case of an I/O bound application, the number of LWP depends on the
number of user-level threads.
• This is because when an LWP blocks on an I/O operation, the thread library must create and schedule another LWP in order to run another ULT.
• Thus, in an I/O bound application, the number of LWP is equal to the
number of the ULT.
• In the case of a CPU bound application, it depends only on the application.
Each LWP is attached to a separate kernel-level thread.
Lightweight Process (LWP)
Thread Scheduling
• In practice, the first boundary of thread scheduling involves more than just specifying the scheduling policy and the priority.
• It requires two controls to be specified for the User level threads:
• Contention scope,
• Allocation domain.
Contention Scope
• The word contention here refers to the competition or fight among the
User level threads to access the kernel resources.
• Thus, this control defines the extent to which contention takes place.
• It is defined by the application developer using the thread library.
• Depending upon the extent of contention it is classified as
• Process Contention Scope
• System Contention Scope.
Process Contention Scope (PCS)
• The contention takes place among threads within the same process.
• The thread library schedules the highest-priority PCS thread to access the resources via the available LWPs.
• The priority is specified by the application developer during thread creation.
System Contention Scope (SCS)
• The contention takes place among all threads in the system. In this case, every SCS thread is associated with a separate LWP by the thread library and is scheduled by the system scheduler to access the kernel resources.
• In Linux and UNIX operating systems, the POSIX Pthreads library provides the function pthread_attr_setscope() to define the contention scope of a thread at creation time (see the sketch below).
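A minimal sketch of requesting a contention scope with pthread_attr_setscope() (PTHREAD_SCOPE_SYSTEM requests SCS, PTHREAD_SCOPE_PROCESS requests PCS); some systems, such as Linux, support only system scope, so the call's return value is checked.

/* Ask for system contention scope (SCS) before creating a thread. */
#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg) { return NULL; }

int main(void)
{
    pthread_attr_t attr;
    pthread_attr_init(&attr);

    /* PTHREAD_SCOPE_SYSTEM: compete with all threads in the system and be
       scheduled by the kernel; PTHREAD_SCOPE_PROCESS would request PCS. */
    if (pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM) != 0)
        printf("system contention scope not supported here\n");

    pthread_t t;
    pthread_create(&t, &attr, worker, NULL);
    pthread_join(t, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}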
Allocation Domain
• The allocation domain is a set of one or more resources for which a thread is
competing.
• In a multicore system, there may be one or more allocation domains where each
consists of one or more cores.
• One ULT can be a part of one or more allocation domains. Due to the high complexity of dealing with hardware and software architectural interfaces, this control is usually not specified.
• But by default, the multicore system will have an interface that affects the
allocation domain of a thread.
Consider a scenario: an operating system with three processes P1, P2, and P3, and 10 user-level threads (T1 to T10) in a single allocation domain. 100% of the CPU resources will be distributed among the three processes. The amount of CPU resources allocated to each process and to each thread depends on the contention scope, the scheduling policy, and the priority of each thread as defined by the application developer using the thread library, and also on the system scheduler. These user-level threads have different contention scopes.
Formulas to calculate LWPs
• Number of Kernel Level Threads = Total Number of LWP
• Total Number of LWP = Number of LWP for SCS + Number of LWP for PCS
• Number of LWP for SCS = Number of SCS threads
• Number of LWP for PCS = depends on the application developer
• Exercise (for the scenario above): Number of SCS threads = ? Number of LWP for SCS = ? Number of LWP for PCS = ? Total Number of LWP = ? Number of Kernel Level Threads = ?
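A worked example of these formulas, under an assumed split that is not given in the slides: suppose that of the 10 user-level threads in the earlier scenario, 3 are SCS threads and the developer allocates 2 LWPs for the remaining 7 PCS threads. Then:
• Number of LWP for SCS = Number of SCS threads = 3
• Number of LWP for PCS = 2 (chosen by the developer)
• Total Number of LWP = 3 + 2 = 5
• Number of Kernel Level Threads = Total Number of LWP = 5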
Any Queries?
